
ChatGPT iPhone app raises privacy concerns again

Since OpenAI introduced ChatGPT, privacy advocates have warned consumers about potential privacy threats posed by artificial intelligence applications. The appearance of the ChatGPT app on the Apple App Store has sparked a new round of caution.

“[B]efore jumping headfirst into an app, beware of getting too personal with a bot and putting your privacy at risk,” warned Muskaan Saxena at Tech Radar.

The iOS app comes with explicit trade-offs that users should be aware of, she explained.

Anonymous chats are stripped of information that could link them to specific users. Anonymization, however, is not a ticket to privacy. “Anonymization may not be an adequate way to protect consumer privacy, as anonymized data can still be re-identified by combining it with other sources of information,” Joey Stanford, VP of privacy and security at Platform.sh, maker of a Paris-based cloud services platform for developers, told TechNewsWorld.
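To make Stanford’s point concrete, here is a minimal, hypothetical sketch in Swift of a so-called linkage attack: “anonymized” chat records that still carry quasi-identifiers, such as a ZIP code and birth year, can be joined against a public dataset that shares those fields. All types, names, and data below are invented for illustration.

```swift
// Hypothetical illustration of a "linkage attack": joining anonymized
// records with a public dataset on shared quasi-identifiers.
// All data and field names are invented for this example.

struct AnonChat {
    let zip: String      // quasi-identifier left in the "anonymized" record
    let birthYear: Int   // quasi-identifier
    let excerpt: String  // sensitive content, supposedly unlinkable
}

struct PublicRecord {
    let name: String
    let zip: String
    let birthYear: Int
}

let anonChats = [
    AnonChat(zip: "94103", birthYear: 1987, excerpt: "my diagnosis is..."),
]

let voterRolls = [
    PublicRecord(name: "A. Example", zip: "94103", birthYear: 1987),
]

// Re-identify by matching on the quasi-identifiers both datasets share.
for chat in anonChats {
    let matches = voterRolls.filter {
        $0.zip == chat.zip && $0.birthYear == chat.birthYear
    }
    if matches.count == 1, let person = matches.first {
        print("Re-identified \(person.name): \(chat.excerpt)")
    }
}
```

When a combination of quasi-identifiers is unique in both datasets, the “anonymous” record resolves to exactly one person, which is why the location data discussed next makes de-anonymization so much easier.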

“It’s been found to be relatively easy to de-anonymize information, especially if location information is used,” explained Jen Caltrider, lead researcher for Mozilla’s *Privacy Not Included project.

“Publicly, OpenAI says it doesn’t collect location data, but ChatGPT’s privacy policy says they may collect that data,” she told TechNewsWorld.

However, OpenAI warns users of the ChatGPT app that their information will be used to train its large language model. “They’re honest about it. They are not hiding anything,” Caltrider said.

Taking privacy seriously

Caleb Withers, a research assistant at the Center for a New American Security, a national security and defense think tank in Washington, D.C., explained that if a user enters their name, workplace, and other personal information into a ChatGPT prompt, that information will not be anonymized.

“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?’” he told TechNewsWorld.

OpenAI has said it takes privacy seriously and has implemented measures to protect user data, said Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, California.

“However, it’s always a good idea to review the specific privacy policies and practices of any service to understand how your data is handled and what protections are in place,” he told TechNewsWorld.


No matter how dedicated an organization is to data security, vulnerabilities can exist that malicious actors may exploit, added James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Florida.

“It’s always important to be cautious and to consider whether sharing sensitive information is necessary, to keep your data as secure as possible,” he told TechNewsWorld.

“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in long and often unread end-user license agreements,” he added.

Built-in protections

McQuiggan noted that users of generative AI apps have been known to include sensitive information such as birthdays, phone numbers, and postal and email addresses in their queries. “If an AI system is not adequately secured, it can be accessed by third parties and used for malicious purposes, such as identity theft or targeted advertising,” he said.

He added that generative AI applications can also inadvertently reveal sensitive information about users through the content they generate. “Therefore,” he continued, “users should be aware of the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”
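One step a cautious user or developer might take, sketched below as an assumption rather than anything OpenAI provides, is to scrub obvious identifiers from a prompt before it ever leaves the device. The redactPII function and its patterns are illustrative only; real PII detection is considerably harder.

```swift
import Foundation

// Hypothetical pre-processing step: redact obvious identifiers from a
// prompt before it is sent to any chatbot. These patterns are
// illustrative and far from exhaustive.
func redactPII(_ prompt: String) -> String {
    let patterns = [
        "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}",  // email addresses
        "\\b\\d{3}[-. ]?\\d{3}[-. ]?\\d{4}\\b",             // US-style phone numbers
        "\\b\\d{1,2}/\\d{1,2}/\\d{2,4}\\b",                 // dates such as birthdays
    ]
    var result = prompt
    for pattern in patterns {
        result = result.replacingOccurrences(
            of: pattern, with: "[REDACTED]", options: .regularExpression)
    }
    return result
}

// Prints: "Call me at [REDACTED] or [REDACTED]"
print(redactPII("Call me at 555-123-4567 or jane@example.com"))
```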

Unlike desktop and laptop computers, mobile phones have some built-in security features that can prevent privacy invasion by apps running on them.

However, as McQuiggan pointed out, “As with any app loaded onto a smartphone, while some measures, such as app permissions and privacy settings, can provide some level of protection, they may not fully protect your personal information from all types of privacy threats.”

Vena agreed that built-in measures such as app permissions, privacy settings and app store regulations offer some level of protection. “But they may not be enough to mitigate all privacy threats,” he said. “App developers and smartphone manufacturers have different approaches to privacy, and not all apps follow best practices.”

Even the way OpenAI works varies from desktop to mobile. “If you use ChatGPT on the website, you have the option to go into data management and opt out of your chat being used to improve ChatGPT. That setting doesn’t exist in the iOS app,” Caltrider noted.

Be wary of App Store privacy information

Caltrider also found the permissions used by OpenAI’s iOS app to be a bit unclear, noting, “You can check in the Google Play Store and see what permissions are being used. You can’t do that through the Apple App Store.”

She warned users against relying on privacy information found in app stores. “The research we’ve done on the Google Play Store’s data safety information shows that it’s really untrustworthy,” she noted.


“Research by others on the Apple App Store shows that it is also untrustworthy,” she continued. “Users should not trust the data safety information they find on app pages. They have to do their own research, which is difficult and complicated.”

“Companies need to be better about being honest about what they collect and share,” she added. “OpenAI is honest about how they plan to use the data they collect to train ChatGPT, but then say that once the data is anonymized, they can use it in many ways that go beyond the privacy policy.”

Stanford noted that Apple has some policies in place that could address some of the privacy threats posed by generative AI applications. They include:

  • requiring user consent for data collection and sharing by apps that use generative AI technologies;
  • providing transparency and control over how data is used and by whom through the App Tracking Transparency feature, which lets users opt out of cross-app tracking (see the sketch after this list);
  • enforcing privacy standards and regulations for app developers through the App Store review process, and rejecting apps that violate them.
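As a concrete illustration of the second item, the snippet below shows how an iOS app invokes Apple’s App Tracking Transparency prompt via the real AppTrackingTransparency framework; the surrounding function and log messages are an assumed sketch. For the system prompt to appear, the app must also declare an NSUserTrackingUsageDescription string in its Info.plist.

```swift
import AppTrackingTransparency

// Standard App Tracking Transparency request (iOS 14+): the system shows
// the consent prompt once, and the app may only track across other
// companies' apps and websites if the user authorizes it.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            print("User opted in; cross-app tracking is permitted.")
        case .denied, .restricted:
            print("Tracking not allowed; the advertising identifier is zeroed.")
        case .notDetermined:
            print("Prompt not yet shown (e.g., called before app was active).")
        @unknown default:
            print("Unknown status; treat as denied.")
        }
    }
}
```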

However, he conceded, “These measures may not be sufficient to prevent generative AI applications from creating inappropriate, harmful, or misleading content that could affect users’ privacy and security.”

Calls for a federal AI privacy law

“OpenAI is just one company. There are several major language models, and many more will likely emerge in the near future,” added Hodan Omaar, senior AI policy analyst at the Center for Data Innovation, a think tank in Washington, D.C., that studies the intersection of data, technology, and public policy.

“We need to have a federal data privacy law to ensure that all companies adhere to clear standards,” she told TechNewsWorld.

“With the rapid growth and expansion of artificial intelligence,” Caltrider added, “there absolutely need to be strong, robust watchdogs and regulations keeping an eye out for the rest of us as it grows and spreads.”
