OpenAI’s ChatGPT app for iPad, iPhone reaches 500k downloads

OpenAI shipped its ChatGPT app for iPads and iPhones only a week ago, but it has already become one of the most popular apps of the past two years, with more than half a million downloads in the first six days. That's a real achievement — and also a challenge, because it represents half a million potential data vulnerabilities.

Not one to rest on its laurels, this year's favorite smart assistant (so far) is now also available in 41 additional countries. There's no doubt this has been one of the most successful software/services launches of all time, but that doesn't change the inherent risks of these technologies.

Raise the red flag

The app's popularity should raise a red flag for IT managers, who should redouble efforts to warn employees against putting valuable personal or corporate data into the service. The danger is real: data collected by OpenAI has already been exposed once, and it's only a matter of time before someone else gets hold of such information.

After all, in digital security today the question is no longer if an incident occurs, but when.

To borrow a page from Apple's playbook, the best way to protect data online is not to put the information there in the first place. That's why Cupertino's iPhones and other products work on the principle of data minimization: reducing the amount of information collected and reducing the need to send it to servers for processing.

This is a great approach, not only because it reduces the amount of information that can leak out, but because it also reduces the chance of that information being mishandled in the first place.

People will be people

We don’t have that protection with ChatGPT apps. Aside from a wholesale ban on using the service and app on managed devices, IT admins rely almost entirely on trust when it comes to ensuring their employees don’t share sensitive data with the bot.

However, people are people, so no matter how strong the admonitions against such use, it's inevitable that some will accidentally share confidential data through the app. They may not even realize they're doing it, seeing it as the equivalent of searching the web.

It's a threat similar to shadow IT, where people share confidential information in exchange for what seems like convenience.

Private dancer

IT should consider the privacy label OpenAI attaches to its app in the App Store. That label makes clear that the following data is linked to the user when using the application:

  • Contact information: email address, name, phone number.
  • User content: "other" user content.
  • Identifiers: user ID.
  • Usage data: product interactions.
  • Diagnostics: crash, performance, and other diagnostic data.

OpenAI's own privacy policy, available online, is also worth reviewing, though the company hasn't disclosed the training data it uses for its latest bots.

The challenge here is that IT must weigh the limitations of those disclosures alongside the unpredictability of human nature. Regulators are already concerned about the privacy implications. Privacy regulators in Canada are investigating the company's practices, with similar actions under way in Europe. (OpenAI is sufficiently concerned about these investigations that it has warned it could cease operating in Europe if regulation becomes too strict.)

Purple haze

The flurry of activity around generative AI and ChatGPT in general should not obscure the far-reaching implications of these technologies, which offer huge productivity benefits but threaten job security on a massive scale.

At least in the short term, IT administrators must do everything possible to ensure that these simple consumer products do not threaten confidential business data. That means warning users not to share data with these services unless the use has been validated under the company's security and privacy policies.

Please follow me on Mastodon or join me at AppleHolic Bar and Grill and Apple: Discussions groups on MeWe.

Copyright © 2023 IDG Communications, Inc.


