ChatGPT is violating your privacy, says major GDPR complaint

Ever since the first generative artificial intelligence (AI) tools exploded onto the tech scene, there have been questions over where they’re getting their training data and whether they’re harvesting users’ private data to train their products. Now, ChatGPT maker OpenAI could be in hot water for exactly these reasons.

According to TechCrunch, a complaint has been filed with the Polish Office for Personal Data Protection alleging that ChatGPT violates a large number of rules found in the European Union’s General Data Protection Regulation (GDPR). It suggests that OpenAI’s tool has been scooping up user data in all sorts of questionable ways.


The complaint says that OpenAI has broken the GDPR’s rules on lawful basis, transparency, fairness, data access rights, and privacy by design.

These are serious charges. After all, the complainant is not alleging that OpenAI has breached one or two rules, but that it has contravened a multitude of protections designed to stop people’s private data from being used and abused without their permission. Seen one way, that amounts to an almost systematic flouting of the rules protecting the privacy of millions of users.

Chatbots in the firing line


It’s not the first time OpenAI has found itself in the crosshairs. In March 2023, it ran afoul of Italian regulators, leading to ChatGPT getting banned in Italy for violating user privacy. It’s another headache for the viral generative AI chatbot at a time when rivals like Google Bard are rearing their heads.

And OpenAI is not the only chatbot maker raising privacy concerns. Earlier in August 2023, Facebook owner Meta announced that it would start making its own chatbots, leading to fears among privacy advocates over what private data would be harvested by the notoriously privacy-averse company.

Breaches of the GDPR can result in fines of up to 4% of a company’s global annual turnover, meaning OpenAI could face a massive penalty if the complaint is upheld. Regulators could also require OpenAI to amend ChatGPT until it complies with the rules, as happened to the tool in Italy.

Huge fines could be coming


The Polish complaint was filed by a security and privacy researcher named Lukasz Olejnik, who first became concerned when he used ChatGPT to generate a biography of himself and found it was riddled with factual inaccuracies.

He then contacted OpenAI, asking for the inaccuracies to be corrected, and also requested the information OpenAI had collected about him. However, he says OpenAI failed to deliver all the information it is required to provide under the GDPR, suggesting it was being neither transparent nor fair.

The GDPR also states that people must be allowed to correct the information that a company holds on them if it is inaccurate. Yet when Olejnik asked OpenAI to rectify the erroneous biography ChatGPT wrote about him, he says OpenAI claimed it was unable to do so. The complaint argues that this suggests the GDPR’s rule “is completely ignored in practice” by OpenAI.

It’s not a good look for OpenAI, as it appears to be infringing numerous provisions of an important piece of EU legislation. Since it could potentially affect millions of people, the penalties could be very steep indeed. Keep an eye on how this plays out, as it could lead to massive changes not just for ChatGPT, but for AI chatbots in general.

Alex Blake