
ChatGPT banned by regulator in Italy, which says there's no legal basis for using personal data to train the chatbot following data breach

Pete Syme   

  • Italy's data protection regulator announced a ban on ChatGPT and an investigation into OpenAI.
  • It cited a March 20 data breach and said there was no "legal basis" for using people's data to train the chatbot.

ChatGPT has been banned in Italy over privacy concerns, in a landmark order against the major AI chatbot, the country's privacy regulator announced Friday.

Italy's national data protection agency (DPA) said it would block access to ChatGPT immediately and open an investigation into its creator, OpenAI.

It added that the restriction was temporary, lasting until the company can comply with the European Union's data protection law, known as the General Data Protection Regulation (GDPR).

According to an Insider translation of the Italian press release announcing the news, the DPA said that there is no legal basis to justify "the mass collection and storage of personal data" used to train the algorithms behind ChatGPT. The regulator also alleged that such data was processed inaccurately.

The Italian authority also cited a data breach on March 20, where a bug allowed some ChatGPT users to see the titles of other users' conversations. Sam Altman, the OpenAI CEO, called it a "significant issue" in a tweet two days later, adding: "We feel awful about this."

And while ChatGPT's terms of service say it's aimed at those aged 13 and over, the watchdog pointed out there are no checks to ensure this. It added that this "exposes minors to absolutely unsuitable answers compared to their degree of development and self-awareness."

The temporary ban follows Wednesday's open letter – signed by over 1,000 people including AI experts, Elon Musk, and Apple co-founder Steve Wozniak – which called for a six-month pause on developing technology more powerful than GPT-4.

They urged AI companies to introduce safety protocols for the technology, as progress ramps up – with the more powerful GPT-4 coming less than four months after the chatbot's November release.

"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter said.

OpenAI has 20 days to show what measures it's taking to allay the authority's concerns, or it faces a fine of up to 20 million euros ($21.7 million) or 4% of the company's annual global turnover.

OpenAI did not immediately respond to Insider's request for comment, sent outside US working hours.
