Sam Altman says OpenAI would 'cease operating' in Europe if it can't comply with new rules

OpenAI CEO Sam Altman told reporters in London that he was concerned about the EU's AI Act, and said the ChatGPT-maker could "cease operating" in Europe. Win McNamee/Getty Images
  • Sam Altman told reporters in London he's concerned about the upcoming EU AI Act's impact on OpenAI.
  • Altman said OpenAI "will try to comply" but could "cease operating" in Europe if it can't, the Financial Times reported.

OpenAI's Sam Altman warned that the ChatGPT maker could stop operating in Europe if the bloc implements its proposed rules on artificial intelligence.

"The details really matter," Altman told reporters during his tour of some of Europe's capital cities. "We will try to comply, but if we can't comply, we will cease operating," the Financial Times reported.

The EU's proposed AI Act, which is "the first law on AI by a major regulator anywhere," according to its website, focuses on regulating AI and protecting Europeans from AI risks, which it ranks in three categories. Members of the European Parliament voted by a large majority in favor of the act, which is now up for adoption, with June 14 set as the tentative date.

Altman is reportedly concerned that OpenAI's systems, such as ChatGPT and GPT-4, could be designated as "high risk" under the regulation, according to Time. That would mean the company would have to meet certain safety and transparency requirements, such as disclosing that its content was AI-generated. OpenAI and Sam Altman did not immediately respond to Insider's request for comment.

Under the proposed European rules:

  • AI systems in the highest risk category of the AI Act would be banned outright. That category covers AI that the regulations say would "create an unacceptable risk, such as government-run social scoring of the type used in China."
  • The second category, high-risk AI systems, would be "subject to specific legal requirements" and would include systems such as tools that scan resumes and rank job applicants.
  • The third category covers AI systems that are "not explicitly banned or listed as high-risk" and would therefore be "largely left unregulated."

The AI Act's rules would also require AI companies to design AI models to prevent them from "generating illegal content," and to publish "summaries of copyrighted data used for training."

When OpenAI released GPT-4 in March, some in the AI community were disappointed that OpenAI did not disclose information on what data was used to train the model, how much it cost to build, and how it was created.

Ilya Sutskever, OpenAI's cofounder and chief scientist, previously told The Verge that the company withheld this information because of competition and safety concerns.

"It took pretty much all of OpenAI working together for a very long time to produce this thing," Sutskever said. "And there are many many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field."

Sutskever also said that, while competition is top-of-mind now, safety will become more important in the future.

While Altman said he's concerned about how the AI Act will affect OpenAI's presence in Europe, he recently told the US Senate that there should be a government agency to oversee AI projects that perform "above a certain scale of capabilities."

Altman is pushing for a government agency that could grant licenses to AI companies and revoke them if those companies overstep safety rules.
