
UK spy agency says AI chatbots like ChatGPT pose a security threat

Sawdah Bhaimiya, Beatrice Nolan

  • A unit of the British spy agency GCHQ warned AI chatbots like ChatGPT pose a security threat.
  • Sensitive queries that are stored by chatbot providers could be hacked or leaked, it said.

A British spy agency has warned that artificially intelligent chatbots like ChatGPT pose a security threat because sensitive queries could be hacked or leaked.

The National Cyber Security Centre, a unit of the intelligence agency GCHQ, on Tuesday published a blog post outlining the risks to individuals and companies of using a new breed of powerful AI-based chatbots.

Other risks included criminals using chatbots to write "convincing phishing emails" and to help them mount cyberattacks "beyond their current capabilities," the authors of the NCSC blog post wrote.

The authors pointed out that queries entered into chatbots are stored by their providers. Those stored queries could be read by providers such as ChatGPT's owner, OpenAI, and used to train future versions of their chatbots, the authors said.

This creates a risk for sensitive queries, such as a CEO asking "how best to lay off an employee" or somebody asking health questions, the authors wrote.

They added: "Queries stored online may be hacked, leaked, or more likely accidentally made publicly accessible. This could include potentially user-identifiable information."

Rasmus Rothe, cofounder of Merantix, an AI investment platform, told Insider: "The main security issue in employees interacting with tools like ChatGPT is that the machine will learn from those interactions — and that could include learning and consequently repeating confidential information."

OpenAI didn't immediately respond to a request for comment from Insider made outside normal business hours.

OpenAI says it reviews conversations with ChatGPT "to improve our systems and to ensure the content complies with our policies and safety requirements."

The authors of the NCSC blog post said the new chatbots were "undoubtedly impressive" but "they're not magic, they're not artificial general intelligence, and contain some serious flaws."

Major companies including JPMorgan and Amazon have advised employees not to use ChatGPT over concerns that internal information may be leaked.
