Why AI makers don't tell their chatbots to answer only what they know

Anthropic cofounder Jared Kaplan said that AI chatbots that are too "worried" about accuracy can become impractical for users. Jakub Porzycki/NurPhoto via Getty Images
  • AI models too worried about mistakes can stop being useful, according to one AI executive.
  • Anthropic's Jared Kaplan argues that occasional chatbot errors are a necessary "tradeoff" for users.

Generative AI is famously prone to mistakes. But one AI executive is saying that may not be such a bad thing — for now.

If chatbots become too anxious about their own fallibility, they could start second-guessing every piece of information they're presented with, said Jared Kaplan, cofounder of Anthropic, at The Wall Street Journal's CIO Network Summit on Monday. That, he suggested, would make them pointless for users.

Occasional "hallucinations" — errors caused by incorrect assumptions or programming deficiencies — are part of the "tradeoff" for an otherwise useful AI system, Kaplan said.


"These systems — if you train them to never hallucinate — they will become very, very worried about making mistakes and they will say, 'I don't know the context' to everything," he said.

So when is it acceptable for a chatbot to respond to a query with an answer it knows may not be 100% accurate? Kaplan said that's what developers need to decide.


"A rock doesn't hallucinate, but it isn't very useful," he said. "I think you don't want to go to that limit."

The end goal, Kaplan said, is an AI platform with "zero" hallucinations. But that's easier said than done.

"I hallucinate some amount of the time as well," he said. "I make mistakes, we all make mistakes."

The AI sector is still struggling with how to walk the tightrope between accuracy and practicality. Last year, Google's Gemini AI drew criticism from users for coming up with incorrect answers to straightforward queries. At the same time, though, users said that the chatbot was reluctant to weigh in on controversial topics, telling them to "try using Google search" instead.

Anthropic has grappled with the question of accuracy and ethics in generative AI. Earlier this year, BI reported that researchers at the company, as part of a study, designed AI models that would intentionally lie to humans. The study suggested that models trained to lie can deceive evaluators and slip past safety checks.


Kaplan founded Anthropic with other former staffers of OpenAI, the company that created ChatGPT. Anthropic bills itself as an "AI safety and research company" and says its AI development prioritizes safety and ethical values.
