Princeton computer science professor says don't panic over 'bullshit generator' ChatGPT

ChatGPT, an AI chatbot, has gone viral in the past two weeks. Getty Images
  • A Princeton professor told The Markup that "bullshit generator" ChatGPT merely presents narratives.
  • He said it can't be relied on for accurate facts, and that it's unlikely to spawn a "revolution."

A Princeton professor who researches the impact of artificial intelligence doesn't believe that OpenAI's popular bot ChatGPT is a death knell for industries.

While such tools are more accessible than ever and can instantly package voluminous information and even produce creative works, they can't be trusted for accurate information, Princeton professor Arvind Narayanan said in an interview with The Markup.

"It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not," he said.


Experts who study AI have said that products like ChatGPT, part of a category of large language model tools that can respond to human commands and produce creative output, work by predicting what to say next, rather than by synthesizing ideas the way human brains do.

Narayanan said this makes ChatGPT more of a "bullshit generator" that presents its output without any regard for its accuracy.
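The prediction point can be made concrete with a toy sketch. The Python snippet below is purely illustrative and is not how ChatGPT actually works (real large language models use neural networks trained over vastly larger contexts), but it shows the core mechanic the experts describe: pick the next word from statistics about what tends to follow, with no step that checks the output against reality.

```python
import random
from collections import defaultdict

# Toy "bigram" text generator: counts which word follows which in a tiny
# corpus, then samples the next word in proportion to those counts.
# Illustrative only; it has no notion of whether its output is true.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:  # no known continuation; stop generating
            break
        # Sample the next word weighted by how often it followed this one.
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the"
```

The output is fluent-looking precisely because it mimics the statistics of its training text, which is also why fluency alone is no evidence of truth.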


But there are already some early indications of how companies will adopt this type of technology.

For instance, BuzzFeed, which in December reportedly laid off 12% of its workforce, will use OpenAI's technology to help make quizzes, according to The Wall Street Journal. The tech review site CNET published AI-generated stories and later had to correct them, The Washington Post reported.

Narayanan cited the CNET case as an example of the pitfalls of this type of technology. "When you combine that with the fact that the tool doesn't have a good notion of truth, it's a recipe for disaster," he told The Markup.

He said a more likely outcome is that industries will change in response to the use of large language model tools, rather than be replaced outright.

"Even with something as profound as the internet or search engines or smartphones, it's turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution," he told The Markup. "I don't think large language models are even on that scale. There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a 'sky is falling' kind of issue."


The Markup's full interview with Narayanan is worth reading; you can find it here.
