
A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation.

Margaux MacColl   

  • A man used artificial intelligence (AI) to create a chatbot that mimicked his late fiancée.
  • The groundbreaking AI technology was designed by the Elon Musk-backed research group OpenAI.
  • OpenAI has long warned that the technology could be used for mass misinformation campaigns.

After Joshua Barbeau's fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.

Barbeau detailed how Project December, software that uses artificial intelligence to create hyper-realistic chatbots, recreated the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.

It may sound like a miracle, but the AI's creators warn that the same technology could be used to fuel mass misinformation campaigns.

Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.

It's some of the most sophisticated - and dangerous - language-based AI programming to date.

When OpenAI released GPT-2, the predecessor to GPT-3, the organization warned that it could potentially be used in "malicious ways." It anticipated that bad actors using the technology could automate "abusive or faked content on social media," "generate misleading news articles," or "impersonate others online."

GPT-2 could be used to "unlock new as-yet-unanticipated capabilities for these actors," the group wrote.

OpenAI staggered the release of GPT-2, and still restricts access to the superior GPT-3, in order to "give people time" to learn the "societal implications" of such technology.

Misinformation is already rampant on social media, even with GPT-3 not widely available. The nonprofit Center for Countering Digital Hate has blamed just 12 people for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the "Disinformation Dozen," they have millions of followers.

As AI continues to develop, Oren Etzioni, CEO of the nonprofit research group the Allen Institute for AI, said it will only become harder to tell what's real.

"The question 'Is this text or image or video or email authentic?' is going to become increasingly difficult to answer just based on the content alone," he said.
