A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation.
- A man used artificial intelligence (AI) to create a chatbot that mimicked his late fiancée.
- The groundbreaking AI technology was designed by the Elon Musk-backed research group OpenAI.
- OpenAI has long warned that the technology could be used for mass misinformation campaigns.
After Joshua Barbeau's fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.
In a story for the San Francisco Chronicle, Barbeau detailed how Project December, software that uses artificial intelligence to create hyper-realistic chatbots, recreated the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.
It may sound like a miracle (or a Black Mirror episode), but the AI's creators warn that the same technology could be used to fuel mass misinformation campaigns.
Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. By consuming massive datasets of human-created text (Reddit threads were particularly helpful), GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.
It's some of the most sophisticated - and dangerous - language-based AI programming to date.
When OpenAI released GPT-2, the predecessor to GPT-3, the group wrote that it could potentially be used in "malicious ways." The organization anticipated that bad actors using the technology could automate "abusive or faked content" on social media.
GPT-2 could be used to "unlock new as-yet-unanticipated capabilities for these actors," the group wrote.
OpenAI staggered the release of GPT-2, and still restricts access to the superior GPT-3, in order to "give people time" to learn the "societal implications" of such technology.
Misinformation is already rampant on social media, even with GPT-3 not widely available. A new study found that YouTube's algorithm still pushes misinformation, and the nonprofit Center for Countering Digital Hate recently identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the "Disinformation Dozen," they have millions of followers.
As AI continues to develop, Oren Etzioni, CEO of the nonprofit research group the Allen Institute for AI, previously told Insider, it will only become harder to tell what's real.
"The question 'Is this text or image or video or email authentic?' is going to become increasingly difficult to answer just based on the content alone," he said.