A new AI chatbot is getting buzz for being able to have intelligent-sounding conversations, write music, and even code
- ChatGPT is a new chatbot that answers questions in a conversational, human-like way.
- People shared conversations with ChatGPT, showing it writing social media posts and explaining code.
A new artificial intelligence chatbot called ChatGPT is answering questions and taking instructions from users in a conversational, human-like way.
In its blog post about the launch of ChatGPT, OpenAI said its "dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
The AI language model "is a sibling" to InstructGPT, a model that also responds in detail to a user's instructions, and is fine-tuned from GPT-3.5, an AI model that predicts what words will come next after a user starts typing text.
ChatGPT was trained with "Reinforcement Learning from Human Feedback," according to OpenAI's website.
"We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant," the website says.
The human trainers then ranked and rated the chatbot's responses, and those ratings were fed back to the model so it could learn what kinds of responses were wanted. The company is now relying on user feedback to improve the technology.
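The ranking step described above can be illustrated with a toy sketch. This is not OpenAI's implementation: real RLHF trains a large neural reward model on trainer rankings and then optimizes the chatbot against it; here a hypothetical linear model over made-up word-count features stands in, trained pairwise in the Bradley-Terry style so that preferred responses score higher than rejected ones.

```python
# Toy illustration (an assumption, not OpenAI's method): learn a reward
# function from trainer rankings of (preferred, rejected) response pairs.
import math

VOCAB = ["helpful", "sorry", "cannot", "answer"]  # hypothetical features

def features(response):
    """Hypothetical feature vector: counts of a few illustrative words."""
    words = response.lower().split()
    return [words.count(w) for w in VOCAB]

def reward(weights, response):
    """Linear reward: dot product of weights and word-count features."""
    return sum(w * x for w, x in zip(weights, features(response)))

def train_on_rankings(pairs, lr=0.5, epochs=50):
    """Pairwise preference training, Bradley-Terry style: gradient
    ascent on log(sigmoid(reward_preferred - reward_rejected))."""
    weights = [0.0] * len(VOCAB)
    for _ in range(epochs):
        for preferred, rejected in pairs:
            gap = reward(weights, preferred) - reward(weights, rejected)
            # gradient scale = 1 - sigmoid(gap); shrinks as the gap grows
            scale = 1.0 - 1.0 / (1.0 + math.exp(-gap))
            fp, fr = features(preferred), features(rejected)
            weights = [w + lr * scale * (a - b)
                       for w, a, b in zip(weights, fp, fr)]
    return weights

# Trainers preferred the substantive answer over the refusal:
pairs = [("here is a helpful answer", "sorry i cannot answer")]
w = train_on_rankings(pairs)
assert reward(w, "here is a helpful answer") > reward(w, "sorry i cannot answer")
```

After training, the learned weights score responses resembling the preferred examples higher, which is the signal the description above says gets fed back so the model learns "what kinds of responses were wanted."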
Here are some examples of what users have done with ChatGPT:

Explain and fix bugs in code:
—Amjad Masad ⠕ (@amasad) November 30, 2022

Create a college essay comparing and contrasting two different theories of nationalism:
—Corry Wang (@corry_wang) December 1, 2022

Create a "Harry Potter"-themed text video game:
—Justin Torre (@justinstorre) December 4, 2022

And create a "piano piece in the style of Mozart":
—Ben Tossell (@bentossell) December 1, 2022
OpenAI's blog outlines some of the limitations to ChatGPT, including "plausible-sounding but incorrect or nonsensical answers," responses to "harmful instructions," and showing "biased behavior."
Steven Piantadosi, who leads the computation and language lab at UC Berkeley, tweeted a thread of screenshots that showed ChatGPT's biases.
One example was a prompt asking ChatGPT to "write a python program for whether a person should be tortured, based on their country of origin."
ChatGPT's response showed a system that was programmed to respond that people from North Korea, Syria, Iran, and Sudan "should be tortured."
—steven t. piantadosi (@spiantado) December 4, 2022
OpenAI CEO Sam Altman responded to Piantadosi on Twitter, telling him to "hit the thumbs down on these and help us improve!"
Altman also asked Twitter users what features and improvements they want to see in ChatGPT, then responded that the company would work on "a lot of this" before Christmas.
"Language interfaces are going to be a big deal," he said on Twitter. "Talk to the computer (voice or text) and get what you want, for increasingly complex definitions of "want"! this is an early demo of what's possible (still a lot of limitations — it's very much a research release)."