Google's artificial intelligence bot thinks the purpose of life is 'to live forever'
AP Photo/Connie Zhou
Researchers at the company programmed an advanced type of "chatbot" that learns how to respond in conversations based on examples from a training set of dialogue. And the bot doesn't just spit out canned answers in response to certain words; it can form new answers to new questions.
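The actual model in the paper is a neural sequence-to-sequence network trained on large dialogue corpora, which is far beyond a few lines of code. But the distinction the researchers draw, between a bot with canned answers and one that generalizes from training dialogue, can be illustrated with a deliberately crude stand-in (everything here, including `ToyLearnedBot`, is a hypothetical sketch, not Google's model):

```python
from collections import defaultdict

# A canned-response bot maps exact inputs to fixed outputs and fails
# on anything it hasn't seen verbatim.
canned = {"hi": "hello", "bye": "goodbye"}

def canned_reply(question):
    return canned.get(question, "I don't understand.")

# A (very crude) learned bot instead builds statistics from training
# dialogue, so it can respond to questions it never saw word-for-word.
# This word-association scheme is only an illustration; the paper's
# model is a recurrent neural network.
class ToyLearnedBot:
    def __init__(self):
        # word appearing in a question -> answers given to such questions
        self.assoc = defaultdict(list)

    def train(self, pairs):
        for question, answer in pairs:
            for word in question.lower().split():
                self.assoc[word].append(answer)

    def reply(self, question):
        # Pick the training answer most associated with the question's
        # words, even when the exact question was never in the data.
        candidates = []
        for word in question.lower().split():
            candidates.extend(self.assoc.get(word, []))
        if not candidates:
            return "I don't know."
        return max(set(candidates), key=candidates.count)

bot = ToyLearnedBot()
bot.train([
    ("what is your name", "I am a bot."),
    ("what is the purpose of life", "To serve the greater good."),
])
# A novel phrasing still gets a sensible answer, because the bot
# generalizes from overlapping words rather than exact matches.
print(bot.reply("tell me the purpose of living"))
```

The canned bot would have returned its fallback for that question; the toy learned bot hands back "To serve the greater good." because "purpose" and "of" overlap with a training question, which is the (vastly simplified) spirit of forming new answers to new questions.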
This means Google's researchers could get a little creative with it, and they certainly did: they asked the bot everything from boring IT questions to the meaning of life.
The responses were alternately impressive, amusing, and unnerving. Here are some of the most poetic interactions from the paper (sic):
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I'm in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don't know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don't believe in god, then you don't know.
Human: Ok... so what is the definition of morality?
Machine: Well, the truth is, you're not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I'm not ashamed of being a philosopher!
The bot also took a little time to, strangely, speak positively of Microsoft co-founder Bill Gates...
Human: What do you think about Bill Gates?
Machine: He's a good man.
The researchers said they found it "encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines," and that it could generalize to new questions. What they weren't as happy about was that the model "only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above."
You can read the whole paper, "A Neural Conversational Model."