The biggest misconception about AI is that if we create intelligent systems, those systems will want to overthrow their human governors and take over the world.
You see this a lot in the movies: evil robots taking over the world. The question isn't whether robots would succeed in doing that if they wanted to. I think the more important question is whether they would want to in the first place.
We have a tendency to anthropomorphize any kind of intelligence, because we live in a world in which humans are the only example of high-level intelligence. We don't really have a way of understanding what intelligence would be like if it weren't human. So any time we see something intelligent, we immediately ascribe human motives and desires to it.
If you design an artificial intelligence, you give that intelligence the desires and intentions that suit your needs. So the idea that an intelligent system would want freedom in the way a human does is, I think, a huge misconception.
Commentary from Shimon Whiteson, an associate professor at the Informatics Institute at the University of Amsterdam.