REMINDER: Asimov's Laws Of Robotics Won't Protect You And Robots Will Be More Than Happy To Kill Us All


[Image: still from Terminator Salvation. Warner Bros.]

You may have noticed that we have a soft spot for sci-fi author Isaac Asimov. His fiction, especially as it pertains to robotics, cemented him in the sci-fi canon and advanced thinking on what practical robotics might look like in the future.


He also used his fiction to entertain a foreboding question: Should a robot be able to kill a human?

Asimov decided not, and drew up three "laws of robotics" that governed how robots behaved in his fictional universes.


They go like this:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In short: human life is revered above all else. Not only is a robot forbidden from harming a person; robots are universally charged with protecting people. It's an effective system that sets up a number of entertaining plots in his writing, but the takeaway is clear: in an Asimovian universe where things are operating normally, it is impossible for a robot to harm a human.


(Futurist and writer Ray Kurzweil famously talks about the singularity, an indeterminate point in the future when machine capability will overtake that of humans, which he maintains is only a matter of time.)

We were surprised when, in casual conversation, an acquaintance expressed relief at the existence of Asimov's laws, as if they offered some sort of actual protection from would-be robot overlords. Let's be clear: Asimov's three laws of robotics are a fictional creation with no real-world bearing on robot behavior. In this sense they're a lot like The Force: fun to contemplate, but useless in defending oneself from sci-fi monsters.

This raises the question: what checks are actually in place to prevent some sort of robot uprising in the future?

None.