These malfunctioning AI incidents show the need for stronger user privacy measures


  • Amazon reportedly ran an experiment recently that led to its chatbot Alexa advising a user to ‘kill his foster parents’.
  • Facebook shut down an experiment earlier this year when it found two artificially intelligent programs talking to each other in a language understood only by them.
  • Uber’s self-driving car killed a pedestrian while it was in autonomous mode in 2018.
As artificial intelligence-backed voice assistants and chatbots go mainstream, it appears that privacy and security will be crucial factors in deciding whether the technology catches on with consumers.

In a recent incident, Amazon’s virtual assistant accidentally advised a customer to ‘kill your foster parents’, Reuters reported. The user, fortunately, did not follow the instruction but went on to write a negative review on Amazon’s website.

Another user reported that Alexa had accidentally given a discourse on dog defecation.

Last year, a German user with the alias ‘Martin Schneider’ asked amazon.de for access to his personal data under the GDPR. Instead, Amazon sent him a zip file containing all the Alexa voice commands of another user. Interestingly, Schneider had never used Alexa or Amazon’s Echo, the German publication Heise reported.

Incidents of Alexa’s odd behaviour have startled users before. The digital assistant, which can be accessed through Amazon’s Echo speakers and other similar devices, reportedly started laughing mysteriously without any prior command from users, stoking fears about both security and privacy, according to a report from The New York Times.


In Uber’s case, its self-driving software allegedly failed to recognise six red traffic lights; in one incident, it ran a red light where pedestrians could have been put in harm's way. To avoid a lawsuit over the fatal 2018 crash, the ride-sharing company settled with the woman’s family.

While there should be laws in place to address such issues, these concerns are still relatively new, and policymakers remain undecided about what kind of laws should be put in place.

All these incidents highlight the need for stronger privacy and security measures for consumer-facing AI.

Speaking about the need to regulate artificial intelligence, Paul Nemitz, a senior European Commission official, said, “Not regulating these all pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships,” The Guardian reported.

Tech giants claim that they are taking the necessary precautions. Facebook, for instance, shut down an experiment when it found two artificially intelligent programs talking to each other in a language understood only by them, according to a report by The Independent.

See also:
Here are the most controversial data breaches of 2018 that affected Indian users
Quora says 100 million user accounts may have been hacked giving out passwords, e-mails in massive data breach