Rise of the machines: 2020 will be a good year for India to teach AI some ethics
- Most countries, including India, are still catching up on the legislation needed to govern artificial intelligence (AI).
- With no laws in place and only limited self-regulation, companies are flouting AI ethics in the race to the finish line, according to the Capgemini Research Institute.
- Machines are making decisions without the banking sector being aware of it, and data is being collected without user consent. And, when it comes to health, AI-based results can sometimes be biased.
Most leading companies in the world are working with artificial intelligence (AI) in one way or another. When some of them, including tech giants like Google and Facebook, develop AI that comes up with its own language or can think independently, one can’t help but imagine Terminator 2 or The Matrix playing out in the near future.
AI, or artificial intelligence, is intelligence demonstrated by machines. Any piece of technology that comes with the tagline ‘smart’ is probably some type of AI.
The most prominent example of AI we already use is digital assistants, like Apple’s Siri. If you take it a step further and integrate that digital assistant with a speaker, you get Amazon Alexa on Echo or Google Assistant on Google Home. And, they’ve already been accused of being used to spy on users.
It’s not only science-fiction stories like Isaac Asimov’s I, Robot that have warned us about machines taking over the world, but industry leaders as well. The late theoretical physicist Stephen Hawking said, "We are facing potentially the best or worst thing to happen to humanity in history."
Elon Musk, the CEO of Tesla and SpaceX, also echoes Hawking’s concerns. According to him, if AI isn’t handled properly, it could result in humans becoming an endangered species. "When a species of primate, homo sapiens, became much smarter than other primates, it pushed all the other ones into a very small habitat... So there are very few mountain gorillas and orangutans and chimpanzees — monkeys in general," he said.
Industry leaders feel that instances of overdependence on AI-led technology have already started to emerge. In fact, it’s one of the top 10 ethical issues resulting from the use of AI.
In India, more than 85% of companies have come face to face with such issues, according to Capgemini’s Towards Ethical AI report.
Without ethical guidelines to keep AI algorithms in check, one of the biggest concerns is the disregard for user consent. In addition, Capgemini finds that a lack of diversity can lead to a biased AI that prefers one particular gender or race over another. And the ease with which AI can be deployed leaves loopholes for its misuse in mass surveillance if it isn’t regulated carefully.
The rush for AI
For most companies, AI’s ethical considerations get thrown out the window in the rush to the finish line, in many instances without enough manpower, something that Alphabet and Google CEO Sundar Pichai has specifically warned against.
For others, it simply doesn’t ‘occur’ to them to consider the ethical requirements, despite popular examples of how it can all go wrong in a heartbeat, like in I, Robot or X-Men: Days of Future Past.
According to Musk, ethics are already being disregarded as ‘advanced AI’ manipulates social media. He implied that the technology is being used to make basic spambots more convincing and effective. And that has already started to come true in the form of deep-fakes and robocalls.
There shouldn’t be an ‘oops’ moment when it comes to your data, identity and health
Proponents of AI argue that it doesn’t really pose that big a threat. Even if it’s going to make some tasks obsolete, new job opportunities will emerge simultaneously. It’s also going to make processes more efficient, since they will all be ‘smart’ systems that learn as they go. And, supposedly, it leaves zero room for error.
"One of the biggest misconceptions of AI is that there is super-intelligent being, or what we call generalised AI, that knows-all, can do-all, and is smarter than all of us put together… AI built on us, it’s mimicking our thought process — it’s an emulation of us," explained Ayanna Howards, a roboticist, in the YouTube documentary series
How Far is Too Far? The Age of AI.If AI is truly an ‘emulation’ of humans, that means it also inherits the same prejudices and biases. The underlying attempt to eliminate those biases could be interpreted as one of the many ways that developers do want AI to be more competent than humans.
Which is why Capgemini’s report warns that there is a need for values and laws to govern AI. Machines are making money-related decisions without the banking sector knowing about it, and personal data is being processed without user consent. And while an AI without bias would be superior, a biased AI is dangerous when it comes to healthcare.
Those are three pretty important things, your money, your identity and your health, that shouldn’t be put at risk.
"Algorithmic systems are amoral… they do not have a moral compass. Yet, they can make decisions that have pervasive moral consequences," said Nicolas Economou, CEO of H5, an informational retrieval company.
Where does the buck stop?
Ethics are subjective. There is no one definition of what is right and what is wrong, which is why laws are updated and revolutions take place. The point of having ethical guidelines is that those principles encompass what society at large believes is ‘the right thing to do’.
"In the case of ethics, this is not something where responsibility lies with any particular individual in the company. It is a shared responsibility for all of us," said Michael Natusch, the global head of AI at Prudential Plc, a British insurance company.
The issue with ethics around AI is that most governments around the world are still trying to catch up to the kind of laws required to prevent its misuse. It’s hard to predict what could go wrong with technology that is being implemented in new ways on a regular basis.
"More conversations about the best legislation should start now… We need both self-regulation and legislation as they are complementary tools," said Luciano Floridi, professor of ethics and philosophy at the University of Oxford.
In isolated incidents, this may not pose much of a threat, but instances of mass surveillance and personal data being collected without consent are problems that don’t take long to steamroll into something bigger, the most glaring example being China’s constant monitoring of its citizens.
"Organizations or individuals may not be inherently evil, but we as a society need to develop a way in which we can systematically evaluate the ethical impact of AI to account for negligence," said Marija Slavkovik, an associate professor at the University of Bergen.
Asking the right questions
"The first step is to define a process," according to Economou and the Institute of Electrical and Electronics Engineers (IEEE) has released the ‘General Principles of Ethically Aligned Design’ to help serve as a guideline. Most people think it’s
about time to bring in these laws to govern AI.