Here's how we can teach machines to be fair
- Automated decision-making in machine learning can lead to discrimination.
- If not prevented, this discrimination could cause irreversible damage, such as distrust of the technology and of the companies that develop it.
- This is just one of the risks relating to machine learning.
The opportunities that artificial intelligence (AI) can unlock for our world - from discovering cures for diseases that kill millions each year to significantly cutting carbon emissions - are expanding every day. This includes a subset of AI called machine learning, which leverages the ability of machines to learn from vast quantities of data and use those lessons to make predictions. Machine learning (ML) is already enabling pathways to financial inclusion, citizen engagement, more affordable healthcare and many more vital systems and services. ML systems might highlight a post in your Facebook newsfeed based on your online activity, or select applicants in a hiring process. ML is one of the most powerful tools humanity has created - and it is more important than ever that we learn how to harness its power for good.
Erica Kochi is the Head of Innovation at UNICEF and also leads the World Economic Forum's Global Future Council
Learning not to discriminate
What happens when machines learn to discriminate?

Most of the stories we've heard about discrimination in machine learning come out of the United States and Europe. Events like a Google photo mechanism that mistakenly labeled an image of two black friends as gorillas, and predictive policing tools that have been shown to amplify racial bias, have received extensive and important media coverage. In many parts of the world, particularly in middle- and low-income countries, using ML to make decisions without taking adequate precautions to prevent discrimination is likely to have far-reaching, long-lasting and potentially irreversible consequences. Take, for instance, any one of the following examples:
- In Indonesia, economic development has unfolded unequally across geographical (and, subsequently, ethnic) lines. While access to higher education is relatively uniform across the country, the top 10 universities are all on the island of Java, and a large majority of the students who attend those universities are from Java. As firms hiring in white-collar sectors train ML systems to screen applicants based on factors like educational attainment status, they may systematically exclude those from poorer islands such as Papua.
- There are now ways for insurance companies to predict an individual's future health risks. Mexico is among the countries where, for most, quality healthcare is available only through private insurance. At least two private multinational insurance companies operating in Mexico are now using ML to maximize their efficiency and profitability, with potential implications for the human right to fair access to adequate healthcare. Imagine a scenario in which insurance companies use ML to mine data such as shopping history to recognize patterns associated with high-risk customers, and charge them more: the poorest and sickest would be least able to afford access to health services.
- While few details are publicly available, reports suggest that China is creating a model to score its citizens by analyzing a wide range of data, from banking, tax, professional and performance records to smartphones, e-commerce and social media information. The Washington Post described this as an attempt "to use the data to enforce a moral authority as designed by the Communist party". What will it mean, in future, if governments act on scores computed using data that is incomplete or historically biased, using models not built for fairness?
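The hiring example above describes what is often called proxy discrimination: a screening model that never sees group membership can still exclude one group, because a seemingly neutral feature (such as attending a top university) is correlated with it. A minimal sketch of that dynamic, using entirely invented numbers, might look like this:

```python
# Hypothetical sketch of proxy discrimination. The applicant pool and the
# screening rule are invented for illustration; they are not real data or
# any company's actual model.

def selection_rate(candidates, rule):
    """Fraction of candidates that pass the given screening rule."""
    selected = [c for c in candidates if rule(c)]
    return len(selected) / len(candidates)

# Synthetic applicant pool: group membership correlates with access to top
# universities (as in the Java/Papua example), not with ability.
pool = (
    [{"group": "A", "top_university": True}] * 70
    + [{"group": "A", "top_university": False}] * 30
    + [{"group": "B", "top_university": True}] * 10
    + [{"group": "B", "top_university": False}] * 90
)

# A "neutral" screen learned from historical hires: it never looks at
# group, only at educational attainment.
def screen(candidate):
    return candidate["top_university"]

rate_a = selection_rate([c for c in pool if c["group"] == "A"], screen)
rate_b = selection_rate([c for c in pool if c["group"] == "B"], screen)

# Disparate-impact ratio: values below 0.8 are often flagged as adverse
# impact (the "four-fifths" rule of thumb used by US regulators).
print(rate_a, rate_b, rate_b / rate_a)  # 0.7, 0.1, ~0.14
```

Even though `screen` is facially neutral, group B is selected at roughly one-seventh the rate of group A - which is why auditing selection rates across groups, not just inspecting a model's inputs, matters.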
These scenarios tell us that, while machine learning can hugely benefit the world, there are also important risks to consider. We need to look closely at the ways discrimination can creep into ML systems, and at what companies can do to prevent this.

If, as Klaus Schwab argues in his book The Fourth Industrial Revolution, we want to work together to "shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people", we need to design and use machine learning to prevent, not deepen, discrimination.

This is an opinion column. The thoughts expressed are those of the author.
Read the original article on World Economic Forum. Copyright 2018.