Here's how we can teach machines to be fair
- Automated decision-making in machine learning can lead to discrimination.
- If this discrimination is not prevented, it could cause irreversible damage, such as distrust of the technology and of the companies that develop it.
- This is just one of the risks relating to machine learning.
Erica Kochi is the Head of Innovation at UNICEF and also leads the World Economic Forum's Global Future Council.
Learning not to discriminate
What happens when machines learn to discriminate?

Most of the stories we've heard about discrimination in machine learning come out of the United States and Europe. Events like a Google Photos mechanism that mistakenly labeled an image of two black friends as gorillas, and predictive policing tools that have been shown to amplify racial bias, have received extensive and important media coverage. In many parts of the world, particularly in middle- and low-income countries, using ML to make decisions without adequate precautions against discrimination is likely to have far-reaching, long-lasting and potentially irreversible consequences. Take, for instance, any one of the following examples:
- In Indonesia, economic development has unfolded unequally across geographical (and, subsequently, ethnic) lines. While access to higher education is relatively uniform across the country, the top 10 universities are all on the island of Java, and a large majority of the students who attend those universities are from Java. As firms hiring in white-collar sectors train ML systems to screen applicants based on factors like educational attainment status, they may systematically exclude those from poorer islands such as Papua.
- Insurance companies now have ways to predict an individual's future health risks. Mexico is among the countries where, for most people, quality healthcare is available only through private insurance. At least two private multinational insurance companies operating in Mexico are now using ML to maximize their efficiency and profitability, with potential implications for the human right to fair access to adequate healthcare. Imagine a scenario in which insurance companies use ML to mine data such as shopping history to recognize patterns associated with high-risk customers, and charge them more: the poorest and sickest would be least able to afford access to health services.
- While few details are publicly available, reports suggest that China is creating a model to score its citizens by analyzing a wide range of data, from banking, tax, professional and performance records to smartphones, e-commerce and social media information. The Washington Post described this as an attempt "to use the data to enforce a moral authority as designed by the Communist party". What will it mean, in future, if governments act on scores computed using data that is incomplete or historically biased, using models not built for fairness?
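The Indonesian hiring example above illustrates a general mechanism: a model that never sees a protected attribute can still discriminate through a correlated proxy feature. A minimal sketch of how this shows up in practice, using entirely synthetic, illustrative numbers (the groups, rule, and counts are assumptions for demonstration, not real data):

```python
# Hypothetical sketch: a seemingly neutral screening feature acting as a
# proxy for group membership. All data below is synthetic and illustrative.

applicants = [
    # (home_island, attended_top10_university)
    *[("Java", True)] * 70, *[("Java", False)] * 30,
    *[("Papua", True)] * 5, *[("Papua", False)] * 95,
]

def screen(applicant):
    # A naive rule learned from past hires: keep only applicants
    # from top-10 universities. Geography is never consulted directly.
    _, top10 = applicant
    return top10

def selection_rate(group):
    members = [a for a in applicants if a[0] == group]
    return sum(screen(a) for a in members) / len(members)

rate_java = selection_rate("Java")    # 0.70
rate_papua = selection_rate("Papua")  # 0.05

# Disparate-impact ratio: values far below 1 indicate the rule
# disadvantages one group, even though it ignores geography.
print(rate_papua / rate_java)
```

Because all top-10 universities sit on one island, the "educational attainment" feature reproduces the geographic divide: auditing selection rates per group, as above, is one simple way a company can detect this before deployment.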
These scenarios tell us that, while machine learning can hugely benefit the world, there are also important risks to consider. We need to look closely at the ways discrimination can creep into ML systems, and at what companies can do to prevent it.

If, as Klaus Schwab argues in his book The Fourth Industrial Revolution, we want to work together to "shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people", we need to design and use machine learning to prevent, not deepen, discrimination.

This is an opinion column. The thoughts expressed are those of the author.
Read the original article on World Economic Forum. Copyright 2018.