It’s time for enterprises to take a long, hard look at AI Ethics

  • Only 35% of global consumers trust how AI is being implemented by organisations.
  • AI outcomes can be biased or discriminatory and do not take into consideration the plurality and diversity of societies.
  • Responsible AI is becoming the new business imperative.
A few months ago, a leaked document from Facebook, now Meta, shook up the data privacy world. In the leaked document obtained by Motherboard, a group of privacy engineers working at Meta wrote, “We do not have an adequate level of control and explainability over how our systems use data, and thus we can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose.’ And yet, this is exactly what regulators expect us to do, increasing our risk of mistakes and misrepresentation.”

This is one of those instances when we tend to agree with overstatements like ‘AI is an existential threat to humanity’.

As Artificial Intelligence (AI) penetrates every aspect of our lives, and AI algorithms decide what we see, how we interact and what we buy, a million-dollar question arises – what can organisations do to ensure the use of responsible AI? The enterprise world is already preparing to go far beyond AI and embrace next-generation Metaverse technologies. But have we done enough to ensure AI ethics and governance guidelines are in place?

AI ethics & governance: A top priority

The answers are not easy to come by. UNESCO's Recommendation on the Ethics of Artificial Intelligence was adopted by its 193 member states only as recently as November 2021.

It is aimed at ‘defining values, principles and policies that will guide countries in building legal frameworks to ensure that AI is deployed as a force for the common good.’ While it marked a great leap in the right direction, there is still a lot that needs to be done in this area.


As businesses scale up the adoption of AI across the organisation, they must be willing to accept the responsibility of producing outcomes that are transparent and unbiased, with human interest at the core. Businesses must have a razor-sharp focus on AI governance, ethics, and evolving regulations. Without adequate data governance, enterprises risk losing reputation, customers and tons of money due to non-compliance.

The fact is that only 35% of global consumers trust how AI is being implemented by organisations, while 77% think organisations must be held accountable for their misuse of AI, according to Accenture's 2022 Tech Vision research.

The pitfalls of unintended bias

AI is not just about finding new business opportunities, understanding customers, and improving the top line. UNESCO has been quite vocal about why businesses must focus on human-centred AI and has warned that the technology poses an unprecedented ethical dilemma.

“We are seeing lack of transparency, gender and ethnic bias, grave threats to privacy, dignity and agency, the danger of mass surveillance, and a growing use of unreliable AI technologies in law enforcement, to name a few,” states the agency. It has underscored how AI outcomes are often biased or discriminatory and do not take into consideration the plurality and diversity of societies.

Biased outcomes from AI are already widespread across industries; the financial services sector is just one example. As lending processes get automated, credit algorithms tend to produce results that are skewed against women. AI systems are only as good as the data they are fed. Unfortunately, input data can be highly biased, perpetuating a cycle of bias.

Amazon was forced to do away with a ‘sexist AI’ tool for recruitment that discriminated against women. In the healthcare sector, where AI’s use has seen an exponential increase during the pandemic, there is the quintessential question of how doctors can safely rely on AI recommendations while deciding critical treatments. There is also the challenge of evolving regulatory requirements in these sectors. Autonomous vehicles have already raised several questions around data privacy, ownership and access of data.

Commitment to ethical AI

Many responsible organisations are working on building trust and transparency into their AI programs. They are leveraging more diverse data sets to overcome unintentional AI bias and the related challenges. What was so far on paper is now being put into practice by a growing number of organisations. Responsible AI must soon become a business imperative for enterprise leaders.

Risk managers, however, acknowledge that it’s going to be an uphill task, with only about 11% of them admitting to having the full capability of assessing the risks associated with adopting AI across the enterprise, as per an Accenture global survey. The future of this technology hinges on the development of human-centric AI models and systems that are inherently trustworthy, fair, and transparent.