On the importance of eliminating bias in AI-based recruitment

  • Experts believe that leaders who don't currently have a framework in place for frequently reviewing their organisation's usage of AI should develop one.
  • It may take a growing number of regulations and restrictions to force AI developers and users to create responsible-use standards.
  • To avoid legal wrangling, the data used to train AI must be sufficiently representative of all groups.
In the nascent stage of technology adoption, there are very few oversight mechanisms to govern emerging technologies. Because innovation outpaces regulation, first-to-market enterprises tend to depend on popular support rather than institutional approval. Facebook, now Meta, is still mostly self-regulated nearly 20 years after its founding. Cryptocurrency has been in use since 2009 and has fallen sharply from the peak market value of over $2 trillion it reached in 2021, yet the discussion over its regulation is only beginning. The World Wide Web operated essentially without restriction for five years, until the US Congress passed the Telecommunications Act in 1996. Those in charge of drafting legislation frequently lack knowledge about the technology to be regulated, leading to ambiguous or out-of-date laws that fail to appropriately protect users or foster progress.

The commercialisation of artificial intelligence (AI) is taking a similar route, which is unsurprising. But given AI's innate ability to adapt and learn at an exponential rate, that may not be a bad thing.

What needs to be done to ensure the use of AI in hiring is unbiased and equitable?

Creating diverse, broad data sets

To avoid legal wrangling, the data used to train AI must be sufficiently representative of all groups. This is especially crucial in hiring because many professional work settings – particularly in industries like computing, finance and media – are dominated by white and/or male employees. If accessing diverse, rich and ample data is not an option, experienced data scientists can synthesise additional, representative samples so that the data set achieves a one-to-one ratio across all genders, races, ages and so on, regardless of each group's share of the industry or workforce.
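One crude way to reach the one-to-one ratio described above is to oversample under-represented groups until every group matches the largest one. The sketch below uses only the Python standard library; the `balance_by_group` helper and the toy candidate records are illustrative, not from any particular library, and real synthetic-data pipelines are considerably more sophisticated.

```python
import random
from collections import defaultdict

def balance_by_group(records, group_key, seed=0):
    """Oversample under-represented groups (by duplicating random members)
    until every group is as large as the biggest one. A crude stand-in for
    the synthetic sampling the article describes."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for record in records:
        groups[record[group_key]].append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Duplicate random members until the group reaches the target size.
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    return balanced

# Toy data set: 30 female and 70 male candidate records.
candidates = [{"gender": "female"}] * 30 + [{"gender": "male"}] * 70
balanced = balance_by_group(candidates, "gender")
counts = {g: sum(1 for r in balanced if r["gender"] == g) for g in ("female", "male")}
# counts == {"female": 70, "male": 70}
```

Simple duplication like this preserves group ratios but adds no new information; in practice data scientists generate genuinely synthetic samples rather than copies.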

Testing to remove inherent biases


Any AI developed to assist in making hiring decisions will be subjected to extensive, catalogued and possibly continuous testing in the future. This will most likely follow the US Equal Employment Opportunity Commission's (EEOC) four-fifths (4/5ths) guideline. The 4/5ths rule specifies that the selection rate for any race, sex or ethnic group cannot be less than four-fifths, or 80%, of the selection rate for the group with the highest selection rate. For an AI-enabled hiring tool, demonstrating no adverse impact under the 4/5ths rule should be standard practice.
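The 4/5ths check itself is a short computation: divide each group's selection rate by the highest group's rate and flag anything below 0.8. A minimal sketch, with hypothetical group names and counts chosen purely for illustration:

```python
def adverse_impact_ratios(selected, applied):
    """For each group, return its selection rate divided by the highest
    group's selection rate. Ratios below 0.8 fail the EEOC 4/5ths guideline."""
    rates = {group: selected[group] / applied[group] for group in applied}
    top_rate = max(rates.values())
    return {group: rates[group] / top_rate for group in rates}

# Hypothetical numbers: 100 applicants per group, 60 vs 45 selected.
applied = {"group_a": 100, "group_b": 100}
selected = {"group_a": 60, "group_b": 45}

ratios = adverse_impact_ratios(selected, applied)
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
# group_b's ratio is 0.45 / 0.60 = 0.75, below the 0.8 threshold, so it is flagged.
```

Continuous testing would run a check like this on every model revision and every slice of the applicant pool, not once at launch.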

Gartner predicted that through 2022, 85% of AI projects will provide false results caused by bias in data, algorithms or the teams responsible for managing them. Hence, increased oversight in AI-assisted hiring will, over time, lower the chances of candidates being penalised based on subjective or outright discriminatory considerations. Due to the vagueness of these rules, AI companies must take responsibility for ensuring that candidates are safeguarded.

Supplementing candidate information


To filter or eliminate individuals from consideration, traditional recruiting methods frequently rely on structured data – such as biographical information – and unstructured data – such as a "gut sense". These data points aren't very predictive of future performance, and they frequently contain the most pervasive and systemic biases. Some AI-enabled hiring tools, on the other hand, spew forth recommendations that tell a hiring manager to exclude prospects based on the AI's findings. Letting AI reject candidates like this is likely to cause problems. Instead, such tools should provide extra data points to supplement the information gathered and reviewed during the hiring process. At its best, AI should deliver actionable, explainable, additional information on all candidates, allowing employers to make the best human-led decisions possible.

Data is never neutral

The risks of leaving AI unrestrained in the recruitment world are considerable. The risk of establishing or perpetuating prejudices against race, ethnicity, gender and disability when AI is used to screen, analyse and select job candidates is quite real. Trying to acquire fair data throughout the recruiting process is akin to navigating a minefield. GPA, school reputation and word choice on a résumé feed deliberate and unconscious decisions, resulting in historically inequitable outcomes. This is why, under certain laws, all automated hiring decision tools must undergo a bias audit, in which an impartial auditor assesses the tool's impact on individuals across a range of demographic criteria. While the specifics of the audit requirement are unclear, AI-enabled recruiting firms are required to do "disparate effect evaluations" to see if any group is being harmed.

As businesses increasingly employ artificial intelligence, particularly in people management and recruitment, there's been more discussion among executives about how to ensure that AI is used fairly. And the skill requirements for doing so have been constantly expanding. The global AI market was worth nearly $59.67 billion in 2021 and is projected to grow at a CAGR of 39.4% to reach $422.37 billion by 2028.

Experts believe that leaders who don't currently have a framework in place for frequently reviewing their organisation's usage of AI should develop one. It may take a growing number of regulations and restrictions to force AI developers and users to create responsible-use standards.
