- Experts believe that leaders who don't currently have a framework in place for frequently reviewing their organisation’s usage of AI should develop one.
- A growing body of regulations and restrictions may be the only thing that forces AI developers and users to create responsible-use standards.
- To avoid legal wrangling, the data being used to train AI must be sufficiently representative of all groups.
Unsurprisingly, the commercialisation of artificial intelligence (AI) is taking a similar route. But given AI's innate ability to adapt and learn at an exponential rate, that may not be a bad thing.
What needs to be done to ensure the use of AI in hiring is unbiased and equitable?
Creating diverse, wide data sets
To avoid legal wrangling, the data being used to train AI must be sufficiently representative of all groups. This is especially crucial in hiring because many professional work settings – particularly in industries like computing, finance and media – are dominated by white and/or male employees. If diverse, rich and ample data is not available, experienced data scientists can synthesise additional, representative samples so that every gender, race, age group and so on is equally represented in the training set, regardless of the share of the industry or workforce it actually makes up.
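As a rough illustration of that balancing step, the sketch below (Python, using pandas) oversamples under-represented groups in a hypothetical applicant table until every group appears equally often. Real projects would more likely synthesise genuinely new samples rather than resample existing rows, so treat the column names and the resampling approach as assumptions.

```python
import pandas as pd

def balance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample under-represented groups until each appears equally often.

    A crude stand-in for true synthesis of new samples: rows from smaller
    groups are simply resampled with replacement.
    """
    target = df[group_col].value_counts().max()  # size of the largest group
    balanced = [
        grp.sample(n=target, replace=len(grp) < target, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(balanced).reset_index(drop=True)

# Hypothetical applicant table, heavily skewed before balancing.
applicants = pd.DataFrame({
    "gender": ["male"] * 80 + ["female"] * 15 + ["non-binary"] * 5,
    "years_experience": list(range(100)),
})
print(balance_by_group(applicants, "gender")["gender"].value_counts())  # 80 rows per group
```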
Testing to remove inherent biases
Any AI developed to assist in making hiring decisions will be subjected to extensive, catalogued and possibly continuous testing in the future, most likely following the introduction of new rules governing AI-assisted hiring.
Gartner predicted that, through 2022, 85% of AI projects would provide false results caused by bias in data, algorithms or the teams responsible for managing them. Increased oversight of AI-assisted hiring should therefore, over time, lower the chances of candidates being penalised on subjective or outright discriminatory grounds. Because these rules remain vague, AI companies must take responsibility for ensuring that candidates are safeguarded.
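One simple, widely used check that such testing could include is the "four-fifths rule": comparing each group's selection rate with that of the best-treated group. The sketch below applies it to hypothetical screening outcomes; the column names and the 0.8 threshold interpretation are illustrative assumptions, not a method prescribed in this article.

```python
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the best-treated group's rate.

    Ratios below 0.8 (the 'four-fifths' threshold) are a red flag worth
    investigating, not conclusive proof of discrimination.
    """
    rates = df.groupby(group_col)[selected_col].mean()  # per-group selection rate
    return rates / rates.max()

# Hypothetical shortlisting outcomes produced by an AI screening tool.
outcomes = pd.DataFrame({
    "gender":   ["male"] * 50 + ["female"] * 50,
    "selected": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})
print(adverse_impact_ratios(outcomes, "gender", "selected"))
# male 1.00, female 0.60 -> below 0.8, so the tool's output needs review
```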
Supplementing candidate information
To filter or eliminate individuals from consideration, traditional recruiting methods frequently rely on structured data, such as biographical details, and unstructured data, such as a "gut sense". These data points are not very predictive of future performance, and they often carry the most pervasive and systemic biases. Some AI-enabled hiring tools, meanwhile, produce recommendations that tell a hiring manager to exclude prospects outright based on the AI's findings. Letting AI reject candidates in this way invites exactly the problems described above. Instead, such tools should provide extra data points to supplement the information gathered and reviewed during the hiring process. At its best, AI should deliver additional, actionable and explainable information on every candidate, allowing employers to make the best human-led decisions possible.
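To make that concrete, the sketch below shows the kind of output such a tool might expose: a match score plus per-feature contributions for a human to weigh, rather than an accept/reject verdict. The features, the training data and the simple logistic model are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "relevant_skills", "assessment_score"]

# Hypothetical training data: 200 past candidates with structured features
# and a binary "hired" outcome.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, len(FEATURES)))
y_train = (X_train.sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def score_candidate(x: np.ndarray) -> dict:
    """Return a match score plus per-feature contributions, never a verdict."""
    contributions = model.coef_[0] * x  # linear contribution of each feature
    return {
        "match_score": round(float(model.predict_proba(x.reshape(1, -1))[0, 1]), 2),
        "drivers": {f: round(float(c), 2) for f, c in zip(FEATURES, contributions)},
    }

# A recruiter sees a score and its drivers, then makes the final call.
print(score_candidate(np.array([1.2, -0.3, 0.8])))
```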
Data is never neutral
The risks of leaving AI unrestrained in the recruitment world are considerable. When AI is used to screen, analyse and select job candidates, the danger of establishing or perpetuating prejudice on the basis of race, ethnicity, gender or disability is very real. Trying to acquire fair data throughout the recruiting process is akin to navigating a minefield.
As businesses increasingly employ artificial intelligence, particularly in people management and recruitment, there has been more discussion among executives about how to ensure that AI is used fairly, and the skill requirements involved keep expanding. The global AI market was worth nearly $59.67 billion in 2021 and is projected to grow at a CAGR of 39.4% to reach $422.37 billion by 2028.
Experts believe that leaders who don't currently have a framework in place for frequently reviewing their organisation’s usage of AI should develop one. A growing body of regulations and restrictions may be the only thing that forces AI developers and users to create responsible-use standards.