Diversity in recruitment: why AI might not be the answer

For a few years now, AI has been proffered as the future of cost-effective and efficient recruitment, allowing users to screen millions of CVs in a matter of seconds. More interestingly, there are claims that it allows users to pinpoint the biases that exist in their overarching hiring process, or even within the job listing itself. Studies of recruitment diversity have shown that masculine-coded words can dissuade female candidates from applying, and it is true that AI could detect this language and replace it with something more gender-neutral. So could AI be a silver bullet for killing off hiring prejudices?
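To make the job-listing point concrete, the simplest version of such a tool is little more than a lookup of coded words and neutral substitutes. The word list below is purely illustrative, not the vocabulary of any real product, and real tools would be far more sophisticated about context:

```python
# Illustrative sketch only: flag masculine-coded words in a job advert
# and suggest gender-neutral alternatives. The word list is a made-up
# example, not any actual tool's vocabulary.
MASCULINE_CODED = {
    "ninja": "expert",
    "dominant": "leading",
    "competitive": "motivated",
}

def neutralise(advert: str) -> str:
    """Replace masculine-coded words with neutral alternatives."""
    result = advert
    for word, neutral in MASCULINE_CODED.items():
        result = result.replace(word, neutral)
    return result

print(neutralise("We want a competitive ninja to join our dominant team."))
# -> We want a motivated expert to join our leading team.
```

Even this toy version shows where the human judgement hides: someone still has to decide which words count as "masculine-coded" and what the neutral alternative is.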

Garbage in, garbage out.

Recent attention has centred on the potential for unconscious algorithmic bias. That is, if you have biased data – no matter how much of it – the output is going to be biased. For example, a reliance on postcodes and schools will inevitably intersect with race and class. And if machine learning draws inferences from an already homogenous group of people, it simply won't know how to diversify. This can be mitigated by having a supervised model in which companies audit data and correct implicit algorithmic biases. In that case, however, the decision rests on the agenda of the programmer or user, who is likely laden with both implicit and explicit biases of their own.

Pymetrics has its programmers audit its algorithms. Stella.ai has an algorithm which assesses candidates against skills alone, while Entelo's Unbiased Sourcing Mode goes further, anonymising material from name and school right through to markers of age and gender. These approaches could well work in conjunction with each other, but none offers a standalone remedy. So AI won't eliminate hiring biases, because in its current form it will produce its own; catching bias within machine learning requires, to some extent, that practitioners be conscious of their own and others' unconscious bias in the first place.
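The anonymisation idea is easy to picture as code: strip out the fields that can proxy for gender, age, race or class before a profile reaches a reviewer. The field names below are assumptions for illustration and bear no relation to Entelo's actual schema:

```python
# Illustrative sketch of anonymised sourcing: remove fields that can act
# as proxies for gender, age, race or class before review. Field names
# are hypothetical, not any real product's data model.
SENSITIVE_FIELDS = {
    "name", "school", "graduation_year",
    "date_of_birth", "photo_url", "postcode",
}

def anonymise(profile: dict) -> dict:
    """Return a copy of the profile with sensitive fields removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "A. Smith",
    "school": "Example University",
    "postcode": "SW1A 1AA",
    "skills": ["data analysis", "recruitment"],
}
print(anonymise(candidate))  # only the skills field survives
```

Note that even here a human chose what counts as "sensitive" – which is precisely why anonymisation alone is not a standalone remedy.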

It is likely, then, that AI will continue to be what it is best suited for in most industries: an aid to best practice rather than a replacement for existing practices. Overseen by experienced users from a diverse team with sensitivity to the nuances of background and experience, it may be tremendously effective. Our consultants, for example, have trained on the CYLiX course, which challenges unconscious bias. But if human beings weren't generally so devastatingly rubbish at decision-making, there would presumably be nothing for AI to rectify in the first place.

Given that it is 2018, and only three black CEOs and 24 women lead Fortune 500 companies, having an algorithm to nudge us in the right direction might not be the worst thing imaginable.

rhea@trippassociates.co.uk

Martin Tripp Associates is a London-based executive search consultancy. While we are best-known for our work in the TMT (technology, media, and telecoms) space, we have also worked with some of the world’s biggest brands on challenging senior positions. Feel free to contact us to discuss any of the issues raised in this blog.