Government issues guidance on using AI in recruitment



This guidance identifies potential ethical risks of using AI in recruitment and hiring processes.

Examples are given for sourcing, screening and interviewing. On CV matching, for instance, the guidance notes that a screening tool which extracts key data from CVs to find semantic similarities between applicants and pre-determined "ideal" CVs risks perpetuating existing biases: it can reinforce patterns of employment where certain groups – ethnic minorities, women, disabled people – are under-represented, particularly in less diverse sectors such as engineering, policing and construction.
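To make the mechanism concrete, the sketch below shows, purely for illustration and not taken from the guidance, how such a screening tool might rank applicants by textual similarity to a pre-determined "ideal" CV; the ideal-CV text, candidate CVs and shortlisting threshold are hypothetical. Because the ideal profile is typically assembled from past successful hires, any demographic skew in that history flows straight into the ranking.

```python
# Illustrative sketch only: ranking CVs by semantic similarity to an "ideal" CV.
# The ideal-CV text, candidate CVs and cut-off below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# "Ideal" CV derived from past successful hires -- this is where historical bias
# enters: it encodes the vocabulary and career paths of an unrepresentative group.
ideal_cv = "BEng mechanical engineering, 5 years site experience, continuous employment"

candidate_cvs = {
    "applicant_a": "BEng mechanical engineering, 4 years site experience",
    "applicant_b": "HND engineering via apprenticeship, career break for caring duties",
    "applicant_c": "MSc civil engineering, 6 years project delivery experience",
}

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform([ideal_cv] + list(candidate_cvs.values()))

# Cosine similarity between the ideal CV (row 0) and each candidate CV.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

SHORTLIST_THRESHOLD = 0.30  # arbitrary cut-off for the example
for name, score in zip(candidate_cvs, scores):
    decision = "shortlist" if score >= SHORTLIST_THRESHOLD else "reject"
    print(f"{name}: similarity={score:.2f} -> {decision}")
```

In this toy example, candidates whose CVs follow a non-traditional route score lower simply because their wording differs from the historical "ideal", which is the pattern-reinforcement risk the guidance describes.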

AI is also often used for targeted advertising based on candidate profiling: job adverts are shown only to profiles judged likely to be a good match for the position. The guidance warns that such profiling can replicate historical biases, for example by showing administrative roles to women and senior roles to men.
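A minimal sketch of how that replication can happen follows, again as an illustration rather than anything described in the guidance: a model trained on historical ad placements simply learns the past skew and reproduces it when selecting who sees a senior-role advert. The feature, data and model choice are hypothetical; real systems usually rely on proxy features, which can encode the same skew indirectly.

```python
# Illustrative sketch only: an ad-targeting model trained on historical placements.
from sklearn.linear_model import LogisticRegression

# Historical placement data: feature = [is_woman], label = 1 if that person was
# shown a senior-role advert. The skew (senior adverts mostly shown to men) is
# the historical bias the guidance warns about.
X_hist = [[0], [0], [0], [0], [1], [1], [1], [1]]
y_hist = [1, 1, 1, 0, 0, 0, 0, 1]

model = LogisticRegression().fit(X_hist, y_hist)

# The trained model reproduces the pattern: a man is scored as a far likelier
# "good match" for the senior-role advert than a woman with the same profile.
print(model.predict_proba([[0]])[0][1])  # probability for a man
print(model.predict_proba([[1]])[0][1])  # probability for a woman
```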

The guidance also gives examples of other use cases, such as facial recognition in video interviewing and psychometric tests.

See: GOV.UK - Responsible AI in Recruitment guide

Valuable Data, Priceless Privacy, PL&B's 37th International Conference, 1-3 July 2024 in Cambridge, UK, includes several sessions on AI and on vulnerable groups, diversity, women's issues and facial recognition.