HM Revenue and Customs hit the headlines recently for using artificial intelligence (AI) to recruit staff. After sending a CV and a 1,000-word statement, applicants received a link to a pre-recorded video in which they were asked six questions. Even accepting the role was a digital experience, done by clicking a button. Many criticised HMRC’s process for failing to include any human interaction, especially for a customer service role where people skills are central. 
But AI has become widely used across all sectors, including HR. 70% of HR leaders currently use AI, or plan to use it in some capacity in the next year, and HR professionals are comfortable using AI for writing emails, editing content and automating manual tasks. HR leaders believe AI can increase turnaround speed (according to 50%) and productivity (53%), and reduce performance-specific bias such as recency bias (where an act done recently is valued more than the same act done months ago) and contrast bias (where a manager rates performance in comparison to peers, rather than objectively). 
 
But the same regulations apply to recruitment and management whether carried out by a human or an algorithm, so it is important that employers understand the risks as well as the benefits. 
 
 
How is AI used in employment? 
AI covers a wide range of technology with many potential uses throughout employment. Some common uses include: 
 
CV scanning – AI systems sift through high volumes of CVs, scanning for specific keywords to match candidates’ experience to the job description and rank the applicants (a simplified sketch of this approach appears after this list). 27% of HR leaders surveyed use AI to screen CVs, although only 50% of candidates are in favour of AI reviewing job applications. It has been reported that these systems reject up to 75% of CVs. 
 
Automated video interviews – Candidates are given a set time to record and upload answers to a set of pre-recorded questions. The answers may be reviewed by a person or analysed by an algorithm, which can assess voice, word choice and sometimes even facial expressions. 
 
Psychometric tests – These can be a series of multiple-choice questions or ‘situational judgement tests’ which measure how a candidate would respond in a real-life work scenario. 
 
Performance evaluation – AI algorithms can be used to measure productivity and performance, which could affect decisions about promotion, redeployment and dismissal. This allows real-time evaluation without the delay of annual appraisals, while potentially mitigating the human biases sometimes displayed by managers. 
 
Monitoring and surveillance – This could include technology that captures employees' unsent emails, webcam footage, microphone input and keystrokes. It can also be used to improve safety or performance: for example, monitoring systems for delivery drivers can track speed, seatbelt usage and the driver's physical state, and use this information to alert drivers to safety concerns. 
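
To make the CV scanning item above concrete, here is a minimal, purely hypothetical sketch of keyword-based screening. The keywords, CVs, scoring and cutoff are all invented for illustration; real products are more sophisticated, but the same basic limitation applies.

```python
# Hypothetical keyword-based CV screener - an illustrative toy, not any real product.

JOB_KEYWORDS = {"customer service", "complaints", "crm"}  # invented example keywords

def score_cv(cv_text: str) -> int:
    """Count how many of the job description's keywords appear in the CV."""
    text = cv_text.lower()
    return sum(1 for keyword in JOB_KEYWORDS if keyword in text)

def shortlist(cvs: dict[str, str], cutoff: int = 2) -> list[str]:
    """Rank candidates by keyword score and reject anyone below the cutoff."""
    ranked = sorted(cvs.items(), key=lambda item: score_cv(item[1]), reverse=True)
    return [name for name, text in ranked if score_cv(text) >= cutoff]

cvs = {
    "Candidate A": "Five years in customer service, resolving complaints via CRM software.",
    "Candidate B": "Led a support team handling client issues daily.",  # relevant, but no keyword match
}
print(shortlist(cvs))  # ['Candidate A'] - Candidate B is rejected despite relevant experience
```

Candidate B has directly relevant experience but is screened out for describing it in different words, which is one way such systems can end up rejecting a large share of viable CVs.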
 
 
What are the risks? 
 
Lack of human oversight 
Almost 75% of HR leaders say they trust the candidate recommendations made by AI, and 39% trust them very much or completely. But others have raised concerns that AI might fail to spot “human potential” that is not evident in the data. In fact, 75% of candidates were opposed to AI making any final hiring decision, and only 49% of candidates say they would apply for a job with an employer who uses AI to help make final hiring decisions, with the top reason being that AI would miss the “human factor” that hiring needs. 
 
Once employed, under data protection regulations employees have the right not to be subject to decisions based solely on automated processing where those decisions significantly affect them. The case of Keen v Commerzbank AG [2006] established that employers must be able to explain important decisions that affect their employees, but relying heavily on AI might make this difficult. Estée Lauder had to pay compensation after selecting people for redundancy based on automated computer judgments with no human involvement, and Uber recently had to reinstate drivers and pay fines after its app effectively dismissed drivers without any human oversight. When asked, Uber were unable to explain the decisions. 
 
The House of Commons research briefing on AI in employment law emphasises that the impact of this regulation may depend on how strictly tribunals interpret terms such as “made solely by automation.” For example, a tribunal might decide that human input could be as simple as a manager deciding to hire the candidate suggested by AI. Until such disputes reach the tribunal, it is impossible to say. 
 
 
Damage to trust and confidence 
On a similar note, all employment contracts include an implied obligation of trust and confidence between the employer and employee. A serious breakdown of trust and confidence can breach this term and end the contract. This is the basis of many constructive dismissal claims. 
 
As well as making it difficult for employers to show that their decisions are made in good faith, overuse of AI may damage trust if employees feel they are under surveillance. 
 
Employees surveyed felt particularly uncomfortable about electronic tracking (71%), automated hiring and promotion (62%) and keystroke monitoring (59%). Employees were less concerned about camera monitoring, with only 14% feeling uncomfortable. This could be because people are used to CCTV and camera monitoring, but it may also feel less personally invasive to monitor the general environment with employees in it than the individual work of each employee. 
 
 
Discrimination and inequality 
One attraction of AI is to avoid the unconscious bias that can affect human decisions. 49% of job seekers think AI could help address the issue of bias and unfair treatment in hiring, and 46% believe AI is better than humans at treating all job applicants the same way. 
 
However, AI systems can pick up biases from the real-world data they learn from. As BBC Bitesize puts it, “a computer programme can only be as objective as the person who programmed it.” For example, Amazon created a recruitment tool that relied on data from successful CVs over the past 10 years. But because most of the information came from men, the system ‘learnt’ that male candidates were preferable and graded women poorly. 
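
As a toy illustration of how that happens (the data and scoring below are invented and deliberately simplistic; this is not how Amazon’s tool actually worked), a model trained on skewed historical outcomes can attach negative weight to words associated with the under-represented group:

```python
from collections import Counter

# Invented historical data: CV phrases and whether the candidate was hired.
# Past hires skew male, so female-associated words co-occur with rejection.
history = [
    ("captain of men's chess club", True),
    ("men's rugby society treasurer", True),
    ("software projects and men's athletics", True),
    ("women's chess club captain", False),
    ("women's coding society founder", False),
]

def learn_word_scores(data: list[tuple[str, bool]]) -> Counter:
    """Naive 'training': score a word up when seen in a hire, down in a rejection."""
    scores: Counter = Counter()
    for text, hired in data:
        for word in text.split():
            scores[word] += 1 if hired else -1
    return scores

scores = learn_word_scores(history)
print(scores["men's"], scores["women's"])  # 3 -2: "women's" now counts against a CV
```

The word “women’s” acquires a negative weight for reasons that have nothing to do with ability, which is exactly the pattern reported in the Amazon case.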
 
Similar concerns have been raised about AI penalising neurodiverse candidates whose facial expressions, eye contact, speech patterns or tone may not fit the profile programmed into the system. Recruitment company HireVue stopped using facial analysis in its video interviewing system after a complaint to the US Federal Trade Commission argued that the technology disadvantaged neurodivergent people. Other complaints came from candidates with facial palsy who were penalised for limited facial movements. 
 
Research has also shown that facial recognition software is less effective for people with darker skin, and so could potentially penalise these employees. Uber paid an undisclosed amount to settle race discrimination claims after suspending a Black driver because AI failed to recognise his photos. 
 
Highly digital processes further risk excluding those who are not tech-savvy or don’t have access to technology, whether due to age, disability, or financial circumstances. As well as risking age or disability discrimination, this may also undermine any attempts to build a diverse workforce. 
 
Any disproportionate effect on those with a certain protected characteristic could lead to claims for indirect discrimination. The House of Commons research paper suggests employers may be able to defend a claim by showing that AI is a proportionate means of achieving a legitimate aim, or that no less biased alternative was available – essentially, accepting that AI may be biased, but less so than humans. 
 
Employers also have a duty to make reasonable adjustments for people with disabilities. This could include adjusting the recruitment process to account for the difficulties AI may create. For example, you might overrule the AI ranking for an autistic candidate who scored badly for lack of eye contact. However, one autistic job seeker points out that this still places the burden on the candidate to make contact and disclose their condition. 
 
 
Legal regulation 
The previous UK government had outlined plans for regulating AI, with its main principles including safety, security, transparency, fairness, accountability, and contestability. It intended to legislate based on context and outcomes – for example, an AI customer service chatbot would be regulated differently depending on whether it was used by a fashion retailer or for medical diagnostics. It is uncertain whether the new government will continue with any of these plans or opt for a complete overhaul. 
 
The Trades Union Congress (TUC) has also set out the changes it would like to see in a draft AI bill. The bill would ban the use of emotion recognition technology in the workplace, make any decision to dismiss an employee automatically unfair if based on “unfair reliance on high-risk decision making” using AI, and give employees and jobseekers the right to a human review of decisions made using AI. 
 
A survey by HireVue found that 42% of HR leaders are worried about AI complying with the law. 40% have set up an internal team to assess the compliance of current products, and 16% have hired external resources to do so. Those who aren’t worried about compliance may be confident that they have covered all legal bases, or they may be underestimating the problem. But another 44% have no internal or external resources dedicated to legal compliance, and may be leaving themselves open to claims. 
 
 
What should employers do? 
Employers are likely to see the best results from combining human skill and AI power, using technology as a supportive tool. To do this, you should: 
 
Make sure managers understand how the systems work and can explain any decisions. It will also help maintain trust and confidence with your staff if you can explain how programmes benefit them, not just the business. 
 
Be alert to the impact on different groups. Provide staff with general training on discrimination to help them spot, and reduce, the potential for indirect discrimination. 
 
Be clear about who is responsible for ensuring AI tools are legally compliant, and which tools are approved for use. If individual employees themselves are responsible, provide training about when to use AI tools and what to consider when choosing them. 
 
You could consider an AI policy that sets out the process for approving and using AI tools and how to respond if there is a problem. 
 
 
 