The widespread adoption of AI in HR and recruitment has brought its operational, ethical and legal risks into sharper focus. AI’s capacity to improve efficiency and productivity on admin-heavy tasks is obvious, and ACAS-commissioned research has shown that these tools are increasingly being accepted and used by UK employers.
However, it has the potential to become a minefield for those tempted to over-rely on it: overuse comes at the expense of robust management processes and human oversight.
Steph Marsh, Head of Employment at Coodes, explores the case for AI in recruitment and HR.
The business case for using AI is compelling. The ACAS survey found that a third of the 1,000 companies polled thought AI would lead to increased productivity.
When it comes to recruitment, many organisations are drawn to its ability to streamline candidate selection, automate repetitive tasks and extract insights from large volumes of applications and associated data. In some respects, it is nothing new. Many companies have been utilising the benefits of AI, or AI-like algorithms, for years without even realising it.
However, the scale and scope of the technology have taken a significant leap more recently. Alongside this comes scrutiny of the control, applications and ethics of AI.
We now see AI-powered platforms frequently used to screen CVs, match suitable applicants against (AI-created) job descriptions, schedule interviews and generate candidate reports at a fraction of the time and cost associated with traditional methods.
AI screening tools are also being used in online interviews, allowing preliminary interviews to be conducted almost entirely by bots.
Once in the workplace, employee activity can be routinely tracked and evaluated using AI.
In theory, this frees HR professionals from administrative burdens, making them more efficient and creating time for complex tasks. The reality is that, if poorly managed, it could also be creating problems further along the recruitment and employment process.
Despite the apparent benefits, AI’s deployment in recruitment is not without considerable risk. Alongside efficiency gains come questions around fairness, data protection and transparency.
The most significant risk is that of discriminatory outcomes. Algorithms trained on biased data sets, often reflecting unrepresentative cohorts or historical inequalities, may perpetuate or even exacerbate those patterns. The often-quoted example is Amazon’s discontinued AI recruitment tool which, having been trained primarily on male CVs, went on to discriminate against female applicants.
Other examples include Samsung banning staff from using generative AI platforms after sensitive proprietary code was unintentionally uploaded into ChatGPT.
In October 2024, a multinational company fell victim to ransom demands from a North Korean hacker. Hired remotely after using AI-generated credentials to secure employment, he gained access to highly confidential company information. After four months he was dismissed for poor performance, shortly after which the company received anonymous emails requesting a six-figure sum for the return of said information.
Under the Equality Act 2010, employers must ensure recruitment practices do not produce discriminatory outcomes, either directly or indirectly. Therefore, using AI in recruitment and HR may unintentionally or otherwise disadvantage protected groups and may give rise to claims under ss.13 or 19 of the Act.
GDPR and the Data Protection Act 2018 add further obligations, with Article 22 of the GDPR specifically protecting data subjects, which would include employees, from significant decisions based solely on automated processing, and this principle is echoed in s.14 of the Data Protection Act.
The Employment Rights Act 1996 is also relevant. For example, any dismissal as a result of a flawed or unreviewed decision, including one that may have been influenced by AI, puts the employer in the potential position of being unable to demonstrate that the decision was reasonable, procedurally fair or adequately investigated – with the ensuing risk of unfair dismissal claims.
We are all – companies, HR departments and lawyers alike – responsible for the material we produce and the decisions we make. That is regardless of (or perhaps more so) where AI tools were involved in that process.
Courts take a very dim view of inaccurate and, in some cases, entirely fictitious evidence presented as fact. This was illustrated by the recent High Court judgment in the combined cases of Ayinde and Al-Haroun. It brought together two entirely separate instances where lawyers were suspected of using AI during proceedings, citing fictitious and fake case law.
The referrals arose out of, ‘the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court’.
In one of these it was disputed whether AI was the source of ‘fake’ citations presented in a case. In the second, the solicitor admitted AI had been among the research tools which had been used. The result was that the lawyers involved were referred (or self-referred) to the Bar Standards Board and Solicitors Regulation Authority and were thoroughly admonished but avoided contempt proceedings.
In a similar well-publicised case in the First-tier Tribunal (Tax) (FtT), non-existent case law was cited (Harber). During a hearing, HMRC argued that the other side had produced a response document which included cases not identifiable as genuine. The FtT concluded that the cases cited had been generated by AI and disregarded them.
Mata, in the US, achieved global notoriety for the unwitting attempt by Mata’s lawyers to rely on more than half a dozen examples of case law which, on examination, were found to have been entirely made up by ChatGPT.
To manage legal and reputational risk, employers using AI in recruitment or HR need a clear AI use policy. This should govern both organisational responsibilities and employee conduct.
Policies should set out approved tools and the tasks for which they can be used. They should also set out prohibited uses (such as uploading confidential data to public platforms) and governance structures for oversight. Doing so will not eliminate risk, but it will significantly reduce exposure to discrimination and data breach claims.
Systems should be audited for hidden bias and to ensure their security, robustness and compliance with equality, data protection and employment laws. In particular, the training data used by AI systems should be scrutinised, ensuring that it reflects a diverse and representative pool of candidates, inclusive of races, ethnicities, genders and educational backgrounds. This should be further monitored against outputs to ensure results are in line with intended policy.
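One way to monitor outputs against intended policy, as described above, is the “four-fifths rule” commonly used in adverse-impact analysis: compare selection rates between groups and flag any group whose rate falls below 80% of the highest group’s. The sketch below is a minimal illustration only; the group labels and counts are hypothetical, and a real audit would cover all relevant protected characteristics and use appropriate statistical testing.

```python
# Minimal adverse-impact check on screening outcomes (hypothetical data).
# Four-fifths rule: flag any group whose selection rate is below 80% of
# the highest group's selection rate.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return {group: True} where the group's rate falls below the threshold
    ratio of the best-performing group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical screening results by gender
outcomes = {"female": (30, 200), "male": (60, 200)}
print(adverse_impact_flags(outcomes))
# The female selection rate (15%) is half the male rate (30%), so it is flagged.
```

A check like this is only a screening signal, not a legal conclusion, but running it regularly against tool outputs gives the audit trail and early warning that the monitoring described above requires.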
Critically, human oversight needs to be built into any process. This includes regular reviews, clear accountability for automated systems and training for the HR and line managers using them.
The law is clear on the rights of individuals where significant decisions affecting them are taken, and on the role AI may play in those decisions. Teams therefore need a clear understanding, and a clear process, ensuring that decision-making remains firmly in the hands of a human being.
If you’re an employer looking for the latest legalities and policy surrounding AI, Coodes’ Employment team can help. Get in touch with Steph Marsh by calling 01579 324 017 or emailing steph.marsh@coodes.co.uk