If your organisation’s hiring practices don’t currently involve some form of automation or artificial intelligence, the chances are that this will change in the coming 12 months.
From initial screening and candidate assessment, to interview assistance and employee onboarding, recruiters and hiring managers at companies of all sizes are increasingly leveraging artificial intelligence tools to help them do their jobs better and more efficiently.
Perhaps unsurprisingly, a growing reliance on AI by HR teams is being matched by a rise in candidates’ use of the same technology: for drafting applications and assisting with psychometric assessments and other screening exercises, among other things. Although organisations had been using forms of automated technology in recruitment for several years before the release of ChatGPT in November 2022 brought AI to wider public attention, the global job market is significantly more technology-driven than it was even 18 months ago.
The use of AI-enabled tools to assist with the recruitment process brings with it a host of legal challenges, including compliance with current employment, equality and anti-discrimination, and data protection laws in Europe and the United States. The European Union’s AI Act, whose provisions on artificial intelligence systems that are used for recruitment or selection apply from 2 August 2026, will only increase the complexity of those challenges.
For the purposes of this article, however, we focus on the here and now – namely, a review of recent data protection-related developments in the UK and U.S. as they apply to the use of AI and automated decision-making in the recruitment context.
First, to the UK
Last month, the UK Information Commissioner’s Office issued a detailed report on the use of AI in recruiting, which followed a series of consensual audits that the regulator conducted between August 2023 and May 2024 with developers and providers of AI-powered sourcing, screening and selection tools. The report comes on the back of several similar initiatives, including the previous UK government’s Responsible AI in Recruitment guidance (issued in March 2024) and the ICO’s own guidance on AI and Data Protection (last updated in March 2023).
Although the ICO’s recruitment audits focused on the providers of AI products and services, its November report contains a number of important takeaways for the users of these offerings (“recruiters” in the ICO’s terminology).
Some of the requirements set out below will be familiar to organisations from their wider UK GDPR compliance efforts. However, the nature of AI-enabled processing, and its attendant risks, means that employers should pay particular attention to how data protection law applies to new, and in some cases untested, technologies that make potentially significant decisions about job applicants.
With that in mind, the ICO recommends that employers using AI for hiring purposes:
- Ensure that personal data are processed fairly and that the AI and its outputs are accurate and free from bias. This can be done, in part, by monitoring the AI and its outputs throughout the tool’s lifecycle and taking swift action to remedy any issues that arise.
- Ensure that candidates are made aware that their personal data will be processed by AI or similar automated technology. Employers can do this by providing detailed information to candidates through a privacy notice, consent form or other transparency information. If the responsibility for doing so — whether legally or contractually — lies with the provider of the AI system, employers should ensure that the parties’ contract makes this clear.
- Ensure that the personal data they process are limited to what is necessary for their purposes. Before any tool goes live, organisations should confirm that it does not collect excessive or unnecessary personal data (particularly sensitive data). They should then periodically review the tool’s inputs and outputs, along with the applicable retention periods, to ensure that personal data are not retained for longer than necessary (whether by the company or the provider); a simple illustrative check appears after this list.
- Ensure that they have a lawful basis, or bases, under Arts. 6 and 9 of the UK GDPR for each personal data processing activity. Employers should identify and document the lawful basis (or bases) in their records of processing and privacy notice(s). Where they rely on legitimate interests, they should complete a legitimate interests assessment, and where they rely on consent, they must ensure that it meets UK GDPR requirements (i.e., specific, granular, freely given, as easy to withdraw as it was to give, and documented).
- Ensure that their contracts with providers define the roles of the parties for the purposes of the UK GDPR (i.e., controller, joint controller or processor). The data processing terms should meet the applicable requirements of the UK GDPR (Arts. 26 and/or 28), and employers should periodically check that the provider is complying with its obligations under the contract.
This list is not exhaustive. Before any go-live date, organisations should also conduct a robust data protection impact assessment that documents the risks of the processing and how those risks will be minimised, and should train relevant staff.
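By way of illustration of the data minimisation point above, the short sketch below shows one way an employer could check, before go-live, that the fields a recruitment tool collects stay within an agreed, documented list. The field names and the allowlist are hypothetical and are not drawn from the ICO report; treat this as a sketch of the idea rather than a compliance tool.

```python
# Illustrative pre-go-live check (hypothetical field names): compare the fields a
# recruitment tool actually collects against the fields the employer has documented
# as necessary for the hiring exercise.

AGREED_FIELDS = {"name", "email", "cv_text", "work_history", "right_to_work_status"}

def excessive_fields(collected_fields: set[str]) -> set[str]:
    """Return any collected fields that fall outside the agreed allowlist."""
    return collected_fields - AGREED_FIELDS

# Example payload captured from a test run of the tool.
collected = {"name", "email", "cv_text", "date_of_birth", "home_address"}

unexpected = excessive_fields(collected)
if unexpected:
    print(f"Review before go-live: unexpected fields collected: {sorted(unexpected)}")
else:
    print("No unexpected fields collected.")
```

The same kind of check can be repeated periodically after go-live, alongside a review of how long each field is retained by the company and the provider.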
Then, to the U.S.
In the U.S., the Equal Employment Opportunity Commission has reported that upwards of 80% of American employers use automated tools, including various forms of AI, at some stage of the hiring process, and concerns regarding bias, inaccuracies and the absence of a human touch have spurred sporadic — primarily state-level — regulation.
In keeping with the notice-and-consent model, many American states provide “consumers” with a right to opt out of processing related to automated decisions with significant effects — a category which likely includes certain cases of AI-augmented recruiting and hiring. For example, Montana, Texas and Connecticut all provide consumers with a right to opt out of processing in furtherance of “solely automated decisions” that produce “legal or similarly significant” effects.
Other states, such as Colorado, have similarly worded provisions providing opt-out rights related to automated processing that evaluates, analyses or predicts an “individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” In many cases, predicting and evaluating such characteristics is precisely the goal of employing AI in hiring processes, thus raising the compliance issues described elsewhere in this article.
Several comprehensive state-level laws go further. Minnesota provides consumers subject to automated decisions with a right to “question the result” of such decisions and to be informed of the reasoning behind them, as well as the right to review and correct the data used. Similarly, the comprehensive privacy rules under consideration in Massachusetts would mandate that, any time a hiring decision is made using electronically collected information, the job candidate must receive the data used, an explanation of the decision, and an opportunity to correct or corroborate that data. If the Massachusetts rules pass in their current form, no hiring decision may be “based solely on such information”.
There are other state-level rules that touch on the use of AI in hiring and recruiting. Maryland prohibits the use of facial recognition software in interviews absent the interviewee’s written consent. In part, these rules reflect widespread concern that AI image recognition software reinforces existing biases around tone, mannerisms and language that are unrelated to a job applicant’s suitability.
Colorado tackles those issues by defining a category of “high-risk” AI and assigning duties to reasonably ensure that such AI systems do not perpetuate bias. Since AI systems in the hiring realm are often “a substantial factor in making” “a consequential decision” related to “an employment opportunity” and do not “perform a narrow procedural task” or merely “detect” patterns from “prior decision-making,” many would likely qualify as “high-risk” under the Colorado rules. Use of such systems triggers several duties that may fall on employers, including a duty to undertake annual impact assessments similar to those New York City currently requires for “automated employment decision tools.” Comparable rules are under consideration in Virginia and Texas.
The existing body of federal anti-discrimination law sits on top of these AI-specific rules. It is illegal for employers over a certain size to base hiring decisions on stereotypes or assumptions about an applicant’s race, color, religion, sex, national origin, age, disability or genetic information.
That general prohibition applies to decisions made with AI tools, as recent Equal Employment Opportunity Commission guidance makes clear. The guidance also indicates that the use of an AI tool that results in a selection rate for a protected group of applicants that is “substantially” less than the selection rate for individuals not in that group may violate anti-discrimination rules. Certain states, such as Illinois, also prohibit using AI to discriminate in recruitment or hiring and require employers to notify candidates that AI is being used.
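To make the selection-rate comparison concrete, the sketch below checks hypothetical screening outcomes against the “four-fifths” rule of thumb that the EEOC’s guidance discusses when describing substantially different selection rates. The group labels, numbers and 0.8 threshold are illustrative assumptions only, and the EEOC is clear that the rule of thumb is a starting point for inquiry rather than a definitive test of discrimination.

```python
# Illustrative only: compares selection rates across applicant groups against the
# "four-fifths" (80%) rule of thumb discussed in EEOC guidance. All data are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group advanced by the screening tool."""
    return selected / applicants if applicants else 0.0

# Hypothetical outcomes from an AI screening tool.
outcomes = {
    "group_a": {"applicants": 200, "selected": 120},  # 60% selection rate
    "group_b": {"applicants": 150, "selected": 45},   # 30% selection rate
}

rates = {group: selection_rate(v["selected"], v["applicants"]) for group, v in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate if highest_rate else 0.0
    status = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
```

In this hypothetical, group_b’s impact ratio is 0.50, well below the 0.8 rule of thumb, which would warrant further investigation and legal advice rather than an automatic conclusion that the tool is discriminatory.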
As the medley of rules evolves, several best practices for AI-augmented recruiting and hiring in the U.S. remain constant. First, a good understanding of what data a model uses and how that data is parsed is crucial for compliance with American rules. Employers without that understanding run the risk of unknowingly violating anti-discrimination rules, whether by making decisions based on protected categories or by producing selection rates that could support an inference of discrimination.
They also run the risk of being unable to explain a decision in states, such as Minnesota, that require an explanation in certain cases. Second, it remains crucial to inform job candidates when decisions regarding their candidacy may be made with the assistance of AI tools. Providing that notice is a precondition for compliance with opt-out rules in many states. Third, it is advisable to ensure that a human in the loop reviews AI-enabled decisions; doing so helps ensure that a hiring decision is not “solely” an automated one.
2025 and Beyond
As recruiting cycles tend to slow towards the end-of-year holiday period, and budgets for next year’s headcount and technology spend are being set, now is a good time to consider whether your organisation will incorporate AI into its hiring practices in 2025.
Data protection by design (i.e., baking in the steps discussed in this article from the outset of a project) is a core tenet of good compliance, so it should be front of mind when (and it really is when, not if) you are asked to advise your organisation on how AI-enabled recruiting can be done in a legally compliant manner.