California and EU AI hiring and employment laws: signs of things to come?
14 November 2025
This insight is part of our Business Law newsletter | Autumn 2025 series. Explore the full series at the end of this piece.
AI is transforming recruitment and employment practices globally, but new laws are reshaping what’s possible – and what’s legal.
This autumn, California’s landmark AI employment regulations came into force, marking a significant shift in how businesses must manage algorithmic decision-making in recruitment and workplace management. For UK employers, these developments could be more than a transatlantic curiosity. Alongside other US laws and the EU’s AI-specific regulations, they signal the emergence of new standards concerning bias, transparency and human accountability in AI-driven employment environments.
AI is now being used to perform such tasks as screening CVs, analysing video interviews, monitoring productivity, predicting promotions, assessing employee wellbeing, analysing and drafting documents, and processing large volumes of information and correspondence quickly. The rapid expansion of laws regulating its use means that employers need to understand the legal developments and protect themselves, their staff and clients from legal and reputational risks.
These risks apply to all businesses that use AI but are likely to be particularly acute for organisations that use AI-driven recruitment platforms, automated HR systems or client-facing document automation tools, including professional services firms, tech companies and large-scale recruiters.
California’s expanding AI employment framework vs the tech prosperity deal?
Since 1 October 2025, California employers using AI in hiring have been required to retain applicant data for four years, ensure human oversight of automated decisions and conduct bias testing to demonstrate compliance. These rules, set out under the California Fair Employment and Housing Act, apply to technologies that analyse CVs, rank candidates or assess video interviews. UK firms are increasingly using similar tools to improve recruitment efficiency.
However, California’s ‘No Robo Bosses Act’, which was widely expected to take effect on 1 January 2026, was vetoed by Governor Gavin Newsom in October. The bill was intended to govern the use of AI in disciplinary actions, dismissals and other employment decisions, prohibit the sole use of automated systems to demote or dismiss workers, and mandate human review and plain-language notices to affected employees. There was also an explicit ban on profiling based on protected characteristics, with civil penalties for non-compliance and enforcement to be led by public prosecutors. Governor Newsom considered the legislation too broad, unfocused and duplicative of some existing regulations – a decision that pleased tech companies but disappointed unions and worker advocacy groups.
While California is tightening some of its rules and putting others on hold, New York City has already implemented bias audit requirements. In contrast, some US states are moving in the opposite direction.
Montana and Idaho are pursuing legislation that limits government oversight of AI. ‘Right to Compute’ legislation aims to protect individuals’ access to computational tools, including AI, by restricting state interference. However, unless future US federal law takes a similar path, these states seem likely to remain the exception.
What remains unclear is whether the ‘tech prosperity deal’ – the landmark UK-US agreement signed in September 2025 – will result in joint regulatory frameworks or mutual recognition of AI standards. This deal may encourage more flexible, ‘innovation-friendly’ AI rules in the UK, potentially delaying or moderating the development of stricter employment-related AI regulations similar to those in California, New York City and the EU.
The EU AI Act: a regulatory benchmark?
The EU AI Act, now in force in the 27 member states, classifies most recruitment and employment-related AI systems as high risk. However, the obligations for employers and technology providers are being phased in.
Since February 2025, some AI uses in employment, such as emotion recognition and other ‘unacceptable risk’ practices, have been prohibited. Employers must also promote AI literacy among staff.
In August 2026, the main obligations for high-risk AI systems in employment contexts (including recruitment, promotion and worker monitoring) take effect. These require employers and technology providers to implement bias-mitigation strategies, ensure transparency for candidates, maintain human oversight and complete technical documentation and risk assessments.
Although the UK is no longer bound by new EU law, the act will affect UK-based firms operating in the EU or using EU-developed AI tools. It also sets a benchmark for responsible AI governance that UK regulators seem likely to consider. The UK is also a signatory to the Council of Europe’s binding Framework Convention on Artificial Intelligence, which is intended to “ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation.”
UK legal framework: AI-relevant but not yet AI-specific
The UK has not yet introduced dedicated AI employment legislation. However, existing laws already impose similar obligations to those in California and the EU. The Equality Act 2010 prohibits discrimination based on protected characteristics, whether by humans or machines. As a result, AI-driven recruitment or disciplinary decisions that lead to discrimination are already unlawful.
Similarly, the UK GDPR regulates how employers handle personal data, including during recruitment. Particularly relevant to those looking to use AI in recruitment are provisions on solely automated decisions and around the fair and lawful processing of personal data.
Recent changes to UK data protection law – specifically, the Data (Use and Access) Act 2025, which came into force in June 2025 – require similar safeguards to those referenced under California law. Under the amended UK GDPR (articles 22A–22D), organisations must ensure human involvement in significant automated decisions and enable individuals to contest or seek human review of those decisions.
At the same time, the UK Information Commissioner has issued guidance stating that testing for and removing bias in AI models is key to ensuring fair processing of personal data. Other, less prescriptive provisions, such as the obligations to adopt a privacy-by-design approach and to conduct data protection impact assessments, will also be relevant.
The government’s Responsible AI in Recruitment Guidance encourages the ethical use of AI and, while not having statutory effect, it points to practices that should comply with existing UK law. Meanwhile, the TUC’s 2024 Artificial Intelligence (Regulation and Employment Rights) Bill borrowed heavily from EU AI law and included targeted measures, including mandatory risk assessments and a register of AI systems used in employment.
Though not set to reach the UK statute books, the TUC draft reflects growing pressure for reform from some quarters. Meanwhile, the Equality and Human Rights Commission-supported (but swiftly settled) 2021 Employment Tribunal case of Pa Edrissa Manjang v Uber Eats UK Ltd & Others and the ICO’s AI in Recruitment Outcomes Report (November 2024) give some indication of the growing challenges. How all this sits with the US-UK tech prosperity deal remains to be seen.
Businesses operating in the UK should closely monitor developments in UK legislation as international AI laws evolve, particularly as broader AI regulation is now likely to feature in the next session of the UK parliament following the King’s Speech, expected in May 2026.
Don’t get too emotional!
One key emerging area in AI regulation is the seemingly futuristic concept of ‘emotion recognition technology’. These systems are designed to interpret human emotional states from facial images, speech, text and other physical signals that might be captured during recruitment processes and video interviews.
Probably because they are considered unreliable and prone to bias, AI systems that infer emotions have been banned in workplaces and educational institutions in the EU since February 2025 under the EU AI Act, except for medical or safety purposes. California’s proposed Workplace Surveillance Tools Bill still seems set to follow suit in 2026.
In the UK, processing the person-identifiable biometric data involved in emotion recognition, such as physical, physiological or behavioural characteristics, remains technically permissible under the UK GDPR. However, to remain compliant, UK employers would need to identify a valid lawful basis under both article 6 and the particularly stringent article 9, which governs the processing of special category data. Even then, such use might still carry a significant risk of Equality Act bias challenges based on sex, race, disability and other protected characteristics.
Given these developments, UK employers should take practical steps to ensure compliance. Overseas employers operating in the UK need to comply with UK-specific rules and, at the same time, monitor and adapt to any international legal developments that affect the UK rules.
Practical steps for UK employers
- Conduct data protection impact assessments before AI tools are deployed in recruitment and employment-related decision-making.
- Ensure a lawful basis under UK GDPR article 6 and a valid condition under article 9 for special category data.
- Implement human oversight and other necessary safeguards for AI-driven decisions affecting employment status.
- Monitor and mitigate bias in AI systems by understanding the data sets used to train them and using fairness testing to check for biases.
- Provide clear privacy notices explaining AI use and data processing.
- Review contracts with AI vendors to define controller/processor roles and responsibilities.
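To illustrate the fairness testing mentioned above, one widely used screening check is the ‘four-fifths rule’, which compares selection rates between candidate groups and flags large disparities for further investigation. The sketch below, in Python, uses hypothetical group labels and figures; the 0.8 threshold is a common rule of thumb borrowed from US guidance, not a UK legal standard.

```python
# Minimal sketch of a four-fifths (adverse impact) ratio check on the
# outcomes of an AI screening tool. All figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Hypothetical screening outcomes from an AI CV-ranking tool
rate_group_a = selection_rate(selected=30, applicants=100)
rate_group_b = selection_rate(selected=18, applicants=100)

ratio = adverse_impact_ratio(rate_group_a, rate_group_b)
print(f"Adverse impact ratio: {ratio:.2f}")

# A ratio below 0.8 is commonly treated as a flag for potential bias
# warranting investigation - it does not, by itself, prove discrimination.
if ratio < 0.8:
    print("Selection rates differ substantially - investigate for bias")
```

A check like this is only a starting point: a low ratio prompts investigation of the training data and model behaviour, while a passing ratio does not rule out bias against smaller subgroups or intersectional categories.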
Contact Moore SGD Law if you are concerned about operating in ways that could lead to unlawful discrimination or breach UK data protection laws. Our legal team, working in conjunction with Moore Kingston Smith Data Protection Services, can help you navigate legal risks and implement responsible AI practices.
Moore SGD Law provides a full range of legal services for employers and employees, including privileged and commercially focused legal advice and guidance, conduct of legal proceedings and advising and drafting policies and procedures. We also work closely with Moore Kingston Smith HR Consultancy when clients need assistance with issues relating to HR consultancy, reward, compensation and benefits, leadership and learning and development, organisational development and strategy – and Moore Kingston Smith Data Protection, Cyber Security and IT Assurance teams for data privacy, GDPR audit and compliance, cyber and information security services, business continuity and IT assurance services.