I’ve spent the last 5+ years knee-deep in recruiting technology...testing platforms, sitting through product demos, and working with hiring teams that rely on AI-powered tools to move faster.
One thing’s clear: AI is changing how we hire. Rapidly.
Resume screening, candidate assessments, interview scheduling…AI can now handle it all. But as companies lean harder on this tech, governments are stepping in to set the rules.
Laws are cropping up across the U.S., Europe, and beyond, forcing businesses to rethink how they use AI in hiring.
If you’re using AI to hire, or thinking about it, you need to know these regulations. Non-compliance can mean legal headaches, fines of up to $1,500 per violation in the U.S., and even reputational damage.
Let’s break it all down.
Why are AI regulations necessary in the first place?

I’ve seen the good, the bad, and the downright risky when it comes to AI in recruiting. But this conversation isn’t just about hiring. It’s bigger than that.
When AI runs unchecked across any field, it can cause real harm. And fast.
When AI goes wrong: What’s at stake?
Take healthcare. Imagine an AI system misdiagnosing patients because it was trained on biased data. Lives could literally be at risk.
Or take finance: an AI deciding who gets a loan based on patterns that discriminate against certain communities. People locked out of opportunities simply because the data said so.
Self-driving cars? One wrong algorithm and lives are lost.
Without regulations, we’re trusting machines to make life-altering decisions without guardrails.
What does this mean for hiring?
Now, bring that same lens to recruiting.
AI is sourcing, scoring, and ranking candidates these days.
This sounds efficient until you realize the system might be unfairly screening out qualified candidates because they had a gap in their work history, or because of their gender or race.
I’ve worked with hiring teams that discovered after months of using AI that their system was quietly favoring men over women for marketing roles. Another team found their screening tool penalized non-native English speakers.
Not because anyone set out to discriminate. But because the data the AI was trained on reflected human biases.
Without regulations, these errors don’t just slip through. They scale.
We’re talking hundreds, maybe thousands, of candidates losing out on jobs. And companies missing out on great talent, all because no one checked the system.
Regulations are not anti-AI. They’re pro-fairness
Regulations aren’t about slowing down progress. They’re about ensuring AI works the way we want it to: efficient, yes, but also fair, transparent, and accountable.
In hiring, that means ensuring every candidate gets a fair shot regardless of their gender, race, or background.
The legal map of AI in hiring: 10+ laws you should be aware of
Here's a comprehensive overview of key AI regulations affecting hiring practices worldwide:
1. United States: State-level regulations
While the U.S. lacks a comprehensive federal AI law, several states have introduced legislation impacting AI in recruiting:
Illinois: Artificial Intelligence Video Interview Act
The Artificial Intelligence Video Interview Act requires employers to notify candidates when AI is used to evaluate video interviews, obtain their consent, and ensure the deletion of interview recordings upon request.
Requirements:
- Employers must inform applicants when AI will analyze their video interviews and obtain explicit consent.
- Applicants should be told which characteristics the AI will assess.
- Recorded interviews can only be shared with individuals directly involved in evaluating the candidate.
- Candidates can request the deletion of their interview recordings, and employers must comply within 30 days.
Non-compliance implications:
While the Act doesn't specify penalties or a private right of action, failure to adhere could lead to legal challenges or reputational harm.
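Curious what the consent and 30-day deletion requirements look like operationally? Here’s a minimal sketch of a retention job, assuming a hypothetical data model (the field and function names are mine, not the statute’s); a real implementation would purge the underlying files and every copy shared with evaluators:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record for an AI-analyzed video interview.
@dataclass
class InterviewRecording:
    candidate_id: str
    consent_given: bool                        # explicit consent captured before the interview
    deletion_requested_on: date | None = None  # set when the candidate asks for deletion
    deleted: bool = False

def process_deletion_requests(recordings: list[InterviewRecording], today: date) -> None:
    """Flag recordings for purging well before the 30-day statutory deadline."""
    for rec in recordings:
        if rec.deletion_requested_on and not rec.deleted:
            deadline = rec.deletion_requested_on + timedelta(days=30)
            if today >= deadline - timedelta(days=3):  # leave a safety buffer
                rec.deleted = True  # in production: delete the file and all copies

# Example: a request filed 28 days ago gets caught on today's run.
recs = [InterviewRecording("c-101", True, deletion_requested_on=date.today() - timedelta(days=28))]
process_deletion_requests(recs, date.today())
assert recs[0].deleted
```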
New York City: Local Law 144 on Automated Employment Decision Tools (AEDTs)
Local Law 144 mandates that employers using automated employment decision tools conduct annual bias audits. Employers must also inform candidates about the use of such tools in the hiring process.
Requirements:
- Employers must conduct yearly independent audits of AEDTs to detect and mitigate biases.
- Applicants should be informed when such tools are used in the hiring process.
- Employers are required to publicly share a summary of the most recent bias audit findings.
Non-compliance penalties:
Fines range from $500 for a first violation to $1,500 for subsequent violations. Each day of non-compliance is considered a separate offense.
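To make “bias audit” concrete: Local Law 144 audits center on impact ratios, where each category’s selection rate is divided by the selection rate of the most-selected category. Here’s a simplified sketch of that calculation (illustrative only; an actual audit must be performed by an independent auditor and cover sex, race/ethnicity, and intersectional categories):

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group impact ratios from (demographic_group, was_selected) pairs.

    Impact ratio = a group's selection rate divided by the highest
    group's selection rate, per the Local Law 144 audit methodology.
    """
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += picked  # bool counts as 0 or 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: the tool advanced 50% of group A but only 30% of group B.
data = [("A", True)] * 50 + [("A", False)] * 50 + [("B", True)] * 30 + [("B", False)] * 70
print(impact_ratios(data))  # {'A': 1.0, 'B': 0.6}
```

The law requires publishing the ratios rather than hitting a fixed threshold, but a ratio below 0.8, the EEOC’s long-standing “four-fifths” guideline, is the number most teams treat as a red flag.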
Colorado: AI Act
The Colorado AI Act is the first broad AI accountability law in the U.S., regulating high-risk AI systems, including those used in hiring. It requires companies to evaluate the risks of AI systems and protect individuals from algorithmic discrimination.
Requirements:
- Employers must conduct Data Protection Impact Assessments (DPIAs) before using high-risk AI tools in hiring.
- Job applicants must be notified when AI systems are involved in employment decisions.
- Employers must implement safeguards to prevent bias and discriminatory outcomes from AI systems.
Non-compliance penalties:
Enforced by the Colorado Attorney General, though specific fines are not yet detailed. Companies risk investigations, corrective actions, and potential legal penalties.
Maryland: Facial Recognition Law (HB 1202)
Maryland’s HB 1202 regulates the use of facial recognition technology in hiring, ensuring candidates are aware and consent before any biometric data is collected.
Requirements:
- Employers must obtain written consent from applicants before using facial recognition technology during interviews.
- Consent must be documented through a waiver that clearly outlines the use of facial recognition and is signed by the applicant.
Non-compliance implications:
The law doesn't specify penalties, but failure to secure proper consent could expose employers to legal challenges and potential liabilities.
Utah: AI Policy Act
The Utah AI Policy Act focuses on transparency in generative AI usage across various sectors, including HR. It aims to ensure individuals know when they are interacting with AI.
Requirements:
- Upon request, companies must disclose to individuals that they are interacting with AI in hiring or employment processes.
- Mandatory upfront disclosure is required when generative AI is used in regulated professions such as law, education, or construction:
  - Verbal disclosure at the start of oral interactions.
  - Written disclosure before text-based exchanges.
Non-compliance penalties:
The law does not specify fines, but businesses risk regulatory scrutiny and reputational damage for failing to disclose AI usage.
2. EU/UK regulations
European Union: AI Act
The EU AI Act is the world’s first comprehensive AI law, approved in March 2024. It classifies AI systems used in hiring as “high-risk” and mandates human oversight, transparency, and safeguards against bias in recruiting processes.
Requirements:
- AI hiring systems must involve human oversight in final decisions.
- Candidates must be informed when AI is used during hiring.
- Employers must document AI system development and testing processes.
- Risk management measures must be in place to prevent bias.
- Employers must ensure high-quality, unbiased training data for AI models.
Non-compliance penalties:
Fines can reach up to €35 million or 7% of global turnover, whichever is higher, depending on the severity of the violation.
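What does “human oversight in final decisions” look like in code? One common pattern is a routing layer: the AI only prioritizes the review queue, and a recruiter records every final call. Here’s a minimal sketch of that pattern (the score thresholds and route names are my assumptions, not something the Act prescribes):

```python
from enum import Enum

class Route(Enum):
    FAST_TRACK = "fast_track"  # strong match: prioritized for human review
    STANDARD = "standard"      # routine human review
    FLAGGED = "flagged"        # low score: still reviewed by a human, never auto-rejected

def route_candidate(ai_score: float) -> Route:
    """Route a candidate based on an AI match score in [0, 1].

    The model only orders the queue; a recruiter makes and logs the
    final decision, keeping a human in the loop for every candidate.
    """
    if ai_score >= 0.8:
        return Route.FAST_TRACK
    if ai_score >= 0.4:
        return Route.STANDARD
    return Route.FLAGGED

# Every route ends at a human decision, and logging the score alongside
# the outcome builds the documentation trail the Act also expects.
print(route_candidate(0.25))  # Route.FLAGGED -> goes to a recruiter, not the bin
```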
European Union: General Data Protection Regulation (GDPR)
The GDPR is Europe’s landmark data privacy law, effective since 2018, that regulates personal data processing, including AI-powered hiring systems. It limits fully automated hiring decisions and gives candidates the right to human review.
Requirements:
- Automated hiring decisions that significantly impact candidates require human intervention or the right to appeal.
- Employers must provide explanations on how AI decisions are made.
- Data Protection Impact Assessments (DPIAs) are required before deploying AI tools in hiring.
Non-compliance penalties:
Fines can reach up to €20 million or 4% of annual global turnover, whichever is higher.
United Kingdom: GDPR (UK Version)
Post-Brexit, the UK retained its version of GDPR, mirroring the EU’s standards for data privacy and automated decision-making in hiring.
Requirements:
- Automated hiring decisions require human oversight or the right to contest.
- Employers must explain how AI-driven hiring decisions are made.
- DPIAs are necessary before using high-risk AI systems like automated screening or AI-powered interviews.
Non-compliance penalties:
Fines can reach £17.5 million or 4% of annual global turnover, whichever is higher.
3. Canada: Artificial Intelligence and Data Act (AIDA)
The Artificial Intelligence and Data Act (AIDA) is Canada’s upcoming federal law aimed at regulating high-impact AI systems, including those used in hiring. It seeks to ensure transparency, fairness, and privacy protections when AI is involved in employment decisions.
Requirements (detailed rules are still being developed, but the law is expected to require the following):
- Employers must identify high-impact AI systems used in hiring (e.g., automated screening, candidate evaluations, biometric assessments).
- Transparency requirements will mandate disclosure of AI use to candidates.
- Companies will need to implement risk management systems to prevent algorithmic bias and privacy breaches.
Non-compliance penalties:
Fines can reach up to $10 million CAD or 3% of global revenue, whichever is higher, for serious violations.
4. India: Ministry of Electronics & Information Technology (MeitY) AI Advisory
India’s AI advisory focuses on preventing bias and discrimination in AI systems and promoting responsible AI development across all sectors, including hiring.
Requirements:
- Employers must ensure AI systems do not demonstrate bias or discriminatory behavior during hiring.
- AI solution providers should disclose limitations or unreliability if models are untested or experimental.
- Companies must implement safeguards against deepfakes and maintain human oversight in critical employment decisions.
Non-compliance penalties:
While the advisory is not legally binding, non-compliance could expose employers to legal disputes under broader anti-discrimination and data protection laws in India.
5. China: Regulations on AI in employment
China has implemented several regulations to oversee AI applications, emphasizing transparency, fairness, and accountability, particularly concerning employment-related AI systems.
Internet Information Service Algorithmic Recommendation Management Provisions
Effective since March 1, 2022, these provisions mandate that companies using algorithmic recommendation services ensure transparency and prevent discrimination.
Requirements:
- Transparency: Employers must inform users about the use of recommendation algorithms.
- Fairness: Algorithms should not discriminate based on race, ethnicity, gender, or other protected characteristics.
- Accountability: Regular audits and assessments of algorithmic systems are required to ensure compliance.
Non-compliance penalties:
Violations can lead to fines, service suspensions, or criminal investigations, depending on the severity of the breach.
Interim Measures for the Management of Generative Artificial Intelligence Services
Implemented on August 15, 2023, these measures focus on generative AI services, ensuring they align with China's core values and legal standards.
Requirements:
- Generated content must be accurate and not misleading.
- User data used in AI systems should be securely stored and processed.
- Measures must be in place to prevent discriminatory outputs from AI systems.
Non-compliance penalties:
Non-adherence can result in fines, revocation of business licenses, and other administrative actions.
6. Australia: AI ethics framework
Australia has introduced an AI Ethics Framework outlining principles to guide the development and use of AI, ensuring it is safe, reliable, and fair, particularly in employment contexts.
Requirements:
- AI systems in hiring must be designed to avoid bias and discrimination.
- Employers should clearly communicate the role of AI in recruiting processes to candidates.
- Personal data used by AI systems must be handled in compliance with privacy laws.
Non-compliance penalties:
While the framework provides guidelines, non-adherence could result in reputational damage and potential legal challenges under existing laws.
Hiring with Kula All-In-One meets all fairness benchmarks

At Kula, we’ve partnered with Warden AI, a trusted assurance leader, to ensure our platform upholds the highest standards of fairness, transparency, and compliance in hiring. This collaboration brings independent, data-backed evidence of Kula AI’s performance in delivering bias-free, responsible hiring solutions.
We’re proud to share that Kula AI is fully compliant with NYC Local Law 144 and is on track to meet the stringent requirements of the EU AI Act, effective August 2026. To promote full transparency, we’ve launched a public dashboard featuring live audit results, regularly updated to keep you informed and confident in our technology.
Stay ahead, stay compliant
Ignoring AI regulations is not an option. The risk isn’t just fines (though those can be steep); it’s harming candidates, damaging your brand, and building bias into your hiring process without even realizing it.
Laws will continue to evolve globally. What’s compliant today might not be tomorrow.
So, what can you do?
Here’s how to future-proof your AI recruiting practices:
- Audit your current tools: Are you using AI in hiring? If so, do you know how those systems work? Are there bias safeguards in place?
- Talk to your vendors: Ask for bias audit reports, transparency on algorithms, and proof they’re compliant with local laws.
- Stay informed: Regulations are shifting fast, from New York to the EU. Make monitoring legal updates part of your recruiting strategy.
- Prioritize human oversight: AI should enhance decision-making, not replace it. Human judgment is the final checkpoint for fairness.
Is your hiring process AI-compliant? Now’s the time to find out.