The State-by-State Guide to AI Hiring Regulations
AI tools are now deeply embedded in the hiring process. They appear in résumé screening, candidate sourcing, assessments, and video interviews. As more employers rely on these tools, the rules around them are getting more complex at the federal, state, and local levels.
Regulators assess how AI tools influence discrimination risk, data privacy, and transparency.
Some requirements already apply, and more will arrive over the next few years. States continue to introduce new bills, so this area changes quickly. HR teams need to stay current.
When organizations get this wrong, consequences follow fast. Regulators bring enforcement actions, impose fines, and trigger reputational fallout. Many of these requirements also align with what employers already want: hiring processes that are fair, consistent, and defensible.
What Counts as an AI Hiring Tool?
AI hiring laws target tools that affect hiring decisions, not basic HR systems. If you rely on an AI tool to screen, rank, analyze, or decide who moves forward, those laws apply. That includes résumé screeners, ranking tools, video interview platforms with analytics, online assessments, and targeted job ads.
States use different terminology. You will see phrases like “automated employment decision tools” or “automated decision systems.” The label changes, but the concept remains the same. These rules apply whether the tool comes from a vendor or is built internally by your team.
For HR teams, the first step is simple. Know what you are using. Identify any tool that relies on data and algorithms to evaluate, filter, recommend, or influence decisions about applicants or employees.
Federal Baseline Requirements for AI Hiring Tools
No single federal law addresses AI in hiring. Instead, existing laws already cover how employers use these tools.
The Equal Employment Opportunity Commission (EEOC) has been clear. If an AI tool leads to discrimination, the employer is responsible under Title VII of the Civil Rights Act, the Americans with Disabilities Act, and other federal laws. Using a vendor does not change that.
At a high level, federal law focuses on a few core issues:
- Discrimination risk. Employers remain responsible if a tool produces an adverse impact.
- Accessibility. Tools cannot create barriers for individuals with disabilities.
- Transparency and documentation. Employers need to understand how their tools work and be able to defend hiring decisions if challenged.
- Data use. These tools rely on data, and regulators expect clarity around how that data is used.
State and local rules add another layer. Some jurisdictions impose more detailed requirements than others. Start with the highest-risk jurisdictions and build from there.
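One concrete way to check for the adverse impact described above is the EEOC's "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80% of the highest group's rate is generally treated as evidence of adverse impact. The sketch below applies that test to hypothetical screening counts (the group names and numbers are illustrative, not real data):

```python
# Sketch: EEOC "four-fifths rule" check for adverse impact.
# The 0.8 threshold comes from the Uniform Guidelines on Employee
# Selection Procedures (29 CFR 1607.4(D)); all counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who moved forward."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate as a ratio of the highest
    group's rate; ratios below 0.8 flag potential adverse impact."""
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes: (selected, total applicants) per group
outcomes = {"Group A": (48, 120), "Group B": (30, 100)}
ratios = four_fifths_check(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # Group B's rate (0.30) vs Group A's (0.40) -> 0.75
print(flagged)  # ['Group B']
```

A ratio below 0.8 is a signal to investigate, not proof of discrimination, but it is the kind of pattern regulators expect employers to be watching for.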
Targeted Focus: Leading Jurisdictions and Emerging Requirements
HR leaders need to know where AI hiring rules apply right now and where changes are coming next. The chart below highlights jurisdictions with laws already in effect and those moving toward new requirements. These states and localities should be the priority for multi-state employers.
| Jurisdiction | Law / Bill Name | Effective Date / Status | Who It Covers | Key HR Requirements | Penalties / Enforcement |
|---|---|---|---|---|---|
| California | FEHA ADS Regs (OAL 2025-0515-01); CCPA ADMT (11 CCR § 7120) | FEHA: Current; CCPA: Apr 2026/Jan 2027 | CCPA-threshold employers (revenue/data vol.); all under FEHA | Evaluate ADS impact on protected groups; retain records; risk assessments, pre-use notice, opt-out rights | FEHA discrimination claims; CCPA fines up to $7,500/violation |
| Colorado | SB 205 (High-Risk AI) | Jun 30, 2026 | Employers deploying high-risk AI in employment | Risk management policy; annual impact assessments; notify users; correct inaccurate data | AG enforcement; civil penalties under consumer protection |
| Connecticut | Proposed algorithmic discrimination | Pending | High-risk AI deployers | Impact assessments; anti-discrimination measures | AG enforcement |
| Illinois | AI Video Interview Act (820 ILCS 42); IHRA amendments (HB 3773) | Current | All employers using AI video analysis or discriminatory AI | Consent/explain AI video use; delete videos on request; no protected-class discrimination via AI | IHRA claims; DCEO reporting violations |
| Maryland | HB 1202 (Facial Recognition) | Oct 1, 2020 | All employers using facial recognition in interviews | Written consent/waiver before facial template creation; ethical guidelines | State labor dept enforcement; potential fines |
| Massachusetts | FAIR Act (An Act Fostering Artificial Intelligence Responsibility – S.35 / H.77) | Pending 2026 | TBD | Candidate notice; potential bias reviews | TBD |
| New Jersey | Proposed AI hiring transparency, including New Jersey Assembly Bill 3911 | Pending 2026 | Likely all using automated tools | Notice to candidates; possible audits | Labor dept penalties |
| New York (State) | Assembly Bill A9314 | Pending 2026 | All employers and employment agencies | Notice to candidates; possible audits | TBD |
| New York City | Local Law 144 (AEDTs) | Enforcement began on July 5, 2023 | All employers/agencies using tools for NYC roles | Annual independent bias audits; public summary of results; 10-day advance notice to candidates | Civil penalties per violation – NYC Department of Consumer and Worker Protection (DCWP) |
| Oregon | Proposed bias mitigation in AI hiring | Attorney General Guidance | TBD | TBD | TBD |
| Texas | HB 149 (Responsible AI Governance) | Current (effective January 1, 2026) | All employers developing/using AI | Prohibits intentional discrimination via AI against protected classes | Aligns with federal claims; state AG oversight |
| Vermont | Proposed bills H0714 (automated employment decision-making tools for state agencies), H0262 (regulate AI in employment) | Pending | State agencies; employers | Transparency and fairness requirements | TBD |
| Washington | Proposed AI employment bias bills, including HB 2144, SHB 1672 | Pending 2026 | TBD (likely all employers) | Notice requirements, transparency requirements | TBD |
As of March 2026. Check state Attorney General sites or employment law trackers for updates.
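The bias audits required under NYC Local Law 144 center on impact ratios. For a tool that produces scores, the DCWP rules compare, for each demographic category, the share of candidates scoring above the full sample's median against the highest category's share. A minimal sketch of that calculation, using hypothetical group names and scores (the audit itself must be performed by an independent auditor):

```python
# Sketch: impact ratio for a scoring AEDT under NYC Local Law 144.
# Per the DCWP rules, a category's "scoring rate" is the share of its
# candidates scoring above the sample median, and its impact ratio is
# that rate divided by the highest category's rate. Scores are hypothetical.
from statistics import median

def impact_ratios(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    all_scores = [s for scores in scores_by_group.values() for s in scores]
    cutoff = median(all_scores)
    scoring_rates = {
        g: sum(s > cutoff for s in scores) / len(scores)
        for g, scores in scores_by_group.items()
    }
    top = max(scoring_rates.values())
    return {g: r / top for g, r in scoring_rates.items()}

scores = {
    "Group A": [82, 75, 91, 68, 88, 79],
    "Group B": [71, 64, 77, 83, 59, 66],
}
print(impact_ratios(scores))  # Group A: 1.0, Group B: 0.5
```

Local Law 144 does not set a pass/fail threshold; it requires the ratios to be published in a summary of the audit results, so candidates and regulators can see them.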
States with No Specific AI Hiring Law Yet
Many states have not passed AI-specific hiring laws. That does not mean there are no rules.
Federal anti-discrimination laws still apply. State human rights laws still apply. Privacy rules still apply. If an AI tool leads to a biased outcome, regulators can and will step in.
For employers operating across multiple states, the practical move is to set one standard based on the strictest requirements that apply. Managing a single, consistent process is far more workable than adjusting hiring practices state by state.
Practical Compliance Steps for HR and Recruiting
Before you can manage risk, you need a clear starting point.
Focus on a few key actions:
- Identify your tools. Know what you use across recruiting and hiring, including vendor platforms and internal systems.
- Review outcomes. Look for patterns that could create risk, especially across protected groups.
- Be clear with candidates. Explain when you use these tools and what they do in plain terms.
- Pressure-test your vendors. Make sure they can explain how their tools work and support audits or disclosures if needed.
- Assign ownership. Treat this as an ongoing process with clear responsibility across HR and legal.
Laws will continue to change. Your process needs to keep up.
Staying Current: Where to Check Next
AI hiring laws continue to shift. A one-time review is not enough.
Assign ownership so someone can track developments and surface what matters for your hiring process.
Use a small set of reliable sources and review them consistently. Revisit your tools, notices, and policies with that information in mind. If something changes, adjust your approach.
Handle this the same way you manage other compliance areas. Stay current, make updates as needed, and document your process.
AI recruiting tools can help balance candidate experience with faster, more efficient hiring, but the legal landscape continues to evolve. Make sure your approach keeps pace while still delivering results. See how Cangrade can support both by requesting a demo.