The State-by-State Guide to AI Hiring Regulations

AI tools are now deeply embedded in the hiring process. They appear in résumé screening, candidate sourcing, assessments, and video interviews. As more employers rely on these tools, the rules around them are getting more complex at the federal, state, and local levels.

Regulators assess how AI tools influence discrimination risk, data privacy, and transparency.

Some requirements already apply, and more will arrive over the next few years. States continue to introduce new bills, so this area changes quickly. HR teams need to stay current.

When organizations get this wrong, consequences follow fast. Regulators bring enforcement actions, impose fines, and trigger reputational fallout. Many of these requirements also reinforce employers’ aim: to build hiring processes that are fair, consistent, and defensible.

What Counts as an AI Hiring Tool?

AI hiring laws target tools that affect hiring decisions, not basic HR systems. If you rely on an AI tool to screen, rank, analyze, or decide who moves forward, those laws apply. That includes résumé screeners, ranking tools, video interview platforms with analytics, online assessments, and targeted job ads.

States use different terminology. You will see phrases like “automated employment decision tools” or “automated decision systems.” The label changes, but the concept remains the same. These rules apply whether the tool comes from a vendor or is built internally by your team.

For HR teams, the first step is simple. Know what you are using. Identify any tool that relies on data and algorithms to evaluate, filter, recommend, or influence decisions about applicants or employees.

Federal Baseline Requirements for AI Hiring Tools

Federal law does not address AI hiring in one place. Existing laws already cover how employers use these tools.

The Equal Employment Opportunity Commission (EEOC) has been clear. If an AI tool leads to discrimination, the employer is responsible under Title VII of the Civil Rights Act, the Americans with Disabilities Act, and other federal laws. Using a vendor does not change that.

At a high level, federal law focuses on a few core issues:

  • Discrimination risk. Employers remain responsible if a tool produces an adverse impact.
  • Accessibility. Tools cannot create barriers for individuals with disabilities.
  • Transparency and documentation. Employers need to understand how their tools work and be able to defend hiring decisions if challenged.
  • Data use. These tools rely on data, and regulators expect clarity around how that data is used.
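The discrimination-risk point above is typically screened using the EEOC's "four-fifths rule" (Uniform Guidelines on Employee Selection Procedures, 29 CFR 1607.4(D)): if any group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. A minimal sketch of that screen, with purely illustrative group names and counts:

```python
# Four-fifths rule screen for adverse impact (29 CFR 1607.4(D)).
# Group labels and applicant counts below are illustrative only.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: (rate, passes)} where passes is False when the
    group's rate is below `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

# Illustrative numbers: group_b's rate (0.30) is 62.5% of group_a's
# (0.48), below the 80% line, so it is flagged.
results = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
```

A failed screen is not automatically a violation, but it is the kind of pattern regulators expect employers to notice and investigate.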

State and local rules add another layer. Some jurisdictions impose more detailed requirements than others. Start with the highest-risk jurisdictions and build from there.

Targeted Focus: Leading Jurisdictions and Emerging Requirements

HR leaders need to know where AI hiring rules apply right now and where changes are coming next. The table below highlights jurisdictions with laws already in effect and those moving toward new requirements. These states and localities should be the priority for multi-state employers.

| Jurisdiction | Law / Bill Name | Effective Date / Status | Who It Covers | Key HR Requirements | Penalties / Enforcement |
| --- | --- | --- | --- | --- | --- |
| California | FEHA ADS Regs (OAL 2025-0515-01); CCPA ADMT (11 CCR § 7120) | FEHA: current; CCPA: Apr 2026 / Jan 2027 | CCPA-threshold employers (revenue/data volume); all employers under FEHA | Evaluate ADS impact on protected groups; retain records; risk assessments, pre-use notice, opt-out rights | FEHA discrimination claims; CCPA fines up to $7,500/violation |
| Colorado | SB 205 (High-Risk AI) | Jun 30, 2026 | Employers deploying high-risk AI in employment | Risk management policy; annual impact assessments; notify users; correct inaccurate data | AG enforcement; civil penalties under consumer protection law |
| Connecticut | Proposed algorithmic discrimination bill | Pending | High-risk AI deployers | Impact assessments; anti-discrimination measures | AG enforcement |
| Illinois | AI Video Interview Act (820 ILCS 42); IHRA amendments (HB 3773) | Current | All employers using AI video analysis or discriminatory AI | Consent to and explanation of AI video use; delete videos on request; no protected-class discrimination via AI | IHRA claims; DCEO reporting violations |
| Maryland | HB 1202 (Facial Recognition) | Oct 1, 2020 | All employers using facial recognition in interviews | Written consent/waiver before facial template creation; ethical guidelines | State labor dept enforcement; potential fines |
| Massachusetts | FAIR Act (An Act Fostering Artificial Intelligence Responsibility, S.35/H.77) | Pending 2026 | TBD | Candidate notice; potential bias reviews | TBD |
| New Jersey | Proposed AI hiring transparency bills, including Assembly Bill 3911 | Pending 2026 | Likely all employers using automated tools | Notice to candidates; possible audits | Labor dept penalties |
| New York (State) | Assembly Bill A9314 | Pending 2026 | All employers and employment agencies | Notice to candidates; possible audits | TBD |
| New York City | Local Law 144 (AEDTs) | Enforcement began Jul 5, 2023 | All employers/agencies using tools for NYC roles | Annual independent bias audits; public summary of results; 10-day advance notice to candidates | Civil penalties per violation; NYC Department of Consumer and Worker Protection (DCWP) |
| Oregon | Proposed bias mitigation in AI hiring | Attorney General guidance | TBD | TBD | TBD |
| Texas | HB 149 (Responsible AI Governance) | Current (effective Jan 1, 2026) | All employers developing/using AI | Prohibits intentional discrimination via AI against protected classes | Aligns with federal claims; state AG oversight |
| Vermont | Proposed bills H0714 (automated employment decision-making tools for state agencies) and H0262 (regulate AI in employment) | Pending | State agencies; employers | Transparency and fairness requirements | TBD |
| Washington | Proposed AI employment bias bills, including HB 2144 and SHB 1672 | Pending 2026 | TBD (likely all employers) | Notice requirements; transparency requirements | TBD |

As of March 2026. Check state Attorney General sites or employment law trackers for updates.

States with No Specific AI Hiring Law Yet

Many states have not passed AI-specific hiring laws. That does not mean there are no rules.

Federal anti-discrimination laws still apply. State human rights laws still apply. Privacy rules still apply. If an AI tool leads to a biased outcome, regulators can and will step in.

For employers operating across multiple states, the practical move is to set one standard based on the strictest requirements that apply. Managing a single, consistent process is far more workable than adjusting hiring practices state by state.

Practical Compliance Steps for HR and Recruiting

Before you can manage risk, you need a clear starting point.

Focus on a few key actions:

  1. Identify your tools. Know what you use across recruiting and hiring, including vendor platforms and internal systems.
  2. Review outcomes. Look for patterns that could create risk, especially across protected groups.
  3. Be clear with candidates. Explain when you use these tools and what they do in plain terms.
  4. Pressure-test your vendors. Make sure they can explain how their tools work and support audits or disclosures if needed.
  5. Assign ownership. Treat this as an ongoing process with clear responsibility across HR and legal.

Laws will continue to change. Your process needs to keep up.

Staying Current: Where to Check Next

AI hiring laws continue to shift. A one-time review is not enough.

Assign ownership so someone can track developments and surface what matters for your hiring process.

Use a small set of reliable sources and review them consistently. Revisit your tools, notices, and policies with that information in mind. If something changes, adjust your approach.

Handle this the same way you manage other compliance areas. Stay current, make updates as needed, and document your process.

AI recruiting tools can help balance candidate experience with faster, more efficient hiring, but the legal landscape continues to evolve. Make sure your approach keeps pace while still delivering results. See how Cangrade can support both by requesting a demo.