Can AI Vendors Be Liable for Bias? California’s 2025 Rules

California’s 2025 rules mark a seismic shift in how AI vendors and employers must address bias, discrimination, and fairness in automated decision systems (ADS). The central question: can AI vendors themselves be liable for bias?

With the rise of AI in hiring, workforce management, and predictive behavioral analysis, the state is clarifying liability for discriminatory outcomes stemming from automated decision-making systems. These regulations emphasize human oversight, anti-discrimination laws, and fairness under the Fair Employment and Housing Act (FEHA), laying out new compliance responsibilities for California employers and third-party AI vendors alike.

Context: AI Tools in Employment Decisions

Today, many California employers rely on AI tools to screen job applicants, conduct video interview analysis, and apply automated resume filters. These systems use machine learning and computational processes to predict candidate suitability, and in doing so they may weigh protected characteristics or infer them indirectly. Such tools promise efficiency but risk replicating or amplifying existing bias in training data, producing an adverse impact on protected groups.

Automated decision-making tools can replace human judgment entirely or augment it. Even when human decision makers are “in the loop,” biased recommendations can shape employment decisions, raising questions of liability under anti-discrimination laws.

California Civil Rights Council and the 2025 Rules

The California Civil Rights Council and the California Civil Rights Department are spearheading these 2025 rules. The regulations are designed to enforce FEHA, which bars discrimination in employment based on protected characteristics such as race, sex, age, disability, and more. The goal is to ensure equal employment opportunity, even when employers use AI systems in hiring processes.

These final regulations will define “automated decision systems (ADS)” broadly, including:

  • Automated decision-making tools that replace or aid human decision-making
  • AI technology that uses computational processes to evaluate or make employment decisions
  • Systems performing predictive behavioral analysis on job applicants

Such tools can include video interview analysis software that evaluates tone or facial movements, automated resume filters, or scoring models that predict job success.

Bias Audits and Testing Requirements

A centerpiece of the 2025 rules is bias testing. Employers and AI vendors must evaluate whether automated decision-making systems produce discriminatory outcomes for protected groups. This is similar in concept to federal guidance from the Equal Employment Opportunity Commission (EEOC), but California’s rules are expected to be even stricter.

Bias audits must examine:

  • Adverse impact on protected characteristics
  • Whether the training data is biased or unrepresentative
  • The validity of using such data for business necessity

Bias audits are intended to expose hidden discrimination that might be baked into models, enabling California employers to take corrective action before deploying these tools.
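To make adverse-impact testing concrete, here is a minimal, hypothetical sketch of how an employer or vendor might screen aggregate outcomes from an automated resume filter. It applies the four-fifths rule, a long-standing heuristic from federal selection guidance, purely as an illustration; the sample data, group labels, and threshold are assumptions, and California’s final regulations may call for different or additional analyses.

```python
from collections import Counter

# Hypothetical screening outcomes from an automated resume filter.
# Each record is (demographic group, passed_screen). Illustrative data only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of applicants the tool passed forward, per group."""
    passed, total = Counter(), Counter()
    for group, was_selected in records:
        total[group] += 1
        if was_selected:
            passed[group] += 1
    return {group: passed[group] / total[group] for group in total}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths heuristic)."""
    best = max(rates.values())
    return {group: (rate / best) < threshold for group, rate in rates.items()}

rates = selection_rates(outcomes)
flags = adverse_impact_flags(rates)
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, potential adverse impact: {flags[group]}")
```

In a real audit, checks along these lines would run over actual applicant data, be documented, and feed into the business-necessity analysis discussed later in this article.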

Can AI Vendors Be Liable for Bias?

A central question in these 2025 rules is: can AI vendors themselves be liable for bias? Historically, employers have shouldered liability under FEHA. But California’s new framework recognizes that third-party AI vendors play an active role in designing, maintaining, and deploying these systems, and FEHA remains the governing statute for assessing that role.

The rules suggest that artificial intelligence vendors can share liability when:

  • Vendors provide tools that cause an adverse impact on protected groups
  • Vendors fail to conduct or disclose bias audits
  • Vendors design systems that infer protected characteristics without justification
  • Vendors fail to allow human oversight or appeals of AI-driven decisions

This move reflects the state’s broader approach to administrative law and consumer privacy (including the California Consumer Privacy Act), which imposes obligations on service providers that handle sensitive personal information.

Human Oversight and Final Decisions

California’s 2025 rules do not ban AI-driven decisions outright. Instead, they mandate human oversight and accountability:

  • Employers must ensure that final decisions are not simply a rubber stamp of automated decision system outputs.
  • There must be a meaningful review by human decision makers who understand the tool’s limitations.
  • Job seekers should be able to appeal AI-driven decisions and contest the outcome.

These requirements aim to preserve human judgment and reduce the risk that biased systems replace human decision making entirely.

Enforcement Provisions and Compliance Risk

The California Civil Rights Department will be responsible for enforcing these rules. Employers and AI vendors face significant exposure if they fail to meet their compliance obligations:

  • Potential administrative complaints under FEHA
  • Civil litigation alleging discrimination or adverse impact
  • Penalties for failure to conduct bias audits
  • Sanctions for failing to disclose the use of automated decision-making systems to job applicants

Employers must disclose that they use such tools in hiring, describe the nature of the decision-making process, and inform applicants of their rights. Punishing applicants who assert those rights can additionally give rise to retaliation claims under FEHA’s anti-retaliation provisions.

The “Robo Bosses Act” and Local Laws

Unfair hiring lawsuits are more common than you may think. At the state level, legislators are also considering bills such as the so-called “Robo Bosses Act” to regulate AI in hiring and management. These bills would:

  • Require clear disclosure of AI systems’ use in workforce management
  • Limit data collection about workers, including sensitive personal information
  • Provide rights to job seekers and employees to know how decisions are made

In addition to state laws, local laws may impose even stricter requirements. Many California employers must navigate these overlapping frameworks to reduce compliance risk.

Anti-Discrimination and Fair Employment Principles

All of these regulations are rooted in anti-discrimination principles enshrined in the Fair Employment and Housing Act. FEHA prohibits discrimination based on:

  • Race, color, national origin
  • Sex, gender identity, sexual orientation
  • Age (40+), disability, medical condition
  • Religion, marital status, military/veteran status

California employers using AI technology must show that any adverse impact is justified by business necessity. Even then, they must prove no less-discriminatory alternative exists. Automated decision-making tools that produce discriminatory outcomes without justification are illegal.

The Role of the California Consumer Privacy Act

The California Consumer Privacy Act (CCPA) and its amendments also impact AI vendors and employers. These laws impose obligations on data collection and the use of sensitive personal information, which includes data on protected characteristics.

Under CCPA:

  • Job applicants have the right to know what data is collected and how it is used.
  • Employers and AI vendors must disclose the use of automated decision-making.
  • Consent may be required for certain types of processing.

These privacy laws add another layer of compliance responsibilities for employers using AI systems in hiring.

Existing Laws and Federal Guidance

Beyond California law, federal laws like Title VII and guidance from the EEOC apply. Employers must ensure AI tools do not produce discriminatory outcomes under federal employment regulations.

California’s 2025 rules are expected to serve as a model for other states and may influence federal standards. They emphasize that AI vendors are not simply neutral suppliers of technology. By designing and selling automated decision systems, they share compliance obligations for preventing bias.

Implications for Employers and AI Vendors

These regulations aim to make sure AI-driven decisions in hiring are fair and accountable. Both California employers and third-party AI vendors must:

  • Conduct bias audits of automated decision-making systems
  • Document the business necessity for any adverse impact
  • Provide human oversight over final decisions
  • Ensure job seekers can appeal or challenge AI-driven decisions
  • Disclose the use of AI systems in hiring

Failing to meet these requirements can result in lawsuits, penalties, and reputational harm. For AI vendors, these rules mean that selling biased systems will no longer be legally safe.

Conclusion

California’s 2025 rules reflect a fundamental shift in administrative law, employment regulations, and anti-discrimination enforcement. They recognize that AI vendors cannot disclaim responsibility for bias in their products.

Instead, vendors and employers must work together to ensure automated decision-making systems promote fairness, respect protected characteristics, and comply with the Fair Employment and Housing Act.

These changes will redefine workforce management, the hiring process, and human decision making in California and beyond. Employers, AI vendors, and legal counsel must be prepared for this new landscape of compliance obligations and enforcement provisions.

Hold AI Vendors Liable for Bias with BLG

If your company is evaluating AI tools for hiring or wants to ensure compliance with California’s evolving rules, contact Bourassa Law Group today to discuss your obligations and reduce your risk.
