Fired by an Algorithm — How California's 2026 FEHA Rules Hold Employers Liable for AI-Driven Terminations
- Lawyer Referral Center

Updated April 2026 to reflect FEHA's automated decision system regulations effective October 1, 2025 (2 Cal. Code Regs., tit. 2, §§ 11008.1–11008.4), California Civil Rights Department enforcement guidance on ADS discrimination, and emerging litigation patterns in AI-driven employment decisions.
The performance management platform flagged you as a low performer. The workforce analytics tool ranked you in the bottom quartile. The scheduling algorithm repeatedly assigned you the least desirable shifts.
The automated scoring system gave your work a rating that triggered a disciplinary process. A human manager reviewed the output, agreed with the recommendation, and terminated your employment.
You never saw the data. You never knew the criteria. You never had an opportunity to challenge the score.
And your employer told you — perhaps sincerely — that the decision was objective. Data-driven. Free of bias.
California law now says otherwise.
As of October 1, 2025, the California Civil Rights Department's automated decision system regulations under FEHA impose specific obligations on every employer that uses AI tools, algorithmic systems, or automated scoring to make or influence employment decisions — including terminations.
These regulations represent the most significant expansion of California employment discrimination law in years. And the employees most likely to be harmed by discriminatory AI systems are the same employees California's anti-discrimination framework has always protected.

What Are Automated Decision Systems — and Why Do They Discriminate?
An automated decision system — or ADS — is any computational tool, algorithm, machine learning model, or AI system that processes data about employees or applicants to generate a score, ranking, recommendation, or decision that influences an employment outcome.
ADS tools are now embedded throughout the California workplace — in ways that most employees and many employers do not fully recognize.
| ADS Application | How It Is Used in Employment | Discrimination Risk |
| --- | --- | --- |
| Performance scoring platforms | Assign numerical ratings based on productivity metrics, activity logs, or output volume | Metrics that penalize caregiving patterns, disability-related absences, or communication styles correlated with protected characteristics |
| Workforce analytics tools | Rank employees for retention, promotion, or reduction decisions | Training data reflecting historical workforce disparities produces discriminatory rankings |
| Resume screening algorithms | Filter candidates based on keyword matching, credential patterns, or predictive scoring | Language and credential patterns that correlate with race, national origin, or age |
| Scheduling algorithms | Assign shifts based on availability patterns and historical scheduling data | Patterns that disadvantage employees who requested accommodation or took protected leave |
| Behavioral monitoring systems | Score remote employees based on keystrokes, screen activity, or communication patterns | Metrics that penalize disability-related work patterns or culturally distinct communication styles |
| Sentiment analysis tools | Evaluate employee communications for "engagement" or "flight risk" | Language pattern analysis that correlates with national origin, age, or disability |
| Predictive termination models | Score employees for likelihood of voluntary departure or "cultural fit" | Training data reflecting prior discriminatory termination patterns |
The core discrimination problem with ADS tools is that they learn from historical data, which reflects historical discrimination.
An algorithm trained on the workforce decisions of an employer that historically promoted white employees and terminated employees of color at higher rates will encode those patterns into its outputs.
The algorithm does not intend to discriminate. It replicates the discrimination embedded in its training data with mathematical precision.
California's 2026 FEHA ADS Regulations — What They Require
The California Civil Rights Department's ADS regulations — 2 Cal. Code Regs., tit. 2, §§ 11008.1–11008.4 — effective October 1, 2025, impose four categories of obligations on California employers that use automated decision systems in employment decisions.
Obligation 1 — Pre-Use Notice
Before deploying an ADS tool that will influence employment decisions affecting California employees, the employer must notify affected employees of:
The existence of the ADS and its general purpose
The categories of data the system collects and processes
The types of employment decisions the system influences
How employees can request more information about the system
This notice requirement applies to both new ADS deployments and to existing systems already in use when the regulations took effect.
Employers that were using ADS tools before October 1, 2025 without employee notice were required to provide that notice by the regulation's effective date.
Obligation 2 — Post-Adverse-Action Notice
When an employer takes an adverse employment action — termination, demotion, significant schedule change, denial of promotion — that was influenced in whole or in part by an ADS output, the employer must notify the affected employee:
That an ADS was used in the decision
What data the system used to generate the relevant output
What the system's output was
That the employee has the right to request a human review of the decision
This post-adverse-action notice is the most significant employee protection in the regulations, because it creates the documentary record that allows an employee and their attorney to evaluate whether the ADS output reflects discriminatory criteria.
Obligation 3 — Discriminatory Effect Evaluation
Employers must evaluate their ADS tools for discriminatory effects on protected groups — including race, sex, age, disability, national origin, and all other FEHA-protected characteristics. This evaluation must occur:
Before deploying a new ADS tool
Periodically during use — at least annually for high-impact systems
After any significant modification to the system's criteria or training data
After any adverse employment action that the employee challenges as discriminatory
The evaluation must be documented and must specifically assess whether the system produces disparate impact on any FEHA-protected group.
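The regulations do not prescribe a particular statistical test, but a common first-pass screen in bias evaluations of this kind is the adverse impact ratio, the EEOC's "four-fifths" rule of thumb. The sketch below is a minimal, hypothetical Python illustration (the function name, group labels, and counts are invented for the example), comparing each group's retention rate against the best-treated group's:

```python
from collections import Counter

def four_fifths_check(records, threshold=0.8):
    # records: iterable of (group, flagged) pairs, flagged=True meaning
    # the ADS marked the person for adverse action
    totals, kept = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        if not flagged:
            kept[group] += 1
    # retention rate = share of each group NOT flagged
    rates = {g: kept[g] / totals[g] for g in totals}
    top = max(rates.values())
    # a group retained at less than 80% of the best-treated group's
    # rate is a conventional red flag for disparate impact
    return {g: (rates[g], rates[g] / top, rates[g] / top < threshold)
            for g in rates}

# hypothetical ADS outputs: 5 of 50 group-A employees flagged,
# 15 of 50 group-B employees flagged
sample = [("A", True)] * 5 + [("A", False)] * 45 \
       + [("B", True)] * 15 + [("B", False)] * 35
for group, (rate, ratio, flag) in sorted(four_fifths_check(sample).items()):
    print(group, round(rate, 2), round(ratio, 2), flag)
```

On this invented data, group A retains 90% and group B 70%; B's ratio of roughly 0.78 falls below the 0.8 threshold and is flagged. Falling below four-fifths is a screening signal that warrants deeper analysis, not legal proof of disparate impact by itself.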
An employer who deploys an ADS without conducting this evaluation — or who conducts it and discovers discriminatory effects but deploys the system anyway — has committed an independent FEHA violation under Government Code § 12940.
Obligation 4 — Four-Year Documentation Retention
Employers must retain documentation of all ADS inputs, outputs, evaluation results, and related employment policies for at least four years.
This documentation retention requirement is specifically designed to enable CRD investigations and civil discovery — giving employees and government investigators access to the data needed to evaluate whether discriminatory patterns exist in ADS outputs across the workforce.
The No-Safe-Harbor Rule — Why "The Vendor Did It" Is Not a Defense
The most consequential provision of California's ADS framework for employers — and the most important protection for employees — is the explicit rejection of the third-party vendor defense.
Under the FEHA ADS regulations, an employer who deploys an ADS tool operated by a third-party vendor retains full FEHA liability for any discriminatory outcome that the tool produces. The employer cannot escape liability by arguing that:
The algorithm was designed by the vendor, not the employer
The employer did not know the system was producing discriminatory results
The vendor contractually warranted that the system complied with anti-discrimination law
The employer only reviewed and approved the system's recommendation — a human made the final decision
None of these defenses insulates the employer from FEHA liability. The California Civil Rights Department's position — confirmed in regulatory guidance — is that deploying an ADS tool in employment decisions is a business decision for which the employer bears full responsibility. If the tool discriminates, the employer discriminated.
This is a significant departure from how many employers have approached ADS procurement. The common assumption — that purchasing a vendor-certified "bias-free" tool transfers the discrimination liability to the vendor — is not valid under California law.
The employer's obligation to evaluate the tool for discriminatory effects, notify employees of its use, and document its outputs belongs to the employer regardless of who built the system.
How ADS Discrimination Manifests in Termination Decisions
ADS-driven discrimination in termination decisions is rarely visible as a discrete event. It emerges from a pattern — a systematic bias in how the algorithm scores, ranks, or flags employees across protected groups — that becomes apparent only when workforce data are analyzed.
The productivity metric trap. Performance scoring systems that measure output volume, response time, or activity level without adjusting for protected leave, accommodation-related work patterns, or disability-related productivity variations systematically penalize employees who have taken FMLA, CFRA, or PDL leave, or who have received disability accommodations that affect their measured output.
An employee who was legitimately absent for protected leave and whose productivity score reflects that absence — without the system accounting for the leave — has been scored on discriminatory criteria.
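To make the trap concrete, here is a minimal, hypothetical Python sketch (the function name and numbers are invented for the example) showing how excluding protected-leave weeks from the denominator changes a simple average-output score:

```python
def avg_weekly_output(weekly_output, leave_weeks=None, exclude_leave=True):
    # average output per counted week; protected-leave weeks can be
    # excluded so the leave itself does not drag the score down
    leave_weeks = leave_weeks or set()
    counted = [out for week, out in enumerate(weekly_output)
               if not (exclude_leave and week in leave_weeks)]
    return sum(counted) / len(counted)

# hypothetical year: steady output, then a 12-week protected leave
# that the tracking system records as zero-output weeks
output = [100] * 40 + [0] * 12
leave = set(range(40, 52))

print(round(avg_weekly_output(output, leave, exclude_leave=False), 1))  # 76.9
print(round(avg_weekly_output(output, leave, exclude_leave=True), 1))   # 100.0
```

The naive calculation drops the employee's score by nearly a quarter solely because of the leave; the adjusted calculation scores only the weeks actually worked. A system that uses the naive version is scoring employees on their exercise of protected leave.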
The engagement score problem. Workforce analytics tools that score employees on "engagement," "collaboration," or "cultural alignment" frequently do so through behavioral metrics — communication frequency, meeting participation, after-hours availability — that disadvantage employees with caregiving responsibilities, disabilities that affect energy or availability, or cultural communication styles that differ from the dominant workplace norm.
These engagement scores then drive termination recommendations that track protected characteristics without any single decision-maker intending to discriminate.
The historical bias loop. A termination prediction model trained on an employer's historical termination data will encode whatever biases existed in those historical decisions.
If the employer historically terminated employees of color at higher rates for comparable conduct — even without conscious discriminatory intent — the model will score employees of color as higher termination risks, producing recommendations that perpetuate the historical pattern with algorithmic precision.
The RIF selection algorithm. Reduction-in-force selection tools that rank employees for elimination based on composite scores — combining performance ratings, tenure adjustments, role criticality assessments, and skills evaluations — can systematically produce age- or race-biased selection outcomes even when no individual criterion is discriminatory on its face.
The composite score aggregates the biases embedded in each component into a single number that then drives the selection decision.
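The aggregation step itself can be sketched in a few lines of Python. Everything here is hypothetical (the weights, component names, and cohort-average scores are invented), but it shows how modest component-level gaps fold into one ranking number:

```python
def composite_score(scores, weights):
    # weighted sum of component scores; this single number is what a
    # rank-and-cut RIF selection acts on
    return sum(scores[k] * weights[k] for k in weights)

# hypothetical component weights and cohort-average scores on a 1-5 scale
weights = {"performance": 0.4, "skills": 0.3, "criticality": 0.3}
cohort_a = {"performance": 3.6, "skills": 3.7, "criticality": 3.6}
cohort_b = {"performance": 3.4, "skills": 3.3, "criticality": 3.4}

print(round(composite_score(cohort_a, weights), 2))  # 3.63
print(round(composite_score(cohort_b, weights), 2))  # 3.37
```

No single component difference looks decisive, yet the composite produces a consistent gap between the cohorts; a ranked cutoff applied to that composite can then select one cohort disproportionately, which is the pattern a workforce-wide disparate impact analysis is designed to detect.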
What Employees Can Now Challenge — The Legal Framework
An employee terminated in California on or after October 1, 2025, whose termination decision was influenced by an ADS has several potential legal claims under FEHA's ADS framework.
| Claim | Legal Basis | What Must Be Shown |
| --- | --- | --- |
| Disparate treatment discrimination | FEHA § 12940(a) | ADS output reflected discriminatory criteria targeting a protected characteristic |
| Disparate impact discrimination | FEHA § 12941 / ADS regulations | ADS produced statistically disparate adverse outcomes for a protected group without business justification |
| Failure to provide ADS notice | ADS regulations § 11008.2 | Employer did not notify employee of ADS use before or after adverse action |
| Failure to evaluate for discriminatory effects | ADS regulations § 11008.3 | Employer deployed ADS without required bias evaluation |
| Retaliation for challenging ADS decision | FEHA § 12940(h) | Employee objected to ADS-influenced decision and suffered adverse action |
The disparate impact theory is particularly powerful in ADS cases — because it does not require proof of discriminatory intent. An employee does not need to show that the employer wanted to discriminate.
They need only show that the algorithm produced outcomes that disproportionately harmed a protected group, and that the employer cannot justify the system's criteria as a business necessity.
For a full breakdown of how California's FEHA discrimination framework evaluates both disparate treatment and disparate impact claims, see our California workplace discrimination guide.
What to Request — The ADS Discovery Package
When an employee challenges an ADS-influenced termination, the documentation the employer was required to retain under the four-year retention rule becomes the foundation of the legal case.
An experienced California employment attorney will request — through the CRD investigation process or civil discovery — the following from the employer:
The ADS output for the terminated employee. The specific score, ranking, flag, or recommendation the system generated — including the data inputs that produced it.
The ADS criteria and weighting. What factors the system uses, how they are weighted, and whether those weightings were reviewed for discriminatory effects.
The workforce-wide ADS output data. The scores, rankings, or flags generated for all similarly situated employees — broken down by protected characteristic — to identify whether the system produced disparate outcomes across groups.
The bias evaluation documentation. The employer's own evaluation of whether the system produces discriminatory effects, which, if it reveals problems that the employer ignored, is among the most damaging evidence available.
The vendor documentation. The vendor's representations about the system's design, training data, and bias testing, and whether those representations were accurate.
The post-adverse-action notice. If the employer failed to provide the required notice, that failure is itself a FEHA violation and evidence of the employer's noncompliance with the regulatory framework.
Real Cases — AI and Algorithmic Termination Patterns in California
Technology, San Francisco. A 54-year-old software engineer was identified for termination by a workforce analytics platform that the employer used to score employees on "future potential" and "adaptability."
The platform's scoring criteria — which heavily weighted recent skill acquisition in specific programming languages and penalized employees whose GitHub activity had plateaued — systematically produced lower scores for employees over 50 who had reached senior technical roles and were no longer in active skill acquisition phases.
Of the twelve employees the platform flagged for termination, ten were over 50. The engineer's attorney obtained the platform's output data and criteria through CRD discovery.
The ADS's scoring structure — designed around criteria that tracked age with statistical precision — supported both a disparate-impact age-discrimination claim under FEHA and an independent ADS regulation violation for failure to evaluate the system for discriminatory effects before deployment.
To assess whether your termination involved AI-influenced criteria, our wrongful termination case qualifier walks through the specific indicators that attorneys evaluate.
Healthcare, Los Angeles. A productivity-monitoring system used in a hospital network scored nursing staff on response time, documentation speed, and shift activity levels.
The system did not account for FMLA or CFRA leave in its productivity calculations — an employee who had taken twelve weeks of protected medical leave returned to find that her productivity score treated the leave period as zero-output weeks.
Her composite score placed her in the bottom quartile, triggering an automatic performance improvement plan. When she raised the leave accounting issue, HR responded that the "system calculated the scores."
The employer's failure to audit the system for leave-related disparate impact — combined with the employer's failure to fulfill the post-adverse-action notice obligation — established both a FEHA disability-related discrimination claim and an independent violation of the ADS regulations.
The FEHA Claim Checker covers the exact disability and leave intersection that this case turned on.
Financial services, San Diego. A sentiment analysis tool used by a large financial services firm scored employees on "communication effectiveness" and "team alignment" based on analysis of internal email and Slack communications.
The tool consistently scored employees whose communications used direct, task-focused language — a style more common among employees from certain cultural backgrounds — as lower on "team alignment" than employees whose communications used more socially performative language.
The disparity by national origin was statistically significant across the firm's workforce data. Three employees of Indian national origin who were terminated following low alignment scores challenged the terminations under FEHA's national origin discrimination and ADS disparate impact frameworks.
The CRD accepted the complaints and initiated an investigation.
What to Do If You Were Terminated and an ADS Was Involved
Request the post-adverse-action notice immediately. If your employer used an ADS in your termination decision, they were legally required to notify you. If they did not, request the notice in writing — and document the request and the response, or lack thereof.
Ask specifically about algorithmic or AI involvement. In your termination meeting or immediately after, ask HR in writing whether any automated scoring, ranking, or algorithmic tool was used in the decision. The employer's response — or refusal to respond — is evidence.
Preserve your performance records. Keep all performance reviews, productivity reports, and any dashboards or scores you had access to before termination. If you can still view your own scores on an employer platform, take screenshots before you lose access to the system.
Note the demographics. If you are aware of other employees who were flagged or terminated through the same system — and you know anything about their demographics — document that. The disparate impact theory requires workforce-wide pattern evidence that your attorney will pursue in discovery.
File within the deadline. FEHA ADS discrimination claims follow the same three-year filing deadline as other FEHA claims — measured from the date of the adverse action. File a complaint with the California Civil Rights Department within three years of your termination.
The CRD's investigation will include a request for the employer's ADS documentation under the four-year retention requirement.
For the full wrongful termination legal framework — including how ADS claims interact with the Tameny doctrine, FEHA discrimination theories, and available damages — see our California wrongful termination guide.
Frequently Asked Questions
Does California's ADS regulation apply to all employers or only large ones? The FEHA ADS regulations apply to all California employers covered by FEHA — those with five or more employees. There is no size exemption. A fifty-person company using a third-party HR analytics tool is subject to the same pre-use notice, post-adverse-action notice, and bias evaluation obligations as a Fortune 500 employer deploying enterprise-scale workforce analytics.
What if the employer says a human made the final termination decision? Under California's ADS regulations, human review of an ADS output does not insulate the employer from FEHA liability. If the human decision was influenced by the algorithmic output — even partially — the ADS is considered to have been used in the employment decision. The employer cannot escape liability by pointing to the human-in-the-loop if that human was acting on discriminatory algorithmic data.
Can I challenge an ADS-influenced termination if I do not know what system was used? Yes. The CRD's investigation process includes the authority to require employers to produce ADS documentation under the four-year retention requirement.
You do not need to know the specific system or its criteria before filing — the investigation will surface that information. What you need is a reasonable basis to believe an ADS was used, which can come from the nature of your performance evaluation process, any dashboards or scoring tools you were aware of, or the employer's general use of HR technology.
What if my employer's ADS vendor claims their system is "bias-free"? Vendor certifications of bias-free performance are not a defense under California law. The employer's obligation to evaluate the system for discriminatory effects before and during deployment exists independently of whatever the vendor represents.
A "bias-free" certification from a vendor does not satisfy the employer's obligation to conduct its own evaluation — and if the system produces discriminatory outcomes in the employer's specific workforce context, the vendor's certification is legally irrelevant.
How does an ADS discrimination claim interact with a standard FEHA discrimination claim? They are complementary theories that can be pursued simultaneously. A standard FEHA disparate treatment claim requires showing discriminatory intent — that the ADS was designed or deployed to produce a biased result.
A FEHA disparate impact claim does not require intent — the ADS produced statistically disproportionate adverse outcomes for a protected group without business justification. The ADS regulation violations are independent claims that can be added to either theory or to both.
Are there damages specifically for ADS regulation violations? ADS regulation violations are FEHA violations — they carry the full FEHA remedial framework.
An employee who establishes that their employer violated the ADS regulations in connection with a discriminatory termination can recover the same damages available in any FEHA wrongful termination case — back pay, front pay, uncapped emotional distress, punitive damages where the conduct was malicious or oppressive, and attorney's fees.
Connect With a Vetted California Employment Attorney
AI-driven termination cases require early discovery of ADS documentation — the employer's four-year retention obligation means the evidence exists, but obtaining it requires a CRD complaint or civil lawsuit to trigger the disclosure requirement. Early legal intervention preserves access to that evidence before the retention period expires.
DISCLOSURE
This article is intended for general informational purposes only and does not constitute legal advice. No attorney-client relationship is formed by reading this content. 1000Attorneys.com is a State Bar of California Certified Lawyer Referral and Information Service (LRS #0128), not a law firm.

