Colorado’s New AI Law Sets “National Standard” for Healthcare AI: 5 Things Developers and Deployers Need to Know, Now

Last Friday, Colorado’s governor signed into law the Colorado AI Act (“CAIA”). For digital health companies building with artificial intelligence, CAIA sets the first comprehensive national benchmark for minimum rights and protections for users of healthcare AI, effective February 1, 2026.

Why is it a national benchmark? The law only applies to companies doing business in Colorado, but because it is the most stringent and comprehensive state AI law to date, it is the de facto standard for companies operating nationally.

Over the next two years, Colorado’s legislature and executive agencies will no doubt be busy interpreting the new law, publishing implementing regulations, and modifying it as industry and consumer groups inevitably continue to weigh in. Because the law is a result of significant multi-stakeholder participation, and because it tends to reflect the existing guidance related to implementation of AI in healthcare, I predict we will see changes, but that the overall structure will remain. I also believe it’s likely other states will look to this law as they craft their own legislation.

Here’s what you need to know, now.

Will this law apply to all digital health companies?

CAIA will regulate “deployers and developers” of high-risk artificial intelligence systems (“HRAI Systems”). The law defines “artificial intelligence system” as a machine-based system that “infers from the inputs the system receives how to generate outputs” that can influence physical or virtual environments. This definition covers a broad range of AI types, including algorithmic, generative, predictive, and decision-making AI.

HRAI Systems include those AI systems that, when deployed, make (or are substantial factors in making) decisions with a “significant” effect on the provision, denial, cost, or “terms” of healthcare services or health insurance. Regulations are needed to clarify whether direct-to-consumer healthcare AI (where there is no clinician intervention) falls into this category. The legislative language is not clear on this point, though the purpose of the law seems to align with that intention. In addition, the word “terms” is undefined and therefore open to interpretation.

Importantly, the law excludes technology used for communication or informational purposes, referral or recommendation, or “answering questions” if the HRAI System is subject to an acceptable use policy that prohibits generating content that is discriminatory or harmful.

Key Takeaway: Generally, the law could apply to healthcare providers, payors, and technology companies developing or deploying HRAI Systems. However, the law seems to indicate that HRAI Systems that do not impact clinical or insurance coverage decision-making would not trigger the law, so a close look at the intended use will be key to determining applicability.

What is the primary purpose of the law?

The law imposes a duty on developers and deployers of HRAI Systems to use “reasonable care” to avoid algorithmic discrimination.

Algorithmic discrimination occurs when the use of HRAI Systems results in “unlawful or differential treatment or impact” on an individual or group of individuals on the basis of age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, etc.

Pro Tip: While this law’s stated purpose is preventing bias and discrimination, its overall structure touches on each of the SHARP-ENF Principles we discuss here. Use this framework to begin putting together your AI governance plan.

What does the law require of healthcare AI developers?

The law primarily requires developers of HRAI Systems to disclose and document specific information about the HRAI System’s training data, performance, risks, and benefits. However, for reasons we touch on later, to produce this information, developers may also want to conduct an impact assessment (generally required only for deployers of HRAI Systems). Note that some developers will also be deployers; those folks will need to comply with both sets of requirements.

Developer Checklist

Developers must make the following available to deployers before sale or license of the HRAI System to a third party (a sketch of one way to track this package appears after the checklist):

  1. A statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the HRAI System

  2. Documentation disclosing/describing:

    • High-level summaries of the type of data used to train the HRAI System

    • Known or reasonably foreseeable limitations of the HRAI System, including algorithmic discrimination

    • The purpose of the HRAI System

    • The intended benefits and uses of the HRAI System

    • How the HRAI System was evaluated for performance and mitigation of algorithmic discrimination

    • The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation

    • The intended outputs of the HRAI System

    • The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the HRAI System

    • How the HRAI System should be used, not be used, and be monitored by an individual when the HRAI System is used to make, or is a substantial factor in making, a healthcare or insurance decision

    • Any additional information reasonably needed to allow the deployer to comply with law, and to understand the outputs and monitor the performance of the HRAI System for risks of algorithmic discrimination.
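For development teams that want to track this package programmatically, here’s a minimal sketch, in Python, of one way to structure it. CAIA requires the substance of these disclosures, not any particular format, and every field and method name below is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDisclosurePackage:
    """Hypothetical record of the CAIA developer disclosures listed above.
    The statute mandates the content, not this (or any) schema."""
    system_name: str
    purpose: str                                # purpose of the HRAI System
    intended_benefits_and_uses: list[str]
    reasonably_foreseeable_uses: list[str]
    known_harmful_or_inappropriate_uses: list[str]
    training_data_summary: str                  # high-level summary of the type of training data
    known_limitations: list[str]                # incl. algorithmic discrimination risks
    performance_evaluation: str                 # how performance and bias mitigation were evaluated
    data_governance_measures: str               # data sources examined, biases found, mitigations
    intended_outputs: list[str]
    discrimination_risk_mitigations: list[str]
    usage_and_monitoring_instructions: str      # how the system should, and should not, be used and monitored
    additional_compliance_info: str = ""        # anything else deployers need to comply and monitor

    def is_complete(self) -> bool:
        """Naive completeness check before the package ships to a deployer."""
        required_text = [self.purpose, self.training_data_summary,
                         self.performance_evaluation, self.data_governance_measures,
                         self.usage_and_monitoring_instructions]
        required_lists = [self.intended_benefits_and_uses, self.reasonably_foreseeable_uses,
                          self.known_harmful_or_inappropriate_uses, self.known_limitations,
                          self.intended_outputs, self.discrimination_risk_mitigations]
        return all(t.strip() for t in required_text) and all(required_lists)
```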

In addition, developers must publish a statement on the developer’s web site or in a public use case inventory, summarizing (1) the types of HRAI Systems that the developer makes available to deployers or other developers; and (2) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination. This statement must be updated as necessary to keep it accurate, and no later than 90 days after the developer modifies the HRAI System in a way that introduces a reasonably foreseeable risk of algorithmic discrimination.

Last, developers must disclose to the Colorado Attorney General (“AG”) and to all known deployers any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the HRAI System, within 90 days of discovery.

Key Takeaway: Compliance with this section will require developers to understand and document the intended use, potential discrimination risks (and mitigations), underlying model training data, and intended inputs/outputs. This information must be produced both for internal compliance and for disclosure to deployers, who need it to complete their own required impact assessments. That will take careful planning at the outset and continuous monitoring as your organization scales. For this reason, an AI governance and compliance plan will be key to organizing and documenting compliance with CAIA.

What does the law require of healthcare AI deployers?

CAIA imposes significant risk management, documentation, and disclosure requirements for most deployers of HRAI Systems. In addition, the law gives the AG the right to request proof of compliance with such requirements upon 90 days’ notice.

Risk Management Program

Deployers of HRAI Systems must implement a risk management policy and program (the “Program”) that describes how the deployer will identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.

The Program, which must be regularly reviewed and updated, must be “reasonable” given the size and complexity of the deployer, the nature and scope of the HRAI System (including intended uses), and the sensitivity and volume of data the HRAI System processes. The law cites two key frameworks that companies can use to show reasonableness, though this list is not exhaustive (see the sketch after the list for one way to put a framework to work):

  1. The NIST AI Risk Management Framework (AI RMF)

  2. ISO/IEC 42001
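To make that concrete, here is a minimal sketch assuming a deployer organizes its risk register around the NIST AI RMF’s four core functions (Govern, Map, Measure, Manage). The record structure, field names, and example entry are all illustrative assumptions, not anything the statute prescribes.

```python
from dataclasses import dataclass
from enum import Enum

class RMFFunction(Enum):
    # The four core functions of the NIST AI Risk Management Framework
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskRegisterEntry:
    """Hypothetical risk-register row for a CAIA risk management program."""
    risk_id: str
    description: str            # the identified risk, in plain language
    affected_groups: list[str]  # protected classes potentially impacted
    likelihood: str             # e.g., "low" / "medium" / "high"
    mitigation: str             # documented mitigation for the risk
    rmf_function: RMFFunction   # where the control lives in the NIST AI RMF
    owner: str                  # accountable person or team

# Illustrative entry only; the scenario and values are invented for this sketch.
register = [
    RiskRegisterEntry(
        risk_id="R-001",
        description="Symptom-triage model performs worse for patients with limited English proficiency",
        affected_groups=["limited English proficiency"],
        likelihood="medium",
        mitigation="Add multilingual test sets; monitor triage accuracy by language cohort",
        rmf_function=RMFFunction.MEASURE,
        owner="AI Governance Committee",
    ),
]
```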

Impact Assessment

Deployers must also complete an impact assessment at least annually and within 90 days of any substantive change that triggers a reasonably foreseeable risk of algorithmic discrimination (a sketch of the cadence logic follows the list). The impact assessment must include, at a minimum, and to the extent reasonably known by or available to the deployer:

  1. A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the HRAI System

  2. An analysis of whether the deployment of the HRAI System poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks

  3. A description of the categories of data the HRAI System processes as inputs and the outputs the HRAI System produces

  4. If relevant, an overview of the categories of data the deployer used to customize the HRAI System

  5. Any metrics used to evaluate the performance and known limitations of the HRAI System

  6. A description of any transparency measures taken, including any measures taken to disclose to a consumer when the HRAI System is in use

  7. A description of the post-deployment monitoring and user safeguards

  8. If the deployer modifies the system in a way that introduces a reasonably foreseeable risk of algorithmic discrimination, a statement disclosing the extent to which the HRAI System was used in a manner consistent with, or varied from, the developer’s intended uses of the system.
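Because an assessment is due at least annually and within 90 days of a qualifying modification, deployers may want to compute the next due date mechanically. A minimal sketch of that cadence logic, with the helper name and inputs as my own assumptions:

```python
from datetime import date, timedelta

ANNUAL_CADENCE = timedelta(days=365)
POST_CHANGE_WINDOW = timedelta(days=90)  # CAIA: within 90 days of a qualifying substantive change

def next_assessment_due(last_assessment: date,
                        last_qualifying_change: date | None = None) -> date:
    """Return the earlier of the annual deadline and the post-change deadline."""
    annual_deadline = last_assessment + ANNUAL_CADENCE
    if last_qualifying_change is None:
        return annual_deadline
    change_deadline = last_qualifying_change + POST_CHANGE_WINDOW
    return min(annual_deadline, change_deadline)

# Example: assessed 2026-03-01, then a substantive change on 2026-10-01
# -> due by 2026-12-30 (90 days after the change), earlier than the annual date.
print(next_assessment_due(date(2026, 3, 1), date(2026, 10, 1)))
```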

In addition, deployers must review the HRAI System for algorithmic discrimination at least annually.

Pro Tip: Deployers and developers may choose to outsource this work, and we’re seeing a growing number of companies offering support in the identification, testing, and mitigation of algorithmic bias. Nixon Gwilt Law partners with these providers to support deployers under the cover of attorney-client privilege. Reach out to learn more.

Consumer Disclosures

Deployers must notify consumers that they’re using the HRAI System and:

  1. Provide to the consumer a statement disclosing the purpose of the HRAI System and the nature of the “decisions” it makes; the contact information for the deployer; a description, in plain language, of the HRAI System; and instructions on how to access the deployer’s public web site statement (described below)

  2. Provide the consumer information regarding their right to opt out of the processing of personal data for the purpose of profiling (the use of algorithms and data analysis to create detailed profiles of individuals for targeted predictions, services, advertising, or other purposes)

  3. If any decision is “adverse” to the consumer:

    1. Provide a statement disclosing the principal reason or reasons for the decision, including (i) how the HRAI System contributed to the decision; (ii) the type of data that was processed in making the decision; and (iii) the source or sources of the data used to make the decision

    2. Provide an opportunity to correct any incorrect personal data

    3. Provide an opportunity to appeal the decision

The above disclosures must be provided to each consumer in an accessible format, using plain language and the languages the deployer uses in its ordinary course of business. In addition, the deployer must include a statement on its web site summarizing: (i) the types of HRAI Systems currently deployed by the deployer; (ii) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination; and (iii) in detail, the nature, source, and extent of the information collected and used by the deployer.

Deployers must also disclose to consumers interacting with the HRAI System that they are interacting with an AI system, unless that fact would be obvious to a reasonable person.
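For product teams wiring the adverse-decision items above into an application, here is an illustrative sketch of the content such a notice must carry. The field names and sample values are assumptions; CAIA specifies the substance, not the format.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    """Hypothetical payload for the consumer-facing adverse-decision disclosure."""
    principal_reasons: list[str]      # the principal reason(s) for the decision
    ai_contribution: str              # how the HRAI System contributed to the decision
    data_types_processed: list[str]   # types of personal data processed in making the decision
    data_sources: list[str]           # where that data came from
    correction_instructions: str      # how the consumer can correct inaccurate personal data
    appeal_instructions: str          # how the consumer can appeal the decision

# Illustrative values only; the statute does not prescribe wording or deadlines.
notice = AdverseDecisionNotice(
    principal_reasons=["Requested coverage tier not supported by submitted clinical history"],
    ai_contribution="A risk-scoring model flagged the claim; a human reviewer confirmed the denial.",
    data_types_processed=["claims history", "diagnosis codes"],
    data_sources=["payor claims database", "provider-submitted records"],
    correction_instructions="Email records@example-payor.com to dispute or correct your data.",
    appeal_instructions="Reply within 45 days to request human review of this decision.",
)
```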

Exceptions for small deployers

CAIA provides an exception for small deployers with fewer than fifty full-time equivalent employees. This exception is only available if:

  1. The deployer does not use its own data to train the HRAI System

  2. The HRAI System is used for the intended uses disclosed to the deployer by the developer in accordance with CAIA, and the system continues learning based on data derived from sources other than the deployer’s own data

  3. The deployer makes available to consumers the developer’s impact assessment

If this exception applies, the deployer does not need to build a risk management program, conduct an impact assessment, or provide the above-mentioned web site notice. This will likely apply to small deployers using third-party, off-the-shelf (“OTS”) software containing AI (a rough eligibility screen is sketched below).
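The conditions above lend themselves to a simple screen. The sketch below is a simplification (it collapses each statutory condition into a single flag) and is illustrative only, not legal advice:

```python
def qualifies_for_small_deployer_exception(
    fte_count: int,
    trains_on_own_data: bool,
    used_per_developer_intended_uses: bool,
    learns_only_from_non_deployer_data: bool,
    developer_impact_assessment_available_to_consumers: bool,
) -> bool:
    """Rough screen for CAIA's small-deployer exception (illustrative, not legal advice)."""
    return (fte_count < 50
            and not trains_on_own_data
            and used_per_developer_intended_uses
            and learns_only_from_non_deployer_data
            and developer_impact_assessment_available_to_consumers)

# Example: a 30-person clinic using unmodified OTS software as the developer intended,
# with the developer's impact assessment made available to patients.
assert qualifies_for_small_deployer_exception(30, False, True, True, True)
```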

Pro Tip: Developers of HRAI Systems should conduct an impact assessment and provide it to deployers purchasing their proprietary software—although the law does not explicitly require it, buyers of this technology may require it (1) to help them fulfill their legal obligations related to deploying the technology and (2) to show that the vendor is aligned with their own risk management strategy.

Exceptions for certain HRAI Systems

CAIA does not apply to a developer or deployer of an HRAI System that has been approved, authorized, certified, cleared, developed, or granted by a federal agency, including FDA or ONC. It also does not apply to work performed under contract with FDA, DoD, DOC, or NASA. These federal agencies have their own documentation, disclosure, non-discrimination, and testing requirements, so this exception defers to federal oversight, relieving companies of the obligation to comply with additional state-level requirements.

Join us June 20 for our webinar, “Demystifying FDA Regulation of AI-Powered Digital Health Tools: What You Need to Know Before You Launch,” where we will break down how the FDA analyzes and regulates healthcare AI!

Exceptions for HIPAA Covered Entities

CAIA does not apply to covered entities (healthcare providers, payors, and healthcare clearinghouses) who provide healthcare recommendations that:

  1. Are generated by an HRAI System

  2. Require a healthcare provider to take action to implement recommendations

  3. Are not considered to be high risk

It is unclear what this exception means, given that the law does not separately define “high risk” for this purpose. It’s possible that future regulations will provide clarity here. In the absence of specific guidance, covered entities will need to make a reasonable determination regarding the level of risk involved in an HRAI System’s healthcare recommendations. Further, it appears that most direct-to-consumer (“DTC”) applications of healthcare AI would not qualify for this exception, given the requirement for an intervening healthcare provider.

What happens if a company violates CAIA?

If a developer or deployer of high-risk AI complies with the law, they get a “rebuttable presumption” that they used reasonable care. In other words, compliance doesn’t automatically immunize a developer or deployer, but if regulators take legal action to enforce the law, they will start from the presumption that a company that has met the requirements outlined above acted with reasonable care, and the burden falls on the regulator to prove otherwise.

Colorado will consider a violation of CAIA an “unfair and deceptive trade practice” in violation of the state’s Consumer Protection Act. Violations can result in a civil penalty of $20,000 to $50,000 for each violation, civil injunction, civil damages of up to 3 times actual damages plus attorneys’ fees, and/or criminal penalties.

What should I do right now?

The Colorado AI Act represents a significant step forward in regulating the deployment and development of AI in healthcare. It sets a comprehensive standard that other states are likely to follow, establishing critical guidelines to prevent algorithmic discrimination and promote transparency. Some of these requirements are fast becoming industry standard, and in the next two years, will become legal standards as well. Healthcare technology companies, providers, and payors must start preparing now to ensure compliance. Healthcare AI vendors especially need a plan for AI governance and demonstrating adherence to established healthcare AI principles.

By understanding the requirements and proactively implementing robust AI governance frameworks, HRAI Systems developers and deployers can not only meet regulatory obligations but also foster trust and innovation in the healthcare AI landscape. Stay ahead of the curve and make AI compliance a strategic priority to navigate the evolving legal landscape effectively. Ask us how we can help!