As artificial intelligence (AI) continues to transform the way we live, work and interact with technology, Colorado has taken a significant step forward in the regulation of these systems. Signed into law by Governor Jared Polis on May 17, 2024, the Colorado AI Act [1] (also known as Senate Bill 24-205) is the first comprehensive state-level legislation in the U.S. regulating the use of AI systems. The act aims to promote transparency, accountability and fairness in the development and deployment of AI systems while protecting the rights and interests of consumers and citizens.
Key provisions of the act
- The act applies to organizations that do business in Colorado; a physical presence in the state is not required for the law to apply.
- The act focuses primarily on “high-risk” AI systems: AI-based systems that, when deployed, make or are a substantial factor in making “consequential decisions.” Consequential decisions are decisions that have a material impact on consumers’ education, employment, finance and lending, healthcare, housing, or legal or government services.
- Developers and deployers are obligated to document and disclose specific details about high-risk AI models.
- The act obligates deployers to exercise reasonable care by creating an AI risk management program and conducting recurring AI model impact assessments.
- Deployers of all consumer-facing AI models must ensure that consumers are aware they are interacting with an AI system.
- The Colorado Attorney General is exclusively responsible for enforcement of the act.
Developer obligations
Developers (those that create or substantially modify a high-risk AI system) must exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of their AI systems. Additionally, developers must make the following documentation, disclosures and information available to deployers and other developers of the AI system:
- A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system
- Documentation disclosing:
- High-level summaries of the type of data used to train the high-risk AI system
- Known or reasonably foreseeable limitations of the AI system
- The purpose of the AI system
- Its intended benefits and uses
- All other information necessary for a deployer to comply with the deployer’s obligations
- Documentation describing:
- How the AI system was evaluated for performance and mitigation of algorithmic discrimination
- The data governance measures covering the training datasets, including how the suitability of data sources was examined for possible biases and how those biases were mitigated
- The intended outputs of the AI system
- How the system should be used, not be used and be monitored
- Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the AI system
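Much of this documentation lends itself to a structured, versioned record that travels with each model release. The Python sketch below is one illustrative way to capture the required items; the class and field names are our own shorthand for the statutory elements, not terms defined by the act.

```python
from dataclasses import dataclass

@dataclass
class HighRiskModelDocumentation:
    """Illustrative record of the developer disclosures summarized above."""
    system_name: str
    purpose: str                           # the purpose of the AI system
    intended_benefits_and_uses: list[str]  # intended benefits and uses
    foreseeable_uses: list[str]            # reasonably foreseeable uses
    known_harmful_uses: list[str]          # known harmful or inappropriate uses
    training_data_summary: str             # high-level summary of training data types
    known_limitations: list[str]           # known or reasonably foreseeable limitations
    performance_evaluation: str            # how performance was evaluated
    discrimination_mitigation: str         # how algorithmic discrimination was mitigated
    data_governance_measures: list[str]    # governance over training datasets and sources
    intended_outputs: str                  # intended outputs of the system
    usage_and_monitoring_guidance: str     # how the system should (not) be used and monitored
    deployer_compliance_notes: str         # other info a deployer needs to comply

    def missing_fields(self) -> list[str]:
        """Return any empty fields, as a pre-release completeness check."""
        return [name for name, value in vars(self).items() if not value]
```

A record like this can be regenerated and re-published whenever the model is substantially modified, which also supports the update obligation described in the next section.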
Disclosures and notifications
Developers are obligated to disclose, on their website or in a public use-case inventory, a statement summarizing the types of high-risk AI that the developer has developed or modified and how the developer manages risks of algorithmic discrimination. Additionally, the developer is required to keep these disclosures updated as the AI system is modified.
Within 90 days of discovering that a deployed high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must inform the Colorado Attorney General and all known deployers and other developers of the AI system.
Deployer obligations
Deployers are entities that do business in Colorado and deploy (i.e., implement or use with consumer impact) a high-risk AI system. Like developers, deployers must exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination, and they must notify consumers when a high-risk AI system has been deployed to make, or be a substantial factor in making, a consequential decision concerning a consumer. Deployers are required to:
- Disclose a description of the AI system and its purpose
- Describe the nature of the consequential decisions being made
- Provide instructions on how to access details of the AI system on the deployer’s website
- Provide information regarding the consumer’s right to opt out of the processing of personal data concerning the consumer for profiling
- Make available, in a clear and readily accessible manner on their website, the types of high-risk AI systems deployed, how they manage known or reasonably foreseeable risks of algorithmic discrimination, and the nature, source and extent of the information collected and used by the AI systems
- Inform the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination
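Operationally, the consumer-facing items above reduce to a short, plain-language notice delivered before the system contributes to a decision. Below is a minimal sketch of assembling such a notice; the wording and parameter names are illustrative, not statutory text.

```python
def build_consumer_notice(system_description: str,
                          purpose: str,
                          decision_nature: str,
                          details_url: str) -> str:
    """Assemble an illustrative pre-decision notice for consumers."""
    return (
        f"An automated system ({system_description}) will be used for: {purpose}. "
        f"It may be a substantial factor in this decision about you: {decision_nature}. "
        f"Details about the system, including how we manage risks of algorithmic "
        f"discrimination, are available at {details_url}. You may have the right to "
        "opt out of profiling based on your personal data."
    )
```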
Adverse decisions
When a deployer’s high-risk AI system makes a consequential decision that is adverse to the consumer, the deployer must:
- Provide to the consumer a statement disclosing:
- The principal reason or reasons for the consequential decision, including the degree and manner to which the AI system contributed to the decision
- The types and sources of data that were processed by the AI system in making the decision
- Provide an opportunity for the consumer to correct any incorrect personal data and to appeal the adverse decision, with human review
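These post-decision elements can be packaged into a single statement record per adverse decision. The structure below is one illustrative way to do so; the field names paraphrase the requirements rather than quote the statute.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionStatement:
    """Illustrative record of the disclosures owed after an adverse decision."""
    principal_reasons: list[str]     # principal reason(s) for the decision
    ai_contribution: str             # degree and manner the AI system contributed
    data_types_processed: list[str]  # types of data processed in the decision
    data_sources: list[str]          # sources of that data
    correction_instructions: str     # how to correct inaccurate personal data
    appeal_instructions: str         # how to appeal and obtain human review
```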
Additional disclosures to consumers
While the sections above refer specifically to high-risk AI systems, the following disclosure applies to any AI system that consumers interact with: unless it is obvious to a reasonable person, deployers must disclose that the system the consumer is interacting with is an AI system.
Enforcement by attorney general
The Colorado Attorney General has exclusive authority to enforce the act. Developers and deployers facing an enforcement action have an affirmative defense if both of the following are true:
- They discover and cure a violation of the act through feedback, adversarial testing or “red teaming,” or internal review processes
- They comply with the latest version of the NIST AI Risk Management Framework [2] or another nationally or internationally recognized risk management framework for AI systems, or any risk management framework designated by the attorney general
Additional regulations
The attorney general may promulgate additional rules as necessary to implement and enforce the act. These rules may address documentation requirements for developers, notifications to consumers, required disclosures, and risk management and impact assessment policies and procedures.
Establishing risk management policies and program
A deployer of a high-risk AI system must implement and maintain a risk management policy and program to govern the AI system that incorporates the principles, processes and personnel that the deployer uses to identify, document and mitigate risks of algorithmic discrimination.
Acceptable risk management frameworks include the NIST AI Risk Management Framework [3], ISO/IEC 42001 [4] or other internationally recognized, substantially equivalent, risk management standards.
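Teams anchoring on the NIST AI RMF often organize their program around its four core functions (Govern, Map, Measure, Manage). Below is a simplified sketch of tracking such a program; the task lists are placeholders for a deployer's own breakdown, not framework requirements.

```python
# Simplified tracker for a risk management program organized around the
# NIST AI RMF core functions. Tasks here are illustrative placeholders.
RISK_PROGRAM: dict[str, list[str]] = {
    "GOVERN": ["Assign accountable owners for each high-risk AI system",
               "Adopt written policies on algorithmic discrimination risk"],
    "MAP": ["Inventory AI systems and the consequential decisions they affect",
            "Document context, data sources and affected consumer groups"],
    "MEASURE": ["Define and run recurring bias and performance evaluations"],
    "MANAGE": ["Track mitigations, incidents and any required AG notifications"],
}

def open_items(program: dict[str, list[str]], done: set[str]) -> list[str]:
    """List tasks not yet marked complete across all four functions."""
    return [task for tasks in program.values() for task in tasks if task not in done]
```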
Impact assessment
Within 90 days of the act taking effect, a deployer (or a third party contracted by the deployer) must complete an impact assessment, which is then repeated annually and whenever a substantial modification to the high-risk AI system occurs. The impact assessment must include, at a minimum:
- A statement disclosing the purpose, intended use cases and benefits afforded by the high-risk AI system
- An analysis of whether the deployment of the AI system poses any risks of algorithmic discrimination and the steps that have been taken to mitigate those risks
- A description of the categories of data the AI system processes as inputs and the outputs the AI system produces
- Any metrics used to evaluate the performance and limitations of the AI system
- A description of any transparency measures taken to notify a user that the AI system is in use
- A description of post-deployment monitoring and user safeguards
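Because the assessment recurs annually and after any substantial modification, it can help to treat each assessment as a dated record with a built-in recurrence check. A minimal sketch follows, with field names that mirror the bullets above.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Illustrative impact assessment record for a high-risk AI system."""
    completed_on: date
    purpose_and_benefits: str         # purpose, intended use cases and benefits
    discrimination_analysis: str      # discrimination risks and mitigation steps
    input_data_categories: list[str]  # categories of data processed as inputs
    output_description: str           # outputs the system produces
    evaluation_metrics: list[str]     # metrics for performance and limitations
    transparency_measures: str        # how users are notified the system is in use
    post_deployment_monitoring: str   # monitoring and user safeguards

    def next_due(self) -> date:
        """Assessments recur at least annually, sooner after substantial changes."""
        return self.completed_on + timedelta(days=365)
```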
Exemptions to high-risk AI policy
The act exempts a deployer from certain of the obligations above when all of the following conditions are met:
- The deployer employs fewer than 50 full-time equivalent employees
- The deployer does not use its own data to train the AI system
- The AI system is used for its intended purpose
- The deployer makes available to consumers any impact assessment that the developer of the AI system has completed
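Since the conditions must all hold, a deployer can run a quick conjunctive screen before relying on the exemption. The sketch below reflects our reading of the conditions, with parameter names of our own choosing; it is not a legal determination.

```python
def qualifies_for_exemption(employee_count: int,
                            trains_on_own_data: bool,
                            used_as_intended: bool,
                            developer_assessment_available: bool) -> bool:
    """Illustrative screen of the deployer exemption; all conditions must hold."""
    return (employee_count < 50
            and not trains_on_own_data
            and used_as_intended
            and developer_assessment_available)
```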
What should organizations do to prepare for compliance?
The act takes effect on Feb. 1, 2026, giving organizations until then to prepare for compliance. Organizations that operate in Colorado and leverage AI should consider the following steps:
- Appoint a team to lead AI compliance efforts
- Conduct an inventory and assessment of existing and planned AI use cases and determine whether they meet the standard of a high-risk AI system (a screening sketch follows this list)
- Implement an AI risk management framework, such as the NIST AI Risk Management Framework [5]
- Create and document the policies and procedures for disclosing, explaining and evaluating AI systems, and for addressing the feedback from the end user or consumer
- Set the foundation to conduct impact assessments by identifying the policies, processes and resources needed to orchestrate, conduct, document, analyze and monitor AI impact
- Implement and test the mechanisms and tools for providing the required documentation, disclosures and other required information
- Train and educate staff and stakeholders on the ethical and legal implications of AI systems, and on the best practices for designing and operating them
- Monitor and review AI systems regularly to adjust and improve as needed
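For the inventory step above, each use case can be screened against the consequential-decision domains summarized at the start of this article. The helper below is a first-pass triage sketch, not a legal determination; the domain labels are our own shorthand.

```python
# Domains in which a decision counts as "consequential" under the act,
# as summarized earlier in this article (labels are informal shorthand).
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "finance_and_lending", "healthcare",
    "housing", "legal_services", "government_services",
}

def screen_use_case(domain: str, influences_decision: bool) -> str:
    """First-pass triage of an AI use case against the high-risk standard."""
    if influences_decision and domain in CONSEQUENTIAL_DOMAINS:
        return "potentially high-risk: full assessment and legal review required"
    return "likely out of scope: document the rationale and revisit periodically"
```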
Organizations that develop or deploy AI systems for use in Colorado should consider an AI readiness assessment to identify gaps in organizational preparedness and build a road map to achieve and maintain compliance with changing regulations.
What does this mean for companies outside of Colorado?
Although the legislation directly applies to organizations that do business in Colorado, the Colorado AI Act is landmark legislation that sets a precedent for other states to follow. Utah has already enacted legislation establishing liability for the use of AI that violates consumer protection laws when that use is not properly disclosed. Additionally, four other states (California, Illinois, Massachusetts and Ohio) have active bills related to the fair and responsible use of AI.
This policy proliferation reflects the growing awareness and concern about the potential impacts and risks of AI systems on society and individuals. Organizations with operations in affected states will need to align their AI practices with the state’s regulatory standards, potentially prompting a broader adoption of these guidelines to ensure consistency across their operations.
Finally, it is important to monitor the changing AI regulatory landscape, conduct regular risk and vulnerability assessments of AI systems and ensure governance is being applied across the organization.
How we can help
Ensuring your organization is properly equipped to adhere to incoming AI regulations will save time, energy and resources by avoiding retroactive remediation. Baker Tilly’s digital team can support your organization in defining an AI strategy, conducting readiness and impact assessments, designing and implementing an AI governance and risk management framework, or, if those foundations are already in place, implementing and scaling AI systems. Contact one of our professionals today to learn more.
Sources
[1] Colorado Senate Bill 24-205 (2024a_205), colorado.gov
[2] Artificial Intelligence Risk Management Framework (AI RMF 1.0), nist.gov
[3] Artificial Intelligence Risk Management Framework (AI RMF 1.0), nist.gov
[4] ISO/IEC 42001:2023 - AI management systems, ISO.org
[5] Artificial Intelligence Risk Management Framework (AI RMF 1.0), nist.gov