Various states push momentum towards regulating insurers' use of artificial intelligence
Apr 01, 2024 · Authored by John Romano, Russell Sommers
The recent, rapid expansion of big data and new technologies has transformed the insurance industry, showing the potential to increase efficiencies and benefit insurers and consumers alike. However, the unchecked use of these advancements can unintentionally harm protected groups. To combat this, Colorado enacted Senate Bill (SB) 21-169, which aims to protect consumers from insurance practices that result in unfair discrimination on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression. Insurers in the state may soon be held accountable for testing their big data systems to ensure they are not unfairly discriminating against consumers on the basis of a protected class, and for addressing any concerns as they arise.
Following Colorado’s lead, on Jan. 17, 2024, the New York Department of Financial Services (NYDFS) issued a proposed circular letter to all New York state insurers identifying its expectations for the management and use of artificial intelligence systems (AIS) and other predictive models in underwriting and pricing. Most recently, Alaska, New Hampshire and Connecticut adopted the NAIC model bulletin on artificial intelligence (AI).
Our financial services risk advisory specialists have broken down the new regulations below, along with a table summarizing the key takeaways.
| Criteria | Colorado (SB21-169) | New York (NYDFS update) | Connecticut (Adoption of NAIC Model Bulletin) |
| --- | --- | --- | --- |
| Scope | Targeted at life insurers in Colorado for overall governance | Broad coverage including all insurers in New York for underwriting and pricing | Aligns with NAIC Model Bulletin focusing on AI systems in insurance |
| Key focus | Governance of ECDIS, algorithms, and predictive models in risk management | Management of ECDIS, AIS, and predictive models in underwriting and pricing | Adoption of national standards on AI systems in insurance |
| Governance and risk management | Mandates a risk-based governance and risk management framework, including documentation, oversight and regular reviews | Emphasis on fairness principles, board oversight, policies, procedures and internal controls | Follows NAIC guidelines, likely focusing on ethical AI use and consumer protection |
| Internal audit | Not specifically mentioned | Requires an insurer to have an internal audit function to provide general and specific audits, reviews and tests necessary to protect assets, evaluate control effectiveness and efficiency, and evaluate compliance with policies and regulations | AIS program should address internal audit functions, but specifics are not mentioned |
| Fairness and discrimination | Focus on preventing unfair discrimination, particularly regarding race, in the use of ECDIS and models | Stringent measures against unfair and unlawful discrimination in the use of ECDIS and AIS | Expected alignment with NAIC’s stance on fair AI usage without discriminatory practices |
| Documentation and reporting | In-depth documentation, testing for discrimination, narrative report submission on compliance | Comprehensive documentation and regular testing for discrimination; detailed reporting requirements | Requires an annual certification that the insurer is in compliance and will make related information available upon request; certification is due Sept. 1, 2024 |
| Consequences of noncompliance | Civil penalties, cease and desist orders, license suspensions or revocations | Not explicitly detailed, but may include examination by the Department | Not specified, but likely similar to standard regulatory noncompliance penalties |
| Current status | Adopted with recent revisions | Proposed stage, with closed comment period | Adopted, following the NAIC Model Bulletin |
| Additional insights | Colorado's regulation is specific to life insurers with a notable shift towards risk-based frameworks | NYDFS's approach is comprehensive and broad, encompassing various insurance activities | Connecticut's adoption signifies a trend towards uniformity in AI regulations in insurance across states |
Colorado’s SB21-169: new and revised algorithm and predictive model governance regulation for life insurers
On Sept. 21, 2023, the Division of Insurance at Colorado’s Department of Regulatory Agencies (Division) adopted Regulation 10-1-1, Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models. The regulation establishes requirements for life insurers that utilize External Consumer Data and Information Sources (ECDIS), as well as algorithms and predictive models that incorporate ECDIS.
Effective Nov. 14, 2023, the regulation, which applies to all life insurers authorized to operate in Colorado, mandates the establishment of a risk-based governance and risk management framework. The framework is designed to ensure that the use of ECDIS, algorithms and predictive models does not result, or potentially result, in unfair discrimination, particularly with respect to race, and to provide a method to remediate unfair discrimination. Click here for more information on SB21-169.
The governance and risk management framework outlined in the regulation mandates life insurers using ECDIS, algorithms and predictive models to:
- Formulate governing principles: Document guiding principles to prevent unfair discrimination in the use of ECDIS, algorithms and predictive models.
- Ensure oversight: The board of directors or a suitable committee should oversee the risk management framework, with senior management responsible for setting strategy and monitoring performance.
- Create a cross-functional team: Establish a team from key functional areas to oversee the use of these tools.
- Develop policies and training: Create written policies for the design, use and monitoring of these tools, including a training program for relevant personnel.
- Handle consumer complaints: Implement processes to address consumer complaints and inquiries about the use of these tools.
- Assess risks: Develop a system for assessing and prioritizing risks associated with these tools, considering consumer impacts.
- Maintain an inventory: Keep an updated inventory of all used ECDIS, algorithms and predictive models, documenting any changes (a minimal illustrative record structure is sketched after this list).
- Conduct testing and monitoring: Document testing conducted to detect unfair discrimination and monitor the performance of algorithms and predictive models.
- Manage external resources: Document the process used for selecting third-party vendors supplying ECDIS, algorithms and/or predictive models, and ensure compliance with all regulatory requirements.
- Review regularly: Conduct regular reviews of the governance structure and risk management framework, updating documentation as needed.
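Consistent with the inventory item above, the following is a minimal sketch of how a single inventory record might be structured in Python. The class name, field names and example values are illustrative assumptions only; Regulation 10-1-1 does not prescribe any particular format or tooling.

```python
# Minimal illustrative sketch of an inventory record for ECDIS, algorithms and
# predictive models. Field names and values are hypothetical assumptions; the
# regulation does not prescribe a particular format or tooling.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryRecord:
    name: str                          # algorithm or predictive model name
    purpose: str                       # e.g., underwriting, pricing, claims
    ecdis_sources: list[str]           # external consumer data sources consumed
    owner: str                         # accountable business or functional area
    third_party_vendor: Optional[str]  # vendor, if externally supplied
    last_reviewed: date                # date of most recent review
    change_log: list[str] = field(default_factory=list)  # documented changes

# Hypothetical entry showing the kind of information kept current over time.
inventory = [
    ModelInventoryRecord(
        name="mortality_risk_score_v3",
        purpose="life underwriting",
        ecdis_sources=["credit attributes", "prescription history"],
        owner="actuarial",
        third_party_vendor=None,
        last_reviewed=date(2024, 3, 15),
        change_log=["2024-03-15: retrained on updated experience data"],
    )
]
```

In practice, insurers often keep such inventories in model risk management or governance tooling; the point of the sketch is simply the categories of information the regulation expects to be documented and kept current.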
The adopted regulation for life insurers has been pared down from the initial draft released in February 2023. Most notably, the final version no longer emphasizes "disproportionately negative outcomes," which would have included results or effects that "adversely impact a group" with protected characteristics "even after considering factors that define similarly situated consumers." Instead of this term, the final version pivots to requiring "risk-based" governance and risk management frameworks. This shift is substantial: it not only brings the rule in line with conventional insurance regulation, but also signifies a pragmatic, progressive advancement for such regulation.
However, despite being less demanding than the initial draft, the adopted version still imposes significant obligations on life insurers. These include mandates for life insurers to set up risk-based frameworks for the use of ECDIS in any insurance practice, including claims, ratemaking and pricing. Furthermore, the regulation requires that these frameworks also cover any algorithms and predictive models that use or rely on ECDIS.
The regulation outlines comprehensive reporting requirements. Insurers using ECDIS, algorithms and predictive models must submit a narrative report to the Division, summarizing their progress towards compliance with the regulation's requirements. Conversely, insurers that do not use ECDIS or algorithms and predictive models are exempt from these requirements but must submit an attestation to that effect.
The regulation stipulates that sanctions may be imposed, including civil penalties, cease and desist orders, and/or suspensions or revocations of license, subject to the requirements of due process.
To prepare for compliance, life insurers should consider the following steps:
- Understand the regulation: Familiarize yourself with the definitions and requirements of the regulation. Understand what constitutes ECDIS, algorithms and predictive models, and how they are used in your organization
- Establish a governance and risk management framework: Develop a comprehensive framework that includes principles, responsibilities, a cross-functional committee, roles and responsibilities, policies and processes, training, controls, protocols for consumer complaints, a plan for unintended consequences and the use of external audits
- Maintain comprehensive documentation: Keep detailed records of all ECDIS, algorithms and predictive models in use, including those supplied by third parties. This should include an inventory, results of annual reviews, a system for tracking changes, descriptions of testing, limitations, ongoing monitoring, datasets used, how predictions are made, potential risks and impacts, the process for selecting external resources and all decisions made regarding the use of ECDIS and algorithms
- Prepare for reporting requirements: Plan for the submission of reports to the Division summarizing your progress towards compliance with the requirements specified in the regulation. These reports are due six months following the effective date of the regulation and annually thereafter
- Plan for potential noncompliance: Understand the potential consequences of noncompliance, including civil penalties, cease and desist orders, and/or suspensions or revocations of license. Ensure that your organization has a plan in place to address any potential noncompliance issues
Update from the NYDFS on the use of artificial intelligence systems and external consumer data
On Jan. 17, 2024, the New York Department of Financial Services (NYDFS) issued a proposed insurance circular letter to all insurers authorized to write insurance in New York state titled Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing. As noted by the Department, “the purpose of this circular letter (“Circular Letter”) is to identify DFS’s expectations that all insurers authorized to write insurance in New York State, licensed fraternal benefit societies, and the New York State Insurance Fund (collectively, “insurers”) develop and manage their use of ECDIS, artificial intelligence systems (AIS), and other predictive models in underwriting and pricing insurance policies and annuity contracts.”
The letter goes on to state that the Department may review the use of ECDIS and AIS as part of its examinations of insurers. Per the Department, the use of ECDIS and AIS should be governed by Fairness Principles, such that “an insurer should not use ECDIS or AIS for underwriting or pricing purposes unless the insurer can establish that the data source or model, as applicable, does not use and is not based in any way on any class protected pursuant to Insurance Law Article 26. Moreover, an insurer should not use ECDIS or AIS for underwriting or pricing purposes if such use would result in or permit any unfair discrimination or otherwise violate the Insurance Law or any regulations promulgated thereunder.”
The Fairness Principles outlined by the Department explicitly include the following:
- Data actuarial validity: Insurers should be able to demonstrate that the ECDIS are supported by generally accepted actuarial standards of practice and are based on actual or reasonably anticipated experience
- Unfair and unlawful discrimination: Specifically noting the insurer should not use ECDIS or AIS in underwriting or pricing unless the insurer has determined that the ECDIS or AIS does not collect or use criteria that would constitute unfair or unlawful discrimination or an unfair trade practice
- Analyzing for unfair or unlawful discrimination: Insurers should perform and document regular testing of ECDIS and AIS for potential discrimination using both qualitative and quantitative considerations
  - Qualitative: The ability to articulate the ECDIS used, the nature and intent of the AIS and the potential disparate impacts which could result from its use
  - Quantitative: The use of specific metrics to monitor outcomes from the use of ECDIS or AIS (a minimal computational sketch follows this list), including (but not limited to):
    - Adverse impact ratio: Analyzing the rates of favorable outcomes between protected classes and control groups to identify any potential disparities
    - Denials odds ratios: Computing the odds of adverse decisions for protected classes compared to control groups
    - Marginal effects: Assessing the effect of a marginal change in a predictive variable on the likelihood of unfavorable outcomes, particularly for members of protected classes
    - Standardized mean differences: Measuring the difference in average outcomes between protected classes and control groups
    - Z-tests and t-tests: Conducting statistical tests to ascertain whether differences in outcomes between protected classes and control groups are statistically significant
    - Drivers of disparity: Identifying variables in AIS that cause differences in outcomes for protected classes relative to control groups. These drivers can be quantitatively computed or estimated using various methods, such as sensitivity analysis, Shapley values, regression coefficients or other suitable explanatory techniques
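To make the quantitative metrics above concrete, here is a minimal, illustrative Python sketch computing several of them on hypothetical binary underwriting outcomes (1 = favorable decision). The data, group labels, function names and values are assumptions for illustration; the proposed circular letter does not prescribe any particular implementation.

```python
# Minimal illustrative sketch of the disparity metrics named above, computed on
# hypothetical binary underwriting outcomes (1 = favorable decision). Data and
# names are assumptions, not prescribed by the circular letter.
import numpy as np
from scipy import stats

def disparity_metrics(protected: np.ndarray, control: np.ndarray) -> dict:
    """Compare favorable-outcome rates between a protected class and a control group."""
    p_prot, p_ctrl = protected.mean(), control.mean()

    # Adverse impact ratio: ratio of favorable-outcome rates.
    adverse_impact_ratio = p_prot / p_ctrl

    # Denials odds ratio: odds of an adverse decision for the protected class
    # relative to the control group.
    denials_odds_ratio = ((1 - p_prot) / p_prot) / ((1 - p_ctrl) / p_ctrl)

    # Standardized mean difference: gap in average outcomes scaled by a pooled
    # standard deviation.
    pooled_sd = np.sqrt((protected.var(ddof=1) + control.var(ddof=1)) / 2)
    standardized_mean_difference = (p_ctrl - p_prot) / pooled_sd

    # Two-sample t-test on outcomes (a z-test is analogous for large samples).
    t_stat, p_value = stats.ttest_ind(control, protected, equal_var=False)

    return {
        "adverse_impact_ratio": adverse_impact_ratio,
        "denials_odds_ratio": denials_odds_ratio,
        "standardized_mean_difference": standardized_mean_difference,
        "t_statistic": t_stat,
        "p_value": p_value,
    }

# Hypothetical example: 1,000 applicants per group with different approval rates.
rng = np.random.default_rng(seed=0)
protected_outcomes = rng.binomial(1, 0.78, size=1_000)
control_outcomes = rng.binomial(1, 0.85, size=1_000)
print(disparity_metrics(protected_outcomes, control_outcomes))
```

For context, a widely cited rule of thumb drawn from employment-testing practice flags an adverse impact ratio below 0.8 for closer review, but the proposed circular letter does not set a specific threshold; the choice and interpretation of metrics is left to the insurer's documented analysis. Drivers of disparity would typically be examined with additional techniques such as the Shapley values or regression coefficients the letter mentions.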
In addition to the principles outlined above, the proposed circular letter also sets out expectations for governance and risk management, including:
- Board and senior management oversight
- Policies and procedures
- Risk management and internal controls
- Linkage to other risk management activities (e.g., ERM)
- Standards for model development, implementation, use, validation, review and testing
- Roles, responsibilities and competencies
- Engagement of internal audit, which may include:
  - "verifying that acceptable policies and procedures are in place and are appropriately adhered to;
  - verifying records of AIS use and validation to test whether validations are performed in a timely manner and AIS models are subject to controls that appropriately account for any weaknesses in validation activities;
  - assessing the accuracy and completeness of AIS documentation and adherence to documentation standards, including risk reporting;
  - evaluating the processes for establishing and monitoring internal controls, such as limits on AIS usage;
  - assessing supporting operational systems and evaluating the accuracy, reliability, and integrity of ECDIS and other data used by AIS;
  - assessing potential biases in the ECDIS or other data that may result in unfair or unlawful discrimination against insureds or potential insureds; and
  - assessing whether there is sufficient reporting to the board or other governing body and senior management to evaluate whether management is operating within the insurer’s risk appetite and limits for model risk."
- Adherence to all applicable laws, rules and regulations
- Establishment of written standards, policies and procedures for the planning, vetting, acquisition, use and monitoring of ECDIS and/or AIS supplied by third parties
Lastly, the Department points to the transparency requirements of existing insurance laws and circular letters (Insurance Circular Letter No. 1 [2019]), which outline, among other things, the specific requirements pertaining to refusal, limitation or differing rates stemming from certain conditions. The letter also outlines the requirements around notifying consumers of the reason behind a declination, limitation, rate differential or other “adverse” underwriting decision, as well as the potential insured’s right to request the information used by AIS in the underwriting process.
The comment period for this letter closed on March 17, 2024.
Alaska, New Hampshire and Connecticut adopt the NAIC Model Bulletin on AI, others expected to follow
On Feb. 26, 2024, the Connecticut Insurance Department (CID) adopted Bulletin No. MC-25, titled Use of Artificial Intelligence Systems in Insurance (otherwise known as the Connecticut Bulletin). The Connecticut Bulletin aligns with the Model Bulletin titled Use of Artificial Intelligence Systems by Insurers, which was initially adopted by the National Association of Insurance Commissioners (NAIC) on Dec. 4, 2023.
Connecticut is the third state to adopt this Model Bulletin, following Alaska and New Hampshire, and it is anticipated that more states will follow suit in the near future. The Connecticut Bulletin mostly follows the NAIC Model Bulletin, although, unlike the Model Bulletin, it requires Connecticut domestic insurers to complete an annual Artificial Intelligence Certification, certifying that the insurer is in compliance with the Connecticut Bulletin and that it will make available all information and documentation related to its AI Systems upon request. The Artificial Intelligence Certification is due on Sept. 1, 2024, and annually thereafter.
Baker Tilly’s thought leadership on the NAIC Bulletin is located here.
As insurers embrace the latest technologies and innovations and change the way they store and analyze their data, more states are expected to release new, more stringent regulations. If you would like to discuss this topic with one of our insurance industry and risk advisory specialists, reach out and schedule a 30-minute introductory meeting.
For the latest information and insurance industry updates, check out our webpage and subscribe to our monthly newsletter.