The NAIC's draft model bulletin is similar to Colorado's new regulation, but it is more flexible.
Some players want states' new insurance guidelines on artificial intelligence to fit like a billowing wool kaftan, and others want them to fit like a tight steel belt.
The battle between wool and steel is playing out in comments on a new draft model bulletin prepared by the National Association of Insurance Commissioners' Innovation, Cybersecurity and Technology Committee.
The committee posted the second draft of the model bulletin last week on its section of the NAIC website. Comments on the new draft are due November 6.
Scott Kosnoff, an insurance law expert at Faegre Drinker, said in an email interview that the NAIC's effort to regulate AI has much the same basis as Colorado's new regulation, which bars life insurers from using external consumer data and information sources in ways that lead to discrimination based on race.
But Colorado's measure takes the form of a binding regulation, Kosnoff said, while the NAIC bulletin sets out regulatory expectations rather than requirements.
What it means: Many of the coming debates over how life and annuity issuers use AI may look more like battles over how loose or strict the rules should be than like fights over the rules' goals.
All participants seemed to agree that, in principle, life insurance and annuity issuers should not use AI or other new technologies to discriminate unfairly.
Nuts and bolts: Federal law leaves regulation of the insurance business to the states. The NAIC, a group of state insurance regulators, can issue voluntary guidelines but typically cannot impose regulations on its own.
The new draft model bulletin is a revision of an earlier version that the committee posted in July and included in a meeting packet circulated in August.
This bulletin is part of a long-standing conversation among regulators, insurers, insurance groups and consumer groups about insurers’ efforts to use different types of data and new data analytics in the marketing, underwriting, pricing and management of life and annuity products.
For example, in 2019, New York regulators sent insurers a letter warning them to be prepared to show that any data analysis strategies they use in accelerated life insurance underwriting programs are reasonable, fair and transparent.
Colorado regulators approved the life anti-discrimination rule in September.
Birny Birnbaum, a consumer advocate, has spoken about the need for AI anti-discrimination rules at NAIC events for years.
The new NAIC draft bulletin reflects the AI principles that the NAIC adopted in 2020.
Arguments: The Innovation Committee published a series of comment letters on the first draft of the bulletin that reflected many of the questions that shaped the drafting process.
Sarah Wood of the Insured Retirement Institute was among the commenters who noted that insurers may have to contend with what technology vendors are willing and able to offer. She urged the committee to continue to approach the issue thoughtfully, so as not to create an environment in which only one or two vendors dominate while other providers capable of complying are effectively barred from use by the industry.
Scott Harrison, co-founder of the American InsurTech Council, welcomed the flexible, principles-based approach evident in the first draft of the bulletin, but he recommended that the committee find ways to encourage states to interpret and apply the standards uniformly. "Specifically, we are concerned that a particular AI process or business use case could be deemed appropriate in one state but an unfair trade practice in another," Harrison said.
Michael Conway, Colorado's insurance commissioner, suggested that the committee might find that life insurers themselves would support relatively strong, specific rules. "Overall, we believe we have reached a broad consensus with the life insurance industry on our governance rule," he said. "In particular, an increased emphasis on insurer transparency around decisions made by AI systems that affect consumers could be an area of focus."
Birnbaum's Center for Economic Justice argued that the first draft of the bulletin was too loose. "We believe that the process-oriented guidance presented in the bulletin will do nothing to enhance regulatory oversight of insurers' use of AI systems, or regulators' ability to identify and prevent unfair discrimination caused by those systems," the center said.
John Finston and Kaitlin Asrow, executive deputy superintendents at the New York State Department of Financial Services, backed the idea of adding rigorous, specific, data-driven fairness testing strategies, such as calculating adverse impact ratios, or comparing rates of favorable outcomes between protected classes of consumers and members of a control group, to identify any disparities.
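For readers unfamiliar with these metrics, here is a minimal illustrative sketch in Python of the kind of comparison the regulators describe. The column names and sample data are hypothetical, and the 0.8 screening threshold mentioned in the comments is borrowed from the widely cited "four-fifths" rule in employment law, not from anything the NAIC or NYDFS has specified.

```python
import pandas as pd

def adverse_impact_ratio(df, group_col, outcome_col, protected, control):
    """Compare favorable-outcome rates between a protected group and a control group.

    Returns the adverse impact ratio (AIR): the protected group's
    favorable-outcome rate divided by the control group's rate.
    """
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_control = df.loc[df[group_col] == control, outcome_col].mean()
    return rate_protected / rate_control

# Hypothetical underwriting decisions: 1 = offered standard rates, 0 = declined or rated up.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 1, 0, 0, 1],
})

air = adverse_impact_ratio(decisions, "group", "approved", protected="B", control="A")
print(f"Adverse impact ratio: {air:.2f}")
# Under the illustrative four-fifths convention, a ratio below 0.8 would
# flag the outcome gap between the two groups for closer review.
```

An actual regulatory testing program would of course involve much larger samples, statistical significance testing and controls for legitimate underwriting factors; the sketch only shows the basic rate comparison the comment letters refer to.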