Navigating Regulatory Changes in the Insurance Industry: Insights for Established Leaders and Insurtechs to Thrive 



As we progress through the first quarter of the year, the insurtech revolution is transforming the insurance industry. We explore the impact of regulatory changes on organizations and provide insights on how both established industry leaders and insurtechs can adapt to the shifting landscape.


While the use of AI in health insurance is increasing, so is regulatory scrutiny around AI’s potential negative impacts. In 2022, the Biden White House released its “Blueprint for an AI Bill of Rights,” and other governing bodies, including the Federal Trade Commission and multiple state insurance regulators, have published high-level guidelines and opinions to inform the use of AI. Because these guidelines offer insight into how future legislation may look, organizations investing in AI today would be wise to adopt a data and model risk management framework that aligns as closely as possible with them.

As with all new technologies, AI should not be treated as the single solution to organizations’ business problems. Implementing any AI solution requires technologists to establish and maintain a sound governance structure that marries cutting-edge data science techniques with human oversight, consistent policies, and solid procedures. At Verikai, we ensure that our data, models, and methods are robust, empirically sound, and compliant with the appropriate regulations and best practices (e.g., HIPAA and SOC 2). Our algorithms are tested to ensure that outcomes do not unfairly discriminate against any protected class. Moreover, our solution is never deployed without appropriate safeguards that incorporate customer feedback.
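Testing model outcomes for disparate impact can be as simple as comparing favorable-outcome rates across groups. The sketch below illustrates one common heuristic, the “four-fifths rule”; the function names and synthetic decision data are hypothetical, and this is not a description of Verikai’s actual test suite.

```python
# Illustrative fairness check using the "four-fifths rule": every group's
# favorable-outcome rate should be at least 80% of the highest group's rate.
# All group names and decision data below are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ok(outcomes, threshold=0.8):
    """True if each group's rate is >= threshold * the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Synthetic decisions (1 = favorable underwriting outcome):
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 80% favorable
    "group_b": [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],  # 70% favorable
}
print(disparate_impact_ok(decisions))  # 0.7 >= 0.8 * 0.8, so this prints True
```

A check like this would run whenever a model is retrained, so a shift in the underlying data that introduces disparate outcomes fails loudly instead of silently reaching production.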


A common thread among all published guidelines and opinions concerns AI ethics and potential bias within AI models or in the underlying data used to train them. For insurers eager to reduce expense loads and looking to AI to streamline underwriting and claims-handling processes, it is essential to understand any implicit bias in the data used to train AI models, as well as any bias in the results the models output. For example, any organization using AI in healthcare must consider the bias within the health system as it exists today before building and deploying any models. At worst, they may blindly build and deploy a model that propagates and reinforces existing bias. At best, when used intelligently and appropriately, AI models can reduce and even remove disparities within the health system, which is why AI shouldn’t be banned outright in healthcare or insurance.

As an example, consider an organization building an AI model to predict which individuals are at the highest risk for pregnancy complications and child and maternal mortality. Any model that is even moderately accurate would reflect the current reality that these conditions have a strong racial and socioeconomic component. Careful consideration of the underlying data, however, leads to the conclusion that the individuals the model labels “high-risk” are really those most likely to lack access to care, lack affordable and nutritious food, have certain pre-existing conditions, and so on. A forward-looking healthcare organization could design thoughtful interventions for these high-risk individuals that address the underlying issues, simultaneously reducing racial and socioeconomic disparities while saving money in the long run. None of that is possible if an organization blindly builds a “garbage in, garbage out” model on a biased dataset without considering the bigger context.
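One way to surface those underlying drivers is to compare how common each factor is among the model’s high-risk individuals versus the population at large. The sketch below is a hypothetical illustration; all field names, scores, and the 0.7 cutoff are invented for the example.

```python
# Hypothetical illustration: after scoring, compare the prevalence of
# candidate risk drivers in the model's "high-risk" segment against the
# overall population to see which factors the model is really picking up.

def high_risk_segment(people, score_key="risk_score", cutoff=0.7):
    """Individuals whose model score meets or exceeds the cutoff."""
    return [p for p in people if p[score_key] >= cutoff]

def prevalence(people, factor):
    """Fraction of people for whom the given boolean factor is true."""
    return sum(1 for p in people if p[factor]) / len(people)

# Invented records; real data would come from the scored population.
population = [
    {"risk_score": 0.9, "lacks_access_to_care": True,  "food_insecure": True},
    {"risk_score": 0.8, "lacks_access_to_care": True,  "food_insecure": False},
    {"risk_score": 0.3, "lacks_access_to_care": False, "food_insecure": False},
    {"risk_score": 0.2, "lacks_access_to_care": False, "food_insecure": True},
]

high_risk = high_risk_segment(population)
for factor in ("lacks_access_to_care", "food_insecure"):
    print(f"{factor}: {prevalence(high_risk, factor):.0%} of high-risk "
          f"vs {prevalence(population, factor):.0%} overall")
```

A factor that is sharply over-represented in the high-risk segment (here, lack of access to care) points to a concrete intervention target rather than to an immutable trait.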


With the 2024 U.S. election cycle now beginning in earnest, political debates around the collection, use, and sharing of personal data are sure to heat up. In the short term, legislative and regulatory bodies at both the state and national levels are already becoming more aggressive in their oversight of data sharing and the use of AI in healthcare. Large insurers and healthcare providers are inherently conservative, so absent clearer guidance on what constitutes acceptable use of data and AI, adoption will likely be slow and innovation somewhat stifled. Over the long term, it is safe to say that the amount of data produced will only keep growing. There is too much value in that data for AI to be banned completely, so government, industry, and advocacy groups will need to coalesce around a common framework and set of practices that balance the need to protect individuals’ data with the real medical benefits of using that data within AI models.

As the insurance industry increasingly incorporates artificial intelligence into underwriting and data analysis, regulatory scrutiny will continue to grow. Ensuring fairness and accountability in AI-powered insurance underwriting is essential to building trust and avoiding regulatory penalties. This includes regularly reviewing and testing AI algorithms, being transparent with customers about how AI is used, and providing an avenue for appeals. Regulators are also pushing for more legislation around data collection and AI to protect consumer rights and prevent discrimination. To succeed in this changing landscape, insurers must stay up to date on regulations and best practices and prioritize responsible AI use. Companies that adapt will be well-positioned to succeed in today’s competitive market. By responsibly incorporating AI technologies into their businesses, insurers can build customer trust, reduce regulatory risk, and drive long-term success.


Verikai is a leading predictive risk underwriting tool for the insurance industry that leverages cutting-edge AI and machine learning models, combined with behavioral data, to deliver individual and group risk assessments. Verikai’s platform empowers insurers to optimize their underwriting processes and improve profitability.
