Generative artificial intelligence (AI) is gaining traction in the insurance industry for the capabilities it offers beyond traditional forms of AI. While traditional AI has helped property and casualty insurers improve their data analytics, automate parts of their operations, and make better decisions, generative AI and its ability to produce new outputs hold untapped potential for carriers. But this potential cannot be fully unlocked without addressing the risks of generative AI.
Insurers have started exploring a variety of use cases for generative AI across the value chain. It can elevate the way they design products and assess emerging risks and market needs, help take customization of the customer experience to the next level, free up underwriters’ time for more complex cases, and bolster claims fraud detection capabilities, to name a few.
However, as with any emerging technology, incorporating generative AI into these workflows should only occur after insurers have carefully considered the risks and taken steps to mitigate or accept each one. A common refrain when reviewing the success of innovation programs is, “Why can’t my projects get approved and moved into production?” Without proper risk mitigation, insurers may introduce new points of failure, waste time on ineffective applications of the technology, suffer data breaches, and lose the trust of customers and partners across the industry, all of which can damage the long-term success of an insurance enterprise.
Uncovering the Risks of Generative AI
Because generative AI has broad applicability across the insurance value chain and represents a novel approach to solving problems, it comes with an equally wide range of risks. The risks below are grouped into three general categories that insurers can use to identify areas of concern when implementing generative AI.
While this isn’t a comprehensive list of every risk that an enterprise leveraging generative AI may come up against, it should serve as a solid foundation for thinking about the risks this emerging technology comes with.
- Operational Risks
- Reliability: Adding generative AI into the core operations of a business comes with inherent risks. If done haphazardly, integration could introduce system errors or even outright failures, leading to service disruptions, customer dissatisfaction, and ultimately financial repercussions.
- Skill Gaps: It will take time for employees to get up to speed on the skills needed to work with generative AI. Without proper investment in training workers and developing the requisite skills to leverage this technology properly, insurers may lag behind other organizations’ capabilities and results.
- Market Competition: Carriers that don’t plan how and when they will incorporate generative AI into their workflows risk letting competitors leap ahead on innovation. But rushing headfirst into adoption without adequate preparation risks wasting time and resources on fruitless efforts and failed implementations.
- Security and Compliance Risks
- Data Privacy: Insurers are well aware of the importance of data privacy and security measures. Since generative AI requires access to private stores of data to be trained effectively for carriers’ use cases, there is obvious risk around customer data privacy and misuse of sensitive information, and any data misuse or potential breach carries legal ramifications. It is not just customers’ data privacy in question, but also an insurer’s private company data. Recently, for example, Samsung banned employees from using generative AI services after proprietary source code was uploaded to an external chatbot.
- Regulatory Compliance: As AI’s capabilities evolve, so, too, do the regulations governing its uses. Organizations applying generative AI must stay current with emerging legislation to maintain compliance with all relevant laws, especially as they pertain to data usage.
- Dependence on External Data: Generative AI models are often trained on external data sources, but this comes with risks tied to the quality and availability of third-party data. Ensuring the data a generative AI model uses is accurate and consistently available is of key concern.
- Brand Risks
- Bias: Algorithms of any kind come with inherent bias risks, and generative AI, with its ability to produce content, may amplify that risk. Perpetuating biases through a generative AI model can lead not only to unfair treatment of certain demographics, but also reputational damage, legal challenges, and financial loss.
- Model Transparency: It can be difficult to understand how generative AI models function, creating challenges in ensuring model transparency and accountability for vital processes like policy pricing, claims adjudication, and underwriting decisioning.
- Intellectual Property: Outside the insurance industry, there have already been concerns raised about the intellectual property of content created by generative AI. Those same concerns could apply to products developed with tools powered by generative AI. In addition, copyright and patent laws may change as the technology advances.
- Customer Trust: In sensitive portions of the insurance life cycle, such as the underwriting phase or claims processing, customers want to be certain their insurer isn’t letting a faceless computer make final decisions. Carriers must be careful to apply generative AI in ways that retain customers’ trust.
Steps Toward Risk Mitigation
While not every organization will run up against each of these risks, insurers should have a strategy in place for dealing with all of them before deploying generative AI within their technology stack.
A strategic approach means addressing these challenges proactively rather than after the fact. When mitigation is done effectively, carriers can use generative AI to drive innovation and build a competitive edge without worrying that they are introducing a major legal issue or ethical quandary into their workflows.
Here are some mitigation steps tailored toward the three categories of risk outlined above.
- Operations
- Enhance Operational Reliability: Any application leveraging generative AI should have robust testing and deployment protocols, including fail-safes as well as manual oversight where it is most beneficial. AI systems should be regularly reviewed and updated to ensure they are operating effectively. A minimal sketch of one such fail-safe pattern follows this group of steps.
- Invest in Employee Development: Training programs for the necessary AI skills not only equip in-house talent to manage current AI practices, but also build a culture of continuous learning and keep teams adaptable for future upskilling needs. A common approach covers generative AI security, a general understanding of the technology, and prompt engineering courses that teach employees how to use it more effectively. These courses should not be limited to the technology group.
- Take a Phased Implementation Approach: To avoid operational disruption, organizations should begin with pilot projects, then scale up integration efforts based on lessons learned from those pilots. A phased approach lets the enterprise adjust strategies and build in additional mitigation measures as needed. Look first to internal projects that reduce the time teams spend on low-value tasks; these can range from code reviews to tools that help underwriters better research the risks they are analyzing. A common misconception is that customer-facing deployment is the best starting point for generative AI; in practice, it is wise to steer away from direct customer interaction until a firm practice around generative AI is in place and thoroughly tested.
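To make the reliability point concrete, here is a minimal sketch of a fail-safe wrapper around a generative AI call: retry with backoff, then escalate to a manual review queue rather than fail silently. The call_model function and the retry settings are illustrative assumptions standing in for whatever model client and policies a carrier actually uses.

```python
# Minimal sketch of a fail-safe wrapper around a generative AI call.
import logging
import time

logger = logging.getLogger("genai.reliability")

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call.
    raise NotImplementedError("wire up your model client here")

def generate_with_failsafe(prompt: str, retries: int = 2) -> dict:
    """Try the model a few times; on repeated failure, route to a human."""
    for attempt in range(1, retries + 1):
        try:
            answer = call_model(prompt)
            if answer and answer.strip():
                return {"source": "model", "answer": answer}
            logger.warning("empty model response (attempt %d)", attempt)
        except Exception:
            logger.exception("model call failed (attempt %d)", attempt)
        time.sleep(2 ** attempt)  # simple backoff before the next attempt
    # Fail-safe: never block the workflow on a flaky model.
    return {"source": "human_queue", "answer": None,
            "note": "escalated to manual review"}
```

The design choice worth noting is that the workflow degrades to a human queue instead of blocking, so service continues even when the model misbehaves.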
- Security and Compliance
- Establish Robust Data Governance: Strict data governance policies are critical for an insurance organization, especially when it comes to AI of any kind. These policies should focus on data quality, privacy, and security, ranging from encryption to access controls to regular audits. Governance policies must comply with all relevant data legislation and secure sensitive customer information. A common phrase heard when working in this space is “do no harm,” meaning that regulations and best practices sometimes aren’t enough. Ask whether you, as a consumer, would care about how that data is handled, and whether your children or parents would. One simple safeguard, redacting sensitive fields before a prompt leaves the enterprise, is sketched after this list.
- Ensure Regulatory Compliance: While governance policies should factor in regulatory compliance, carriers should take additional steps to meet legislative requirements. Staying current with evolving AI and data regulations, such as by engaging with regulators and participating in industry forums, is imperative as the technology continues to evolve rapidly. All generative AI applications need to be designed and operated to meet compliance, and the architecture should be designed to enable quick modifications when regulations do change.
- Validate and Monitor External Data Sources: Third-party data is used across many insurance organizations, but these sources should be carefully selected and continuously monitored to ensure that data coming into the enterprise is accurate and reliable. Establishing processes to regularly validate all data sources is key to maintaining the integrity of AI models; a basic validation sketch also follows this list.
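As a concrete companion to the data governance step above, here is a minimal sketch of one pre-submission guardrail: redacting obvious personally identifiable information before a prompt is sent to an externally hosted model. The regular expressions are illustrative assumptions, not a complete PII inventory; a production deployment would use a vetted detection service.

```python
# Minimal sketch: redact obvious PII before a prompt leaves the enterprise.
import re

PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

print(redact("Claimant John, SSN 123-45-6789, reachable at j.doe@example.com"))
# -> "Claimant John, SSN [REDACTED-SSN], reachable at [REDACTED-EMAIL]"
```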
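And for the external data point, the following minimal sketch validates a third-party record before it feeds a model. The field names (policy_id, loss_amount, reported_at) and the 24-hour freshness threshold are hypothetical, chosen purely for illustration; real feeds would have their own contracts.

```python
# Minimal sketch: schema and freshness checks on a third-party record.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"policy_id", "loss_amount", "reported_at"}
MAX_STALENESS = timedelta(hours=24)  # illustrative freshness threshold

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if "loss_amount" in record and not isinstance(record["loss_amount"], (int, float)):
        problems.append("loss_amount is not numeric")
    if "reported_at" in record:  # expects a timezone-aware datetime
        age = datetime.now(timezone.utc) - record["reported_at"]
        if age > MAX_STALENESS:
            problems.append(f"stale record: {age} old")
    return problems

record = {"policy_id": "P-100", "loss_amount": 2500.0,
          "reported_at": datetime.now(timezone.utc) - timedelta(hours=30)}
print(validate_record(record))  # flags the 30-hour-old record as stale
```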
- Brand
- Implement Bias Detection and Mitigation: Using diverse data sets to train generative AI models, along with fairness-aware algorithms, can help insurers detect bias within their models. These models should be tested and updated regularly to ensure that different groups of applicants and policyholders receive equitable outcomes; a simple parity check is sketched after this list.
- Promote Transparency: Generative AI models should be developed and deployed to be as transparent and explainable as possible. The tools and techniques insurers use when developing their models should help carriers elucidate how AI decisions are made, leaving a clear record of accountability and demonstrating regulatory compliance. Sometimes even simple prompt engineering techniques, such as telling the system, “I want to understand every step of the process, so please provide detailed explanations for why you are presenting the information to me this way,” can help alleviate model “hallucinations.”
- Address Intellectual Property Concerns: Intellectual property (IP) ownership and use rights should be clearly defined both when developing generative AI technologies and when licensing them. Insurers should consult legal experts, as the landscape of IP law is complex, especially when it comes to AI-generated content and innovation. Make sure you understand who owns the data, and brush up on open-source licenses such as MIT or Apache to ensure compliant usage.
- Proactively Engage to Build Trust: AI is everywhere these days; customers and stakeholders know that the companies they work with and patronize are leveraging it. Communicate openly about AI usage, its benefits, and the measures in place to protect data privacy and ensure fair outcomes. Gathering feedback from customers can also help insurers refine their AI applications and models. And if AI can’t give customers, employees, brokers, or agents the answers they are looking for in a predictable way, make it very easy to hand them off to a human who can; a simple handoff rule is sketched below.
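To illustrate the bias detection step above, here is a minimal sketch of a demographic-parity check that compares approval rates across groups and flags large gaps. The group labels, sample data, and tolerance are illustrative assumptions; a real fairness program would use richer metrics and statistical tests.

```python
# Minimal sketch: flag large gaps in approval rates across groups.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}

def parity_gap(decisions, tolerance=0.05):
    """Compare best and worst group rates; flag when the gap exceeds tolerance."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flag": gap > tolerance}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(parity_gap(sample))  # gap is roughly 0.33 here, so the check flags it
```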
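Finally, for the handoff point just above, here is a minimal sketch of a confidence-threshold routing rule. It assumes the serving layer supplies a confidence score (for example, a retrieval-match score); the threshold is an assumption to tune per use case.

```python
# Minimal sketch: route to a human when the model cannot answer confidently.
from typing import Optional

HANDOFF_THRESHOLD = 0.7  # illustrative; tune per use case

def route_response(answer: Optional[str], confidence: float) -> dict:
    """Serve the model answer only when it is confident; otherwise hand off."""
    if answer and confidence >= HANDOFF_THRESHOLD:
        return {"channel": "ai", "message": answer}
    return {"channel": "human",
            "message": "Connecting you with a licensed agent who can help."}

print(route_response("Your policy covers hail damage.", 0.92))
print(route_response(None, 0.20))  # low confidence falls through to a person
```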
Leveraging generative AI holds promise across the value chain for insurers, but as with any emerging technology, carriers must carefully consider every risk that comes with it and take steps to mitigate these concerns. As the technology itself evolves and the legislation that regulates its use evolves alongside, insurers must have strategies in place to maintain compliance while keeping up with industry innovation.
Interested in learning more about how insurers are modernizing their technology infrastructures? Find out in our whitepaper Scaling With Application Engineering.