Published on: August 13, 2024

6 Common Myths About AI, Insurance, and Compliance

5 min read

These days, my professional life is dedicated to one focused part of the global business landscape: the untamed frontier where cutting-edge AI meets insurance. 

I have conversations with insurers around the world about where it’s all going and how AI will work under new global regulations. And one thing never ceases to amaze me: how often I end up addressing the same misconceptions. 

Some confusion is understandable (if not inevitable) considering the speed with which these technologies are evolving, the hype from those suddenly wanting a piece of the action, and some fear-mongering from an old guard seeking to maintain the status quo. So, I thought I’d take a moment to clear the air and address six all-too-common myths about AI in insurance.

Myth 1: You’re not allowed to use AI in insurance

Yes, there’s a patchwork of emerging AI regulations—and, yes, in many cases they do zero in specifically on insurance—but they do not ban its use. From my perspective, they do just the opposite: they set ground rules, which frees carriers to invest in innovation without fear that they’re developing in the wrong direction and will be forced into a hard pivot down the line.

Sixfold has actually grown its customer base (by a lot) since the major AI regulations in Europe and elsewhere were announced. So, let’s put this all-too-prevalent misconception to bed once and for all: there are no rules prohibiting you from incorporating AI into your insurance processes.

Myth 2: AI solutions can’t secure customer data

As stated above, there are no blanket prohibitions on using customer data in AI systems. There are, however, strict rules dictating how data—particularly PII and PHI—must be managed and secured. These guidelines aren’t anything radically new to developers with experience in highly regulated industries.

Security-first data processes have been the norm since long before LLMs went mainstream. These protocols protect crucial personal data in applications that individuals and businesses use every day without issue (digital patient portals, browser-based personal banking, and market trading apps, just to name a few). These same measures can be seamlessly extended into AI-based solutions.

Myth 3: “My proprietary data will train other companies’ models”

No carrier would ever allow its proprietary data to train models used by competitors. Fortunately, implementing an LLM-powered solution does not mean giving up control of your data—at least with the right approach. 

A responsible AI vendor helps its clients build AI solutions trained on each client’s unique data for that client’s exclusive use, as opposed to a generic insurance-focused LLM served to all comers. That also means allowing companies to maintain full control over their submissions within their own environment, so that when, for example, a case is deleted, all associated artifacts and data are removed across all databases.

At Sixfold, we train our base models on public and synthetic (AKA, “not customer”) data. We then copy these base models into dedicated environments for our customers and all subsequent training and tuning happens in the dedicated environments. Customer guidelines and data never leave the dedicated environment and never make it back to the base models.
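
For the more technically minded, here is a minimal sketch of that isolation pattern. Every name in it (BaseModel, TenantEnvironment, delete_case, the toy “weights”) is an assumption made for illustration, not Sixfold’s actual architecture or API; it just shows the shape of the idea: a shared base model that never sees customer data, copied into per-customer environments where all tuning and case data live and die.

```python
# Hypothetical sketch of per-customer isolation; names are illustrative only.
import copy

class BaseModel:
    """Trained only on public and synthetic data; it never sees customer data."""
    def __init__(self) -> None:
        self.weights = {"layer": 0.0}   # stand-in for real model parameters

class TenantEnvironment:
    """Dedicated per-customer environment; nothing here flows back to the base model."""
    def __init__(self, tenant_id: str, base: BaseModel) -> None:
        self.tenant_id = tenant_id
        self.model = copy.deepcopy(base)     # the customer tunes its own copy
        self.cases = {}                      # submissions and derived artifacts

    def fine_tune(self, case_id: str, data: dict) -> None:
        # Customer guidelines and data stay inside this environment.
        self.cases[case_id] = data
        self.model.weights["layer"] += 0.1   # stand-in for tenant-local tuning

    def delete_case(self, case_id: str) -> None:
        # Deleting a case removes the submission and every associated artifact held here.
        self.cases.pop(case_id, None)

base = BaseModel()                            # shared, customer-data-free
carrier_a = TenantEnvironment("carrier-a", base)
carrier_a.fine_tune("case-42", {"industry": "marine cargo"})
carrier_a.delete_case("case-42")              # gone from the tenant; base model untouched
```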

Let’s kill this one: Yes, you can use AI and still maintain control of your data.

Myth 4: There’s no way to prevent LLM hallucinations

We’ve all seen the surreal AI-generated images lurching up from the depths of the uncanny valley—hands with too many fingers, physiology-defying facial expressions, body parts & objects melded together seemingly at random. Surely, we can’t use that technology for consequential areas like insurance. But I’m here to tell you that with the proper precautions and infrastructure, the impact of hallucinations can be greatly minimized, if not eliminated.

Mitigation draws on a range of tactics: using models to auto-review generated content, incorporating user feedback to identify and correct hallucinations, and conducting manual reviews that compare sample outputs against ground-truth sets.
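
To make one of those tactics concrete, here is a minimal sketch of the last one: sampling model outputs and comparing them against a hand-labeled ground-truth set to estimate a hallucination rate. It is illustrative only; the Sample structure and field names are assumptions for the example, not Sixfold’s actual pipeline.

```python
# Hypothetical quality-review sketch: estimate a hallucination rate by comparing
# sampled LLM extractions against a hand-labeled ground-truth set.
from dataclasses import dataclass

@dataclass
class Sample:
    submission_id: str
    field: str         # e.g., "annual_revenue"
    model_value: str   # what the model extracted
    truth_value: str   # what a human reviewer recorded

def hallucination_rate(samples: list) -> float:
    """Share of sampled fields where the model disagrees with ground truth."""
    if not samples:
        return 0.0
    mismatches = sum(
        1 for s in samples
        if s.model_value.strip().lower() != s.truth_value.strip().lower()
    )
    return mismatches / len(samples)

batch = [
    Sample("sub-001", "annual_revenue", "$4.2M", "$4.2M"),
    Sample("sub-001", "employee_count", "180", "150"),  # mismatch: flag for correction
]
print(f"Estimated hallucination rate: {hallucination_rate(batch):.0%}")  # -> 50%
```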

Myth 5: AIs run autonomously without human oversight

Even if you never watched The Terminator, The Matrix, 2001: A Space Odyssey, or any other movie about human-usurping tech, it’d be reasonable to have some reservations about scaled automation. There’s a lot of fearful talk out there about humans ceding control in important areas to un-feeling machines. However, that’s not where we’re at, nor is it how I see these technologies developing. 

Let’s break this one down.

AI is a fantastic and transformative technology, but even I—the number one cheerleader for AI-powered insurance—agree we shouldn’t leave technology alone to make consequential decisions like who gets approved for insurance and at what price. But even if I didn’t feel this way, insurtechs are obliged to comply with new regulations and regulatory guidance (e.g., the EU AI Act and guidance from the California Department of Insurance) that tilt toward avoiding fully automated underwriting and require, at the very least, that human overseers can audit and review decisions.
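
To illustrate what “human overseers can audit and review decisions” can look like in practice, here is a hypothetical sketch; names like Recommendation, Decision, and finalize are assumptions made for the example, not any regulator’s requirement or Sixfold’s product. The point is simply that the model only ever produces a recommendation with a rationale, and nothing becomes a decision until a named underwriter signs off.

```python
# Hypothetical human-in-the-loop gate: the model produces a recommendation and a
# rationale for auditability; only a named underwriter can turn it into a decision.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    submission_id: str
    action: str      # e.g., "approve", "refer", "decline"
    rationale: str   # model-generated explanation, kept for audit and review

@dataclass
class Decision:
    recommendation: Recommendation
    approved_by: str   # the human underwriter of record
    final_action: str  # may differ from the model's suggestion

def finalize(rec: Recommendation, underwriter: str, override: Optional[str] = None) -> Decision:
    """No decision exists until a human reviews the recommendation and signs off."""
    return Decision(recommendation=rec, approved_by=underwriter,
                    final_action=override or rec.action)

rec = Recommendation("sub-007", "refer", "Unusual loss history for this class of business.")
decision = finalize(rec, underwriter="j.doe")
print(decision.approved_by, decision.final_action)  # both views remain auditable
```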

When it comes to your customers’ experience, AI opens the door to more human engagement, not less. In my view, AI will free underwriters from banal, repetitive data work (which machines handle better anyway) so that they can apply uniquely human skills in specialized or complex use cases they previously wouldn’t have had the bandwidth to address.

Myth 6: Regulations are still being written, so it’s better to wait for them to settle

I hear this one a lot. I understand why people arrive at this view. My take? You can’t afford to sit on the sidelines!

To be sure, multiple sets of AI regulations are taking root at different governmental levels, which adds complexity. But here’s a little secret from someone paying very close attention to emerging AI rulesets: there’s very little daylight between them. 

Here’s the thing: regulators worldwide attend the same conferences, engage with the same stakeholders, and read the same studies & whitepapers. And they’re all watching what the others are doing. As a result, we’re arriving at a global consensus focused on three main areas: data security, transparency, and auditability.

The global AI regulatory landscape is, like any global regulatory landscape, complex; but I’m here to tell you it’s not nearly as uneven, nor anywhere close to as unmanageable, as you may fear.

Furthermore, if an additional major change were introduced, it wouldn’t suddenly take effect. That’s by design. Think of all the websites and digital applications that launched—and indeed, thrived—in the six-year window between when GDPR was first proposed in 2012 and when it became enforceable in 2018. Think of everything that would have been lost if they had waited until GDPR was firmly established before moving forward.

My entire career has been spent in fast-moving, cutting-edge technologies. And I can tell you from experience that it’s far better to deploy & iterate than to wait for regulatory Godot to arrive. Jump in and get started!

There are more myths to bust! Watch our compliance webinar

The coming regulations are not as onerous or as unmanageable as you might fear—particularly when you work with the right partners. I hope I’ve helped clear up some misconceptions as you move forward on your AI journey.

Want to learn more about AI, insurance, and compliance? Watch the replay of our compliance webinar, featuring a discussion between myself; Jason D. Lapham, Deputy Commissioner for P&C Insurance at the Colorado Division of Insurance; and Matt Kelly, a key member of Debevoise & Plimpton’s Artificial Intelligence Group. We discuss the global regulatory landscape and how AI models should be evaluated regarding compliance, data usage, and privacy.

Watch it here!

This article was originally posted on LinkedIn

Alex Schmelkin
Founder & CEO