Published on: September 11, 2024

To Build or Buy AI: A Guide for Insurers

I’m just going to say it: I don’t care how accomplished your team is, they just won’t be able to build a proprietary horizontal LLM to compete, feature-wise, with the GPTs, Geminis, and Claudes of the world. 

Your team may, however, have it in them to build a vertical AI solution to execute specific high-level underwriting tasks. Their solution will probably incorporate one (or even several) of the aforementioned foundation models, complemented with additional components purpose-built for your specific use case.

If you haven’t investigated advanced AI for your underwriting tech stack, you’re already behind. The question for carriers has long since moved on from “should we implement?” to “what’s the best way forward?” Some might think it preferable to build a proprietary AI solution using internal resources.

Many larger enterprises are certainly going to take on that substantial challenge. But is this strategy right for your organization? Here are four questions to consider before taking that leap:

1. Do you know what a quality AI-powered solution looks like?

You know how to measure the success of, say, a proprietary Java-powered microservice or web portal. But do you know what metrics to use for a non-deterministic AI solution? It’s a whole new thing.

LLMs are flexible and amazing, but they’re also unpredictable and can get things wrong (even when the end user did everything right). Developing non-deterministic systems requires an evolution in thinking about usefulness and quality control. It means getting acquainted with new concepts like “error tolerance.” 

If you’ve worked with traditional digital systems, you know that when a problem arises, it’s almost always attributable to human error somewhere along the line. LLMs, on the other hand, can do weird stuff when they’re working properly. Ask an LLM the same question 10 times in a row and you’ll get 10 different answers. The key with these solutions isn’t robotic repetition, it’s making sure they provide 10 useful answers. 

Not only must you anticipate some amount of unpredictability with LLMs, you also have to build out infrastructure to mitigate its impact. That could mean adding extra layers of validation to detect errors, or giving human users the ability to spot mistakes and feed corrections back to the system. In some cases, it might mean living with some amount of "spoilage," i.e., accepting bad results from time to time.
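
To make that concrete, here is a minimal sketch of the pattern in Python. The call_llm and looks_valid functions are hypothetical placeholders for whatever model API and business-rule checks your team actually uses; the point is the shape of the pipeline: sample several answers, run them through a validation layer, and fall back to a human when nothing passes.

```python
from collections import Counter


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your chosen foundation model's API."""
    raise NotImplementedError("plug in your model provider here")


def looks_valid(answer: str) -> bool:
    """Hypothetical validation layer: schema checks, business rules, etc."""
    return bool(answer.strip())


def answer_with_validation(prompt: str, samples: int = 10) -> str | None:
    """Sample several answers; return one that passes validation,
    or None to signal that a human should review the question."""
    candidates = [call_llm(prompt) for _ in range(samples)]
    valid = [c for c in candidates if looks_valid(c)]
    if not valid:
        return None  # route to a human reviewer instead
    # Prefer the answer the model converges on most often.
    return Counter(valid).most_common(1)[0][0]
```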

This is new territory, I know. Are you ready for it? Almost as importantly—would you know how to communicate this new paradigm to the stakeholders who matter?

2. Are you prepared for a relentless pace of change?

Due to LLMs’ inherent newness, few engineers or product managers have experience shepherding a vertical AI solution to market. That means your team must learn to handle both structured and unstructured data when engaging with LLMs. It means learning the latest prompt design strategies to ensure you're providing consistently accurate answers (and, indeed, defining what “accuracy” even means in a non-deterministic system). And it means occasionally having to re-learn it all after the next great AI innovation drops. And a new AI innovation is always about to drop.

Developing cutting-edge vertical AI in 2024 is very different from what it was in 2023, and I can promise you it will be different again in 2025. Technology moves fast, and at this moment of peak-buzz AI, you have to be prepared for changes to come at your team weekly, if not daily.

Last year, for example, we were a LangChain shop, as was pretty much everyone else attempting to address big challenges with LLMs. Fast-forward one year and we—and many players in this space—concluded that LangChain just isn’t for production and moved on to building scalable pipelines directly with generative AI primitives. That meant rebuilding some key features from scratch while adding resiliency and scale.
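
For a sense of what working directly with generative AI primitives can look like, here is a minimal sketch (not our production code) that uses OpenAI's Python SDK purely as an illustrative provider: a plain model call wrapped in retry logic you own, rather than a framework abstraction. The model name, prompt, and function are placeholders.

```python
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_submission(text: str, retries: int = 3) -> str:
    """Illustrative helper: call the model directly, with simple
    retry/backoff that we control end to end."""
    for attempt in range(retries):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder; use whichever model fits
                messages=[
                    {"role": "system", "content": "Summarize this underwriting submission."},
                    {"role": "user", "content": text},
                ],
            )
            return resp.choices[0].message.content
        except Exception:
            time.sleep(2 ** attempt)  # back off, then try again
    raise RuntimeError("model call failed after retries")
```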

Determination is paramount in the face of rapid change. Are you prepared to hard-pivot a project you’ve been pushing along for months because the ecosystem has irrevocably changed with a new model release, new technique, or newly proposed regulation? Are you prepared to explain the necessity of these sea changes to your team and stakeholders?

3. Are you up on today’s AI regulations? How about tomorrow’s?

There’s a lot of public discussion about the potential negative impacts of scaled automation. As a result, regulatory bodies at all levels of government have drafted rules for how AI can be implemented, many of which single out consequential sectors such as insurance.

Technological acumen is crucial, but even the best-built solution is rendered meaningless if it doesn’t comply with regulatory requirements. Do you have the infrastructure in place to keep on top of this evolving patchwork of global regulations?

To navigate these choppy waters, you need a team in place to make sure you’re complying with today’s rules, and prepared for tomorrow’s.

What’s better? Getting your team into the conversation with the rule-makers and helping inform the rule sets as they take shape.

4. Can you compete for AI talent?

You have an amazing dev team. They’re driven and passionate, and great colleagues too. I’m sure they could launch a top-notch mini-site in just a few weeks. But have they designed an LLM-powered AI solution before? 

If not, you’ll need to find yourself some AI experts.

That means competing for talent in a limited pool of AI engineers (Reuters reports a 50% skills gap in AI roles) and paying top dollar to keep pace with MAMAA-caliber compensation packages.

This pool becomes even smaller when looking for talent experienced with building systems for highly regulated industries in general, let alone insurance in particular.

Did you answer “no” to any question above?

I don’t know where you’ll land when it comes to building your vertical AI solution. If the go-it-alone path seems treacherous, you can always partner with a team that’s been leading the way in the emerging field of LLM-powered AI for insurance.

I’m not a salesman; I’m a techie. But I can tell you we do great work, and our team would love to talk through what you have in mind.

This blog post was originally published on LinkedIn.

Brian Moseley
Co-founder & CTO