AI Vendor Compliance: A Practical Guide for Insurers
In the hands of insurers, AI can drive significant efficiency gains, safely and responsibly. We recently sat down with Matt Kelly, Data Strategy & Security expert and counsel at Debevoise & Plimpton, to explore how insurers can achieve this.
Matt has played a key role in developing Sixfold’s 2024 Responsible AI Framework. With deep expertise in AI governance, he has led a growing number of insurers through AI implementations as adoption accelerates across the insurance industry.
To support insurers in navigating the early stages of compliance evaluation, he outlined four key steps:
Step 1: Define the Type of Vendor
Before getting started, it’s important to define what type of AI vendor you’re dealing with. Vendors come in various forms, and each type serves a different purpose. Start by asking these key questions:
- Are they really an AI vendor at all? Practically all vendors use AI (or soon will) – even if only in the form of routine office productivity tools and CRM suites. The fact that a vendor uses AI does not mean they use it in a way that merits treating them as an “AI vendor.” If the vendor’s use of AI is not material to either the risk or the value proposition of the service or software product being offered (as may be the case, for instance, if a vendor uses it only for internal idea generation, background research, or logistical purposes), ask yourself whether it makes sense to treat them as an AI vendor at all.
- Is this vendor delivering AI as a standalone product, or is it part of a broader software solution? Distinguish between vendors providing an AI system that you will interact with directly and those providing a software solution that leverages AI behind the scenes, removed from any end user.
- What type of AI technology does this vendor offer? Are they providing or using machine learning models, natural language processing tools, or something else entirely? Have they built or fine-tuned any of their AI systems themselves, or is their offering simply built atop third-party solutions?
- How does this AI support the insurance carrier’s operations? Is it enhancing underwriting processes, improving customer service, or optimizing operational efficiency?
Pro Tip: Knowing what type of AI solution you need and what the vendor provides will set the stage for deeper evaluations. Map out a flowchart of potential vendors and their associated risks.
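To make this concrete, the Step 1 answers can be captured as a simple structured record that feeds the later steps. Below is a minimal Python sketch – every class, field, and name here is hypothetical, offered only to illustrate one way of recording the classification, not a prescribed framework:

```python
# Hypothetical sketch: recording the Step 1 classification questions
# as a structured vendor profile.
from dataclasses import dataclass
from enum import Enum, auto

class AIDelivery(Enum):
    NOT_MATERIAL = auto()   # AI use is incidental (e.g., office productivity tools)
    STANDALONE_AI = auto()  # an AI system end users interact with directly
    EMBEDDED_AI = auto()    # AI inside a broader solution, removed from end users

@dataclass
class VendorProfile:
    name: str
    delivery: AIDelivery
    technology: str          # e.g., "machine learning", "NLP"
    builds_own_models: bool  # built/fine-tuned in-house vs. atop third-party solutions
    business_function: str   # e.g., "underwriting", "customer service", "operations"

def merits_ai_vendor_review(vendor: VendorProfile) -> bool:
    """Route only vendors whose AI use is material to the risk or value
    proposition into the deeper AI-specific review track."""
    return vendor.delivery is not AIDelivery.NOT_MATERIAL

# Example: an underwriting tool with embedded AI would enter full review.
example = VendorProfile(
    name="ExampleCo",        # hypothetical vendor
    delivery=AIDelivery.EMBEDDED_AI,
    technology="machine learning",
    builds_own_models=False,
    business_function="underwriting",
)
assert merits_ai_vendor_review(example)
```

A profile like this can also double as the nodes of the flowchart the Pro Tip describes: each profile is a branch point, and its fields determine which risks attach.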
Step 2: Identify the Risks Associated with the Vendor
Regulatory and compliance risks are always present when evaluating AI vendors, but it’s important to understand the specific exposures for each type of implementation. Some questions to consider are:
- Are there specific regulations that apply? Based on how you expect to use the vendor, are there specific regulatory requirements the engagement would need to satisfy (as there would be, for instance, when using AI to assist with underwriting decisions in various jurisdictions)?
- What are the data privacy risks? Does the vendor require access to sensitive information – particularly personal information or material nonpublic information – and if so, how do they protect it? Can a customer’s information easily be removed from the underlying AI or models?
- How explainable are their AI models? Are the decision-making processes clear, are they well documented, and can the outputs be explained to and understood by third parties if necessary?
- What cybersecurity protocols are in place? How does the vendor ensure that AI systems (and your data) are secure from misuse or unauthorized access?
- How will things change over time? What has the vendor committed to in terms of ongoing monitoring and maintenance? How will you monitor compliance and consistency going forward?
Pro Tip: A good approach is to create a comprehensive checklist of potential risks for evaluation. For each risk that can be addressed through contract terms, build a playbook that includes key diligence questions, preferred contract clauses, and acceptable backup options. This will help ensure all critical areas are covered and allow you to handle each risk with consistency and clarity.
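One way to operationalize such a playbook is a small per-risk record pairing diligence questions with preferred and fallback contract positions. The sketch below is illustrative only; the risks, questions, and clauses are hypothetical placeholders, not recommended language:

```python
# Hypothetical sketch: a per-risk playbook entry pairing diligence
# questions with a preferred contract position and acceptable fallbacks.
from dataclasses import dataclass, field

@dataclass
class RiskPlaybookEntry:
    risk: str
    diligence_questions: list[str]
    preferred_clause: str
    acceptable_fallbacks: list[str] = field(default_factory=list)

playbook = [
    RiskPlaybookEntry(
        risk="Data privacy",
        diligence_questions=[
            "Does the vendor need personal or material nonpublic information?",
            "Can a customer's information be removed from the underlying models?",
        ],
        preferred_clause="No use of carrier data to train models shared with others.",
        acceptable_fallbacks=["Training permitted on anonymized data only."],
    ),
    RiskPlaybookEntry(
        risk="Explainability",
        diligence_questions=["Can outputs be explained to regulators and other third parties?"],
        preferred_clause="Model documentation must be provided on request.",
    ),
]

# A review can then walk the checklist risk by risk, in the same order every time.
for entry in playbook:
    print(f"{entry.risk}: {len(entry.diligence_questions)} diligence question(s)")
```

Keeping the playbook in a structured form like this makes it easy to apply the same questions and fallback positions consistently across vendors.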
Step 3: Evaluate How Best to Mitigate the Identified Risks
Your company likely has processes in place to handle third-party risks, especially when it comes to data protection, vendor management, and quality control. However, those processes may not cover every AI-specific risk, and some risks will require new or different mitigations. Start by asking:
- What existing processes already address AI vendor risks? For example, if you already have robust data privacy policies, consider whether those policies cover key AI-related risks, and if so, ensure they are incorporated into the AI vendor review process.
- Which risks remain unresolved? Look for gaps in your current processes to surface residual risks unique to AI – such as algorithmic bias or the need for external audits of AI models – that will require new and ongoing resource allocations.
- How can we mitigate the residual risks? Rather than relying solely on contractual provisions and commercial remedies, consider alternative methods to mitigate residual risks, including data access controls and other technical limitations. For instance, when it comes to sharing personal or other protected data, consider alternative means (including the use of anonymized, pseudonymized, or otherwise abstracted datasets) to help limit the exposure of sensitive information; a minimal sketch of this approach follows below.
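As one illustration of the data-abstraction point above, a carrier might pseudonymize direct identifiers before any dataset leaves its environment. This is a minimal sketch assuming a keyed-hash approach; the record and key handling shown are deliberately simplistic, and a real deployment would follow the carrier's own key-management and privacy policies:

```python
# Hypothetical sketch: pseudonymizing a direct identifier with a keyed
# hash before a dataset is shared with a vendor.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative; use a secrets manager in practice

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"policyholder_id": "PH-104233", "zip": "10001", "claim_amount": 12500}
shared = {**record, "policyholder_id": pseudonymize(record["policyholder_id"])}
# 'shared' retains analytic value while the raw identifier never leaves the carrier.
```

Using a keyed hash rather than a plain one keeps tokens consistent across extracts while preventing re-identification by anyone who can guess identifiers but lacks the key.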
Pro Tip: You don’t always need to reinvent the wheel. Look at existing processes within your organization, such as those for data privacy, and determine if they can be adapted to cover AI-specific risks.
Step 4: Establish a Plan for Accepting and Governing Remaining Risks
Eliminating all AI vendor risks cannot be the goal. The goal must be to identify, measure, and mitigate AI vendor risks to a level that is reasonable and that can be accepted by a responsible, accountable person or committee. Keep these final considerations in mind:
- How centralized is your company’s decision-making process? Some carriers may have a centralized procurement team handling all AI vendor decisions, while others may allow business units more autonomy. Understanding this structure will guide how risks are managed.
- Who is accountable for evaluating and approving these risks? Should this decision be made by a procurement team, the business unit, or a senior executive? Larger engagements with greater risks may require involvement from higher levels of the company.
- Which risks are too significant to be accepted? In any vendor engagement, some risks may simply be unacceptable to the carrier. For example, allowing a vendor to resell policyholder information to third parties would often fall into this category. Those overseeing AI vendor risk management usually identify these types of risks instinctively, but clearly documenting them helps ensure alignment among all stakeholders, including regulators and affected parties.
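Documenting these decisions need not be elaborate. The sketch below shows one hypothetical shape for a risk-acceptance record; the fields and example values are illustrative, and the right accountable owner depends on how centralized your decision-making is, per the first question above:

```python
# Hypothetical sketch: a risk-acceptance record naming an accountable
# owner and recording categorically unacceptable risks explicitly.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptanceRecord:
    risk: str
    residual_severity: str   # e.g., "low", "medium", "high"
    decision: str            # "accepted", "mitigate further", or "unacceptable"
    accountable_owner: str   # the person or committee empowered to accept the risk
    rationale: str
    decided_on: date

register = [
    RiskAcceptanceRecord(
        risk="Vendor resells policyholder information to third parties",
        residual_severity="high",
        decision="unacceptable",
        accountable_owner="AI Governance Committee",  # hypothetical owner
        rationale="Categorically prohibited regardless of contract terms.",
        decided_on=date.today(),
    ),
]
```

Writing down even the "obvious" prohibitions turns instinct into a shared reference that stakeholders, including regulators, can point to later.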
One-Process-Fits-All Doesn’t Apply
As AI adoption grows in insurance, taking a strategic approach can help simplify review processes and prioritize efforts. These four steps provide the foundation for making informed, secure decisions from the start of your AI implementation project.
Evaluating AI vendors is a distinct process for each carrier: it requires clarity about the type of vendor, an understanding of the risks, identification of gaps in existing processes, and a decision on how to mitigate and govern the remaining risks. Each organization's approach will reflect its structure, corporate culture, and risk tolerance.
“Every insurance carrier that I’ve worked with has its own unique set of tools and rules for evaluating AI vendors; what works for one may not be the right fit for another.”
- Matt Kelly, Counsel at Debevoise & Plimpton.