Life & Disability

Remove Bottlenecks, Accelerate Decisions

AI built for the complexity of Life & Disability underwriting


Why global insurers choose Sixfold

1

> 50% reduction in review time

Spend less time digging through APSs and medical records — Sixfold extracts the key risk insights from a variety of data sources and presents them in one place.

2

The full health picture

AI that doesn’t just find facts, it tells you why they matter. Sixfold connects the dots between data and your underwriting rules, so you can quickly grasp the risk overview.

3

Compliance built-in

Because underwriting is all we focus on, we’re not just keeping up with compliance, we’re building for it, evolving our product as regulatory standards move forward.

Product features

Connect the dots via 360° assessments

Provides contextual and digestible health and risk summaries

Presents insights on medical regimens, procedure history, and condition management

Highlights relevant hobbies, family history attributes, substance use and criminal record

Product features

Surface irregularities automatically

Identifies discrepancies across application documents

Highlights relevant data points in a user-friendly dashboard

Ensures consistent and reliable assessments

Product features

Transparent decision support

Presents relevant insights gathered from sources such as APS files, MIB reports, labs, applications and more

Provides clear reasoning behind every data point highlighted

Links every risk insight directly to its source, right down to the exact page

Transparency and compliance

Your compliance team ❤️ Sixfold

Hallucination controls: Real data, trusted results.

Data privacy: Your data is your data, we help you keep it that way.

Full traceability: Every finding and conclusion is sourced.


Curious to explore some more?

As AI becomes more embedded in the insurance underwriting process, carriers, vendors, and regulators share a growing responsibility to ensure these systems remain fair and unbiased.

At Sixfold, our dedication to building responsible AI means regularly exploring new and thoughtful ways to evaluate fairness.1

We sat down with Elly Millican, Responsible AI & Regulatory Research Expert, and Noah Grosshandler, Product Lead on Sixfold's Life & Health team, to discuss how Sixfold is approaching fairness testing in a new way.

Fairness As AI Systems Advance

Fairness in insurance underwriting isn’t a new concern, but testing for it in AI systems that don’t make binary decisions is.

At Sixfold, our Underwriting AI for life and health insurers doesn’t approve or deny applicants. Instead, it analyzes complex medical records and surfaces relevant information based on each insurer's unique risk appetite. This allows underwriters to work much more efficiently and focus their time on risk assessment, not document review.


While that’s a win for underwriters, it complicates fairness testing. When your AI produces qualitative outputs such as facts and summaries, rather than scores and decisions, most traditional fairness metrics won’t work. Testing for fairness in this context requires an alternative approach.

“The academic work around fairness testing is very focused on traditional predictive models, however Sixfold is doing document analysis,” explains Millican. “We needed to develop new methodologies for fairness testing that reflect how Sixfold works.”


“Even selecting which facts to pull and highlight from medical records in the first place comes with the opportunity to introduce bias. We believe it’s our responsibility to test for and mitigate that,” Grosshandler adds.

While regulations prohibit discrimination in underwriting, they rarely spell out how to measure fairness in systems like Sixfold’s. That ambiguity has opened the door for innovation, and for Sixfold to take initiative on shaping best practices and contributing to the regulatory conversation.

A New Testing Methodology

To address the challenge of fairness testing in a system with no binary outcomes, Sixfold is developing a methodology rooted in counterfactual fairness testing. The idea is simple: hold everything constant except for a single demographic attribute and see if and how the AI’s output changes.2


“We start with an ‘anchor’ case and create a ‘counterfactual twin’ who is identical in every way except for one detail, like race or gender. Then we run both through our pipeline to see if the medical information that’s presented in Sixfold varies in a notable or concerning way,” Millican explains.

“Ultimately we want to validate that medically similar cases are treated the same when their demographic attributes differ,” Grosshandler states.
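As a rough illustration, the anchor/twin comparison can be sketched in a few lines of Python. Everything here is hypothetical: the `Case` structure, the `extract_facts` stand-in, and the fact strings are placeholders for a real extraction pipeline, which this post does not expose.

```python
# Illustrative sketch of counterfactual fairness testing: hold the
# medical record constant, flip one demographic attribute, and diff
# the facts the pipeline surfaces. All names here are hypothetical.
from dataclasses import dataclass, replace as make_twin

@dataclass(frozen=True)
class Case:
    gender: str
    race: str
    medical_text: str

def extract_facts(case: Case) -> set[str]:
    """Toy stand-in for a real extraction pipeline."""
    facts = set()
    text = case.medical_text.lower()
    if "metformin" in text:
        facts.add("medication: metformin")
    if "hypertension" in text:
        facts.add("condition: hypertension")
    return facts

def counterfactual_diff(anchor: Case, attribute: str, new_value: str) -> dict:
    """Compare the facts surfaced for an anchor case and its twin."""
    twin = make_twin(anchor, **{attribute: new_value})
    anchor_facts, twin_facts = extract_facts(anchor), extract_facts(twin)
    return {
        "missing_in_twin": anchor_facts - twin_facts,
        "added_in_twin": twin_facts - anchor_facts,
    }

anchor = Case(gender="F", race="White",
              medical_text="Metformin 500mg daily; hypertension noted 2021.")
diff = counterfactual_diff(anchor, "gender", "M")
# A fair pipeline yields empty diffs when demographics are clinically irrelevant.
```

Because the toy extractor ignores demographic fields entirely, both diffs come back empty; a real test would look for exactly that property in a far more complex system.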

Proof-of-Concept

For the initial proof-of-concept, the team is focused on two key dimensions of Sixfold’s Life & Health pipeline.

1. Fact Extraction Consistency
Does Sixfold extract the same facts from medically identical underwriting case records that differ only in one protected attribute?

2. Summary Framing and Content Consistency
Does Sixfold produce diagnosis summaries with equivalent clinical content and emphasis for medically identical underwriting cases?

“It’s not just about missing or added facts, sometimes it’s a shift in tone or emphasis that could change how a case is perceived,” Millican explains. “We want to be sure that if demographic details are influencing outputs, it’s only when clinically appropriate. Otherwise, we risk surfacing irrelevant information that could skew decisions.”
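For the second dimension, one crude proxy for "equivalent clinical content" is term overlap between the two summaries. This is purely illustrative (real evaluation would need clinical NLP, not string matching), and the term list and summaries below are invented examples:

```python
# Toy consistency check for summary content: do the anchor and twin
# summaries carry the same clinical terms? The term list and example
# summaries are hypothetical, not an actual evaluation set.
CLINICAL_TERMS = {"type 2 diabetes", "metformin", "hypertension", "statin"}

def clinical_content(summary: str) -> set[str]:
    text = summary.lower()
    return {term for term in CLINICAL_TERMS if term in text}

def content_overlap(a: str, b: str) -> float:
    """Jaccard similarity over clinical terms (1.0 = identical content)."""
    sa, sb = clinical_content(a), clinical_content(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

anchor_summary = ("Type 2 diabetes managed with metformin; "
                  "hypertension well controlled.")
twin_summary = ("Patient's type 2 diabetes is managed on metformin, "
                "and hypertension is controlled.")
score = content_overlap(anchor_summary, twin_summary)
# Same clinical content despite different phrasing -> score of 1.0
```

A score below 1.0 on medically identical twins would be exactly the kind of drift in content or emphasis the team wants surfaced for review.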

Expanding the Scope


While the team’s current focus is on foundational fairness markers (race and gender), the methodology is designed to evolve. Future testing will likely explore proxy variables such as ZIP codes, names, and socioeconomic indicators, which might implicitly shape model behavior.

“We want to get into cases where the demographic signal isn’t explicit, but the model might still infer something. Names, locations, insurance types, all of these could serve as proxies that unintentionally influence outcomes,” Millican elaborates.

The team is also thinking ahead to version control for prompts and model updates, ensuring fairness testing keeps pace with an evolving AI stack.

“We’re trying to define what fairness means for a new kind of AI system,” explains Millican. “One that doesn’t give a single output, but shapes what people see, read, and decide.”

Sixfold isn’t just testing for fairness in isolation, it’s aiming to contribute to a broader conversation on how LLMs should be evaluated in high-stakes contexts like insurance, healthcare, finance, and more.

That’s why Sixfold is proactively bringing this work to the attention of regulatory bodies. By doing so, we hope to support ongoing standards development in the industry and help others build responsible and transparent AI systems.


“This work isn’t just about evaluating Sixfold, it’s about setting new standards for a new category of AI. Regulators are still figuring this out, so we’re taking the opportunity to contribute to the conversation and help shape how fairness is monitored in systems like ours,” Grosshandler concludes.

Positive Regulatory Feedback

When we recently walked through our testing methodology and results with a group of regulators focused on AI and data, the feedback was both thoughtful and encouraging. They didn’t shy away from the complexity, but they clearly saw the value in what we’re doing.

“The fact that it’s hard shouldn’t be a reason not to try. What you’re doing makes sense... You’re scrutinizing something that matters.” said one senior policy advisor.


One of the key themes that came up during the meeting was the unique nature of generative AI, and why it demands a different kind of oversight. As one senior actuary and behavioral data scientist put it: “Large language models are more qualitative than quantitative... A lot of technical folks don’t really get qualitative. They’re used to numbers. The more you can explain how you test the language for accuracy, the more attention it will get.”

That comment really resonated. It reflects the heart of our approach: we’re not just tracking metrics. We’re evaluating how language evolves, how facts can shift, and how risk is framed and communicated depending on the inputs.

The Road Ahead


Fairness in AI isn’t a fixed destination, it’s an ongoing commitment. Sixfold’s work in developing and refining fairness and bias testing methodologies reflects that mindset.

As more organizations turn to LLMs to analyze and interpret sensitive information, the need for thoughtful, domain-specific fairness methods will only grow. At Sixfold, we’re proud to be at the forefront of that work.

Footnotes

1. While internal reviews have not surfaced evidence of systemic bias, Sixfold is committed to continuous testing and transparency to ensure that remains the case as we expand and refine our AI systems.

2. To ensure accuracy, cases involving medically relevant demographic traits, like pregnancy in a gender-flipped case, are filtered out. The methodology is designed to isolate unfair influence, not obscure legitimate medical distinctions.

Check Sources Instantly

Trust and transparency are essential when underwriters use AI in their daily work. Underwriters need to know that the information they rely on is accurate; otherwise, a policy decision could result in incorrect coverage, claims issues, or unnecessary risk for the carrier. 

One of the best ways to build that confidence is by clearly showing the source of each piece of information. That’s why we’re excited to introduce a new In-line Citations feature for our Life & Disability customers. This feature makes it easy to check the source behind any insight Sixfold surfaces.

So, how does it work?

When reviewing a case in Sixfold, underwriters can now see exactly where each fact came from, including the document and page number. Here’s what you’ll see when clicking into a fact card:

  • Document category listed for each file
  • Page number shown on hover
  • One-click access to the exact source page
  • All of the documents where the fact was found

Our goal? To increase underwriter confidence and efficiency by clearly showing the source of medical and lifestyle facts within the insurance application analysis.

New Info? Now Flagged for You

In Life and Disability underwriting, it’s common for some cases to take time, sometimes weeks, to gather all the documents needed for final analysis. The result? A lot of new information coming in, and it’s not always clear which facts are actually new.

That’s where our new capability, New Case Facts, comes in.

Now, when new facts are surfaced within a case, you’ll see a bell icon next to the relevant fact card, a simple way to flag which facts came from the latest documents added. You can click into the fact to see more context, including which document category it came from.

This makes it easier to understand what’s been added, without having to reread the whole submission. It’s especially useful when multiple underwriters are collaborating on a case; one might start the analysis, while a colleague might actually finish it.

With new facts clearly marked, everyone can stay aligned and quickly assess what’s different and what it means for the overall risk profile of the applicant. 

In life and disability underwriting, one of the most time-consuming and error-prone steps is verifying an applicant’s self-reported information.

Why? Because applications are long and detailed, and even when applicants are trying to be honest, omissions, intentional or not, are common. This is a growing concern across the industry: a recent Munich Re survey identified applicant misrepresentation as the most rapidly increasing form of fraud.

Sixfold’s Discrepancy Scan capability was built to address exactly this issue. Sixfold’s AI is now able to automatically flag mismatches between what applicants report and what’s found in their medical records, giving underwriters a faster, more standardized way to catch inconsistencies before they become costly.

When risk is hiding in the records

When someone applies for individual life or disability coverage, they complete a health questionnaire, like Part II, eMed, or a Med Supplement, disclosing conditions, medications, and history. From there, the underwriter kicks off verification: ordering APS reports, Rx histories, labs, and other third-party records. 

As these often hundreds of pages of documents arrive, the underwriter is essentially playing detective — comparing what the applicant said to what the medical records reveal. Did the applicant disclose all relevant conditions? Are they taking medications they didn’t mention? Is there a difference in diagnoses or treatment history?

Underwriters dedicate significant time to identifying discrepancies because they are critical. A person's prescription history can reveal underlying health issues, sometimes even before a formal diagnosis is made. For example, a prescription for a weight-loss medication might indicate an associated morbidity.

Any inconsistency could signal fraud or simply an oversight. Either way, it matters.

See below for a quick product walkthrough with Noah Grosshandler, Product Manager at Sixfold.


The feature is currently focused on medications, but that’s just the beginning. We're planning to expand this capability to detect discrepancies across pre-existing conditions, procedures, family history, and lifestyle factors—always guided by what’s material to each insurer.

Minutes vs. hours of detective work

Sixfold’s new capability eliminates a critical bottleneck in underwriting. The traditional approach of manually reviewing hundreds of pages to spot inconsistencies is both time-intensive and susceptible to oversight.

The Discrepancy Scan changes that completely, surfacing critical discrepancies automatically instead. The result is a more efficient process where underwriters can confidently assess risk based on complete information, without the administrative burden of document comparison.

“Sixfold goes beyond summarizing medical histories, we spotlight the contradictions that can change a morbidity assessment. By drawing connections across medical records, we emphasize the most crucial facts for investigation.

“This approach transforms hours of detective work into minutes, providing underwriters with confidence and efficiency in their decision-making processes.”

— Lana Jovanovic, Head of Product @ Sixfold

Get the full story upfront

Accuracy is everything in life and disability underwriting. With Sixfold’s automatic discrepancy detection, underwriters are able to get to a more accurate underwriting decision by:

  • Catching omissions and inconsistencies at the beginning of the review cycle

  • Reducing misclassification of risk due to overlooked or conflicting information

  • Detecting potential fraud patterns before they result in costly claims

  • Maintaining consistency and transparency when cases move between underwriters 

How the feature works

The Discrepancy Scan automatically compares the self-reported application data against the supporting medical documents and flags any mismatches related to material facts.

Prescriptions are often a leading indicator of an underlying diagnosis, one that could directly impact insurability or rating decisions. But not every medication matters the same way, and what’s considered “material” varies from carrier to carrier.

By securely ingesting each carrier’s unique underwriting guidelines, Sixfold identifies which medications are truly relevant in each context, connecting the dots between prescriptions, diagnoses, and underwriting impact.

Here’s how the feature works in practice:

1. Medical Document Review
Sixfold’s AI reviews both the submitted application and any supporting documents uploaded (APS, MIB, Rx histories, etc.) for medical data relevant for risk assessment.

2. Discrepancy Detection
Sixfold then compares the findings in the medical documentation to what the applicant reported. If a medication appears in the documents but not in the application, it’s flagged as a discrepancy.

3. Discrepancy Alert
Within the underwriter's dashboard, discrepancies appear clearly labeled with distinct icons. Clicking into a card brings up the relevant context, e.g., “Blood thinner mentioned in the medical report, not disclosed by the applicant.”

4. Clear Next Steps
Underwriters can use this insight to request clarification from the applicant or additional documentation from providers.

5. Always-Current Monitoring
Because documents arrive asynchronously, the system continually updates as new files are uploaded. Discrepancies are flagged dynamically based on the most current information.

Learn More

Insurtech Insights takes a closer look at the Discrepancy Scan
Interested in a hands-on demo? Reach out for a Sixfold walkthrough

Sixfold works seamlessly alongside your existing underwriting tools.
