Published on: May 20, 2024

Building the AI Future With Sixfold’s Head of AI/ML

5 min read

We talked with Sixfold’s latest hire, Head of AI/ML Ian P. Cook, PhD, about his career journey, how emerging technology can overcome long-standing industry challenges, and his new role as Sixfold’s data science leader.

Welcome to the team, Ian! Walk us through your career journey up to this point.

I caught the bug for quantitative work when I was in grad school studying Public Policy at the University of Chicago. After graduation, I worked in various policy analysis roles, including at the RAND Corporation as well as doing work for all the major defense agencies and other federal orgs.

While doing policy work, I was simultaneously pursuing my PhD in Political Science at the University of Pittsburgh. My trusty “dad joke” is that I wasn’t smart enough to do grad school just once. As part of my research, I taught myself Python and found that my skills in econometrics translated well to the then-exploding field of data science. After I received my degree, I worked with startups and went from building tech products to building tech teams, as Chief Data Scientist at a GovTech company and Chief Technical Officer for a business analytics SaaS.

Are there any tech projects you’re particularly proud of?  

One of my favorite projects was a matching and recommendation tool for patients. A significant predictor of poor health outcomes is missed doctor appointments, and as it turns out, people don’t just “miss” them; they avoid them when they’re unhappy with the style or approach of a doctor or practice. This was a problem my team and I believed could be engineered around.

I oversaw the team building the machine learning functionality for a web tool that assessed both patient preferences and the style of healthcare providers and then turned that into a kind of Match.com for healthcare. Not only was it fun to build, but I’m particularly proud to know that it helped people keep getting the care they need.

Why Sixfold?

Sixfold immediately piqued my interest. The company is attacking a clear and sizable pain point for a well-defined customer (anyone with startup experience will tell you that’s not always the case). Plus, they’re doing it with what I see as generation-defining tech—LLMs are amazing in their own right, and having the chance to put them to practical use is exciting. After meeting the team and the leadership, I knew this was where I wanted the next step of my career to be. I’m excited to get to work with an amazing crew—tip of the hat to Stewart, Drew, and the whole engineering team!

Everyone’s talking about LLM-powered generative AI these days. From your perspective, what are the most intriguing possibilities and potential risks of this emerging generation of tech?

I sit somewhere in the middle between the extremes of the AI discourse: I don’t think AI will give rise to a post-human apocalypse, but I also don’t believe we can just sit back and toss every hard problem at an AI and implement whatever solution pops out. 

We’re going to see these tools accelerate transformation across every industry. In most cases, that will ultimately be a good thing. However, without clear intention behind how they’re applied and oversight into how they’re trained and deployed, there’s a real risk for these tools to cause harm—unintentional or otherwise. Part of the attraction to Sixfold was its emphasis on applying AI responsibly.

As someone with a strong data science background, what does it mean for data to be useful, not just accessible?

Accessibility is one aspect of keeping data well-controlled—think security measures and access controls for personally identifiable information, health records, financial information, and other sensitive material.

For data to be useful, it has to address real problems and it has to have been corralled in a thoughtful, purposeful manner. Usefulness requires someone to understand the question the data is meant to help answer and to be aware of potential biases—both statistical and human-generated—which might limit the applicability of the data.

How do you see your role as the AI/ML leader at Sixfold?

I see my chief responsibility as empowering underwriters to do their best work ever by augmenting Sixfold’s product with AI-powered tools. Achieving that means supporting the people who are developing, testing, and deploying those tools. Some days that might mean coordinating priorities and ensuring everyone has the information and resources to deliver. Some days it might mean being chest-deep in the code myself. And some days, I’m sure it’ll be a little of both. 

What do you see as the challenges of implementing AI in insurance vs other industries?

I’ll admit to being relatively new to the insurance industry. That said, even a n00b like me understands that it comes with unique challenges: complying with regulations across multiple levels of government; implementing stringent processes to handle and distribute the personal, closely held data of both individuals and corporations; and making a convincing argument for change in a well-established industry where many are content using tools and methods that’ve been around for decades.

As a seasoned technologist, do you think there are types of tasks that will always be better suited for humans, rather than machines? 

AI is going to take on the tasks that slow us down. I like to think of it as a bionic-like tool that augments and improves human performance.

It’ll free us up to focus on the most important—and frankly most meaningful and rewarding—parts of our jobs.

I’ll also add this: the better the machines get, the more we’re going to lean on philosophy, the most thoroughly human of disciplines. Discussions about LLMs are loaded with terms like “reasoning,” “thought,” and “knowledge,” which philosophers have been wrestling with for centuries. I’m reminded of the discourse in my philosophy courses around intention and will, which are completely distinct from the mechanistic processes in deep learning architectures. Philosophy is often derided as a field with little practicality, but as a technologist, I see it becoming more practical by the day.

How do you keep up with the latest developments in your field?

A lot of reading! There are tons of great newsletters that cover the field. Ben’s Bites is fantastic, and there are plenty more across Medium, Substack, and Beehiiv. I’m also a fan of podcasts like AI Daily Brief, The Cognitive Revolution, and Talking Machines. I like listening to those while doing chores, walking my dog, driving, etc.

Keeping up with the latest research is always a challenge—it was an issue even back in my PhD days because there were always new papers coming out. Part of me feels lucky to have gone through that back then because now AI is moving at warp speed and it’s even harder to keep up. But I’ve learned to master the art of “informed skimming,” which means quickly reviewing summaries, conclusions, and writeups to find key terms that will tell you if the paper is relevant to the problem area you’re taking on.

What tools do you rely on the most for your work?

I’m a devoted fan of PyCharm for coding. I’ve tried switching to VS Code and other flashy new IDEs when they come out, but I always go back to PyCharm. For note-taking, I stick to Apple Notes. There’s a whole world of “second brain”/knowledge management tools out there for taking notes, but I have to steer clear of those (anyone familiar with the word “Zettelkasten” knows the depths of that particular rabbit hole).

I learned during grad school that the more extensible a tool is, the more time I waste fiddling with configurations. Tweaking color themes is not “optimizing my workflow,” no matter how many times I repeat it to myself.

What is the best thing a person can do who wants to pursue a career in AI/ML?

Some requirements are hard to skip over: a decent amount of math, and enough knowledge to turn that math into code.

But you don’t have to be a genius at either one. Learn some matrix math, and then play with those matrices and see how you can apply them in predictive software. Then learn a little more, and implement a little more. The key to mastering any skill set is repetition and perseverance. To parrot that old quote about how one becomes a writer: write.
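The “learn some matrix math, then apply it in predictive software” advice can be made concrete with a small exercise. Here’s a minimal sketch (my illustration, not from the interview) of fitting a linear predictor with nothing but matrix operations, using NumPy and the normal equations:

```python
import numpy as np

# Toy data: predict y from x with a linear model y ≈ X @ w.
# The first column of ones is the intercept term.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([3.1, 4.9, 7.2, 8.8])

# Normal equations: solve (Xᵀ X) w = Xᵀ y — pure matrix math.
w = np.linalg.solve(X.T @ X, X.T @ y)

# Use the fitted weights to predict at a new point, x = 5.
y_new = np.array([1.0, 5.0]) @ w
print(w, y_new)
```

Once the matrix version makes sense, the same pattern scales up: more columns in `X` means more features, and swapping the solver for gradient descent is the next small step.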

More specifically, I think there are three things everyone who wants to work in this field needs to know: SQL, Git, and enough of one programming language to be productive. SQL is the language of data: getting it, moving it, storing it, everything—if you can’t get at the data, it’s going to be hard to trust that you can work with it. Git proves that you understand versioning, reproducibility, and collaboration. As for programming languages, I’ll sidestep the religious wars about which one is best or most useful and just say that the important thing is becoming really productive in at least one.
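To give a flavor of why SQL is “the language of data,” here’s a minimal sketch (my illustration, with a made-up `claims` table) of storing and getting data using Python’s built-in sqlite3 module:

```python
import sqlite3

# An in-memory database: storing it, moving it, getting it, in a few lines.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO claims (amount) VALUES (?)",
    [(120.0,), (340.5,), (89.9,)],
)

# Getting at the data: an aggregate query over everything we stored.
total, count = conn.execute(
    "SELECT SUM(amount), COUNT(*) FROM claims"
).fetchone()
print(total, count)
```

The same `SELECT`/`INSERT` vocabulary carries over almost unchanged to Postgres, MySQL, or a data warehouse, which is exactly why it’s worth learning first.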

Any fun tech projects that you're working on at the moment (non-Sixfold-related)?

I’m a fly fisher in my spare time, and I use a fly fishing app called onWater Fish. I reached out to the app’s dev team and learned they needed some ML-like support, so I pitched in to try out some new ideas. We’ve successfully implemented some cool in-app computer vision features that anglers can use to record their catches (and brag to friends), all with one picture. It’s been a truly rewarding way to apply my skills to a personal passion.

How can people follow your work?

I’m on LinkedIn, and I try to post regularly on the practical application of AI, the future of work, and whatever else I might have a useful take on. On other social media, I can usually be found by searching @ianpcook.

Want to join Ian and the rest of the Sixfold team on our mission to transform insurance underwriting with AI? Check out our career page.

Maja Hamberg
Head of Marketing