
Building the AI Future With Sixfold’s Head of AI/ML
Behind the Scenes


We talked with Sixfold’s Head of AI/ML Ian P. Cook, PhD, about his career path and his role at Sixfold.

5 min read
Maja Hamberg

We talked with Sixfold’s latest hire, Head of AI/ML Ian P. Cook, PhD about his career journey, how emerging technology will overcome long-standing industry challenges, and his new role as Sixfold’s data science leader.

Welcome to the team, Ian! Walk us through your career journey up to this point.

I caught the bug for quantitative work when I was in grad school studying Public Policy at the University of Chicago. After graduation, I worked in various policy analysis roles, including at the RAND Corporation, as well as work for major defense agencies and other federal organizations.

While doing policy work, I was simultaneously pursuing my PhD in Political Science at the University of Pittsburgh. My trusty “dad joke” is that I wasn’t smart enough to do grad school just once. As part of my research, I taught myself Python and found that my skills in econometrics translated well to the then-exploding field of data science. After I received my degree, I worked with startups and went from building tech products to building tech teams, as Chief Data Scientist with a GovTech company and Chief Technical Officer for a business analytics SaaS.

Are there any tech projects you’re particularly proud of?  

One of my favorite projects was a matching and recommendation tool for patients. A significant predictor of poor health outcomes is missing doctor appointments, but as it turns out, people don’t just “miss” them; they avoid them when they’re not happy with the style or approach of a doctor or practice. This was a problem that my team and I believed could be engineered around.

I oversaw the team building the machine learning functionality for a web tool that assessed both patient preferences and the style of healthcare providers and then turned that into a kind of Match.com for healthcare. Not only was it fun to build, but I’m particularly proud to know that it helped keep people getting the care they need.

Why Sixfold?

Sixfold immediately piqued my interest. The company is attacking a clear and sizable pain point for a well-defined customer (anyone with startup experience will tell you that’s not always the case). Plus, they’re doing it with what I see as generation-defining tech—LLMs are amazing in their own right, and having the chance to put them to practical use is exciting. After meeting with the team and the leadership, I knew that this was where I wanted the next step of my career to be. Excited to get to work with an amazing crew—tip of the hat to Stewart, Drew, and the whole engineering team!

Everyone’s talking about LLM-powered generative AI these days. From your perspective, what are the most intriguing possibilities and potential risks of this emerging generation of tech?

I sit somewhere between the extremes of the AI discourse: I don’t think AI will give rise to a post-human apocalypse, but I also don’t believe we can just sit back, toss every hard problem at an AI, and implement whatever solution pops out.

We’re going to see these tools accelerate transformation across every industry. In most cases, that will ultimately be a good thing. However, without clear intention behind how they’re applied and oversight into how they’re trained and deployed, there’s a real risk for these tools to cause harm—unintentional or otherwise. Part of the attraction to Sixfold was their emphasis on applying AI responsibly.

As someone with a strong data science background, what does it mean for data to be useful, not just accessible?

Accessibility is really about keeping data well-controlled: security measures and access controls for personally identifiable information, health records, financial information, and other sensitive material.

For data to be useful, it has to address real problems and it has to have been corralled in a thoughtful, purposeful manner. Usefulness requires someone to understand the question the data is meant to help answer and to be aware of potential biases—both statistical and human-generated—which might limit the applicability of the data.

How do you see your role as the AI/ML leader at Sixfold?

I see my chief responsibility as empowering underwriters to do their best work ever by augmenting Sixfold’s product with AI-powered tools. Achieving that means supporting the people who are developing, testing, and deploying those tools. Some days that might mean coordinating priorities and ensuring everyone has the information and resources to deliver. Some days it might mean being chest-deep in the code myself. And some days, I’m sure it’ll be a little of both. 

What do you see as the challenges of implementing AI in insurance vs other industries?

I’ll admit to being relatively new to the insurance industry. That said, even a n00b like myself understands that it comes with unique challenges like complying with regulations across multiple levels of government; implementing stringent processes to handle and distribute the personal, closely-held data of both individuals and corporations; and making a convincing argument for change in a well-established industry where many are content using tools and methods that’ve been around for decades.

As a seasoned technologist, do you think there are types of tasks that will always be better suited for humans, rather than machines? 

AI is going to take on the tasks that slow us down. I like to think of it as a bionic-like tool that augments and improves human performance.

It’ll free us up to focus on the most important—and frankly most meaningful and rewarding—parts of our jobs.

I’ll also add this: the better the machines get, the more we’re going to lean on philosophy, the most thoroughly human of disciplines. Discussions about LLMs are loaded with terms like “reasoning,” “thought,” and “knowledge,” which philosophers have been wrestling with for centuries. I’m reminded of the discourse in my philosophy courses around intention and will, which are completely distinct from the mechanistic processes in deep learning architectures. Philosophy is often derided as a field with little practicality, but as a technologist, I see it becoming more practical by the day.

How do you keep up with the latest developments in your field?

A lot of reading! There are tons of great newsletters covering the field: Ben’s Bites is fantastic, and there are many more across Medium, Substack, and Beehiiv. I’m also a fan of podcasts like AI Daily Brief, The Cognitive Revolution, and Talking Machines. I like listening to those while doing chores, walking my dog, driving, etc.

Keeping up with the latest research is always a challenge—it was an issue even back in my PhD days because there were always new papers coming out. Part of me feels lucky to have gone through that back then because now AI is moving at warp speed and it’s even harder to keep up. But I’ve learned to master the art of “informed skimming,” which means quickly reviewing summaries, conclusions, and writeups to find key terms that will tell you if the paper is relevant to the problem area you’re taking on.

What tools do you rely on the most for your work?

I’m a devoted fan of PyCharm for coding. I’ve tried switching to VS Code and other flashy new IDEs when they come out, but I always go back to PyCharm. For note-taking, I stick to Apple Notes. There’s a whole world of “second brain”/knowledge-management tools out there for taking notes, but I have to refrain from those (anyone familiar with the word “Zettelkasten” knows the depths of that particular rabbit hole).

I learned during grad school that the more extensible a tool is, the more time I waste fiddling with configurations. Tweaking color themes is not “optimizing my workflow,” no matter how many times I repeat it to myself.

What is the best thing a person can do who wants to pursue a career in AI/ML?

Some requirements are hard to skip over: a decent amount of math, and enough knowledge to turn that math into code.

But you don’t have to be a genius at either one. Learn some matrix math, and then play with those matrices and see how you can apply them in predictive software. Then learn a little more, and implement a little more. The key to mastering any skill set is repetition and perseverance. To parrot that old quote about how one becomes a writer: write.
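Ian’s “learn some matrix math, then play with it” advice can be made concrete in a few lines of Python. This is a minimal sketch with made-up numbers: the core of nearly every predictive model is a matrix of feature rows multiplied by a vector of learned weights.

```python
# Toy illustration: a linear model's prediction is just a matrix-vector product.
# X holds one row of features per example; w holds weights a model has "learned".

def matvec(X, w):
    """Multiply matrix X (a list of rows) by vector w."""
    return [sum(x_ij * w_j for x_ij, w_j in zip(row, w)) for row in X]

# Invented features: [hours_studied, hours_slept] for three students.
X = [[2.0, 8.0],
     [6.0, 6.0],
     [9.0, 5.0]]
w = [5.0, 2.0]  # hypothetical learned weights

predictions = matvec(X, w)
print(predictions)  # [26.0, 42.0, 55.0]
```

Swap in different matrices and weights, then graduate to fitting `w` from data: that iterative “learn a little, implement a little” loop is exactly the repetition Ian describes.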

More specifically, I think there are three things everyone who wants to work in this field needs to know: SQL, Git, and enough of one programming language to be productive. SQL is the language of data: getting it, moving it, storing it, everything. If you can’t get at the data, it’s going to be hard to trust that you can work with it. Git proves that you understand versioning, reproducibility, and collaboration. As for the programming language, I’ll sidestep the religious wars about which one is best or most useful and just say that the important thing is becoming really productive in at least one.

Any fun tech projects that you're working on at the moment (non-Sixfold-related)?

I’m a fly fisher in my spare time, and I use a fly fishing app called onWater Fish. I reached out to the app’s dev team and learned they needed some ML-like support, so I pitched in to try out some new ideas. We’ve implemented some cool in-app computer vision features that anglers can use to record their catches (and brag to friends), all with one picture. It’s been a great way to apply my skills to a personal passion of mine, which has been truly rewarding.

How can people follow your work?

I’m on LinkedIn, and I try to post regularly on the practical application of AI, the future of work, and whatever else I might have a useful take on. For other social media, I can usually be found by searching @ianpcook.

Want to join Ian and the rest of the Sixfold team on our mission to transform insurance underwriting with AI? Check out our career page.

The Journey of an AI Scientist With Stewart Hu
Behind the Scenes


Explore the fascinating career and daily work of Stewart Hu, an AI scientist at Sixfold. Discover how he stays current in the field of automated underwriting.

5 min read
Maja Hamberg

We recently sat down for a quick Q&A with Stewart Hu, AI scientist at Sixfold. Our conversation ranged from his career journey to how he stays current in the field, as well as the tasks on his daily agenda.

Let’s get this started! In your own words, what does your job as an AI scientist involve?

AI scientists engage in a lot of practical work. Despite our 'scientist' title, our roles often overlap with those of developers or research engineers. In fact, over 50% of our tasks are typical software engineering activities. We develop software grounded in foundational models, employing a range of techniques, not just AI.

Previously, AI encompassed anything linked to machine learning, but now it's more commonly associated with large language models like GPT. Our role includes integrating these models into software applications, utilizing models such as GPT-4, and even fine-tuning our custom models. Additionally, we apply both traditional machine learning and deep learning methods. This involves creating classifiers with techniques predating neural networks, like gradient boosting machines or random forests. At our core, we are software engineers crafting machine learning algorithms to address real-world challenges.
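As a rough, stdlib-only illustration of the pre-neural-network techniques Stewart mentions, here is a toy “random forest”: decision stumps (one-split trees standing in for full trees) fit on bootstrap resamples, with predictions made by majority vote. The data and feature names are invented for the example; a real system would use a library such as scikit-learn.

```python
import random
from collections import Counter

def stump_fit(X, y):
    """Find the single (feature, threshold) split with the fewest errors."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            pred = [1 if row[f] >= t else 0 for row in X]
            err = sum(p != yi for p, yi in zip(pred, y))
            if best is None or err < best[0]:
                best = (err, f, t)
    return best[1], best[2]

def forest_fit(X, y, n_trees=25, seed=0):
    """Toy 'random forest': stumps fit on bootstrap resamples of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        stumps.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def forest_predict(stumps, row):
    """Majority vote across the ensemble."""
    votes = Counter(1 if row[f] >= t else 0 for f, t in stumps)
    return votes.most_common(1)[0][0]

# Invented data: [claims_last_year, years_in_business] -> high risk (1) or not (0)
X = [[0, 10], [1, 8], [4, 2], [5, 1], [0, 12], [3, 3]]
y = [0, 0, 1, 1, 0, 1]
model = forest_fit(X, y)
print(forest_predict(model, [5, 1]))  # 1 (many recent claims -> high risk)
```

Gradient boosting follows a different recipe (each new tree corrects the previous ones’ errors), but the “ensemble of weak learners” idea is the same.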

How did you get into the world of Generative AI? 

My fascination with AI really took off with GPT-3’s emergence. But it was the debut of the Stable Diffusion model in August 2022 that truly captivated me. This revelation prompted me to pivot my career towards a tech startup specializing in deep learning and AI.

In the early stages of my career, I worked as a software engineer. This was followed by a ten-year journey in data science, beginning with statistical learning and gradually evolving into machine learning, deep learning, and finally AI. Essentially, I devoted my first decade to hardcore software development, and the next decade explored the realms of data science and machine learning.

Could you give some insights into what's on a typical to-do list for you?

My work is basically divided into three key areas.

Firstly, there's data management: sourcing appropriate data, organizing it properly, and conducting thorough analyses. A major chunk of our time is dedicated to dealing with data - acquiring, scrutinizing, and delving into it.

Secondly, I engage in software development, where my goal is to craft software that's not only reusable but also adaptable to growing complexities. This involves strategic software design to ensure it can be easily scaled up.

The third area is AI, particularly retrieval-augmented generation (RAG). This entails extracting pertinent details from extensive document collections to accurately contextualize models like GPT-4. My day-to-day involves juggling these three components.
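The retrieval step Stewart describes can be sketched in a few lines. This toy version uses an invented mini-corpus and bag-of-words cosine similarity in place of a production vector index: it pulls the passages most relevant to a question and assembles them into a prompt. A real pipeline would then send that prompt to a model such as GPT-4.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = Counter(tokens(query))
    return sorted(docs, key=lambda d: cosine(q, Counter(tokens(d))), reverse=True)[:k]

# Hypothetical mini-corpus standing in for a large document collection.
docs = [
    "The policyholder operates a fleet of delivery trucks.",
    "Annual revenue grew to 12 million dollars last year.",
    "The fleet includes 40 trucks and 12 vans.",
]
context = retrieve("how many trucks are in the fleet", docs)
# In a real RAG pipeline the retrieved passages would be placed into the
# prompt sent to the language model; here we just assemble the string.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The payoff is that the model answers from the retrieved passages rather than from whatever it memorized during training.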

How would you distinguish a purpose-built AI tool from a generic one?

AI often gets hyped up with flashy demos requiring little coding. Sixfold, however, is a purpose-built gen AI tool: our focus is on crafting solutions that address real-world business problems, not just making eye-catching demos. We use AI to make underwriters’ work faster, more accurate, and more enjoyable. By taking over repetitive tasks, AI allows underwriters to focus on the more engaging aspects of their job.

Our platform is built with a strong emphasis on accountability, not just on interpretability or explainability. This means our solutions cite sources when making recommendations and provide actual source documents for our classifications. It's a practical, business-centric approach that boosts confidence in underwriting decisions.

What excited you the most about joining Sixfold?

Two things particularly drew me to Sixfold. First, the experienced team leading the company. The founders have a proven track record of creating substantial business value, blending tech knowledge with sharp business insight. Second, on a personal level, my wife has been in the insurance industry for over ten years, and I've always found it fascinating. Joining Sixfold presented a chance to dive deeper into this sector. 

It was the combination of the seasoned leadership and the company's expertise in insurance and underwriting that ultimately convinced me to become part of the team.

How do you stay engaged with the AI community? 

My go-to resource is X (formerly known as Twitter), where I’ve created a list named “AI Signals.” This list features over 100 experts deeply engaged in the field, tackling everything from fine-tuning models to enhancing the speed of large language model inference. While some of these individuals may not be widely known, their insights are incredibly valuable.

Previously, I would follow arXiv for academic papers, GitHub for trending repositories, and Papers with Code to find research papers with their corresponding code. However, X has become my most essential tool. I regularly check updates from my list there to keep up-to-date with the latest developments.

That sounds like a great list! Can we share it with the readers?

Of course, happy to share it. Here you go!

How can people best follow your work?

I haven't been active on my blog lately, but I do maintain a GitHub repository named 'LLM Notes.' It serves as a practical guide for data scientists and machine learning practitioners. This repository is a compilation of the knowledge and insights I've gathered throughout my career. A few months back, I uploaded a wealth of information there, including lessons learned, common pitfalls, and personal experiences. It's a good resource for anyone interested in the field. 

Thanks for your time, Stewart! We’ll let you get back to your to-do list now.  

If you’d like an opportunity to work at Sixfold, check out our vacancies.

Q&A on Generative AI in Insurance With Our AI Engineer
Behind the Scenes


Explore the world of generative AI in insurance and marketing with Drew Das, AI Engineer at Sixfold, who offers advice for aspiring professionals.

5 min read
Maja Hamberg

We recently conducted a Q&A with Drew Das, AI Engineer at Sixfold. Our discussion covered various topics, including the nuances of developing generative AI tools for marketing versus the insurance sector, advice for those aspiring to enter the field, and tools he enjoys using.

Let's start with your background and how you got into the world of AI?

I've always been in tech. I started in web development during high school, launching a business creating WordPress sites. I studied at UC Davis, where I also worked in the college IT department. I've spent nearly 10 years working in Silicon Valley, primarily in web development.

My first introduction to conversational interfaces was in 2017 with a company focused on chatbots. This early experience involved developing chatbots for job applications, targeting sectors like truck driving and foreign workers. We aimed to simplify the application process for those uncomfortable with traditional job sites.

I've also worked in cybersecurity, food delivery, and at Jasper, a startup in generative AI. At Jasper, I was the content lead and was involved in launching Jasper Chat, a chatbot product developed in under four days. This product became a primary feature for users. I also worked on an AI-based text editor, a first-of-its-kind product that helped introduce many to generative AI.

Could you describe in your own words what an AI engineer does? Specifically, what are your daily tasks and responsibilities?

The role of an AI engineer is still evolving, as AI is a relatively new field. Previously, we had machine learning engineers and software engineers with distinct functions. Machine learning engineers focused on training computer systems for making predictions using statistical methods. Software engineers, on the other hand, worked on translating business logic into application code, often using APIs provided by the machine learning team.

An AI engineer's role is broader than that of a machine learning engineer. It involves working with various AI tools, like ML models, vector databases, and advanced techniques. An essential skill for AI engineers is prompt engineering and understanding how these systems integrate. The primary objective is to combine these systems to create software that operates on top of data, rather than just converting business logic into code. This involves building a layer above data designed to emulate human behavior.

For example, in text matching, the goal is to accelerate tasks typically done by humans, such as researching and compiling data to make predictions. AI engineers strive to create systems that can perform these tasks as efficiently as humans.

Could you give some insights into what's on a typical to-do list for you?

Currently, my main focus is on improving text matching accuracy in our system. My task is to implement Retrieval-Augmented Generation (RAG) techniques, which include hybrid search, re-ranking, and new methods of data embedding. These techniques aim to improve our text matching accuracy. This task involves a lot of experimentation, implementing different systems, and optimizing them for better performance.
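Hybrid search, one of the RAG techniques Drew mentions, blends a keyword-matching score with an embedding-similarity score for each candidate document. A minimal sketch, with hypothetical scores standing in for real BM25 and embedding outputs:

```python
def normalize(scores):
    """Min-max normalize a list of scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_rank(keyword_scores, vector_scores, alpha=0.5):
    """Blend normalized score lists; return document indices, best first."""
    kw, vec = normalize(keyword_scores), normalize(vector_scores)
    combined = [alpha * k + (1 - alpha) * v for k, v in zip(kw, vec)]
    return sorted(range(len(combined)), key=lambda i: combined[i], reverse=True)

# Hypothetical scores for three documents from two retrievers.
keyword_scores = [2.0, 0.0, 5.0]     # e.g. BM25-style term matching
vector_scores = [0.91, 0.20, 0.35]   # e.g. embedding cosine similarity

print(hybrid_rank(keyword_scores, vector_scores))  # [0, 2, 1]
```

Normalizing first matters because the two retrievers score on different scales; `alpha` then controls how much weight keyword matching gets versus semantic similarity. Re-ranking would typically follow as a second, more expensive pass over this shortlist.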

How can you in a simple way explain the difference between a purpose-built AI tool and a generic AI tool?

General AI tools, like GPT-3, are versatile. They can adapt to various tasks, such as classification or auto-completion. The interface is straightforward: input text, get text out. However, when you want the AI to use a specific language or guide a user in a certain way, prompt engineering becomes essential. People customize these general systems with specific instructions, but this can be cumbersome and has its limitations.

Purpose-built AI like Sixfold comes into play when you start fine-tuning the systems. This involves feeding them examples of data to achieve a desired tone, style, or structure. Additionally, building a knowledge retrieval system on your proprietary data can make your AI unique, providing access to information that other systems don't have. Customizing AI systems is challenging. It's one thing to create a demo or a cool AI video, but running these systems in a production environment, especially in enterprise-grade products, requires them to respond quickly, like within two seconds. So, a lot of work goes into customizing an AI system for practical, real-world applications.

What’s your take on transparency in AI, particularly with large language models? 

I believe transparency is crucial in AI, both within and outside organizations. Particularly, I'm referring to how AI tools are built and used. Although we're not yet at a point where AI systems are making all decisions autonomously, the trend is moving towards AI taking over more complex tasks. Take self-driving cars as an example: in the future, human-driven cars might be considered less trustworthy or even costlier to insure compared to AI-driven ones, potentially limiting human driving to specific scenarios like racetracks.

In such cases, it becomes vital for the AI systems controlling these cars to be transparent. We need to understand their training data, be aware of their limitations, and identify potential risks, especially since these systems significantly impact society. This transparency is essential because AI systems are not deterministic; their output heavily depends on the quality of the input data. Understanding what an AI system has been trained on gives us a clearer picture of its capabilities and limitations, which is essential as these systems become more integral to our daily lives. That's how I view transparency in AI.

What excites you the most about Sixfold?

In my last role, my focus was on solving marketing problems, specifically automating marketing systems and content generation. Marketing is inherently subjective, which makes it challenging to capture the right flavor of content. One dilemma in this field is the risk of producing generic or misleading content, which can be detrimental to society.

The challenges in my current role are more complex. It's about understanding and utilizing the reasoning capabilities of the model, which involves deducing insights from a given set of data. This differs from just altering the tone of content for marketing purposes. 

For example, in marketing, success is often measured by a feedback loop, like how much generated content is retained in a final document, indicating user preference. However, in my current role, the metrics are different. We have a known 'ground truth' or a specific outcome we aim to achieve, and the goal is to develop a system that consistently aligns with this known outcome. This requires a higher level of accuracy and a different approach compared to marketing, where the outcomes are more subjective and based on individual perception.

How do you stay updated with the advancements in AI and large language models? Are there any newsletters or blogs you follow?

It's one of the biggest challenges in this field! You might spend months developing something innovative, only to find it's made obsolete by a new development the following week.

It can be frustrating, but it's also exhilarating. You learn to not get too attached to your work and treat it as part of a learning journey. The field is dynamic; each week brings something new that could either render your current work obsolete or introduce an exciting new method.

However, it's crucial for companies to remain disciplined and not get constantly sidetracked by every new innovation. Deciding whether to try out new technologies and determining their integration importance requires careful consideration.

As for staying informed, I don't stick to specific newsletters or blogs. I prefer a more hands-on approach, as I'm not from an academic background. I find YouTube content especially useful for seeing how new things are implemented. It's more about application for me – I need to see something in action to understand it. So, I explore various sources like Hacker News and YouTube, or anything relevant I come across on a particular topic.

Do you share your own work or insights publicly?

Actually, I don't engage much in open source work or sharing my projects publicly. After work, I prefer to disconnect and focus on other interests, like learning guitar. It's about maintaining a healthy balance. I'm fortunate that my work aligns closely with my interests. Over the past three years, I've had the opportunity to explore new techniques and projects that I've been curious about, right in my professional environment. For instance, this week I'm working on an advanced retrieval system, something I've always wanted to try.

Considering young engineers or students interested in entering this field, do you have any advice or recommendations for them?

My main advice is that simply following tutorials and reading books isn't enough to truly learn. The key is to build something. You need to apply what you've learned, either in a professional setting or through a personal project. In technology, and especially in AI, hands-on experience is essential. The possibilities with AI are vast. Building on top of existing AI systems is surprisingly accessible.

For instance, I'm currently working on an AI-based pet project. It's essentially a photo translator using a Raspberry Pi device, which has a computer, display, and camera. The idea is that you take a picture of something, and the system uses GPT-4 with vision to describe what it sees. Then, using that description, it generates a DALL·E 3 image related to the object and displays it on the screen. I call it the 'Unreal Camera.' For example, if you take a picture of a dog, it creates an artistic interpretation of that dog. It essentially presents a graphic version of whatever you photograph.

This kind of project would have been impossible to undertake alone a few years ago; it would have required a whole team and several years. Now, thanks to the power of AI, I was able to build it in just two days. So, my recommendation for anyone entering this field is to start building something practical and useful. That's the best way to learn and understand the potential of AI. 

Do you have a favorite AI tool at the moment?

I primarily use ChatGPT and a fascinating app called Perplexity, which is great for research. Perplexity is unique in how it can visit different websites and compile data. Another tool I frequently use is GitHub Copilot for coding. It's incredibly helpful, as it assists in writing code. These tools, especially Copilot, have been instrumental in my work. 

Thanks for your time, Drew! We’ll let you get back to your AI tasks now.

If you’d like an opportunity to work at Sixfold, check out our vacancies.