
How we build our risk models

Product & Engineering
Mo El Mahallawy

Co-Founder & CTO

April 14, 2022

A core value prop of Shepherd is our ability to serve our brokers with speed and efficiency. Our technology needs to help underwriters ingest submissions, price them quickly, and turn around quotes for brokers. To accomplish this, we made an early decision as a company to build our own policy admin system, integrated with Shepherd’s proprietary pricing models. We wanted to share a bit more about how we build and version our models.

From Excel to code

In the insurance industry, it’s fairly common for carriers to have risk models built in Excel workbooks by the actuarial team. Excel is the tool of choice, and frankly, it’s a very powerful one. Taking a complicated pricing workbook and turning it into code is a unique engineering challenge, and each company approaches it in its own way. At Shepherd, we create engineering-friendly Excel models that our scripts can digest to generate files for use within our code.

Models are a mix of formulas/calculations and configs

The common structure for our models:

- Input sheets: where we input data into the model
- Calculation sheets: break down the different calculations we must do
- Output sheets: the final output produced by the model
- Lookup sheets: constants we look up within our calculations (think VLOOKUPs)
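To make the structure concrete, here’s a minimal sketch of how these four sheet types might map onto code. All names, class codes, and rates below are illustrative, not Shepherd’s actual schema:

```typescript
// Illustrative only - not Shepherd's actual schema.

// Input sheet: the data an underwriter enters for a submission
interface ModelInput {
  state: string; // insured location
  payroll: number;
  classCode: string;
}

// Lookup sheet: constants referenced by calculations (think VLOOKUP)
const baseRateByClassCode: Record<string, number> = {
  '8810': 0.0012, // clerical (made-up rate)
  '5403': 0.0475, // carpentry (made-up rate)
};

// Calculation sheet: intermediate steps between input and output
function calculateBasePremium(input: ModelInput): number {
  const rate = baseRateByClassCode[input.classCode];
  if (rate === undefined) {
    throw new Error(`Unknown class code: ${input.classCode}`);
  }
  // Rate applied per $100 of payroll
  return (input.payroll / 100) * rate;
}

// Output sheet: the final product of the model
interface ModelOutput {
  basePremium: number;
}

const output: ModelOutput = {
  basePremium: calculateBasePremium({ state: 'CA', payroll: 1_000_000, classCode: '8810' }),
};
// basePremium: 12 (within floating-point error)
```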

Here’s an example sheet we’ve created for the purposes of this blog post.

Code to create code

Once our Excel models are in a format we’re happy with, we need to create constants files to be used within the formulas we will write. For this, we’ve created scripts that import the Excel file and export JavaScript files that our code can then use. We also define the schemas we expect and any mutations we wish to apply on export. For example, we might want to convert the state (insured location) names in our Excel sheet into their corresponding abbreviations. The end result of this step is a folder with all constant files in one place.
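As an illustration of this codegen step, a script along these lines could emit a constants file. The workbook parsing itself (e.g. with a library like SheetJS) is omitted, and all names and values here are hypothetical:

```typescript
// Sketch of the export script's core: turn parsed lookup-sheet rows into
// a generated constants module. Parsing the workbook itself is omitted.

interface LookupRow {
  name: string; // e.g. 'Fronting Fee'
  value: number;
}

// Example mutation: convert full state names into their abbreviations
const stateAbbreviations: Record<string, string> = {
  California: 'CA',
  'New York': 'NY',
  Texas: 'TX',
};

function generateConstantsFile(rows: LookupRow[]): string {
  const entries = rows
    .map((row) => {
      const key = stateAbbreviations[row.name] ?? row.name;
      return `  '${key}': { value: ${row.value} },`;
    })
    .join('\n');
  return [
    '// AUTO-GENERATED from the Excel model - do not edit by hand',
    'export const constantsToVariables = {',
    entries,
    '};',
    '',
  ].join('\n');
}

const file = generateConstantsFile([
  { name: 'Fronting Fee', value: 0.05 },
  { name: 'California', value: 1.25 },
]);
// The script would then write `file` to a path like src/generated/constants.ts
```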

We found this strategy especially useful because formulas should (hopefully) stay consistent while constants change and update over time. For example, we may decide to change a variable that affects how much commission we charge. This is an easy change for anyone (eng or non-eng) to make within Excel; we then re-run our script against the Excel file to regenerate the code. Overall, this has given our insurance team the ability to make changes easily and quickly, and have them reflected in the model with a quick turnaround from engineering. The additional benefit is the savings in engineering resources: the team can continue to focus on new initiatives rather than constantly updating existing pricing models.


Creating the formulas

Building the logic to match formulas within a workbook file is the most time-consuming and error-prone element of this process. Engineers need to follow the various calculations done within the Excel model and write code to match outputs.

const findFixedFees = ({
  profitLoadFeeOverride,
}: {
  profitLoadFeeOverride?: Percentage | null;
}): {
  frontingFee: Percentage;
  tpaFee: Percentage;
  triaFee: Percentage;
  employersLiabilityFee: Percentage;
  profitLoadFee: Percentage;
} => {
  // Keys mirror the constant names in the Excel model
  // (including the trailing space in 'Profit Load ')
  const frontingFee = constantsToVariables['Fronting Fee'].value;
  const tpaFee = constantsToVariables.TPA.value;
  const triaFee = constantsToVariables.TRIA.value;
  const employersLiabilityFee = constantsToVariables["Employer's Liability"].value;
  // ?? rather than || so a legitimate 0% override isn't ignored
  const profitLoadFee = profitLoadFeeOverride ?? constantsToVariables['Profit Load '].value;
  return { frontingFee, tpaFee, triaFee, employersLiabilityFee, profitLoadFee };
};
We’ve put significant effort into combating errors by doubling down on a strong testing culture for our models and double-checking our work with the insurance team. We’ve also created testing helpers that connect to the Google Sheet model, fill in data, and capture the model’s output. After that’s done, we run those same inputs against our code models to make sure the outputs match. Overall, if we could export the functions from Excel directly into code, we would – but that’s for later exploration and another blog post!
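The comparison step of such a testing helper could look something like this sketch. The part that fills in the Google Sheet and captures its output is omitted, and the labels and numbers are illustrative:

```typescript
// Sketch of the comparison step in a testing helper.

interface Scenario {
  label: string;
  sheetOutput: number; // value captured from the Google Sheet model
  codeOutput: number;  // value produced by the code model on the same inputs
}

// Floating-point results rarely match bit-for-bit, so compare within a tolerance
function findMismatches(scenarios: Scenario[], tolerance = 1e-6): string[] {
  return scenarios
    .filter((s) => Math.abs(s.sheetOutput - s.codeOutput) > tolerance)
    .map((s) => `${s.label}: sheet=${s.sheetOutput} code=${s.codeOutput}`);
}

const mismatches = findMismatches([
  { label: 'CA / $1M payroll', sheetOutput: 12, codeOutput: 12 },
  { label: 'NY / $2M payroll', sheetOutput: 30, codeOutput: 30.5 },
]);
// mismatches → ['NY / $2M payroll: sheet=30 code=30.5']
```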

Our own internal library to version our changes

Once models are complete, we publish them to an internal library that houses all our models. It’s currently a JavaScript library, hosted on our internal npm. This gives us the ability to make changes to our models freely and bump the version when we’re ready to release (typical practice for releasing open source code). We then import this library into our backend service that runs the calculations. While splitting the models out into a separate package adds some overhead, we’ve found the benefits outweigh the drawbacks:

- Engineers can split their work and make changes to models without fear of affecting stored data
- We use TypeScript, which asserts our model changes are safe to release
- We can test our pricing against multiple versions of the library
- We can log which model version was used to price a given quote – especially important as we continue iterating on our models while honoring previous prices
- We can retroactively test changes to the models against existing or previous policies to refine our pricing
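As a rough sketch of what version-aware pricing enables: in practice each version is a separately published npm release of the library, but here two versions are inlined and the rates are made up:

```typescript
// Sketch of version-aware pricing. The version registry and rates are
// hypothetical; in reality each version is a published npm release.

type PricingFn = (payroll: number) => number;

const modelVersions: Record<string, PricingFn> = {
  '1.2.0': (payroll) => (payroll / 100) * 0.0012,
  '1.3.0': (payroll) => (payroll / 100) * 0.0013, // rate bumped in a newer release
};

interface PricedQuote {
  premium: number;
  modelVersion: string; // stored with the quote so previous prices can be honored
}

function priceQuote(payroll: number, version: string): PricedQuote {
  const model = modelVersions[version];
  if (!model) throw new Error(`Unknown model version: ${version}`);
  return { premium: model(payroll), modelVersion: version };
}

// A quote priced before the rate change can be re-priced with the same model:
const oldQuote = priceQuote(1_000_000, '1.2.0');
// oldQuote.modelVersion === '1.2.0'; oldQuote.premium ≈ 12
```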

Other options we’ve considered

Prior to building our models, we spent two weeks exploring how other insurtechs have solved this same challenge. Here’s a shortlist of options we considered and why we decided to shy away from them.

Run your models via Google API

If your models are stored and edited in Google Sheets, why not run them there? On the surface, this isn’t a bad idea, but it quickly runs into a few problems:

- Latency issues
- Lack of validation on inputs
- Pricing two quotes at the same time would cause collisions (we could copy the model for each quote, but that’s error-prone)
- Changes to the model would mean remapping cells in the sheet to the code we’ve written (for filling)
- No clear path for handling model changes

Run your Excel file on the server, during runtime

This is another option we considered that is similar to the prior one but would solve some of the latency and collision issues. It would also mean we could store the different model versions with our code, allowing us to run the appropriate one. Unfortunately, our calculations are pretty complex, and we haven’t found a library that makes it easy to execute calculations within Excel. Most libraries are great at reading and writing Excel files, but not at running calculations. We could have explored some Microsoft APIs to solve this, but ultimately, testing would have been challenging. We may explore this option in the future to speed up the process of building more models.

Separate model calculations into a separate server, deploy an instance for each git commit or PR

One of our fellow insurtech peers does this. Every change to their models results in a new server deploy, which their backend API calls for model calculations. This could be a fairly simple server that does just one thing – receive an API request, run the calculations, return the results. We considered this option deeply but felt that having a TypeScript library imported by our TypeScript backend would maximize type safety and ultimately reduce type errors. And it works! We’ve had many instances where we’ve renamed variables in our library and could quickly identify every affected call site when the new version was installed and imported into our backend. This would have been difficult to do with an API.

Finding our edge

Our Head of Underwriting, Costas Hadjipateras, wrote a great post about underwriting as a blend of art and science. Our ability to leverage engineering to iterate on the science is our unique edge as a modern insurtech. As a result, our platform empowers our underwriting team to do their best work (the art). Building our policy admin system and our risk models in-house affords us the opportunity to dig deep into our data calculations and adjust our models at record speed.
