
Automating Digital Underwriting with Machine Learning

Leverage machine learning with real-time data processing to automate digital underwriting.

Use cases: Gen AI, Analytics

Industries: Insurance, Financial Services, Healthcare

Products and tools: Time Series, Atlas Charts, Spark Connector

Partners: Databricks

Imagine being able to offer your customers personalized, usage-based premiums that take into account their driving habits and behavior. To do this, you'll need to gather data from connected vehicles, send it to a machine learning platform for analysis, and then use the results to create a personalized premium for your customers. You’ll also want to visualize the data to identify trends and gain insights. This unique, tailored approach will give your customers greater control over their insurance costs while helping you to provide more accurate and fair pricing.

In the GitHub repo, you will find detailed, step-by-step instructions on how to build the data upload and transformation pipeline leveraging MongoDB Atlas platform features, as well as how to generate, send, and process events to and from Databricks.

By the end of this demo, you'll have created a data visualization with Atlas Charts that tracks the changes of automated insurance premiums in near real time.

This architecture applies beyond insurance to other industries that need to act on time-stamped data:

  • Financial Services: Banks and financial institutions must be able to make sense of time-stamped financial transactions for trading, fraud detection, and more.

  • Retail: Real-time insight into what is happening across stores and inventory right now.

  • Healthcare: From the modes of transportation to the packages themselves, IoT sensors enable supply chain optimization both in transit and on site.

Figure 1. Reference Architecture With MongoDB

A basic example data model to support this use case would include customers, the trips they take, the policies they purchase, and the vehicles insured by those policies.

This example builds out three MongoDB collections, as well as two materialized views. The full Hackloade data model, which defines all of the MongoDB objects within this example, can be found on GitHub.

Figure 2. MongoDB data model approach
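
To make the data model concrete, the sketch below shows what minimal documents in these collections might look like. The field names and values are illustrative assumptions, not the authoritative schema; refer to the Hackloade data model on GitHub for the real definitions.

```python
# Illustrative example documents -- field names are assumptions, not the
# authoritative Hackloade schema (see the GitHub repo for the real model).
from datetime import datetime

customer = {
    "_id": "CUST-0001",
    "name": "Ada Driver",
    "policies": ["POL-1001"],            # references into the policy collection
}

vehicle = {
    "_id": "VIN-5YJ3E1EA7JF000001",
    "make": "Acme Motors",
    "model": "Roadster",
    "policyId": "POL-1001",
}

policy = {
    "_id": "POL-1001",
    "customerId": "CUST-0001",
    "baselinePremium": 120.00,           # input to the ML premium model
    "monthlyPremium": [],                # calculated premiums are appended here
}

trip = {                                 # one document per recorded journey
    "customerId": "CUST-0001",
    "vehicleId": "VIN-5YJ3E1EA7JF000001",
    "start": datetime(2023, 6, 1, 8, 5),
    "end": datetime(2023, 6, 1, 8, 42),
    "distanceKm": 23.4,
}
```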

A dataset that includes the total distance driven on each car journey is loaded into MongoDB. A cron job runs every day at midnight to summarize the day's trips and compile them into a document stored in a new collection called 'customerTripDaily.' A second cron job runs on the 25th day of each month, aggregating the daily documents into a new collection called 'customerTripMonthly.' Every time a new monthly summary is created, an Atlas function posts the total distance for the month and the baseline premium to Databricks for an ML prediction. The prediction is then sent back to MongoDB and added to the corresponding 'customerTripMonthly' document. As a final step, you can visualize all of your data with Atlas Charts.
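
If you implement the two jobs as Atlas scheduled triggers, the schedules described above can be expressed as standard CRON strings. The snippet below is only a reference for those two schedules, assuming the trigger configuration uses CRON expressions.

```python
# CRON schedules for the two jobs described above (assumed to be configured
# as Atlas scheduled triggers with advanced/CRON schedules).
DAILY_TRIP_SUMMARY_SCHEDULE = "0 0 * * *"     # every day at midnight
MONTHLY_TRIP_SUMMARY_SCHEDULE = "0 0 25 * *"  # midnight on the 25th of each month
```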

1. Create the data processing pipeline

The data processing pipeline component of this example consists of sample data, a daily materialized view, and a monthly materialized view. A sample dataset of IoT vehicle telemetry data represents the motor vehicle trips taken by customers. It's loaded into the collection named 'customerTripRaw' (1). The dataset can be found on GitHub and can be loaded via mongoimport or other methods. To create the daily materialized view, a scheduled trigger executes a function that runs an aggregation pipeline, generating a daily summary of the raw IoT data and placing it in a materialized view collection named 'customerTripDaily' (2). Similarly, for the monthly materialized view, a scheduled trigger executes a function that runs an aggregation pipeline summarizing the 'customerTripDaily' collection on a monthly basis and placing the results in a materialized view collection named 'customerTripMonthly' (3).

See the following GitHub repos to create the data processing pipeline:

Figure 3. Creating a data processing pipeline
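
The repo implements these steps as Atlas Functions (JavaScript) fired by scheduled triggers. As a rough sketch of the same logic, the aggregation pipelines below build the two materialized views with PyMongo; the field names (customerId, start, distanceKm) and the connection string are assumptions based on the illustrative documents above, so adapt them to the actual dataset.

```python
# Sketch of the materialized-view pipelines, assuming the illustrative field
# names above. Requires MongoDB 5.0+ for $dateTrunc.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
db = client["insurance"]

# (2) Daily materialized view: summarize the raw IoT trips per customer per day.
daily_pipeline = [
    {"$group": {
        "_id": {"customerId": "$customerId",
                "day": {"$dateTrunc": {"date": "$start", "unit": "day"}}},
        "tripCount": {"$sum": 1},
        "totalDistanceKm": {"$sum": "$distanceKm"},
    }},
    # Flatten the compound key into top-level fields for easier querying.
    {"$set": {"customerId": "$_id.customerId", "day": "$_id.day"}},
    # Upsert the summaries into the materialized-view collection.
    {"$merge": {"into": "customerTripDaily", "whenMatched": "replace"}},
]
db["customerTripRaw"].aggregate(daily_pipeline)

# (3) Monthly materialized view: roll the daily summaries up by calendar month.
monthly_pipeline = [
    {"$group": {
        "_id": {"customerId": "$customerId",
                "month": {"$dateTrunc": {"date": "$day", "unit": "month"}}},
        "totalDistanceKm": {"$sum": "$totalDistanceKm"},
    }},
    {"$set": {"customerId": "$_id.customerId", "month": "$_id.month"}},
    {"$merge": {"into": "customerTripMonthly", "whenMatched": "replace"}},
]
db["customerTripDaily"].aggregate(monthly_pipeline)
```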

2. Automate calculations with the machine learning model

The decision-processing component of this example consists of a scheduled trigger that collects the necessary data and posts the payload to a Databricks MLflow API endpoint. (The model was previously trained using the MongoDB Spark Connector on Databricks.) It then waits for the model to respond with a calculated premium based on the miles driven by a given customer in a month. The scheduled trigger then updates the 'customerPolicy' collection, appending the new monthly premium calculation as a subdocument within the 'monthlyPremium' array.

See the following GitHub repos to automate calculations with the machine learning model:

Figure 4. Automating calculations with the machine learning model
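
In the repo, this step is an Atlas scheduled trigger (a JavaScript function) that calls the Databricks-hosted model. The sketch below shows the same flow in Python; the serving endpoint URL, request payload, and response shape are assumptions based on a typical Databricks model serving setup, so check the repo for the exact contract.

```python
# Sketch of the decision step: post the month's mileage and baseline premium to
# an ML serving endpoint, then append the returned premium to the policy.
# Endpoint URL, payload, and response fields are assumptions, not the repo's
# exact contract.
from datetime import datetime

import requests
from pymongo import MongoClient

db = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")["insurance"]

monthly = db["customerTripMonthly"].find_one(
    {"customerId": "CUST-0001", "month": datetime(2023, 6, 1)}
)
policy = db["customerPolicy"].find_one({"customerId": "CUST-0001"})

# Call the model behind a Databricks model serving endpoint.
response = requests.post(
    "https://<databricks-workspace>/serving-endpoints/<model>/invocations",
    headers={"Authorization": "Bearer <databricks-token>"},
    json={"dataframe_records": [{
        "totalDistanceKm": monthly["totalDistanceKm"],
        "baselinePremium": policy["baselinePremium"],
    }]},
    timeout=30,
)
predicted_premium = response.json()["predictions"][0]

# Append the calculated premium as a new subdocument in the monthlyPremium array.
db["customerPolicy"].update_one(
    {"_id": policy["_id"]},
    {"$push": {"monthlyPremium": {
        "month": monthly["month"],
        "premium": predicted_premium,
    }}},
)
```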

3. Visualize your data with Atlas Charts

Once the monthly premium calculations have been appended, it's easy to set up Atlas Charts to visualize your newly calculated usage-based premiums. Configure different charts to see how premiums have changed over time and to discover patterns.

Authors:

  • Jeff Needham, MongoDB

  • Ainhoa Múgica, MongoDB

  • Luca Napoli, MongoDB

  • Karolina Ruiz Rogelj, MongoDB
