Designing an AI agent to optimize extruder operations: A real-life example

This last video of our five-part series on understanding Autonomous Systems describes, step by step, how the solution that helps make the perfect Cheetos was built.

All Autonomous Systems (AS) designs start by extracting the human subject matter experts' process knowledge and converting it into implementable heuristics that help define the reward function used to train the AI agent with Deep Reinforcement Learning. This first step of the Machine Teaching process is critical because choices made at this stage will impact the project's success over the long term.
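To make the idea concrete, here is a minimal Python sketch of how such operator heuristics might be encoded as a reward function. The target values, weights, and state keys are illustrative assumptions, not values from the actual project.

```python
# Hypothetical sketch: encoding operator heuristics as a reward function.
# Target values and weights below are illustrative assumptions only.
TARGET_LENGTH = 50.0      # mm
TARGET_CURVATURE = 0.3    # unitless curvature index
TARGET_COLOR = 0.8        # normalized color score

def reward(state: dict) -> float:
    """Reward the agent for keeping snack characteristics near target."""
    length_error = (state["length"] - TARGET_LENGTH) ** 2
    curvature_error = (state["curvature"] - TARGET_CURVATURE) ** 2
    color_error = (state["color"] - TARGET_COLOR) ** 2

    # Weights encode the subject matter experts' relative priorities.
    penalty = 1.0 * length_error + 0.5 * curvature_error + 0.25 * color_error
    return -penalty  # higher reward when the product is closer to spec
```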

With this project, the teams initially faced two obstacles:

  1. A key system input was the snack's physical characteristics, such as its length, color, or curvature. Therefore, a new mechanism to capture these data points had to be designed.
  2. The system's complexity was such that “traditional” modeling methods were not an appropriate option for the simulation element.

Capturing the snack’s visual characteristics

AI vision model diagram

To capture the snack characteristics, the customer leveraged a custom Vision AI model that gathers all the needed data accurately and in real time.

This Vision AI serves two purposes:

  1. Gather real-life data to create a training dataset for the simulator (see below for more)
  2. Be one of the inputs for the deployed AI agent, aka the Project Bonsai “brain”.
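As an illustration only, the sketch below uses classical OpenCV measurements as a stand-in for the custom Vision AI model, to show the kind of per-snack features (length, curvature, color) that feed both the simulator training set and the deployed brain. The real solution used a trained deep learning model, and the thresholds and formulas here are assumptions.

```python
import cv2
import numpy as np

def measure_snack(frame: np.ndarray) -> dict:
    """Illustrative stand-in for the custom Vision AI: extract per-snack
    length, curvature, and color from a camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return {}
    snack = max(contours, key=cv2.contourArea)

    # Length: longer side of the minimum-area bounding rectangle.
    (_, _), (w, h), _ = cv2.minAreaRect(snack)
    length = max(w, h)

    # Curvature proxy: contour perimeter relative to the bounding
    # rectangle's perimeter (a straight piece stays close to 1).
    perimeter = cv2.arcLength(snack, True)
    curvature = perimeter / (2 * (w + h) + 1e-6)

    # Color: mean intensity inside the snack mask, normalized to [0, 1].
    snack_mask = np.zeros_like(gray)
    cv2.drawContours(snack_mask, [snack], -1, 255, -1)
    b, g, r, _ = cv2.mean(frame, mask=snack_mask)
    return {"length": length, "curvature": curvature, "color": (r + g + b) / (3 * 255)}
```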

Building the AI-based simulator

AI-based simulator example diagram

As traditional model-based simulation was not an option, the Neal Analytics team built a custom AI model instead. This simulator acts as a black box rather than a component-based model: it models the relationship between the system's inputs and outputs, not its internal behavior.

Using real-life production data, including the snacks' visual characteristics extracted by the Vision AI model, the AI simulator learned which input combinations produce which outputs without ever knowing what the system it simulates is made of.
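A minimal sketch of this black-box approach, assuming hypothetical tag names, a hypothetical data file, and a scikit-learn regressor in place of the actual Neal Analytics model: the simulator is fit purely on observed input/output pairs from production history.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

# Illustrative column names and file; the real tag list is not public.
INPUTS = ["screw_speed", "feed_rate", "barrel_temp", "moisture"]
OUTPUTS = ["length", "curvature", "color"]

# Historical production data joined with the Vision AI measurements.
data = pd.read_csv("production_history.csv")
X_train, X_test, y_train, y_test = train_test_split(
    data[INPUTS], data[OUTPUTS], test_size=0.2, random_state=42)

# The simulator is a plain input->output regressor: it never sees the
# extruder's internal physics, only observed cause and effect.
simulator = MultiOutputRegressor(GradientBoostingRegressor())
simulator.fit(X_train, y_train)
print("held-out R^2:", simulator.score(X_test, y_test))

# During brain training, the RL loop queries the simulator instead of
# the physical production line.
predicted_snack = simulator.predict(X_test.iloc[[0]])
```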

Training the AI agent, aka “the brain”

Training the AI agent brain diagram

Once trained, the simulator was used to train the AI agent. The AI simulator was deployed on Azure, which allowed the teams to run multiple brain trainings in parallel and complete them in a matter of hours.

In turn, multiple brain architectures and reward function strategies could be tested simultaneously to select the best possible one.
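The sketch below only illustrates the idea of comparing several reward weightings in parallel against a cheap simulator. It is not the Bonsai/Azure workflow itself: the "training" here is a simple random search over setpoints against a synthetic simulator, and every name, target, and number is an assumption.

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Illustrative reward weightings (length, curvature, color) to compare.
STRATEGIES = {
    "quality-focus": (1.0, 0.5, 0.25),
    "balanced":      (1.0, 1.0, 1.0),
    "shape-focus":   (0.25, 1.0, 1.0),
}

def toy_simulator(screw_speed, barrel_temp):
    """Stand-in for the trained AI simulator: maps setpoints to
    (length, curvature, color). Purely synthetic numbers."""
    return (screw_speed * 0.5 + barrel_temp * 0.01,
            1.0 / (1.0 + screw_speed * 0.02),
            min(1.0, barrel_temp / 200.0))

def train_candidate(item):
    """Stand-in for one DRL training run: random search over setpoints,
    scored with one reward weighting."""
    name, (w_len, w_curve, w_color) = item
    best_reward, best_action = float("-inf"), None
    for _ in range(10_000):
        action = (random.uniform(50, 150), random.uniform(120, 180))
        length, curvature, color = toy_simulator(*action)
        reward = -(w_len * (length - 60.0) ** 2      # illustrative targets
                   + w_curve * (curvature - 0.4) ** 2
                   + w_color * (color - 0.8) ** 2)
        if reward > best_reward:
            best_reward, best_action = reward, action
    return name, best_reward, best_action

# Because the simulator is cheap to query, many candidate brains can be
# trained side by side and compared quickly.
if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for name, score, action in pool.map(train_candidate, STRATEGIES.items()):
            print(f"{name}: reward={score:.2f}, setpoints={action}")
```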

Three is better than one

Some of you may be a bit confused by now, as three separate AIs are needed for this solution. The following diagram summarizes which AI is used when, both during brain training and after deployment.

The three AIs are:

  • A vision AI to capture the snack’s visual characteristics
  • An AI-based simulator to train the AI agent
  • A Microsoft Project Bonsai brain (AI agent) to optimize the extrusion process.

AI training and production flow

Deploying the brain

The AI agent was then deployed on a real-life test production line to validate its impact. The initial strategy was to use the agent as an operator “advisor” (i.e., the agent does not control the system but offers options to the operator, who then decides). However, initial tests showed that an “operator as AI agent supervisor” model, where the agent directly modifies the control parameters under the operator's supervision, was also a viable option.
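Conceptually, the two philosophies differ only in where the brain's recommendation goes. The hypothetical sketch below illustrates this; the PLC interface, function names, and tags are invented for illustration and do not reflect the plant's actual integration.

```python
class FakePLC:
    """Stand-in for the line's control system interface (illustrative)."""
    def write(self, tag: str, value: float) -> None:
        print(f"PLC write: {tag} = {value}")

def apply_recommendation(recommendation: dict, mode: str, plc: FakePLC) -> None:
    """Route a brain recommendation according to the chosen philosophy."""
    if mode == "advisor":
        # Advisor: show suggested setpoints; the operator stays in the loop
        # and decides whether to act on them.
        print("Suggested setpoints for operator review:", recommendation)
    elif mode == "supervisor":
        # Operator-as-supervisor: the brain writes setpoints directly,
        # while the operator monitors and can override at any time.
        for tag, value in recommendation.items():
            plc.write(tag, value)

apply_recommendation({"screw_speed": 112.0, "barrel_temp": 156.0}, "advisor", FakePLC())
```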

An analogy for these two philosophies can be made with a driving use case. With a GPS, the driver decides what to do and where to go based on the GPS guidance. In autopilot mode, the car drives itself, but drivers need to keep their hands on the wheel to take over whenever necessary.

Learn more about this real-life extrusion process example in this video.

Additional reference material: