7 steps to design and deploy a Project Bonsai-based AI agent for an extruder

Extruders are used across many industries to manufacture products based on a wide variety of raw materials, from feedstock in food manufacturing to plastic and metal for all kinds of industrial and consumer products.

One challenge all these processes share is ensuring consistent output quality despite variability in input characteristics and extruders whose behavior differs due to in-tolerance specification differences and wear.

Autonomous Systems designed and deployed using the Microsoft Bonsai platform leverage AI agents (aka “brains”) that add an adaptable intelligent control layer on top of existing process controllers such as PLCs, PIDs, etc. While traditional automation control systems optimize for setpoints (i.e., target parameters), Autonomous Systems agents optimize for results (i.e., actual output specifications).

Designing and deploying an extruder Autonomous Systems AI agent requires the following specific steps:

  1. Identify key system variables with Machine Teaching
  2. Collect the training data needed to train the AI simulator
  3. Train the AI simulator
  4. Define the reward function
  5. Train the agent using Deep Reinforcement Learning
  6. Test the agent with the simulator
  7. Deploy the agent in production

Step 1: Identify key system variables with Machine Teaching

A manufacturing process using an extruder is influenced by many variables. By leveraging subject matter experts using Machine Teaching, the team selects the key inputs, outputs, and environmental variables needed to design the agent parameters.

For instance, in a food manufacturing process that combined an extruder and an oven, out of all the possible variables the team selected the following (a code sketch of the resulting state and action spaces appears after the figure):

AI agent inputs:
  • Extruder
    • Water rate
    • Flour rate
    • Screw speed
    • Cutter speed
    • Screw torque
  • Oven temperature
  • System output: Snack visual appearance (multiple data points, see below for more details)
AI agent outputs (i.e., extruder parameters controlled by the agent):
  • Water rate
  • Flour rate
  • Screw speed
  • Cutter speed

Figure: Key system variables identified through Machine Teaching
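Here is a minimal sketch of how these state and action spaces might be captured in code. All field names, units, and comments are illustrative assumptions, not values from the actual project.

```python
# Illustrative only: the agent's observed state and controlled actions from
# Step 1, expressed as plain data structures. Field names and units are
# hypothetical.
from dataclasses import dataclass

@dataclass
class ExtruderState:
    water_rate: float        # kg/h at the extruder inlet
    flour_rate: float        # kg/h
    screw_speed: float       # rpm
    cutter_speed: float      # rpm
    screw_torque: float      # N*m, an indirect indicator of dough viscosity
    oven_temperature: float  # deg C
    snack_size: float        # visual appearance data points produced by the
    snack_curvature: float   # vision model described in Step 2
    snack_color: float

@dataclass
class ExtruderAction:
    water_rate: float        # setpoints the agent hands back to the
    flour_rate: float        # existing process controller
    screw_speed: float
    cutter_speed: float
```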

Step 2: Collect the training data needed to train the AI simulator

Autonomous Systems are trained using Deep Reinforcement Learning (DRL), which requires an accurate process simulator. For extruder use cases, the team concluded that the most effective approach was to build an AI simulator that learns the system’s behavior from data, rather than a component-based model built with a standard industry solution.

Training this type of AI simulator requires a large amount of real-life process data. In many cases, all or most of these data points are readily accessible through the existing control system. In the example above, all but the snack visual characteristics were at the team’s disposal by simply capturing the data at the control system level.

When one or more data points are not readily available, the team needs to find an alternative way to collect them programmatically (i.e., without a human manually recording the data). In this example, snack visual characteristics such as size, curvature, and color were not available.

To circumvent this challenge, the food manufacturer built an AI-powered vision model to extract these data points in real time and at the scale required to build the AI simulator training dataset.
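As a rough illustration of this data-collection step, the snippet below joins time-stamped control-system tags with the vision model’s measurements into a single training table. File names, column names, and the time tolerance are hypothetical.

```python
# Illustrative only: merge control-system tag history with the vision model's
# output into one training table. File and column names are hypothetical.
import pandas as pd

# Time-stamped process variables exported from the control system
process = pd.read_csv("control_system_log.csv", parse_dates=["timestamp"])

# Size/curvature/color measurements produced by the AI vision model
vision = pd.read_csv("vision_model_output.csv", parse_dates=["timestamp"])

# Align each visual measurement with the closest process snapshot,
# tolerating small clock differences between the two systems
dataset = pd.merge_asof(
    process.sort_values("timestamp"),
    vision.sort_values("timestamp"),
    on="timestamp",
    tolerance=pd.Timedelta("5s"),
    direction="nearest",
).dropna()

dataset.to_csv("simulator_training_data.csv", index=False)
```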

Step 3: Train the AI simulator

Using this data, the team can then train an accurate AI simulator. As more data and more edge cases are recorded on the live manufacturing line, the simulator can be extended to eventually enable the AI agent to support a wider range of situations.
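A minimal sketch of the AI-simulator idea, assuming the merged dataset from Step 2: a regression model learns to predict the next process state from the current state plus the control actions. A production simulator would involve far more careful feature engineering and validation.

```python
# Illustrative surrogate simulator: predict the state at time t+1 from the
# state and actions at time t. Column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

data = pd.read_csv("simulator_training_data.csv")

state_cols = ["screw_torque", "snack_size", "snack_curvature", "snack_color"]
action_cols = ["water_rate", "flour_rate", "screw_speed", "cutter_speed"]

# X: state + action at time t; y: state at time t+1
X = data[state_cols + action_cols].iloc[:-1]
y = data[state_cols].shift(-1).iloc[:-1]

# Keep the time ordering intact when splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

sim = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
sim.fit(X_train, y_train)
print("held-out R^2:", sim.score(X_test, y_test))
```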

Step 4: Define the reward function

Using Machine Teaching methods here too, and in parallel with the previous steps, the team leverages subject matter experts’ real-life experience to define heuristics that can be translated into a reward function, which is then used to train the agent with DRL.

A reward function will “tell” the AI agent during training whether the action it took was appropriate or not.
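For illustration, here is what such a heuristic-derived reward might look like for the snack example, reusing the ExtruderState sketch from Step 1. The targets, weights, and torque heuristic are invented for this example, not taken from the actual project.

```python
# Hypothetical product-spec targets used by the illustrative reward below
TARGET_SIZE, TARGET_CURVATURE, TARGET_COLOR = 30.0, 0.40, 0.70
TORQUE_LIMIT = 120.0

def reward(state: ExtruderState) -> float:
    """Higher reward when the product is on spec; small penalty for torque spikes."""
    size_err = abs(state.snack_size - TARGET_SIZE) / TARGET_SIZE
    curve_err = abs(state.snack_curvature - TARGET_CURVATURE) / TARGET_CURVATURE
    color_err = abs(state.snack_color - TARGET_COLOR) / TARGET_COLOR
    quality = 1.0 - (size_err + curve_err + color_err) / 3.0
    torque_penalty = 0.1 * max(0.0, state.screw_torque - TORQUE_LIMIT) / TORQUE_LIMIT
    return quality - torque_penalty
```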

Step 5: Train the agent using Deep Reinforcement Learning

Once both the reward function and the AI simulator are ready, and leveraging the Microsoft Project Bonsai platform, the AI agent “self-trains” by iterating over hundreds of thousands to millions of cycles.

As all this training happens using simulators in the Azure cloud, multiple reward functions and brain designs can be trained and tested in parallel.

Figure: Deep Reinforcement Learning
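Schematically, each training cycle ties the previous pieces together: the policy proposes setpoints, the AI simulator predicts the next state, and the reward function scores the result. The Bonsai platform drives this interaction through its simulator SDK; the sketch below shows only the bare cycle, with a random-policy stub standing in for the learner.

```python
# Bare-bones view of one DRL training episode. The real learner and the
# Bonsai plumbing are abstracted away; RandomAgent is a placeholder policy.
import random

class RandomAgent:
    """Stand-in for the policy being learned."""
    def act(self, state):
        # one random setpoint per controlled variable (Step 1's four actions)
        return [random.uniform(0.0, 1.0) for _ in range(4)]

def run_episode(simulator_step, reward_fn, initial_state, steps=200):
    """simulator_step: the AI simulator from Step 3; reward_fn: Step 4."""
    agent, state, total = RandomAgent(), initial_state, 0.0
    for _ in range(steps):
        action = agent.act(state)
        state = simulator_step(state, action)
        total += reward_fn(state)
    return total
```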

Step 6: Test the agent with the simulator

Once the agent is trained, the team can test it in a safe virtual environment by leveraging the same simulator. This time, the simulator is not used to train the AI agent (no reward function involved) but to monitor its effectiveness.
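Such an assessment run can reuse the same loop but score the trained agent against engineering KPIs rather than the training reward, for instance the fraction of simulated time the product stays within specification. The 5% threshold below is invented for the example, and the targets are reused from the Step 4 sketch.

```python
# Illustrative assessment: fraction of simulated time the product stays
# within ±5% of the target size. Thresholds are hypothetical.
def assess(trained_agent, simulator_step, initial_state, episodes=100, steps=200):
    in_spec, total = 0, 0
    for _ in range(episodes):
        state = initial_state
        for _ in range(steps):
            state = simulator_step(state, trained_agent.act(state))
            total += 1
            if abs(state.snack_size - TARGET_SIZE) / TARGET_SIZE < 0.05:
                in_spec += 1
    return in_spec / total
```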

Step 7: Deploy the agent in production

Once the tests have proven successful, the AI agent is deployed, usually locally in a container hosted on an Azure edge device such as the Azure Stack Edge, and tested on the real process.

Initially, the agent will advise an operator, who will then decide whether or not to change the controller parameters. Once this phase demonstrates the agent’s efficacy, the customer can decide either to remain in this “Operator Adviser” mode (sketched in code below) or to switch to an “Operator Supervised” mode:

  • Operator Adviser mode: Agent advises human operator who can then decide to follow or ignore recommended controller parameters
  • Operator Supervised mode: Agent modifies process controller parameters as needed under the supervision of a human operator
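In Operator Adviser mode, the line software might query the deployed brain and surface its recommendation to the operator, roughly as sketched below. Exported Bonsai brains running in a container expose an HTTP prediction endpoint; the URL and the state payload shown here are assumptions to verify against your own deployment.

```python
# Illustrative "Operator Adviser" call: send the current state to the
# deployed brain and display the recommended setpoints for human review.
# The endpoint follows the pattern used by exported Bonsai brain containers;
# confirm it for your deployment. State values are made up.
import requests

BRAIN_URL = "http://localhost:5000/v1/prediction"

state = {
    "water_rate": 12.5, "flour_rate": 48.0,
    "screw_speed": 310.0, "cutter_speed": 95.0,
    "screw_torque": 87.0, "oven_temperature": 180.0,
    "snack_size": 29.1, "snack_curvature": 0.42, "snack_color": 0.68,
}

recommendation = requests.post(BRAIN_URL, json=state, timeout=5).json()
print("Recommended setpoints for operator review:", recommendation)
```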

An iterative approach

These seven steps are a gross simplification of a process that can take months before it converges to an implementable solution. However, at a high level, they provide a good overview of the end-to-end process a manufacturer will have to follow to design, train, and deploy an Autonomous System AI agent using the Microsoft Project Bonsai platform.

Furthermore, although they were depicted as a sequential, linear progression, the reality will be quite different. During this process, redesigning, fine-tuning, additional data gathering, and the like will be required. Business life is always messier than theory supposes.

A more apt representation of the end-to-end process is the following diagram. It captures not only the business element that must start the process before the technological aspects come into play, but also the non-sequential, iterative nature of a typical extruder project, or for that matter any Autonomous System project.

 

Figure: Approach to develop the AI model

A team of experts can be a crucial element of success and accelerate this seven-step process. Neal Analytics acts as a one-stop shop for cloud, data, and AI solutions. With our expertise in Autonomous Systems, we can design, train, and deploy a Project Bonsai-based AI agent to fit your needs. Learn more about our roles here.

 

Learn more about Autonomous Systems: