AI-controlled robotic arm for coin bag handling using Microsoft Project Bonsai
A financial institution's research lab developed a unique AI control system for a robotic arm to manipulate heavy coin bags with 95% accuracy in real-life conditions. Neal Analytics Autonomous Systems experts helped design and train the solution leveraging the Microsoft Project Bonsai platform.
The institution needs to handle large quantities of coin bags. Carts containing up to 40 coin bags must be unloaded by human operators for counting and repackaging. Each bag weighs about 2 kg (4.4 lbs.), and the bags at the bottom of the cart can be hard to reach by hand. This manual handling, repeated multiple times a day, put unnecessary stress on operators’ backs and arms.
The physical demands of this task also quickly tire human operators, making it difficult to scale operations during peak times.
The research lab decided to look for a solution that leverages a robotic arm to pick up each bag from a cart and drop it on a table at the operator’s level. However, existing simple robotic control systems could not solve this problem because the coin bags could have unpredictable shapes and locations. Traditional robotic control systems also could not adjust to these changing operating conditions dynamically.
Instead, the customer decided to work with the Microsoft AI team to find a workable and innovative solution.
To solve this challenge, the customer decided to leverage Microsoft Project Bonsai, a platform used to design, train, and deploy Autonomous Systems.
Autonomous System AI agents, aka “brains,” self-train using the concept of Deep Reinforcement Learning (DRL). This trial-and-error approach requires an accurate simulator, as the AI agent cannot self-train on the real-life system. Therefore, the customer decided to develop a MuJoCo physics-based simulation to train the agent.
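The trial-and-error loop behind DRL can be sketched with a toy simulator and tabular Q-learning. This is a deliberately simplified stand-in: the 1-D "reach the slot" task, the reward shaping, and all names below are illustrative assumptions, not the customer's MuJoCo setup or the Bonsai training pipeline.

```python
import random

# Toy stand-in for a physics simulator: a 1-D task in which the agent
# nudges a gripper left/right toward a target slot. Purely illustrative.
class ToySimulator:
    def __init__(self, n_slots=6):
        self.n_slots = n_slots

    def reset(self):
        self.pos = random.randrange(self.n_slots)
        self.target = random.randrange(self.n_slots)
        return (self.pos, self.target)

    def step(self, action):  # action: 0 = move left, 1 = move right
        move = 1 if action == 1 else -1
        self.pos = max(0, min(self.n_slots - 1, self.pos + move))
        done = self.pos == self.target
        reward = 1.0 if done else -0.1  # small penalty for wasted moves
        return (self.pos, self.target), reward, done

# Tabular Q-learning: the same trial-and-error loop DRL uses, with a
# lookup table standing in for the deep network.
def train(episodes=4000, alpha=0.5, gamma=0.9, eps=0.1):
    q = {}
    sim = ToySimulator()
    for _ in range(episodes):
        state = sim.reset()
        for _ in range(50):  # cap episode length
            if random.random() < eps:
                action = random.randrange(2)  # explore
            else:
                action = max((0, 1), key=lambda a: q.get((state, a), 0.0))  # exploit
            next_state, reward, done = sim.step(action)
            best_next = 0.0 if done else max(q.get((next_state, a), 0.0) for a in (0, 1))
            q[(state, action)] = (1 - alpha) * q.get((state, action), 0.0) \
                + alpha * (reward + gamma * best_next)
            state = next_state
            if done:
                break
    return q
```

After enough episodes, the greedy policy derived from `q` moves the gripper directly toward the target. A real brain replaces the table with a neural network and the toy task with the full physics simulation, which is why an accurate simulator is a prerequisite.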
After analysis, and considering the inputs of both Neal Analytics and Microsoft experts as well as the customer’s operators and researchers, the team decided to build a solution that would use a single AI agent (i.e., Project Bonsai “brain”). Vision sensors would then provide the necessary brain inputs, first to train and then to operate the agent.
Using the simulator, each agent training run – i.e., from start to working brain – required between 300,000 and 5 million DRL training cycles and would take up to 20 hours.
Once training was complete, the customer would test the AI agent with different cart setups, ranging from 5 to 40 bags, to evaluate whether the robot arm could find, pick up, and finally drop each bag successfully on the table.
After multiple tests using different training and design strategies, the AI-controlled robot was able to pick up bags 95% of the time on the first attempt; for the remaining 5%, the robot would make a second attempt.
In addition, the robot was able to perform this task at a speed close to human speed, making it a viable solution for the next step in this project: field deployment.
As the robot’s speed was limited by the inverse kinematics calculations needed for the brain to instruct the controller, further speed increases should be possible by optimizing these calculations.
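To give a sense of the kind of computation involved, here is a textbook analytic inverse-kinematics solver for a planar two-link arm. It is a sketch under simplifying assumptions (two links, unit lengths, elbow-down solution) and is not the customer's actual solver or arm geometry:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Solve joint angles (theta1, theta2) placing the end effector at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)  # elbow-down solution
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def forward(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics, used to verify an IK solution."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

Even this two-link case requires trigonometric solves per command; a real six-axis arm solves a far larger system on every control cycle, which is why optimizing these calculations offers headroom for speed.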