Leverage the power of reinforcement learning to build your next AI solution
One of the key limitations of deep learning-powered artificial intelligence is that deep neural networks (DNNs) require a large amount of labeled data for training. Autonomous systems use reinforcement learning on simulators to bypass this labeled data limitation and truly embrace the benefits of DNN-powered AI (“AI”).
Labeled data, i.e. data that a human has created, validated, or tagged, is expensive to source, error-prone at scale, and in certain situations simply not available. For instance, training a speech recognition AI model requires hundreds of hours of human-captioned audio before it is remotely usable. Similarly, a machine translation model from language A to B may require a couple of million human-translated sentences before it can reach an appropriate quality level. As a result, in many industries, access to this labeled data is a showstopper for real-life applications that target a company-specific need.
Reinforcement learning enables companies to work around these intrinsic AI training limitations by letting the model learn through trial and error rather than from labeled data. However, as it is not possible to let the AI run hundreds of thousands or millions of tests on a live device or process, the first step to reinforcement learning is to build a reliable simulator.
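To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning driven entirely by a toy simulator. The simulator (a tank whose temperature drifts, with heat/hold/cool actions) and all reward numbers are invented for illustration; a real process would need a far richer model, but the shape of the training loop is the same.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical toy simulator: temperature moves with the chosen action
# (0 = cool, 1 = hold, 2 = heat) plus some noise; reward favors
# keeping the temperature near 20 degrees.
def step(temp, action):
    temp += {0: -1, 1: 0, 2: 1}[action] + random.choice([-1, 0, 1])
    temp = max(0, min(40, temp))
    reward = 1 if 18 <= temp <= 22 else -1
    return temp, reward

# Tabular Q-learning: action values are learned purely from simulated
# trials, with no labeled data involved.
Q = {}
alpha, gamma, epsilon = 0.1, 0.9, 0.2
for episode in range(5000):
    temp = random.randint(0, 40)
    for _ in range(50):
        state = temp
        if random.random() < epsilon:
            action = random.randrange(3)  # explore
        else:
            action = max(range(3), key=lambda a: Q.get((state, a), 0.0))
        temp, reward = step(state, action)
        best_next = max(Q.get((temp, a), 0.0) for a in range(3))
        Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
            reward + gamma * best_next - Q.get((state, action), 0.0))

# The greedy policy learned from the simulator: heat when cold, cool when hot.
def policy(state):
    return max(range(3), key=lambda a: Q.get((state, a), 0.0))
```

Because every one of the 250,000 simulated steps is cheap, the agent can afford the massive amount of trial and error that would be unthinkable on live equipment.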
Another element often hampering real-life use of custom AI is the complexity associated with defining the most appropriate architecture for a given problem. In fact, devising the best DNN architecture for a given problem is usually a task left to PhDs and often leads to peer-reviewed scientific publications.
To circumvent this challenge, machine teaching leverages human experience-based heuristics to build architectures that, to a certain extent, help break the black-box aspect of AI models.
Industrial and business processes simulations
Autonomous systems are not possible without realistic process simulators, as simulators are the critical component that makes reinforcement learning feasible.
These simulators come in various shapes and complexity levels. For simple systems, i.e. those with a limited number of inputs and outputs, it may be possible to build an effective simulator with relatively straightforward linear or polynomial models.
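For a simple single-input system, such a surrogate simulator can be fitted directly to logged process data. The sketch below uses a hypothetical heater-power-to-temperature relationship and NumPy's polynomial fitting; the numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical logged measurements: heater power (kW) vs. observed
# steady-state temperature (deg C).
power = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
temp = np.array([20.0, 28.2, 42.8, 63.8, 91.2, 125.0])

# Fit a quadratic model to the data; this becomes the surrogate simulator.
coeffs = np.polyfit(power, temp, deg=2)
simulate = np.poly1d(coeffs)

# The surrogate can now answer "what if" queries cheaply, e.g. the
# estimated temperature at a power setting never directly measured:
predicted = simulate(5.0)
```

A reinforcement learning agent can then query `simulate` millions of times at negligible cost, which is exactly what live equipment cannot tolerate.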
However, most real-life industrial or business processes require the use of dedicated simulation tools, such as MathWorks’ Simulink or others. In the most complex situations, dedicated AI models might even be needed to simulate the process.
Machine teaching is the approach that leverages human experience and its associated heuristics to create more explainable AI. It creates architectures that go beyond the AI black box approach by using smaller and more constrained models that focus on solving for one clearly identifiable challenge.
For instance, instead of a single model that uses IR imaging to control an industrial oven, two sequential and explainable models could be used. The first one would be an AI that would be able to estimate a product temperature based on IR imaging. The second one a control system that would modify input parameters based on the temperature calculated by the first model.
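The two-stage decomposition above can be sketched as follows. Both functions are deliberately naive stand-ins (the names, the linear intensity-to-temperature mapping, and the setpoint values are all illustrative, not a real API); the point is that each stage can be validated and explained on its own.

```python
def estimate_temperature(ir_frame):
    """Stand-in for a trained vision model: maps mean IR pixel
    intensity linearly to degrees Celsius (a deliberately naive proxy
    for illustration)."""
    mean_intensity = sum(ir_frame) / len(ir_frame)
    return 0.5 * mean_intensity + 10.0

def control_action(temperature, setpoint=180.0, band=5.0):
    """Stand-in control stage: raise, lower, or hold oven power based
    on the estimated product temperature."""
    if temperature < setpoint - band:
        return "increase_power"
    if temperature > setpoint + band:
        return "decrease_power"
    return "hold"

# Composing the two stages; the intermediate temperature estimate can
# be checked against a thermocouple before the controller is trusted.
frame = [320, 340, 360, 350]  # hypothetical IR pixel readings
action = control_action(estimate_temperature(frame))
```

Because the intermediate temperature is a physically meaningful quantity, a process specialist can audit each model separately, which is much harder with a single end-to-end black box.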
With machine teaching, process specialists can build AI-powered control systems that leverage their expertise effectively, without needing to become AI experts themselves. Check out these animated demos from Microsoft to see examples of autonomous systems use-cases.
Reinforcement learning with Microsoft Project Bonsai
Microsoft Autonomous Systems, a platform integrated into Microsoft's portfolio after the Bons.ai acquisition, is a solution that enables non-AI experts to build reinforcement learning-based AI solutions for manufacturing process control.
The Microsoft Brain uses input from a simulator to train its various components. Each component is defined based on human expertise and the associated heuristics.
Once trained, the brain can be used in one of two ways:
- Human augmentation: the brain proactively suggests to the operator what it has calculated to be the best possible option. This can significantly increase operator effectiveness, reduce quality issues, and avoid downtime. For instance, a brain could warn an operator of a potential failure so that the equipment is maintained or repaired proactively, before it impacts production.
- Human-supervised AI control: the brain effectively takes control of the process, and the human is then responsible for monitoring and supervising the AI. A typical example of this type of situation outside of manufacturing is the self-driving car, where the driver must always be ready to take over if needed but can rely on self-driving 99%+ of the time.