The last ST live stream of electronica 2022 just ended, and it looked at innovations bringing machine learning to the edge. Our experts showed tools that simplify algorithm creation and devices that bring intelligence to sensors. This democratization also means that industries are coming together to promote interoperability, and ST is releasing software packages to make cloud connectivity far more accessible. Here’s a deep dive into the products and solutions we addressed on our talk show. You can also watch it on demand if you missed it.
Industrial machine learning at the edge
In the field of industrial AI, machine learning at the edge is highly promising because it brings a lot of advantages without relying on expensive infrastructures. For example, a machine can learn the patterns of a fan, detect minor aberrations, and alert users before a significant failure occurs without ever asking a server farm to run inferences. As a result, it saves a lot of bandwidth. Running on the edge also means analyzing pictures to track people coming and going, a critical application to monitor capacity in a post-pandemic society without compromising privacy.
Most recently, on the ST Blog, we saw how a machine learning application in the middle of a forest could detect smoldering and alert authorities of a fire before it spread (Silvanet: an STM32WL can live 15 years outdoors to protect against forest fires). A machine in the middle of nowhere must last more than a decade on a single battery and therefore has extreme power requirements. Such an application is only possible if it can run ML at the edge.
Demo 1: Proteus kit with NanoEdge AI Studio v3.2
The Proteus sensor evaluation kit (STEVAL-PROTEUS1) makes machine learning at the edge more accessible. The board combines two primary features: sensors designed with industrial applications in mind and a microcontroller, the STM32WB5MMG, which can run machine learning applications and wirelessly send results thanks to its integrated Bluetooth 5.2 transceiver, thus combining two critical trends in industrial circles, AI and wireless communication. Moreover, the Proteus board includes other industrial sensors, such as the STTS22H thermometer, accurate to ±0.5 °C (-10 °C to +60 °C), or the IIS3DWB and IIS2DLPC accelerometers, enabling the creation of a wide range of machine learning applications at the edge. The Proteus sensor evaluation kit is available at major distributors for around US$50.
In the demo featured at electronica 2022, the Proteus sensor kit uses NanoEdge AI Studio v3.2 to run training and inference operations on the same microcontroller. The application uses the sensors on the Proteus board to monitor the vibrations emanating from a motor. NanoEdge AI Studio, a PC utility, then creates a machine learning library and compiles an application that runs on the MCU found on the Proteus board. It, therefore, represents one of the most straightforward ML solutions at the edge for industrial AI applications.
What is so unique about the Proteus industrial sensor kit?
The Proteus industrial sensor kit features the ISM330DHCX, the most accurate industrial inertial sensor on the market with a machine-learning core. ST also houses the board in a plastic case with a battery for greater flexibility. Indeed, the solution can run untethered, making prototyping operations vastly simpler. For instance, effective data collection must take place over long periods. The plastic case and battery make it possible to strap the Proteus system on various equipment and surfaces to collect data and test accuracy. And, thanks to the Bluetooth transceiver included in the MCU, it’s possible to get data and results off the board without physically connecting to it.
What is so special about NanoEdge AI Studio?
Traditionally, training operations take place on a separate and more powerful machine. Collecting data can thus be difficult as engineers wire a PC to a board to stream data off a platform. In many instances, spaces in a factory environment are tight or sensitive, so installing all this hardware is challenging. Hiring a dedicated team to collect and clean the data is also exorbitantly expensive. To solve this challenge, NanoEdge AI Studio uses an advanced mathematical paradigm to do all this work on a microcontroller.
NanoEdge AI Studio can also help deal with uncertainty. To build a classic machine learning application, developers need a lot of data to determine nominal and abnormal behaviors. Engineers must have anomaly data to test whether their algorithm works. However, some industrial situations aren’t replicable, and engineers can’t always pre-train a model to detect every unexpected behavior. In such cases, it is critical to run a machine learning application that can learn and infer on the same system without supervision, without developers specifying what is expected and what is not but instead letting the machine make those conclusions.
What is so special about NanoEdge AI Studio v3.2?
Since the introduction of NanoEdge AI Studio v3, the ST utility has supported new families of algorithms. The tool has always provided support for anomaly detection and classification. With version 3.0, ST added extrapolation and outlier detection. Extrapolation, also called regression, analyzes patterns between known variables before extrapolating behaviors for unknown variables. For example, developers can measure a fan’s performance when operating between 100 ºC and 150 ºC and use regression to anticipate its behavior at 160 ºC. NanoEdge AI Studio even supports more advanced analyses beyond the linear regression of this example. Outlier detection is similar to anomaly detection, except its model is built by feeding it only nominal data.
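As an illustration of the fan example, here is a sketch in C of the simplest form of extrapolation: fit a least-squares line to measurements taken between 100 ºC and 150 ºC, then predict the value at 160 ºC. NanoEdge AI Studio builds far richer regression models; this only shows the linear case.

```c
#include <stddef.h>

/* Ordinary least-squares fit of y = a*x + b over known operating
 * points, used to extrapolate to an unseen point. Illustrative toy,
 * not a NanoEdge AI Studio model. */
static void fit_linear(const double *x, const double *y, size_t n,
                       double *a, double *b)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    *a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *b = (sy - *a * sx) / n;
}

/* Fit on the known (x, y) pairs, then predict at x_new. */
double extrapolate(const double *x, const double *y, size_t n,
                   double x_new)
{
    double a, b;
    fit_linear(x, y, n, &a, &b);
    return a * x_new + b;
}
```

Feeding in, say, fan measurements at 100 ºC through 150 ºC and calling `extrapolate()` with 160.0 returns the anticipated behavior at the unmeasured temperature.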
NanoEdge AI Studio v3.2 ushers in new logging capabilities. While the previous version did have a data logging feature, data collection was restricted to our STWIN industrial node kit (STEVAL-STWINKT1B). Moreover, developers had to process the captured data in Python or another framework before they could use it. With NanoEdge AI Studio v3.2, ST brings USB live data logging to many more boards, including the Proteus system. NanoEdge AI Studio can thus use a broader range of sensors. The data logging system is also far more flexible and includes data manipulation features. It is, therefore, possible to shape the information within NanoEdge AI Studio instead of using Python.
New communication protocol for smart home, smart building, and IoT: Matter
One aspect of our lives that is directly impacted by artificial intelligence is home automation and IoT. Asking Siri to lock doors or Alexa to turn off the lights is increasingly popular, practical, and can positively impact the environment. For instance, a machine-learning system can be trained on the patterns of a household to lower or increase heating and cooling automatically. Similarly, machine learning can use location data on phones to locate household members and determine whether to turn the HVAC on or off.
Today’s challenge is that smart home systems are notoriously incompatible from one technology to the next. A product compatible with Alexa Smart Home, Amazon’s solution, may not work with HomeKit, Apple’s system, or Google Home. It’s a challenge for end users, who must keep up with all these frameworks. To solve this problem, the industry came together to create Matter.
The initiative comes from the Connectivity Standards Alliance (CSA), which counts Amazon, Apple, Google, Samsung, ST, and more among its members. In a nutshell, Matter relies on IPv6 technologies to make each device addressable worldwide and on a 2.4 GHz mesh network based on Thread or Wi-Fi. The CSA also added Bluetooth LE support to help users add products to their network. Matter products will be compatible with one another, regardless of their maker. Furthermore, the standard sets the specification for border routers that connect Matter products to the Internet. As a result, consumers no longer have to buy dedicated hubs or routing terminals for each brand, simplifying setups and reducing costs.
Demo: Using Matter on an STM32WB
The ST demo shows how the STM32WB can support Matter in both end devices and a gateway. Mimicking a lighting application, the showcase runs a Matter stack on an end device using an STM32WB55 and on a border router built around a microprocessor, which communicates with a radio co-processor (RCP) equipped with another STM32WB. The border router provides access to the Internet. In the demo, the lighting system is already provisioned, meaning that it belongs to the secure wireless network and can be driven to turn lights on or off or dim them. Down the line, makers can use Thread or Wi-Fi to facilitate the exchange of information between the end device and the border router.
Why is running a Matter stack on an STM32WB noteworthy?
Behind the scenes, the fact the demo runs on an STM32WB can make designing a Matter system simpler. The microcontroller includes an 802.15.4 radio that supports Thread, thus vastly reducing costs. Typically, engineers must add additional RF and passive components on their PCB to support all the necessary wireless protocols. Thanks to the STM32WB55, teams can simplify designs and enjoy a smaller BOM.
The software stack dedicated to the new standard takes more memory than what’s available on the STM32WB55. However, the ST implementation shows how to use an external QSPI flash to run the Matter stack. Since adding an external flash module is far more straightforward than redesigning around a larger device, the ST demo shows how makers can make Matter more accessible by providing one of the most efficient ways to run the protocol. ST will launch a software development kit (SDK) supporting the creation of Matter applications on its STM32WB during the first quarter of 2023.
Ubiquitous cloud connectivity
Another critical aspect of Industrial AI is cloud connectivity. While machine learning at the edge means running the AI algorithm locally, it’s always with the intent of sending results to a cloud platform or using the Internet to send commands and updates, among other things. For instance, a smart factory can use machine learning on an edge device to determine when a piece of equipment is exhibiting abnormal behavior and is about to fail. Still, it is pointless unless a system relays that message to users. Most of the time, this message transmission takes place through a cloud dashboard shared among the relevant parties within a company.
Similarly, communication from the cloud to the machine learning application is equally important, as we demonstrated on the ST Blog when covering the ST Authorized Partner Percepio (Percepio DevAlert and STM32: A Sandbox and Tools to Crush Frustrating Bugs and Malicious Attacks). Cloud connectivity is critical to provide provisioning mechanisms to add or remove edge devices rapidly and securely. Connection to the Internet also offers the ability to update the firmware over the air. The challenge is that cloud connectivity requires a dedicated platform with a microcontroller, secure element, and more. Managing a Wi-Fi or cellular modem can be demanding, so ST offers software solutions that can run on an STM32U5.
Demo: An STM32U5 to connect to Azure IoT or AWS IoT
The ST demonstration shows two software packages: one dedicated to Azure (X-CUBE-AZURE) and another for AWS (X-CUBE-AWS). Thanks to recent updates, both support the STM32U5 discovery kit (B-U585I-IOT02A). The system connecting to Microsoft’s cloud (Azure) relies on an STMOD+ 4G cellular modem, while the one for Amazon’s solution (AWS) uses Wi-Fi. The support for a cellular connection is currently only available on the Azure software package. In both instances, the ST solution takes advantage of the STSAFE secure element found on the demo boards to store keys that ensure the secure provision of the device onto the cloud platform.
The ST software solutions also take advantage of the STM32U5’s TrustZone support. In a nutshell, TrustZone provides a way to implement security features by isolating sensitive code into a more restricted environment. Hence, the cloud connectivity aspect lives in a different space than other applications to protect sensitive information in case of a hack. The ST software package has a dedicated TF-M (Trusted Firmware-M) implementation, meaning developers can study our source code to jump-start their projects.
Last but not least: ISPU
A recent trend, started by ST in Industrial AI, is the ability to run a machine learning algorithm within a sensor instead of using a microcontroller. The first device to provide this capability commercially was the LSM6DSOX, launched in 2018. The sensor we saw today in the Proteus industrial kit, the ISM330DHCX, is part of this legacy, as it represents an even more accurate and powerful version of the 2018 design. A sensor with a machine learning core provides power savings that are unattainable on a microcontroller. The system can make decisions while saving significant battery life by processing data inside the sensor.
Strengthened by the success of its sensors with a machine learning core, ST pushed the envelope by releasing the ISM330IS, the first device with an Intelligent Sensor Processing Unit (ISPU). Launched in 2022, the sensor has vastly superior computing capabilities to anything we’ve shipped so far. The focus is on making sensors even more helpful by moving greater processing power one step closer to data acquisition. And to make the device more accessible, we have a compiler that will efficiently program the ISPU to run any C-code algorithm. Finally, the component uses a traditional package, so there’s no need to change the board design to upgrade to this new sensor with ISPU.
Demo: Sensor fusion on the ISM330IS
The ISM330IS demo at electronica showcases a computational operation never performed before on a sensor. In the demo, a sensor board featuring the ISM330IS (the STEVAL-MKI233KA) rests on a motherboard (STEVAL-MKI109V3). As users pick up this stack, they move through a 3D environment. The ST sensor uses its ISPU to do sensor fusion calculations, meaning that it takes the information from the accelerometer and gyroscope and processes it to determine a position in space. The results of the computations inside the sensor are then reflected on the screen as an orientation within 3D space.
What is the intelligent sensor processing unit, and what can it do?
The intelligent sensor processing unit inside the ISM330IS is a 32-bit RISC Harvard architecture running at 5 MHz or 10 MHz. It features a four-stage pipeline and a floating-point unit (FPU). In the demo, the ISPU performs a series of calculations to represent an orientation in 3D space using quaternions, a number system that extends the complex numbers. The ISPU also includes 8 KB of RAM for data, 32 KB of RAM for the application, and a set of 16-bit instructions optimized for neural networks. Hence, the computational capabilities of the ISPU apply not only to sensor fusion but also to machine learning algorithms.
ST is showcasing a sensor fusion demo because it enables engineers to imagine how they could use the capabilities of the ISPU to run complex applications that are popular in the industrial AI world, like predictive maintenance. Moreover, besides the power savings, using such a microarchitecture has performance benefits. For instance, the ISPU can service an interrupt in only four clock cycles rather than the 15 of a traditional Arm Cortex core.
To help developers get started, ST provides software tools that take advantage of the ISPU in the ISM330IS. For instance, NanoEdge AI Studio can now run an anomaly detection algorithm on the ISPU. We even held a webinar recently to help engineers get started. The application is thus a testament to what users can run and what a company like ST can do when it brings hardware and software together. Put simply, machine learning on sensors is entering a new chapter.