From cloud-first to edge-ready: why AI needs to evolve
Artificial Intelligence (AI) has come a long way in the last decade. While cloud-based infrastructure enabled massive leaps in AI capabilities, it also introduced new challenges, such as latency, energy consumption, and privacy concerns. In response, companies and researchers started focusing on edge AI, where data is processed locally on embedded systems such as sensors and microcontrollers. The Edge AI Foundation (formerly the TinyML Foundation) has been a driver of this transition since its creation in 2018, bringing together industry leaders and innovators to make AI smarter, faster, and more pervasive at scale.
The evolution of AI development
Artificial Intelligence started to attract mainstream attention around 2012, when deep learning models achieved remarkable success in areas like image recognition, speech processing, and translation. The emergence of cloud computing drove these advancements, offering the computational resources required to train complex models on large datasets.
Over the next several years, companies invested heavily in cloud-based AI infrastructure, using high-performance GPUs to develop ever-larger models, including the early versions of generative AI. By 2017, AI was firmly embedded in cloud services offered by major tech providers.
However, as adoption grew, so did concerns about latency, bandwidth, privacy, and the high energy costs of transmitting and processing data remotely. These limitations became more pronounced as AI moved into latency-critical applications like autonomous systems, wearable tech, and industrial automation. The result was a gradual understanding that not all AI needs to live in the cloud and that many use cases could benefit from performing AI computations locally, closer to the data source. This awareness laid the foundation for the next significant shift: AI at the edge.
Why edge AI is the next big step
As AI becomes more integrated into everyday life, the need for fast, private, and energy-efficient processing is growing. Traditional cloud-based AI requires data to be sent to remote servers for analysis and decision-making, introducing latency, increasing energy use, and raising privacy concerns. Edge AI solves these challenges by enabling intelligent processing directly on local devices, such as sensors, microcontrollers, and other embedded processing solutions. This eliminates or reduces the need to constantly connect to the cloud.
Thanks to key advancements like the STM32 microcontroller family and ST’s intelligent MEMS sensors portfolio, from inertial measurement units to biosensors, edge AI is no longer experimental; it’s a pervasive solution. Edge AI technologies empower devices to execute neural network inference locally, facilitating tasks such as anomaly detection, pose estimation, gesture recognition, environmental monitoring, and more. These functions are crucial in fields like industrial automation, healthcare, and innovative consumer products, where low latency, robust security, and minimal energy use are essential.
Among these innovations, the STM32N6 series is worth highlighting: it is ST’s first microcontroller family to integrate the Neural-ART Accelerator, a proprietary Neural Processing Unit (NPU). This NPU accelerates AI inference workloads directly on the MCU, drastically reducing latency and power consumption while off-loading AI computations from the CPU. Designed for demanding edge applications, the STM32N6 combines advanced performance, a unique video acquisition pipeline, and unprecedented energy efficiency with the flexibility of the STM32 programming ecosystem.
The Edge AI Foundation: a community driving change
The Edge AI Foundation is a strategic think tank committed to advancing edge AI across sectors. It hosts global conferences, such as annual events in Europe, the U.S., and Asia, connecting academia and industry. As a non-profit, the Foundation doesn’t promote specific products; instead, it fosters knowledge exchange, networking, joint research, and alignment on frameworks and tools.

The journey of the Edge AI Foundation began in 2018 with the establishment of the TinyML Foundation, a collaborative community initiated by industry leaders such as Google, ARM, and STMicroelectronics.
The Foundation’s goal was to create a community of experts who could prove that machine learning could be executed even on ultralow-power devices (under 1 mW), unlocking a new class of applications that operate independently of cloud infrastructure. This first step addressed the growing demand for real-time, energy-efficient, and privacy-preserving AI applications in areas like wearables, smart homes, and industrial IoT, and it was achieved relatively quickly.
As the field matured, the scope of applications expanded beyond simple models to encompass more complex tasks, such as generative and agentic AI, computer vision, and natural language processing, all executed at the edge. Following this evolution, in 2024 the TinyML Foundation rebranded itself as the Edge AI Foundation, reconfirming its commitment to advancing AI technologies that operate at the network’s edge.

Today, the Edge AI Foundation brings together a diverse community of researchers, developers, business leaders, and policymakers to address the challenges and opportunities in deploying AI at the edge. The foundation aims to make edge AI technology accessible and impactful for all. To achieve this mission, the Foundation has launched several initiatives:
- Edge AI Working Groups: focus groups on Generative AI, Blueprints, Datasets and Benchmarking, Neuromorphic, and Marketing.
- Edge AI Labs: a platform providing access to high-quality datasets, models, and code to accelerate edge AI research and development.
- Edge AIP (Academia & Industry Partnership): a program promoting collaboration between industry partners and academic institutions to develop educational materials, certification programs, and scholarship opportunities.
STMicroelectronics as Strategic Leader Sponsor of the Edge AI Foundation

STMicroelectronics began collaborating with the Edge AI Foundation in 2018. The relationship started after ST demonstrated a pre-production version of its STM32Cube.AI tool at the CES event in Las Vegas. This seminal project led to an invitation from Pete Warden (past TensorFlow Lite tech lead at Google) to the first TinyML US Forum in early 2019, where ST showcased live AI demonstrations running on its standard STM32 microcontrollers.

Nowadays, as a Strategic Leader Sponsor, ST participates in and leads working groups, contributes to event programming, and engages in the Foundation’s governance.
For example, Danilo Pau (Technical Director, IEEE, AAIA and ST Fellow in System Research) chairs the Foundation’s Gen EDGE AI working group, organizing forums, producing white papers, and initiating research projects that leverage existing ST AI products. Additionally, Giuseppe Desoli (ST Company Fellow, SRA Chief Architect, Senior Director of Artificial Intelligence & Embedded Architectures) was appointed a Board member in 2025.
This participation helps promote ST’s edge AI solutions: its portfolio of AI-enabled microcontrollers, such as STM32 general-purpose MCUs and Stellar automotive MCUs, along with sensors and the comprehensive software tools ecosystem that make up the ST Edge AI Suite. It also helps build a network of authorized partners.
From smart devices to autonomous agents
Edge AI is expected to power the next generation of intelligent agents and systems capable of reasoning, planning, adapting, and acting. We are already starting to see early use cases: AI-powered thermostats that learn user behavior, voice assistants that operate offline, intelligent voice transcription tools, and humanoid robots that help with manufacturing tasks. In the future, these autonomous systems could help manage entire smart buildings, improve energy efficiency, or support industrial automation, all without needing a constant connection to the cloud.
To support this shift, edge AI platforms need to be highly efficient in both energy and performance. ST is enabling customers to implement edge AI every day thanks to innovations spanning the broad STM32 microcontroller family, microcontrollers integrating an NPU, and intelligent MEMS sensors that embed two in-sensor processing technologies: the intelligent sensor processing unit (ISPU) and the machine learning core (MLC).
As an active contributor to the Edge AI Foundation, ST will continue to influence this community, enabling faster innovation and more sustainable technology at the edge.
- Join us in July at the next Edge AI Foundation event in Milan
- Learn more about ST’s edge AI portfolio
- Discover more on the Edge AI Foundation