
Beyond the Cloud: Advanced AI Computer Vision
Euranova and STMicroelectronics have announced a joint initiative to bring advanced computer vision capabilities to edge devices, enabling real-time inference without cloud connectivity.
From cloud dependency to edge autonomy
The collaboration focuses on deploying optimized neural networks on STMicroelectronics' STM32 microcontroller family — devices with as little as 256KB of RAM and no GPU. This opens up use cases in environments where cloud connectivity is unreliable, expensive, or simply too slow.
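To make those constraints concrete, here is a rough feasibility check for fitting an int8 model onto a small STM32. All figures (flash size, runtime overhead, parameter counts) are illustrative assumptions for the sketch, not numbers from the partnership:

```python
# Rough feasibility check: does an int8-quantized model fit a small STM32?
# All budget figures below are illustrative assumptions.

FLASH_BYTES = 2 * 1024 * 1024   # assumed 2 MB on-chip flash
SRAM_BYTES = 256 * 1024         # 256 KB RAM, as in the article

def fits_on_device(n_params: int, peak_activation_bytes: int) -> bool:
    """int8 quantization stores one byte per weight; peak intermediate
    activations must fit in SRAM alongside a small runtime scratch budget."""
    weight_bytes = n_params              # 1 byte per weight at 8-bit
    runtime_overhead = 32 * 1024         # assumed stack/scratch allowance
    return (weight_bytes <= FLASH_BYTES
            and peak_activation_bytes + runtime_overhead <= SRAM_BYTES)

# A hypothetical 800k-parameter network with ~150 KB peak activations fits;
# a 2.5M-parameter network already overflows the assumed flash budget.
small_ok = fits_on_device(800_000, 150 * 1024)
large_ok = fits_on_device(2_500_000, 150 * 1024)
```

The point of the check is that both memories constrain the design independently: flash bounds total weights, while SRAM bounds the largest single layer's activations.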
When a defect detection system on a factory floor needs to make a decision in 10 milliseconds, sending an image to the cloud and waiting for a response isn't an option. The intelligence has to live on the device.
Key technical achievements
The partnership has yielded several breakthroughs in model compression and deployment:
Quantization-aware training
We developed a training pipeline that produces models optimized for 8-bit integer arithmetic, the native format of STM32 processors. Unlike post-training quantization, which can degrade accuracy by 5–15%, our approach stays within 1.2% of full-precision accuracy on standard benchmarks.
import torch

from euranova.edge import QuantizationAwareTrainer

# mobilenet_v3_small: a pre-trained model instance loaded beforehand,
# e.g. from torchvision; val_loader / train_loader are standard DataLoaders
trainer = QuantizationAwareTrainer(
    model=mobilenet_v3_small,
    target_device="stm32h7",
    bit_width=8,
    calibration_dataset=val_loader,
)

quantized_model = trainer.train(
    train_loader=train_loader,
    epochs=50,
    learning_rate=1e-4,
)

# Export to STM32 format, tuned for latency
trainer.export(quantized_model, "model.tflite", optimize_for="latency")
Architecture search for microcontrollers
Not all neural network architectures are equal when targeting MCUs. We built a hardware-aware neural architecture search (NAS) system that explores the design space with STM32 constraints as first-class objectives:
- Flash memory — Model weights must fit in the device's flash storage
- SRAM — Intermediate activations must fit in available RAM
- Latency — Inference must complete within the application's real-time budget
- Power — Total energy per inference must stay within the device's power envelope
The NAS system discovered architectures that are 3.2x faster than manually adapted MobileNets while achieving comparable accuracy.
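One common way to make such constraints first-class in a search loop is to treat them as hard filters and rank only the feasible candidates. The budgets and the scoring rule below are illustrative assumptions, not the partnership's actual objective:

```python
# Sketch of hardware-aware candidate scoring in a NAS loop.
# Budget values and the trade-off rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    accuracy: float     # validation accuracy, 0..1
    flash_kb: float     # int8 weight size
    sram_kb: float      # peak activation memory
    latency_ms: float   # estimated on-target inference time
    energy_mj: float    # energy per inference

BUDGET = dict(flash_kb=1024, sram_kb=256, latency_ms=10, energy_mj=5)

def score(c: Candidate) -> float:
    """Infeasible candidates are rejected outright; feasible ones
    trade accuracy against latency (assumed weighting)."""
    if (c.flash_kb > BUDGET["flash_kb"] or c.sram_kb > BUDGET["sram_kb"]
            or c.latency_ms > BUDGET["latency_ms"]
            or c.energy_mj > BUDGET["energy_mj"]):
        return float("-inf")
    return c.accuracy - 0.01 * c.latency_ms

pool = [
    Candidate(0.91, 900, 200, 8.0, 3.0),    # fits every budget
    Candidate(0.95, 2048, 400, 25.0, 9.0),  # more accurate, but infeasible
]
best = max(pool, key=score)
```

Note that the higher-accuracy candidate loses: on an MCU, a model that does not fit is worth nothing, which is why the constraints are filters rather than soft penalties.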
Industrial applications
Three pilot projects are already underway:
- Automotive quality inspection — Detecting surface defects on painted car body panels at line speed (120 parts/minute), running entirely on an STM32H7 with a 2MP camera module
- Smart agriculture — Identifying crop diseases from leaf images captured by solar-powered sensors in fields with no cellular coverage
- Warehouse safety — Detecting PPE compliance (hard hats, safety vests) on workers using ceiling-mounted cameras with on-device processing
What's next
The next phase of the collaboration will focus on on-device learning — enabling models to adapt to their specific deployment environment without sending data back to the cloud. This is particularly important for manufacturing applications where the visual characteristics of defects evolve over time.
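One plausible shape for such gradient-free on-device adaptation (an illustrative sketch, not the partnership's published method) is to freeze the quantized feature extractor and adapt only per-class prototypes with a running mean, which needs no backpropagation on the MCU:

```python
import numpy as np

# Illustrative on-device adaptation scheme (assumed, not the published
# method): the feature extractor stays frozen; only class prototypes
# are updated incrementally, at O(dim) memory and compute per sample.

class PrototypeClassifier:
    def __init__(self, n_classes: int, dim: int):
        self.protos = np.zeros((n_classes, dim), dtype=np.float32)
        self.counts = np.zeros(n_classes, dtype=np.int64)

    def update(self, features: np.ndarray, label: int) -> None:
        """Incremental mean: proto += (x - proto) / n, no gradients."""
        self.counts[label] += 1
        self.protos[label] += (features - self.protos[label]) / self.counts[label]

    def predict(self, features: np.ndarray) -> int:
        """Nearest prototype by Euclidean distance."""
        dists = np.linalg.norm(self.protos - features, axis=1)
        return int(np.argmin(dists))

# Toy usage with 4-dim features standing in for extractor outputs
clf = PrototypeClassifier(n_classes=2, dim=4)
clf.update(np.array([1, 0, 0, 0], np.float32), label=0)   # e.g. "good part"
clf.update(np.array([0, 1, 0, 0], np.float32), label=1)   # e.g. "defect"
pred = clf.predict(np.array([0.9, 0.1, 0, 0], np.float32))
```

A scheme like this lets a deployed inspection model track slowly drifting defect appearance from a handful of labeled samples, with no data ever leaving the device.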
Euranova and STMicroelectronics will present the full technical results at Embedded World 2026 in Nuremberg.


