Hardware & Embedded

Edge AI

Machine learning that runs on the MCU. We work with TensorFlow Lite Micro and CMSIS-NN to quantise models, profile their footprint, and fit them into the RAM and flash you actually have. Useful when latency, privacy, or offline operation rule out a cloud round-trip — anomaly detection on vibration data, keyword spotting, simple computer vision on grayscale frames.

What we offer

Capabilities

Model quantisation: INT8, INT4, mixed-precision
TensorFlow Lite Micro on Cortex-M and ESP32
CMSIS-NN kernels for ARM targets
Sensor preprocessing pipelines: windowing, FFT, MFCC
Memory and latency profiling on the actual silicon
Model retraining loops driven by field data
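The preprocessing step above (windowing, FFT, MFCC) is the usual front end for keyword spotting and vibration work. A minimal sketch in numpy, names illustrative: overlapping Hann-windowed frames reduced to magnitude spectra, the stage that would precede log-mel or MFCC features.

```python
import numpy as np

def windowed_fft_features(samples, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping Hann-windowed frames and
    return the magnitude spectrum of each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(samples) - frame_len) // hop
    frames = np.stack([
        samples[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-redundant half of the spectrum
    return np.abs(np.fft.rfft(frames, axis=1))

# usage: one second of a 1 kHz tone sampled at 16 kHz
t = np.arange(16000) / 16000.0
spec = windowed_fft_features(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)  # (124, 129): 124 frames, 129 frequency bins
```

On an MCU the same pipeline would typically run via CMSIS-DSP's fixed-point FFT rather than floating-point numpy, but the windowing and hop arithmetic is identical.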

Tech stack

What we reach for

TensorFlow Lite Micro · CMSIS-NN · Edge Impulse · ONNX Runtime · PyTorch · TensorFlow

Our process

How we deliver

01

Frame the problem

What signal, what decision, what latency budget.

02

Model & quantise

Train in PyTorch or TF, quantise to fit the MCU.
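The quantise half of this step boils down to affine (scale plus zero-point) INT8 mapping, the scheme TensorFlow Lite's full-integer post-training quantisation uses. A plain-numpy sketch of the arithmetic, function names our own:

```python
import numpy as np

def quantise_int8(x):
    """Affine INT8 quantisation: map floats to int8 via a scale and
    zero-point chosen so that the range covers the data and zero is
    exactly representable."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)      # range must contain zero
    scale = (hi - lo) / 255.0 or 1.0         # guard all-zero input
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s, z = quantise_int8(w)
err = float(np.abs(dequantise(q, s, z) - w).max())
print(q.dtype, err)  # int8 weights; round-trip error within one quantisation step
```

A float32 weight tensor shrinks 4x this way, which is frequently the difference between fitting in on-chip flash and not.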

03

Profile

Real silicon, real RAM and flash, real latency.
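On target we read a hardware cycle counter (on Cortex-M, the DWT CYCCNT register); the harness shape is the same everywhere. A desktop analogue in Python, with a dense layer standing in for one inference pass:

```python
import time
import numpy as np

def profile(fn, warmup=10, runs=100):
    """Time repeated calls to fn and return the median latency in ms.
    Warm-up runs are discarded so caches and lazy init don't skew
    the numbers; median resists scheduler jitter better than mean."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return float(np.median(samples))

# stand-in for one inference pass: a small dense layer
x = np.random.rand(1, 64).astype(np.float32)
w = np.random.rand(64, 32).astype(np.float32)
latency_ms = profile(lambda: x @ w)
print(f"median latency: {latency_ms:.3f} ms")
```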

04

Iterate

Retrain on field data; ship updates via OTA.

Talk to us

Interested in this service?

Tell us what you're building. We'll tell you where it's a fit, and where it isn't.