MLCommons Releases New MLPerf Tiny v1.3 Benchmark Results

SAN FRANCISCO, Sept. 17, 2025 (GLOBE NEWSWIRE) -- Today, MLCommons® announced results for its industry-standard MLPerf® Tiny v1.3 benchmark suite, which is designed to measure the performance of “tiny” neural networks in an architecture-neutral, representative, and reproducible manner. These networks are typically under 100 kB and process data from audio, vision, and other sensors to provide endpoint intelligence for low-power devices in the smallest form factors.

Version 1.3 adds a new test: a one-dimensional depthwise separable convolutional neural network (1D DS-CNN). A 1D DS-CNN is trained on sequential data, such as sensor readings or audio waveforms, and is often used to identify signals, triggers, or threshold events in continuous, real-time data streams. The new test measures performance in recognizing “wake words” in a continuous audio stream. Tiny ML deployments often monitor a continuous stream of data from a sensor, such as a microphone, accelerometer, or camera. The streaming scenario in the new test exercises capabilities important to these deployments, such as low-power idle, rapid wake-up, data ingestion, and feature extraction. Wake-word detection is just one example of a streaming task; others include speech enhancement, real-time translation, and industrial monitoring for preventive maintenance. The ability to evaluate streaming scenarios opens the benchmark suite to many new use cases.
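For readers unfamiliar with the architecture, the following is a minimal sketch of a 1D DS-CNN in PyTorch. It is illustrative only: the layer widths, kernel sizes, and the assumption of 40 audio features per frame are hypothetical choices for this example, not the official MLPerf Tiny reference model. A depthwise separable convolution factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mixer, which is what keeps the parameter count small enough to fit the sub-100 kB budget described above.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv1d(nn.Module):
        """Depthwise separable 1-D convolution: a per-channel (depthwise)
        filter followed by a 1x1 (pointwise) channel-mixing convolution."""
        def __init__(self, in_ch, out_ch, kernel_size=9):
            super().__init__()
            self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                       padding=kernel_size // 2,
                                       groups=in_ch, bias=False)
            self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1, bias=False)
            self.bn = nn.BatchNorm1d(out_ch)
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    class TinyWakeWordNet(nn.Module):
        """Hypothetical wake-word classifier over a stream of audio feature
        frames; NOT the official MLPerf Tiny reference model."""
        def __init__(self, n_features=40, n_classes=2):
            super().__init__()
            self.stem = nn.Conv1d(n_features, 64, kernel_size=9,
                                  padding=4, bias=False)
            self.blocks = nn.Sequential(
                DepthwiseSeparableConv1d(64, 64),
                DepthwiseSeparableConv1d(64, 64),
            )
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):          # x: (batch, n_features, time)
            h = self.blocks(self.stem(x))
            h = h.mean(dim=-1)         # global average pool over time
            return self.head(h)

    model = TinyWakeWordNet()
    n_params = sum(p.numel() for p in model.parameters())
    # ~33k parameters: about 32 kB with 8-bit weights, well under 100 kB.
    print(f"{n_params} parameters")

Quantized to 8-bit weights, a network of this size occupies roughly 32 kB, which is why the depthwise separable factorization is a common choice for keyword spotting on microcontrollers.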

This version also introduces a new, open-source test harness for running the benchmark suite. The new harness makes it easier for submitters to obtain and execute the tests, and to troubleshoot any issues that arise while running them.

MLPerf Tiny v1.3 participation and submitters

This release includes 70 results across five benchmark tests, submitted by four participants: Kai Jiang, Qualcomm, STMicroelectronics, and Syntiant. The results include 27 power measurements, an increase over the previous release. Five hardware platforms were benchmarked for the first time in this release.

View the Results

To view the results for MLPerf Tiny v1.3, please visit the MLPerf Tiny benchmark results page.

We invite stakeholders to join the MLPerf Tiny working group and help us continue to evolve the benchmark suite.

About MLCommons

MLCommons is the world’s leader in AI benchmarking. An open engineering consortium supported by over 125 members and affiliates, MLCommons has a proven record of bringing together academia, industry, and civil society to measure and improve AI. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. Since then, MLCommons has continued using collective engineering to build the benchmarks and metrics required for better AI – ultimately helping to evaluate and improve AI technologies’ accuracy, safety, speed, and efficiency.

For additional information on MLCommons and details on becoming a member, please visit MLCommons.org or email participation@mlcommons.org.

Press Inquiries: Contact press@mlcommons.org.

