Arm Machine Learning Processor


The Arm Machine Learning processor is an optimized, ground-up design for machine learning acceleration, targeting mobile and adjacent markets. The solution consists of state-of-the-art optimized fixed-function engines that provide best-in-class performance within a constrained power envelope.

Additional programmable layer engines support the execution of non-convolution layers and the implementation of selected primitives and operators, as well as future innovation and new algorithm generations. The network control unit manages the overall execution and traversal of the network, and the DMA engine moves data in and out of main memory.
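The division of work described above can be sketched conceptually: convolution layers go to the optimized fixed-function engines, while other layers fall to the programmable layer engines, with the control unit traversing the network in order. This is an illustrative model only; the names and the dispatch policy are assumptions, not Arm's actual microarchitecture.

```python
# Conceptual sketch of the dispatch described in the text (illustrative only).
CONV_LAYERS = {"conv2d", "depthwise_conv2d"}

def schedule(network):
    """Assign each layer to an engine in traversal order, the way a
    network control unit might dispatch work across the two engine types."""
    plan = []
    for layer in network:  # the control unit traverses the network
        if layer in CONV_LAYERS:
            engine = "fixed-function engine"      # optimized convolution path
        else:
            engine = "programmable layer engine"  # pooling, activation, etc.
        plan.append((layer, engine))
    return plan

network = ["conv2d", "relu", "conv2d", "pool", "softmax"]
for layer, engine in schedule(network):
    print(f"{layer:10s} -> {engine}")
```

In a real design the DMA would stage each layer's weights and activations into onboard memory before its engine runs; the sketch omits that data movement.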

Onboard memory provides central storage for weights and feature maps, reducing traffic to external memory and, with it, power consumption.
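A back-of-envelope calculation shows why keeping weights and feature maps on chip saves power: data held in onboard SRAM is fetched from DRAM once, rather than re-fetched every time a layer reuses it. The numbers below are hypothetical, chosen only to illustrate the effect.

```python
# Hypothetical figures, not Arm specifications.
weights_mb = 4.0          # total model weights
feature_maps_mb = 2.0     # intermediate activations
reuse_per_inference = 8   # times the data would be re-fetched without caching

# Without onboard storage, every reuse is another external-memory transfer.
traffic_without_sram = (weights_mb + feature_maps_mb) * reuse_per_inference
# With onboard storage, each value crosses the external interface once.
traffic_with_sram = weights_mb + feature_maps_mb

print(f"DRAM traffic without onboard memory: {traffic_without_sram:.0f} MB")
print(f"DRAM traffic with onboard memory:    {traffic_with_sram:.0f} MB")
```

Since external DRAM accesses typically cost orders of magnitude more energy than on-chip SRAM accesses, cutting this traffic translates directly into a lower power envelope.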


Features

  • Most efficient solution for running neural networks.
  • Designed for the mobile and adjacent markets.
  • Optimized, ground-up design for machine learning acceleration.
  • Best-in-class performance with state-of-the-art, fixed-function engines.
  • Programmable engines for future innovation and algorithms.
  • Massive efficiency uplift over CPUs, GPUs, DSPs, and other accelerators.
  • Completes Arm's heterogeneous Machine Learning platform solution.
  • Enabled by open-source software.
  • Industry-leading performance in thermally- and cost-constrained environments.
  • When combined with the Arm Object Detection processor, provides highly efficient and optimized people detection.


Applications

  • Mobile
  • AR/VR
  • IoT
  • Smart camera
  • Healthcare/Medical
  • Logistics
  • Small area
  • Robotics
  • Home
  • Consumer
  • Drones
  • Wearables

Block Diagram


Technical Specifications

  • Specially designed to deliver outstanding performance for mobile; optimizations provide up to 4.6 TOPS in real-world use cases.
  • Best-in-class efficiency of 3 TOPS/W.
  • Programmable layer engines for futureproofing.
  • Highly tuned for implementation in advanced process geometries.
  • Onboard memory reduces external memory traffic.
  • Arm NN acts as a translation layer between major neural network frameworks, such as TensorFlow and Caffe, and the Arm Machine Learning processor, as well as other Arm IP.
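The two headline figures above together imply a power envelope, since power is throughput divided by efficiency. The arithmetic below uses only the numbers from the specification; the resulting wattage is an inference, not a figure Arm publishes here.

```python
# Derived from the two specification figures above.
peak_throughput_tops = 4.6   # TOPS
efficiency_tops_per_w = 3.0  # TOPS/W

# power = throughput / efficiency
implied_power_w = peak_throughput_tops / efficiency_tops_per_w
print(f"Implied power at peak throughput: ~{implied_power_w:.2f} W")
```

An envelope of roughly 1.5 W is consistent with the thermally constrained mobile use cases the processor targets.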


© 2018 Design And Reuse

All Rights Reserved.
