
Artificial Intelligence on the Edge, from Sensor Fusion to Deep Neural Networks

Moshe Sheier - CEVA, Inc.
Jan 22, 2023

The advantages of AI at the edge are now obvious: real-time response where it is critical, for example in safety-sensitive applications; lower communication cost, since raw data need not be transmitted to the cloud; reduced power; better privacy; and easier scaling to multiple edge nodes. All of these requirements are best served by intelligence within the edge devices themselves, rather than in a remote cloud service. However, the range of edge applications is now so broad that it cannot be served by a single AI engine. A home appliance may only need to recognize a simple set of voice commands or a picture on a food container. More sophisticated systems for surveillance or industrial robotics may fuse multiple inputs: image sensors, microphones, motion sensors, and more. At the high end, recognition systems for autonomous or semi-autonomous driving require very sophisticated deep neural networks (DNNs). The CEVA SensPro2 and NeuPro-M platforms span this range.

The Market for Edge AI

The market for AI processor chips at the edge is anticipated to grow at around a 20% CAGR from now through the end of the decade. Growth will be driven by the adoption and evolution of smart consumer devices such as cameras, wearables and home automation; by automotive demand for improved levels of safety and autonomy; and by industrial applications in surveillance, robotics, machine/plant control and predictive maintenance.

Products most likely to succeed in these areas must naturally have the functional and performance capabilities to meet the recognition demands of their applications. They must be consumer-priced and/or cost-effective at scale, and they should minimize incremental load on existing wireless infrastructure. They must also be software-upgradeable to adapt to emerging solutions in the fast-evolving AI technology space.

Sensor Fusion and SensPro2

All but the very simplest intelligent edge devices now use more than one sensor. Fusing information from two or more sensors commonly allows an intelligent system to deliver higher accuracy or provide complementary information. For example, in automated parallel parking or automated valet parking, vision or radar sensing for available-space detection may be combined with ultrasonic detection for ranging, and possibly with IMU input to further refine positioning estimates. These features might be complemented by SLAM to navigate through a parking lot for automated valet parking.
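The accuracy payoff of fusing two ranging sensors can be sketched with a minimal inverse-variance weighted average, a standard fusion technique. This is an illustrative example only, not CEVA code; the sensor variances are made-up numbers.

```python
def fuse_ranges(vision_m, vision_var, ultra_m, ultra_var):
    """Inverse-variance weighted fusion of two independent range estimates.

    Combining a noisy vision-based distance with a noisy ultrasonic
    distance yields a fused estimate whose variance is lower than either
    sensor's alone -- the basic payoff of sensor fusion.
    """
    w_v = 1.0 / vision_var   # weight each sensor by its certainty
    w_u = 1.0 / ultra_var
    fused = (w_v * vision_m + w_u * ultra_m) / (w_v + w_u)
    fused_var = 1.0 / (w_v + w_u)
    return fused, fused_var

# Hypothetical readings: vision says 2.0 m (variance 0.04),
# ultrasonic says 1.9 m (variance 0.01).
est, var = fuse_ranges(2.0, 0.04, 1.9, 0.01)
```

The fused estimate leans toward the more certain ultrasonic reading, and its variance (0.008) is smaller than either input's, which is why ranging sensors are fused rather than used in isolation.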

CEVA SensPro2 Sensor Hub DSP is the best fit for sensor hub/fusion applications. SensPro2 is the second generation of CEVA's sensor hub DSP, allowing for multiple sensor inputs: image, microphone, radar, time of flight, IMU and more. Software based neural networks run fast on this DSP architecture thanks to a rich complement of hardware support features, including vector units with flexible MAC operation range, integer and floating-point math support, application-specific ISA extensions and a comprehensive non-linear instruction set. Through these capabilities SensPro2 delivers up to 2X faster AI, 6X faster SLAM, 8X faster radar and 10X faster audio than the previous generation SensPro.

Artificial intelligence on the edge through SensPro2 is already deployed in SoCs across a wide range of consumer applications; one recently published example is a new Novatek surveillance SoC.

DNN Intelligence and NeuPro-M

The high end of edge intelligence demands deep neural network (DNN) support, requiring high levels of parallelism and bandwidth optimization together with heterogeneous accelerators for the very latest AI algorithms. A good example of the first need is free-space detection for autonomous/semi-autonomous driving, which aims to find the safe driving area along a road or highway while avoiding obstacles, opposite-direction lanes, unpaved shoulders and dividers. Response time must be fast, so a forward-facing road image is broken into, say, four sub-frames which are processed in parallel. Free-space detection is run on each sub-frame, and the results are recombined to produce the complete output. NeuPro-M supports up to 8 parallel engines for this class of edge AI application.
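The split-process-recombine pattern described above can be sketched in a few lines. This is a conceptual illustration only: the per-tile "detector" is a stand-in threshold test, not a real DNN, and the thread pool stands in for NeuPro-M's parallel engines.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_free_space(tile):
    # Stand-in for per-tile DNN inference: mark a cell "free" (1)
    # when its pixel value is below an arbitrary obstacle threshold.
    return [[1 if px < 128 else 0 for px in row] for row in tile]

def split_quadrants(frame):
    # Split a 2-D frame into four sub-frames for parallel processing.
    h, w = len(frame), len(frame[0])
    return [
        [row[:w // 2] for row in frame[:h // 2]],  # top-left
        [row[w // 2:] for row in frame[:h // 2]],  # top-right
        [row[:w // 2] for row in frame[h // 2:]],  # bottom-left
        [row[w // 2:] for row in frame[h // 2:]],  # bottom-right
    ]

def merge_quadrants(tl, tr, bl, br):
    # Recombine the four per-tile masks into one full-frame result.
    top = [l + r for l, r in zip(tl, tr)]
    bottom = [l + r for l, r in zip(bl, br)]
    return top + bottom

def parallel_free_space(frame):
    with ThreadPoolExecutor(max_workers=4) as pool:
        masks = list(pool.map(detect_free_space, split_quadrants(frame)))
    return merge_quadrants(*masks)
```

Because each sub-frame is independent, the four inferences can run concurrently and the wall-clock latency approaches that of a single quarter-frame inference, which is the point of mapping them onto parallel engines.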

Accuracy and performance expectations don't stop at parallelism. Network developers now want to take advantage of specialized functions, now hardware accelerated in NeuPro-M. These include matrix decomposition, sparsity, Winograd and mixed precision neural operations, all available in each parallel engine.
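Of the accelerated functions listed above, sparsity is the easiest to illustrate: a zero weight contributes nothing to a multiply-accumulate, so a sparsity-aware engine can skip it entirely. The toy dot product below shows the idea in software; real sparsity hardware does this at the MAC level, and the function here is illustrative, not CEVA's implementation.

```python
def sparse_dot(weights, activations):
    """Dot product that skips zero weights.

    Zero operands contribute nothing to the sum, so a sparsity-aware
    engine never spends a multiply-accumulate cycle on them. The
    returned 'skipped' count is the number of MACs saved.
    """
    total = 0
    skipped = 0
    for w, a in zip(weights, activations):
        if w == 0:
            skipped += 1  # a hardware engine would simply skip this MAC
            continue
        total += w * a
    return total, skipped

result, saved = sparse_dot([0, 2, 0, 3], [5, 6, 7, 8])
```

With half the weights pruned to zero, half the multiply-accumulates are skipped while the result is unchanged, which is why pruned networks map well onto sparsity-accelerated engines.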

NeuPro-M was released in 2022, winning "Most Promising Product" at EE Awards Asia and "Honorable Mention for Best Edge AI Processor" at 2022 Edge AI and Vision Alliance Product of the Year. It is already deployed in multiple SoCs under design, which are expected to start appearing in end products in the coming years.

Future-Proofing Solutions

Software-only AI solutions running on a standard CPU or GPU are too inefficient and power-hungry to be practical, but they have a theoretical appeal: if needs change, you can always change the software, with no hardware change required. Is it possible to get all the performance and power advantages of hardware acceleration while still retaining flexibility for upgrades as AI technologies and network layers evolve? With both SensPro2 and NeuPro-M it is. The vector DSP foundation of these AI solutions ensures your ability to upgrade product implementations in software as market needs and networks advance. The CEVA Deep Neural Network (CDNN) AI compiler streamlines implementation from standard networks (TensorFlow, PyTorch, etc.) to a mapping onto the processor IP as implemented in your specific SoC. You can also control optimizations in this step to take full advantage of special accelerators, such as those in NeuPro-M, or to add accelerators of your own that you might need in your design. Such extensions are supported through CEVA's CDNN-Invite API.
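The lowering step such a compiler performs can be pictured as mapping each framework-level op either to a built-in accelerated kernel or to a user-registered custom accelerator (the role CDNN-Invite plays for custom hardware). The sketch below is purely illustrative: every table entry, function and kernel name is invented for the example and does not reflect the real CDNN API.

```python
# Hypothetical op-to-kernel tables; names are invented for illustration.
BUILTIN_KERNELS = {"conv2d": "npm_conv_winograd", "matmul": "npm_matmul_sparse"}
custom_kernels = {}

def register_custom(op_name, kernel_name):
    """Analogous to hooking a proprietary accelerator into the compiler."""
    custom_kernels[op_name] = kernel_name

def lower_graph(ops):
    """Map each framework-level op to a target kernel, custom ops first."""
    plan = []
    for op in ops:
        if op in custom_kernels:
            plan.append((op, custom_kernels[op]))       # user accelerator
        elif op in BUILTIN_KERNELS:
            plan.append((op, BUILTIN_KERNELS[op]))      # built-in engine
        else:
            plan.append((op, "cpu_fallback"))           # vector DSP/CPU path
    return plan

register_custom("my_postproc", "soc_postproc_engine")
plan = lower_graph(["conv2d", "my_postproc", "softmax"])
```

The key design point is the fallback path: any op with no accelerated kernel still runs on the programmable vector DSP, which is what keeps a hardware-accelerated product software-upgradeable as networks evolve.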
