32 "AI Processors" SoCs

AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

CEVA NeuPro-M - NPU IP family for generative and classic AI with high power efficiency; scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing an alternative to special-purpose DSPs and to MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and controller, designed to meet the inherently low-power requirements of DSP kernels together with the high-level programming and compact code-size requirements of a large control code base.

CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the company's new Neural Processing Unit (NPU) product, competes in high-performance neural-processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

Arm Cortex-M55 Processor
The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

Arm Cortex-M85 Processor
A new milestone for high-performance microcontrollers, the Arm Cortex-M85 is the highest-performing Cortex-M processor with...

Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

Highly Scalable and Efficient Second-Generation ML Inference Processor

Ethos-U55 Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

AI Innovation for Edge and Endpoint Devices

InferX X1 Edge Inference Co-Processor
The InferX X1 Edge Inference Co-Processor is optimized for what the edge needs: large models at batch=1.

NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge. Provides self-contained, specialized AI processors, scaling in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial.

Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

NMAX512 : Neural Inferencing Tile for 1 to >100 TOPS
NMAX features a unique new architecture that loads weights rapidly compared with existing solutions.

nnMAX 1K AI Inference IP for 2 to >100 TOPS at low power, low die area


As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...

4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory-system and AI-platform IP solution company, releases the first commercial mixed-precision (4-/8-bit) computation NPU IP, ENLIGHT. When ENLIGHT is used with ot...

CortiCore - Neural Processing Engine
Roviero has developed a natively graph-computing processor for edge inference. The CortiCore architecture provides the solution via its unique instruction set, which dramatically reduces the compiler comple...

General Purpose Neural Processing Unit
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system-on-chip (SoC) developers, Quadric's General Purpose Neural Processing Unit (GPNPU)...

Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions. The Cadence Neo NPUs offer energy-efficient hardware-based AI engines that can be paired with any host processor...

Spiking Neural Processor
Innatera's ultra-efficient neuromorphic processors mimic the brain's mechanisms for processing sensory data. Based on a proprietary analog-mixed-signal computing architecture, Innatera'...

v-MP6000UDX processor
Deep learning has quickly become a must-have technology for bringing new smart-sensing and intelligent-analysis capabilities to all of our electronics. Whether it's self-driving cars that need to understa...

ZIA DV500 Series - Ultra Low Power Consumption Processor IP for Deep Learning
AI inference processor IP, which achieves smaller size and ultra-low power consumption by being optimized for object recognition and scene understanding often used in industrial equipment and automobi...

ZIA DV700 Series - Configurable AI inference processor IP
Configurable AI inference processor IP, which can optimize the performance and size and process all data such as images, videos, and sounds on the edge side where real-time property, safety, privacy p...

ZIA ISP - Small-size ISP IP ideal for AI camera systems
Small-size ISP (Image Signal Processing) IP ideal for AI camera systems.

Artificial Intelligence Cores
The most flexible solution on the market, giving the user the ability to select the best combination of performance, power, and cost.

C860 High-performance 32-bit multi-core processor with AI acceleration engine
C860 utilizes a 12-stage superscalar pipeline, with a standard memory management unit, and can run Linux and other operating systems.

Jotunn - Generative AI Platform
The "Memory Wall" was first conceived as a theory by Wulf and McKee in 1994. It posited that the development of the processing unit (CPU) far outpaced that of memory. As a result, the ...

POLYN PPG is a Neuromorphic Analog Signal Processor (NASP) with Direct Analog Input (DAI) for real-time edge pulse determination at a fraction of the power consumed by traditional devices.

Speedster7t FPGAs
Speedster®7t FPGAs are optimized for high-bandwidth workloads and eliminate the performance bottlenecks associated with traditional FPGAs.



© 2023 Design And Reuse
