117 "Artificial Intelligence" SoCs

AI IP for Cybersecurity monitoring - Smart Monitor
In cryptography, an attack can be performed by injecting one or several faults into a device, thus disrupting its functional behavior. Commonly used techniques to inject faults consist of introducin...

AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications

DesignWare EV Embedded Vision Processors provide high-performance processing capabilities at a power and cost point low enough for embedded applications, while maintaining flexibility to support an...

Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

c.WAVE100 - Deep Learning based Fully Hardwired Object Detection IP
Chips&Media's Computer Vision IP performs deep-learning-based object detection, with the capability to process 4K-resolution input at 30 FPS in real time.

CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully-optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.
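To illustrate the kind of post-training quantization step such compilers perform when mapping cloud-trained models to edge inference, here is a minimal sketch of symmetric per-tensor int8 weight quantization — a generic technique chosen for illustration; CDNN's actual quantization algorithms are not described here.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: one scale maps the largest
    # absolute weight onto the int8 endpoint 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for accuracy checks.
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()  # worst-case rounding error
```

Because the maximum-magnitude weight maps exactly onto 127, the worst-case reconstruction error is bounded by half the scale — the basic trade-off an edge compiler tunes per layer.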

CEVA NeuPro-M - NPU IP family for generative and classic AI with highest power efficiency, scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

CEVA NeuPro-S - Edge AI Processor Architecture for Imaging & Computer Vision
NeuPro-S™ is a low power AI processor architecture for on-device deep learning inferencing, imaging and computer vision workloads.

CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing an alternative to special-purpose DSPs and to MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and Controller, designed for the inherent low power requirements of DSP kernels with high-level programming and compact code size requirements of a large control code base.

CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the new Neural Processing Unit (NPU) product, competes in high-performance neural-processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

Arm Cortex-M55
The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

Arm Cortex-M85
A new milestone for high-performance microcontrollers, the Arm Cortex-M85 is the highest-performing Cortex-M processor with...

Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

Highly Scalable and Efficient Second-Generation ML Inference Processor

Ethos-U55 Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

AI Innovation for Edge and Endpoint Devices

EV74 processor IP for AI vision applications with 4 vector processing units
The DesignWare® ARC® EV71, EV72, and EV74 Embedded Vision Processor IP provides high-performance, low-power, area-efficient solutions for a standalone computer vision and/or AI algorithm engine or as...

EV7x Vision Processors

The Synopsys EV7x Vision Processors' heterogeneous architecture integrates vector DSP, vector FPU, and a neural network accelerator to provide a scalable solution for a wide range of current an...

EV7xFS Vision Processors for Functional Safety

The ASIL B or D Ready Synopsys EV7xFS Embedded Vision Processors enable automotive system-on-chip (SoC) designers to accelerate Advanced Driver Assistance Systems (ADAS) and autonomous vehicle appl...

HBM3 PHY for AI and machine learning model training

The Rambus High-Bandwidth Memory generation 3 (HBM3) PHY is optimized for systems that require a high-bandwidth, low-latency memory solution. The memory subsystem PHY supports data rates up to 8.4 ...
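As a back-of-the-envelope bandwidth check, assuming the quoted data rate is 8.4 Gb/s per pin and a standard 1024-bit HBM3 stack interface (both figures are assumptions for illustration, not taken from this listing):

```python
# Assumed figures: 8.4 Gb/s per pin over a 1024-bit HBM3 stack interface.
data_rate_gbps = 8.4     # per-pin data rate, Gb/s (assumed)
bus_width_bits = 1024    # standard HBM3 interface width (assumed)

# Aggregate bandwidth in GB/s: bits per second across the bus, divided by 8.
bandwidth_gBps = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gBps)    # -> 1075.2 GB/s per stack
```

Under those assumptions a single stack delivers roughly 1 TB/s, which is why HBM3 PHYs target AI/ML training workloads.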

IMG Series4 Neural Network Accelerator (NNA)

IMG Series4 next-generation neural network accelerator (NNA) is ideal for advanced driver-assistance systems (ADAS) and autonomous vehicles such as robotaxis. The range of cores incorporates sophis...

InferX X1 Edge Inference Co-Processor
The InferX X1 Edge Inference Co-Processor is optimized for what the edge needs: large models, run at batch=1.

NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial.

Neural Network Accelerator

The new PowerVR Series2NX Neural Network Accelerator (NNA) delivers high performance computation of neural networks at very low power consumption in minimal silicon area. It is designed to power in...

Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

NMAX512 : Neural Inferencing Tile for 1 to >100 TOPS
NMAX uses a new architecture that loads weights rapidly compared with existing solutions.

nnMAX 1K AI Inference IP for 2 to >100 TOPS at low power, low die area

PowerVR Neural Network Accelerator

Optimised for cost-sensitive devices

The PowerVR AX2145's streamlined architecture delivers a performance-efficient neural network inferencing engine for ultra-low-bandwidth systems ...

PowerVR Neural Network Accelerator

The Series3NX-F brings programmable extensibility to the Series3NX architecture. It combines a Series3NX core with a neural network programmable unit (NNPU); a highly neural network optimised GPGPU...

PowerVR Neural Network Accelerator
The PowerVR AX2185 is the highest-performing neural network accelerator per mm2 on the market. Featuring eight full-width compute engines, the AX2185 delivers up to 4.1 Tera Operations Per Second (TOPS...
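A headline TOPS figure follows directly from the MAC count and clock rate. The per-engine MAC count and clock below are illustrative assumptions (not published in this blurb), chosen to show one configuration consistent with the quoted 4.1 TOPS:

```python
# Illustrative, assumed configuration: 8 engines, 256 MACs each, 1 GHz clock.
engines = 8
macs_per_engine = 256   # assumed
clock_hz = 1.0e9        # assumed
ops_per_mac = 2         # one multiply + one accumulate per MAC per cycle

tops = engines * macs_per_engine * ops_per_mac * clock_hz / 1e12
print(tops)  # -> 4.096, i.e. ~4.1 TOPS
```

The same formula explains why vendors quote TOPS at a specific clock: halving the frequency halves the peak figure.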

PowerVR Neural Network Accelerator - cost-sensitive solution for low power and smallest area

PowerVR AX3125 is a cost-sensitive solution for low-power applications – i.e. where power draw is a key issue. It can run at up to 0.6 TOPS in IoT devices, entry-level smart cameras, embedded...

PowerVR Neural Network Accelerator - perfect choice for cost-sensitive devices

The PowerVR AX3145 is an ideal choice for more cost-sensitive devices with basic performance requirements. Optimised for ultra-low bandwidth, new Series3NX features reduce the overall implementation...

PowerVR Neural Network Accelerator - The ideal choice for mid-range requirements

The PowerVR AX3365 delivers up to 2.0 TOPS performance in a small silicon area. With its low power consumption, it is ideal for mid-range smartphones, smart surveillance and DTV/set-top box video c...

PowerVR Neural Network Accelerator - The perfect choice for cost-sensitive devices

With 4.0 TOPS in a smaller silicon area than its predecessor, the PowerVR AX3385 is suitable for high-end smartphones, smart cameras and DTV/set-top box. It can be used for video stream analysis, c...

PowerVR Neural Network Accelerator - The ultimate solution for high-end neural networks acceleration

With more than double the performance of the previous generation, the PowerVR AX3595 is the flagship of our new range of single-core designs. More than this, it's the fundamental engine that de...

Smart Data Acceleration
Rambus Smart Data Acceleration (SDA) research program is focused on tackling some of the major issues facing data centers and servers in the age of Big Data. The SDA Research Program has been explorin...

Vivante VIP8000
The Vivante VIP8000 consists of a highly multi-threaded Parallel Processing Unit, Neural Network Unit and Universal Storage Cache Unit.


As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...

FlexNoC AI Package
Complements FlexNoC interconnect IP, adding technologies for artificial intelligence (AI) and machine learning (ML) chip design.



© 2023 Design And Reuse

All Rights Reserved.
