98 "Artificial Intelligence" SoCs

1
AI IP for Cybersecurity monitoring - Smart Monitor
In cryptography, an attack can be performed by injecting one or several faults into a device, thus disrupting its functional behavior. Commonly used techniques to inject faults consist of introducin...

2
AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

3
ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications

DesignWare EV Embedded Vision Processors provide high-performance processing capabilities at a power and cost point low enough for embedded applications, while maintaining flexibility to support an...


4
Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

5
c.WAVE100 - Deep Learning based Fully Hardwired Object Detection IP
Chips&Media's Computer Vision IP performs deep-learning-based object detection, with the capability to process 4K-resolution input at 30 FPS in real time.

6
CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted at mass-market embedded devices, CDNN combines a broad range of network optimizations, advanced quantization algorithms, data-flow management, and fully optimized CNN and RNN compute libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.
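As a side note, the kind of post-training quantization such a compiler applies can be sketched in a few lines of NumPy. This is a generic, hypothetical illustration of symmetric INT8 weight quantization, not CEVA's CDNN API or toolchain:

import numpy as np

# Map float32 weights to INT8 with a single per-tensor scale (symmetric quantization),
# the kind of network optimization an edge-AI compiler performs before deployment.
def quantize_int8(weights):
    scale = np.max(np.abs(weights)) / 127.0                   # symmetric range [-127, 127]
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Recover approximate float32 weights, e.g. to check quantization error.
def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)  # toy layer weights
q, s = quantize_int8(w)
print("max abs quantization error:", np.max(np.abs(w - dequantize(q, s))))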

7
CEVA NeuPro-S - Edge AI Processor Architecture for Imaging & Computer Vision
NeuPro-S™ is a low power AI processor architecture for on-device deep learning inferencing, imaging and computer vision workloads.

8
CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing an alternative to special-purpose DSPs and MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

9
CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and controller, designed to meet the inherently low power requirements of DSP kernels alongside the high-level programming and compact code-size requirements of a large control code base.

10
Cortex-M55
The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

11
Cortex-M85
A new milestone for high-performance microcontrollers, the Arm Cortex-M85 is the highest-performing Cortex-M processor with...

12
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

13
Ethos-N78
Highly Scalable and Efficient Second-Generation ML Inference Processor

14
Ethos-U55 Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

15
Ethos-U65
AI Innovation for Edge and Endpoint Devices

16
HBM3 PHY for AI and machine learning model training

The Rambus High-Bandwidth Memory generation 3 (HBM3) PHY is optimized for systems that require a high-bandwidth, low-latency memory solution. The memory subsystem PHY supports data rates up to 8.4 ...


17
Heterogeneous and Secure AI/ML Processor Architecture for Smart Edge Devices
NeuPro-M™ redefines high performance AI (Artificial Intelligence) and ML (Machine Learning) processing for smart edge devices and edge compute with heterogeneous and secure architecture.

18
IMG Series4 Neural Network Accelerator (NNA)

IMG Series4 next-generation neural network accelerator (NNA) is ideal for advanced driver-assistance systems (ADAS) and autonomous vehicles such as robotaxis. The range of cores incorporates sophis...


19
InferX X1 Edge Inference Co-Processor
The InferX X1 Edge Inference Co-Processor is optimized for what the edge needs: large models at batch=1.

20
NeuPro Family of AI Processors
A dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets, including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial.

21
Neural Network Accelerator

The new PowerVR Series2NX Neural Network Accelerator (NNA) delivers high performance computation of neural networks at very low power consumption in minimal silicon area. It is designed to power in...


22
Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

23
NMAX512: Neural Inferencing Tile for 1 to >100 TOPS
NMAX has a unique new architecture that loads weights rapidly compared to existing solutions.

24
nnMAX 1K AI Inference IP for 2 to >100 TOPS at low power, low die area

25
PowerVR Neural Network Accelerator
The PowerVR AX2185 is the highest-performing neural network accelerator per mm² on the market. Featuring eight full-width compute engines, the AX2185 delivers up to 4.1 Tera Operations Per Second (TOPS...

26
PowerVR Neural Network Accelerator

Optimised for cost-sensitive devices

The PowerVR AX2145's streamlined architecture delivers a performance-efficient neural network inferencing engine for ultra-low-bandwidth systems ...


27
PowerVR Neural Network Accelerator

The Series3NX-F brings programmable extensibility to the Series3NX architecture. It combines a Series3NX core with a neural network programmable unit (NNPU): a highly neural-network-optimised GPGPU...


28
PowerVR Neural Network Accelerator - cost-sensitive solution for low power and smallest area

The PowerVR AX3125 is a cost-sensitive solution for low-power applications where power draw is a key issue. It can run at up to 0.6 TOPS in IoT devices, entry-level smart cameras, embedded...


29
PowerVR Neural Network Accelerator - perfect choice for cost-sensitive devices

The PowerVR AX3145 is an ideal choice for more cost-sensitive devices with basic performance requirements. Optimised for ultra-low bandwidth, new Series3NX features reduce the overall implementation...


30
PowerVR Neural Network Accelerator - The ideal choice for mid-range requirements

The PowerVR AX3365 delivers up to 2.0 TOPS performance in a small silicon area. With its low power consumption, it is ideal for mid-range smartphones, smart surveillance and DTV/set-top box video c...


31
PowerVR Neural Network Accelerator - The perfect choice for cost-sensitive devices

With 4.0 TOPS in a smaller silicon area than its predecessor, the PowerVR AX3385 is suitable for high-end smartphones, smart cameras and DTV/set-top box. It can be used for video stream analysis, c...


32
PowerVR Neural Network Accelerator - The ultimate solution for high-end neural networks acceleration

With more than double the performance of the previous generation, the PowerVR AX3595 is the flagship of our new range of single-core designs. More than this, it's the fundamental engine that de...


33
Smart Data Acceleration
The Rambus Smart Data Acceleration (SDA) research program is focused on tackling some of the major issues facing data centers and servers in the age of Big Data. The SDA research program has been explorin...

34
Vivante VIP8000
The Vivante VIP8000 consists of a highly multi-threaded Parallel Processing Unit, Neural Network Unit and Universal Storage Cache Unit.

35
memBrain

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...


36
FlexNoC AI Package
Complements FlexNoC interconnect IP, adding technologies for artificial intelligence (AI) and machine learning (ML) chip design.

37
AI Accelerator

The accelerator is designed for generic applications that compute a 3D tensor convolved with a 4D tensor, increasing efficiency by 10x.

Marquee created the microarchitecture from the specificatio...
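For concreteness, the operation described above (a 3D activation tensor convolved with a 4D filter tensor) can be written as a naive NumPy reference. The shapes, names, and loop nest are illustrative assumptions, not Marquee's microarchitecture or specification:

import numpy as np

# Direct convolution of a 3D input (H, W, C_in) with a 4D filter bank (K, K, C_in, C_out),
# stride 1, no padding; output shape is (H-K+1, W-K+1, C_out).
def conv3d_by_4d(x, w):
    H, W, _ = x.shape
    K, _, _, C_out = w.shape
    H_out, W_out = H - K + 1, W - K + 1
    y = np.zeros((H_out, W_out, C_out), dtype=x.dtype)
    for i in range(H_out):
        for j in range(W_out):
            patch = x[i:i + K, j:j + K, :]                    # (K, K, C_in) window
            # contract the window against every output filter at once
            y[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return y

x = np.random.rand(8, 8, 3).astype(np.float32)                # toy 3D activation tensor
w = np.random.rand(3, 3, 3, 16).astype(np.float32)            # toy 4D filter tensor
print(conv3d_by_4d(x, w).shape)                               # (6, 6, 16)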


38
Face Detection and Tracking
The Xilinx Zynq-7000 All Programmable SoC architecture, which combines dual-core ARM Cortex-A9 processors and programmable logic on a single device, enables single-SoC implementations of multiple real...

39
MPSoC Multi-Camera Vision Kit
The logiVID-ZU Vision Development Kit provides system designers with everything they need to efficiently develop multi-camera vision applications on the Xilinx® Zynq® UltraScale+™ MPSoC devices.

40
v-CNNDesigner tool
The new v-CNNDesigner tool automatically translates trained neural networks into optimized implementations that run efficiently on the v-MP6000UDX architecture.

