120 "Artificial Intelligence" SoCs

1
AI IP for Cybersecurity monitoring - Smart Monitor
In cryptography, an attack can be performed by injecting one or several faults into a device, thus disrupting its functional behavior. Commonly used techniques to inject faults consist of introducin...
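
For context, one common class of countermeasure against such attacks is redundant execution with a cross-check. The Python sketch below illustrates only that general idea; it is not the Smart Monitor IP or its interface, and protected_digest is a hypothetical example function.

    # Illustrative software countermeasure against fault injection: compute the
    # sensitive operation twice and compare results; a mismatch suggests a fault
    # was injected mid-computation. Generic sketch, not the Smart Monitor IP.
    import hashlib
    import hmac

    def protected_digest(key: bytes, message: bytes) -> bytes:
        """Compute an HMAC twice and cross-check the results (hypothetical example)."""
        first = hmac.new(key, message, hashlib.sha256).digest()
        second = hmac.new(key, message, hashlib.sha256).digest()
        if not hmac.compare_digest(first, second):
            # Divergent results: abort rather than emit a faulty tag.
            raise RuntimeError("fault detected: redundant computations disagree")
        return first

    if __name__ == "__main__":
        print(protected_digest(b"secret-key", b"hello").hex())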

2
ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications

DesignWare EV Embedded Vision Processors provide high-performance processing capabilities at a power and cost point low enough for embedded applications, while maintaining flexibility to support an...


3
Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

4
Cortex-M55
The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

5
Cortex-M85
A New Milestone for High-Performance Microcontrollers: Arm Cortex-M85 is the highest-performing Cortex-M processor with...

6
Enhanced Neural Processing Unit for safety, providing 32,768 MACs/cycle of performance for AI applications
Synopsys ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applications requiring AI enabled SoCs. The ARC NPX6 NPU IP is designed f...
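
As a rough back-of-envelope illustration of what 32,768 MACs/cycle can mean in TOPS terms, the Python sketch below assumes a 1.3 GHz clock; the clock frequency is an illustrative assumption, not a figure from this listing.

    # Peak-throughput estimate from the MACs/cycle figure above.
    # The 1.3 GHz clock is an assumed example value, not a vendor specification.
    macs_per_cycle = 32_768          # from the listing
    clock_hz = 1.3e9                 # assumption for illustration
    ops_per_mac = 2                  # one multiply + one accumulate
    peak_tops = macs_per_cycle * clock_hz * ops_per_mac / 1e12
    print(f"Peak throughput ~ {peak_tops:.1f} TOPS")   # ~85.2 TOPS at 1.3 GHz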

7
Ethos-N78
Highly Scalable and Efficient Second-Generation ML Inference Processor

8
Ethos-U55 Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

9
Ethos-U65
AI Innovation for Edge and Endpoint Devices

10
EV74 processor IP for AI vision applications with 4 vector processing units
The DesignWare® ARC® EV71, EV72, and EV74 Embedded Vision Processor IP provides high-performance, low-power, area-efficient solutions for a standalone computer vision and/or AI algorithm engine or as...

11
EV7x Vision Processors

The Synopsys EV7x Vision Processors' heterogeneous architecture integrates vector DSP, vector FPU, and a neural network accelerator to provide a scalable solution for a wide range of current an...


12
EV7xFS Vision Processors for Functional Safety

The ASIL B or D Ready Synopsys EV7xFS Embedded Vision Processors enable automotive system-on-chip (SoC) designers to accelerate Advanced Driver Assistance Systems (ADAS) and autonomous vehicle appl...


13
HBM3 PHY for AI and machine learning model training

The Rambus High-Bandwidth Memory generation 3 (HBM3) PHY is optimized for systems that require a high-bandwidth, low-latency memory solution. The memory subsystem PHY supports data rates up to 8.4 ...


14
Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions. The Cadence Neo NPUs offer energy-efficient hardware-based AI engines that can be paired with any host processor...

15
NeuroWeave SDK - Faster Product Development for the Evolving AI Market
A common AI software solution for faster product development. Developing an agile software stack is important for successful artificial intelligence and machine learning (AI/ML) deployment at the edge...

16
Smart Data Acceleration
The Rambus Smart Data Acceleration (SDA) research program is focused on tackling some of the major issues facing data centers and servers in the age of Big Data. The SDA research program has been explorin...

17
Tensilica AI Max - NNA 110 Single Core
Single-core neural network accelerator offering from 0.5 to 4 TOPS, optimized for machine learning inference applications

18
Tensilica Vision 110 DSP
The latest addition to the Vision DSP family, built using 128-bit SIMD and offering up to 0.4 TOPS of performance

19
Tensilica Vision 130 DSP
First DSP for embedded vision and AI with millions of units shipped in the market

20
Tensilica Vision 230 DSP
Built on our latest Xtensa NX architecture, offering up to 2.18 TOPS of performance

21
Tensilica Vision 240 DSP
Built using 1024-bit SIMD and offering up to 3.84 TOPS of performance

22
AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

23
c.WAVE100 - Deep Learning based Fully Hardwired Object Detection IP
Chips&Media's Computer Vision IP is a deep-learning-based object detection engine capable of processing 4K-resolution input at 30 FPS in real time.
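
For scale, a rough estimate of the pixel rate implied by 4K at 30 FPS, assuming 3840x2160 UHD frames (the listing does not specify which 4K variant):

    # Pixel-rate estimate for "4K at 30 FPS" (UHD frame size assumed).
    width, height, fps = 3840, 2160, 30
    pixels_per_second = width * height * fps
    print(f"{pixels_per_second / 1e6:.0f} Mpixel/s")   # ~249 Mpixel/s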

24
CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully-optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.
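
To illustrate one of the steps such a compiler automates, the Python sketch below shows generic symmetric int8 post-training quantization of a layer's weights. It is a minimal illustration of the concept, not the CDNN toolchain or its API.

    # Generic per-tensor symmetric int8 quantization sketch (the general idea
    # behind deployment compilers; NOT the CEVA CDNN toolchain or its API).
    import numpy as np

    def quantize_symmetric_int8(weights: np.ndarray):
        """Map float32 weights to int8 with a single per-tensor scale."""
        scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 64).astype(np.float32)   # stand-in for a trained layer
    q, s = quantize_symmetric_int8(w)
    print("max abs error:", float(np.max(np.abs(w - dequantize(q, s)))))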

25
CEVA NeuPro-M - NPU IP family for generative and classic AI with highest power efficiency, scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

26
CEVA NeuPro-S - Edge AI Processor Architecture for Imaging & Computer Vision
NeuPro-S™ is a low power AI processor architecture for on-device deep learning inferencing, imaging and computer vision workloads.

27
CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing the perfect alternative to special-purpose DSPs and MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

28
CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and Controller, designed for the inherent low power requirements of DSP kernels with high-level programming and compact code size requirements of a large control code base.

29
CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the new Neural Processing Unit (NPU) product, competes in high-performance neural-processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

30
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully programmable AI accelerator designed to execute deep neural networks (DNNs) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

31
Maestro AI
Intelligent Clock Networking Solutions that Adapt on the Fly

32
NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial.

33
Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

34
Tessent AI IC debug and optimization
The multicore architectures of SoCs for machine learning (ML) and artificial intelligence (AI) applications provide unique challenges for development, verification, and validation teams. The larger nu...

35
Vivante VIP8000
The Vivante VIP8000 consists of a highly multi-threaded Parallel Processing Unit, Neural Network Unit and Universal Storage Cache Unit.

36
4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory system and AI platform IP solution company, releases the first commercial mixed-precision (4-/8-bit) computation NPU IP, ENLIGHT. When ENLIGHT is used with ot...
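
As a minimal illustration of why 4-bit weights matter for edge inference, the Python sketch below packs two signed int4 values into one byte, halving weight storage relative to int8. This is a generic sketch, not the ENLIGHT format or toolchain.

    # Two int4 values pack into one byte, so 4-bit weights halve memory footprint
    # versus int8. Generic illustration; not the ENLIGHT storage format.
    import numpy as np

    def pack_int4(values: np.ndarray) -> np.ndarray:
        """Pack signed 4-bit integers (range -8..7) two per byte."""
        v = (values.astype(np.int8).view(np.uint8) & 0x0F).reshape(-1, 2)
        return (v[:, 0] | (v[:, 1] << 4)).astype(np.uint8)

    weights4 = np.random.randint(-8, 8, size=64)
    packed = pack_int4(weights4)
    print(len(weights4), "int4 weights stored in", packed.nbytes, "bytes")  # 64 -> 32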

38
AI Accelerator IP - ENLIGHT
OPENEDGES™ Artificial Intelligence Compute Engine ENLIGHT™ is a deep learning accelerator IP technology that delivers unrivaled compute density and energy efficiency. ENLIGHT™ NPU IP ...

39
memBrain

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...


40
AI Accelerator

The accelerator is designed for generic applications, computing a 3D tensor convolved with a 4D tensor to increase efficiency by 10x.

Marquee created the microarchitecture from the specificatio...
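
As a shape-level illustration of the operation described above (a 3D input tensor convolved with a 4D weight tensor), the Python reference loop below convolves a (C, H, W) feature map with (K, C, R, S) weights. It is a plain illustrative kernel, not Marquee's microarchitecture; hardware accelerators map the same arithmetic onto parallel MAC arrays.

    # Reference convolution of a 3D tensor by a 4D tensor:
    # (C, H, W) input * (K, C, R, S) weights -> (K, H-R+1, W-S+1) output.
    import numpy as np

    def conv3d_by_4d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
        C, H, W = x.shape
        K, Cw, R, S = w.shape
        assert C == Cw, "input channels must match weight channels"
        out = np.zeros((K, H - R + 1, W - S + 1), dtype=x.dtype)
        for k in range(K):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    # One output element = sum over a C x R x S patch (a MAC burst).
                    out[k, i, j] = np.sum(x[:, i:i + R, j:j + S] * w[k])
        return out

    x = np.random.rand(3, 8, 8).astype(np.float32)      # C=3, H=W=8
    w = np.random.rand(16, 3, 3, 3).astype(np.float32)  # K=16, 3x3 kernels
    print(conv3d_by_4d(x, w).shape)                     # (16, 6, 6)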



