www.design-reuse-embedded.com
61 "AI Processor" IP

1
Enhanced Neural Processing Unit for safety providing 32,768 MACs/cycle of performance for AI applications
The Synopsys ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applications requiring AI-enabled SoCs. The ARC NPX6 NPU IP is designed f...
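
As a rough aid to reading the headline figure, the sketch below converts MACs/cycle into peak TOPS; the clock frequency is an illustrative assumption only, since the listing states just the MACs/cycle count.

```python
# Back-of-envelope peak throughput for a 32,768 MAC/cycle NPU.
# The 1.3 GHz clock is an assumption for illustration; the listing gives only MACs/cycle.
macs_per_cycle = 32_768
clock_hz = 1.3e9          # assumed clock frequency (not from the listing)
ops_per_mac = 2           # one multiply plus one accumulate
peak_tops = macs_per_cycle * ops_per_mac * clock_hz / 1e12
print(f"Peak throughput ~ {peak_tops:.1f} TOPS")  # ~85.2 TOPS at the assumed clock
```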

2
AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

3
CEVA NeuPro-M- NPU IP family for generative and classic AI with highest power efficiency, scalable and future proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

4
CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing an alternative to special-purpose DSPs and to MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

5
CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and Controller, designed for the inherent low power requirements of DSP kernels with high-level programming and compact code size requirements of a large control code base.

6
CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the company's new Neural Processing Unit (NPU) product, competes in high-performance neural processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

7
Compact neural network engine offering scalable performance (32, 64, or 128 MACs) at very low energy footprints
The Cadence® Tensilica® NNE 110 offers an energy-efficient hardware-based AI engine that can be paired with a Tensilica based DSP. The NNE 110 targets a variety of applications including audio, voice,...

8
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

9
Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions. The Cadence Neo NPUs offer energy-efficient hardware-based AI engines that can be paired with any host processor...

10
NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial.

11
Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

12
Neural Network Processor IP
Vivante’s VIP9000 processor family offers programmable, scalable and extendable solutions for markets that demand real time and low power AI devices.

24
Neural Network Processor IP
VIP9000Pico processor family offers extremely low power, programmable, scalable and extendable solutions for markets that demand low power AI devices.

25
Neural Network Processor IP
VIP9000 Series supports all popular deep learning frameworks (TensorFlow, PyTorch, TensorFlow Lite, Caffe, Caffe2, DarkNet, ONNX, NNEF, Keras, etc.) as well as programming APIs like OpenCL and OpenVX....
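
As an illustration of how such framework support is typically exercised, the following minimal sketch exports a small PyTorch model to ONNX, one of the formats named above, for hand-off to an NPU compiler; the model, shapes, and file name are placeholders, and the vendor-specific import step is not shown.

```python
# Minimal sketch: export a PyTorch model to ONNX for hand-off to an NPU toolchain.
# The model and file name are illustrative placeholders, not part of the Vivante listing.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

dummy_input = torch.randn(1, 3, 224, 224)        # example input shape
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
# The resulting model.onnx would then be imported by the vendor's NPU compiler.
```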

29
Tensilica DSP IP supports efficient AI/ML processing
The Cadence AI IP platform includes the extensible DSP platform from Cadence, which provides flexible instruction sets designed to perform artificial intelligence and machine learning (AI/ML) workload...

30
4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory system and AI platform IP solution company, releases the first commercial mixed-precision (4-/8-bit) computation NPU IP, ENLIGHT. When ENLIGHT is used with ot...
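
To make the mixed-precision idea concrete, here is a minimal NumPy sketch of generic symmetric quantization at 4 and 8 bits; it illustrates the technique in general and assumes nothing about ENLIGHT's actual quantization scheme.

```python
# Generic symmetric quantization sketch (not OPENEDGES' actual scheme).
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int):
    """Quantize a float tensor to signed integers with `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                   # e.g. 7 for 4-bit, 127 for 8-bit
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale                              # dequantize with q * scale

weights = np.random.randn(64, 64).astype(np.float32)
w4, s4 = quantize_symmetric(weights, bits=4)     # coarser precision
w8, s8 = quantize_symmetric(weights, bits=8)     # finer precision
err4 = np.abs(weights - w4 * s4).mean()
err8 = np.abs(weights - w8 * s8).mean()
print(f"mean abs error: 4-bit {err4:.4f} vs 8-bit {err8:.4f}")
```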

31
ENLIGHT Pro - 8/16-bit mixed-precision NPU IP
The state-of-the-art inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and more. ENLIGHT Pro is meticulously engineered to deliv...

32
memBrain
As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...

33
Complete Neural Processor for Edge AI
Akida is the first neuromorphic IP available on the market. Inspired by the biological function of neurons and engineered on a digital logic process, Akida's event-based spiking neural network (SNN) perf...
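
For readers new to event-based processing, the sketch below shows a generic leaky integrate-and-fire neuron, the basic building block behind spiking neural networks; it is a textbook illustration only, not a description of the Akida implementation.

```python
# Generic leaky integrate-and-fire (LIF) neuron, illustrating event-based spiking.
# Parameters are arbitrary and unrelated to the Akida IP.
import numpy as np

def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Return the binary spike train produced by a single LIF neuron."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i              # leaky integration of incoming events
        if v >= threshold:            # fire when the membrane potential crosses threshold
            spikes.append(1)
            v = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(lif_spikes(rng.uniform(0.0, 0.5, size=20)))
```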

34
General Purpose Neural Processing Unit
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (SoC) developers, Quadric's General Purpose Neural Processing Unit (GPNPU)...

35
RISC-V Tensor Unit
The bulk of computations in Large Language Models (LLMs) is in fully-connected layers that can be efficiently implemented as matrix multiplication. The Tensor Unit provides hardware specifically tailo...
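
The reduction of a fully-connected layer to matrix multiplication can be seen directly in a few lines of NumPy; the layer sizes below are arbitrary LLM-like examples, not any vendor's configuration.

```python
# A fully-connected layer over a batch of tokens is one matrix multiplication.
import numpy as np

tokens, d_in, d_out = 128, 4096, 4096          # example LLM-like layer sizes
x = np.random.randn(tokens, d_in)              # activations: one row per token
W = np.random.randn(d_in, d_out)               # layer weights
b = np.random.randn(d_out)                     # bias

y = x @ W + b                                  # the entire layer: a single matmul
print(y.shape)                                 # (128, 4096)
```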

36
RISC-V-based AI IP development for enhanced training and inference
Tenstorrent develops AI IP with precision, anchored in RISC-V’s open architecture, delivering specialized, silicon-proven solutions for both AI training and inference. Our platforms are optimized for ...

37
Ultra Low Power AI core
Akida Pico accelerates a set of highly optimized temporal event-based neural network models to create an ultra energy-efficient, purely digital, event-based processing architecture. Akida Pico fea...

38
AI processing engine
AON1010™ is one of the highly optimized AONVoice™ Neural Network cores for voice and audio recognition. This solution is optimized for processing microphone data for applications including voice and ...

39
Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

40
Artificial Intelligence Cores
The most flexible solution on the market, giving the user the ability to select the best combination of performance, power, and cost.

Page 1 of 2
