Search Solutions  
116 "Artificial Intelligence" IP

AI IP for Cybersecurity monitoring - Smart Monitor
In cryptography, an attack can be performed by injecting one or several faults into a device, thus disrupting its functional behavior. Commonly used fault-injection techniques consist of introducin...

ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications

DesignWare EV Embedded Vision Processors provide high-performance processing capabilities at a power and cost point low enough for embedded applications, while maintaining flexibility to support an...

Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology.

A new milestone for high-performance microcontrollers: Arm Cortex-M85 is the highest-performing Cortex-M processor with...

Enhanced Neural Processing Unit for safety, providing 32,768 MACs/cycle of performance for AI applications
Synopsys ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applications requiring AI enabled SoCs. The ARC NPX6 NPU IP is designed f...

Highly Scalable and Efficient Second-Generation ML Inference Processor

Ethos-U55 Embedded ML Inference for Cortex-M Systems
Unlock the Benefits of AI with this Best-in-Class Solution

AI Innovation for Edge and Endpoint Devices


Accelerate Edge AI Innovation

AI data-processing workloads at the edge are already transforming use cases and user experiences. The third-generation Ethos NPU helps meet the needs of future e...

EV74 processor IP for AI vision applications with 4 vector processing units
The DesignWare® ARC® EV71, EV72, and EV74 Embedded Vision Processor IP provides high-performance, low-power, area-efficient solutions for a standalone computer vision and/or AI algorithm engine or as...

EV7x Vision Processors

The Synopsys EV7x Vision Processors' heterogeneous architecture integrates vector DSP, vector FPU, and a neural network accelerator to provide a scalable solution for a wide range of current an...

EV7xFS Vision Processors for Functional Safety

The ASIL B or D Ready Synopsys EV7xFS Embedded Vision Processors enable automotive system-on-chip (SoC) designers to accelerate Advanced Driver Assistance Systems (ADAS) and autonomous vehicle appl...

HBM3 PHY for AI and machine learning model training

The Rambus High-Bandwidth Memory generation 3 (HBM3) PHY is optimized for systems that require a high-bandwidth, low-latency memory solution. The memory subsystem PHY supports data rates up to 8.4 ...

Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions
The Cadence Neo NPUs offer energy-efficient hardware-based AI engines that can be paired with any host processor...

NeuroWeave SDK - Faster Product Development for the Evolving AI Market
A common AI software solution for faster product development
Developing an agile software stack is important for successful artificial intelligence and machine learning (AI/ML) deployment at the edge...

Smart Data Acceleration
Rambus Smart Data Acceleration (SDA) research program is focused on tackling some of the major issues facing data centers and servers in the age of Big Data. The SDA Research Program has been explorin...

Tensilica AI Max - NNA 110 Single Core
Single-core neural network accelerator offering from 0.5 to 4 TOPS, optimized for machine learning inference applications

Tensilica Vision 110 DSP
The latest addition to the Vision DSP family, built using 128-bit SIMD and offering up to 0.4 TOPS of performance

Tensilica Vision 130 DSP
First DSP for embedded vision and AI with millions of units shipped in the market

Tensilica Vision 230 DSP
Built on our latest Xtensa NX architecture, offering up to 2.18 TOPS of performance

Tensilica Vision 240 DSP
Built using 1024-bit SIMD and offering up to 3.84 TOPS of performance

AI-Capable 3D GPU
Vivante AI-GPU augments the stunning 3D graphics rendering capabilities of the Vivante 3D engines with a dedicated Neural Network Engine (NN) and Tensor Processing Fabric to support advanced capabili...

c.WAVE100 - Deep Learning based Fully Hardwired Object Detection IP
Chips&Media's Computer Vision IP is a deep learning-based object detection IP capable of processing 4K-resolution input at 30 FPS in real time.

CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully-optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.

CEVA NeuPro-M - NPU IP family for generative and classic AI with highest power efficiency, scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

CEVA NeuPro-S - Edge AI Processor Architecture for Imaging & Computer Vision
NeuPro-S™ is a low power AI processor architecture for on-device deep learning inferencing, imaging and computer vision workloads.

CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices by providing an alternative to special-purpose DSPs and MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and controller, designed to meet the inherent low-power requirements of DSP kernels alongside the high-level programming and compact code-size requirements of a large control code base.

CMNP - Chips&Media Neural Processor
Chips&Media's CMNP, the company's new Neural Processing Unit (NPU) product, competes in high-performance neural processing IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

Maestro AI
Intelligent Clock Networking Solutions that Adapt on the Fly

NeuPro Family of AI Processors
Dedicated low-power AI processor family for deep learning at the edge, providing self-contained, specialized AI processors that scale in performance for a broad range of end markets including IoT, smartphones, surveillance, automotive, robotics, medical and industrial.

Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

Tessent AI IC debug and optimization
The multicore architectures of SoCs for machine learning (ML) and artificial intelligence (AI) applications provide unique challenges for development, verification, and validation teams. The larger nu...

Vivante VIP8000
The Vivante VIP8000 consists of a highly multi-threaded Parallel Processing Unit, Neural Network Unit and Universal Storage Cache Unit.

4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory system and AI platform IP solution company, releases the first commercial mixed-precision (4-/8-bit) computation NPU IP, ENLIGHT. When ENLIGHT is used with ot...

AI Accelerator IP- ENLIGHT
OPENEDGES™ Artificial Intelligence Compute Engine ENLIGHT™ is a deep learning accelerator IP technology that delivers unrivaled compute density and energy efficiency. ENLIGHT™ NPU IP ...

ENLIGHT Pro - 8/16-bit mixed-precision NPU IP

The state-of-the-art inference neural processing unit (NPU) IP is suitable for high-performance edge devices including automotive, cameras, and more. ENLIGHT Pro is meticulously engineered to deliv...



© 2024 Design And Reuse

All Rights Reserved.
