www.design-reuse-embedded.com
32 "AI and Machine Learning" Solutions

1
CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors.
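Deep-learning compilers such as CDNN typically apply graph-level optimizations before generating runtime code. As a generic, hypothetical sketch (not CDNN's actual API), one classic optimization is folding a batch-normalization layer into the preceding convolution's weights and bias, so the runtime executes one fused layer instead of two:

```python
import math

def fold_batchnorm(weights, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding convolution's
    per-output-channel weights and bias.

    weights: list of per-output-channel weight lists
    bias, gamma, beta, mean, var: per-output-channel parameters
    """
    fused_w, fused_b = [], []
    for c in range(len(weights)):
        # y = gamma * (conv - mean) / sqrt(var + eps) + beta
        scale = gamma[c] / math.sqrt(var[c] + eps)
        fused_w.append([w * scale for w in weights[c]])
        fused_b.append((bias[c] - mean[c]) * scale + beta[c])
    return fused_w, fused_b

# One output channel with identity-like BN (gamma=1, beta=0, mean=0, var=1):
# folding leaves the convolution's parameters unchanged.
w, b = fold_batchnorm([[1.0, 2.0]], [0.5], gamma=[1.0], beta=[0.0],
                      mean=[0.0], var=[1.0], eps=0.0)
```

The benefit at inference time is that the normalization costs nothing: it has been absorbed into constants the hardware would load anyway.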

2
CEVA-BX1 Multipurpose DSP/Controller
The CEVA-BX architecture delivers excellent all-round performance for a new generation of smart devices, providing an alternative to special-purpose DSPs and to MCUs with DSP co-processors that cannot handle the diverse algorithm needs of today's applications.

3
CEVA-BX2 Multipurpose DSP/Controller
CEVA-BX2 is a multipurpose hybrid DSP and controller, designed to meet the low-power requirements of DSP kernels together with the high-level programming and compact code-size requirements of a large control code base.

4
NeuPro Family of AI Processors
Dedicated low-power AI processor family for Deep Learning at the edge. It provides self-contained, specialized AI processors that scale in performance for a broad range of end markets, including IoT, smartphones, surveillance, automotive, robotics, medical, and industrial.

5
Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
Efficient and Versatile Computer Vision, Image, Voice, Natural Language, Neural Network Processor

6
Tensilica DNA Processor Family for On-Device AI
Built for AI processing with industry-leading performance and power efficiency
Enabling On-Device AI Across a Wide Range of Inference from 0.5 to 100s of TMACs
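To put a TMAC (10^12 multiply-accumulates per second) budget in context, here is a hedged back-of-the-envelope calculation; the layer shape and frame rate are illustrative assumptions, not Tensilica figures. The MAC count of one stride-1 convolution layer is H × W × Cin × Cout × K × K:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate operations for one KxK convolution layer
    producing an HxW output feature map (stride 1, no grouping)."""
    return h * w * c_in * c_out * k * k

# A ResNet-style 3x3 layer: 56x56 output, 64 -> 64 channels.
macs = conv_macs(56, 56, 64, 64, 3)

# Sustained TMAC/s needed to run just this layer at 30 frames/s.
tmacs_per_s = macs * 30 / 1e12
```

A full network stacks dozens of such layers, which is how edge workloads climb from fractions of a TMAC into the tens or hundreds.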

7
Tensilica Vision DSPs for Imaging, Computer Vision, and Neural Networks
The Cadence® Tensilica® Vision digital signal processor (DSP) family offers a much-needed breakthrough in terms of energy efficiency and performance that enables applications never before possible in a programmable device.

8
WhisPro Speech recognition software package
WhisPro is a neural-network-based speech recognition software package that allows customers to add voice activation to IoT devices. WhisPro™ targets the rapidly growing use of voice as a primary human interface for intelligent cloud-based services and edge devices.

9
InferX X1 Edge Inference Co-Processor
The InferX X1 Edge Inference Co-Processor is optimized for what the edge needs: large models at batch=1.

10
NMAX512 : Neural Inferencing Tile for 1 to >100 TOPS
NMAX has a unique new architecture that loads weights rapidly compared to existing solutions.

11
PowerVR AX2185
Optimised for performance efficiency, the PowerVR AX2185 is the highest-performing neural network accelerator per mm² on the market.

12
PowerVR Series2NX Neural Network Accelerator (NNA)
The new PowerVR Series2NX Neural Network Accelerator (NNA) delivers high-performance computation of neural networks at very low power consumption in minimal silicon area.

13
PowerVR Series3NX
The highest performance neural network inference accelerator

14
Smart Data Acceleration
The Rambus Smart Data Acceleration (SDA) research program is focused on tackling some of the major issues facing data centers and servers in the age of Big Data. The SDA Research Program has been exploring ...

15
TritonAI 64 Platform for AI-enabled Edge SoCs
Wave Computing's customizable, AI-enabled platform merges a triad of powerful technologies to efficiently address use case requirements for inferencing at the edge.

16
FlexNoC AI Package
Complements FlexNoC interconnect IP, adding technologies for artificial intelligence (AI) and machine learning (ML) chip design.

17
neuASIC 7nm Platform for Machine Learning ASIC Design
Through customized, targeted IP offered in 7nm FinFET technology and a modular design methodology, the neuASIC platform removes the restrictions imposed by changing AI algorithms.

18
v-CNNDesigner tool
The new v-CNNDesigner tool automatically translates trained neural networks into optimized implementations that run efficiently on the v-MP6000UDX architecture.

19
v-MP6000UDX processor
Deep learning has quickly become a must-have technology to bring new smart sensing and intelligent analysis capabilities to all of our electronics. Whether it's self-driving cars that need to understand ...

20
Accelerator for Convolutional Neural Networks
Gyrfalcon Technologies (GTI) offers silicon-proven acceleration IP for Convolutional Neural Networks used in image classification, object detection, natural language processing, and other artificial intelligence ...

21
AGICIP AIM - AI Nature Memory
AI Nature Memory (Artificial Intelligence Nature Memory) is the basic memory block for an AI cognitive core. While the AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other disciplines, memory remains one of its key factors.

22
AI Accelerator IP- ENLIGHT
OPENEDGES™ Artificial Intelligence Compute Engine ENLIGHT™ is a deep learning accelerator IP technology that delivers unrivaled compute density and energy efficiency. ENLIGHT™ NPU IP ...

23
AXDieIO
The AXDieIO IP utilizes the silicon-proven AXLinkIO transceiver architecture for die-to-die, in-package channel links.

24
Convolutional Neural Network Accelerator ASIC/FPGA core - iBexNeuralCAx16
iBexNeuralCAx16 is a programmable convolutional accelerator core for ASIC and FPGA implementations.

25
Lightspeeur 2803S Neural accelerator
Lightspeeur® 2803 is the latest generation AI CNN accelerator for applications requiring high performance audio and video processing for advanced edge, desktop and data center deployments.

26
Akida Neuromorphic IP
The Akida Neuromorphic IP offers unsurpassed performance on a performance-per-watt basis. The flexible Neural Processing Cores (NPCs) which form the Akida Neuron Fabric can be configured to perform co...

27
Goya Deep learning inference processor
Habana Labs Goya™ is the industry's first commercially available deep learning inference processor product-line designed specifically to deliver superior performance, power efficiency and cost savings.

28
Lattice Compact CNN Accelerator IP Core
The Lattice Semiconductor Compact CNN Accelerator IP Core is a calculation engine for Deep Neural Networks with fixed-point or binarized weights. It calculates many layers of neural networks, including ...
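As a generic illustration of what binarized-weight computation means (a hypothetical sketch, not Lattice's implementation), a dot product with weights restricted to ±1 reduces to additions and subtractions, so the hardware needs no multipliers:

```python
def binarized_dot(activations, sign_bits):
    """Dot product where each weight is +1 (bit=1) or -1 (bit=0).
    The multiply disappears: add the activation when the weight
    bit is set, subtract it otherwise."""
    acc = 0
    for a, bit in zip(activations, sign_bits):
        acc += a if bit else -a
    return acc

# Weights [+1, -1, +1] applied to activations [3, 5, 2]: 3 - 5 + 2 = 0.
out = binarized_dot([3, 5, 2], [1, 0, 1])
```

In hardware this maps naturally onto XNOR and popcount logic, which is why binarized networks fit in small FPGA fabrics.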

29
Machine Learning / On-device Artificial Intelligence
Neural network algorithms for always-on, low-power face detection using a low-resolution image sensor

30
NetSpeed Orion AI
The first interconnect solution specifically for artificial Intelligence applications
Orion AI delivers extreme performance and ultimate efficiency for next-gen AI SoCs

31
QuickAI
The new QuickAI platform provides an all-inclusive low power solution and development environment to economically incorporate the benefits of AI in endpoint applications.

32
Renesas embedded Artificial Intelligence ( e-AI )
To meet the demands of Industry 4.0/IIoT, the production system must provide advanced production control and maintenance. To that end, precise sampling/collecting of data at the endpoint is essential ...


© 2018 Design And Reuse

All Rights Reserved.
