5 "Deep Learning" SoCs

1
CDNN Deep Learning Compiler
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted at mass-market embedded devices, CDNN combines a broad range of network optimizations, advanced quantization algorithms, data-flow management, and fully optimized CNN and RNN compute libraries into a holistic solution that lets cloud-trained AI models be deployed on edge devices for inference processing.
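Quantization is central to this kind of edge deployment flow: a cloud-trained float32 model is mapped to low-precision integers before it is compiled for the target. The snippet below is a minimal, generic sketch of symmetric post-training int8 quantization in NumPy; it is illustrative only, the function names are hypothetical, and it does not use CEVA's CDNN toolchain or APIs.

```python
# Illustrative sketch of symmetric per-tensor int8 quantization.
# Not CEVA CDNN code; names and flow are hypothetical.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 with a single symmetric scale."""
    scale = np.max(np.abs(weights)) / 127.0                # one scale per tensor
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values to check quantization error."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(64, 64).astype(np.float32)         # stand-in for a trained layer
    q, scale = quantize_int8(w)
    err = np.abs(w - dequantize(q, scale)).max()
    print(f"scale={scale:.6f}, max abs quantization error={err:.6f}")
```

In a real toolchain the per-tensor (or per-channel) scales would be derived from calibration data and baked into the compiled runtime alongside the optimized kernels.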

2
High-efficiency deep learning accelerator for edge and end-point inference
AndesAIRE™ AnDLA™ I350 is a deep learning accelerator (DLA) designed to deliver efficient, cost-sensitive AI solutions for edge and end-point inference. It supports popular deep learning...

3
Akida Neuromorphic IP
The Akida Neuromorphic IP offers unsurpassed performance per watt. The flexible Neural Processing Cores (NPCs) that form the Akida Neuron Fabric can be configured to perform co...

4
AGICIP AIM - AI Nature Memory
AI Nature Memory (Artificial Intelligence Nature Memory) is the basic memory block for an AI cognitive core. The AI field draws on computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other disciplines, yet memory remains one of its key building blocks.

5
Goya Deep learning inference processor
Habana Labs Goya™ is the industry's first commercially available deep learning inference processor product line, designed specifically to deliver superior performance, power efficiency, and cost savings.

