


As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as video and voice recognition. The Deep Neural Networks (DNNs) used in these applications require a vast number of Multiply-Accumulate (MAC) operations, each drawing on pre-trained weight values. These weights must be kept in local storage for fast access, but this huge amount of data cannot fit into the on-board memory of a stand-alone digital edge processor.
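To make the scale of the problem concrete, the sketch below estimates the MAC count and weight-storage footprint of a small fully connected network. The layer sizes are illustrative assumptions, not from any specific model; even this modest network needs millions of weights.

```python
# Back-of-the-envelope sizing for fully connected DNN layers:
# each output needs one MAC per input, and each weight must be stored.
# Layer sizes below are illustrative, not taken from any specific model.

def layer_cost(n_in: int, n_out: int, bytes_per_weight: int = 1):
    """Return (MAC operations, weight-storage bytes) for one dense layer."""
    macs = n_in * n_out  # one multiply-accumulate per weight
    storage = n_in * n_out * bytes_per_weight
    return macs, storage

# A modest 3-layer network already needs ~1.9M weights.
layers = [(784, 1024), (1024, 1024), (1024, 10)]
total_macs = sum(layer_cost(i, o)[0] for i, o in layers)
total_bytes = sum(layer_cost(i, o)[1] for i, o in layers)
print(f"MACs per inference: {total_macs:,}")
print(f"Weight storage: {total_bytes / 1e6:.2f} MB (at 1 byte/weight)")
```

A model with 50M weights, as mentioned below, is more than an order of magnitude larger still, which is why weights typically spill into off-chip DRAM on a stand-alone digital processor.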

Based on SuperFlash® technology and optimized to perform Vector-Matrix Multiplication (VMM) for neural-network inference, our memBrain™ neuromorphic memory product improves the system-level implementation of VMM through an analog compute-in-memory approach, enhancing AI inference at the edge. Current neural-network models may require 50M or more weights for processing. The memBrain neuromorphic memory product stores synaptic weights in its floating-gate cells, delivering significant system-latency improvements by eliminating the bus latency of fetching weights from off-chip DRAM. Compared with traditional digital DSP and SRAM/DRAM-based approaches, it delivers a 10- to 20-fold power reduction and significantly lower cost, with improved inference frame latency.
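The analog compute-in-memory idea can be sketched numerically: stored weights act like conductances G, the input vector like voltages V, and each output current is the sum of G·V products down a column (Kirchhoff's current law), which is exactly a vector-matrix multiplication. The array size and noise level below are illustrative assumptions, not memBrain specifications.

```python
import numpy as np

# Conceptual model of analog VMM: weights behave as conductances G,
# inputs as voltages V, and each column's output current is
# I[j] = sum_i G[i, j] * V[i] -- a matrix product computed in one step
# inside the memory array instead of by sequential digital MACs.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(64, 16))  # weight "conductances" (assumed size)
V = rng.uniform(0.0, 1.0, size=64)        # input "voltages"

I_ideal = V @ G  # the VMM a digital MAC array would compute serially

# An analog array reads out the same column sums in parallel,
# with some device-level noise (magnitude here is an assumption).
I_analog = I_ideal + rng.normal(0.0, 0.01, size=16)
print("outputs:", I_analog.round(3))
```

The design point this illustrates: because the multiply and the accumulate both happen in the memory array itself, no weight ever crosses a system bus during inference, which is the source of the latency and power savings claimed above.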


© 2024 Design And Reuse
