

NMAX is a new inference architecture that loads weights much faster than existing solutions.

Block Diagram


  • Modular: scales from 1 to >100 TOPS.
  • Scalable: doubling the silicon area doubles the throughput in TOPS (throughput is what matters).
  • Low latency: NMAX loads weights quickly, so performance at batch size 1 is usually as good as at large batch sizes; this is critical for edge applications.
  • Low cost: NMAX achieves 60-90% MAC utilization, whereas existing solutions are often below 25%, so NMAX delivers more throughput from less silicon area.
  • Low power: NMAX uses on-chip SRAM efficiently to generate high bandwidth, so little DRAM is needed; data-center-class performance is achievable with one LPDDR4 DRAM for ResNet-50 and two for YOLOv3.
  • Able to run any kind of neural network, or multiple networks at once.
  • Programmed using TensorFlow or Caffe.
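The utilization claim above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative only, not vendor data: the MAC count and clock frequency are hypothetical, while the utilization figures (60-90% vs. under 25%) are the ranges quoted in the list.

```python
def effective_tops(num_macs: int, freq_ghz: float, utilization: float) -> float:
    """Effective throughput in TOPS.

    Each MAC performs 2 operations per cycle (multiply + accumulate),
    so peak ops/s = 2 * num_macs * freq; divide by 1e12 for TOPS,
    then scale by the fraction of cycles the MACs do useful work.
    """
    peak_tops = 2 * num_macs * freq_ghz * 1e9 / 1e12
    return peak_tops * utilization

# Hypothetical array: 4096 MACs at 1 GHz (8.192 TOPS peak).
high_util = effective_tops(4096, 1.0, 0.75)  # mid-range of 60-90%
low_util = effective_tops(4096, 1.0, 0.25)   # ceiling of "<25%"

print(f"At 75% utilization: {high_util:.2f} TOPS")  # 6.14 TOPS
print(f"At 25% utilization: {low_util:.2f} TOPS")   # 2.05 TOPS
```

At the same silicon area and clock, the 75%-utilization design delivers roughly 3x the effective throughput of the 25% one; equivalently, it needs about a third of the MACs (and area) to hit the same TOPS target.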


© 2018 Design And Reuse

All Rights Reserved.
