
nearbAI - IP cores for ultra-low power AI-enabled devices


Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler. It provides immediate visual and spatial feedback based on sensory inputs, which is a necessity for live augmentation of the human senses.

  • Optimized neural network inferencing for visual, spatial and other applications

  • Unparalleled flexibility: customized & optimized for the customer's use case

  • Produces an NPU IP core optimized for the customer's use case: trading off power, area, latency and memories

  • Minimized development & integration time

Ideal for battery-powered mobile, XR and IoT devices

Why nearbAI?
Highly computationally efficient and flexible NPUs

  • Enable lightweight devices with long battery life ... with ultra-low power, run heavily optimized AI-based functions locally

  • Enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses

  • Enable smart and flexible capabilities ... fill the gap between Swiss-army-knife XR / AI mobile processor chips and limited-capability edge IoT / AI chips

Let's do a custom benchmark together.
Provide us with your use case:

- Quantized or unquantized NN model(s):
ONNX, TensorFlow (Lite), PyTorch, or Keras

- Constraints:
Average power & energy per inference, silicon area, latency, memories, frame rate, image resolution, foundry + technology node
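As an illustration of how the constraints above interact, here is a minimal sketch relating average power, inference latency, energy per inference, and sustainable frame rate. All numbers are hypothetical placeholders for a benchmark discussion, not nearbAI specifications.

```python
# Illustrative trade-off arithmetic for the benchmark constraints above.
# The 20 mW / 5 ms figures are hypothetical, not nearbAI specifications.

def energy_per_inference_mj(avg_power_mw: float, latency_ms: float) -> float:
    """Energy (mJ) = average power (mW) x inference time (s)."""
    return avg_power_mw * (latency_ms / 1000.0)

def max_frame_rate_fps(latency_ms: float) -> float:
    """Upper bound on sustained frame rate if inferences run back-to-back."""
    return 1000.0 / latency_ms

# Example: a 20 mW power budget with a 5 ms per-frame latency target
energy = energy_per_inference_mj(avg_power_mw=20.0, latency_ms=5.0)
fps = max_frame_rate_fps(latency_ms=5.0)
print(f"{energy:.2f} mJ per inference, up to {fps:.0f} fps")
# -> 0.10 mJ per inference, up to 200 fps
```

Numbers like these, together with the model files and silicon constraints, are what let a custom benchmark pin down the power / area / latency / memories trade-off for a specific use case.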


© 2024 Design And Reuse

All Rights Reserved.
