IP cores for ultra-low power AI-enabled devices

Overview

Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler. It provides immediate visual and spatial feedback based on sensory inputs, which is a necessity for live augmentation of the human senses.

  • Optimized neural network inferencing for visual, spatial and other applications
  • Unparalleled flexibility: customized & optimized for the customer’s use case
  • Produces the optimal NPU IP core for the customer’s use case, trading off power, area, latency and memories
  • Minimized development & integration time

Ideal for battery-powered mobile, XR and IoT devices

Why nearbAI?

Highly computationally efficient and flexible NPUs
  • Enable lightweight devices with long battery life ... with ultra-low power, run heavily optimized AI-based functions locally
  • Enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses
  • Enable smart and flexible capabilities ... fill the gap between “swiss-army knife” XR / AI mobile processor chips and limited-capability edge IoT / AI chips
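
The sensors-to-displays latency point above can be made concrete with a simple budget check. The sketch below is illustrative only: the stage latencies are hypothetical, and the ~20 ms target is a commonly cited XR motion-to-photon guideline, not a nearbAI specification.

```python
# Hypothetical sensors-to-displays latency budget check for an XR pipeline.
# Stage latencies are placeholders, not nearbAI measurements.

def fits_latency_budget(stage_latencies_ms, budget_ms=20.0):
    """Return (total_ms, fits) for a pipeline latency budget."""
    total = sum(stage_latencies_ms.values())
    return total, total <= budget_ms

pipeline = {
    "sensor_capture": 2.0,
    "isp": 3.0,
    "npu_inference": 8.0,   # the stage an NPU core accelerates
    "render": 4.0,
    "display_scanout": 3.0,
}

total_ms, fits = fits_latency_budget(pipeline)
print(f"total: {total_ms:.1f} ms, fits 20 ms budget: {fits}")
```

If the total exceeds the budget, shaving inference latency is typically the largest single lever, which is why it appears as an explicit optimization target.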

Let's do a custom benchmark together:
Provide us with your use case:

• Quantized or unquantized NN model(s):
ONNX, TensorFlow (Lite), PyTorch, or Keras

• Constraints:
Average power & energy per inference, silicon area, latency, memories, frame rate, image resolution, foundry + technology node
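
The power and latency constraints above relate directly to energy per inference, which in turn bounds battery life. A minimal sketch of that arithmetic (all figures hypothetical placeholders, not nearbAI specs):

```python
# Energy per inference = average power during inference x inference latency.
# Convenient unit identity: mW * ms = uJ.
# All numbers below are hypothetical, for illustration only.

def energy_per_inference_uj(avg_power_mw, latency_ms):
    """Energy in microjoules, from average power (mW) and latency (ms)."""
    return avg_power_mw * latency_ms

def battery_inferences(battery_mah, voltage_v, energy_uj):
    """Rough count of inferences a battery could supply (NPU-only, ideal)."""
    battery_j = battery_mah * 3.6 * voltage_v  # mAh -> coulombs (x3.6) -> joules
    return battery_j / (energy_uj * 1e-6)

e = energy_per_inference_uj(avg_power_mw=5.0, latency_ms=10.0)
print(f"{e:.0f} uJ per inference")
print(f"~{battery_inferences(100, 3.7, e):.2e} inferences on a 100 mAh cell")
```

This ignores everything outside the NPU (sensors, display, radios), so it is an upper bound used only to compare candidate power/latency trade-offs.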

Tech Specs

Part Number: nearbAI
Short Description: IP cores for ultra-low power AI-enabled devices
Provider:
Target Process Node: Independent