RaiderChip unveils its fully Hardware-Based Generative AI Accelerator: The GenAI NPU
Jan. 27, 2025 –
The new embedded accelerator boosts inference speed by 2.4x, combining complete privacy and autonomy with a groundbreaking innovation: it eliminates the need for CPUs.
Spain, January 27, 2025 -- RaiderChip has officially launched the GenAI NPU, a fully hardware-based accelerator that sets new standards for efficiency and scalability in Generative AI. The GenAI NPU retains the key features of its predecessor, the GenAI v1: offline operation and autonomous functionality.
Additionally, it becomes fully stand-alone by embedding all Large Language Model (LLM) operations directly into its hardware, thereby eliminating the need for a CPU.
RaiderChip GenAI NPU running the Llama 3.2 1B LLM model and streaming its output to a terminal
Thanks to its fully hardware-based design, the GenAI NPU achieves unprecedented levels of efficiency, unattainable by hybrid designs. According to RaiderChip CTO Victor Lopez: “By eliminating latency caused by hardware-software communication, we achieve superior performance while removing external dependencies, such as CPUs. The performance that you see is what you will get, regardless of the target electronic system where the accelerator is integrated. This improves energy efficiency and ensures fully predictable performance—advantages which make the GenAI NPU the ideal solution for embedded systems.”
Furthermore, the new design optimizes token generation speed per unit of available memory bandwidth, increasing it by a factor of 2.4. This enables the use of more cost-efficient memories such as DDR or LPDDR, achieving strong performance without resorting to expensive options like HBM. It also delivers equivalent results with fewer components, reducing size, cost, and energy consumption. These features allow for the development of more affordable and sustainable generative AI solutions, with faster return on investment and seamless integration into a variety of products tailored to different needs.
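To see why tokens-per-bandwidth matters, note that LLM decoding is typically memory-bound: generating each token requires streaming the full set of model weights from memory. A rough upper bound on decode speed is therefore memory bandwidth divided by model size, and any efficiency multiplier applies directly to that ratio. The sketch below illustrates this arithmetic with assumed numbers (Llama 3.2 1B in FP16, a single LPDDR channel); these figures are illustrative assumptions, not RaiderChip's published data.

```python
def tokens_per_second(bandwidth_gbs: float,
                      params_billions: float,
                      bytes_per_weight: float,
                      efficiency: float = 1.0) -> float:
    """Back-of-envelope decode rate for a memory-bound LLM.

    Assumes every generated token requires reading all weights once,
    so the ceiling is bandwidth / weight_bytes, scaled by an
    efficiency multiplier (e.g. 2.4 for the claimed optimization).
    """
    weight_bytes = params_billions * 1e9 * bytes_per_weight
    return efficiency * bandwidth_gbs * 1e9 / weight_bytes

# Illustrative assumptions: Llama 3.2 1B (~1.24B params) in FP16,
# one LPDDR4X channel at ~8.5 GB/s.
baseline = tokens_per_second(8.5, 1.24, 2)              # ~3.4 tok/s
optimized = tokens_per_second(8.5, 1.24, 2, 2.4)        # ~8.2 tok/s
```

The same arithmetic explains the HBM point in the paragraph above: a 2.4x better tokens-per-bandwidth ratio lets a cheaper, lower-bandwidth memory reach the decode rate that would otherwise require a more expensive memory technology.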
With this innovation, RaiderChip strengthens its strategy of offering optimized solutions based on affordable hardware, designed to bring generative AI to the Edge. These solutions ensure complete privacy and security for applications thanks to their ability to operate entirely offline and on-premises, while eliminating dependence on the cloud and recurring monthly subscriptions.