Dnotitia and HyperAccel Partner to Develop RAG-Optimized AI Inference System
- Dnotitia and HyperAccel partner to integrate their AI semiconductor chips, delivering the world's first AI inference solution specifically optimized for Retrieval-Augmented Generation (RAG)
- This cutting-edge system is expected to significantly enhance AI search accuracy and energy efficiency, transforming AI operations through advanced RAG optimization
dnotitia.com, Mar. 20, 2025 –
Dnotitia, Inc. (Dnotitia), a startup specializing in integrated artificial intelligence (AI) and semiconductor solutions, today announced a strategic partnership with HyperAccel, a fabless semiconductor startup focused on AI acceleration, to jointly develop an AI inference system optimized for Retrieval-Augmented Generation (RAG). The collaboration will integrate Dnotitia’s Vector Data Processing Unit (VDPU) chip with HyperAccel’s Large Language Model (LLM) accelerator chip, the LLM Processing Unit (LPU), into a unified system.
As data volumes expand and data modalities diversify, data retrieval is becoming increasingly crucial to AI services, driving demand for faster and more efficient retrieval. Traditional systems rely heavily on software-based retrieval and handle LLM-based generative AI processing separately, leading to slower response times and higher power consumption. Dnotitia’s VDPU enables real-time retrieval and use of large-scale multimodal data, while HyperAccel’s LPU maximizes AI model inference performance. By combining the two technologies and optimizing at the system level, the companies aim to create the world’s first RAG-specialized AI system capable of handling retrieval and inference simultaneously.
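For readers unfamiliar with RAG, the sketch below illustrates, in plain Python, the two stages the announcement refers to: a retrieval step that ranks documents by vector similarity to the query, and a generation step that answers the query using the retrieved context. It is a minimal illustration of the general RAG pattern only; the `embed`, `retrieve`, and `generate` functions are hypothetical placeholders and do not reflect Dnotitia’s VDPU or HyperAccel’s LPU interfaces.

```python
from math import sqrt

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for an embedding model: hash characters into a
    # tiny fixed-size vector and normalize it. Real systems use trained
    # (often multimodal) embedding models.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # With normalized vectors, the dot product equals cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Vector (semantic) search: rank documents by similarity to the query.
    # This is the stage a vector-search accelerator would speed up.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    # Placeholder for LLM inference: a real system would prompt a language
    # model with the query plus the retrieved context.
    return f"Answer to '{query}' grounded in: {context}"

if __name__ == "__main__":
    corpus = [
        "Vector databases index embeddings for semantic search.",
        "LLM accelerators speed up token generation.",
        "RAG feeds retrieved documents to the model as context.",
    ]
    question = "How does RAG use retrieved documents?"
    print(generate(question, retrieve(question, corpus)))
```

In the announced system, the retrieval stage would be offloaded to the VDPU and the generation stage to the LPU, rather than both running in software as in this sketch.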
“As LLM services become widespread, the demand for data retrieval is also rapidly increasing,” said Moo-Kyoung (MK) Chung, CEO of Dnotitia. “Through this collaboration, we will introduce a new kind of AI system that not only optimizes the inference of AI models but also streamlines data retrieval. By giving AI long-term memory, we can gain a deeper understanding of user data and provide more precise, customized services. This will reduce hallucinations and serve as a key milestone toward more specialized and personalized AI services.”
“Addressing computational bottlenecks while simultaneously improving performance and efficiency is the core challenge in AI semiconductor innovation,” said Joo-Young Kim, CEO of HyperAccel. “Our partnership seeks to introduce an optimized AI system tailored specifically for RAG and LLM applications, setting a critical milestone that will revolutionize how AI systems operate.”
About Dnotitia, Inc.
Dnotitia is an AI and semiconductor company that creates innovative value through the convergence of artificial intelligence (AI) and data, providing high-performance, low-cost LLM solutions. Leveraging its Vector Data Processing Unit (VDPU), the world’s first of its kind, the company offers Seahorse, a high-performance vector database that supports Retrieval-Augmented Generation (RAG), a key technology for generative AI. Dnotitia also offers Mnemos, a personal/edge LLM device based on its proprietary LLM foundation model.
- Seahorse indexes various types of multimodal data, such as text, images, and videos, into vector form, providing semantic search that extracts information reflecting the meaning and context of user queries. Seahorse can be used not only in RAG systems but also to implement semantic search across all digital data stored globally.
- Mnemos is a solution designed to address the high costs and resource consumption of AI. It is a compact edge device capable of running high-performance LLMs without the need for a data center. Leveraging Dnotitia’s RAG and LLM optimization technology, Mnemos delivers high-performance LLM services using minimal GPU/NPU resources.
Founded in 2023, Dnotitia has grown to a team of over 90 employees in a short period of time and has established strategic partnerships across various industries. By integrating specialized semiconductors and optimized algorithms, Dnotitia aims to usher in a new era of AI. By fusing data and AI to develop AI with long-term memory, Dnotitia envisions low-cost AGI (Artificial General Intelligence) accessible to everyone, creating a future where the benefits of AI can be enjoyed by all.
For more information about Dnotitia, please visit www.dnotitia.com.