Cadence Accelerates On-Device and Edge AI Performance and Efficiency with New Neo NPU IP and NeuroWeave SDK for Silicon Design
Highlights:
- Neo NPUs efficiently offload AI/ML processing from any host processor and scale from 8 GOPS to 80 TOPS in a single core, extending to hundreds of TOPS in multicore configurations
- AI IP delivers industry-leading AI performance and energy efficiency for optimal PPA and cost points
- Targets a broad range of on-device and edge applications, including intelligent sensors, IoT, audio/vision, hearables/wearables, mobile vision/voice AI, AR/VR and ADAS
- Comprehensive, common NeuroWeave SDK addresses all target markets across a broad array of Cadence AI and Tensilica IP solutions
Source: www.cadence.com/en_US/home.html

SAN JOSE, Calif., Sept. 13, 2023 – Cadence Design Systems, Inc. (Nasdaq: CDNS) today unveiled its next-generation AI IP and software tools to address the escalating demand for on-device and edge AI processing. The new, highly scalable Cadence® Neo™ Neural Processing Units (NPUs) deliver a wide range of AI performance in a low-energy footprint, bringing new levels of performance and efficiency to AI SoCs. Delivering up to 80 TOPS of performance in a single core, the Neo NPUs support both classic and new generative AI models and can offload AI/ML execution from any host processor – including application processors, general-purpose microcontrollers and DSPs – over a simple and scalable AMBA® AXI interconnect. Complementing the AI hardware, the new NeuroWeave™ Software Development Kit (SDK) provides developers with a "one-tool" AI software solution across Cadence AI and Tensilica® IP products for no-code AI development.
"While most of the recent attention on AI has been cloud-focused, there is an incredible range of new possibilities that both classic and generative AI can enable on the edge and within devices," said Bob O'Donnell, president and chief analyst at TECHnalysis Research. "From consumer to mobile and automotive to enterprise, we're embarking on a new era of naturally intuitive intelligent devices. For these to come to fruition, both chip designers and device makers need a flexible, scalable combination of hardware and software solutions that allows them to bring the magic of AI to a wide range of power requirements and compute performance levels, all while leveraging familiar tools. New chip architectures that are optimized to accelerate ML models, along with software tools that link seamlessly to popular AI development frameworks, are going to be incredibly important parts of this process."