Why UCIe is Key to Connectivity for Next-Gen AI Chiplets
By Letizia Giuliano, VP of IP Products, Alphawave Semi
EETimes (February 6, 2025)
Deploying AI at scale presents enormous challenges, with workloads demanding massive compute power and high-speed communication bandwidth.
Large AI clusters require substantial networking infrastructure to handle the data flow between processors, memory, and storage; without it, even the most advanced models can be bottlenecked. Data from Meta suggests that roughly 40% of the time data spends in a data center is wasted sitting in the network rather than being processed.
In short, connectivity is the choke point, and AI requires dedicated hardware with the maximum possible communication bandwidth.
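To see why that 40% figure matters so much, consider a back-of-envelope, Amdahl-style calculation (the stall fraction is the Meta figure cited above; the interconnect speedup is an illustrative assumption, not from the article):

```python
# Back-of-envelope: how network stall time caps effective compute utilization.
# The 40% stall fraction is the Meta figure cited above; the 2x interconnect
# speedup is an illustrative assumption.

def effective_utilization(stall_fraction: float, network_speedup: float) -> float:
    """Fraction of wall-clock time spent on useful compute when the
    networking portion of each step is reduced by `network_speedup`."""
    compute = 1.0 - stall_fraction
    network = stall_fraction / network_speedup
    return compute / (compute + network)

baseline = effective_utilization(0.40, 1.0)  # no improvement
doubled = effective_utilization(0.40, 2.0)   # interconnect twice as fast

print(f"baseline utilization: {baseline:.0%}")          # 60%
print(f"with 2x faster interconnect: {doubled:.0%}")    # 75%
```

In other words, with compute held fixed, simply doubling interconnect performance lifts the useful-work fraction from 60% to 75%, which is why bandwidth, not just FLOPS, dominates the economics of large clusters.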
AI's large training workloads create high-bandwidth traffic on the back-end network. This traffic generally flows in regular, predictable patterns and does not require the packet-by-packet handling needed on the front-end network. When the system is running as intended, these links operate at sustained, very high utilization.
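To see why back-end traffic is both enormous and regular, consider the gradient all-reduce that repeats every training step in data-parallel training. A minimal sketch follows, using the standard ring all-reduce cost of 2(N-1)/N times the buffer size per participant; the model size, gradient precision, and worker count are illustrative assumptions, not figures from the article:

```python
# Rough sizing of the gradient all-reduce that repeats every training step.
# Assumed (illustrative, not from the article): 70B-parameter model,
# fp16 gradients (2 bytes each), 1024 data-parallel workers.

def ring_allreduce_bytes(num_params: float, bytes_per_param: int, workers: int) -> float:
    """Bytes each worker sends (and receives) in one ring all-reduce:
    2 * (N - 1) / N * buffer_size, the standard ring-algorithm cost."""
    buffer = num_params * bytes_per_param
    return 2 * (workers - 1) / workers * buffer

per_step = ring_allreduce_bytes(70e9, 2, 1024)
print(f"~{per_step / 1e9:.0f} GB sent per worker, every step")  # ~280 GB

# The transfer is identical in size and peer pattern on every step, which is
# why back-end links see regular, sustained bulk flows rather than the bursty
# per-packet traffic characteristic of the front-end network.
```

Because the same transfer recurs step after step between the same peers, back-end links are sized and scheduled for sustained bulk throughput rather than flexible per-packet processing.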