Samsung boosts AI in-memory processing with CXL
Samsung has developed in-memory processing to boost the performance of AI systems in data centres using the latest interconnect standards.
eenewseurope.com, Oct. 25, 2022 –
The HBM-PIM (high bandwidth memory, processing in memory) chips are being used in AMD's Instinct MI100 AI accelerator. Samsung then built an HBM-PIM cluster of 96 MI100 cards, connected with 200 Gbit/s InfiniBand switches, and applied it to various large-scale AI and high-performance computing (HPC) workloads.
Compared with existing GPU accelerators, tests showed that adding HBM-PIM more than doubled performance on average and cut energy consumption by more than 50%.
For the latest AI models, accuracy tends to correlate directly with model size, which points to a major hurdle. With existing memory solutions, computation over that much data can be bottlenecked when DRAM capacity and data-transfer bandwidth are not sufficient for hyperscale AI models.
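As a rough illustration of that hurdle, the sketch below estimates the weight footprint of a hypothetical hyperscale model and the time needed just to stream those weights from memory once. The parameter count, FP16 precision, 80 GB HBM capacity and 2 TB/s bandwidth are illustrative assumptions, not figures from the article.

```cpp
#include <iostream>

int main() {
    // Hypothetical hyperscale model (illustrative values, not from the article).
    const double params          = 100e9;   // 100 billion parameters
    const double bytes_per_param = 2.0;     // FP16 weights
    const double hbm_capacity_gb = 80.0;    // typical on-package HBM of one accelerator
    const double hbm_bw_gbs      = 2000.0;  // ~2 TB/s peak HBM bandwidth

    // Weight footprint vs. what fits next to a single accelerator.
    const double weights_gb = params * bytes_per_param / 1e9;   // 200 GB
    std::cout << "Weights: " << weights_gb << " GB vs "
              << hbm_capacity_gb << " GB on-package HBM\n";

    // Lower bound on the time to read the weights once, set purely by bandwidth.
    std::cout << "Time to stream weights once: "
              << weights_gb / hbm_bw_gbs << " s\n";              // 0.1 s
    return 0;
}
```

Under these assumptions the weights alone exceed a single accelerator's local memory, and every pass over them costs a bandwidth-bound 0.1 s regardless of compute throughput, which is the capacity-and-bandwidth bottleneck the paragraph describes.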
If a large-capacity language model proposed by Google is trained on a cluster of eight accelerators, using GPU accelerators equipped with HBM-PIM can save 2,100 GWh of energy per year and cut carbon emissions by 960 thousand tons.
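The carbon figure is consistent with a back-of-envelope conversion of the energy saving at a typical grid carbon intensity. The sketch below assumes roughly 0.46 kg CO2 per kWh, a value not stated in the article.

```cpp
#include <iostream>

int main() {
    // Assumed grid carbon intensity (~global average); not stated in the article.
    const double energy_saved_gwh = 2100.0;
    const double kg_co2_per_kwh   = 0.46;

    const double kwh  = energy_saved_gwh * 1e6;          // 1 GWh = 1,000,000 kWh
    const double tons = kwh * kg_co2_per_kwh / 1000.0;   // kg -> metric tons

    std::cout << "Estimated CO2 avoided: " << tons / 1000.0
              << " thousand tons per year\n";             // ~966 thousand tons
    return 0;
}
```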
With software integration, pairing commercially available GPUs with HBM-PIM can reduce the bottleneck caused by memory capacity and bandwidth limitations in Hyperscale AI data centres.
Samsung has developed software based on SYCL, an open standard that defines how programs can target accelerators such as GPUs. With this software, customers will be able to use PIM memory solutions in an integrated software environment. Codeplay, recently acquired by Intel, is a key developer of SYCL.
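To illustrate the programming model, the following is a minimal SYCL 2020 sketch of a memory-bound kernel dispatched to whatever accelerator the runtime selects. It uses only the standard SYCL API; Samsung's PIM-specific backend and any PIM device selection are not shown, as the article does not describe them.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    const size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Pick the default accelerator; a vendor SYCL backend could expose a
    // PIM-enabled device here without changing the kernel code below.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // Memory-bound elementwise kernel: the kind of operation that
            // benefits from executing close to the DRAM arrays.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffers go out of scope and synchronize results back to the host

    std::cout << "c[0] = " << c[0] << "\n";
    return 0;
}
```

The appeal of a standards-based layer is visible here: the kernel source does not name any device, so the same application could be retargeted to a PIM-capable backend by the runtime rather than by rewriting application code.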
"Codeplay is proud to have been deeply involved in defining the SYCL standard and playing a role in creating the first conformant product." said Charles Macfarlane, Chief Business Officer for at Codeplay Software, and the one in charge of working together on the SYCL standardization. "Our work with Samsung in simplifying software development via Samsung's PIM systems opens up a much greater ecosystem of tools for scientists, allowing them to focus on algorithm development rather than hardware-level details."