
High-bandwidth memory (HBM) options for demanding compute

While unbeatable in terms of performance, HBM is expensive and power hungry for many applications. We review the many different memory options available for demanding compute applications.

www.embedded.com, Jan. 15, 2024 –

Explosive growth of generative artificial intelligence (AI) applications in recent quarters has driven demand for AI servers and sent demand for AI processors skyrocketing. Most of these processors – including compute GPUs from AMD and Nvidia, specialized processors such as Intel's Gaudi and AWS's Inferentia and Trainium, and FPGAs – use high-bandwidth memory (HBM), as it provides the highest memory bandwidth available today. As a result, memory makers Micron, Samsung, and SK Hynix were set to double their HBM bit output in 2023 and increase it further in 2024, according to TrendForce, a commitment that is shaping up to be a challenge for the industry.

But there are plenty of AI processors, particularly those designed to run inference workloads, as well as HPC processors, that rely on GDDR6/GDDR6X or even LPDDR5/LPDDR5X instead. Furthermore, general-purpose CPUs, which can also run AI workloads (using dedicated instruction-set extensions), will continue to use commodity memory, which is why in the coming years we are going to see MCRDIMM and MRDIMM modules that push module capacity and bandwidth to new levels. But HBM is set to remain the bandwidth king.
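To put those options in rough perspective, the short Python sketch below estimates theoretical peak bandwidth per device or stack as pin data rate times interface width divided by eight. The data rates and bus widths used here are typical publicly quoted figures for each memory type, not numbers taken from the article, and shipping products vary.

# Rough, back-of-the-envelope comparison of theoretical peak bandwidth
# per device or stack for the memory types mentioned above. The data
# rates and bus widths below are typical publicly quoted figures, not
# values from the article, and real products vary.

def peak_bandwidth_gb_s(pin_rate_gbit_s: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s: pin data rate x bus width / 8."""
    return pin_rate_gbit_s * bus_width_bits / 8

memories = {
    # name: (per-pin data rate in Gbit/s, interface width in bits)
    "HBM3 (one 1024-bit stack)":      (6.4, 1024),
    "GDDR6X (one 32-bit device)":     (21.0, 32),
    "LPDDR5X (one 64-bit package)":   (8.533, 64),
    "DDR5-5600 (one 64-bit channel)": (5.6, 64),
}

for name, (rate, width) in memories.items():
    print(f"{name:32s} ~{peak_bandwidth_gb_s(rate, width):7.1f} GB/s")

Run as-is, the sketch shows a single HBM3 stack delivering roughly ten times the bandwidth of a single GDDR6X device and well over ten times that of a DDR5 channel, which is the arithmetic behind HBM's hold on the most bandwidth-hungry processors, even though designs can and do gang up multiple GDDR devices or DIMM channels to narrow the gap.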



