
AI expands HBM footprint

Jan. 24, 2022

By Gary Hilson, EETimes (January 20, 2022)

High bandwidth memory (HBM) is becoming more mainstream. With the latest iteration's specifications approved, vendors across the ecosystem are gearing up to ensure it can be implemented so that customers can begin to design, test, and deploy systems.

The massive growth and diversity of artificial intelligence (AI) workloads means HBM is no longer a niche technology. It has even become less expensive, though it remains a premium memory that requires expertise to implement. As a memory interface for 3D-stacked DRAM, HBM achieves higher bandwidth at lower power in a form factor significantly smaller than DDR4 or GDDR5, stacking as many as eight DRAM dies on an optional base die that can include buffer circuitry and test logic.
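A rough back-of-the-envelope comparison shows why the wide stacked interface pays off. The sketch below computes peak per-device bandwidth as bus width times per-pin rate; the widths and per-pin rates are representative published figures, not numbers from this article.

```python
# Rough peak-bandwidth comparison: bandwidth (GB/s) = bus width (bits) x per-pin rate (Gbps) / 8.
# Interface widths and per-pin rates are representative figures, not taken from the article.
memories = {
    "DDR4-3200 (64-bit channel)": (64, 3.2),
    "GDDR5 (32-bit device)": (32, 8.0),
    "HBM2 (1024-bit stack)": (1024, 3.2),
}

for name, (width_bits, gbps_per_pin) in memories.items():
    peak_gbs = width_bits * gbps_per_pin / 8
    print(f"{name}: {peak_gbs:.1f} GB/s peak")
```

Even at a modest per-pin rate, the 1024-bit stack interface yields an order of magnitude more bandwidth per device than either DDR4 or GDDR5.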




Like all memory, HBM advances in performance and power consumption with every iteration. A key change in moving from HBM2 to HBM3 will be a 100% improvement in the data transfer rate, from 3.2/3.6 Gbps to a maximum of 6.4 Gbps per pin, said Jinhyun Kim, principal engineer with Samsung Electronics' memory product planning team.
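Assuming the standard 1024-bit interface per stack (a detail the article does not spell out), the doubled per-pin rate translates directly into doubled per-stack bandwidth. A minimal sketch of the arithmetic:

```python
# Peak per-stack bandwidth = 1024 pins x per-pin rate (Gbps) / 8 bits per byte.
PINS_PER_STACK = 1024  # standard HBM interface width per stack

def stack_bandwidth_gbs(gbps_per_pin: float) -> float:
    """Peak per-stack bandwidth in GB/s for a given per-pin data rate."""
    return PINS_PER_STACK * gbps_per_pin / 8

print(f"HBM2E @ 3.6 Gbps/pin: {stack_bandwidth_gbs(3.6):.1f} GB/s")  # ~460.8 GB/s
print(f"HBM3  @ 6.4 Gbps/pin: {stack_bandwidth_gbs(6.4):.1f} GB/s")  # ~819.2 GB/s
```

This is where the widely cited figure of roughly 819 GB/s per HBM3 stack comes from.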

A second fundamental change is a 50% increase in maximum capacity, from 16 GB (8-high stack) to 24 GB (12-high stack). Finally, HBM3 standardizes on-die error correction code (ECC) industry-wide, which improves system reliability, Kim said. "This will be critical for the next generation of artificial intelligence and machine learning systems."
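The capacity figures are consistent with keeping the same per-die density and simply stacking more dies; assuming 16 Gb (2 GB) DRAM dies, which the quoted totals imply but the article does not state, the arithmetic checks out:

```python
# Capacity arithmetic: same per-die density, taller stack.
DIE_CAPACITY_GB = 2  # assumes 16 Gb (2 GB) DRAM dies, consistent with the quoted totals

for label, stack_height in [("8-high (HBM2E)", 8), ("12-high (HBM3)", 12)]:
    total = stack_height * DIE_CAPACITY_GB
    print(f"{label}: {stack_height} dies x {DIE_CAPACITY_GB} GB = {total} GB per stack")
```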

