
Nvidia's GTC 2026 reveals breakthroughs in HBM4 memory for next-gen AI hardware

A signed wafer and a 20% efficiency leap: Micron's HBM4 chips take center stage at Nvidia's landmark event. The race for AI memory supremacy just got fiercer.

[Image: close-up of a computer memory module]


Nvidia's GTC 2026 event has put memory technology in the spotlight, with major advances in HBM4 and HBM4e chips. The company's CEO, Jensen Huang, made high-profile visits to key suppliers, signing commemorative items and showcasing new partnerships. Among the standout announcements, Micron's early HBM4 deliveries and mass-production plans drew particular attention after earlier doubts about its role in Nvidia's supply chain.

On November 17, 2025, Micron delivered its first HBM4 chips to Nvidia; the milestone was marked by a wafer signed by both CEOs at GTC 2026. The company is now ramping up mass production of 36GB 12-layer stacks, offering pin speeds above 11 Gb/s and total per-stack bandwidth exceeding 2.8 TB/s. Larger 48GB 16-layer versions were also demonstrated, providing 33% more capacity. These chips, designed for Nvidia's Vera Rubin platforms, achieve over 20% better energy efficiency than HBM3E, with volume shipments expected by fiscal year 2028.
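The headline numbers are easy to sanity-check: assuming the 2048-bit per-stack interface defined by the JEDEC HBM4 standard and a pin speed of roughly 11 Gb/s, the quoted per-stack bandwidth and the 16-layer capacity gain both fall out of simple arithmetic (a sketch, not official vendor math):

```python
# Sanity-check the HBM4 figures quoted above.
# The 2048-bit interface width is an assumption based on the JEDEC HBM4 spec;
# the ~11 Gb/s pin speed matches the figure cited for Micron's parts.
BUS_WIDTH_BITS = 2048   # assumed HBM4 interface width per stack
PIN_SPEED_GBPS = 11     # data rate per pin, in Gb/s

# bits/s across the bus -> bytes/s -> TB/s
bandwidth_tbps = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8 / 1000
print(f"Per-stack bandwidth: ~{bandwidth_tbps:.1f} TB/s")  # ~2.8 TB/s

# 48GB 16-layer vs 36GB 12-layer capacity gain
capacity_gain_pct = (48 - 36) / 36 * 100
print(f"16-layer capacity gain: ~{capacity_gain_pct:.0f}%")  # ~33%
```

Both results line up with the announced specs, which suggests the "over 2.8 TB/s" figure is a per-stack total rather than a per-pin rate.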

Micron's push includes a strategic acquisition—the PSMC Tongluo site in Taiwan—to expand cleanroom capacity and meet surging AI demand. Meanwhile, SK Hynix showcased its own 48GB modules, with industry consensus pointing to 12 or 16 layers as the optimal stack size for performance and efficiency.

Huang's tour included stops at Samsung, SK Hynix, and Micron, all critical suppliers for Nvidia's next-gen memory needs. The Feynman architecture, set to use HBM4e, will feature a custom base die paired with stacked memory. Beyond HBM, manufacturers are also advancing LPDDR5X, with LPDDR6 on the horizon.

The event confirmed Nvidia's deepening ties with memory suppliers, as HBM4 and HBM4e take center stage for AI hardware. Micron's early deliveries and production ramp-up signal a shift in the supply landscape, while SK Hynix and Samsung continue to refine their own high-capacity solutions. These developments will shape the next wave of AI accelerators, with efficiency and bandwidth as key drivers.
