The AI chip war isn’t fought in boardrooms alone. Sometimes, it starts with a bucket of fried chicken.

NVIDIA CEO Jensen Huang’s habit of meeting semiconductor executives over casual meals, including a recent visit to 99 Chicken in Santa Clara with SK Group Chairman Chey Tae-won, has become a defining feature of how the company secures the memory supplies critical to its AI dominance. While competitors scramble for DRAM and HBM allocations, NVIDIA’s early long-term agreements (LTAs) and Huang’s direct relationships with Samsung and SK hynix ensure the company stays ahead in a market where shortages have persisted for quarters.

The stakes couldn’t be higher. HBM4 memory, essential for next-generation AI accelerators such as NVIDIA’s Rubin servers, is now a bottleneck for the entire industry. SK hynix is poised to supply over 50% of NVIDIA’s initial HBM4 inventory, a share that underscores NVIDIA’s ability to lock in critical components before rivals even enter negotiations.

More Than Just Supply: A Shift in Memory Strategy

Beyond raw volume, the discussions between Huang and SK hynix’s leadership have centered on two areas: SOCAMM, a low-power memory module designed for NVIDIA’s upcoming Rubin AI servers, and the restructuring of Solidigm, an SK Group subsidiary, to focus exclusively on AI memory solutions. The shift reflects a broader trend: NVIDIA isn’t just buying memory; it’s reshaping how memory is produced and allocated.

This isn’t the first time Huang has used informal settings to broker deals. Last October, he was photographed sharing a meal with Samsung Electronics Executive Chairman Jay Y. Lee at a fried chicken restaurant in South Korea, a meeting that reportedly paved the way for deeper collaboration on memory supply. The pattern suggests a deliberate strategy: by stripping away the formality of corporate meetings, Huang builds trust and long-term partnerships that competitors struggle to replicate.

The $660 Billion Question: Why Supply Chain Control Matters

NVIDIA’s ability to secure memory isn’t just about avoiding shortages; it’s about maintaining control over the entire AI infrastructure buildout, which some analysts estimate will exceed $660 billion in the coming years. While other companies rely on spot-market purchases or last-minute negotiations, NVIDIA’s LTAs and direct relationships with manufacturers keep its supply chain stable even as demand for DRAM and HBM continues to surge.

When asked about potential memory shortages, Huang has dismissed concerns, citing NVIDIA’s early LTAs as a safeguard. The message is clear: in an industry where timing and relationships dictate success, NVIDIA’s approach is setting a new standard.

What’s Next for HBM4 and Beyond

The focus on HBM4 isn’t just about meeting current demand; it’s about preparing for the next wave of AI acceleration. With SK hynix now aligned under a new AI-focused strategy, NVIDIA’s access to high-bandwidth memory is expected to remain unmatched. Meanwhile, SOCAMM’s role in Rubin servers suggests NVIDIA is betting heavily on a memory architecture that balances performance with power efficiency, a critical factor as data centers expand.

For now, the fried chicken dinners may seem like an odd way to run a supply chain, but the results speak for themselves. While others navigate a fragmented memory market, NVIDIA’s CEO is turning informal meetings into a blueprint for dominance—one bucket at a time.