Enterprise AI infrastructure is entering a new phase of security—one where hardware-level encryption becomes the default rather than the exception. Nvidia’s Vera Rubin NVL72, announced at CES 2026, marks a turning point by encrypting every data path within a rack containing 72 GPUs and 36 CPUs, including the entire NVLink fabric. This is not just an upgrade; it’s a fundamental rethinking of how organizations protect their most valuable assets in an environment where cyberattacks are increasingly automated.

The Rubin platform introduces what Nvidia calls rack-scale confidential computing—a system where trust is no longer assumed but verified through cryptographic proof. For security teams, this means moving away from relying on contractual agreements with cloud providers to a model where every component of the infrastructure can prove it hasn’t been tampered with. The stakes are higher than ever: as AI training costs balloon and nation-state adversaries deploy autonomous attack agents, the cost of unprotected models is becoming untenable.

Why Security Can No Longer Keep Pace

The financial risks of insecure AI infrastructure are growing exponentially. Research indicates that frontier model training costs have surged 2.4 times annually since 2016, with billion-dollar runs potentially on the horizon in just a few years. Yet security measures designed to protect these investments are struggling to keep up. A significant portion of organizations lack proper AI access controls, leaving them vulnerable to breaches that cost millions more than traditional data leaks.
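To make the compounding concrete, here is a minimal sketch of how a 2.4x annual growth rate pushes training costs toward the billion-dollar mark. The $100 million starting point is an assumption chosen for illustration, not a figure from the research cited above.

```python
# Illustrative only: the base cost below is an assumption, not reported data.
# Only the 2.4x annual growth factor comes from the cited research.

def projected_cost(base_cost_usd: float, annual_growth: float, years: int) -> float:
    """Compound a base training cost forward by a fixed annual growth factor."""
    return base_cost_usd * (annual_growth ** years)

# Assume a hypothetical $100M frontier training run today, growing 2.4x per year:
for years in range(4):
    cost = projected_cost(100e6, 2.4, years)
    print(f"Year {years}: ${cost / 1e6:,.0f}M")
```

Under that assumed starting point, the run crosses the billion-dollar threshold in year three, which matches the "just a few years" horizon described above.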

One recent incident, the campaign tracked as GTG-1002, underscores the dangers: a state-sponsored group exploited an AI model to conduct a large-scale cyberattack with minimal human intervention. The attack autonomously discovered vulnerabilities, harvested credentials, and categorized stolen data, demonstrating how quickly adversaries can scale operations when armed with advanced foundation models. This sets a precedent where even the most sophisticated defenses may be outpaced without hardware-level protections in place.

Performance: Rubin vs. Blackwell in a New Security Context

The Rubin NVL72 doesn’t just offer security; it delivers raw performance that outpaces its predecessor, Nvidia’s Blackwell GB300 NVL72. Where the Blackwell platform achieved 1.44 exaFLOPS of FP4 inference compute, Rubin jumps to 3.6 exaFLOPS, a 2.5x improvement. This isn’t just about speed; it’s about bandwidth and scalability.

  • Inference compute: Rubin delivers 3.6 exaFLOPS, compared to Blackwell’s 1.44 exaFLOPS.
  • Per-GPU NVLink bandwidth: Rubin doubles Blackwell’s capacity to 3.6 TB/s.
  • Rack NVLink bandwidth: Rubin reaches 260 TB/s, up from Blackwell’s 130 TB/s.
  • HBM bandwidth per GPU: Rubin offers ~22 TB/s, nearly three times Blackwell’s ~8 TB/s.
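As a quick sanity check, the figures above imply the following generation-over-generation speedups. The Blackwell per-GPU NVLink value of 1.8 TB/s is inferred from the "doubles to 3.6 TB/s" claim rather than quoted directly.

```python
# Ratios derived from the spec figures quoted above.
# Blackwell per-GPU NVLink (1.8 TB/s) is inferred, not directly quoted.

specs = {
    "inference_exaflops_fp4": (3.6, 1.44),  # (Rubin, Blackwell)
    "nvlink_per_gpu_tbs":     (3.6, 1.8),
    "nvlink_rack_tbs":        (260, 130),
    "hbm_per_gpu_tbs":        (22, 8),
}

for metric, (rubin, blackwell) in specs.items():
    print(f"{metric}: {rubin / blackwell:.2f}x")
```

The ratios range from 2.0x (NVLink bandwidth) to 2.75x (HBM bandwidth), consistent with the "more than twofold" and "nearly three times" characterizations in the list above.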

These improvements mean that organizations can process larger datasets more securely and at higher speeds. The encryption isn’t an afterthought; it’s baked into every bus, ensuring that data in transit or at rest remains protected without performance trade-offs.

A Shifting Industry Landscape

Nvidia is not alone in recognizing the need for secure AI infrastructure. The Confidential Computing Consortium and IDC report that 75% of organizations are already adopting confidential computing solutions, with nearly half in production or pilot stages. However, challenges remain—84% of respondents cite attestation validation difficulties, and a skills gap persists, hindering widespread adoption.


AMD’s Helios rack offers an alternative approach, built on open standards like Meta’s Open Rack Wide specification. While it delivers slightly lower performance (2.9 exaFLOPS FP4 compute), its focus on interoperability through consortia like Ultra Accelerator Link and Ultra Ethernet provides flexibility for enterprises with diverse infrastructure needs. The competition between Nvidia and AMD is forcing security leaders to weigh integrated encryption against open-standards flexibility—a choice that will shape the future of secure AI deployments.

Practical Steps for Security Teams

Rack-scale encryption changes the game, but it’s not a silver bullet. Security teams must still adhere to zero-trust principles and integrate governance from the earliest stages of model development. Here’s how organizations can leverage this new era:

  • Before deployment: Verify cryptographic attestation before signing any contracts with cloud providers. If a provider cannot demonstrate robust attestation capabilities, it should be a red flag in negotiations.
  • During operation: Maintain separate enclaves for training and inference to minimize exposure. Security teams must be embedded in the model pipeline from day one—adding security as an afterthought leads to costly vulnerabilities that could have been engineered out earlier.
  • Across the organization: Conduct joint exercises between security and data science teams to identify weaknesses before attackers do. Shadow AI incidents, which now account for 20% of breaches, disproportionately expose sensitive customer data and intellectual property.
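The attestation check in the first step can be sketched in miniature. This is a deliberately simplified illustration built on stdlib primitives and a shared-key HMAC; real GPU attestation uses asymmetric signatures, vendor certificate chains, and protocols such as SPDM, and every name below is hypothetical rather than a real vendor API.

```python
# Simplified, hypothetical attestation-style check: accept a device report
# only if (1) its MAC verifies under a shared key and (2) its firmware
# measurement matches a known-good value. Real attestation is asymmetric
# and certificate-based; this sketch shows only the verification pattern.
import hashlib
import hmac

def verify_report(report: dict, key: bytes, golden: set) -> bool:
    """Return True only for an untampered report with a known-good measurement."""
    mac = hmac.new(key, report["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, report["mac"]):
        return False  # report integrity check failed
    return report["measurement"] in golden  # measurement must be on the allowlist

# Demo with made-up values standing in for a vendor-published golden hash:
key = b"attestation-demo-key"
good = hashlib.sha256(b"gpu-fw-1.2.3").hexdigest()
report = {"measurement": good,
          "mac": hmac.new(key, good.encode(), hashlib.sha256).hexdigest()}
print(verify_report(report, key, {good}))
```

The pattern to take away is the two-stage gate: integrity of the report first, then comparison against vendor-published golden measurements. A provider that cannot support an equivalent verifiable flow is the red flag the first bullet describes.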

The financial stakes are clear: breaches involving unsanctioned tools cost an average of $4.63 million, with one in five breaches exposing high-value assets like PII and IP. For organizations investing $50 million or more in a single training run, the lack of hardware-level encryption leaves them vulnerable to nation-state adversaries capable of machine-speed attacks.

Looking Ahead

The GTG-1002 campaign proved that autonomous cyberattacks are no longer hypothetical. With AI-driven intrusions becoming more sophisticated and less reliant on human intervention, the need for cryptographically attested infrastructure has never been greater. Nvidia’s Vera Rubin NVL72 sets a new benchmark by encrypting every component in a rack, while AMD’s Helios offers a path for those prioritizing open standards.

For security leaders, the question isn’t whether they can afford to deploy these solutions—it’s whether they can afford not to. The cost of inaction is measured in millions of dollars per breach, and with training runs reaching billions, the window for change is closing rapidly. CES 2026 highlighted this shift, but the race to secure AI infrastructure has already begun.

The future belongs to those who treat encryption not as an optional layer but as a foundational requirement—one that must be verified at every level, from the chip to the cloud.