Nvidia Introduces Enhanced Vera Rubin Chip at CES 2026: Everything You Need to Know

Nvidia’s CEO Jensen Huang unveiled the latest Rubin computing architecture during the Consumer Electronics Show (CES) 2026 in Las Vegas. Named after astronomer Vera Florence Cooper Rubin, this innovative architecture aims to meet the rapidly increasing computational needs of AI systems.

The Rubin architecture features a six-chip system that integrates a Vera CPU along with two Rubin GPUs, delivering remarkable advancements in both speed and energy efficiency.

“The Rubin platform employs extreme codesign across the six chips — the NVIDIA Vera CPU, NVIDIA Rubin GPU, NVIDIA NVLink™ 6 Switch, NVIDIA ConnectX®-9 SuperNIC, NVIDIA BlueField®-4 DPU, and NVIDIA Spectrum™-6 Ethernet Switch — to drastically reduce training time and inference token costs,” Nvidia stated.
The architecture is already in production, with availability expected to expand in the second half of the year. “Vera Rubin is specifically designed to tackle the significant challenge we face: the soaring computational requirements for AI. As of today, I can confirm that Vera Rubin is in full production,” Huang told the audience.

Nvidia’s internal benchmarks indicate that the new architecture will be three and a half times faster than the previous Blackwell architecture in model training and five times faster in inference, reaching a peak performance of 50 petaflops. It will also deliver eight times more inference compute per watt.

The design of the Rubin architecture centers on the Rubin GPU, with storage and interconnect handled by the BlueField DPU and NVLink systems. The new Vera CPU is tailored for agentic reasoning.

Dion Harris, Nvidia’s senior director of AI infrastructure solutions, underscored the significance of innovative storage solutions, remarking, “As new workflow types, such as agentic AI or long-term tasks, are enabled, they impose substantial stress and requirements on your KV cache.”

Rubin chips are already scheduled for deployment by major cloud and AI companies, including Amazon Web Services, Anthropic, and OpenAI. The architecture will also power HPE’s Blue Lion supercomputer and the Doudna supercomputer at Lawrence Berkeley National Laboratory.

Nvidia’s unwavering commitment to hardware development has transformed it into the world’s most valuable company, with the Rubin architecture set to bolster its dominance in the AI sector.
