NVIDIA Unveils Quantum-AI Integration and Record-Breaking Blackwell Architecture at GTC 2025
By Allison Proffitt
October 30, 2025 | More than forty years after physicist Richard Feynman first imagined a quantum computer capable of simulating nature directly, the industry has achieved a fundamental breakthrough: creating a single logical qubit that is coherent, stable, and error-corrected. This milestone, explained NVIDIA CEO Jensen Huang, has opened the door to practical quantum computing applications.
Huang used NVIDIA's GTC 2025 keynote, held this week in Washington, D.C., to share his vision for the convergence of quantum computing and artificial intelligence, and to reveal staggering demand for the company's Grace Blackwell architecture, demand he argued signals a fundamental shift in how the world computes.
The challenge, however, remains formidable. Today's qubits stay coherent for only a few hundred operations, while solving meaningful problems requires trillions of operations. The solution lies in quantum error correction, a process that measures auxiliary qubits to detect errors without disturbing the data qubits that carry the actual computational information.
To address this challenge, NVIDIA announced NVQLink, a high-speed interconnect system that enables quantum computer control, calibration, and quantum error correction while connecting quantum processing units (QPUs) to GPU supercomputers for hybrid simulations. The architecture is designed to scale from today's hundreds of qubits to future systems with hundreds of thousands of qubits.
Researchers and developers can access NVQLink through its integration with NVIDIA CUDA-Q, an open-source quantum development platform that orchestrates the hardware and software needed to run useful, large-scale quantum computing applications. The combination lets computation move between quantum and classical processors within microseconds, the latency needed for effective quantum-classical cooperation.
CUDA-Q is “qubit-agnostic,” the company says, integrating with all QPUs and qubit modalities and offering GPU-accelerated simulation when adequate quantum hardware isn't available.
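To make the error-correction loop concrete, here is a minimal sketch in CUDA-Q's open-source Python API (the cudaq package). It is an illustration rather than NVIDIA's reference code, and it assumes the GPU-accelerated "nvidia" simulator target: two auxiliary (ancilla) qubits are entangled with three data qubits so that measuring only the ancillas reveals where a bit-flip occurred, without reading out the data itself.

```python
# Illustrative sketch using the open-source CUDA-Q Python API (package: cudaq).
import cudaq

# Run on the GPU-accelerated simulator when no quantum hardware is attached.
cudaq.set_target("nvidia")

@cudaq.kernel
def parity_check():
    data = cudaq.qvector(3)      # data qubits holding the computational state
    ancilla = cudaq.qvector(2)   # auxiliary qubits used only for error detection

    x(data[1])                   # inject a hypothetical bit-flip error

    # Copy the pairwise parities of the data qubits onto the ancillas.
    x.ctrl(data[0], ancilla[0])
    x.ctrl(data[1], ancilla[0])
    x.ctrl(data[1], ancilla[1])
    x.ctrl(data[2], ancilla[1])

    # Measure only the ancillas; the data qubits are left undisturbed.
    mz(ancilla)

# The measured syndrome (here "11") tells a classical decoder which qubit flipped;
# NVQLink's job is to close this quantum-to-classical loop within microseconds.
print(cudaq.sample(parity_check, shots_count=1000))
```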
Unprecedented Industry Support
The response from the quantum computing ecosystem has been remarkable, Huang reported. Seventeen quantum computing companies are supporting NVQLink, along with eight national laboratories: Berkeley, Brookhaven, Fermilab, Lincoln Laboratory, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia.
In a major announcement for American scientific competitiveness, he said, the Department of Energy is partnering with NVIDIA to build seven new AI supercomputers to advance the nation's science. Huang credited Energy Secretary Chris Wright for bringing renewed focus to ensuring America leads in scientific computing.
The AI Scaling Revolution
Beyond quantum computing, Huang outlined why AI infrastructure demand has reached unprecedented levels. The key lies in three "scaling laws" that now govern AI development: pre-training (foundational learning), post-training (skill development), and inference-time thinking (reasoning while generating an answer).
Huang explained that inference, which increasingly means models thinking through a problem before answering, requires extraordinary computational resources, contrary to earlier industry assumptions that "inference is easy." Regurgitating memorized content is easy, he emphasized, but thinking is fundamentally hard.
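To see why, consider that each generated token in a transformer costs roughly two floating-point operations per model parameter, so an answer preceded by a long reasoning trace costs proportionally more than a short, memorized reply. The numbers below are hypothetical and serve only to show the scale of the gap.

```python
# Rough rule of thumb: ~2 FLOPs per parameter per generated token (forward pass only).
params = 70e9                          # hypothetical 70-billion-parameter model
flops_per_token = 2 * params

direct_tokens = 50                     # short, memorized-style reply
thinking_tokens = 5_000 + 50           # long reasoning trace plus the reply

direct = direct_tokens * flops_per_token
thinking = thinking_tokens * flops_per_token
print(f"direct: {direct:.2e} FLOPs, thinking: {thinking:.2e} FLOPs, "
      f"{thinking / direct:.0f}x more compute per query")
```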
This year marked a critical inflection point. AI models became smart enough that people are willing to pay for them, creating a virtuous cycle: smarter models attract more users, more usage requires more compute, and more compute enables smarter models. For example, NVIDIA itself pays for every license of Cursor, an AI coding assistant that has dramatically improved productivity for its software engineering workforce.
The End of Moore's Law and Extreme Co-Design
Huang declared that Dennard scaling, which long drove gains in transistor performance and power efficiency, stopped nearly a decade ago; transistor counts continue to grow, but the per-transistor gains in performance and energy efficiency have slowed dramatically.
With Moore's Law reaching its limits just as two exponential demands converge—the computational requirements of AI's three scaling laws and the explosive growth in AI usage—NVIDIA's answer is what Huang calls "extreme co-design."
NVIDIA is the only company that can start from a blank sheet of paper and simultaneously redesign computer architecture, chips, systems, software, model architecture, and applications, Huang claimed. This approach extends across multiple scales: scaling up entire racks as single computers, scaling out via Spectrum-X AI Ethernet, and connecting multiple data centers through Spectrum-XGS (gigascale).
Grace Blackwell: 10x Performance, 10x Lower Cost
The result of this extreme co-design is Grace Blackwell with NVLink 72, a system that Huang called the most radically redesigned computer since IBM's System/360.
According to independent benchmarks by SemiAnalysis, Grace Blackwell delivers 10 times the performance of NVIDIA's previous flagship H200 GPU—despite having only twice the number of transistors. The secret lies in architectural innovations that allow the system to handle massive AI models with mixture-of-experts architectures far more efficiently.
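For readers unfamiliar with the term, a mixture-of-experts model activates only a few specialist sub-networks ("experts") per token instead of the entire network, which is why moving data quickly between GPUs matters so much. The NumPy sketch below, with hypothetical sizes and top-2 routing, is an explanatory toy rather than anything from NVIDIA's software stack.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2                  # hypothetical sizes
router = rng.standard_normal((d_model, n_experts))    # learned routing weights
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Send each token through only its top-k experts, weighted by the router."""
    logits = x @ router                               # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    chosen = np.argsort(probs, axis=-1)[:, -top_k:]   # indices of the top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in chosen[t]:                           # only 2 of the 8 experts run
            out[t] += probs[t, e] * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)                        # (4, 64)
```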
Perhaps more remarkably, Grace Blackwell generates tokens at the lowest cost in the industry despite being the most expensive computer to build, Huang said: its token throughput is so much higher that it still delivers the best total cost of ownership.
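The arithmetic behind that claim is simple: what matters is not the sticker price but the price divided by the tokens a system produces over its lifetime. The figures below are illustrative placeholders, not NVIDIA or SemiAnalysis numbers; they only show how a machine that costs three times as much can still be the cheapest per token if it generates ten times the output.

```python
def dollars_per_million_tokens(capex, opex, tokens_per_sec, years=4):
    """Total cost of ownership divided by lifetime token output, per million tokens."""
    lifetime_tokens = tokens_per_sec * years * 365 * 24 * 3600
    return (capex + opex) / lifetime_tokens * 1e6

# Hypothetical systems: B costs roughly 3x more to buy and run, but is 10x faster.
a = dollars_per_million_tokens(capex=1_000_000, opex=400_000, tokens_per_sec=100_000)
b = dollars_per_million_tokens(capex=3_000_000, opex=1_200_000, tokens_per_sec=1_000_000)
print(f"System A: ${a:.3f} per million tokens, System B: ${b:.3f} per million tokens")
```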
The business implications are staggering. NVIDIA announced visibility into half a trillion dollars in cumulative orders for Blackwell and early Rubin systems through 2026—making it potentially the first technology company in history with such forward visibility.
The company has already shipped 6 million Blackwell GPUs in the first 3.5 quarters of production, with orders for 20 million additional Blackwell GPUs (each Blackwell system contains two GPUs) in the next five quarters. This represents five times the growth rate of the previous Hopper generation.
The top six cloud service providers (Amazon, CoreWeave, Google, Meta, Microsoft, and Oracle) are making massive capital expenditure investments, which NVIDIA noted come at an ideal time as Grace Blackwell enters volume production globally.
Beyond AI: The Accelerated Computing Transition
Huang emphasized that two platform shifts are occurring simultaneously. Beyond the AI revolution, the entire computing industry is transitioning from general-purpose computing to accelerated computing for tasks ranging from data processing to semiconductor mask computation, independent of AI applications.
NVIDIA's GPUs are the only processors that can handle traditional accelerated computing workloads, classical machine learning algorithms such as XGBoost and recommender systems, and cutting-edge AI, unlike specialized AI chips that can handle only narrow workloads, Huang said.
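As a concrete example of such a classical workload running on the same hardware, gradient-boosted trees can be trained on a GPU with a one-line parameter change. The sketch below assumes the xgboost Python package (version 2.x, which exposes a device parameter) and synthetic data; it illustrates the kind of workload Huang was referring to, not code shown at the keynote.

```python
import numpy as np
import xgboost as xgb

# Synthetic tabular data standing in for a real classification problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(100_000) > 0).astype(int)

model = xgb.XGBClassifier(
    n_estimators=200,
    tree_method="hist",   # histogram-based tree construction
    device="cuda",        # train on the GPU (requires a CUDA-capable device)
)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")
```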
As quantum computing, AI, and accelerated computing converge, Huang's vision is clear: the future of computing isn't just faster chips, but entirely reimagined systems where quantum processors, AI models, and GPU supercomputers work in concert to solve problems impossible for any single technology alone.
With half a trillion dollars in orders, support from national laboratories, and breakthrough architectures delivering 10x improvements, NVIDIA is positioning itself not just as a chip company, but as the architect of a new era in computing—one where the boundaries between quantum mechanics, artificial intelligence, and classical computation dissolve into unified systems capable of tackling humanity's greatest challenges.


