In the ever-evolving world of the internet, speed is everything. People want fast connectivity and content in an instant.
Now Nvidia has announced NVLink Fusion, a next-generation interconnect technology that promises to reshape how AI systems are built. Unveiled at Computex 2025, NVLink Fusion extends the company's proprietary high-speed NVLink communication standard beyond its own GPUs, opening the door for third-party CPUs and custom accelerators to connect directly into Nvidia's AI ecosystem.
Unlike traditional interconnects such as PCIe Gen5, NVLink Fusion boasts an eye-watering 1.8 terabytes per second (TB/s) of bidirectional bandwidth per GPU, roughly 14 times what a 16-lane PCIe Gen5 link can deliver.
NVLink, Nvidia's processor interconnect technology, is used mostly in large data centers, in systems packed with CPUs and GPUs. With NVLink Fusion, a single NVLink domain can scale up to 72 accelerators.
That’s more than just a performance bump.

CEO Jensen Huang even went as far as saying that NVLink Fusion has the capacity to "move more traffic than the entire internet."
At its core, NVLink Fusion is about flexibility.
It's designed for companies and hyperscalers who want semi-custom AI solutions, enabling them to integrate Nvidia's GPUs with third-party processors—whether that’s a general-purpose CPU from Qualcomm, a neural network accelerator from MediaTek, or something entirely bespoke.
This creates a much more inclusive playground where different silicon vendors can collaborate without compromising speed or efficiency.
The implications could be huge.
For enterprises and data centers managing massive AI models—from LLMs to real-time inference engines—NVLink Fusion means faster data movement, better energy efficiency, and, perhaps most importantly, the ability to fine-tune their infrastructure to match their exact needs.
In the big picture, this marks a new chapter in AI infrastructure—one where power, performance, and flexibility meet in harmony.
NVLink Fusion is certainly a technological upgrade.
However, Huang didn’t specify which period of peak internet traffic he was referring to; he cited only a figure of around 900 terabits per second (Tb/s).
As of 2024, the total international internet bandwidth reached approximately 1,479 Tb/s, marking a 22% increase from the previous year’s 1,217 Tb/s. That 2023 figure itself reflected a 23% growth compared to 2022, when bandwidth stood at 997 Tb/s—a 28% jump from 2021’s 786 Tb/s.
Each NVLink Fusion-enabled GPU offers a bidirectional bandwidth of 1.8 terabytes per second (TB/s), significantly outperforming traditional interconnects like PCIe Gen5.
To put this into perspective, 1.8 TB/s translates to 14.4 Tb/s, so a single NVLink Fusion-enabled GPU could not carry 2024's total international internet bandwidth of 1,479 Tb/s on its own.
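The arithmetic behind that comparison is simple enough to sketch: one byte is eight bits, so TB/s figures are multiplied by eight to compare against the Tb/s internet statistics quoted above. A minimal back-of-the-envelope calculation, using only the figures from this article:

```python
# Bandwidth figures quoted in the article.
GPU_TB_PER_S = 1.8       # bidirectional bandwidth of one NVLink Fusion-enabled GPU (TB/s)
BITS_PER_BYTE = 8

gpu_tbps = GPU_TB_PER_S * BITS_PER_BYTE   # convert terabytes/s to terabits/s
internet_2024_tbps = 1_479                # total international internet bandwidth, 2024 (Tb/s)

print(f"One GPU: {gpu_tbps:.1f} Tb/s")
print(f"Share of 2024 internet bandwidth: {gpu_tbps / internet_2024_tbps:.2%}")
```

So a single GPU's link, while enormous by interconnect standards, amounts to under 1% of 2024's global internet bandwidth.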
However, the system also has an NVLink Spine that ties together many GPU-accelerated nodes within a rack. Connecting 72 GPUs (specifically as part of the GB200 NVL72 system), it can achieve a total bandwidth of up to 130 terabytes per second (TB/s). This is made possible by the NVLink Switch chip, which facilitates high-speed communication among the GPUs within the domain.

To put this in perspective, 130 TB/s is equivalent to 1,040 terabits per second (Tb/s), or about 70% of total global internet bandwidth in 2024; it would, however, have exceeded the entire global internet bandwidth of 2022 (997 Tb/s).
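The year-by-year comparison above can be checked the same way, pitting the rack-scale figure against each year's internet bandwidth statistic cited earlier (all numbers are the article's own):

```python
# GB200 NVL72 aggregate NVLink Spine bandwidth, per the article.
spine_tb_per_s = 130
spine_tbps = spine_tb_per_s * 8           # TB/s -> Tb/s

# Total international internet bandwidth by year (Tb/s), per the article.
internet_tbps = {2021: 786, 2022: 997, 2023: 1_217, 2024: 1_479}

for year, bw in internet_tbps.items():
    share = spine_tbps / bw
    note = "exceeds" if share >= 1 else "is below"
    print(f"{year}: NVL72 spine ({spine_tbps} Tb/s) {note} global bandwidth ({share:.0%})")
```

Run it and the crossover is clear: the 1,040 Tb/s spine overtakes the internet's 2021 and 2022 totals but falls short of 2023 and 2024.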
Regardless, Nvidia isn’t doing this alone.
It's already partnering with tech heavyweights like Fujitsu, Marvell, Alchip Technologies, and Synopsys to build an ecosystem around NVLink Fusion. By inviting these partners into the fold, Nvidia is positioning itself as a platform enabler—a subtle but strategic move that reflects the growing importance of interoperability in AI hardware design.