Elon Musk, the serial entrepreneur and eccentric billionaire, has a lot of talent working for him.
Not just that: being who he is, Musk also has access to enormous resources and connections.
This was apparent when Musk and his team at xAI pulled off an engineering feat of epic proportions, setting up a supercluster of 100,000 Nvidia H100/H200 Hopper GPUs in an astonishing 19 days.
The feat earned praise from the Nvidia CEO himself.
Jensen Huang recounted the story of Musk's incredible achievement in a clip shared by the Tesla Owners Silicon Valley group on X, expressing admiration for what he called a "superhuman" effort.

According to Huang, the xAI team went from the conceptual phase to running its first AI training job on the newly built supercluster.
The project involved constructing the massive factory that houses the GPUs, outfitting it with liquid cooling and power systems, and ensuring compatibility with Nvidia's hardware, all within this remarkably short time frame.
To put this in perspective, Huang explained that a typical data center would require four years to achieve a similar setup: three years for planning and one for shipping, installation, and operational readiness.
"A supercomputer that you would build would take normally three years to plan and then they deliver the equipment and it takes one year to get it all working."
In contrast, Musk and his team achieved this in just 19 days, a timeline Huang described as unparalleled in the tech industry.
In an episode of the Bg2 Pod, Huang shared details about the painstaking work the people at xAI went through to accomplish what they did.
Huang's admiration stems from the fact that the process wasn't just about speed; it also involved tackling extreme technical complexity.
He detailed the intricate networking required for Nvidia’s hardware, noting the sheer volume of cabling and connections involved in making a single node operational.
"The number of wires that goes in one node...the back of a computer is all wires," Huang said.
Elon Musk is super human.
What would take everyone else a year, only took him 19 days. pic.twitter.com/q51sM48lsu
— Tesla Owners Silicon Valley (@teslaownersSV) October 13, 2024
It's worth noting that all those Nvidia H100/H200 GPUs were put together to create what's called "Colossus."
Considered the world's largest AI supercomputer, the supercluster leverages Nvidia's Spectrum-X Ethernet networking platform for performance and scalability.
This platform is designed for multi-tenant, hyperscale-grade AI factories.
As for its performance, it maintains 95% data throughput with zero application latency degradation, even at this massive scale.
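For a rough sense of what that throughput figure means in practice, here is a back-of-the-envelope sketch in Python. It assumes a 400Gb/s Ethernet link per GPU, which is typical for Spectrum-X deployments but not a confirmed Colossus spec:

    # Hypothetical effective-bandwidth estimate for a Spectrum-X style fabric.
    # The 400 Gb/s per-GPU link speed is an assumption, not a confirmed
    # Colossus figure; 95% is the sustained throughput cited above.
    LINK_GBPS = 400
    THROUGHPUT = 0.95
    GPUS = 100_000

    effective_gbps = LINK_GBPS * THROUGHPUT
    aggregate_tbps = effective_gbps * GPUS / 1_000  # Gb/s -> Tb/s

    print(f"Effective per-link bandwidth: {effective_gbps:.0f} Gb/s")       # 380 Gb/s
    print(f"Aggregate across {GPUS:,} links: ~{aggregate_tbps:,.0f} Tb/s")  # ~38,000 Tb/s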
Musk and his team at xAI created Colossus primarily to train xAI's Grok family of large language models, which, at the time of writing, is offered exclusively to X Premium subscribers.
Musk's expensive project was detailed for the first time by YouTuber ServeTheHome, who was given access to the Supermicro servers within the 100,000-GPU beast, showing off several facets of the supercomputer.
Back in September, Musk first revealed in an X post that it took his team 122 days "from start to finish" to bring the training cluster online, appearing to refer to the total project time.
Musk explained that the 19 days was the time needed to get Colossus from hardware installation to the start of training, adding that it was "the fastest by far anyone's been able to do that."
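The two timelines measure different things: Musk's 122 days covers the whole project, while the 19 days covers only the final bring-up. A quick calculation against Huang's "typical" figures makes the gap concrete, using calendar-day arithmetic on the numbers quoted in this article:

    # Comparing the timelines quoted in this article.
    huang_total_days = 4 * 365     # Huang's four-year typical timeline
    huang_bringup_days = 365       # his "one year to get it all working"
    xai_total_days = 122           # Musk's start-to-finish figure
    xai_bringup_days = 19          # hardware installation to first training

    print(f"Total project: ~{huang_total_days / xai_total_days:.0f}x faster")       # ~12x
    print(f"Bring-up alone: ~{huang_bringup_days / xai_bringup_days:.0f}x faster")   # ~19x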
This weekend, the @xAI team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days.
Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months.
Excellent…
— Elon Musk (@elonmusk) September 2, 2024
"Just to put it in perspective, 100,000 GPUs—that's easily the fastest supercomputer on the planet as one cluster," Huang said, commenting on the supercluster.
Musk's integration of 100,000 H100/H200 GPUs is unprecedented, with Huang emphasizing that such a feat has "never been done before" and is unlikely to be replicated anytime soon.
"As far as I know, there's only one person in the world who could do that; Elon is singular in his understanding of engineering and construction and large systems and marshaling resources; it's just unbelievable," he said.
The Nvidia CEO also commended xAI's engineering, software, networking, and infrastructure teams, and called them "extraordinary."

One witness to this extravagant feat is Oracle co-founder and chairman Larry Ellison.
Back in September, he recalled how he had dinner with both Musk and Huang at Nobu in Palo Alto.
During the dinner between the three longtime friends, Ellison said, he and Musk were literally "begging" Huang for GPUs.
"Please take our money... we need you to take more of our money," Ellison recalled, adding with a chuckle, "It went OK; it worked."
As for future plans, xAI wants to double the size of Colossus to a combined total of 200,000 Nvidia Hopper GPUs.