Roughly two years ago, Microsoft announced a partnership with OpenAI, the AI lab with which it has a close commercial relationship, to build what the tech giant called an “AI Supercomputer” running in the Azure cloud. With over 285,000 processor cores and 10,000 graphics cards, the system was, Microsoft claimed at the time, one of the largest supercomputer clusters in the world.
Now, presumably to support even more ambitious AI workloads, Microsoft says it’s signed a “multi-year” deal with Nvidia to build a new supercomputer hosted in Azure and powered by Nvidia’s GPUs, networking and AI software for training AI systems.
“AI is fueling the next wave of automation across enterprises and industrial computing, enabling organizations to do more with less as they navigate economic uncertainties,” Scott Guthrie, executive vice president of Microsoft’s cloud and AI group, said in a statement. “Our collaboration with Nvidia unlocks the world’s most scalable supercomputer platform, which delivers state-of-the-art AI capabilities for every enterprise on Microsoft Azure.”
Details were hard to come by at press time. But in a blog post, Microsoft and Nvidia said that the upcoming supercomputer will feature hardware like Nvidia’s Quantum-2 400Gb/s InfiniBand networking technology and recently debuted H100 GPUs. Current Azure instances offer Nvidia A100 GPUs paired with Quantum 200Gb/s InfiniBand networking.
Notably, the H100 ships with a special “Transformer Engine” to accelerate machine learning tasks and, according to Nvidia, delivers between 1.5 and 6 times the performance of the A100, depending on the workload. It’s also less power-hungry, matching the A100’s performance with up to 3.5 times better energy efficiency.
As part of the collaboration, Nvidia says it’ll use Azure virtual machine instances to research advances in generative AI, or the self-learning algorithms that can create text, code, images, video or audio. (Think along the lines of OpenAI’s text-generating GPT-3 and image-producing DALL-E 2.) Meanwhile, Microsoft will optimize its DeepSpeed library for new Nvidia hardware, aiming to reduce computing power and memory usage during AI training workloads, and work with Nvidia to make the company’s stack of AI workflows and software development kits available to Azure enterprise customers.
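For a sense of what “optimizing DeepSpeed” touches in practice, training-efficiency features like mixed precision and ZeRO memory partitioning are switched on through a JSON config passed to the library. The sketch below is illustrative only; the specific values are hypothetical, not drawn from Microsoft’s announcement:

```json
{
  "train_micro_batch_size_per_gpu": 8,
  "gradient_accumulation_steps": 4,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
```

Here, `fp16` cuts activation and gradient memory roughly in half, while ZeRO stage 2 shards optimizer state and gradients across GPUs — the kinds of levers the companies say they’ll tune for the H100 generation.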
Why Nvidia would opt to use Azure instances over its own in-house supercomputer, Selene, isn’t entirely clear; the company has already tapped Selene to train generative AI like GauGAN2, a text-to-image generation model that creates art from basic sketches. Evidently, Nvidia anticipates that the scope of the AI systems it’s working with will soon surpass Selene’s capabilities.
“AI technology advances as well as industry adoption are accelerating. The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups and enabled new enterprise applications,” Manuvir Das, VP of enterprise computing at Nvidia, said in a statement. “Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalize on the transformative power of AI.”
The insatiable demand for powerful AI training infrastructure has led to an arms race of sorts among cloud and hardware vendors. Just this week, Cerebras, which has raised over $720 million in venture capital to date at an over-$4 billion valuation, unveiled a 13.5 million-core AI supercomputer called Andromeda that it claims can achieve more than 1 exaflop of AI compute. Google and Amazon continue to invest in their own proprietary solutions, offering custom chips — TPUs and Trainium, respectively — for accelerating algorithmic training in the cloud.
A recent study found that the compute requirements for large-scale AI models doubled roughly every 10.7 months, on average, between 2016 and 2022. OpenAI once estimated that training GPT-3 on a single Nvidia Tesla V100 GPU would take around 355 years.
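Both figures can be sanity-checked with back-of-the-envelope arithmetic. The GPT-3 training-compute total (~3.14 × 10²³ FLOPs) and the V100 sustained throughput (~28 TFLOPS) used below are commonly cited estimates, not numbers from this article:

```python
# Assumed inputs (commonly cited estimates, not from the article):
GPT3_TRAIN_FLOPS = 3.14e23    # total floating-point operations to train GPT-3
V100_FLOPS = 28e12            # ~28 TFLOPS sustained on one Tesla V100

SECONDS_PER_YEAR = 365.25 * 24 * 3600
years_on_one_v100 = GPT3_TRAIN_FLOPS / V100_FLOPS / SECONDS_PER_YEAR
print(f"Single-V100 training time: ~{years_on_one_v100:.0f} years")

# Growth implied by compute doubling every 10.7 months over 2016-2022:
doublings = (6 * 12) / 10.7
growth_factor = 2 ** doublings
print(f"Implied compute growth over six years: ~{growth_factor:.0f}x")
```

The first result lands near the 355-year figure OpenAI cited; the second implies roughly a hundredfold increase in training compute over the six-year window.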
Microsoft and Nvidia team up to build new Azure-hosted AI supercomputer by Kyle Wiggers originally published on TechCrunch