Data center workloads for AI, graphics rendering, high-performance computing and business intelligence are getting a boost as a Who’s Who of the world’s biggest server makers and cloud providers snap up Nvidia’s Volta-based Tesla V100 GPU accelerators.
Nvidia is rallying its entire ecosystem, including software makers, around the new Tesla V100s, effectively consolidating its dominance in GPUs for data centers.
IBM, HPE, Dell EMC and Supermicro announced at the Strata Data Conference in New York Wednesday that they are or will be using the GPUs, which are now shipping. Earlier this week at Nvidia’s GPU Technology Conference in Beijing, Lenovo, Huawei and Inspur said they would be using Nvidia’s HGX reference architecture to offer Volta architecture-based systems for hyperscale data centers.
Volta architecture is key
Volta GPUs are a significant step beyond Nvidia's Pascal architecture. The Volta-based Tesla V100, for example, packs 21 billion transistors and 5,120 CUDA cores, running at a boost clock of 1,455MHz. The Pascal-based Tesla P100, by comparison, offers up to 3,840 CUDA cores and 15 billion transistors.
Nvidia has about 70 percent of the market for discrete GPUs, and is just about the only game in town for machine-learning GPU workloads. Hardware is just part of the story, though. The ecosystem around its CUDA parallel computing platform and application programming interface (API) model, for example, has helped erect a barrier that competitors like Intel and AMD find hard to breach.
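To give a sense of what that CUDA programming model looks like in practice, here is a minimal vector-addition sketch in CUDA C++. It is purely illustrative and not drawn from the article; the kernel name, array sizes, and launch configuration are arbitrary choices.

```cuda
#include <cstdio>

// Each GPU thread handles one element: this is the data-parallel
// style CUDA encourages, with thousands of threads running at once.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory lets the CPU and GPU share these buffers,
    // keeping the host-side code short.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

The point of the example is the ecosystem argument: the kernel itself is trivial, but the compiler (nvcc), runtime, profilers, and libraries that surround it are what years of CUDA investment have produced, and what rivals must replicate.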
“While we focus a lot on the hardware and Volta is obviously a fine part, there should be as much if not more focus on all the supplemental products that are wrapped around it — compilers, systems, software tools, etcetera — that Nvidia honestly has had years’ head start developing,” said Mercury Research’s Dean McCarron.
Software tools for Volta-based Tesla V100 GPUs
Among these complementary tools is a new version of Nvidia’s TensorRT, an optimizing compiler and runtime engine for deploying machine-learning models on hyperscale data center, embedded, or automotive GPU platforms.
Third-party software makers also stepped forward at Strata to announce applications that will run on the Volta-based GPUs. H2O.ai’s Driverless AI program, for example, has been specifically tuned for and is already running on Nvidia’s Volta-based DGX-1 supercomputer, released earlier this month, and will run on any of the servers supporting the new Tesla V100s (while remaining backward compatible with Pascal-based systems).
Driverless AI is designed to let business users glean insights from data without needing expertise in machine learning models, and H2O.ai has found use cases in areas such as insurance, financial services and health care, according to SriSatish Ambati, the company’s CEO and co-founder.
While the massively parallel architecture of GPUs makes them particularly suitable for machine-learning tasks such as training neural networks, servers incorporate the processors for a variety of tasks.