Google announced that it has built the world’s fastest machine learning (ML) training supercomputer. The latest MLPerf benchmark results showed that this supercomputer, together with Google’s newest Tensor Processing Unit (TPU) chip, set performance records in six of the eight MLPerf benchmarks.
Naveen Kumar of Google AI said the company achieved these results using ML model implementations in TensorFlow, JAX, and Lingvo. He added that four of the eight models were trained from scratch in under 30 seconds.
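For context, frameworks such as JAX express training as compiled, differentiable functions that can be lowered by XLA onto accelerators like TPUs. The following is a minimal, hypothetical sketch of a JAX training step using a toy linear classifier with dummy data; it is not Google’s MLPerf submission code, and every model detail here is an illustrative assumption.

```python
# Minimal JAX training-step sketch (toy example, not MLPerf code).
import jax
import jax.numpy as jnp

def init_params(key):
    # Tiny linear model used purely for illustration.
    w_key, _ = jax.random.split(key)
    return {"w": jax.random.normal(w_key, (784, 10)) * 0.01,
            "b": jnp.zeros(10)}

def loss_fn(params, x, y):
    # Softmax cross-entropy against one-hot labels.
    logits = x @ params["w"] + params["b"]
    log_probs = jax.nn.log_softmax(logits)
    return -jnp.mean(jnp.sum(y * log_probs, axis=-1))

@jax.jit  # jit-compiled via XLA, the same compiler stack that targets TPUs
def train_step(params, x, y, lr=0.1):
    # One plain SGD step on the toy model.
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (32, 784))        # dummy batch of inputs
y = jax.nn.one_hot(jnp.arange(32) % 10, 10)  # dummy one-hot labels
params = train_step(params, x, y)
```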
In 2015, training one of these models took more than three weeks on the most advanced hardware accelerator available. Five years later, the latest TPU supercomputer can train the same model dramatically faster.
The supercomputer Google used for MLPerf training is four times larger than the Cloud TPU v3 Pod, which set three records in the previous competition.
Nvidia, the graphics giant, announced that it has delivered the world’s fastest Artificial Intelligence (AI) training performance among commercially available chips, which will help large companies tackle major challenges in AI, data science, and scientific computing.
According to the MLPerf benchmarks, Nvidia A100 GPUs and DGX SuperPOD systems are the fastest commercially available products for AI training.
Across the eight MLPerf benchmarks, the A100 Tensor Core GPU demonstrated the fastest per-accelerator performance.
The company said in a statement that the real winners are customers who can apply this performance to their businesses faster and more cost-effectively with AI. The A100, the first processor based on the Nvidia Ampere architecture, hit the market faster than any previous Nvidia GPU. Nvidia also said that companies across the world are applying the A100 to tackle complex challenges in AI, data science, and scientific computing.
The world’s leading cloud providers, such as Amazon Web Services (AWS), Baidu Cloud, Microsoft Azure, and Tencent Cloud, as well as server makers like Dell Technologies, Hewlett Packard Enterprise, Inspur, and Supermicro, have come forward to meet the strong demand for the Nvidia A100.