NetApp Joins Hands with Run:AI

To enable faster AI experimentation with full GPU utilization, leading cloud data services provider NetApp is joining hands with Run:AI, a company that virtualizes AI infrastructure.

The collaboration will benefit both companies: it allows multiple AI experiments to run simultaneously, with faster access to data and better use of available compute resources. Run:AI automates resource allocation, enabling full GPU utilization.

With the NetApp® ONTAP® AI proven architecture, every experiment can run at maximum speed because data pipeline bottlenecks are eliminated. Together, NetApp and Run:AI give teams the double benefit of faster experiments and full resource utilization as they scale their AI efforts.

Speed is critical in AI: faster experimentation is directly correlated with successful business outcomes. Yet inefficiencies are rife in AI projects. Bottlenecks arise when outdated storage solutions collide with long data processing times, while static allocation of GPU compute resources and workload orchestration issues further limit the number of experiments researchers can run.

NetApp and Run:AI are collaborating to simplify the orchestration of AI workloads, streamlining both the data pipeline and machine scheduling for deep learning. With the NetApp ONTAP AI proven architecture, businesses can fully deliver on artificial intelligence and deep learning by simplifying, integrating, and accelerating the data pipeline.

Run:AI’s orchestration of AI workloads adds a proprietary Kubernetes-based scheduler and a resource utilization platform that help researchers manage and optimize GPU utilization. With NetApp’s and Run:AI’s technologies combined, multiple experiments can run simultaneously on different compute nodes, each with fast access to many datasets on a centralized storage system.
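
As a rough illustration of what submitting a training job to a Kubernetes-based scheduler can look like, here is a minimal sketch using the official Kubernetes Python client. The scheduler name, namespace, image, and pod details are illustrative assumptions, not Run:AI’s actual configuration.

```python
# Minimal sketch: submitting a GPU training pod to a custom scheduler
# via the official Kubernetes Python client. The scheduler name
# ("runai-scheduler"), namespace, and image are illustrative
# assumptions, not documented Run:AI settings.
from kubernetes import client, config

def submit_training_pod():
    config.load_kube_config()  # reads credentials from ~/.kube/config

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-resnet50"),
        spec=client.V1PodSpec(
            scheduler_name="runai-scheduler",  # assumed scheduler name
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="my-registry/train:latest",  # hypothetical image
                    command=["python", "train.py"],
                    resources=client.V1ResourceRequirements(
                        # request one whole GPU on the node
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    submit_training_pod()
```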

With Run:AI’s centralized resource pooling, queueing, and prioritization, researchers can focus solely on data science rather than on infrastructure management. They can also boost productivity by running as many workloads as they need, without compute shortages or data pipeline bottlenecks.
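
To make the queueing and prioritization idea concrete, here is a generic sketch of a priority queue that admits jobs as GPUs free up. It illustrates the concept only and is not Run:AI’s actual algorithm; all names and numbers are invented for the example.

```python
# Generic sketch of queueing with preset priorities: jobs wait in a
# priority queue and are admitted as GPUs become free. Concept only;
# not Run:AI's actual scheduling algorithm.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                       # lower value = higher priority
    name: str = field(compare=False)
    gpus_needed: int = field(compare=False)

class GpuQueue:
    def __init__(self, total_gpus: int):
        self.free_gpus = total_gpus
        self.pending: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self.pending, job)
        self.admit()

    def admit(self) -> None:
        # Admit queued jobs in priority order while GPUs are available.
        while self.pending and self.pending[0].gpus_needed <= self.free_gpus:
            job = heapq.heappop(self.pending)
            self.free_gpus -= job.gpus_needed
            print(f"running {job.name} on {job.gpus_needed} GPU(s)")

    def finish(self, job: Job) -> None:
        # Returning GPUs to the pool may let waiting jobs start.
        self.free_gpus += job.gpus_needed
        self.admit()

q = GpuQueue(total_gpus=4)
q.submit(Job(priority=1, name="team-a-training", gpus_needed=2))  # runs now
q.submit(Job(priority=2, name="team-b-training", gpus_needed=4))  # queued
```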

Run:AI’s fairness algorithm ensures that all teams and users get their fair share of resources; prioritization policies, for example, can be preset. Run:AI’s scheduler and virtualization technology let researchers easily use whole GPUs, fractional GPUs, or multiple GPU nodes for distributed training on Kubernetes.
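
A fractional GPU request could, for instance, be expressed as a pod annotation that a GPU virtualization layer interprets, as in the hedged sketch below. The "gpu-fraction" annotation key and the other pod details are assumptions for illustration, not a confirmed Run:AI interface.

```python
# Hedged sketch: marking a pod as needing only half a GPU via an
# annotation that a GPU virtualization layer could interpret. The
# "gpu-fraction" key and scheduler name are illustrative assumptions,
# not a confirmed Run:AI API.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="notebook-half-gpu",
        annotations={"gpu-fraction": "0.5"},  # assumed key: half a GPU
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # assumed scheduler name
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="notebook",
                image="my-registry/notebook:latest",  # hypothetical image
                # No nvidia.com/gpu limit here: in this sketch the
                # fractional share comes from the annotation instead.
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```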

In this way, AI workloads run on the basis of need rather than fixed capacity, and data scientists can now run large-scale AI experiments on the same infrastructure.