NVIDIA and NetApp have launched ONTAP AI, an enterprise platform for storing and processing artificial intelligence models and data. Unlike cloud services, it comes with tools for accelerated data collection, processing, and transfer, VentureBeat reports.
The platform is built on NVIDIA DGX-1 supercomputers and NetApp AFF A800 all-flash storage systems. NetApp's Data Fabric architecture distributes data across clouds and provides fast access regardless of format or location.
“NetApp’s cloud-connected data solutions and new proven architecture with NVIDIA DGX create a single data environment for AI. This gives customers control, access, and performance,” said Octavian Tanase, Senior Vice President, ONTAP, at NetApp.
The NVIDIA DGX-1 is a second-generation deep learning system equipped with Tesla V100 GPUs. A single DGX-1 delivers 1 petaflops of performance and can train the FairSeq neural network, introduced in May 2017, in about a day and a half. A 24-node AFF A800 cluster reads data at 300 GB/s with a latency of 200 μs.
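To put the quoted storage figures in perspective, a quick back-of-envelope calculation shows how long the cluster would take to stream a large training dataset. The 1 PB dataset size and decimal units (1 PB = 1,000,000 GB) are illustrative assumptions, not figures from the announcement:

```python
# Hypothetical example: time to stream a dataset through a 24-node
# AFF A800 cluster at the quoted sustained read rate of 300 GB/s.
READ_RATE_GBPS = 300        # quoted cluster read throughput, GB/s
DATASET_PB = 1              # assumed dataset size, PB (decimal units)

dataset_gb = DATASET_PB * 1_000_000
seconds = dataset_gb / READ_RATE_GBPS
print(f"Streaming {DATASET_PB} PB at {READ_RATE_GBPS} GB/s "
      f"takes ~{seconds / 60:.0f} minutes")
```

At that rate, a full petabyte can be read in under an hour, which is the kind of headroom the platform's "removes performance bottlenecks" claim refers to.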
The ONTAP AI platform has already been used by the consulting firm Cambridge Consultants, which applied it to develop systems for studying the effects of drugs on patients. Cambridge Consultants also used it to create Vincent, a program that paints at a level approaching human capability.
In April 2018, NVIDIA CEO Jensen Huang said the company's GPUs no longer follow Moore's law, advancing faster than it predicts. Using the DGX-2 supercomputer, the company trained the AlexNet neural network on 15 million images in 18 minutes.
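The DGX-2 figure implies a striking processing rate. The sketch below treats the quoted run as a single pass over the images purely for illustration (real training typically makes many passes, so this is a lower bound on image throughput, under that assumption):

```python
# Rough throughput implied by the DGX-2 AlexNet result:
# 15 million images processed in 18 minutes (single pass assumed).
images = 15_000_000
minutes = 18

per_second = images / (minutes * 60)
print(f"~{per_second:,.0f} images/s")
```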
By leveraging the NetApp Data Fabric, ONTAP AI removes performance bottlenecks and enables secure, non-disruptive access to data across multiple sources and formats.
“The combination of NVIDIA DGX and NetApp all-flash arrays meets the infrastructure challenges of today’s AI deployments,” said Jim McHugh, Vice President and General Manager of Deep Learning Systems at NVIDIA.