Atipa Technologies, a prominent provider of storage and computing solutions, has reportedly introduced a deep learning and high-performance computing platform named the Atipa Procyon G-Series.
The platform is powered by the latest NVIDIA A100 Tensor Core GPU, built on the NVIDIA Ampere architecture, and uses AI containers, frameworks, and models from the NGC catalog, cited sources with knowledge of the matter.
It is to be noted that data scientists and researchers require deep learning setups to transform large amounts of data into valuable information that can be used in scientific simulations. However, installing and maintaining these setups demands time and expertise that may not be readily available to researchers.
Incidentally, the newly introduced Atipa Procyon G-Series enables easy access to the NGC catalog for deep learning, machine learning, and high-performance computing (HPC) applications.
Bart Willems, Technology Director at Atipa Technologies, reportedly stated that the platform's ability to download popular frameworks and containers such as Caffe and TensorFlow, or HPC applications such as LAMMPS and GROMACS, which come with pre-installed libraries and dependencies, allows IT staff to save time.
The use of NGC containers will benefit numerous scientists through constant updates and performance optimizations, Willems confirmed.
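As a minimal illustration only, and not part of the announcement, the Python sketch below shows how a researcher might confirm that the GPUs on such a system are visible to TensorFlow once an NGC container has been pulled and started; the scenario and code are assumptions for demonstration, not Atipa's or NVIDIA's documented procedure.

    # Hedged sketch (assumption): verify GPU visibility inside an NGC
    # TensorFlow container running on a GPU server such as the Procyon G-Series.
    import tensorflow as tf

    # List the GPU devices TensorFlow can see; each entry would correspond
    # to one GPU exposed to the container.
    gpus = tf.config.list_physical_devices("GPU")
    print(f"Visible GPUs: {len(gpus)}")
    for gpu in gpus:
        print(gpu.name)

Because NGC containers ship with the framework, CUDA libraries, and dependencies pre-installed, a check like this is typically all that is needed before launching training jobs.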
Meanwhile, Paresh Kharya, Senior Director of Product Management and Marketing for Accelerated Computing at NVIDIA, was quoted as saying that obtaining advanced solutions for a given problem depends on several factors, including the time needed to build, install, and optimize applications, as well as simulation time.
He further stated that Atipa servers equipped with A100 GPUs give researchers easy access to the power of GPU acceleration to produce results.
The Atipa Procyon G-Series reportedly offers excellent configurability and performance, with benefits such as fast NVMe flash storage, high-bandwidth, low-latency NVIDIA Mellanox HDR 200Gb/s InfiniBand networking, and flexible choices for GPU-to-GPU communication.