GigaIO’s SuperNODE Supports TensorWave Deployment with AMD MI300X

SAN JOSE, CA, December 6, 2023 – GigaIO, a provider of open, workload-defined infrastructure for artificial intelligence and accelerated computing, announced what the company said is the largest order to date for SuperNODE, using tens of thousands of AMD Instinct MI300X accelerators.

The GigaIO infrastructure will form the backbone of a specialized AI cloud dubbed “TensorNODE,” which will be built by cloud provider TensorWave to offer access to AMD data center GPUs, particularly for use in LLMs.

SuperNODE, which was launched last June, was the world’s first single-node supercomputer equipped with 32 GPUs, the company said. The TensorNODE deployment will build on this architecture more broadly, leveraging GigaIO’s PCIe Gen 5 memory fabric to offer simpler setup and deployment of workloads than is possible with legacy networks, while eliminating the associated performance tax, according to GigaIO.

“TensorWave is excited to bring this innovative solution to market with GigaIO and AMD,” said Darrick Horton, CEO of TensorWave. “We chose the GigaIO platform because of its advanced capabilities, as well as GigaIO’s alignment with our values and commitment to open standards. We are leveraging this new infrastructure to support large-scale AI workloads, and we are proud to collaborate with AMD as one of the first cloud providers to deploy MI300X accelerator solutions.”

GPU utilization is critical in the age of GPU scarcity, but it requires VRAM and significant memory bandwidth. TensorWave will use FabreX to create the first petabyte-scale GPU memory pool without the performance penalty of non-memory-centric networks. The first TensorNODE installments are expected to go live starting in early 2024, with an architecture that will support up to 5,760 GPUs across a single FabreX memory fabric domain. Workloads will have access to more than a petabyte of VRAM from any node within a single job, enabling even the largest tasks to be completed in record time. Additional TensorNODEs will be deployed throughout 2024.
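
As a rough sanity check on the petabyte figure, the sketch below works through the arithmetic, assuming the 192GB of HBM3 per MI300X cited later in this release and counting only GPU-attached memory in decimal units (host DRAM and any other fabric-attached memory are ignored):

```python
# Back-of-the-envelope check of the "more than a petabyte of VRAM" claim.
# Assumption: 192 GB of HBM3 per MI300X (figure cited later in this release);
# host DRAM and other fabric-attached memory are not counted.

GPUS_PER_FABRIC_DOMAIN = 5_760   # max GPUs in a single FabreX memory fabric domain
HBM_PER_GPU_GB = 192             # HBM3 capacity of one MI300X accelerator

total_vram_gb = GPUS_PER_FABRIC_DOMAIN * HBM_PER_GPU_GB
total_vram_pb = total_vram_gb / 1_000_000  # decimal petabytes (1 PB = 10^6 GB)

print(f"Aggregate VRAM: {total_vram_gb:,} GB ~= {total_vram_pb:.2f} PB")
# Aggregate VRAM: 1,105,920 GB ~= 1.11 PB
```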

The composable nature of GigaIO’s dynamic infrastructure gives TensorWave tremendous flexibility and agility compared with standard static infrastructure: as the needs of LLM and AI customers evolve over time, the infrastructure can be quickly adjusted to meet current and future requirements.

The TensorWave cloud will also be greener than alternatives by eliminating redundant servers and the associated networking equipment, offering savings in cost, complexity, space, water and energy.

“We are thrilled to power the TensorWave infrastructure at scale by combining the power of the revolutionary AMD Instinct MI300X accelerators with GigaIO’s proprietary AI infrastructure, including our unique memory fabric, FabreX,” said Alan Benjamin, CEO of GigaIO. “This deployment validates our pioneering approach to reimagining data center infrastructure.” The TensorWave team brings a visionary approach to cloud computing and deep experience in building and deploying fast, highly advanced data centers.

TensorNODE is an all-AMD solution featuring both 4th Gen AMD EPYC CPUs and MI300X accelerators. The anticipated performance of TensorNODE is made possible by the MI300X, which provides 192GB of HBM3 memory per accelerator. The industry-leading memory capacity of these accelerators, combined with the GigaIO memory fabric (which allows near-perfect scaling without compromising performance), addresses the problem of underutilized or idle GPU cores.

“We are excited to partner with GigaIO and TensorWave to deliver unique solutions for the demands of cutting-edge AI and HPC workloads,” said Andrew Dieckmann, corporate vice president and general manager of Data Center and Accelerated Processing at AMD. “GigaIO’s SuperNODE architecture, powered by AMD Instinct accelerators and AMD EPYC CPUs, is expected to deliver impressive performance and flexibility.”
