Recent research carried out by ITPro showed that investment in AI is a key priority for businesses globally. Following through on that ambition, however, is a question that looms large for many organizations.
At Nvidia’s GTC conference, Hewlett Packard Enterprise (HPE) unveiled Mod Pod, a product intended to answer at least some elements of this conundrum.
Mod Pod is a liquid-cooled modular data center optimized for AI and HPC workloads. It’s built into a container and can, the company says, be easily deployed on a business’s premises without requiring a complete overhaul of its existing data center.
“A lot of data center space that does exist, does not have the capabilities for liquid cooling, which means you don’t have the density in your racks, and you also don’t have the PUE (power usage effectiveness),” said HPE CTO Fidelma Russo. “So [Mod Pod] gives you a lower total cost of ownership.”
“We have examples of our customers siting these in parking lots where they used to have employees, but with the work from home from COVID, they have the space,” she added. “So again it’s easy, [you’ve] just got to level some space and you can have a data center in your backyard up and running in months.”
Mod Pod comes in 6m and 12m configurations, and supports up to 1.5MW per unit with a PUE of under 1.1. While HPE is keen to highlight its liquid-cooling credentials, the Adaptive Cascade Cooling technology can be adapted to use either air or liquid cooling depending on customer need and preference.
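PUE is simply the ratio of a facility’s total power draw to the power consumed by the IT equipment alone, so a figure under 1.1 means less than 10% overhead for cooling and power delivery. A minimal sketch of that arithmetic (the 1.5MW per-unit capacity is HPE’s figure; the overhead draw below is an illustrative assumption, not an HPE number):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# 1.5 MW is HPE's stated per-unit capacity for Mod Pod; the cooling and
# power-delivery overhead here is an assumed value for illustration only.
it_load_kw = 1500.0        # IT equipment draw (1.5 MW per Mod Pod unit)
overhead_kw = 120.0        # assumed cooling + power-delivery overhead

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.08, inside HPE's quoted sub-1.1 ceiling
```

Liquid cooling helps here because it removes heat far more efficiently than air, shrinking the overhead term in the numerator.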
HPE expands Private Cloud capabilities
In addition to Mod Pod, HPE also announced several new features in Private Cloud AI, the flagship – and, thus far, only – product from Nvidia AI Computing by HPE, its partnership with the chipmaker.
The first is support for the newly announced Nvidia AI Data Platform, which allows Nvidia-Certified Storage providers, of which HPE is one, to build AI query agents into their hardware using Nvidia AI Enterprise software, such as NIM and Llama Nemotron models, as well as AI-Q Blueprint.
It also revealed a new developer system that adds an “instant AI development environment”, powered by Nvidia accelerated computing, HPE Data Fabric, and support for rapid deployment of Nvidia blueprints.
There were also a number of server announcements, including HPE ProLiant Compute DL380a Gen12 and HPE ProLiant Compute DL384b Gen12, which feature Nvidia RTX Pro 6000 Blackwell chips and Nvidia GB200 Grace Blackwell NVL4 Superchips respectively.
HPE ProLiant Compute XD servers, meanwhile, will support the Nvidia HGX B300 platform, launched at GTC. The company says this technology will allow customers to “train, fine-tune and run large AI models for the most complex workloads, including agentic AI and test-time reasoning inference”.