NVIDIA Grace Hopper combines CPU and GPU in a unified architecture for AI, LLMs, and HPC, unlocking massive memory bandwidth, efficiency, and scalability.


SUPERMICRO 2U H100 NVL Server – Grace CPU | 2× H100 NVL GPUs | 480GB LPDDR5X | 7.68TB NVMe | AI Inference at Scale

87.173,00 

SUPERMICRO 2U GPU Server – 2× NVIDIA H100 NVL | NVIDIA Grace Superchip | 480GB LPDDR5X | Gen4 NVMe | 10GbE + IPMI


Specifications

CPU: 1 × NVIDIA Grace CPU Superchip (72 Cores)
RAM: 480GB Co-Packaged LPDDR5X-4800 with ECC
Storage: 2 × 3.84TB Gen4 NVMe SSDs (7.68TB Total High-Speed Storage)
GPU: 2 × NVIDIA H100 NVL Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 2U Rackmount GPU-Optimized Server
Support: Includes 3-Year Parts Warranty


This SUPERMICRO 2U GPU server is designed for ultra-efficient AI inference, LLM deployment, and hyperscale compute workloads. Powered by the 72-core NVIDIA Grace CPU Superchip, two NVIDIA H100 NVL GPUs, and 480GB of high-bandwidth LPDDR5X memory, it delivers cutting-edge performance with maximum power efficiency for AI, HPC, and data-intensive enterprise environments.
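As a rough illustration of what these specs mean for LLM deployment, the sketch below estimates whether a model's weights fit in GPU memory or would need to spill into the Grace CPU's 480GB of LPDDR5X. The per-GPU figure of 94GB HBM3 for the H100 NVL and the placement logic are illustrative assumptions, not a vendor sizing tool, and the estimate covers weights only (no KV cache or activations).

```python
# Assumed figures for the listed configuration:
#   2 × H100 NVL ≈ 94 GB HBM3 each → 188 GB GPU memory total (assumption)
#   Grace CPU Superchip → 480 GB LPDDR5X (from the spec sheet above)
GPU_MEM_GB = 2 * 94
CPU_MEM_GB = 480

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate size of model weights alone, in GB."""
    # 1e9 params × bytes-per-param / 1e9 bytes-per-GB = params_billion × bytes
    return params_billion * bytes_per_param

def placement(params_billion: float, bytes_per_param: float) -> str:
    """Very rough placement decision based only on weight size."""
    size = weights_gb(params_billion, bytes_per_param)
    if size <= GPU_MEM_GB:
        return "fits in GPU HBM"
    if size <= GPU_MEM_GB + CPU_MEM_GB:
        return "needs CPU-memory offload"
    return "exceeds this node"

print(placement(70, 2))    # 70B params at FP16 (140 GB)  → fits in GPU HBM
print(placement(175, 2))   # 175B params at FP16 (350 GB) → needs CPU-memory offload
print(placement(405, 2))   # 405B params at FP16 (810 GB) → exceeds this node
```

The point of the sketch: the coherent CPU memory on Grace is what lets models far larger than the GPUs' combined HBM still be served from a single 2U node, at reduced throughput.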