SUPERMICRO 4U H100 NVL Server – 8× 94GB GPUs | Dual EPYC 9654 | 1.5TB DDR5 | Gen4 NVMe | AI Inference at Scale

285.333,00 

Specifications

CPU: 2 × AMD EPYC 9654 (Genoa, 96 cores each, 2.40GHz)
RAM: 1.5TB DDR5-4800 ECC RDIMM (24 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (7.68TB total)
GPU: 8 × NVIDIA H100 NVL 94GB Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount High-Density GPU Server Platform
Support: Includes 3-Year Parts Warranty


This SUPERMICRO 4U GPU server is engineered for large-scale AI inference, LLM deployment, and high-throughput generative AI workloads. With eight NVIDIA H100 NVL GPUs (752GB of combined GPU memory), dual 96-core AMD EPYC Genoa processors (192 cores total), and 1.5TB of DDR5 system memory, it delivers the compute density and memory bandwidth needed for transformer models, deep learning pipelines, and enterprise-scale AI acceleration.
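
As a quick post-deployment sanity check, a minimal sketch like the one below can confirm that all GPUs are visible to the software stack. It assumes PyTorch with CUDA support is installed (not something this listing specifies) and simply enumerates each device and its memory; on this configuration it should report eight H100 NVL GPUs of roughly 94GB each.

# Illustrative sketch only (assumes PyTorch with CUDA; not part of this listing):
# enumerate the GPUs the server exposes before deploying an inference workload.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):    # expected: 8 devices on this configuration
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3    # each H100 NVL should report roughly 94GB
        print(f"GPU {i}: {props.name}, {vram_gb:.0f} GB")
else:
    print("No CUDA-capable GPUs visible")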
