NVIDIA DGX B200 AI Infrastructure Server – 8× B200 180GB GPUs | Dual Intel Xeon Platinum 8570 | 2TB DDR5 | 400Gb/s Networking | AI Supercomputing for the Enterprise
Specifications
CPU: 2 × Intel Xeon Platinum 8570 (56 cores each, 2.1 GHz base)
RAM: 2TB DDR5 (Optional Upgrade: up to 4TB)
Storage (OS): 2 × 1.9TB NVMe M.2 SSDs; Storage (Data Cache): 8 × 3.84TB NVMe U.2 SSDs
GPU: 8 × NVIDIA B200 180GB Tensor Core GPUs
Networking: 4 × OSFP ports serving 8 × single-port ConnectX-7, up to 400Gb/s InfiniBand/Ethernet
2 × Dual-Port QSFP112 NVIDIA BlueField-3 DPUs, up to 400Gb/s InfiniBand/Ethernet
10Gb/s Onboard NIC with RJ45
100Gb/s Dual-Port Ethernet NIC
Management: Host Baseboard Management Controller (BMC) with RJ45
Support: Includes 3 Years NVIDIA Business Standard Support
The NVIDIA DGX B200 AI Infrastructure Server redefines high-performance AI and HPC. Featuring 8 NVIDIA B200 Tensor Core GPUs with 180GB of HBM3e memory each, dual 56-core Intel Xeon Platinum 8570 CPUs, 2TB of system memory (expandable to 4TB), and high-throughput 400Gb/s networking, it is purpose-built for training large language models, deep learning inference at scale, and next-generation simulation workloads. It is designed for seamless AI cluster integration and maximum scalability.