Browse GPU servers built for AI, deep learning, scientific computing, and rendering. Powered by NVIDIA and AMD GPUs for performance, efficiency, and scale.
SUPERMICRO 8U HGX H200 Server – 8× 141GB GPUs | Dual Xeon Platinum 8468 | 2TB DDR5 | 400GbE / InfiniBand
324.759,00 €
SUPERMICRO 8U GPU Server – NVIDIA HGX H200 8-GPU SXM5 | Dual Xeon Platinum | 2TB DDR5 | 400GbE + IPMI
Specifications
CPU: 2 × Intel Xeon Platinum 8468 (48 Cores each, 2.10GHz)
RAM: 2TB DDR5-4800MHz ECC RDIMM (32 × 64GB)
Storage: 1.92TB Gen4 NVMe SSD
GPU: NVIDIA HGX H200 with 8 × 141GB SXM5 GPUs
Network: 2 × 400GbE / InfiniBand OSFP, 1 × Dedicated IPMI Management Port
Chassis: 8U Rackmount Ultra-Dense GPU Server Platform
Support: Includes 3 Years Parts Warranty
The SUPERMICRO 8U HGX H200 server is built for extreme-scale AI workloads, offering 8 NVIDIA H200 SXM5 GPUs integrated into the HGX platform. With 2TB of DDR5 ECC memory and dual Intel Xeon Platinum CPUs, this platform delivers unparalleled compute density and memory bandwidth for LLM training, generative AI, scientific computing, and data-center-scale inference. Featuring 400GbE/InfiniBand connectivity, it ensures seamless integration into high-performance compute clusters.
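For teams sizing a node like this, a quick way to confirm that all eight GPUs and the NCCL fabric are usable is a small all-reduce test. The sketch below is illustrative only, not vendor software; it assumes a CUDA/NCCL-enabled PyTorch install and is launched with torchrun --nproc_per_node=8.

    # allreduce_check.py -- minimal sketch; assumes PyTorch with CUDA and NCCL.
    # Launch with: torchrun --nproc_per_node=8 allreduce_check.py
    import os
    import torch
    import torch.distributed as dist

    def main():
        dist.init_process_group(backend="nccl")      # NCCL uses NVLink/InfiniBand when present
        local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
        torch.cuda.set_device(local_rank)

        # Every rank contributes ones; after the all-reduce each element equals world_size (8).
        x = torch.ones(1024, device="cuda")
        dist.all_reduce(x, op=dist.ReduceOp.SUM)
        print(f"rank {dist.get_rank()}: element value = {x[0].item()}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()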
SUPERMICRO 8U HGX H100 Server – 8× 80GB GPUs | Dual AMD EPYC 9654 | 1.5TB DDR5 | Built for Generative AI at Scale
311.640,00 €
SUPERMICRO 8U GPU Server – NVIDIA HGX H100 8-GPU SXM5 | Dual AMD EPYC 9654 | 1.5TB DDR5 | 10GbE + IPMI
Specifications
CPU: 2 × AMD EPYC GENOA 9654 (96 Cores each, 2.40GHz)
RAM: 1.5TB DDR5-4800MHz ECC RDIMM (24 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: NVIDIA HGX H100 with 8 × 80GB SXM5 Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 8U Rackmount HGX H100 GPU-Accelerated Server
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 8U GPU server features the NVIDIA HGX H100 platform with 8 SXM5 GPUs and dual AMD EPYC Genoa processors, delivering unmatched performance for large-scale AI training, foundation models, and deep learning inference. With 1.5TB of DDR5 memory and high-speed Gen4 NVMe storage, this system is optimized for datacenters running high-density compute clusters and GPU-parallel workloads.
SUPERMICRO 4U H100 NVL Server – 8× 94GB GPUs | Dual EPYC 9654 | 1.5TB DDR5 | Gen4 NVMe | AI Inference at Scale
285.333,00 €
SUPERMICRO 4U GPU Server – 8× NVIDIA H100 NVL | Dual AMD EPYC 9654 | 1.5TB DDR5 | Gen4 NVMe | 10GbE + IPMI
Specifications
CPU: 2 × AMD EPYC GENOA 9654 (96 Cores each, 2.40GHz)
RAM: 1.5TB DDR5-4800MHz ECC RDIMM (24 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: 8 × NVIDIA H100 NVL 94GB Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount High-Density GPU Server Platform
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 4U GPU server is engineered for large-scale AI inference, LLM deployment, and high-throughput generative AI tasks. Featuring 8 NVIDIA H100 NVL GPUs, dual 96-core AMD EPYC Genoa processors, and 1.5TB of DDR5 memory, it offers massive compute density and bandwidth for transformer model workloads, deep learning pipelines, and enterprise-scale AI acceleration.
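As a rough sketch of how a node like this is typically used for LLM serving (not software supplied with the server), the example below shards a model across all eight GPUs with the open-source vLLM library; the model name is a placeholder, and vLLM plus a CUDA-enabled PyTorch are assumed to be installed.

    # serve_sketch.py -- illustrative only; assumes vLLM is installed and a
    # checkpoint is available (the model id below is a placeholder).
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-2-7b-hf",  # placeholder model id
        tensor_parallel_size=8,            # shard across the 8× H100 NVL GPUs
    )
    params = SamplingParams(max_tokens=128, temperature=0.7)
    outputs = llm.generate(["Summarize NVLink in one paragraph."], params)
    print(outputs[0].outputs[0].text)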
SUPERMICRO 8U MI300X Server – 8× AMD Instinct OAM GPUs | 256-Core Bergamo | 3TB DDR5 | 15TB NVMe | AI-Ready Platform
277.696,00 €
SUPERMICRO 8U GPU Server – 8× AMD Instinct MI300X OAM | Dual EPYC 9754 | 3TB DDR5 | Gen4 NVMe | 10GbE + IPMI
Specifications
CPU: 2 × AMD EPYC BERGAMO 9754 (128 Cores each, 2.25GHz)
RAM: 3TB DDR5-4800MHz ECC RDIMM (24 × 128GB)
Storage: 4 × 3.84TB Gen4 NVMe SSDs (Total 15.36TB High-Speed Storage)
GPU: 8 × AMD Instinct MI300X OAM Accelerators
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 8U Rackmount GPU-Optimized Server Platform
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 8U AI server delivers cutting-edge compute density with 8 AMD Instinct MI300X OAM GPUs and dual 128-core AMD EPYC Bergamo processors. With 3TB of DDR5 memory and high-speed NVMe storage, this system is engineered for training massive AI models, HPC simulations, memory-bound workloads, and energy-efficient GPU acceleration at scale. Ideal for datacenters driving AI innovation, research labs, and foundation model deployment.
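A quick way to verify the accelerators from software, assuming a ROCm build of PyTorch (which exposes AMD Instinct GPUs through the torch.cuda API), is sketched below; it should list eight devices with their memory capacity.

    # mi300x_check.py -- minimal sketch; assumes a ROCm build of PyTorch,
    # which exposes AMD Instinct GPUs via the torch.cuda namespace.
    import torch

    print("GPUs visible:", torch.cuda.device_count())   # expected: 8
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"  [{i}] {props.name}: {props.total_memory / 1024**3:.0f} GiB")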
SUPERMICRO 4U HGX H100 Server – 4× 80GB GPUs | Dual Xeon 8558 | 1TB DDR5 | 25GbE SFP28 | Compact AI Power
165.863,00 €
SUPERMICRO 4U GPU Server – NVIDIA HGX H100 4-GPU SXM5 | Dual Xeon Platinum 8558 | 1TB DDR5 | 25GbE + IPMI
Specifications
CPU: 2 × Intel Xeon Platinum 8558 (48 Cores each, 2.10GHz)
RAM: 1TB DDR5-4800MHz ECC RDIMM (16 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: NVIDIA HGX H100 Platform with 4 × 80GB SXM5 GPUs
Network: 2 × 25GbE SFP28, 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount HGX GPU-Accelerated Server
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 4U GPU server delivers exceptional AI and HPC performance in a more compact configuration with the NVIDIA HGX H100 4-GPU SXM5 platform. Powered by dual 48-core Intel Xeon Platinum CPUs and 1TB of DDR5 memory, it’s optimized for model training, scientific workloads, large-scale simulations, and multi-GPU applications. With 25GbE and 10GbE connectivity, it integrates easily into next-gen datacenter infrastructures.
SUPERMICRO 4U L40S Server – 8× GPUs | Dual EPYC 9654 | 1.5TB DDR5 | 245TB NVMe | AI, Rendering, and Data-Heavy Workloads
143.668,00 €
SUPERMICRO 4U GPU Server – 8× NVIDIA L40S | Dual AMD EPYC 9654 | 1.5TB DDR5 | 245TB NVMe | 10GbE + IPMI
Specifications
CPU: 2 × AMD EPYC GENOA 9654 (96 Cores each, 2.40GHz)
RAM: 1.5TB DDR5-4800MHz ECC RDIMM (24 × 64GB)
Storage: 8 × 30.72TB Gen4 NVMe SSDs (Total 245.76TB All-Flash Storage)
GPU: 8 × NVIDIA L40S Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount High-Density GPU Server Platform
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 4U GPU server is built for scalable AI inference, graphics acceleration, and multi-modal workloads. With 8 NVIDIA L40S GPUs, dual 96-core AMD EPYC Genoa processors, 1.5TB of DDR5 ECC memory, and nearly 250TB of Gen4 NVMe flash storage, it offers unmatched performance for data-intensive environments, visual computing, and hybrid AI + rendering pipelines.
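To give a sense of how the all-flash array is typically exercised, the sketch below feeds jobs straight from the NVMe volume with a multi-worker PyTorch DataLoader; the dataset path and .pt file layout are placeholders, and PyTorch is assumed to be installed.

    # nvme_loader_sketch.py -- illustrative only; path and file format are placeholders.
    from pathlib import Path
    import torch
    from torch.utils.data import Dataset, DataLoader

    class FlatFileDataset(Dataset):
        """Loads tensors saved as .pt files on the local all-flash volume."""
        def __init__(self, root):
            self.paths = sorted(Path(root).glob("*.pt"))
        def __len__(self):
            return len(self.paths)
        def __getitem__(self, idx):
            return torch.load(self.paths[idx])

    # Multiple workers keep the Gen4 NVMe array busy while the GPUs compute.
    loader = DataLoader(FlatFileDataset("/data/shards"), batch_size=32,
                        num_workers=16, pin_memory=True)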
SUPERMICRO 4U MI250 Server – 4× AMD Instinct OAM | Dual EPYC 7763 | 1TB DDR4 | 7.68TB NVMe | AI + HPC Ready
91.890,00 €
SUPERMICRO 4U GPU Server – 4× AMD Instinct MI250 OAM | Dual EPYC 7763 | 1TB DDR4 | Gen4 NVMe | 10GbE + IPMI
Specifications
CPU: 2 × AMD EPYC MILAN 7763 (64 Cores each, 2.45GHz)
RAM: 1TB DDR4-3200MHz ECC RDIMM (16 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: 4 × AMD Instinct MI250 OAM Accelerators
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount GPU-Accelerated Compute Server
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 4U GPU server combines the computational efficiency of 4 AMD Instinct MI250 OAM GPUs with the performance of dual 64-core AMD EPYC Milan CPUs. Designed for AI acceleration, HPC workloads, and scientific modeling, this system offers 1TB of ECC DDR4 memory and fast NVMe storage, delivering optimal performance for energy-conscious and compute-heavy deployments across enterprise and research environments.
SUPERMICRO 2U GPU Server – 2× H100 NVL | 72-Core NVIDIA Grace Superchip | 480GB LPDDR5X | Built for AI at Scale
87.173,00 €
SUPERMICRO 2U GPU Server – Dual NVIDIA H100 NVL | NVIDIA Grace Superchip | 480GB LPDDR5X | 7.68TB NVMe | 10GbE
Specifications
CPU: 1 × NVIDIA Grace CPU Superchip (72 Cores)
RAM: 480GB Co-Packaged LPDDR5X-4800MHz with ECC
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: 2 × NVIDIA H100 NVL Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 2U Rackmount GPU-Accelerated Server
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 2U GPU server delivers extreme AI performance powered by the NVIDIA Grace CPU Superchip and dual NVIDIA H100 NVL GPUs. Engineered for large-scale AI training, LLM inference, scientific computing, and data-intensive workloads, it integrates high-bandwidth LPDDR5X memory, lightning-fast Gen4 NVMe storage, and superior GPU density—all in a compact and efficient form factor.
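One practical note: the Grace CPU is an Arm (aarch64) part, so frameworks must be installed as aarch64 builds. A minimal check of the sort below, assuming an aarch64 PyTorch build with CUDA support, confirms the host architecture and that both GPUs are visible.

    # grace_check.py -- minimal sketch; assumes an aarch64 build of PyTorch with CUDA.
    import platform
    import torch

    print("CPU architecture:", platform.machine())    # expected: aarch64
    print("GPUs visible:", torch.cuda.device_count()) # expected: 2
    if torch.cuda.is_available():
        print("GPU 0:", torch.cuda.get_device_name(0))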
SUPERMICRO 4U H100 NVL Server – 2× 94GB GPUs | Dual Xeon 6530 | 1TB DDR5 | 7.68TB NVMe | AI Inference Optimized
83.445,00 €
SUPERMICRO 4U GPU Server – 2× NVIDIA H100 NVL 94GB | Dual Xeon Gold 6530 | 1TB DDR5 | Gen4 NVMe | 10GbE + IPMI
Specifications
CPU: 2 × Intel Xeon Gold 6530 (32 Cores each, 2.10GHz)
RAM: 1TB DDR5-4800MHz ECC RDIMM (16 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: 2 × NVIDIA H100 NVL 94GB Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount GPU-Optimized Compute Server
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 4U GPU server delivers enterprise-class AI performance with two NVIDIA H100 NVL 94GB GPUs and dual 32-core Intel Xeon Gold processors. Designed for large language model inference, generative AI deployment, and high-throughput ML workloads, it pairs 1TB of DDR5 ECC memory with fast Gen4 NVMe storage. Ideal for datacenters scaling next-gen GPU infrastructure.
SUPERMICRO 2U L40 Server – 4× GPUs | Dual Xeon 8468 | 1TB DDR5 | 7.68TB NVMe | 400GbE AI and Visualization Power
60.854,00 €
SUPERMICRO 2U GPU Server – 4× NVIDIA L40 | Dual Xeon Platinum 8468 | 1TB DDR5 | Gen4 NVMe | 400GbE + IPMI
Specifications
CPU: 2 × Intel Xeon Platinum 8468 (48 Cores each, 2.10GHz)
RAM: 1TB DDR5-5600MHz ECC RDIMM (16 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: 4 × NVIDIA L40 Professional GPUs
Network: 1 × 400GbE / InfiniBand OSFP, 1 × Dedicated IPMI Management Port
Chassis: 2U Rackmount GPU-Accelerated Server Platform
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 2U GPU server brings advanced compute and rendering performance with 4 NVIDIA L40 GPUs, dual 48-core Intel Xeon Platinum CPUs, ultra-fast DDR5 memory, and 400GbE networking. Designed for AI development, high-end rendering, virtualization, and real-time simulation environments, it combines enterprise-grade reliability with massive I/O bandwidth and GPU density for modern datacenter workloads.
SUPERMICRO 2U L40S Server – 2× GPUs | EPYC 9354P | 384GB DDR5 | 1.92TB NVMe | AI, Rendering, and Virtualization Ready
29.924,00 €
SUPERMICRO 2U GPU Server – 2× NVIDIA L40S | AMD EPYC 9354P | 384GB DDR5 | Gen4 NVMe | 10GbE + IPMI
Specifications
CPU: 1 × AMD EPYC GENOA 9354P (32 Cores, 3.25GHz)
RAM: 384GB DDR5-4800MHz ECC RDIMM (12 × 32GB)
Storage: 2 × 960GB Gen4 NVMe SSDs (Total 1.92TB High-Speed Storage)
GPU: 2 × NVIDIA L40S Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 2U Rackmount GPU-Optimized Compute Server
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 2U GPU server delivers powerful AI, visualization, and compute acceleration with 2 NVIDIA L40S GPUs paired with a 32-core AMD EPYC Genoa processor. Featuring 384GB of DDR5 ECC memory and fast Gen4 NVMe storage, this platform is optimized for enterprise AI inference, 3D rendering, virtual workstation deployments, and hybrid cloud infrastructure.
SUPERMICRO 4U APU Server – 4× AMD MI300A | Zen4 + GPU | 512GB HBM3 | 15TB NVMe | Unified Compute for AI + HPC
27.533,00 €
SUPERMICRO 4U APU Server – 4× AMD Instinct MI300A | Zen4 + GPU Unified | 512GB HBM3 | Gen4 NVMe | 10GbE + IPMI
Specifications
APU: 4 × AMD Instinct MI300A Accelerated Processing Units
Architecture: Multi-Chip Module with 24 AMD Zen4 CPU Cores and 228 GPU Compute Units per APU
RAM: 512GB HBM3 (4 × 128GB), High-Bandwidth, Non-ECC On-Package Memory
Storage: 4 × 3.84TB Gen4 NVMe SSDs (Total 15.36TB High-Speed Storage)
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount High-Density APU-Optimized Server Platform
Support: Includes 3 Years Parts Warranty
This SUPERMICRO 4U APU server leverages AMD’s revolutionary MI300A architecture, fusing Zen4 CPU cores and advanced GPU compute units within a unified memory framework. With 512GB of HBM3 high-bandwidth memory and 4 high-speed NVMe drives, this platform is designed for tightly coupled HPC, AI training, scientific simulations, and memory-bound workloads, where CPU-GPU integration delivers maximum efficiency and low-latency data processing.