NVIDIA L40 – 48GB GDDR6 | 18,176 CUDA Cores | AI, Rendering, and Enterprise Visualization Accelerator

10.150,00 

NVIDIA L40 Tensor Core GPU


Specifications

CUDA Cores: 18,176
Tensor Cores: 568
NVIDIA RT Cores: 142
PCIe Interface: PCI Express 4.0 ×16
VRAM: 48 GB GDDR6 with ECC
Memory Bandwidth: 864 GB/s
TDP: 300W
Warranty: 3 Years


The NVIDIA L40 Tensor Core GPU is a powerful solution for professional graphics, AI inference, and high-performance compute workloads. With 18,176 CUDA cores, 48GB of ECC GDDR6 memory, and an impressive 864GB/s of memory bandwidth, the L40 is optimized for large-scale rendering, real-time visualization, AI acceleration, and hybrid cloud environments — offering outstanding performance and energy efficiency for enterprise deployments.

NVIDIA H100 NVL 94GB – PCIe Gen5 | 67 TFLOPS FP32 | 94GB HBM3 | Ultimate AI Inference and HPC Accelerator

33.999,00 

NVIDIA H100 NVL 94GB PCIe Gen5 GPU


Specifications

FP32 Performance: 67 TFLOPS
FP64 Performance: 34 TFLOPS
PCIe Interface: PCI Express Gen5
VRAM: 94 GB HBM3 with ECC
Memory Bandwidth: 3.9 TB/s
TDP: 350–400W (Configurable)
Warranty: 3 Years


The NVIDIA H100 NVL 94GB PCIe Gen5 GPU delivers groundbreaking AI inference and HPC performance, powered by HBM3 memory and next-generation Tensor Core architecture. With 94GB of ECC-protected memory and an ultra-high bandwidth of 3.9TB/s, the H100 NVL is ideal for large language model (LLM) deployment, hyperscale inference, data analytics, and scientific computing, offering industry-leading throughput and efficiency for modern data center workloads.
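As a rough rule of thumb for the LLM deployment use case, a model's weight footprint is approximately parameter count × bytes per parameter, which indicates what fits in the card's 94 GB. The sketch below is illustrative only (the model sizes and precisions are hypothetical examples, not vendor sizing guidance), and real deployments also need headroom for the KV cache, activations, and runtime overhead:

```python
# Rough estimate of LLM weight memory vs. a 94 GB card.
# Illustrative sketch only: ignores KV cache, activations,
# and framework overhead.

def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * bytes_per_param

CARD_GB = 94  # H100 NVL memory capacity

# Hypothetical model sizes at common inference precisions:
for params, dtype, nbytes in [(70, "FP16", 2), (70, "FP8", 1), (180, "FP8", 1)]:
    gb = weight_gb(params, nbytes)
    fits = "fits" if gb < CARD_GB else "does not fit"
    print(f"{params}B @ {dtype}: ~{gb:.0f} GB -> {fits} in {CARD_GB} GB")
```

By this estimate, a 70B-parameter model at FP16 (~140 GB) exceeds a single card, while the same model quantized to FP8 (~70 GB) fits with room for cache and overhead.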

NVIDIA DGX B200 Server – 8× B200 GPUs | 2TB DDR5 | 400Gb/s Networking | AI Supercomputing for the Enterprise

NVIDIA DGX B200-180GB AI Infrastructure Server – 8× B200 GPUs | Dual Xeon Platinum 8570 | 2TB RAM | Ultra-High-Speed Networking


Specifications

CPU: 2 × Intel Xeon Platinum 8570 (56 Cores each, 2.1GHz)
RAM: 2TB DDR5 (Optional Upgrade: up to 4TB)
Storage: 2 × 1.9TB NVMe M.2 + 8 × 3.84TB NVMe U.2 SSDs
GPU: 8 × NVIDIA B200 180GB Tensor Core GPUs
Networking: 4 × OSFP (8 × Single-Port 400Gb/s InfiniBand/Ethernet)
2 × Dual-Port QSFP112 NVIDIA BlueField-3 DPU
Up to 400Gb/s InfiniBand/Ethernet Throughput
10Gb/s Onboard NIC with RJ45
100Gb/s Dual-Port Ethernet NIC
Management: Host Baseboard Management Controller (BMC) with RJ45
Support: Includes 3 Years NVIDIA Business Standard Support


The NVIDIA DGX B200 AI Infrastructure Server redefines high-performance AI and HPC systems. Featuring 8 NVIDIA B200 Tensor Core GPUs with 180GB HBM3e memory each, dual 56-core Intel Xeon Platinum CPUs, 2TB of system memory (expandable up to 4TB), and high-throughput 400Gb/s networking, it is purpose-built for training large language models, deep learning inference at scale, and next-generation simulation workloads. It is designed for seamless AI cluster integration and maximum scalability.

NVIDIA DGX H200 – 8× 141GB GPUs | 2TB DDR5 RAM | 400Gb/s Networking | Hyperscale AI Supercomputing

NVIDIA DGX H200 AI Supercomputing Server – 8× H200 GPUs | Dual Xeon Platinum 8480C | 2TB RAM | Ultra-High-Speed Networking


Specifications

CPU: 2 × Intel Xeon Platinum 8480C (56 Cores each, 2.0GHz)
RAM: 2TB DDR5 System Memory
Storage: 2 × 1.9TB NVMe M.2 SSDs + 8 × 3.84TB NVMe U.2 SSDs
GPU: 8 × NVIDIA H200 141GB Tensor Core GPUs (Total 1128GB HBM3e Memory)
Networking: 4 × OSFP (8 × Single-Port 400Gb/s InfiniBand/Ethernet)
2 × Dual-Port NVIDIA ConnectX-7 VPI
1 × 400Gb/s InfiniBand/Ethernet Port
1 × 200Gb/s InfiniBand/Ethernet Port
Support: Includes 3 Years NVIDIA Business Standard Support


The NVIDIA DGX H200 represents the new frontier of AI supercomputing. Featuring 8 NVIDIA H200 Tensor Core GPUs with a combined 1128GB of high-bandwidth HBM3e memory, dual 56-core Intel Xeon Platinum CPUs, 2TB system memory, and massive NVMe storage, the DGX H200 is built for training the largest language models, generative AI, AI simulation, and scientific discovery workloads. With advanced 400Gb/s networking and the NVIDIA AI Enterprise software stack, it provides seamless scalability for hyperscale AI environments.
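The combined memory figure follows directly from the per-GPU capacity (8 × 141GB = 1128GB), and for large models served tensor-parallel across all eight GPUs, the weights divide roughly evenly per card. The snippet below is a hypothetical sizing sketch (the 405B FP8 example is an assumption for illustration, not an NVIDIA tool):

```python
# Sanity-check the aggregate memory figure and estimate the
# per-GPU weight share under 8-way tensor parallelism.
# Hypothetical sizing sketch, not vendor software.

GPUS = 8
GB_PER_GPU = 141  # H200 memory capacity per GPU

total = GPUS * GB_PER_GPU
print(total)  # 1128, matching the quoted combined capacity

def per_gpu_weight_gb(params_billions: float, bytes_per_param: float,
                      tp_degree: int = GPUS) -> float:
    """Weight GB per GPU when weights are sharded tp_degree ways."""
    return params_billions * bytes_per_param / tp_degree

# Example: a 405B-parameter model at FP8 (1 byte/param), 8-way sharded:
print(round(per_gpu_weight_gb(405, 1), 1))  # ~50.6 GB per GPU
```

Even a 405B-parameter model at FP8 would occupy only about a third of each GPU's memory for weights, leaving the remainder for KV cache and long-context batches.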

SUPERMICRO 8U HGX H200 Server – 8× 141GB GPUs | Dual Xeon 8468 | 2TB DDR5 | 400GbE Networking | AI Supercomputing Redefined

324.759,00 

SUPERMICRO 8U GPU Server – NVIDIA HGX H200 8-GPU SXM5 | Dual Xeon Platinum 8468 | 2TB DDR5 | 400GbE + IPMI


Specifications

CPU: 2 × Intel Xeon Platinum 8468 (48 Cores each, 2.10GHz)
RAM: 2TB DDR5-4800MHz ECC RDIMM (32 × 64GB)
Storage: 1.92TB Gen4 NVMe SSD
GPU: NVIDIA HGX H200 Platform with 8 × 141GB SXM5 GPUs
Network: 2 × 400GbE / InfiniBand OSFP Ports, 1 × Dedicated IPMI Management Port
Chassis: 8U Rackmount Ultra-High-Density GPU Supercomputing Server
Support: Includes 3 Years Parts Warranty


The SUPERMICRO 8U GPU server with NVIDIA HGX H200 8-GPU SXM5 architecture is purpose-built for AI supercomputing, training large language models, deep learning research, and scientific simulations at scale. Powered by dual 48-core Intel Xeon Platinum processors, 2TB of high-speed DDR5 memory, and next-gen 400GbE/InfiniBand networking, it provides the highest levels of performance, memory bandwidth, and scalability for future-ready AI and HPC workloads.

SUPERMICRO 8U HGX H100 Server – 8× 80GB GPUs | Dual EPYC 9654 | 1.5TB DDR5 | Gen4 NVMe | Built for AI at Scale

311.640,00 

SUPERMICRO 8U GPU Server – NVIDIA HGX H100 8-GPU SXM5 | Dual AMD EPYC 9654 | 1.5TB DDR5 | Gen4 NVMe | 10GbE + IPMI


Specifications

CPU: 2 × AMD EPYC 9654 (Genoa) (96 Cores each, 2.40GHz)
RAM: 1.5TB DDR5-4800MHz ECC RDIMM (24 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: NVIDIA HGX H100 Platform with 8 × 80GB SXM5 Tensor Core GPUs
Network: 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 8U Rackmount Ultra-Dense GPU-Optimized Server
Support: Includes 3 Years Parts Warranty


The SUPERMICRO 8U GPU server with NVIDIA HGX H100 8-GPU SXM5 platform delivers breakthrough performance for AI model training, LLM scaling, HPC simulation, and deep learning research. Featuring dual 96-core AMD EPYC Genoa processors, 1.5TB of high-speed DDR5 memory, and lightning-fast Gen4 NVMe storage, this server provides massive compute power, memory bandwidth, and reliability for the most demanding enterprise and research environments.

SUPERMICRO 4U HGX H100 Server – 4× SXM5 GPUs | Dual Xeon 8558 | 1TB DDR5 | 7.68TB NVMe | AI Training and HPC Optimized

165.863,00 

SUPERMICRO 4U GPU Server – NVIDIA HGX H100 4-GPU SXM5 | Dual Xeon Platinum 8558 | 1TB DDR5 | Gen4 NVMe | 25GbE + IPMI


Specifications

CPU: 2 × Intel Xeon Platinum 8558 (48 Cores each, 2.10GHz)
RAM: 1TB DDR5-4800MHz ECC RDIMM (16 × 64GB)
Storage: 2 × 3.84TB Gen4 NVMe SSDs (Total 7.68TB High-Speed Storage)
GPU: NVIDIA HGX H100 Platform with 4 × SXM5 GPUs
Network: 2 × 25GbE SFP28, 2 × 10GbE RJ-45, 1 × Dedicated IPMI Management Port
Chassis: 4U Rackmount HGX-Optimized High-Density GPU Server
Support: Includes 3 Years Parts Warranty


The SUPERMICRO 4U GPU server with NVIDIA HGX H100 delivers cutting-edge AI acceleration and HPC performance in a compact 4-GPU configuration. Powered by dual 48-core Intel Xeon Platinum processors, 1TB of high-speed DDR5 ECC memory, 7.68TB of Gen4 NVMe storage, and dual 25GbE networking, this server is optimized for AI training, large model inference, data analytics, and next-gen simulation workloads, offering strong scalability and throughput for enterprise environments.