NVIDIA L4 GPU – 24GB GDDR6 | 7,680 CUDA Cores | Low-Power AI, Inference, and Visual Compute Accelerator
2.697,00 €
NVIDIA L4 Tensor Core GPU
Specifications
CUDA Cores: 7,680
Tensor Cores: 240
NVIDIA RT Cores: 60
PCIe Interface: PCI Express 4.0 x16
VRAM: 24 GB GDDR6 with ECC
Memory Bandwidth: 300 GB/s
TDP: 72 W
Warranty: 3 Years
The NVIDIA L4 Tensor Core GPU is engineered for AI inference, video processing, graphics rendering, and general-purpose computing at scale. With its high-efficiency design, 24GB of ECC GDDR6 memory, and low 72W power profile, the L4 provides an ideal balance of performance, scalability, and energy efficiency for modern datacenters, edge deployments, and enterprise AI applications.
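As a post-installation sanity check, the listed memory and core counts can be read back at runtime. Below is a minimal sketch using PyTorch's CUDA device query (assuming a CUDA-enabled PyTorch install; the 60-SM figure in the comments follows from 7,680 CUDA cores at 128 cores per Ada SM):

```python
# Minimal sanity check: read back device properties and compare them
# against the listed specs (assumes PyTorch built with CUDA support).
import torch

assert torch.cuda.is_available(), "no CUDA device visible"
props = torch.cuda.get_device_properties(0)

print(f"Device:       {props.name}")                           # e.g. "NVIDIA L4"
print(f"VRAM:         {props.total_memory / 1024**3:.1f} GiB")  # ~24 GB
print(f"SMs:          {props.multi_processor_count}")           # 60 SMs ~= 7,680 CUDA cores on Ada
print(f"Compute cap.: {props.major}.{props.minor}")             # 8.9 for Ada Lovelace
```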
NVIDIA RTX A5000 – 24GB GDDR6 | 8,192 CUDA Cores | Advanced AI, Rendering, and Simulation Performance
2.676,00 €
NVIDIA RTX A5000 Professional Graphics Card
Specifications
CUDA Cores: 8,192
Tensor Cores: 256
NVIDIA RT Cores: 64
FP32 Performance: 27.8 TFLOPS
PCIe Interface: PCI Express 4.0 x16
VRAM: 24 GB GDDR6 with ECC
Memory Bandwidth: 768 GB/s
TDP: 230 W
Warranty: 3 Years
The NVIDIA RTX A5000 combines powerful CUDA processing, AI acceleration, and hardware ray tracing in a single GPU designed for demanding creative, engineering, and AI workloads. With 24GB of ECC GDDR6 memory, 768 GB/s of memory bandwidth, second-generation RT Cores, and third-generation Tensor Cores, the RTX A5000 delivers dependable, scalable performance for professional visualization and deep learning applications.
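The 27.8 TFLOPS figure follows directly from the core count: each CUDA core can retire one fused multiply-add (two FLOPs) per cycle. A worked check of the arithmetic, assuming the A5000's published boost clock of roughly 1.695 GHz (a figure not listed above):

```python
# Worked check of the listed 27.8 TFLOPS FP32 figure.
# Assumption: boost clock of ~1.695 GHz (NVIDIA's published number,
# not part of the spec list above).
cuda_cores = 8192
flops_per_core_per_cycle = 2     # one fused multiply-add counts as 2 FLOPs
boost_clock_hz = 1.695e9

tflops = cuda_cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
print(f"Peak FP32: {tflops:.1f} TFLOPS")  # -> 27.8 TFLOPS
```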
NVIDIA RTX A4500 – 20GB GDDR6 | 7,168 CUDA Cores | Professional AI, Rendering, and Simulation Performance
2.186,00 €
NVIDIA RTX A4500 Professional Graphics Card
Specifications
CUDA Cores: 7,168
Tensor Cores: 224
NVIDIA RT Cores: 56
FP32 Performance: 23.7 TFLOPS
PCIe Interface: PCI Express 4.0 x16
VRAM: 20 GB GDDR6 with ECC
Memory Bandwidth: 640 GB/s
TDP: 200 W
Warranty: 3 Years
The NVIDIA RTX A4500 delivers outstanding graphics, AI, and compute performance for professionals working in design visualization, simulation, and deep learning. With 20GB of GDDR6 ECC memory and 640 GB/s of memory bandwidth, it is engineered for scalable, mission-critical workflows across industries like architecture, media production, and advanced engineering.
NVIDIA RTX 4000 Ada – 20GB GDDR6 | 6,144 CUDA Cores | Advanced Visualization and AI Acceleration
1.508,00 €
NVIDIA RTX 4000 Ada Generation Professional Graphics Card
Specifications
CUDA Cores: 6,144
Tensor Cores: 192
NVIDIA RT Cores: 48
PCIe Interface: PCI Express 4.0 x16
VRAM: 20 GB GDDR6 with ECC
TDP: 130 W
Warranty: 3 Years
The NVIDIA RTX 4000 Ada Generation delivers outstanding performance for professional visualization, AI development, and compute-intensive tasks. With 20GB of ECC memory, advanced ray tracing cores, and efficient power usage, it is designed for creative professionals, engineers, and researchers who need powerful, scalable, and reliable GPU solutions.
NVIDIA RTX A4000 – 16GB GDDR6 | 6,144 CUDA Cores | High-Performance Rendering and AI in a Compact Design
1.205,00 €
NVIDIA RTX A4000 Professional Graphics Card
Specifications
CUDA Cores: 6,144
Tensor Cores: 192
NVIDIA RT Cores: 48
FP32 Performance: 19.2 TFLOPS
PCIe Interface: PCI Express 4.0 x16
VRAM: 16 GB GDDR6
Memory Bandwidth: 448 GB/s
TDP: 140 W
Warranty: 3 Years
The NVIDIA RTX A4000 delivers excellent performance for professionals working in 3D rendering, visualization, and AI workloads. With 16GB of GDDR6 memory and strong ray tracing and tensor core capabilities, this GPU is designed to handle complex creative and compute-intensive tasks with efficiency and scalability in a single-slot form factor.
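For a quick sense of what the 448 GB/s figure means for AI workloads: single-stream LLM token generation is typically memory-bandwidth bound, since every generated token streams the full weight set from VRAM. A back-of-the-envelope sketch with illustrative numbers (a 7B-parameter model in FP16; an upper-bound estimate, not a benchmark):

```python
# Rough upper bound on single-stream LLM decode speed on a
# memory-bandwidth-bound GPU. Illustrative numbers, not a benchmark.
params = 7e9                         # hypothetical 7B-parameter model
bytes_per_param = 2                  # FP16/BF16 weights
bandwidth_gbs = 448                  # RTX A4000 memory bandwidth

weight_bytes = params * bytes_per_param            # ~14 GB, fits in 16 GB VRAM
tokens_per_s = bandwidth_gbs * 1e9 / weight_bytes  # one full weight pass per token
print(f"Weights: {weight_bytes / 1e9:.0f} GB -> <= {tokens_per_s:.0f} tokens/s")
```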
NVIDIA DGX B200 Server – 8× B200 GPUs | 2TB DDR5 | 400Gb/s Networking | AI Supercomputing for the Enterprise
NVIDIA DGX B200-180GB AI Infrastructure Server – 8× B200 GPUs | Dual Xeon Platinum 8570 | 2TB RAM | Ultra-High-Speed Networking
Specifications
CPU: 2 × Intel Xeon Platinum 8570 (56 Cores each, 2.1GHz)
RAM: 2TB DDR5 (Optional Upgrade: up to 4TB)
Storage: 2 × 1.9TB NVMe M.2 + 8 × 3.84TB NVMe U.2 SSDs
GPU: 8 × NVIDIA B200 180GB Tensor Core GPUs
Networking: 4 × OSFP (8 × Single-Port 400Gb/s InfiniBand/Ethernet)
2 × Dual-Port QSFP112 NVIDIA BlueField-3 DPU (Up to 400Gb/s InfiniBand/Ethernet)
10Gb/s Onboard NIC (RJ45)
100Gb/s Dual-Port Ethernet NIC
Management: Host Baseboard Management Controller (BMC) with RJ45
Support: Includes 3 Years NVIDIA Business Standard Support
The NVIDIA DGX B200 AI Infrastructure Server redefines high-performance AI and HPC systems. Featuring 8 NVIDIA B200 Tensor Core GPUs with 180GB of HBM3e memory each, dual 56-core Intel Xeon Platinum CPUs, 2TB of system memory (expandable to 4TB), and high-throughput 400Gb/s networking, it is purpose-built for training large language models, deep learning inference at scale, and next-generation simulation workloads, with a design built for seamless AI cluster integration and maximum scalability.
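To make the 8-GPU topology concrete: data-parallel training on a node like this typically runs one process per GPU, with NCCL handling the gradient all-reduce over NVLink. Below is a minimal, generic PyTorch DistributedDataParallel sketch (placeholder model and data, not NVIDIA's own software stack), launched with torchrun --nproc_per_node=8 train.py:

```python
# Minimal data-parallel training sketch for an 8-GPU node.
# Launch: torchrun --nproc_per_node=8 train.py
# The model and data here are placeholders, not a real workload.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")           # NCCL backend for GPU collectives
    rank = int(os.environ["LOCAL_RANK"])      # set by torchrun, one process per GPU
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(1024, 1024).to(rank), device_ids=[rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device=rank)  # placeholder batch
        loss = model(x).square().mean()          # placeholder loss
        opt.zero_grad()
        loss.backward()                          # gradients all-reduced across the 8 GPUs
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```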
NVIDIA DGX H200 – 8× 141GB GPUs | 2TB DDR5 RAM | 400Gb/s Networking | Hyperscale AI Supercomputing
NVIDIA DGX H200 AI Supercomputing Server – 8× H200 GPUs | Dual Xeon Platinum 8480C | 2TB RAM | Ultra-High-Speed Networking
Specifications
CPU: 2 × Intel Xeon Platinum 8480C (56 Cores each, 2.0GHz)
RAM: 2TB DDR5 System Memory
Storage: 2 × 1.9TB NVMe M.2 SSDs + 8 × 3.84TB NVMe U.2 SSDs
GPU: 8 × NVIDIA H200 141GB Tensor Core GPUs (Total 1,128GB HBM3e Memory)
Networking: 4 × OSFP (8 × Single-Port 400Gb/s InfiniBand/Ethernet)
2 × Dual-Port NVIDIA ConnectX-7 VPI (1 × 400Gb/s InfiniBand/Ethernet Port + 1 × 200Gb/s InfiniBand/Ethernet Port)
Support: Includes 3 Years NVIDIA Business Standard Support
The NVIDIA DGX H200 represents the new frontier of AI supercomputing. Featuring 8 NVIDIA H200 Tensor Core GPUs with a combined 1,128GB of high-bandwidth HBM3e memory, dual 56-core Intel Xeon Platinum CPUs, 2TB of system memory, and massive NVMe storage, the DGX H200 is built for training the largest language models, generative AI, AI simulation, and scientific discovery workloads. With advanced 400Gb/s networking and the NVIDIA AI Enterprise software stack, it provides seamless scalability for hyperscale AI environments.
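To put the aggregate 1,128GB in perspective, here is a rough capacity check, assuming 2 bytes per parameter to serve a model in FP16/BF16 and a common rule of thumb of ~16 bytes per parameter for mixed-precision training state (weights, gradients, optimizer moments; activation memory is ignored, so real requirements run higher):

```python
# Back-of-the-envelope fit check against the DGX H200's 8 x 141 GB
# = 1,128 GB of aggregate HBM3e. Rule-of-thumb byte costs; activation
# memory is ignored, so real training footprints are larger.
aggregate_hbm_gb = 8 * 141

for params_b in (70, 180, 405):      # illustrative model sizes, billions of parameters
    serve_gb = params_b * 2          # FP16/BF16 weights only
    train_gb = params_b * 16         # weights + grads + Adam state, mixed precision
    verdict = "fits on one node" if train_gb <= aggregate_hbm_gb else "needs multiple nodes"
    print(f"{params_b:>4}B params: ~{serve_gb} GB to serve, ~{train_gb} GB to train ({verdict})")
```

Under these assumptions, a 70B-parameter model trains within a single DGX H200 (barely, before activations), while substantially larger models call for multi-node clusters connected over the 400Gb/s fabric.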