With H100 SXM you get:
  • More flexibility for users seeking additional compute power to build and fine-tune generative AI models
  • Enhanced scalability
  • High-bandwidth GPU-to-GPU communication
  • Optimal performance density
| Specification | H100 SXM | H100 PCIe |
| --- | --- | --- |
| GPU memory | 80 GB | 80 GB |
| GPU memory bandwidth | 3.35 TB/s | 2 TB/s |
| Max thermal design power (TDP) | Up to 700 W (configurable) | 300-350 W (configurable) |
| Multi-instance GPUs | Up to 7 MIGs @ 10 GB each | Up to 7 MIGs @ 10 GB each |
| Form factor | SXM | PCIe, dual-slot air-cooled |
| Interconnect | NVLink: 900 GB/s; PCIe Gen5: 128 GB/s | NVLink: 600 GB/s; PCIe Gen5: 128 GB/s |
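To put the headline numbers in the table above side by side, the ratios can be computed directly (a minimal sketch; all figures come from the table, normalized to GB/s):

```python
# Headline figures from the comparison table, in GB/s.
SXM = {"mem_bw": 3350, "nvlink": 900, "pcie_gen5": 128}
PCIE = {"mem_bw": 2000, "nvlink": 600, "pcie_gen5": 128}

def ratio(a: float, b: float) -> float:
    """How many times larger a is than b."""
    return a / b

# SXM offers ~1.7x the memory bandwidth and 1.5x the NVLink bandwidth
# of the PCIe variant; NVLink on SXM is ~7x faster than its PCIe Gen5 link.
print(f"Memory bandwidth, SXM vs PCIe: {ratio(SXM['mem_bw'], PCIE['mem_bw']):.2f}x")
print(f"NVLink, SXM vs PCIe:           {ratio(SXM['nvlink'], PCIE['nvlink']):.2f}x")
print(f"NVLink vs PCIe Gen5 (SXM):     {ratio(SXM['nvlink'], SXM['pcie_gen5']):.2f}x")
```

These ratios are why the SXM variant is the stronger fit for bandwidth-bound multi-GPU training, while the PCIe variant trades peak interconnect speed for a standard dual-slot form factor.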

NVIDIA H100 SXM Server
  • CPU: Intel Eagle Stream-SP platform, 96 cores
  • Memory: 2 TB RDIMM DDR5-4800 MT/s
  • Network card: PCIe Gen5 single-port NDR400 NVIDIA ConnectX-7 (MCX75310AAS-NEAT) x8; PCIe Gen5 single-port NDR NVIDIA ConnectX-7 (MCX755106AS-HEAT) x1
  • SSD: 480 GB OS SSD x1; 7.68 TB U.2 NVMe SSD x4
  • GPU: NVIDIA HGX H100 8-GPU
  • Power supply: 6x 3000 W (4+2) redundant power supplies, Titanium level; optional 8x 3000 W (4+4) redundant power supplies, Titanium level
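A rough sanity check shows why the 4+2 power configuration comfortably covers an 8-GPU SXM system (a back-of-envelope sketch; the 700 W GPU TDP and 3000 W supply rating come from the specs above, while the CPU and miscellaneous draws are illustrative assumptions, not vendor figures):

```python
# Rough worst-case system draw, in watts.
gpu_tdp = 700      # per H100 SXM, configurable up to 700 W (from the spec table)
num_gpus = 8
cpu_tdp = 350      # assumed per-socket draw for a dual-socket Eagle Stream platform
num_cpus = 2
other = 1000       # assumed allowance for memory, NICs, SSDs, and fans

total_draw = gpu_tdp * num_gpus + cpu_tdp * num_cpus + other  # 7300 W

# In a 4+2 redundant layout, six supplies are installed but the load
# must fit on four, so two can fail without dropping the system.
psu_watts = 3000
usable_capacity = 4 * psu_watts  # 12000 W even with two supplies failed

print(f"Estimated draw: {total_draw} W; usable capacity: {usable_capacity} W")
assert total_draw <= usable_capacity
```

Under these assumptions the system draws about 7.3 kW against 12 kW of fault-tolerant capacity, leaving headroom for transient power spikes.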
NVIDIA H100 PCIe Server
  • CPU: Intel Eagle Stream-SP platform, 88 cores
  • Memory: 2 TB RDIMM DDR5-4800 MT/s
  • Network card: optional
  • SSD: 480 GB OS SSD x1; 7.68 TB U.2 NVMe SSD x2
  • GPU: NVIDIA H100 PCIe x8
  • Power supply: 4x 2700 W (2+2) redundant power supplies, Titanium level