Based on the NVIDIA Hopper architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s): nearly double the capacity of the NVIDIA H100 Tensor Core GPU, with 1.4X more memory bandwidth. The H200's larger and faster memory accelerates generative AI and large language models, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.
H200 SXM
GPU memory: 141 GB
GPU memory bandwidth: 4.8 TB/s
Max thermal design power (TDP): Up to 700W (configurable)
Multi-instance GPUs: Up to 7 MIGs @ 16.5 GB each
Form factor: SXM
Interconnect: NVLink 900 GB/s; PCIe Gen5 128 GB/s
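The "nearly double the capacity" and "1.4X more memory bandwidth" claims above can be sanity-checked against the H100 SXM's published figures. A minimal sketch, assuming the H100 SXM's 80 GB of HBM3 at 3.35 TB/s from NVIDIA's public H100 datasheet (those two numbers are not stated in this document):

```python
# H200 figures from the spec table above.
H200_MEM_GB, H200_BW_TBS = 141, 4.8

# Assumed H100 SXM figures (80 GB HBM3 at 3.35 TB/s, per NVIDIA's
# public H100 datasheet; not taken from this document).
H100_MEM_GB, H100_BW_TBS = 80, 3.35

capacity_ratio = H200_MEM_GB / H100_MEM_GB    # ~1.76x, i.e. "nearly double"
bandwidth_ratio = H200_BW_TBS / H100_BW_TBS   # ~1.43x, the "1.4X" figure

print(f"capacity:  {capacity_ratio:.2f}x")
print(f"bandwidth: {bandwidth_ratio:.2f}x")
```

Both ratios line up with the marketing copy: 141 GB is 1.76x the H100's capacity, and 4.8 TB/s is about 1.43x its bandwidth.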
The H200 doubles inference performance compared to H100 GPUs when handling large language models such as Llama 2 70B.
NVIDIA H200 SXM Server
  • CPU
    Intel Eagle Stream-SP platform, 96 cores
  • Memory
    3 TB RDIMM DDR5-4800 MT/s
  • Network card
    NVIDIA MCX75310AAS-NEAT 400G x8
    Onboard 2x 10Gb RJ45 Intel X710 x1
  • SSD
    480 GB OS SSD x1 / 7.68 TB U.2 NVMe SSD x8
  • GPU
    H200 HGX baseboard, SXM x8
  • Power supply
    Optional: 8x 3000W (4+4) redundant power supplies, Titanium level
    Optional: 6x 3000W (4+2) redundant power supplies, Titanium level
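The power-supply options above can be checked against the GPUs' worst-case draw. A rough sketch for the 6x 3000W (4+2) configuration, where the non-GPU overhead figure (CPUs, DDR5, NICs, NVMe, fans) is an assumption for illustration, not a number from this datasheet:

```python
# Worst-case GPU draw: eight H200 SXM modules at the configurable
# 700W TDP ceiling listed in the spec table.
GPU_TDP_W = 700
NUM_GPUS = 8

# Assumed overhead for CPUs, memory, NICs, SSDs, and fans.
# This is a placeholder estimate, not a datasheet figure.
OTHER_DRAW_W = 2000

total_draw_w = GPU_TDP_W * NUM_GPUS + OTHER_DRAW_W  # 7600 W worst case

# "4+2" redundancy: four supplies carry the load, two are spares,
# so usable capacity is four PSUs even with two failed.
PSU_W = 3000
ACTIVE_PSUS = 4
usable_capacity_w = PSU_W * ACTIVE_PSUS  # 12000 W

print(f"worst-case draw: {total_draw_w} W")
print(f"usable capacity: {usable_capacity_w} W")
assert total_draw_w < usable_capacity_w
```

Under these assumptions the 4+2 configuration leaves comfortable headroom: four active 3000W supplies cover the estimated 7600W worst case even with two units failed.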