Infrastructure

The main components of the Maxwell cluster are:

  • fast dedicated storage (cluster file systems)
  • a fast, low-latency network (InfiniBand)
  • high-memory compute nodes, many equipped with a substantial number of GPGPUs

The following pages give a brief overview of these components and, where possible, their current status.

Compute

  • CPU+GPU nodes: 990
  • Total number of cores with hyperthreading: 100656
  • Total number of physical cores: 50328
  • Theoretical CPU peak performance: 1485 TFlops
  • Total RAM: 600 TB
  • GPU nodes: 195
  • Total number of GPUs: 447
  • Theoretical GPU peak performance: 4118 TFlops
  • Total peak performance: 5600 TFlops

Storage

  • GPFS exfel: 60 PB
  • GPFS petra3: 30 PB
  • GPFS cfel: 1.6 PB
  • GPFS cssb: 11 PB
  • DUST: 5 PB

Network

  • root switches: 6
  • top switches: 12
  • leaf switches: 42
  • IB cables (count): ~1500
  • IB cables (total length): ~10 km

Usage

  • Users: ~4700
  • Display users: ~450 per day
  • JupyterHub users: ~570 per month
  • Batch jobs: ~550,000 per month
  • Citations: >100 in 2025
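The headline figures above are internally consistent, which can be verified with a quick arithmetic check (a minimal sketch using only the numbers listed on this page):

```python
# Figures quoted on this page.
physical_cores = 50328
ht_cores = 100656
cpu_peak_tflops = 1485
gpu_peak_tflops = 4118

# With hyperthreading, each physical core exposes two logical cores.
assert ht_cores == 2 * physical_cores

# The quoted total peak is the rounded sum of the CPU and GPU peaks.
total = cpu_peak_tflops + gpu_peak_tflops
print(total)  # 5603, quoted above as 5600 TFlops
```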

Adding resources to Maxwell

It is possible to add group-owned resources to Maxwell. Please get in touch with maxwell.service@desy.de for details. Keep in mind that we will need to impose certain constraints to keep the cluster as homogeneous as feasible.