Using Spack on Maxwell Cluster: A Tutorial

Introduction

Spack is a flexible, user-space package manager for scientific software that supports multiple versions, configurations, platforms, and compilers.

  • Packages are "parameterized" so that you can easily tweak and tune configurations. See the following examples for different ways of installing HDF5, a high-performance data management and storage suite:
     # Install the latest version 
       spack install hdf5

     # Install a particular version by appending @
       spack install hdf5@1.16

     # Add special boolean compile-time options with +
       spack install hdf5@1.14 +hl

     # Add compiler flags using the conventional names
       spack install hdf5@1.14 cflags="-O3 -floop-block"

     # Target a specific micro-architecture
       spack install hdf5@1.14 target=icelake 

     # Combine the above example options
       spack install hdf5@1.14 +hl cflags="-O3 -floop-block" target=icelake 

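Before actually installing anything, you can also preview how Spack will resolve ("concretize") a spec and which versions and variants a package offers. A quick sketch, usable once Spack is activated as described below:

     # List available versions and variants of hdf5 and what they mean
       spack info hdf5

     # Show the fully resolved spec, including all dependencies, without installing
       spack spec hdf5@1.14 +hl
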
By the end of this tutorial, you'll be able to install and use scientific software of your choice on the Maxwell cluster via Spack.

Why Spack?

There are many ways to install and use software on an HPC cluster. How the software is built for the specific microarchitecture at hand (CPUs and GPUs) affects its execution speed. CPUs have evolved; for example, three microarchitecture levels (feature levels) are defined on top of the x86-64 baseline (generic, x86-64-v1): x86-64-v2, x86-64-v3, and x86-64-v4. The more of these later features your software can use, the faster it will run (a quick way to check the CPU target Spack detects is shown after the list below). Common ways of installing software are:

  • Python packages (pip, conda/mamba, pixi...). You can install Python packages with these tools in a directory you control. By default, they target the baseline x86-64 architecture. For production, do this inside a container (see the next item), as these Python tools create many small files, which is not well suited for HPC filesystems.

  • Containers (Apptainer, Singularity...) - self-contained software environments that can be moved from one machine (say, your Ubuntu laptop) to another (say, Red Hat, as in HPCGateway) without issue. Since users do not have admin/root/sudo access to the HPC cluster, they cannot install software there with dnf/yum, apt, zypper... They can, however, do so inside a container and bring it to HPCGateway to use unmodified. See the section on containers on Maxwell.

  • From source code (make, make install...) - this is the battle-tested, common way of installing software at HPC centers. The installation can be tuned for the specific hardware. However, changing a specific feature of a package takes a lot of time and a highly skilled HPC admin. Because they host a variety of CPUs, most HPC centers, including Maxwell, therefore build software for a generic CPU architecture for compatibility reasons. Software installed this way is accessible via the module system.

  • Spack simplifies installation from source code (and also provides binary packages), by default tuning packages to the specific CPU architecture at hand and hence giving your application faster execution times. It is easy enough that users can install and tune packages to their liking themselves; see the HDF5 example above. Spack-installed software can also be accessed via modules (Spack integrates seamlessly with module systems).
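
Once Spack is activated (see the next section), a quick way to check which CPU target it detects on the node you are on - the output shown in the comment is only an illustration:

  spack arch            # e.g. linux-rhel9-icelake (platform-os-target)
  spack arch --target   # print only the detected CPU microarchitecture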

Access to Spack

Log in to a display/login node using your method of choice. For example, a user named john can do:

  ssh  john@max-com.desy.de        # Commercial, HPCGateway 
  ssh  john@max-display.desy.de    # Others

and source the activate_spack.sh script using:

  source  /software/spack/activate_spack.sh

Sourcing the activate_spack.sh script does two things:

  1. Makes the spack command available.

  2. Creates a .spack_* directory in your $HOME (if it doesn't exist) and sets it up with configuration files. You can change the directory where software is installed via the config file $HOME/.spack_*/config.yaml (the default is $HOME/spack_store_*). Note that the storage quota in $HOME is 30 GB.
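
For example, to inspect or change this configuration (including the installation directory), the following commands can be used - a quick sketch, the exact contents of your config.yaml may differ:

  spack config get config      # print the merged "config" section Spack is using (look for install_tree)
  spack config edit config     # open your user-scope config.yaml in $EDITOR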

To avoid sourcing the script every time you log in (and in the following examples), you can run the following once; the spack command will then always be available:

 cat /software/spack/activate_spack.sh >> ~/.bash_profile
 source ~/.bash_profile

There are two major ways to access Spack-installed software (compilers, MPI, and other packages):

  1. spack load / unload (individual packages).

  2. spack env activate / deactivate (for bundled packages).

We'll see example usage below.

Compilers

All recent versions of the freely available compilers are installed and available for use out of the box (GNU gcc, LLVM, Intel, AMD aocc, NVIDIA's nvhpc...). To see the list of installed compilers and their versions, use

    spack compilers   # List installed compilers

The prefix [e] indicates a compiler external to Spack (available on the host Linux system), [+] indicates packages installed via Spack by the user, and [^] indicates packages installed by an upstream Spack instance (here, by the Maxwell admins).
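
If a compiler is Spack-installed (prefixed with [+] or [^] above), you can load it like any other package. A minimal sketch - the version here is only an example, pick one that spack compilers actually lists:

    spack load gcc@14.2      # hypothetical version; use one shown by "spack compilers"
    gcc --version            # confirm which gcc is now first in your PATH
    spack unload gcc         # unload it again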

Spack load

  • Load a specific piece of software and use it.

  • If you do not load a package or activate a Spack environment, the system-provided software of the same name will be used by default (if available).

Example (Hello world in C/C++/Fortran)

  spack load aocc@5.0.0           # load AMD optimized aocc compiler for C/C++/Fortran
  clang /software/spack/examples/hello.c      # Compile the code
  ./a.out                                     # run the code  
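
To check which Spack packages are currently loaded in your shell:

  spack find --loaded          # list only the packages loaded into the current shell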

Exercise: Compile and run hello.cpp (C++), hello.f90 (Fortran) codes using AMD aocc compiler.

Spack unload

  spack unload aocc        # unload aocc (see "spack find --loaded")
  spack unload             # unload ALL loaded software

Message Passing Interface (MPI)

For distributed computing on multiple nodes, MPI (Message Passing Interface) is a standardized interface that enables communication between processes running on the same node and/or on separate nodes, allowing them to work together efficiently in parallel. Recent versions of the freely available MPI implementations are installed and can be used directly or indirectly by your application on HPC Gateway. Among them are:

  • Open MPI
  • MVAPICH
  • MPICH
  • Intel MPI

On the login node ...

Send-and-receive example MPI code

  • With Intel MPI
  spack load intel-oneapi-mpi@2021.16.0               # Load Intel MPI
  mpif90 /software/spack/examples/mpi_sendreceive.f90 # Compile example code
  mpirun ./a.out                                      # run on one core 
  mpirun -n 4 ./a.out                                 # run on 4 cores, one node 

Unload Intel MPI with spack unload intel-oneapi-mpi@2021.16.0

  • With Open MPI
  spack load openmpi@5.0.8                             # Load Open MPI
  mpifort /software/spack/examples/mpi_sendreceive.f90 # compile example code
  mpirun ./a.out                                       # run on one core     
  mpirun -n 8 ./a.out                                  # run on 8 cores, one node   

If there are multiple builds of the same package (for example, two installations of openmpi@5.0.8, one built with CUDA support and one without), append the /hash to the spack load command as in the following:

  spack load openmpi@5.0.8        # asks you to choose one (e.g. u4nyvig openmpi@5.0.8)
  spack load openmpi@5.0.8 /u4    # choose a specific build via its /hash
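
To see the hashes and variants of all installed openmpi builds before picking one - a quick sketch:

  spack find -lv openmpi          # -l shows the short hash, -v shows the variants (+cuda, ~cuda, ...)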

Exercise: Load mpich, compile and run the same example code with it.

From the login node ...

In the previous example, we compiled and ran a code on the login node. In general, this is the approach to follow while you are developing a code. Once you are sure the code works, you need more resources (compute nodes, to be exact) - that is why you need HPC in the first place. Since HPC resources are shared, you cannot simply use all compute resources as you wish. So how can you use more resources? Ask an arbitrator - Slurm in our case - for resources beyond the default you get on the login node. Slurm checks what resources are available and what is requested, and grants you compute resources accordingly (now or at a later time).

You can request resources interactively (e.g. using salloc) or non-interactively via sbatch. Using sbatch is highly recommended as it does not waste resources.

Interactive resources

Continuing from the above example, let's request resources interactively: a total of 8 processors (-n8, --ntasks=8), on two nodes (-N2, --nodes=2), for 10 minutes (-t 10, --time=00:10:00):

  salloc -p hpcgwgpu -N2 -n8 -t 10 

You will get something like

  salloc: Granted job allocation 18375265
  salloc: Waiting for resource configuration
  salloc: Nodes max-hpcgwg[006-007] are ready for job

Now run the executable with (Open MPI is aware of the Slurm allocation)

  mpirun ./a.out  # runs on 8 cores, 2 nodes

Or equivalently run the executable with

  srun --mpi=pmix ./a.out  # runs on 8 cores, 2 nodes

Once you are granted resources, you can also SSH to the corresponding compute nodes and work from there.

  ssh max-hpcgwg006
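
Whether you continue from the login node or from the compute node, you can inspect the allocation you were granted; the user name and job ID below are just the ones from the example above:

  squeue -u john                   # list your pending and running jobs
  scontrol show job 18375265       # details of the allocation (nodes, tasks, time limit)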

Non-interactive resources

Equivalently, you can request resources and run jobs using sbatch (recommended). First, cancel any allocations you own and no longer need:

  scancel -u john

Write the Slurm job script mpi_sr.sh with the following contents:

  #!/bin/bash -l
  #SBATCH --job-name=mpi_sr_test                     # Name your job, optional
  #SBATCH --output mpi_sr_test_%J.out                # Name your output, optional
  #SBATCH --nodes=2                                  # Request two nodes
  #SBATCH --ntasks=8                                 # 8 processors
  #SBATCH --time=00:10:00                            # for 10 minutes
  #SBATCH --partition=hpcgwgpu                       # HPC Gateway partition

  source /software/spack/activate_spack.sh           # You can omit this; see above

  spack load openmpi@5.0.8                           # Load Open MPI
  mpifort /software/spack/examples/mpi_sendreceive.f90 
  mpirun ./a.out

Submit it to Slurm via

  sbatch  mpi_sr.sh

See the results with

  cat mpi_sr_test_*.out
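
While the job is pending or running, you can monitor it from the login node; <jobid> is the ID printed by sbatch:

  squeue -u john            # job state: PD = pending, R = running
  scancel <jobid>           # cancel the job if something went wrong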

Exercise: Modify the Slurm script above to request and run on 2 cores of each of the two compute nodes.

Note: Currently, your access to a compute node is exclusive - meaning that when you get access to a compute node, you exclusively own all of that node's resources, including its four H200 GPUs, whether you are using them or not. So be considerate, especially with the requested time and number of nodes. In the near future, we plan to make nodes shareable, so that you exclusively own only one H200, or more, depending on your compute needs.

Computing with GPUs

Similar to MPI, one can use one or multiple GPUs either directly, by writing CUDA code (or using related tools such as OpenMP, HIP, SYCL, OpenACC...), or indirectly, via application software (such as PyTorch).

Example: The following CUDA code (add_arrays.cu) adds two arrays on the GPU using NVIDIA's nvhpc compiler. Slurm job script add_arrays.sh:

    #!/bin/bash -l
    #SBATCH --job-name=add_arrays_test                 # Name your job, optional
    #SBATCH --output add_arrays_%J.out                 # Name your output, optional
    #SBATCH --error  add_arrays_%J.err                 # Name your error, optional
    #SBATCH --nodes=1                                  # Request 1 node
    #SBATCH --ntasks=1                                 # 1 processor
    #SBATCH --time=00:10:00                            # for 10 minutes
    #SBATCH --partition=hpcgwgpu                       # HPC Gateway partition

    source /software/spack/activate_spack.sh           # You can omit this; see above

    spack load nvhpc 
    nvcc /software/spack/examples/add_arrays.cu 
    ./a.out
  • Submit the script via sbatch add_arrays.sh
  • See the result using cat add_arrays_*.out
  • See the error (if any) using cat add_arrays_*.err
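
To confirm which GPUs your job can actually see, you can add nvidia-smi to the job script or run it on the allocated compute node:

  nvidia-smi                # list the visible GPUs and their current utilization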

Exercise: Load the installed CUDA-aware MPI, then compile and run the same code on two GPUs.

Application software

Now the backbone of HPC software is built and available; all that is left is for you to use your application software. If the software you need is not available by default, ask the Maxwell admins to install it for you, or install it yourself using spack install <pkg> (see the introduction above). Spack allows chaining of installations - only packages not available upstream (here, installed by the Maxwell admins) will be installed in your own directory.

The following commands will help you pick a package name, variants, and more:

spack list           # lists packages available to install 
spack list  intel    # lists packages that have the word "intel" in their name
spack info <pkg>     # get detailed information on a particular package
spack find           # list installed packages

For more information about Spack's options and subcommands, use

spack help          
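
Putting these commands together, a typical user installation might look like the following sketch; the package fftw is only an illustration, substitute the package you actually need:

spack list fftw           # is the package known to Spack?
spack info fftw           # available versions and variants
spack install fftw        # build and install it (reusing upstream packages where possible)
spack find fftw           # verify that it is installed
spack load fftw           # make it available in the current shell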

Since a typical application is a collection of multiple packages, it is better to use Spack environments, which bundle them under one name.

Spack Environments

Spack environments allow you to create self-contained, reproducible software collections - a concept similar to Conda environments and Python's virtual environments, but more general.
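
Besides the environments already provided on Maxwell (such as pytorch below), you can create and fill your own; a minimal sketch - the environment name and packages are illustrative:

  spack env list                 # named environments available to you
  spack env create myenv         # create a new environment called myenv
  spack env activate myenv       # work inside it
  spack add hdf5 +hl openmpi     # add specs to the environment
  spack install                  # concretize and install everything in the environment
  spack env deactivate           # leave the environment again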

Pytorch

Activate an environment (PyTorch) with spack env activate pytorch and run your PyTorch training application on one or multiple GPUs (on one or multiple nodes). Activating this environment brings pytorch, torchvision, nccl, etc. into your environment. The following example PyTorch code is taken from the HPC Gateway school held at HZDR in Dresden, Germany. Slurm job script pytorch_hzdr_1_gpu.sh:

    #!/bin/bash -l
    #SBATCH --job-name=pytorch_test                    # Name your job, optional
    #SBATCH --output pytorch_%J.out                    # Name your output, optional
    #SBATCH --error  pytorch_%J.err                    # Name your error, optional 
    #SBATCH --nodes=1                                  # Request 1 node
    #SBATCH --ntasks=1                                 # 1 processor
    #SBATCH --time=00:10:00                            # for 10 minutes
    #SBATCH --partition=hpcgwgpu                       # HPC Gateway partition

    source  /software/spack/activate_spack.sh          # You can omit this; see above

    spack env activate pytorch
    python  /software/spack/examples/pytorch_parallel/scripts/01_mnist_single_gpu.py 

Submit the job with sbatch pytorch_hzdr_1_gpu.sh

See the results as training evolves with

  tail -f  pytorch_*.out   

And on 2 or more GPUs, use the Slurm job script pytorch_hzdr_n_gpus.sh:

    #!/bin/bash -l
    #SBATCH --job-name=pytorch_n_gpus_test             # Name your job, optional
    #SBATCH --output pytorch_n_gpus%J.out              # Name your output, optional
    #SBATCH --error  pytorch_n_gpus%J.err              # Name your error, optional
    #SBATCH --nodes=1                                  # Request 1 node
    #SBATCH --ntasks=2                                 # 2 processors 
    #SBATCH --time=00:10:00                            # for 10 minutes
    #SBATCH --partition=hpcgwgpu                       # HPC Gateway partition

    source  /software/spack/activate_spack.sh          # You can omit this; see above

    spack env activate pytorch
    torchrun --nproc_per_node=2  /software/spack/examples/pytorch_parallel/scripts/02_mnist_ddp_torchrun.py
  • Submit the job with sbatch pytorch_hzdr_n_gpus.sh
  • See the result with cat pytorch_n_gpus*.out

Exercise: Increase the number of epochs or other hyperparameters and run the training on six GPUs.

Transformers

Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training. See huggingface.co.

The following Slurm job script (transformers-pytorch.sh) activates Transformers with the PyTorch backend and runs the test code transformers_test.py:

  #!/bin/bash -l
  #SBATCH --job-name=transformers_test               # Name your job, optional
  #SBATCH --output transformers_%J.out               # Name your output, optional
  #SBATCH --error  transformers_%J.err               # Name your error, optional
  #SBATCH --nodes=1                                  # Request 1 node
  #SBATCH --ntasks=1                                 # 1 processor
  #SBATCH --time=00:10:00                            # for 10 minutes
  #SBATCH --partition=hpcgwgpu                       # HPC Gateway partition

  source  /software/spack/activate_spack.sh          # You can omit this; see above

  spack env activate transformers-pytorch
  python /software/spack/examples/transformers_test.py

Submit to Slurm using sbatch transformers-pytorch.sh

Exercise: Modify transformers_test.py for text generation using model gpt2.

Conclusion

We hope you have gained an understanding of the advantages Spack has to offer, how to use software installed via Spack, and how to install software with Spack yourself. For further information on Spack, see the Spack documentation. For problems using Spack on the Maxwell cluster, contact maxwell.service@desy.de.