HOWTO use AmpTools on the JLab farm GPUs

Access through SLURM

JLab currently provides NVIDIA Titan RTX or T4 cards on the sciml19 and sciml21 nodes. The nodes can be accessed through SLURM, where N is the number of requested cards (1-4):

>salloc --gres gpu:TitanRTX:N --partition gpu --nodes 1  --mem-per-cpu=4G

or

>salloc --gres gpu:T4:N --partition gpu --nodes 1  --mem-per-cpu=4G

The default memory request is 512MB per CPU, which is often too small.
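
As a concrete example (the values here are illustrative), a request for two Titan RTX cards with the larger memory limit, followed by a check of the resulting allocation with squeue, could look like this:

>salloc --gres gpu:TitanRTX:2 --partition gpu --nodes 1  --mem-per-cpu=4G
>squeue -u $USER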

An interactive shell (e.g. bash) on the node with requested allocation can be opened with srun:

>srun --pty bash

Information about the cards, CUDA version, and current usage is displayed with this command:

>nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01    Driver Version: 418.87.01    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN RTX           Off  | 00000000:3E:00.0 Off |                  N/A |
| 41%   27C    P8     2W / 280W |      0MiB / 24190MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

AmpTools Compilation with CUDA

This example was done in csh for the Titan RTX cards available on sciml1902.
The compilation does not have to be performed on a machine with GPUs. We chose the interactive node ifarm1901 here.

1) Download latest AmpTools release

git clone git@github.com:mashephe/AmpTools.git
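
A plain clone gives you the tip of the default branch. If you prefer to pin a tagged release instead (YOURTAG below is only a placeholder), you can list the available tags and check one out before setting up the environment:

cd AmpTools
git tag --list
git checkout YOURTAG
cd ..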

2) Set AMPTOOLS directory

setenv AMPTOOLS_HOME $PWD/AmpTools/
setenv AMPTOOLS $AMPTOOLS_HOME/AmpTools/

3) Load the CUDA environment module (source /etc/profile.d/modules.csh first if the module command is not found)

module add cuda
setenv CUDA_INSTALL_PATH /apps/cuda/11.4.2/
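
As a quick sanity check (assuming the module places nvcc on your PATH), confirm that the CUDA compiler is visible and that its version matches CUDA_INSTALL_PATH:

which nvcc
nvcc --version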

4) Set AMPTOOLS directory

setenv AMPTOOLS $PWD/AmpTools

5) Put root-config in your path

setenv PATH $ROOTSYS/bin:$PATH
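
You can verify that the intended ROOT installation is picked up (this assumes ROOTSYS already points at your ROOT build):

which root-config
root-config --version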

6) Set the appropriate architecture for the CUDA compiler (the compute capability of each card is listed in NVIDIA's CUDA documentation)

setenv GPU_ARCH sm_75

For older (pre-0.13) versions of AmpTools you will need to edit the Makefile and adjust the line:

CUDA_FLAGS := -m64 -arch=sm_75
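
For reference, sm_75 is compute capability 7.5, which covers both the Titan RTX and T4 cards. Other cards need a different value for GPU_ARCH (or for the -arch flag in older versions), for example:

setenv GPU_ARCH sm_70   # e.g. Tesla V100 (compute capability 7.0)
setenv GPU_ARCH sm_80   # e.g. A100 (compute capability 8.0)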

7) Build main AmpTools library with GPU support

cd $AMPTOOLS_HOME
make gpu
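
If the build succeeds, GPU-enabled AmpTools libraries should have been produced; their exact location varies between AmpTools versions, so a simple way to locate them is:

find $AMPTOOLS_HOME -name "libAmpTools*.a"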

halld_sim Compilation with GPU

The GPU-dependent part of halld_sim is libraries/AMPTOOLS_AMPS/, where the GPU kernels are located. With the environment set up as above, compile the full halld_sim; the build will recognize the AmpTools GPU flag and produce the libraries and executables needed to run on the GPU:

cd $HALLD_SIM_HOME/src/
scons -u install -j8
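
If your usual halld_sim environment scripts put the build's bin directory on your PATH, you can confirm that the GPU-enabled fit executable was installed:

which fit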

Performing Fits Interactively

With the environment set up as above, the fit executable is run the same way as on a CPU:

fit -c YOURCONFIG.cfg

where YOURCONFIG.cfg is your usual config file. Note: additional command-line parameters can be supplied as needed.
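
Putting the pieces together, an interactive GPU fit session (illustrative values; your usual GlueX/halld_sim environment setup is assumed inside the shell) might look like:

>salloc --gres gpu:TitanRTX:1 --partition gpu --nodes 1  --mem-per-cpu=4G
>srun --pty bash
>nvidia-smi
>fit -c YOURCONFIG.cfg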

Combining GPU and MPI

To utilize multiple GPUs in the same fit, both the AmpTools and halld_sim libraries need to be compiled with GPU and MPI support. To complete the steps below, you'll need to be logged into one of the sciml nodes with GPUs (as described above).

AmpTools

Build the main AmpTools library with GPU and MPI support (note the "mpigpu" target). If mpicxx is missing, you can load it with "module load mpi/openmpi3-x86_64".
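
Before building, you can confirm that the MPI compiler wrapper is available:

which mpicxx
mpicxx --version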

cd $AMPTOOLS_HOME
make mpigpu

halld_sim

With the environment set up as above, only the fitMPI executable needs to be recompiled; the build will recognize the AmpTools GPU and MPI flags and produce the executable to be run on the GPUs with MPI:

cd $HALLD_SIM_HOME/src/programs/AmplitudeAnalysis/fitMPI/
scons -u install

Performing Fits Interactively

The fitMPI executable is run with mpirun, the same way as on a CPU:

mpirun fitMPI -c YOURCONFIG.cfg

If you're running under Slurm, mpirun will recognize how many GPUs you've reserved and launch the corresponding number of parallel processes to make use of them.
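
If you prefer to set the process count explicitly, a common pattern for AmpTools MPI fits (an assumption worth checking against your AmpTools version) is one leader rank plus one worker per GPU, e.g. for a four-GPU allocation:

mpirun -np 5 fitMPI -c YOURCONFIG.cfg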

Submitting Batch Jobs