Compiling on LC Machines
libROM provides several CMake toolchains which can be used with the compile.sh script to compile libROM on the LLNL LC machines. A specific toolchain can be selected with the -t option to compile.sh. The toolchains are located in the cmake/toolchains/ directory.
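For example, a specific toolchain can be passed to compile.sh like so (using the Intel toolchain listed in the Quartz section below, with the same -g -m flags used in the other examples on this page):
./scripts/compile.sh -g -m -t cmake/toolchains/ic21-toss_4_x86_64_ib-librom-dev.cmake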
For TOSS4 machines, refer to the Quartz section below. For IBM Power9 machines, refer to the Lassen section below.
Note: Because directories are shared across the LC machines, either clone libROM separately for each machine or, if using a single libROM checkout, remove all folders present in dependencies/ before compiling on a different machine.
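One way to clear the previously built dependencies is to run the following from the top of the libROM checkout (this removes everything under dependencies/, so adjust as needed):
rm -rf dependencies/*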
Quartz
The compile.sh script is set up for Intel TOSS 4 systems by default using the default-toss_4_x86_64_ib-librom-dev.cmake toolchain. This uses the LC system default compilers with MVAPICH and Intel MKL. In most cases, this should be sufficient for compiling on Quartz-like machines, e.g., ./scripts/compile.sh -g -m.
For specific compiler versions, the following toolchains are available:
- Intel 21.6.0 compiler: ic21-toss_4_x86_64_ib-librom-dev.cmake
- GNU 12.1.1 compiler: gnu12-toss_4_x86_64_ib-librom-dev.cmake
An example of compiling on Dane using GNU 12.1.1:
module load gcc/12.1.1-magic
module load mkl/2022.1.0
./scripts/compile.sh -g -m
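The GNU toolchain can also be passed explicitly with -t, following the same pattern as the Lassen example below:
./scripts/compile.sh -g -m -t cmake/toolchains/gnu12-toss_4_x86_64_ib-librom-dev.cmake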
Lassen
Due to some of the system module defaults and how the libROM dependencies are compiled, the following modules need to be loaded (and compiler flags exported) before using compile.sh with the Lassen toolchain:
module load cmake/3.23.1
module load gcc/12.2.1
module load lapack/3.11.0-gcc-11.2.1
export CFLAGS="-mcpu=powerpc64le -mtune=powerpc64le"
export CXXFLAGS="-mcpu=powerpc64le -mtune=powerpc64le"
Note: if libROM has not been cloned from scratch, be sure to remove all folders in dependencies/ to ensure they are recompiled for the correct architecture.
The gnu12-rhel_7_ppc64le_ib_p9-dev.cmake toolchain can then be used to compile libROM (with MFEM and GSLIB):
./scripts/compile.sh -g -m -t cmake/toolchains/gnu12-rhel_7_ppc64le_ib_p9-dev.cmake
Running libROM
Depending on which LC system is being used, running MPI applications might require different syntax to interact with the scheduler on the system. LC systems use several different schedulers: SLURM, IBM LSF, or Flux. For more detailed information, see running jobs on LC systems. Refer to the sections below for examples of running libROM.
For Intel TOSS 4 machines (Quartz, Dane, Ruby, Pascal, etc.), the SLURM scheduler is currently used. Below is an example of using srun to run the libROM dg_advection_global_rom example on Quartz with 4 MPI processes:
srun -N 1 -n 4 --cpu-bind=cores ./dg_advection_global_rom -offline
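The same run can also be submitted through a batch script. Below is a minimal sketch; the time limit is a placeholder, and any bank or partition options for your LC allocation are omitted:
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 4
#SBATCH -t 00:30:00

# Run the offline phase of dg_advection_global_rom on 4 MPI processes
srun -N 1 -n 4 --cpu-bind=cores ./dg_advection_global_rom -offline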
Lassen uses the IBM LSF scheduler. jsrun or lrun are used to run MPI jobs. lrun is an LC-provided wrapper script around jsrun. For backwards compatibility, srun is also available as a wrapper to lrun.
When running libROM on Lassen, it is highly recommended to add the -M "-mca coll_hcoll_enable 1 -mca coll_hcoll_np 0 -mca coll ^basic -mca coll ^ibm -HCOLL -FCA" option to either lrun or jsrun.
Below is an example of using jsrun to run the libROM dg_advection_global_rom example on Lassen with 4 tasks:
jsrun -n 1 -a 4 -c 4 -g 0 -M "-mca coll_hcoll_enable 1 -mca coll_hcoll_np 0 -mca coll ^basic -mca coll ^ibm -HCOLL -FCA" ./dg_advection_global_rom -offline
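For reference, a roughly equivalent invocation using the lrun wrapper might look like the sketch below; it assumes lrun's -N (nodes) and -T (tasks per node) options, so check the LC documentation for the exact flags available on your system:
lrun -N 1 -T 4 -M "-mca coll_hcoll_enable 1 -mca coll_hcoll_np 0 -mca coll ^basic -mca coll ^ibm -HCOLL -FCA" ./dg_advection_global_rom -offline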