Installing with the following particular versions solved the problem. Best,
-
Hi,
I installed SU2-7.5.1 on our institute's cluster, which runs a CentOS 7.x-based Linux distribution. I configured the build with the following meson options:
-Dcustom-mpi=true -Denable-mkl=true -Dmkl_root=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-19.0.5.281/intel-mkl2020.4.304-fet6h2j2qeq5alsxjiw7fzjkweqorbjf/mkl/ -Denable-pywrapper=true
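For completeness, the full configure-and-build sequence was roughly as follows (a sketch; the --prefix path is a placeholder, not the actual install location):

# Configure and build SU2 with meson/ninja (prefix path is a placeholder)
./meson.py build -Dcustom-mpi=true -Denable-mkl=true -Dmkl_root=/home-ext/apps/spack/opt/spack/linux-centos7-cascadelake/intel-19.0.5.281/intel-mkl2020.4.304-fet6h2j2qeq5alsxjiw7fzjkweqorbjf/mkl/ -Denable-pywrapper=true --prefix=/path/to/SU2-7.5.1
./ninja -C build install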
I installed SU2-7.5.1 with the following required packages:
GCC v12.2.0
py-mpi4py v3.1.4
openmpi v4.1.5
python v3.10.10
swig v4.1.1
intel-mkl v2020.4.304
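After loading these packages (see the job script below), a quick sanity check of the toolchain versions looks like this (a sketch):

gcc --version                                            # expect 12.2.0
mpirun --version                                         # expect Open MPI 4.1.5
python3 --version                                        # expect 3.10.10
python3 -c "import mpi4py; print(mpi4py.__version__)"    # expect 3.1.4
swig -version                                            # expect 4.1.1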
Now, I am checking the installation by running the following FSI example on the cluster:
https://github.com/su2code/Tutorials/tree/master/multiphysics/unsteady_fsi_python/Ma01
I am getting the following error while running the example:
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
mpirun noticed that process rank 116 with PID 0 on node cn215 exited on signal 11 (Segmentation fault).
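One way to localize the segfault is to reproduce it at a small scale with core dumps enabled and then inspect the backtrace (a sketch, assuming gdb is available on the compute nodes and core files are written to the working directory):

ulimit -c unlimited      # allow core files to be written
mpirun -n 2 python3 -m mpi4py /scratch/asemagan/exctbls/su2/SU2-7.3.1/bin/fsi_computation.py --parallel -f fsi.cfg
gdb python3 core         # core file name may differ (e.g. core.<pid>); run "bt" for a backtrace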
The following is my job script for running on the cluster:
#!/bin/bash
#SBATCH -N 3
#SBATCH --ntasks-per-node=48
#SBATCH --time=00:04:00
#SBATCH --job-name=ddd
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out
#SBATCH --exclude=cn001
#SBATCH --partition=debug
echo "Number of Nodes Allocated = $SLURM_JOB_NUM_NODES"
echo "Number of Tasks Allocated = $SLURM_NTASKS"
echo "Number of Cores/Task Allocated = $SLURM_CPUS_PER_TASK"
cd $SLURM_SUBMIT_DIR
. /opt/ohpc/admin/lmod/lmod/init/sh
module load spack/0.17
. /home-ext/apps/spack/share/spack/setup-env.sh
spack unload
spack load gcc@12.2.0%gcc@=11.2.0 /5o2xfb3
spack load [email protected]%gcc@=12.2.0 /z5t7bsx
spack load [email protected]%gcc@=12.2.0 /6waw53s
spack load [email protected]%gcc@=12.2.0 /6m7g42w
spack load intel-mkl@2020.4.304%intel@19.0.5.281 /fet6h2j
mpirun -n $SLURM_NTASKS --oversubscribe python3 -m mpi4py /scratch/asemagan/exctbls/su2/SU2-7.3.1/bin/fsi_computation.py --parallel -f fsi.cfg
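Before running the full FSI case, a minimal import test under MPI can rule out a broken pysu2/mpi4py combination (a sketch; it assumes SU2's Python wrapper module pysu2 is on PYTHONPATH):

# Hypothetical two-rank smoke test of the Python wrapper
mpirun -n 2 python3 -c "from mpi4py import MPI; import pysu2; print('rank', MPI.COMM_WORLD.Get_rank(), 'ok')"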
However, the same example runs fine on our lab PC, which runs Manjaro Linux. I installed SU2-7.5.1 on the cluster with the same package versions as on the lab PC; those are listed above.
Please suggest what might be going wrong. Any help will be appreciated.
Thank you in advance.