Using Apptainer on clusters: OpenMPI mismatch problem

I am using Apptainer to build a .sif file and run an OpenFOAM-CalculiX coupling on clusters. It runs well serially, but I run into OpenMPI problems when trying to run in parallel. Has anyone met a similar problem? Any comments are welcome!

A requested component was not found, or was unable to be opened. This means that this component is either not installed or is unable to be used on your system (e.g., sometimes this means that shared libraries that the component requires are unable to be found/loaded). Note that PMIX stopped checking at the first component that it did not find.

Host: n0057.savio3
Framework: psec
Component: munge

---[preciceAdapter] Reading preciceDict…
[2025-09-22 15:52:07.065168] [0x00001519a20c8740] [error] The solver process index given in the preCICE interface constructor (0) does not match the rank of the passed MPI communicator (12).
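For the munge message, I have also experimented with disabling PMIx's munge security component (see the commented "--mca psec ^munge" line in the script below). An environment-variable form of that workaround would be something like the line below, set before the launch command, though I am not sure this is the right fix for a container setup:

export PMIX_MCA_psec=^munge   # assumption: the PMIx used at launch reads PMIX_MCA_* variables and can fall back to a non-munge psec component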

The following is my SLURM file:

#!/bin/bash

#------------------------------------------------------------
#-
#-                  Slurm Submission System
#-
#- submit:              sbatch slurm.sh
#- list jobs:           squeue -u $USER
#- cancel jobs:         scancel <jobid> or scancel -u $USER
#- check running job:   scontrol show job <jobid> -dd
#- check completed job: sacct -j <jobid>
#- check CPU & memory:  seff <jobid>
#- find work directory: find . -name *.out
#- check user storage:  diskusage_report
#- check system:        partition-stats
#- data mover:          gra-dtn1.computecanada.ca
#------------------------------------------------------------

#- mpi
#SBATCH --job-name=Precice
#SBATCH --account=fc_gugroup     # resource account
#SBATCH --partition=savio3       #savio3_bigmem  #savio2  #savio3_htc
#SBATCH --mail-type=ALL          # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=chengzhi_vamos@outlook.com   # Where to send mail
#SBATCH --output=%x-%j.out       # standard output and error log
#SBATCH --nodes=1                #12
#SBATCH --ntasks-per-node=32     #12
#SBATCH --cpus-per-task=1
##SBATCH --mem-per-cpu=2G        #2G
#SBATCH --time=3-00:00           # time (DD-HH:MM) [3h, 12h, 1d, 3d, 7d, 28d]

module load gcc            # load the gcc version of interest
module load openmpi/4.1.6  # see the MPI versions available for that gcc

#module load singularity

cd /global/scratch/users/chengzhi/Precice/UAV_rudder_Gyroid_AOA10deg_beforeOP_FsurfaceFM_turbu_Fadjust/

# Copy mesh from the steady state case, map the results to a mesh motion case,
# then solve transient.

(cd fluid-openfoam || exit 1

rm -rf ../precice-run
rm -rf processor*
rm -rf 0.* 1 VTK
rm -rf fluid-openfoam
rm -rf precice-profiling
rm -rf precice-Fluid-iterations

apptainer exec precice_MPI416_20250923.sif renumberMesh -constant -overwrite

apptainer exec precice_MPI416_20250923.sif decomposePar -force

#srun -n 32 apptainer exec precice_MPI416_20250923.sif pimpleFoam
apptainer exec precice_MPI416_20250923.sif mpirun -np 32 pimpleFoam

#mpirun -np 32 apptainer exec --bind /usr/lib64:/usr/lib64 \
#    --bind /usr/bin:/usr/bin \
#    precice_MPI416_20250923.sif \
#    pimpleFoam

#mpirun -np 32
#--network=host --mca psec ^munge
#mpirun -np 32 pimpleFoam -parallel

apptainer exec precice_MPI416_20250923.sif reconstructPar

#srun -n 32 singularity exec precice.sif \
#    bash -c "cd fluid-openfoam && ./run.sh"

)

#------------------------------------------------------------------------------
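For reference, the commented-out hybrid variant above (host-side mpirun launching one container instance per rank) would look roughly like this when written out in full. This is an untested sketch: the bind paths are only a guess at what the host MPI/Slurm stack needs, and it assumes the Open MPI inside the container is ABI-compatible with the host's openmpi/4.1.6 module:

# untested sketch: host mpirun starts 32 container instances; each rank runs the solver in parallel mode
mpirun -np 32 apptainer exec \
    --bind /usr/lib64:/usr/lib64 \
    --bind /usr/bin:/usr/bin \
    precice_MPI416_20250923.sif \
    pimpleFoam -parallel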

Dear Zhi_Cheng,

First of all, great to have you here. It would really help if you could remove all the commented-out lines from your post, since with the odd rendering it is very difficult to follow which line is actually executed. Is this the right line?
"apptainer exec precice_MPI416_20250923.sif mpirun -np 32 pimpleFoam"

Which tutorial or case are you trying to solve?

Best Regards,

Patrick

Hi Patrick, many thanks for your reply. I found that if I add "-parallel" at the end, it runs. So I no longer have a problem here. Thanks again!
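For anyone who finds this later, the command that now works for me is the one Patrick quoted, with the flag appended (mpirun still runs inside the container):

apptainer exec precice_MPI416_20250923.sif mpirun -np 32 pimpleFoam -parallel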
