preCICE Spack installation on HPC fails with Python3-related error

Dear preCICE Community,

I am attempting to install preCICE on an HPC system using Spack, following the guidelines provided on the HPC webpage. However, the installation fails with the following error:

  >> 163    CMake Error at /ari/users/ucoskun/apps/spack/opt/spack/linux-rhel8-zen2/gcc-12.3.0/cmake-3.30.5-s6eovwbmevqy55yyon3pp6q77nlodpg5/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:233 (message):
     164      Could NOT find Python3 (missing: Interpreter)
     165
     166          Reason given by package:
     167              Interpreter: Cannot run the interpreter "/ari/progs/Python/Python-3.12.4-openmpi-5.0.3-gcc-11.4.0/bin/python3"
     168
     169    Call Stack (most recent call first):

Here is a summary of the steps I have taken:

  • I loaded a Python module (which seemed to be the most recent version available) using:
module load Python/Python-3.12.4-openmpi-5.0.3-gcc-11.4.0
  • Installed Spack in my home directory without any errors.
  • Checked Python versions:
python3 -V:
Python 3.12.4
python -V:
-bash: python: command not found
  • Ran spack compiler find and spack compilers; the output was:
spack compilers
==> Available compilers
-- gcc rhel8-x86_64 ---------------------------------------------
gcc@8.5.0  gcc@12.3.0  gcc@11.4.0
  • Checked the preCICE package with spack info precice.
    The preferred version was 3.1.2, and the variants section looked appropriate to me (though perhaps it was not).
  • I ran spack install precice

Despite these steps, the installation fails with the aforementioned Python3 interpreter error. I would appreciate any suggestions or guidance to resolve this issue. Please note that I am relatively new to this process and may need detailed instructions.
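For reference, the interpreter that CMake cannot run is the one from the loaded module. I assume a direct check along these lines would show whether the binary itself is broken (the path is copied from the error above; I have not included the output here):

/ari/progs/Python/Python-3.12.4-openmpi-5.0.3-gcc-11.4.0/bin/python3 --version
ldd /ari/progs/Python/Python-3.12.4-openmpi-5.0.3-gcc-11.4.0/bin/python3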

Hi,

Try detecting the python installation as an external package in spack.

spack external find python

You should then see it as an installed package using

spack find python

I normally use this for Python and a number of standard tools such as make, cmake, pkg-config, etc.
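For reference, a minimal sketch of the sequence I mean (all commands as above; nothing system-specific beyond the package names):

spack external find python     # or just "spack external find" to detect everything Spack knows how to detect
spack find python              # python should now show up, as described above
spack install precice

spack external find writes the detected packages into your packages.yaml, so Spack reuses them instead of building its own copies.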

Kind regards
Frédéric

Dear Frédéric,

Thank you for your quick reply!

I’ve successfully (magically) installed “everything” (please pardon my enthusiasm). I managed to run the heat-exchanger tutorial on the HPC without errors—however, there’s a catch!

After copying your response to ChatGPT and discussing it for a while, this was the solution for me:

  • Avoid using any previously installed modules from the HPC system where possible (including Python).
  • Use Spack wherever possible and make sure all dependencies are built with the same GCC version (see the sketch after this list).
  • The most challenging part was adjusting the Makefile of the CalculiX preCICE adapter; the rest was fairly straightforward.
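A minimal sketch of the kind of specs I mean, assuming gcc@12.3.0 (the compiler Spack reported on my system) and the package versions from my SLURM script below; the exact variants are system-specific, so this is not a verified recipe:

spack install precice@3.1.2 %gcc@12.3.0 ^openmpi@5.0.6
spack install openfoam@2312 %gcc@12.3.0 ^openmpi@5.0.6

In both commands, %gcc@12.3.0 pins the compiler and ^openmpi@5.0.6 makes sure both builds share the same MPI.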

I ran the heat exchanger tutorial using a SLURM script, assigning 50 cores to each fluid participant and 28 cores to the solid participant, making a total of 128 cores. (Just to check the speed-up)

However:

I realized that I had forgotten to add the -parallel option to the run.sh scripts for OpenFOAM. Naturally, I re-ran the simulation with -parallel, but this time, each time step took significantly longer—about 30 times longer than the serial run.

Therefore, I wanted to ask the community:

  • Is this the correct approach for running a coupled simulation on an HPC using SLURM?
  • Could you (or anyone) kindly review my SLURM script below and share your thoughts?

Kind regards,
Umut

SLURM script:

#!/bin/bash
#SBATCH -A account
#SBATCH -n 128
#SBATCH -p queue
# ------------------------------
#module load ek-moduller-easybuild
#module load arpack-ng/3.9.0-foss-2023a
#spack load openmpi@5.0.6 netlib-lapack@3.12.1 openblas@0.3.29 yaml-cpp@0.8.0 openfoam@2312 precice@3.1.2 petsc@3.22.3
#export PATH=$HOME/apps/calculix-adapter-master/bin:$PATH
#export LD_LIBRARY_PATH=$HOME/apps/spack/opt/spack/linux-rhel8-zen2/gcc-12.3.0/precice-3.1.2-ukxuqmx2goykhc5c4tyw3huhawqxokwo/lib64:$LD_LIBRARY_PATH
#export LD_LIBRARY_PATH=$HOME/apps/spack/opt/spack/linux-rhel8-zen2/gcc-12.3.0/yaml-cpp-0.8.0-fypxvjn4bec57zb4rq3pb4aqp62vlog7/lib64:$LD_LIBRARY_PATH
#export LD_LIBRARY_PATH=$HOME/apps/spack/opt/spack/linux-rhel8-zen2/gcc-12.3.0/netlib-lapack-3.12.1-6wv46i4ij6ilsumqpyw23hmrwpwi7b5q/lib64:$LD_LIBRARY_PATH
#export LD_LIBRARY_PATH=$HOME/apps/spack/opt/spack/linux-rhel8-zen2/gcc-12.3.0/petsc-3.22.3-f6fi4zethcvvsqnq6wv5fwxprhsphju7/lib:$LD_LIBRARY_PATH
# ------------------------------

caseFolder=$(pwd)
$caseFolder/clean-tutorial.sh > clean-tutorial.log
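# Launch the three participants in the background; the final "wait" blocks until all of them have finished.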

cd fluid-inner-openfoam
$caseFolder/fluid-inner-openfoam/run.sh -parallel &
cd ..

cd fluid-outer-openfoam
$caseFolder/fluid-outer-openfoam/run.sh -parallel &
cd ..

cd solid-calculix
$caseFolder/solid-calculix/run.sh &
cd ..

wait

I also updated two lines in solid-calculix/run.sh as below:

export OMP_NUM_THREADS=28
export CCX_NPROC_EQUATION_SOLVER=28
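For context, those values mirror the 28 cores reserved for the solid participant; annotated, the two lines are:

export OMP_NUM_THREADS=28              # OpenMP threads for CalculiX, matching the solid participant's core count
export CCX_NPROC_EQUATION_SOLVER=28    # threads for CalculiX's multithreaded equation solver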

3 posts were split to a new topic: Running CCX and OF on HPC

I split the topic after the Python problem disappeared.
I'll mark this one as solved.