My cluster runs CentOS 7, and I have installed OpenFOAM and ccx_preCICE on it. The CalculiX version is 2.20. When running the tutorial/elastic-tube-3d case on the cluster, the example executes successfully on a single node. However, I ran into difficulties when trying to run it across multiple nodes of the cluster using Open MPI.
After consulting the relevant posts on the forum, I still cannot successfully run ccx_preCICE on the cluster using the mpirun command.
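For reference, the kind of launch I was attempting looked roughly like the sketch below (the hostfile and input-file names are placeholders, not the exact command from my job):

```bash
# Attempted multi-node launch of the CalculiX adapter via Open MPI
# (hostfile and input name are placeholders).
mpirun -np 2 --hostfile hosts.txt \
    ccx_preCICE -i solid -precice-participant Solid
```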
It is my understanding that the Calculix binary does not support distributed memory parallelization (MPI). You’ve found that it does support shared memory parallelization (MT – or multi-thread). This is certainly a limitation of using Calculix.
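For completeness, here is a minimal sketch of how the shared-memory mode is typically driven on a single node, assuming the adapter was built against the multithreaded SPOOLES (the input-file name is a placeholder):

```bash
# Single-node, shared-memory (OpenMP) run of the CalculiX adapter.
export OMP_NUM_THREADS=4                 # threads for CalculiX itself
export CCX_NPROC_EQUATION_SOLVER=4       # threads for the SPOOLES equation solver
ccx_preCICE -i solid -precice-participant Solid
```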
Thanks a lot!
I’d like to ask in passing: why does CentOS 7 only support preCICE 2.3.0? What prevents CentOS 7 from installing higher versions of preCICE? If I have a newer GCC compiler, such as GCC 9.2.0, can I compile and install preCICE v3?
Are you building preCICE from source on your own CentOS 7 system, or are you downloading a pre-compiled version of preCICE from somewhere? If you’re getting a pre-compiled version, I think those all come in containers (like Docker) that should handle the operating system and external library dependencies for you. If you’re compiling yourself and trying to use the compiler provided by CentOS 7, I can see how the source code may need a newer compiler at some point. I don’t think there is anything in the preCICE source code that is OS-dependent. So, as long as you have a newer compiler and the necessary external libraries (Boost, etc.), you should be able to build something newer than v2.3.0 that works on CentOS 7.
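As a rough sketch, such a source build follows the standard CMake workflow; the compiler paths, version number, and install prefix below are placeholders you would adapt to your system:

```bash
# Build preCICE from source with a newer GCC (e.g. from a devtoolset or module),
# assuming Boost, Eigen, libxml2, and (optionally) PETSc/MPI are already available.
git clone https://github.com/precice/precice.git
cd precice && mkdir build && cd build
cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=/path/to/gcc-9.2.0/bin/gcc \
  -DCMAKE_CXX_COMPILER=/path/to/gcc-9.2.0/bin/g++ \
  -DCMAKE_INSTALL_PREFIX=$HOME/software/precice
make -j 4 && make install
```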
For reference, I was able to build and run v2.5.0 on a CentOS 7 system. The preCICE adapter for our in-house code is still stuck in the past, so we haven’t transitioned to preCICE v3 yet. I have no concerns about it working on CentOS 7, though.
Thank you for your reply! I am compiling and installing preCICE from source on CentOS 7. I learned from the following webpage that CentOS 7 only supports preCICE-v2.3.0, so I thought it would not be possible to compile and install a higher version of preCICE from source! Now I understand.
I’m also trying to run ccx_preCICE on an HPC system. I have successfully installed all the dependencies needed to run a coupled simulation, including the multithreaded SPOOLES (spoolesMT) mentioned in this conversation.
How do you run a coupled case (CalculiX, OpenFOAM, preCICE) using a Slurm script? Could you please share a complete, working Slurm script? Even if you are not sure it is best practice, I am still interested in seeing all approaches.
Is SPOOLES the best option for running ccx on HPC? If not, what is a better alternative?
I’m running coupled simulations on HPC, but the ccx + communication part of the simulation is slower than I expected. Please also see this post for more information: How to run CCX on HPC.
Hello! So far, I have only been able to run ccx_preCICE successfully on a single node of the Slurm cluster (i.e., with OpenMP), not with MPI. Therefore, I cannot provide an example of a Slurm script that successfully uses MPI. Regarding your second question: based on the installation guide on the preCICE website, my understanding is that CalculiX relies on the SPOOLES library, so it may not be possible to use other sparse solver libraries that support MPI.
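In outline, a single-node job script for such a coupled run can look like the sketch below. This is a minimal sketch rather than my exact script: the module names, solver, input-file name, case directories, and core counts are placeholders to adapt to your cluster and case.

```bash
#!/bin/bash
#SBATCH --job-name=fsi-coupled
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=02:00:00

# Placeholders: load whatever provides OpenFOAM, preCICE, and the CalculiX adapter.
module load openfoam precice

cd "$SLURM_SUBMIT_DIR"

# Fluid participant: OpenFOAM in parallel (case assumed already decomposed for 3 ranks).
(
  cd fluid-openfoam
  mpirun -np 3 pimpleFoam -parallel > log.fluid 2>&1
) &

# Solid participant: ccx_preCICE with shared-memory threads on the same node.
(
  cd solid-calculix
  export OMP_NUM_THREADS=1
  export CCX_NPROC_EQUATION_SOLVER=1
  ccx_preCICE -i solid -precice-participant Solid > log.solid 2>&1
) &

# Both participants must run concurrently; wait for both to finish.
wait
```

Both solvers are started in the background and coupled through preCICE, so the script only returns once both participants have finished.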
Your post is very valuable! Thank you for sharing!
Thank you for your response! I would still love to see your (complete) Slurm script for running ccx_preCICE on a single node. Having thought about it more, I don’t necessarily need multi-node execution for ccx_preCICE. I’m primarily interested in understanding how you structured your Slurm script, hostfiles, and environment variables, and how you called ccx_preCICE (e.g., using taskset, srun, or a standalone script).
If you could share the full Slurm script, it would be invaluable.
I’ve provided a more detailed explanation of my case in this discussion: Running CalculiX and OpenFOAM on HPC. It might be more suitable to continue our conversation there.