I’m attempting to run the elastic-tube-3d tutorial case on an HPC cluster, using the OpenFOAM adapter and the CalculiX adapter. So far I have not compiled CalculiX to run in parallel, so my goal was to run the FSI simulation on 2 nodes (16 cores each): 1 core for ccx_precice and 31 cores for pimpleFoam. This proved to be a bit troublesome.
Running on 1 node works fine: 15 cores for OpenFOAM and 1 core for ccx_precice.
To run on 2 nodes, I found that I have to set enforce-gather-scatter="1" in the m2n communication element of my precice-config.xml file. This leads to a few questions:
- Why do I have to enforce gathering all the data onto one core before passing it between the adapters?
- How much of a performance hit am I taking when I do this? I assume the penalty grows with larger simulations.
- Is there any way to avoid the gather and scatter? I believe all of my solvers and adapters were compiled with OpenMPI. I think I read somewhere that this can be a problem?
- Any tips on getting CalculiX and the CalculiX adapter to run in parallel across many nodes?
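For reference, the change I made looks roughly like this. This is only a sketch: the participant names Fluid and Solid follow the tutorial, and the other attributes are from my setup, so yours may differ:

```xml
<!-- m2n communication between the two participants.
     Setting enforce-gather-scatter="1" routes all coupling data through
     the primary ranks instead of establishing point-to-point connections
     between individual ranks. Participant names and other attributes are
     illustrative. -->
<m2n:sockets from="Fluid" to="Solid" exchange-directory=".." enforce-gather-scatter="1" />
```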