Is there a tutorial dedicated to running FSI with preCICE using MPI? I couldn’t find one on the web. Currently, my model works with one CPU each for OpenFOAM and CalculiX, but the preCICE mapping takes longer and longer to finish. I hope to use MPI to improve efficiency.
I tried the following in the precice-config.xml file, but both codes just wait without handshaking.
Thank you!
<m2n:mpi
  exchange-directory="."
  acceptor="Solid-Solver"
  connector="Fluid-Solver"
  enforce-gather-scatter="false"
  use-two-level-initialization="false" />
Hi,
The m2n:mpi variant uses the MPI back-end to connect the participants; we recommend sticking to sockets until the communication itself becomes a bottleneck.
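For reference, a minimal sketch of the sockets-based alternative, keeping the participant names from your snippet (the exchange-directory value is an assumption matching your original config):

```xml
<!-- Sketch: sockets-based coupling communication between the two participants -->
<m2n:sockets
  exchange-directory="."
  acceptor="Solid-Solver"
  connector="Fluid-Solver" />
```

This is usually a drop-in replacement for the m2n:mpi element and avoids the MPI handshake problems you are seeing.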
Many OpenFOAM tutorials can be run using ./run.sh -parallel.
The CalculiX adapter doesn’t support running with MPI.
Got it. Thank you! So to run the FSI analysis in parallel, I need to first decompose the fluid mesh in OpenFOAM. After that step, I basically don’t need to make any changes to the configuration files, just add -parallel on the command line. Is that correct?
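To make the workflow concrete, here is a hedged sketch of the commands, assuming a standard OpenFOAM case layout; the solver name (pimpleFoam), the rank count, and the CalculiX input name (solid) are placeholders for your setup:

```shell
# Split the fluid mesh according to system/decomposeParDict
decomposePar

# Run the decomposed OpenFOAM case in parallel (4 ranks as an example)
mpirun -np 4 pimpleFoam -parallel

# CalculiX stays serial; launched via the preCICE adapter binary
ccx_preCICE -i solid -precice-participant Solid
```

The number of ranks must match numberOfSubdomains in decomposeParDict.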
A second issue is that the time spent on mapping “Force” becomes unbearable. The pattern is: every 8x10^{-5} s, the analysis gets stuck at the step “preCICE: Mapping “Force” for t=0.005050000000000001 from “Fluid-Mesh-Faces” to “Solid-Mesh”” for several minutes; then it quickly advances another 8x10^{-5} s with a very good convergence rate. Do you know why this is happening? Since the time step is so tiny, I would expect preCICE not to need a full nearest-neighbor search this often. Something seems to be accumulating over time, I guess.
precice-config.xml (2.3 KB)