Matrix-free-dealii-precice + OpenFOAM: FSI simulation hangs

Hi everyone,

I would like to test the example tutorials shipped with the matrix-free deal.II solver by David Schneider. I tried two of the FSI examples but, unfortunately, I am not able to run those simulations, as both the fluid and the solid solver hang at

---[precice] Setting up master communication to coupling partner/s

Could you help me with this?

The solver is compiled according to the instructions given in the file.
I am using “precice/2.2.0”, “dealii/9.3.1”, “openmpi/4.1.1”, and “p4est/2.3.2”.

Best regards,

Hi Sina,

Could you add the full preCICE configuration file? Since you seem to be on some kind of cluster, how do you execute the tests? Do you submit them via some batch system? You could also try changing the network device. By default it is lo, which might have to be replaced with another network device of your machine/cluster.

More information about the communication setup can be found here:

Hi @sinaTaj

The configuration file should be the same as this one: matrix-free-dealii-precice/precice-config.xml at master · DavidSCN/matrix-free-dealii-precice · GitHub. I just tested it locally on my machine and it works without any problems. As @ajaust already said, you might need to adjust your config in case you are operating on a cluster. Could you provide some more information on how you start the simulation and on which system you work? Did you compile the programs in debug or in release mode?

Since the deal.II parameter file uses a relative path (by default), make sure you start the simulation from the respective participant directory.

Check also this troubleshooting/FAQ topic: Help! The participants are not finding each other! - #3 by Makis

Dear Alexander,

Indeed, I am running the cases on a node of a cluster. I am not using any batch system for these tests. Here is the precice-config file.
precice-config.xml (2.7 KB)

I also tried adding the network device.


Hi Sina,

I think that the line

<m2n:sockets from="Fluid" to="Solid" exchange-directory=".." />

could be critical in your case.

We would need to know exactly how you start the solvers. Do you start them using the supplied scripts or manually? The paths from which you start the simulation matter. Therefore, you might have to adapt the exchange-directory setting so that the solvers can find each other.

If you run the participants in parallel and on more than one node, then you have to add a network setting to this line. preCICE uses the loopback interface lo by default. This means your current configuration file is equivalent to changing the line to the following:

<m2n:sockets from="Fluid" to="Solid" exchange-directory=".." network="lo" />

In order to use another network device, you need to give the name of the relevant network interface for communication, i.e. replace lo by another name. You can find the network interfaces using the command ifconfig or ip link, which lists the names of the network cards available. One of the names will be lo. Other common names are something like eth0, eth1, etc. On bigger clusters it could also be ib0 or similar (InfiniBand).
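For example, a quick way to print just the interface names mentioned above (the exact names you see depend on your machine):

```shell
# List the names of all network interfaces on this machine; any of these
# (e.g. eth0 or ib0) can replace "lo" in the network attribute.
ip -o link show | awk -F': ' '{print $2}'
```

On the cluster's compute nodes you would then pick the interface that connects the nodes (often an InfiniBand device such as ib0), not the loopback interface.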


Dear all,

Thank you for your tips. Out of laziness, I was using my old Allrun script for running these cases. The problem was indeed the path from which I was starting the simulations. Modifying the job script fixed the issue.
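To illustrate the path issue for anyone hitting the same hang (the directory names below are hypothetical, not from my actual case): with exchange-directory="..", each participant must be launched from its own case directory, so that ".." resolves to the same shared parent folder for both solvers.

```shell
# Sketch with made-up directory names: both participants sit in sibling
# directories under a common parent, which ".." then resolves to.
mkdir -p fsi-case/fluid-openfoam fsi-case/solid-dealii
fluid_exchange=$(cd fsi-case/fluid-openfoam/.. && pwd)
solid_exchange=$(cd fsi-case/solid-dealii/.. && pwd)
# Both variables now hold the same path, so the solvers find each other.
echo "$fluid_exchange"
echo "$solid_exchange"
```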

One remark: I think the command for building the program in DavidSCN/matrix-free-dealii-precice (A matrix-free high performance solid solver for coupled fluid-structure interactions) should be

mkdir build && cd build && cmake -DDEAL_II_DIR=/path/to/deal.II -Dprecice_DIR=/path/to/precice ..

then followed by

make release

to build in release mode. Please correct me if I am wrong.
Thank you very much for your prompt replies.

That’s true, I have not yet updated the documentation regarding this, but cmake should already inform you. I recently added a heat solver for CHT, and now make release/debug no longer triggers the respective make command. The reason was mainly to enable a selective compilation of the desired solver, as compiling the programs takes some time. You can run make solid in order to build the solid-FSI solver and make heat to build the solid-heat solver. Running make is equivalent to make all and always builds both solvers. I may split up the compile-heavy part into a separate unit in the future, so that the final executables compile faster, but this still requires some work.
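Summarizing the build steps from this thread as one command sequence (the deal.II and preCICE paths are placeholders you need to adjust to your installation):

```shell
# Out-of-source CMake configuration; adjust the two paths.
mkdir build && cd build
cmake -DDEAL_II_DIR=/path/to/deal.II -Dprecice_DIR=/path/to/precice ..

# Selective targets, as described above:
make solid   # builds only the solid-FSI solver
make heat    # builds only the solid-heat (CHT) solver
make         # same as `make all`: builds both solvers
```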

EDIT: For the sake of completeness: note that building in release mode removes almost all asserts within the program. I would recommend compiling in debug mode for development and case setup, and in release mode for production runs.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.