Running OpenFOAM and deal.II both in parallel in the multiple perpendicular flaps tutorial case?

Hi everyone,

I am trying to run the multiple perpendicular flaps tutorial case. By default, OpenFOAM runs with 4 processes and deal.II runs with all locally available threads. I have 64 cores on my workstation. After I changed the number of subdomains in fluid-openfoam/system/decomposeParDict from 4 to 48 and set the environment variable DEAL_II_NUM_THREADS to 16 (a sketch of the changes is at the end of this post), I got the following error from both solid solvers:

```
---[precice] ERROR: Receive using sockets failed with system error: read: End of file
```

and the following error from OpenFOAM:

```
PIMPLE: iteration 1
smoothSolver: Solving for cellDisplacementx, Initial residual = 0, Final residual = 0, No Iterations 2
smoothSolver: Solving for cellDisplacementy, Initial residual = 0, Final residual = 0, No Iterations 2
[0]
[0]
[0] --> FOAM FATAL ERROR: (openfoam-2206)
[0] Cannot determine normal vector from patches.
[0]
[0] From void Foam::twoDPointCorrector::calcAddressing() const
[0] in file twoDPointCorrector/twoDPointCorrector.C at line 108.
[0]
FOAM parallel run aborting
```

In the tutorial documentation, it says "The solid participants are only designed for serial runs". Is this true?

Can anyone please help me with this? Do I need to set other parameters as well to avoid these errors? Thank you in advance.
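
For reference, a minimal sketch of the changes I made, assuming the tutorial's directory layout (only the touched entries of decomposeParDict are shown; the decomposition method in the actual tutorial file may differ):

```
// fluid-openfoam/system/decomposeParDict (excerpt)
numberOfSubdomains 48;    // was 4; one subdomain per MPI rank
method             scotch;
```

and, before launching the solid participant:

```sh
export DEAL_II_NUM_THREADS=16
```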

@jianzhou722 the error on the OpenFOAM side looks strange. Could you please upload a screenshot from ParaView with the partitioning you are applying, also showing the cells? Did you increase the resolution of the mesh before increasing the number of subdomains? This tutorial has a very coarse mesh, so with 48 subdomains some partitions may end up with very few cells.
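
A quick way to inspect how the cells are distributed across the subdomains (a sketch, run from the fluid case directory; `decomposePar` prints the number of cells assigned to each processor):

```sh
cd fluid-openfoam
decomposePar -force                 # re-decompose; the log lists cells per processor
mpirun -np 48 checkMesh -parallel   # sanity-check the partitioned mesh
```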

The structure side is not complaining here; it is just telling you that the other (fluid) side disappeared unexpectedly.

Since deal.II is an FEM framework in which one has to code their own solver, the deal.II adapter is essentially an example solver (plus additional machinery). The solid participants here are also simple examples, not parallelized with MPI (distributed-memory parallelization). You can, however, run them with shared-memory parallelization, as you are already doing; see "Configure the deal.II codes" in the preCICE documentation. The deal.II adapter documentation also links to an MPI-parallel example. A sketch of a shared-memory run is below.
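
A minimal sketch of such a shared-memory run of the solid participant (the executable name `linear_elasticity`, the parameter file name, and the directory are assumptions based on the tutorial layout; adjust to your setup):

```sh
cd solid-dealii
export DEAL_II_NUM_THREADS=16        # shared-memory threads inside deal.II; no MPI ranks
./linear_elasticity parameters.prm   # hypothetical executable/parameter file names
```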
