OpenFOAM-dealii simulations with large fluid meshes crash

Dear users and developers,

I observed simulation crashes in my FSI simulations using precice/2.3.0. The case considers the vibration of a long tube axially mounted in a channel flow.
I am using a solver based on the official dealii linear elasticity solver (dealii 9.2.0) together with the OpenFOAM adapter (OpenFOAM 7).
To perform a sensitivity analysis on the fluid mesh resolution, I fixed the solid mesh (DOF_solid = 192000) and ran simulations with three different fluid meshes of 14M, 31M, and 49M cells. The first case runs without any problem. The two larger cases, however, crash after some time: first a segmentation fault occurs on the solid participant, followed by a crash on the fluid side. The fluid solver runs in parallel on 63 cores, and the solid solver in serial.
The simulations are performed on a single node of a cluster.
Here is the error message in the Solid log:

Advancing in time...
Advancing the adapter: Entering the subsection
---[precice] relative convergence measure: relative two-norm diff of data "Stress" = 7.85e-06, limit = 1.00e-04, normalization = 1.71e+05, conv = true
---[precice] relative convergence measure: relative two-norm diff of data "Displacement" = 2.30e-08, limit = 1.00e-04, normalization = 4.39e-02, conv = true
---[precice] All converged
[112:310409] *** Process received signal ***
[112:310409] Signal: Segmentation fault (11)
[112:310409] Signal code: Address not mapped (1)
[112:310409] Failing at address: 0xb9
[112:310409] [ 0] /lib64/libc.so.6(+0x37400)[0x14711a489400]
[112:310409] [ 1] /software/precice/2.3.0/lib64/libprecice.so.2(_ZN7precice3com15SocketSendQueue7processEv+0x6d)[0x14711e44abcd]
[112:310409] [ 2] /software/precice/2.3.0/lib64/libprecice.so.2(+0x1a3c19)[0x14711e44bc19]
[112:310409] [ 3] /software/precice/2.3.0/lib64/libprecice.so.2(_ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE+0x493)[0x14711e445bd3]
[112:310409] [ 4] /software/precice/2.3.0/lib64/libprecice.so.2(+0x188ede)[0x14711e430ede]
[112:310409] [ 5] /software/gcc/7.5.0/lib64/libstdc++.so.6(+0xc252f)[0x14711ac5b52f]
[112:310409] [ 6] /lib64/libpthread.so.0(+0x817a)[0x14711a23a17a]
[112:310409] [ 7] /lib64/libc.so.6(clone+0x43)[0x14711a54edc3]
[112:310409] *** End of error message ***
./runSolid: line 27: 310409 Segmentation fault      (core dumped) ./Solid/linearGuideTube Solid/linearGuideTube.prm

and the Fluid error message:

---[preciceAdapter] [DEBUG] Writing coupling data...
---[preciceAdapter] [DEBUG] Advancing preCICE...
---[precice] ERROR: Receive using sockets failed with system error: read: End of file
mpirun: Forwarding signal 18 to job

I have also attached the Fluid and Solid logs together with the precice config file.
It seems like the process is trying to access a memory address beyond what has been allocated.
Does anyone know what the reason for this could be?

Please let me know if you need any further information.

Sina

precice-config.xml (3.1 KB)
Fluid.log (1.6 MB)
Solid.log (465.5 KB)

Hi @sinaTaj,

how large is the deformation of your Fluid mesh? To me, it sounds like the deformation becomes too large for the applied mesh refinement, so that the small cells in the proximity of your structure become distorted.

The displacement of the fluid mesh is less than 0.6 mm. I have a prism boundary-layer mesh adjacent to the solid boundary; the height of the cells at the walls is approximately 0.05 mm in all considered cases. The case is a damped vibration of an initially deflected beam. In the attached figure, two cases are compared: the blue curve corresponds to a case that crashes, and the red curve to the coarse fluid mesh. As you can see in the figure, if mesh distortion were the reason, the crash should have happened early, around t = 0.03 s, when the displacement is larger. Right?
[Figure: debug_displacement — beam displacement over time for the two cases]

Did you ever plot the Stress of the crashing case (it should be part of the same watch point)?
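
For reference, and assuming preCICE 2.x syntax, a watch point is configured per participant in precice-config.xml; the mesh name, point name, and coordinate below are placeholders, not taken from your setup:

<participant name="Solid">
  <!-- ... meshes, read/write data, mapping ... -->
  <!-- watch point: preCICE interpolates all data of this mesh at the given coordinate -->
  <watch-point mesh="Solid-Mesh" name="Midpoint" coordinate="0.0;0.0;0.5" />
</participant>

The resulting precice-Solid-watchpoint-Midpoint.log should then contain columns for every data field on that mesh, so Displacement and Stress can be plotted from the same file.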

Dear David,

Here is the stress over time at the same point (roughly in the middle of the beam):

In the meantime, I reran the case with preCICE compiled in debug mode to see if I can extract some extra information from the solver… The case did not crash in debug mode! I suspect it might have something to do with compiler optimization.

Can you reliably reproduce the erroneous behavior with the release version of preCICE? If yes, it could be a bug in preCICE. It is hard to guess at the cause, as the stack trace above does not contain much information. It might be worth trying to disable the extrapolation in your configuration, as we had some trouble with it before we released version 2.3.0.
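
If I remember the 2.x configuration correctly, the time extrapolation is controlled by an extrapolation-order element inside the coupling scheme, and setting it to 0 (or removing the element) should disable it. A minimal sketch; the serial-implicit scheme type here is just an example, and the rest of your coupling settings stay as they are in your precice-config.xml:

<coupling-scheme:serial-implicit>
  <participants first="Fluid" second="Solid" />
  <!-- ... time window size, exchanges, convergence measures, acceleration ... -->
  <!-- order 0 disables the extrapolation of coupling data to the next time window -->
  <extrapolation-order value="0" />
</coupling-scheme:serial-implicit>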

I did not try running a case twice. I’ll try it. I will also test the effect of extrapolation.

Hi David,

Sorry for my late reply; I had to wait for the simulations to finish before drawing a conclusion.
I restarted the simulation with 32.3 M cells using the release version, and the same error occurred, this time at a later point (t = 1.155 s). I also ran the same case with extrapolation disabled; this case is still running and has already reached t = 1.97 s. I will wait one more week for this second case to see if it keeps running.
The randomness of the error makes it very difficult to draw solid conclusions.

Sina