Data mapping takes too long (FSI case: many fluid mesh points and few solid mesh points on the interface)

Hi!
I want to simulate the collapse of a bubble on top of an elastomer. Both the fluid domain and the solid domain are quarter-cylinders, as shown in Figure 1 for the fluid domain and Figure 2 for the solid domain.


Figure 1: alpha.water field of the fluid domain with mesh; red represents water and blue represents the bubble

Figure 2: solid domain with mesh
At the interface, the fluid mesh is very dense, with a minimum cell size of 5e-4 m, while the solid mesh size is 2e-3 m.

Figure 3: Fluid mesh at the interface

Figure 4: Solid mesh at the interface
During the run, the fluid participant always stops at "–[precice] Receive global mesh Solid-Mesh", and the solid participant stops at "–[precice] Setting up primary communication to coupling partner/s".

I have no idea what to do now.
I want to know what went wrong and how to fix it.
Any advice would be greatly appreciated!

@zyx I assume you have solved this issue by now, but just in case:

  • Could it be that you are running in parallel and that you previously stopped the simulation before initialization completed? Try removing the precice-run/ directory (which contains the connection addresses) before running again. Maybe some addresses happen to be valid and some not.
  • Could you try running preCICE with debug logging enabled (you may need to build preCICE from source in Debug mode) and post the full logs? A sketch of the relevant log configuration follows below.
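Something like this, placed directly inside <precice-configuration> (a sketch following the preCICE logging documentation; debug-level messages are only emitted when preCICE itself is built in Debug mode):

```xml
<!-- Sketch: verbose console logging; sits directly under <precice-configuration>. -->
<log>
  <sink type="stream"
        output="stdout"
        filter="%Severity% >= debug"
        enabled="true" />
</log>
```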

In case the problem is indeed the initialization time of the mapping, and assuming you are using an RBF mapping, you could tune the RBF mapping to only consider a subset of points when building the interpolation. Read more about this in the preCICE v2 paper: https://open-research-europe.ec.europa.eu/articles/2-51/v2
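As a rough sketch (mesh names, direction/constraint, and the support radius are placeholders, not taken from your setup): replacing the global thin-plate splines by a compactly supported basis function means each interpolation only uses points within the given radius, which keeps the system sparse and much cheaper to set up on a fine interface mesh.

```xml
<!-- Sketch: compactly supported RBF instead of global rbf-thin-plate-splines.
     Mesh names and the support radius (a few fluid cell sizes) are placeholders. -->
<mapping:rbf-compact-tps-c2
  support-radius="2e-3"
  direction="read"
  from="Solid-Mesh"
  to="Fluid-Mesh"
  constraint="consistent" />
```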

Thank you very much for your reply!
I changed the mapping from rbf-thin-plate-splines to nearest-projection, and preCICE now gets past the "Receive global mesh Solid-Mesh" step. But the run is still stuck at: "–[precice] Mapping distance not available due to empty partition."
I also found that when OpenFOAM runs on 1 or 2 nodes (CalculiX always runs on a single node), the computation can continue. But when OpenFOAM uses 50 nodes, it gets stuck at "–[precice] Mapping distance not available due to empty partition."
This is the configuration file of the fluid-structure coupling case:
precice-config.xml (2.5 KB)
This is the OpenFOAM run log file:
logfluid.log (3.7 KB)
Here is the log file of the CalculiX run:
logsolid.log (1.9 KB)

I really hope you can give me some advice!
I'm looking forward to anyone's suggestions! :grinning:

When I used the debug log to find where initialization hangs, with 25 nodes for OpenFOAM and 1 node for CalculiX:
Here is the detailed OpenFOAM run log file:
logfluid.log (38.7 KB)
and the detailed CalculiX run log file:
logsolid.log (74.6 KB)

I can't figure out what's wrong. Could you please help me find the error?

I want to simulate the collapse of a cavitation bubble near an elastic boundary. OpenFOAM simulates the fluid and CalculiX simulates the elastic boundary. The surface of the elastic boundary serves as the coupling surface, and both the fluid and solid domains are quarter-cylinders:
The fluid domain, where the bottom surface is the coupling surface:


The solid domain, where the top surface is the coupling surface:

The preCICE coupling interface looks like this: white is the Fluid-Mesh-Fluid.final.pvtu file, red is the Solid-Mesh-Fluid.final.pvtu file. It looks like the solid coupling region is larger than the fluid one, but I have confirmed that the geometric models of the two match, so I don't know why it appears this way.

The Fluid-Mesh-Fluid.final_0.vtu file and the Solid-Mesh-Fluid.final_0.vtu file, on the other hand, look like they match:

OpenFOAM runs in parallel, while CalculiX runs on a single core. I set the time step of both solvers to dt and the preCICE coupling time window to 5 times dt. When both solvers reach the coupling time window after advancing on their own, preCICE always crashes during the coupling. For example, OpenFOAM will report an error:
(This happens whether the coupling surface is kept in a single region of the parallel decomposition or is split into subregions for the parallel computation:)

(0) 10:59:23 [com::SocketCommunication]:686 in receive: ERROR: Receiving data from another participant (using sockets) failed with a system error: read: End of file [asio.misc:2]. This often means that the other participant exited with an error (look there).

I would like to ask for advice: why does it always crash when reaching preCICE's coupling window? Is it related to the partitioning of the coupling surface in the parallel run?

Here are my configuration file and the log files for OpenFOAM and CalculiX:
precice-config.xml (2.6 KB)
logfluid.log (106.9 KB)
logsolid.log (100.8 KB)

You can also look into the pvtu files to inspect the files referenced therein. The _0.vtu files may only contain a portion of the complete exported interface mesh such that the screenshot below is only part of the complete interface definition in preCICE. How do you create the OpenFOAM meshes? Do you maybe use a mesh scaling in some place unintentionally?
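For orientation, a .pvtu file is only a small XML index that references the per-rank pieces, so opening it in a text editor shows which _N.vtu files make up the complete interface. Roughly like this (the piece file names follow the names in your post, the rest is the generic parallel VTK layout):

```xml
<!-- Sketch of a parallel VTK index file (.pvtu); the actual mesh data lives in the pieces. -->
<VTKFile type="PUnstructuredGrid" version="0.1" byte_order="LittleEndian">
  <PUnstructuredGrid GhostLevel="0">
    <PPoints>
      <PDataArray type="Float64" NumberOfComponents="3" />
    </PPoints>
    <!-- one piece per rank of the parallel participant -->
    <Piece Source="Fluid-Mesh-Fluid.final_0.vtu" />
    <Piece Source="Fluid-Mesh-Fluid.final_1.vtu" />
    <!-- ... -->
  </PUnstructuredGrid>
</VTKFile>
```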

Your logsolid shows the following warning message at the beginning:

[1697342948.237949] [c01r2n00:7222 :0]    ucp_context.c:671  UCX  WARN  network devices 'mlx5_2:1','mlx5_3:1' are not available, please use one or more of: 'docker0'(tcp), 'eno1'(tcp), 'flannel.1'(tcp), 'ib0'(tcp), 'mlx5_0:1'(ib), 'mlx5_1:1'(ib)

It seems unrelated, but do you have any idea about it?

I guess your actual problem is related to the handling of time-step sizes.

Your solid log says:

precice_dt dtheta = 0.000500, dtheta = 0.000147, solver_dt = 0.000000

and your time window size is 5x10^-7. I guess a time-step size of zero will lead to a crash in your solid solver. I would try:

  1. check the time-step size handling in your solid solver
  2. try running the case without subcycling and switch only later to subcycling for performance reasons
  3. if using subcycling, maybe reduce the initial relaxation of your quasi-Newton acceleration (see the configuration sketch below)
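In precice-config.xml terms, points 2 and 3 would touch roughly the following elements (only a sketch; the coupling-scheme and acceleration types are placeholders, your actual configuration may differ):

```xml
<coupling-scheme:parallel-implicit>
  <!-- Point 2: without subcycling, each solver advances with exactly
       dt = time-window-size per coupling window. -->
  <time-window-size value="5e-7" />
  <!-- ... participants, max-time, exchange tags ... -->
  <acceleration:IQN-ILS>
    <!-- Point 3: with subcycling, a smaller initial relaxation may help. -->
    <initial-relaxation value="0.1" />
    <!-- ... data, preconditioner, filter ... -->
  </acceleration:IQN-ILS>
</coupling-scheme:parallel-implicit>
```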

Thank you very much for your advice! :smile:

  1. I checked the time step of the solid solver; the time steps I used (1e-6 s or 5e-7 s) are smaller than the stable time increment of the explicit dynamics (1.8e-6 s). But the calculation still crashed after reaching the preCICE coupling window.

  2. I turned off subcycling and completed the first step of the calculation in OpenFOAM; the calculation then proceeded to the data-exchange step in preCICE (I guess), and at this step it crashed again. I don't know whether the error is in preCICE or in the solid solver.

Here are the log files after I turned off subcycling, where the fluid solver, solid solver, and preCICE coupling time steps were all 5e-7 s:
logfluid.log (74.5 KB)
logsolid.log (100.8 KB)

I used the ICEM software to create the OpenFOAM meshes. Could the mismatch be because I used size scaling when exporting the mesh from ICEM?

Sorry, I don't know the cause of this warning yet.
May I ask whether you think it might be a mesh-mapping or communication problem in preCICE when the parallel run partitions the coupling region?
Any advice would be greatly appreciated. :smiling_face:

The time-step size in your log files still looks odd. You cannot select a time-step size greater than the coupling time-window size, which is 5e-7 in your configuration file above. Did you try setting a fixed time-step size in both the solid and the fluid solver that is equal to the time-window size?

From the log files you uploaded, I cannot say either. Do you pipe stderr into your log files?

I'm not familiar with ICEM (any more). In any case, please make sure your interfaces match. Your preCICE export files may serve as a sanity check, as already discussed.
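For reference, such export files are typically enabled per participant in precice-config.xml with something like the following (assuming preCICE v2; the participant and directory names are placeholders):

```xml
<!-- Sketch: export the coupling meshes known to this participant to VTK files,
     which is what produces the *.pvtu / *_N.vtu files discussed above. -->
<participant name="Fluid">
  <export:vtk directory="precice-exports" />
  <!-- ... use-mesh, read-data/write-data, mapping ... -->
</participant>
```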

I don't think that's the problem, as long as you use the latest preCICE release, but that's just a wild guess.

Thank you very much for your reply!
I have set up OpenFOAM, CalculiX, and preCICE with the same fixed time step, but it still crashes when preCICE starts exchanging data. I have a feeling that the time-step size is not the cause of this error.
I'm still confused as to where the problem lies.