Hi!
I want to simulate the collapse of a bubble on top of an elastomer. Both the fluid domain and the solid domain are quarter-cylinders, as shown in Figure 1 for the fluid domain and Figure 2 for the solid domain.
Figure 4: Solid mesh at the interface
During the calculation, the fluid participant always stops at "[precice] Receive global mesh Solid-Mesh", and the solid participant stops at "[precice] Setting up primary communication to coupling partner/s".
I have no idea what to do now. I want to know what went wrong and how to fix it!
Any advice would be greatly appreciated. Thank you very much!
@zyx I assume you have solved this issue by now, but just in case:
Could it be that you are running in parallel and that you previously stopped the simulation before initialization completes? Try removing the precice-run/ directory (which contains the connection addresses) before running again. Maybe some addresses happen to be valid and some not.
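A minimal sketch of that cleanup, run from the directory containing precice-config.xml (the relaunch commands in the comments are placeholders for your actual solver invocations):

```shell
# Remove stale connection-address files left behind by an aborted run;
# preCICE recreates precice-run/ automatically on the next start.
rm -rf precice-run/

# Then relaunch both participants, e.g. (placeholder commands):
#   mpirun -np 50 <fluid-solver> &
#   ccx_preCICE -i solid -precice-participant Solid
```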
Could you try running preCICE with Debug log enabled (may need to build preCICE from source) and post the full logs?
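For reference, logging is controlled through the log section of precice-config.xml; a sketch along the lines of the preCICE v2 logging documentation (the empty filter string lets everything through, and debug messages only exist in a Debug build of preCICE):

```xml
<precice-configuration>
  <!-- Sketch: an empty filter passes all messages, including debug output.
       Debug messages are only compiled in when preCICE is built in Debug mode. -->
  <log>
    <sink type="stream" output="stdout" filter="" enabled="true" />
  </log>
  ...
</precice-configuration>
```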
In case the problem is indeed the initialization time of the mapping, and assuming you are using an RBF mapping, you could tune the RBF mapping to only consider a subset of points when building the interpolant. Read more about this in the preCICE v2 paper: https://open-research-europe.ec.europa.eu/articles/2-51/v2
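As a sketch of such a tuned mapping (preCICE v2 syntax; the mesh names are placeholders taken from this thread, and the support radius must be chosen relative to your mesh spacing):

```xml
<!-- A compactly supported RBF only uses points within support-radius,
     instead of building a global interpolant over the whole interface. -->
<mapping:rbf-compact-tps-c2
  direction="read"
  from="Solid-Mesh"
  to="Fluid-Mesh"
  constraint="consistent"
  support-radius="1e-3" />
```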
Thank you very much for your reply!
I changed the mapping from rbf-thin-plate-splines to nearest-projection, and preCICE now gets past the "Receive global mesh Solid-Mesh" step. But the run is still stuck at: "[precice] Mapping distance not available due to empty partition."
I also found that when OpenFOAM runs on 1 or 2 nodes (CalculiX always runs on a single node), the coupling continues to work. But when OpenFOAM uses 50 nodes, it gets stuck at "[precice] Mapping distance not available due to empty partition."
This is the configuration file of the fluid-structure coupling case: precice-config.xml (2.5 KB)
This is the OpenFOAM run log file: logfluid.log (3.7 KB)
This is the CalculiX run log file: logsolid.log (1.9 KB)
I really hope you can give me some advice, and I am looking forward to anyone's suggestions!
Here is what I got when I used debug logging to find where initialization hangs, with 25 nodes for OpenFOAM and 1 node for CalculiX:
the detailed OpenFOAM run log file: logfluid.log (38.7 KB)
and the detailed CalculiX run log file: logsolid.log (74.6 KB)
I can't find out what's wrong. Could you please help me find the error?
I want to simulate the collapse of a cavitation bubble near an elastic boundary. OpenFOAM simulates the fluid and CalculiX simulates the elastic boundary. The surface of the elastic boundary serves as the coupling surface, and both the fluid and solid domains are quarter-cylinders:
The fluid domain, where the bottom surface is the coupling surface:
The preCICE coupling interface looks like this: white is the Fluid-Mesh-Fluid.final.pvtu file, red is the Solid-Mesh-Fluid.final.pvtu file. It looks like the solid coupling region is larger than the fluid one, but I have confirmed that the geometric models of the two match, and I don't know why it appears this way.
OpenFOAM runs in parallel; CalculiX runs on a single core. I set the time step of both solvers to dt and the preCICE coupling time window to 5 times dt. When both solvers reach the coupling time window after subcycling on their own, preCICE always crashes during the coupling. For example, OpenFOAM reports this error:
(0) 10:59:23 [com::SocketCommunication]:686 in receive: ERROR: Receiving data from another participant (using sockets) failed with a system error: read: End of file [asio.misc:2]. This often means that the other participant exited with an error (look there).
I'd like to ask: why does it always crash when reaching preCICE's coupling window? Is it related to the partitioning of the coupling surface in parallel runs?
You can also look into the pvtu files to inspect the files referenced therein. The _0.vtu files may only contain a portion of the complete exported interface mesh such that the screenshot below is only part of the complete interface definition in preCICE. How do you create the OpenFOAM meshes? Do you maybe use a mesh scaling in some place unintentionally?
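In case it helps with the inspection: the .pvtu and per-rank _0.vtu files come from the mesh export configured per participant, roughly like this (preCICE v2 syntax; the directory name is an example):

```xml
<participant name="Fluid">
  <!-- Writes the interface mesh each time window; with a parallel participant,
       each rank writes its own _<rank>.vtu piece plus a combined .pvtu index. -->
  <export:vtk directory="precice-exports" />
</participant>
```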
Your logsolid shows the following warning message at the beginning:
[1697342948.237949] [c01r2n00:7222 :0] ucp_context.c:671 UCX WARN network devices 'mlx5_2:1','mlx5_3:1' are not available, please use one or more of: 'docker0'(tcp), 'eno1'(tcp), 'flannel.1'(tcp), 'ib0'(tcp), 'mlx5_0:1'(ib), 'mlx5_1:1'(ib)
It seems unrelated, but do you have any idea about it?
I guess your actual problem is related to the handling of time-step sizes.
I checked the time step of the solid solver, and the time step I used (1e-6 s or 5e-7 s) is smaller than the stable time step of the explicit dynamics (1.8e-6 s). But the calculation still crashed after reaching the coupling window of preCICE.
I disabled subcycling and completed the first time step in OpenFOAM; the calculation then proceeded to the data-exchange step in preCICE (I guess), and at this step it crashed again. I don't know whether the error is in preCICE or in the solid solver.
Here are the log files after I disabled subcycling, with the fluid solver, solid solver, and preCICE coupling time steps all set to 5e-7 s: logfluid.log (74.5 KB) logsolid.log (100.8 KB)
I used the ICEM software to create the OpenFOAM meshes. Could it be because I used size scaling when exporting the mesh from ICEM?
Sorry, I don't know the cause of this warning yet.
May I ask whether you think it might be a mesh-mapping or communication problem in preCICE when parallel runs partition the coupling region?
Any advice would be greatly appreciated.
The time-step size in your log files still looks odd. You cannot select a time-step size greater than the coupling time-window size, which in your configuration file above is 5e-7. Did you try to set a fixed time-step size, equal to the time-window size, in both the solid and the fluid solver?
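A sketch of the consistent setup, assuming a serial-implicit coupling scheme and the 5e-7 s window mentioned above (participant, mesh, and exchange tags omitted):

```xml
<coupling-scheme:serial-implicit>
  <!-- The solvers' fixed time-step sizes must not exceed this window;
       setting both solvers to exactly 5e-7 s disables subcycling. -->
  <time-window-size value="5e-7" />
  <max-time value="1e-4" />
  ...
</coupling-scheme:serial-implicit>
```

On the OpenFOAM side this would correspond to a fixed step in controlDict, e.g. `deltaT 5e-7;` with `adjustTimeStep no;`.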
From the log files you uploaded, I cannot say either. Do you pipe stderr into your log files?
I'm not familiar (any more) with ICEM. In any case, please make sure your interfaces match. Your preCICE export files may serve as a sanity check, as already discussed.
I don't think that's the problem, as long as you use the latest preCICE release, but that's just a wild guess.
Thank you very much for your reply!
I have set up OpenFOAM, CalculiX, and preCICE with the same fixed time step, but the run still crashed when preCICE started exchanging data. I have a sense that this error is not caused by the time steps.
I'm still confused as to where the problem lies.