Restarting coupled FSI simulation with preCICE

Dear all,
I am trying to validate my fluid and structure meshes for the FSI1 case of the cylinder-flap benchmark. As the fluid and structure meshes are refined, it becomes difficult for the implicit FSI coupling to converge. My idea is therefore to use a lenient tolerance of 10^-3 for the first 2 seconds of the coupled simulation and then gradually tighten it (10^-4 for 2-4 s, 10^-5 from 4 s onwards). To accomplish this, I have to restart the coupled FSI simulation at 2 s and again at 4 s. As Benjamin mentioned in the Gitter chatroom, I have activated the initialize option in the exchange data field as follows:

<exchange data="Displacements0" mesh="Calculix_Mesh" from="Calculix" to="Fluid" initialize="1" />

However, the restarted coupled simulation does not converge in the first time step (2.01 s), and it crashes in the second time step (2.02 s) after the 11th sub-iteration due to a failure of the RBF spatial mapping at the interface. The original coupled simulation from 0 s was doing well, requiring only 4 sub-iterations to converge at the 200th time step (2 s). I understand that IQN-ILS post-processing does not function well when it has no solution history from previous time steps to rely on. But if it works well in the very first time step (0.01 s), converging after 11 sub-iterations, I am puzzled why it does not behave similarly at the 201st time step (2.01 s).

Therefore, I was wondering if my restart procedure for OpenFOAM and CalculiX is correct to begin with. For restarting OpenFOAM, I simply change the parameter startFrom to latestTime, whereas for CalculiX I followed the procedure described in an earlier discussion from the preCICE community.
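For context, my restart setup in system/controlDict looks roughly like the sketch below (the solver name, end time, and write settings here are illustrative placeholders, not necessarily the values of my actual case):

```
// system/controlDict (sketch; application and times are placeholders)
application     pimpleFoam;      // assumed fluid solver
startFrom       latestTime;      // resume from the most recent time directory, e.g. 2
startTime       0;               // ignored when startFrom is latestTime
stopAt          endTime;
endTime         4;               // run the next stage until 4 s
deltaT          0.01;
writeControl    timeStep;
writeInterval   10;
```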

Also, since my objective here is to gradually tighten the relative tolerance, is it possible to change the tolerance value by simply editing the precice-config.xml file during an active simulation, as one can with OpenFOAM dictionaries, thereby avoiding the need to restart the coupled FSI simulation? I have attached the necessary log and config files for the original as well as the restarted simulation for your reference.
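To be concrete, the tolerance I mean is the one set in the coupling scheme of precice-config.xml; per restart stage I would edit the limit value, e.g. (mesh and data names taken from my exchange tag above; the surrounding elements are elided):

```xml
<coupling-scheme:serial-implicit>
  <!-- ... participants, time window size, exchanges ... -->
  <!-- stage 1 (0-2 s): limit="1e-3"; stage 2 (2-4 s): 1e-4; stage 3: 1e-5 -->
  <relative-convergence-measure limit="1e-3" mesh="Calculix_Mesh" data="Displacements0" />
</coupling-scheme:serial-implicit>
```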

Best wishes,
Kiran Sripathy.

Hi Kiran,

In your initial Run.inp file for 0-2 s, your final time is 10 s. Does this simulation still output a Run.rout file? If the vertex locations in the Run.rin file are wrong, the geometry won't match between the solvers, which would cause problems.

Sometimes the restart is difficult even when the geometry is aligned. Does it only fail with RBF mapping? As a check, does nearest-neighbor mapping work? You could also possibly try Aitken relaxation at first and see if that improves convergence, or try parallel-implicit coupling with IQN-ILS or Aitken, as this can often help with the initial restart (not sure if you tried it, but the config file has serial-implicit).
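To sketch what I mean (element names depend on your preCICE version; in v1 the acceleration is configured as post-processing: instead of acceleration:, and the fluid mesh name below is an assumption, not taken from your files):

```xml
<!-- check the mapping in isolation with nearest-neighbor -->
<mapping:nearest-neighbor direction="read" from="Calculix_Mesh" to="Fluid_Mesh"
                          constraint="consistent" />

<coupling-scheme:parallel-implicit>
  <!-- ... participants, exchanges, convergence measures ... -->
  <acceleration:aitken>
    <data name="Displacements0" mesh="Calculix_Mesh" />
    <initial-relaxation value="0.1" />
  </acceleration:aitken>
</coupling-scheme:parallel-implicit>
```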


Hi Kyle,
Thanks for the reply. I see that the wrong config files were uploaded for the 0-2 s case. Rest assured, in the original config files the final time was set to 2 s, and the .rout file was generated. So far, I have tried RBF mapping only. I will try the alternative spatial and temporal coupling schemes you mentioned. Apologies for the delayed response. I will check out the other configs during my Christmas vacation and post an update. Wishing you all a great holiday and a happy new year :slight_smile:.
Best wishes,

No, this is not possible. The preCICE configuration is only read once during configure().

Your time step size of 0.01 is also rather large. This could cause additional problems with a finer fluid mesh.

But actually the whole idea of tightening the tolerance sounds a bit strange to me. What happens if you use 10^-5 right from the start? Be careful to also adjust your solvers’ tolerances. They should ideally always be at least two orders of magnitude tighter than the coupling tolerance.

@ksripathy are there any news regarding this issue? What was the solution for you?

I did not get the opportunity to check the restart issue, since the problems I faced turned out to be related to other factors. However, I assume that the initial sub-iterations after a restart will not converge irrespective of the tolerances, and that once a solution history has been built up by the IQN-ILS algorithm, the later sub-iterations should converge.

The issues I faced were due to my lack of understanding of how a typical coupled FSI simulation functions. After further discussions with my supervisor, I realized that it is acceptable for the sub-iterations not to converge in the initial time steps, since the areas of interest for the FSI validation cases are the steady and periodic zones of the results for the FSI1 and FSI2/3 cases, respectively. Therefore, I reduced the maximum number of sub-iterations from 50 to 20. I noticed that the sub-iterations start converging approximately from the 200th-1000th time step, depending on the resolution of the fluid and structure meshes.

Also, for simulations with finer meshes, the force residual struggles to converge to a relative tolerance of 10^-4, so each time step exhausts the maximum allotted number of sub-iterations and the simulation takes much longer. This was resolved by employing a much stricter tolerance for the displacement residual, say 10^-6, and a lenient tolerance for the force residual, say 10^-2. This resulted in smooth force data for the FSI1 and FSI2 test cases. For the FSI3 test case, however, the lift and drag curves are superimposed with small high-frequency perturbations, which affect the accuracy of the estimated lift/drag mean and amplitude. These perturbations decay when a stricter tolerance is employed for the force residual, which is a surprising finding, since for serial coupling with IQN-ILS post-processing on the displacement residual, the force residual is a consequence of the interface displacement input at that sub-iteration.
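For anyone trying the same, the split tolerances can be expressed with two convergence measures in the coupling scheme; the fluid mesh and force data names below are placeholders for the ones in your own config:

```xml
<coupling-scheme:serial-implicit>
  <!-- ... participants, time window size, exchanges ... -->
  <!-- strict on the quantity IQN-ILS acts on, lenient on the force -->
  <relative-convergence-measure limit="1e-6" mesh="Calculix_Mesh" data="Displacements0" />
  <relative-convergence-measure limit="1e-2" mesh="Fluid_Mesh" data="Forces0" />
</coupling-scheme:serial-implicit>
```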

On an unrelated note, I wanted to mention that for coupled 2D FSI simulations, it is advisable for the structure mesh to employ first-order elements, since second-order brick elements result in the failure of the displacement mapping from the structure to the fluid interface. It is also advisable for the fluid mesh not to have high-aspect-ratio cells in regions of large mesh motion, since the Laplacian mesh motion algorithm there generates cells of very high skewness, causing the OpenFOAM solver to crash.

Apologies for the delayed response regarding this issue, and I thank @KyleDavisSA and @uekerman for their inputs.

Best wishes,
Kiran Sripathy.


Thanks for the update, @ksripathy
Quick remark: your convergence behavior still sounds a bit odd. Be sure to start from a pre-computed fluid field; IQN-ILS has problems if forces and displacements are close to zero, because then everything becomes ill-conditioned. Also be sure to use a convergence criterion in OpenFOAM that is at least two orders of magnitude stricter than the coupling convergence criterion. For example, if you use a relative criterion of 1e-4 for the forces, the velocities in OpenFOAM should converge to a relative criterion of 1e-6. Otherwise, IQN-ILS does not get the "direction" right.
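In OpenFOAM, this can be enforced through the outer-corrector residual controls in system/fvSolution, roughly like the sketch below (field names and values are illustrative, not taken from the attached case):

```
// system/fvSolution (sketch)
PIMPLE
{
    nOuterCorrectors    50;

    // stop outer iterations only once the fields are two orders of
    // magnitude tighter than the 1e-4 coupling criterion
    residualControl
    {
        U
        {
            tolerance   1e-6;
            relTol      0;
        }
        p
        {
            tolerance   1e-6;
            relTol      0;
        }
    }
}
```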

Thank you for this information! What you wrote was very interesting to me.
Do you have any sources for what you mentioned about first- and second-order elements, as well as the aspect ratio? Then I could use them for my thesis.

Would be really nice!

Thank you!



Has anyone tried the restart function successfully? I did the initialization, and my problem is the same:
the restarted coupled simulation does not converge in the first time step and crashes in the second. It seems that the Force0 data sent back to CalculiX is not right; it is much larger than the correct value (I found this in Calculix_Mesh-Fluid.dt0.vtk). This leads to the divergence of CCX. So I copied the flow field files before the second iteration and checked them. The pressure field is clearly wrong; the pressure around the FSI surface is about 10 times the normal value. However, OpenFOAM converges with no problems. Does anyone know the reason?


I have this same issue. Any updates on why this might be happening?