Hi:

I wonder if there is a consistent way of determining the coupling dt?

A large coupling dt can make a strongly coupled FSI process unstable, but a dt that is too small wastes computational resources when the coupling is not that strong. The usual solver strategy of failure → cut dt in half → retake the step would require preCICE to let both solvers retake the step, which seems formidable in preCICE, however.

Any advice would be appreciated!

Yuxiang

That’s a good question.

Think of the coupling dt as a window size. Currently this window size is fixed (but we might make it adaptive in the future). There is probably no perfect recipe for how to choose the window size. The trade-off you describe is valid.
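For reference, the window size is one value in the preCICE configuration file. A minimal sketch of the relevant section (participant names `Fluid` and `Solid` and the numbers are placeholders, and the tag names follow the preCICE v2 syntax, so check against the version you use):

```xml
<coupling-scheme:serial-implicit>
  <participants first="Fluid" second="Solid" />
  <max-time value="10.0" />
  <!-- the coupling "window size" discussed here -->
  <time-window-size value="1.0" />
  <max-iterations value="50" />
</coupling-scheme:serial-implicit>
```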

Inside a window, both solvers can use their own timestep sizes until they reach the end of the window. In an implicit coupling loop, they can also use different timestep sizes in each iteration.

An example:

- Window size 1.0.
- The fluid solver does 2 timesteps of 0.5, the structure solver 1 timestep of 1.0.
- We don’t converge, and the adaptive timestep criterion of the fluid solver suggests smaller steps.
- In the next iteration, we do 4 timesteps of 0.25 in the fluid solver and again 1 timestep in the structure solver.
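The subcycling logic above can be sketched in a few lines. This is a toy illustration of the principle (each solver steps with its own dt, but never past the end of the coupling window), not actual preCICE adapter code:

```python
def run_window(t_start, window_size, solver_dt, step):
    """Advance one solver through a coupling window, subcycling as needed.

    solver_dt: the solver's preferred (possibly adaptive) timestep size.
    step:      callback that advances the solver state by dt.
    Returns the number of timesteps taken inside the window.
    """
    t = t_start
    t_end = t_start + window_size
    n = 0
    while t < t_end - 1e-12:
        # never step past the end of the coupling window
        dt = min(solver_dt, t_end - t)
        step(dt)
        t += dt
        n += 1
    return n

# The example above: fluid does 2 steps of 0.5, structure 1 step of 1.0.
assert run_window(0.0, 1.0, 0.5, lambda dt: None) == 2
assert run_window(0.0, 1.0, 1.0, lambda dt: None) == 1
# Next iteration, after the adaptive criterion suggests 0.25:
assert run_window(0.0, 1.0, 0.25, lambda dt: None) == 4
```

Note that a solver whose preferred dt does not divide the window evenly simply takes a shortened final step to land exactly on the window end.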

All this is already supported by preCICE right now. This wiki page gives more information on the configuration. The current limitation is that for such subcycling within one window, we currently use a constant extrapolation, which typically deteriorates the order of both solvers to one and which might also introduce stability problems. We are working on a general interpolation scheme, see e.g. this paper. A rough estimate is that we will have this working in preCICE by the end of 2020.
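To see why constant extrapolation costs an order of accuracy, here is a toy experiment (nothing preCICE-specific: it just reconstructs a smooth coupling quantity, here `exp(t)`, at a substep inside a window from the window-boundary values alone). Constant reconstruction has error O(h) in the window size h, while a linear reconstruction has error O(h²):

```python
import math

def interp_error(h, mode):
    """Error of reconstructing exp(t) at the midpoint of a window [0, h],
    given only the values at the window boundaries."""
    f0, f1 = math.exp(0.0), math.exp(h)
    tm = 0.5 * h
    if mode == "constant":
        approx = f0                      # constant extrapolation of old value
    else:
        approx = f0 + (f1 - f0) * tm / h # linear interpolation in time
    return abs(math.exp(tm) - approx)

# Halving the window size roughly halves the constant error (order 1)
# but roughly quarters the linear error (order 2).
ratio_const = interp_error(0.2, "constant") / interp_error(0.1, "constant")
ratio_lin = interp_error(0.2, "linear") / interp_error(0.1, "linear")
```

A solver subcycling inside the window sees this reconstruction error at every substep, which is why the overall scheme drops to first order with constant extrapolation.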

Did I answer your question?

Hi Ben, thanks for your comments.

In a strongly coupled case, such as FSI benchmark case 3, if one sets the window size too large, say 1.0, the fluid solver can usually converge when an implicit solver is used, but the solid solver is hard to converge even with implicit subiterations. It then cuts its timestep in half and recomputes, and one finds the solid solver failing again and again, reducing dt to something formidably small, so the FSI process never converges. I suppose that in a strongly coupled case the window size should be small, to ensure information is exchanged frequently in a quickly evolving environment?

Yes, this sounds reasonable.

For the new time interpolation scheme that we are currently developing (link to the paper above), this might change, however. Our current experiments show that synchronizing less frequently has no crucial negative influence.