IQN fails for quickly changing forces during wetting in a free-surface FSI problem

Continuing the discussion from Error in writing watch-point for received mesh:

I am afraid there is no easy answer to this. Your setup is really quite challenging.

  • Does it converge when you use much smaller values for the allowed columns and reused time windows?
<max-used-iterations value="15"/>
<time-windows-reused value="0"/>
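For reference, these settings sit inside the acceleration block, roughly like this (only a sketch; the preconditioner, filter, and initial relaxation are assumptions based on a typical IQN-ILS setup):

<acceleration:IQN-ILS>
   <data name="Displacements" mesh="Structure_Nodes"/>
   <preconditioner type="residual-sum"/>
   <filter type="QR2" limit="1e-2"/>
   <initial-relaxation value="0.1"/>
   <max-used-iterations value="15"/>
   <time-windows-reused value="0"/>
</acceleration:IQN-ILS>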
  • Using a <coupling-scheme:parallel-implicit> and then
<acceleration:IQN-ILS>
   <data  name="Displacements" mesh="Structure_Nodes" />
   <data  name="Forces" mesh="Structure_Nodes" />
   <preconditioner type="residual-sum"/>
	...
 </acceleration:IQN-ILS>

could maybe help
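Roughly, the full scheme could then look like this (only a sketch; the participant names Fluid and Solid and all numeric values are placeholders for your actual setup):

<coupling-scheme:parallel-implicit>
   <participants first="Fluid" second="Solid"/>
   <max-time value="0.85"/>
   <time-window-size value="1e-3"/>
   <max-iterations value="50"/>
   <exchange data="Forces"        mesh="Structure_Nodes" from="Fluid" to="Solid"/>
   <exchange data="Displacements" mesh="Structure_Nodes" from="Solid" to="Fluid"/>
   <relative-convergence-measure limit="1e-4" data="Displacements" mesh="Structure_Nodes"/>
   <relative-convergence-measure limit="1e-4" data="Forces"        mesh="Structure_Nodes"/>
   <acceleration:IQN-ILS>
      <data name="Displacements" mesh="Structure_Nodes"/>
      <data name="Forces"        mesh="Structure_Nodes"/>
      <preconditioner type="residual-sum"/>
      <filter type="QR2" limit="1e-2"/>
      <initial-relaxation value="0.1"/>
      <max-used-iterations value="15"/>
      <time-windows-reused value="0"/>
   </acceleration:IQN-ILS>
</coupling-scheme:parallel-implicit>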

  • Playing with the filter could also help:
<filter type="QR2" limit="1e-2"/>

Try “QR1” with “1e-5” as well, for example.
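That would be a one-line change to the element above:

<filter type="QR1" limit="1e-5"/>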

  • Is there any particular reason why you use
<relative-convergence-measure limit="1e-4" data="Displacements" mesh="Structure_Nodes"/>
<relative-convergence-measure limit="1e-2" data="Forces"        mesh="Structure_Nodes"/>

?
Why not use the same limit for both?
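For example (simply tightening the force limit to match; whether 1e-4 is feasible here is something you would have to test):

<relative-convergence-measure limit="1e-4" data="Displacements" mesh="Structure_Nodes"/>
<relative-convergence-measure limit="1e-4" data="Forces"        mesh="Structure_Nodes"/>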

  • Or maybe acceleration:IQN-IMVJ and then
<max-used-iterations value="15"/>
<time-windows-reused value="0"/>

is worth a try here.
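A minimal sketch of that variant (the preconditioner, filter, and initial relaxation are assumptions you would adapt to your case):

<acceleration:IQN-IMVJ>
   <data name="Displacements" mesh="Structure_Nodes"/>
   <preconditioner type="residual-sum"/>
   <filter type="QR2" limit="1e-2"/>
   <initial-relaxation value="0.1"/>
   <max-used-iterations value="15"/>
   <time-windows-reused value="0"/>
</acceleration:IQN-IMVJ>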

  • We could also experiment with a restart of the QN acceleration or with tuning the number of columns. But this is something we have not yet implemented. Could be an interesting research question for @KyleDavisSA?

Regarding the value at the top of the beam being very small: in this case the bottom of the beam seems to be pushed first, with the top following later, but the simulation ends before the top has a chance to move. That is something I think could be happening, but I am not entirely sure.

Looking at the log files, FASTEST seems to diverge. I am not familiar with FASTEST, but in the log file it says:

preparing geometry for Flow …
Check SCL (second order in time) …
+-------------------------------------------+
| Current time is: Fri May 22 15:32:47 2020 |
+-------------------------------------------+
+-------------------------------------------+
| Current time is: Fri May 22 15:32:49 2020 |
+-------------------------------------------+

when everything seems to converge, and:

preparing geometry for Flow …
Check SCL (second order in time) …
+-------------------------------------------+
| Current time is: Fri May 22 15:32:49 2020 |
+-------------------------------------------+
*** !!!Iterations diverged!!! ***

for the final time step. Is FASTEST running second order in time? Running first order in time could help stability, so it may be worth checking whether that works. But I would first try different filters and the IQN-IMVJ method Benjamin suggested. Also, for strongly coupled problems, the coupling stability sometimes improves when the time step is increased, at the expense of stability within each solver. If that is not a problem for the solvers, you could try that.

  • Is there any particular reason why you use
<relative-convergence-measure limit="1e-4" data="Displacements" mesh="Structure_Nodes"/>
<relative-convergence-measure limit="1e-2" data="Forces"        mesh="Structure_Nodes"/>

Not really, but if I use a stricter tolerance for the forces, the simulation takes much longer and the results are not better.

On the other hand, the parallel coupling is less stable than the serial coupling for this case. I was using IQN-ILS with time-windows-reused set to 0 and max-used-iterations set to 25, which makes the simulation more stable, but not enough to reach the required end time (0.85). See the video:

https://drive.google.com/file/d/1GUvul8tab72HvaYIqBgiMpLdvkIIXzlR/view?usp=sharing

The trouble starts when the beam should swing back and begins to oscillate.

Concerning the IQN-IMVJ acceleration, I tested it, but it appears less effective than IQN-ILS.

Regarding the acceleration methods: how can I correctly choose the filter limit, the number of reused iterations, and the number of reused time windows?

I think this test case is very tricky; if I change something in the coupling configuration, I run into the following problems:

  1. If the time step is larger, CalculiX converges very fast and the simulation is more stable, but FASTEST diverges because the multiphase model does not support very large time steps.
  2. If the time step is smaller and appropriate for the multiphase fluid, CalculiX diverges after a few coupling iterations.
  3. If the grids are coarse in both programs, I can simulate the test case, but the results are not accurate enough. Therefore, I am trying to simulate the case using finer grids. What would be a good ratio between the fluid and structure elements at the fluid-structure interface?
  4. This test case is a 2D case (0.8 m x 0.8 m fluid domain, 0.004 m x 0.09 m rubber beam). I only have one cell in the z-direction (for FASTEST), and z-dead="true" in the mapping setup. However, I do not know what the appropriate width of the grid in the z-direction is. On the one hand, if I use a very small value, for example 0.002 or 0.006, the grid aspect ratio for the fluid domain is fine, but the RBF mapping diverges; to work, it requires at least two elements (C3D20R or C3D8I) in the z-direction of the structure. On the other hand, if the width is larger, e.g. 0.2 m, the RBF mapping works with only one element in the z-direction of the structure, but I notice in the vtk files that the displacement differs between parallel points, which is not logical if the case is 2D. My question is: how can I decide the appropriate width for a quasi-2D case, and how can I avoid these unequal displacements? To clarify my question, I attach the force and displacement values for two parallel points. The forces received are equal, but the displacements are unequal, which results in a complicated deformation of the fluid grid.

  5. Finally, in this case the structure is a rubber beam (E = 3.5 MPa, nu = 0.5). I am still using the linear elastic model (with nu = 0.49) and non-linear geometric effects are considered (NLGEOM). Do you think that is adequate, or should a hyperelastic model be used? For the hyperelastic models, however, I do not have the complete set of parameters they require. Do you perhaps have more information about the constants for a rubber material?

The link is unfortunately private. I guess you have to adjust some sharing option in Google Drive.

There is no simple answer to this. Some guidelines:

  • The more past information you use, the better the performance in general (as long as you do not use too much; then the conditioning gets bad). But the coupling scheme then also has more trouble reacting to sudden changes. An interesting future direction of research would be to make IQN more adaptive.
  • The filter should do its job, meaning it should filter out neither too little nor too much. You can check the number of filtered-out columns per time step in the “iterations” file of the second participant.

This sounds like an interesting use case for the waveform relaxation we want to implement in preCICE, @BenjaminRueth's PhD project. There are preliminary results here, but it might still take a year until we have this directly in preCICE. At the moment, there is no good solution for this.

Well, a hard question again. The black-box philosophy would say that both solvers should use whatever mesh is best for them, so as to balance the error between both, and the data mapping then just has to cope with those meshes. But, of course, the meshes also have an influence on the mapping error.
Maybe some hope here: we are currently working on an artificial testing environment for data mappings, so that you can test mappings between your meshes without needing to run the complete coupled simulation. In the end, things like this simply need a lot of testing and comparison.

This is odd. If you have z-dead="true", the RBF mapping should not see the width in the z-direction at all. Could CalculiX have trouble with too small aspect ratios?
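For reference, such a mapping is typically configured along these lines (only a sketch; the mesh names, basis function, direction, and constraint are placeholders for your actual configuration):

<mapping:rbf-thin-plate-splines direction="read" from="Fluid_Mesh" to="Structure_Nodes"
                                constraint="consistent" z-dead="true"/>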

You do give force values in the z-direction. CalculiX then also computes a deformation in the z-direction and, thus, you get different displacements along the z-axis. Could an easy solution be to set the z-components of the forces to 0?

That is, unfortunately, beyond my expertise. Maybe @KyleDavisSA knows more. Or you could ask on the CalculiX mailing list.