Rigid motion combined with deformation with OpenFOAM and CalculiX

Hello everyone,

I am trying to run an FSI simulation of an oscillating deformable wing: the wing undergoes a rigid rotation, and CalculiX computes its mechanical deformation on top of that. I attach the dynamicMeshDict file from OpenFOAM to make the setup clear: dynamicMeshDict.txt (1016 Bytes)

I had to increase the rotation speed slowly to make the simulation converge (as you can see in the file). With 3.49 rad/s from time 0, the simulation diverges with IQN but not with Aitken relaxation.

Here are the log files associated with the simulation using IQN and the provided dynamicMeshDict file:

Solid.log (1.2 MB)

As you can see, convergence is difficult to obtain at the beginning of the simulation, but afterwards fewer than 10 FSI iterations are needed, which is reasonable. It also seems that the slower the ramp-up, the faster the convergence.

I also attach the precice-config file so we can discuss the best parameters to use in such a case:
precice-config.xml (3.5 KB)

The IQN parameters may not be optimal. If you have advice for reducing the number of iterations, please do not hesitate to share it.

Thank you

Hi @fsalmon,
your setup is very interesting to me. I have just published a post about my current configuration for an FSI problem coupling OpenFOAM and MBDyn (FSI2 FSI3 Turek Hron benchmark comparison). I am also interested in reducing the number of iterations and improving overall performance, as well as in finding benchmarks to compare my results against those obtained with other structural solvers. Depending on the complexity of your domain, would you mind if we exchanged test cases and shared the results?
Thank you


Hi @Claudio,

I do not think I can share this case because it is part of a European project.
Moreover, it is not really a test case: the fluid mesh contains more than 15M cells, while the solid is composed of about 150k elements. I am running this case on 540 cores.

Have a good day

Hi @fsalmon

The TransferNow link has unfortunately expired. It would be interesting to see your preCICE iterations and convergence files, especially if you use v2.1: we added more information there with the last release.

For now, some quick tips:

<m2n:sockets from="Fluid" to="Calculix" distribution-type="gather-scatter"/>

You should remove distribution-type="gather-scatter" if you use preCICE >= v2.0, especially if you run on many cores.
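For reference, with preCICE v2 the same line would then simply read (participant names kept from the snippet above):

```xml
<m2n:sockets from="Fluid" to="Calculix"/>
```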

<absolute-convergence-measure limit="1e-5" data="Displacements0" mesh="Calculix_Mesh"/> 
<absolute-convergence-measure limit="1e-5" data="Forces0" mesh="Calculix_Mesh"/>  

Make sure that both your solvers converge two orders of magnitude tighter than that; only then will IQN perform well. I have always preferred to work with relative convergence measures. Are you sure that displacements and forces really “live on the same scale”?
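As a sketch, relative measures for the same data would look like this (the 1e-4 limits are placeholders to tune for your case, not recommendations from this thread):

```xml
<relative-convergence-measure limit="1e-4" data="Displacements0" mesh="Calculix_Mesh"/>
<relative-convergence-measure limit="1e-4" data="Forces0" mesh="Calculix_Mesh"/>
```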

<extrapolation-order value="2"/> 

In my experience, extrapolation only helps if the case already converges quite well. Otherwise it might degrade performance.

<preconditioner type="residual-sum"/>

You only need this option if you use parallel-implicit coupling. In my experience, however, parallel-implicit is faster than serial-implicit, not in terms of iterations, but in terms of time-to-solution. That only holds if the case is load-balanced, though, i.e. with many fluid ranks.
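Schematically, switching to parallel-implicit means changing the coupling-scheme tag; both data fields are then exchanged in the same coupling step. A minimal sketch with the participant and data names from the snippets above (v2 tag names; v1.x uses timestep-length instead of time-window-size, and the time values are placeholders):

```xml
<coupling-scheme:parallel-implicit>
  <participants first="Fluid" second="Calculix"/>
  <max-time value="..."/>
  <time-window-size value="..."/>
  <max-iterations value="50"/>
  <exchange data="Forces0" mesh="Calculix_Mesh" from="Fluid" to="Calculix"/>
  <exchange data="Displacements0" mesh="Calculix_Mesh" from="Calculix" to="Fluid"/>
  <!-- convergence measures and the IQN acceleration block
       (with the residual-sum preconditioner) go here -->
</coupling-scheme:parallel-implicit>
```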

<filter type="QR1" limit="1e-6"/>

This is something you want to tweak. In the iterations file you can see how many iterations are filtered out. You want to filter out a few in each timestep, but not too many :grin:

<max-used-iterations value="50"/>

That’s too small, especially if you have many dofs at the coupling interface. Try 100.
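Putting these acceleration tips together, the block might look roughly like this (v2 uses the acceleration: namespace, v1.x calls it post-processing:; the relaxation and reuse values are generic placeholders, not values from this thread):

```xml
<acceleration:IQN-ILS>
  <data name="Displacements0" mesh="Calculix_Mesh"/>
  <preconditioner type="residual-sum"/>
  <filter type="QR1" limit="1e-6"/>   <!-- tweak; check the iterations file -->
  <initial-relaxation value="0.1"/>
  <max-used-iterations value="100"/>  <!-- more history columns for many interface dofs -->
  <time-windows-reused value="10"/>   <!-- called "timesteps-reused" in v1.x -->
</acceleration:IQN-ILS>
```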


Sorry for the late answer, but I needed to run some tests.

I installed v1.6 on a national cluster and it was very painful, so I do not want to try v2.1 now while v1.6 does the job.

Actually, I had a problem in my simulation: the fixed part of the wing was not actually fixed in the solid simulation (my mistake). So convergence was more difficult and the fluid mesh deformed too much and too quickly. After about 100 time steps, the simulation crashed.

Despite that, I tried several parameterisations. An extrapolation order of 2 is the best choice for my case (quicker convergence). As for the filter, I am using QR2 with a limit of 1e-3. Finally, for the maximum used iterations, 50 is enough for me: I never exceed 35 iterations, even at the beginning.

I will try parallel-implicit.

Two things that are easy to mix up:

<max-used-iterations value="50"/>

is the number of columns (i.e. past iterations) that the IQN acceleration uses to approximate the Jacobian of the coupled interface system. If your interface has many dofs, larger values here can be beneficial (but not too large). 100 should be better.

<max-iterations value="50"/>

is the maximum number of allowed iterations per timestep.
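
In context, the two tags live at different levels of the configuration (a schematic sketch, v2 tag names):

```xml
<coupling-scheme:serial-implicit>
  ...
  <max-iterations value="50"/>         <!-- cap on coupling iterations per timestep -->
  <acceleration:IQN-ILS>
    ...
    <max-used-iterations value="100"/> <!-- history columns for the quasi-Newton approximation -->
  </acceleration:IQN-ILS>
</coupling-scheme:serial-implicit>
```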