Instability in parallel coupling scheme

Hello!

Sorry for all the questions this week, but I’m working a lot with preCICE at the moment. I am having some issues with a case I am setting up: I think the temperature/heat flux is not passed through correctly. I have made a small test case to check where it goes wrong, but I cannot find the problem.

The test case is a rotating plate in air. OpenFOAM is the fluid domain, Nutils is the plate. The models are axisymmetric (I don’t know if this can cause any of the problems). Since another plate will be added in the real case with another Nutils participant, I cannot run the coupling in serial. In serial it seemed to be more stable, but in parallel the calculation in Nutils seems to blow up. I have used an implicit coupling scheme in an attempt to stabilize the heat flux, but it doesn’t help for the parallel computation. Do you know if there is anything wrong with my communication of heat fluxes to OpenFOAM? There should be no reason for it to become unstable, right? Nutils gets its boundary conditions from the OpenFOAM case, and these are nothing unusual; in the beginning it is basically just ambient temperature. Since Nutils solves its part fully, it shouldn’t randomly become unstable.

I will post the files below.

Nutils file:
Nutilsfile.txt (6.7 KB)
preCICE config:
precice-config.xml (2.1 KB)
OpenFOAM case (the yellow line is the interface; note that the case is axisymmetric, with the symmetry axis on the left):

Thank you in advance!

Kind regards,

Cars

Sorry for all the questions this week, but I’m working a lot with preCICE at the moment.

No worries. Happy to help whenever time allows :blush:

I do not yet fully understand your case. It is important to understand that there are two levels of parallelization:

  • intra-solver parallelization: a single solver runs in parallel, for example via MPI on distributed data
  • inter-solver parallelization: the coupled solvers run simultaneously, meaning you use a parallel coupling scheme
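
Only the second point is visible in the preCICE config: the coupling-scheme tag selects it, while intra-solver parallelization is just a matter of how you launch each solver (e.g. via mpirun). As a minimal sketch, with the participant names from your config:

  <!-- serial coupling scheme: the participants run one after the other -->
  <coupling-scheme:serial-implicit>
    <participants first="OpenFOAM" second="Nutils"/>
    ...
  </coupling-scheme:serial-implicit>

  <!-- parallel coupling scheme: both participants run simultaneously -->
  <coupling-scheme:parallel-implicit>
    <participants first="OpenFOAM" second="Nutils"/>
    ...
  </coupling-scheme:parallel-implicit>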

Using these terms, could you please again describe what works and what does not?

Looking at your config:

  <coupling-scheme:parallel-implicit>
    ...
    <participants first="OpenFOAM" second="Nutils"/>
    <exchange data="Temperature" mesh="OpenFOAM-Mesh" from="OpenFOAM" to="Nutils"/>
    <exchange data="Heat-Flux" mesh="OpenFOAM-Mesh" from="Nutils" to="OpenFOAM"/>
    <relative-convergence-measure limit="1.0e-5" data="Temperature" mesh="OpenFOAM-Mesh"/>
    <acceleration:aitken>
      <data mesh="OpenFOAM-Mesh" name="Heat-Flux" />
      <initial-relaxation value="0.01" />
    </acceleration:aitken>
  </coupling-scheme:parallel-implicit>

A parallel coupling scheme with only one data field used for acceleration (<data mesh="OpenFOAM-Mesh" name="Heat-Flux" />) is not very stable. Also, Aitken acceleration does not work very well for parallel coupling schemes. For a parallel coupling scheme, it is better to use a quasi-Newton acceleration. For example:

    <acceleration:IQN-ILS>
      <data mesh="OpenFOAM-Mesh" name="Temperature" />
      <data mesh="OpenFOAM-Mesh" name="Heat-Flux" />
      <preconditioner type="residual-sum"/>
      <filter type="QR1" limit="1e-6"/>
      <initial-relaxation value="0.1"/>
      <max-used-iterations value="80"/>
      <time-windows-reused value="10"/>
    </acceleration:IQN-ILS>

More information: Redirecting

Ah yes, I have not used intra-solver parallelization yet; I wanted to make the case work without it first. At the moment my main OpenFOAM case (not the test case I presented above) is coupled with two Nutils participants, so if I understand correctly this requires inter-solver parallelization? That way the two Nutils participants can run in parallel, and OpenFOAM also runs in parallel with Nutils. This seems to make the case unstable. When I tried it with one Nutils participant (the test case above) instead of two, it seemed stable with a serial coupling scheme for the inter-solver coupling, but not with a parallel one. Hopefully this makes my problem clear?

Okay, thank you, I will try to bring both data fields to convergence!

I have tried the more stable acceleration scheme, but without success. I think there might be an issue with the coupling of the mesh with axisymmetry (the face area goes to 0 near the symmetry axis), or with the way I am providing the heat flux for the axisymmetric mesh. I don’t know whether the axisymmetry should be an issue? Otherwise I will have to look at how I provide the heat flux. The problem isn’t that complex, so it shouldn’t be an issue to couple the temperature at the surface…

Axisymmetry should not be a problem; preCICE does not see it.
Which error do you currently run into?
To get some more insight, could you please upload your preCICE config, the iterations file, and the convergence file (preCICE output files) for both serial and parallel coupling? Thanks!

I took some complexity out of the case and am only coupling the right boundary (the vertical yellow part in the drawing from the earlier post). In the first time step the case seems to run fine, also in the inner iterations. However, as the time stepping continues, the simulation just produces unreal values. (I printed the temperature and heat flux values to screen.)

In the array that starts with 3183.9 the incoming temperature is shown, and in the array below it the heat flux. It is just not physical anymore for a problem without a very large temperature difference.

I will give the files of my case below:

precice-config:
precice-config.txt.xml (2.5 KB)

Nutils files:
precice-Nutils-convergence.log (16.7 KB)
precice-Nutils-iterations.log (420 Bytes)
cht.txt (6.8 KB)

Openfoam files:
precice-OpenFOAM-iterations.log (244 Bytes)

Hopefully this is enough information? Otherwise I can also post more of the OpenFOAM case. (It is not possible to upload a folder, is it?)

edit:

It seems that at a certain interval it structurally goes wrong (the colour scale shows temperature).

I got the simple case running with the parallel implicit coupling scheme! I increased max-iterations to 100, and in the IQN-ILS acceleration scheme I also increased max-used-iterations to 100. I did this because the solution did not seem able to converge within the previously allowed number of iterations. It seemed to work! I can see that the Heat-Flux has the hardest time converging (I don’t know why, maybe different mesh sizes?), but after a number of iterations it does. I do not fully understand why (I also need to look more closely into how the convergence criterion is determined in preCICE), so I will read some more documentation to find out. I also used a better initial guess for the OpenFOAM flow/pressure field (that can never hurt).

I will now try the multi coupling scheme for the case with two Nutils participants, but I think that with enough iterations it should converge!
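
For reference, what I have in mind is roughly the following (just a sketch, not my actual config; Nutils1/Nutils2 and the mesh names are placeholders, and I have abbreviated the convergence measures and acceleration):

  <coupling-scheme:multi>
    <participant name="OpenFOAM" control="yes"/>
    <participant name="Nutils1"/>
    <participant name="Nutils2"/>
    <exchange data="Temperature" mesh="OpenFOAM-Mesh1" from="OpenFOAM" to="Nutils1"/>
    <exchange data="Heat-Flux" mesh="OpenFOAM-Mesh1" from="Nutils1" to="OpenFOAM"/>
    <exchange data="Temperature" mesh="OpenFOAM-Mesh2" from="OpenFOAM" to="Nutils2"/>
    <exchange data="Heat-Flux" mesh="OpenFOAM-Mesh2" from="Nutils2" to="OpenFOAM"/>
    ...
  </coupling-scheme:multi>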

Thank you for your help! I will report back if the multi-scheme yields me any trouble.

Great to hear! :tada:

I had a quick look at your config:

<max-used-iterations value="40"/>

That’s the number of columns that the QN uses for the low-rank approximation of the Jacobian. 40 is pretty low. 100 is definitely better.

The first timestep is indeed often the complicated one.

<relative-convergence-measure limit="1.0e-5" data="Temperature" mesh="OpenFOAM-Mesh"/>
<relative-convergence-measure limit="1.0e-5" data="Heat-Flux" mesh="OpenFOAM-Mesh"/>

Those are pretty strict. The inner convergence criteria, in your case the ones in OpenFOAM, should be about two orders of magnitude stricter than this one, so 1e-7 as a relative criterion. Otherwise, the QN has trouble finding the right Jacobian. Try to make the ones in OpenFOAM stricter or relax these.
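
For illustration only, relaxing the coupling measures could look like this (placeholder values; they should stay roughly two orders of magnitude looser than the OpenFOAM solver tolerances):

<relative-convergence-measure limit="1.0e-3" data="Temperature" mesh="OpenFOAM-Mesh"/>
<relative-convergence-measure limit="1.0e-3" data="Heat-Flux" mesh="OpenFOAM-Mesh"/>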

I will, thank you! I also made the mistake that one Nutils iteration has a time step of 1, while my OpenFOAM iteration also has a time step of 1. Since Nutils solves its problem at once and OpenFOAM solves it iteratively, I should first have a converged solution for OpenFOAM (which requires a lot more iterations, especially at the beginning; I will probably give it a huge number of iterations to make sure it converges) and then solve Nutils once. Then OpenFOAM has to iterate again to a converged solution with the updated Nutils values. My solution in OpenFOAM wasn’t converged yet, and that caused some issues as well.

In my earlier post I stated it wrong: of course the criterion is the one of the solver in OpenFOAM, not the one of the SIMPLE algorithm.

I have used your tips and I think it will work! I ran the first 20 iterations and it seems to converge well. I have set the time step in OpenFOAM to 0.1 (so it iterates 10 times per preCICE iteration instead of once; I wanted to try out whether it works, so the simulation speeds up). I had some doubts when it performed the first new time step after the preCICE iterations, but after the first few preCICE iterations there it converged very fast and I had no weird values! I increased the following:

<max-iterations value="500"/>
<max-used-iterations value="250"/>
<time-windows-reused value="100"/>

and decreased relative convergence measures.

My convergence log seems to show that it works well.

precice-OpenFOAM-convergence.log (67.7 KB)

precice-OpenFOAM-iterations.log (1.1 KB)

precice-config.xml (4.0 KB)

Thank you for your help!

Kind regards,

Cars

It still seems to have a hard time converging. You would be better off with something like

<max-used-iterations value="100"/>
<time-windows-reused value="10"/>

You could also try to play with the filter. Maybe QR1 with 1e-5 or QR2 with 1e-3 - 1e-2.

Yes, I will do that! I still suspect that maybe the interface coupling from Nutils to OpenFOAM didn’t go entirely correctly. (That is why I asked how to check that; the export function did not give me much clarity.) I have combined a boundary in Nutils (without stating the specific order in which the boundaries come). Since you stated that preCICE doesn’t work with mesh coordinates but with vertices, how does it know at which position of the interface it has to place a value? (The interface I use is very similar to the flap.) Does using a nearest-projection mapping instead of nearest-neighbor solve this, so that it does use positions? Sorry if I am unclear, but I am trying to figure out how, for example, a random point at a certain position in the OpenFOAM mesh knows that it has to be coupled to that same point in the Nutils mesh if preCICE doesn’t use coordinates or anything.

Hey!

I have found out what the problem is! It had to do with the heat flux input I was providing from Nutils to the preCICE adapter. I had two cases where this caused issues. In the first case there was just an axisymmetric domain whose boundaries are connected to a stream (geometrically very similar to the FSI flap, but now heat conduction is the area of interest). There is no heat source in it, and there was little conduction in the beginning. When computing this heat flux of almost 0, the result that Nutils provided was the following:

It is basically 0, but it seems that this causes trouble in preCICE. Also, with increasing iterations the instability grows (this shouldn’t happen, right?).

For the case with a heat source, on the other hand, the convergence is a lot better, since these fluctuations matter a lot less.

Shouldn’t the acceleration scheme with implicit coupling make sure that the instability (in the first figure) doesn’t grow? The values of the fluctuations are so small that they shouldn’t matter much (whether the flux is 0 or 1e-7 makes no difference). I think it has to do with the relative convergence criterion (which should perhaps be an absolute one)? Since there is not much heat flux there, preCICE thinks the difference is very big, while in absolute values this is not the case at all. The only thing worrying me is that more iterations seem to amplify the problem.

Hi @CarsvO,

Sorry for the late reply.

Since you stated that preCICE doesn’t work with mesh coordinates but with vertices, how does it know at which position of the interface it has to place a value?

No, the vertices are the coordinates. A vertex is simply a point in space (x,y,z). And on this vertex there is a value v. preCICE uses (x,y,z) to bring the value to the other side. The details are also in our reference paper: Redirecting
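
Concerning nearest-neighbor vs. nearest-projection: both use these vertex coordinates. Nearest-projection additionally needs connectivity (edges/faces) provided by the adapter, so it can interpolate along the surface instead of just snapping to the closest vertex. As a sketch of how this looks in the config (the Nutils mesh name here is an assumption, not taken from your files):

<mapping:nearest-neighbor direction="read" from="OpenFOAM-Mesh" to="Nutils-Mesh" constraint="consistent"/>
<!-- or, if the adapter defines connectivity on the interface mesh: -->
<mapping:nearest-projection direction="read" from="OpenFOAM-Mesh" to="Nutils-Mesh" constraint="consistent"/>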

Shouldn’t the acceleration scheme with implicit coupling make sure that the instability (in the first figure) doesn’t grow? The values of the fluctuations are so small that they shouldn’t matter much (whether the flux is 0 or 1e-7 makes no difference). I think it has to do with the relative convergence criterion (which should perhaps be an absolute one)?

Yes, the quasi-Newton scheme does not like it if everything is almost zero. The relative convergence measure also plays a role here: you could try to use an absolute one instead. You could also restrict the maximum number of iterations.
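
For example (the limit is a placeholder you would need to adapt to your heat flux magnitudes):

<absolute-convergence-measure limit="1e-4" data="Heat-Flux" mesh="OpenFOAM-Mesh"/>
<max-iterations value="30"/>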

Also try to use no initial underrelaxation in this case, since it only makes the problem worse:

<initial-relaxation value="1.0"/>

Finally, it could also be worth trying a different preconditioner, say

<preconditioner type="value"/>

Hey @uekerman,

Thank you for the help! The simulation is a lot more stable now. The only thing it doesn’t like is when I increase the value of the thermal conductivity k in the solid. For some reason, if the difference between the thermal conductivities of the fluid and the solid gets too big, the coupling doesn’t like it.

That’s a known “feature”. It could be worth trying to swap the roles of solid and fluid in terms of boundary conditions, i.e. which side receives the temperature and which side receives the heat flux.
Read more about this in: Redirecting
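
In the config, swapping would roughly mean flipping the exchange directions (a sketch based on the names from your earlier config; the read/write data in both adapters would have to be swapped accordingly):

<exchange data="Temperature" mesh="OpenFOAM-Mesh" from="Nutils" to="OpenFOAM"/>
<exchange data="Heat-Flux" mesh="OpenFOAM-Mesh" from="OpenFOAM" to="Nutils"/>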

Yes, thank you! I was literally just reading this paper; I found it suggested in the literature as soon as I noticed that k influenced the stability.

I will try this. I had already tried applying a very strong constant underrelaxation for higher values of k, and that works, but swapping the boundary conditions will be a much better solution in terms of convergence speed.

Would it also be possible to use a different relaxation factor for the Heat-Flux than for the Temperature? For now I see that I can only define one value for all data.

The multi-coupling scheme considers all coupling data as one large (interface) fixed-point equation. This means one relaxation factor for everything.
You could use different relaxation factors by combining two parallel-implicit schemes instead, but this can be numerically unstable: A plug-and-play coupling approach for parallel multi-field simulations | SpringerLink
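
A rough sketch of what such a composition could look like (Nutils1/Nutils2 and the relaxation values are placeholders, everything else is abbreviated):

  <coupling-scheme:parallel-implicit>
    <participants first="OpenFOAM" second="Nutils1"/>
    ...
    <acceleration:constant>
      <relaxation value="0.1"/>
    </acceleration:constant>
  </coupling-scheme:parallel-implicit>

  <coupling-scheme:parallel-implicit>
    <participants first="OpenFOAM" second="Nutils2"/>
    ...
    <acceleration:constant>
      <relaxation value="0.5"/>
    </acceleration:constant>
  </coupling-scheme:parallel-implicit>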