Sorry for all the questions this week, but I’m working a lot with preCICE at the moment. I am having some issues with a case I am setting up: I think the temperature/heat flux is not passed through correctly. I have made a small test case to check where it goes wrong, but I cannot find the problem.

The test case is a rotating plate in air. OpenFOAM is the fluid domain, Nutils is the plate. The models are axisymmetric (I don’t know if this can cause any of the problems). Since another plate will be added in the real case, with another Nutils participant, I cannot run the coupling in serial. In serial the case seemed to be more stable, but in parallel the Nutils calculation seems to blow up. I have used an implicit coupling scheme in an attempt to stabilize the heat flux, but it doesn’t help for the parallel computation. Do you know if there is anything wrong with my communication of heat fluxes to OpenFOAM? There should be no reason for instability, right? Nutils gets its boundary conditions from the OpenFOAM case, and these aren’t very strange; in the beginning I think it is just ambient temperature. Since Nutils solves the case fully, it shouldn’t randomly become unstable?

I will post the files below.

Nutilsfile: Nutilsfile.txt (6.7 KB)
xmlfile: precice-config.xml (2.1 KB)
OpenFOAM (where the yellow line is the interface; note that this case is axisymmetric, with the symmetry axis on the left):

A parallel coupling scheme with only one data field used for acceleration (`<data mesh="OpenFOAM-Mesh" name="Heat-Flux" />`) is not very stable. Also, Aitken acceleration does not work very well for parallel coupling schemes. For a parallel coupling scheme, better use a quasi-Newton acceleration. For example:
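A sketch of what such a quasi-Newton acceleration block could look like (preCICE v2 syntax; the data and mesh names are taken from this thread, the numeric values are only illustrative starting points):

```xml
<acceleration:IQN-ILS>
  <!-- accelerate both coupling data fields, not just one -->
  <data name="Heat-Flux" mesh="OpenFOAM-Mesh"/>
  <data name="Temperature" mesh="OpenFOAM-Mesh"/>
  <initial-relaxation value="0.1"/>
  <max-used-iterations value="50"/>
  <time-windows-reused value="10"/>
  <filter type="QR2" limit="1e-3"/>
</acceleration:IQN-ILS>
```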

Ah yes, I have not used intra-solver parallelization yet; I wanted to make it work without it first. At the moment my main OpenFOAM case (not the test case I presented above) is coupled with two Nutils participants, so if I am correct this requires inter-solver parallelization? The two Nutils participants then run in parallel, which also makes the OpenFOAM solver run in parallel with Nutils. This seems to make the case unstable. When I tried it with one Nutils participant (the test case above) instead of two, it seemed stable with serial inter-solver coupling, but not with parallel coupling. Hopefully this makes my problem clear?

Okay, thank you, I will try to bring both data fields to convergence!

I have tried the more stable acceleration scheme, but without success. I think there might be an issue with the coupling of the mesh with axisymmetry (the face area goes to 0 near the symmetry axis), or with the way I am providing the heat flux for the axisymmetric mesh. I don’t know whether this axisymmetry should pose an issue? Otherwise I will have to look at how I provide the heat flux. The problem isn’t that complex, so it shouldn’t be an issue to couple the temperature at the surface…

Axisymmetry should not be a problem; preCICE doesn’t see this.
Which error do you currently get?
To get some more insight, could you please upload your preCICE config, the iterations file, and the convergence file (preCICE output files) for both serial and parallel coupling? Thanks!

I took some complexity out of the case and am now only coupling the right boundary (the vertical yellow part in the drawing from the earlier post). The first timestep seems to run fine, including the inner iterations. However, as the simulation continues to the next timesteps, it just produces unreal values. (I printed the temperature and heat flux values to screen.)

The array starting with 3183.9 shows the incoming temperature, and the array below it shows the heat flux. It is simply not physical anymore for a problem without a very large temperature difference.

I have got the simple case running with the parallel implicit coupling scheme! I increased max-iterations to 100, and in the IQN-ILS acceleration scheme I also increased max-used-iterations to 100. I did this because the solution did not seem able to converge within the previously allowed iterations, and it seemed to work! I can see that the heat flux has the hardest time converging (I don’t know why, maybe different mesh sizes?), but after a number of iterations it does. I do not fully understand why (and I need to look more closely at how the convergence criterion is determined in preCICE), so I will read some more documentation to find out. I also used a better initial guess for the OpenFOAM flow/pressure field (that can never hurt).
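For reference, the two changes described above would sit roughly here in the config (the values are as stated; surrounding tags are omitted):

```xml
<coupling-scheme:parallel-implicit>
  <!-- allow more coupling iterations per time window -->
  <max-iterations value="100"/>
  <acceleration:IQN-ILS>
    <!-- keep more columns in the quasi-Newton system -->
    <max-used-iterations value="100"/>
    <!-- remaining acceleration settings unchanged -->
  </acceleration:IQN-ILS>
</coupling-scheme:parallel-implicit>
```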

I will now try the multi-coupling scheme for the two meshes, but I think that with enough iterations it should converge!

Thank you for your help! I will report back if the multi-scheme yields me any trouble.

Those are pretty strict. The inner convergence criteria, in your case the ones in OpenFOAM, should be two orders of magnitude stricter than this one, so 1e-7 as a relative criterion. Otherwise, the quasi-Newton scheme has problems finding the right Jacobian. Try making the OpenFOAM criteria stricter, or relax these ones.
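In config terms, the rule of thumb looks like this (the 1e-5 limit here is only an illustration of a coupling-level criterion that the inner solver tolerance should undercut by two orders of magnitude; data and mesh names are placeholders):

```xml
<!-- coupling-level criterion in precice-config.xml -->
<relative-convergence-measure limit="1e-5"
    data="Heat-Flux" mesh="OpenFOAM-Mesh"/>
<!-- the linear-solver tolerance inside OpenFOAM
     should then be around 1e-7 -->
```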

I will, thank you! I also made the mistake that one Nutils iteration has a timestep of 1, while my OpenFOAM iteration also has a timestep of 1. Since Nutils solves its problem in one go, and OpenFOAM solves iteratively, I should first obtain a converged OpenFOAM solution (which requires many more iterations, especially at the beginning; I will probably give it a huge number of iterations to make sure it converges) and then solve Nutils once. Then OpenFOAM has to iterate again towards a converged solution with the updated Nutils data. My OpenFOAM solution wasn’t converged yet, and that caused some issues as well.

In my old post I stated it wrongly. Of course the criterion is the one of the linear solver in OpenFOAM, not the one of the SIMPLE algorithm.

I have used your tips and I think it will work! I have run the first 20 iterations and it seems to converge well. I have set the timestep in OpenFOAM to 0.1, so it iterates 10 times instead of once per preCICE iteration (I wanted to try whether this works, to speed up the simulation). I had some doubts when it performed the first new timestep after the preCICE iterations, but after the first few new preCICE iterations it converged very fast and I had no weird values! I increased the following things:

Yes, I will do that! I still suspect that maybe the interface coupling from Nutils to OpenFOAM didn’t go entirely correctly (that is why I asked how to check this, but the export function did not give me much clarity). I have combined boundaries in Nutils (without stating the specific order in which the boundaries come). Since you stated that preCICE doesn’t work with mesh coordinates, but with vertices, how does it know at which position on the interface it has to place a value? (The interface I use is very similar to the flap.) Does using a nearest-projection mapping instead of nearest-neighbor solve this, so that it does use positions? Sorry if I am unclear, but I am trying to figure out how, for example, a point at a certain position in the OpenFOAM mesh knows it has to be coupled to that same point in the Nutils mesh, if it doesn’t use coordinates or anything.

I have found out what the problem is! It had to do with the heat flux input I was providing from Nutils to the preCICE adapter. I had two cases where this caused issues. In the first case, there is just an axisymmetric domain whose boundaries are connected to a stream (geometrically very similar to the FSI flap, but now heat conduction is the quantity of interest). There is no heat source in it, and there was little conduction in the beginning. When computing this nearly zero heat flux, the result provided by Nutils was the following:

It is basically 0, but it seems that this causes trouble in preCICE. Also, with increasing iterations the instability grows (this shouldn’t happen, right?).

Also, for the case with a heat source the convergence is a lot better, since these fluctuations matter a lot less.

Shouldn’t the acceleration scheme with an implicit coupling scheme make sure that the instability (in the first figure) doesn’t grow? The values of the fluctuations are so small that they don’t really matter (whether the flux is 0 or 1e-7 makes no difference). I think it has to do with the relative convergence criterion (which should perhaps be an absolute criterion)? Since there is barely any heat flux there, preCICE thinks the difference is very big, while in absolute terms it is not big at all. The only thing worrying me is that more iterations seem to amplify the problem.

Since you stated that preCICE doesn’t work with mesh coordinates, but with vertices, how does it know at which position on the interface it has to place a value?

No, the vertices are the coordinates. A vertex is simply a point in space (x, y, z), and on this vertex there is a value v. preCICE uses (x, y, z) to bring the value to the other side. The details are also in our reference paper.
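For completeness, the mapping that uses those vertex coordinates is configured per participant; a nearest-projection read mapping could look like this (the participant and mesh names here are placeholders, not taken from your config):

```xml
<participant name="Fluid">
  <use-mesh name="Solid-Mesh" from="Solid"/>
  <use-mesh name="OpenFOAM-Mesh" provide="yes"/>
  <!-- interpolate values from the solid mesh onto the fluid mesh
       by projecting onto the solid's surface elements -->
  <mapping:nearest-projection direction="read"
      from="Solid-Mesh" to="OpenFOAM-Mesh" constraint="consistent"/>
</participant>
```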

Shouldn’t the acceleration scheme with an implicit coupling scheme make sure that the instability (in the first figure) doesn’t grow? The values of the fluctuations are so small that they don’t really matter (whether the flux is 0 or 1e-7 makes no difference). I think it has to do with the relative convergence criterion (which should perhaps be an absolute criterion)?

Yes, the quasi-Newton scheme does not like it when everything is almost zero. The relative convergence measure has to do with that: you could try an absolute one instead. You could also restrict the max iterations.
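An absolute measure on the heat flux could look like this (the limit value is just an assumption; pick something below the flux scale that matters physically in your case):

```xml
<absolute-convergence-measure limit="1e-4"
    data="Heat-Flux" mesh="OpenFOAM-Mesh"/>
```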

Also, try using no initial underrelaxation in this case, since it only makes the problem worse:

<initial-relaxation value="1.0"/>

Finally, it could also be worth trying a different preconditioner, say
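For instance, the residual-sum preconditioner inside the quasi-Newton block (other types available in preCICE include constant, value, and residual):

```xml
<acceleration:IQN-ILS>
  <!-- ... data, relaxation, and filter settings as before ... -->
  <preconditioner type="residual-sum"/>
</acceleration:IQN-ILS>
```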

Thank you for the help! I got the simulation a lot more stable. The only thing it doesn’t like is when I increase the value of the thermal conductivity k in the solid. For some reason, if the difference in thermal conductivity between the fluid and the solid gets too big, it struggles.

That’s a known “feature”. It could be worth a try to swap the roles of solid and fluid in terms of boundary conditions.
Read more about this in the linked article.

I will try this. I had already tried putting a constant, very strong underrelaxation on it for higher values of k, and that works, but this will be a much better solution in terms of convergence speed.

The multi-coupling scheme considers all coupling data as one large (interface) fixed-point equation. This means one relaxation factor for everything.
You could use different relaxation factors when you combine two parallel-implicit schemes, but this can be numerically unstable: “A plug-and-play coupling approach for parallel multi-field simulations” (Springer).