OpenFOAM-CalculiX FSI Diverging Sim Never Ends

I’m attempting to simulate blood flow through a flexible carotid artery. This is essentially the elastic-tube-3d tutorial, but with a bifurcation in the tube and prescribed flow waveforms at the inlet. Admittedly, I have not done anything to initialize the flow (initial velocity = [0, 0, 0] everywhere). I also haven’t played much with the material properties or the artery’s constitutive equation. For now, it’s just a linear elastic material with a density close to the fluid density. I realize this sets up a precarious coupling scenario, and I need to explore mapping methods, coupling schemes, and acceleration methods more. For now, I’m using nearest-neighbor mapping and parallel implicit coupling with Aitken acceleration. All of this, combined with my non-initialized flow and current time step size, is leading to what I’m interpreting as a diverging solution at the first time step.
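For reference, a parallel-implicit + Aitken setup like this looks roughly as follows in preCICE v2 syntax (a sketch, not my exact attached file; the mesh/data names and values below are placeholders, and the participant, mesh, data, and exchange definitions are omitted):

```xml
<coupling-scheme:parallel-implicit>
  <participants first="Fluid" second="Solid"/>
  <time-window-size value="1e-3"/>
  <max-iterations value="100"/>
  <acceleration:aitken>
    <data name="Displacement" mesh="Solid-Mesh"/>
    <initial-relaxation value="0.1"/>
  </acceleration:aitken>
</coupling-scheme:parallel-implicit>
```

The nearest-neighbor mappings live inside the participant blocks, e.g. `<mapping:nearest-neighbor direction="read" from="Solid-Mesh" to="Fluid-Mesh" constraint="consistent"/>`.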

It seems pretty clear that CalculiX is not converging and is attempting to shorten its time step (inducing sub-cycling) in order to converge (solid.log (39.4 KB))

It also seems pretty clear that OpenFOAM (pimpleFoam) is not converging either, but it is not trying to modify the time step. This is consistent with pimpleFoam behavior even when adjustTimeStep is on – the first time step is never adjusted. (fluid.log (794.9 KB))

I’ll certainly accept any advice on how to drive toward convergence, generally, but the real interest here is that the simulation never shuts down. There is no overall divergence error that shuts down preCICE or either of the individual solvers. It appears that OpenFOAM has reached another failed attempt to converge within 50 pimpleFoam outer iterations and is waiting for updated information from the CalculiX solver via preCICE. However, CalculiX gets stuck in its 4th iteration and never passes any data. I left this sim running for 12+ hours and it remained in this state. Am I just missing something within CalculiX to get it to report a divergent sim and shut down? I’m assuming this would then trigger a preCICE/OpenFOAM shutdown. If CalculiX never sends anything, is there an overall timeout parameter I can set within preCICE to shut down if data is never passed?

preCICE and adapter config files shared here:
precice-config.xml (2.5 KB)
preciceDict.txt (489 Bytes)
config.yml (225 Bytes)

Any advice is much appreciated.

A few observations, which hopefully help:

  • The OpenFOAM side is just waiting to receive data.
  • OpenFOAM reports Courant number 0 always, which is very suspicious, but explains why it does not try to adapt the time step size.
  • CalculiX or the CalculiX adapter seems to be stuck without any output. It does read/write data in previous iterations.

I have the feeling I have seen this situation before, but I cannot find the discussion right now.

Are the uncoupled simulations working correctly?

What happens if you reduce the density on the fluid side? Probably not related for this problem, but at least should work.

Does this only happen with implicit coupling? Only with Aitken?

I’m explicitly instructing OpenFOAM to not adapt its time step. I’m attempting to set a solver coupling time step that is equivalent to each solver’s time steps such that sub-cycling won’t be required and I can keep total wall time to a minimum. This may not prove possible.
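On the OpenFOAM side, the fixed-time-step setup amounts to something like this in `system/controlDict` (the deltaT value here is an assumption matching the coupling time window):

```
application     pimpleFoam;
deltaT          1e-3;   // equal to the preCICE time-window-size (assumed value)
adjustTimeStep  no;     // keep the fluid time step fixed
```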

Yes, I believe this was because I did not set a lower limit on the CalculiX time step and it was driving itself to zero? I’m not entirely sure, but something was obviously causing the CalculiX solver to diverge and leaving it to crunch hard on its solution.

Yes, I can successfully run OpenFOAM and CalculiX while decoupled.

I’ve yet to try this. I was able to overcome the hanging issue without needing to modify the fluid density. I am certainly stuck in the classic added mass scenario where the application dictates that my fluid and solid densities be almost equivalent.

I haven’t ventured to explicit coupling because I understand that the nature of my application has the best chance at convergence with implicit coupling. The low-Re, almost-equivalent-density nature of the problem means a tight coupling will be required to keep things in check, right? I’ve since switched from Aitken to IQN-ILS. I also switched to a nearest-projection mapping, hoping that may increase my displacement accuracy and improve convergence.
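For future readers, an IQN-ILS acceleration block looks roughly like this in preCICE v2 syntax (the values below are illustrative, not tuned settings; the data/mesh names are placeholders):

```xml
<acceleration:IQN-ILS>
  <data name="Displacement" mesh="Solid-Mesh"/>
  <initial-relaxation value="0.1"/>
  <max-used-iterations value="50"/>
  <time-windows-reused value="10"/>
  <filter type="QR1" limit="1e-8"/>
</acceleration:IQN-ILS>
```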

I’ve since changed my coupling configuration around a bit, but have yet to arrive at something that converges in fewer than 100 coupling iterations. Am I going to need to increase the time-window-size and possibly induce some solver sub-cycling? Any general principles I can follow for this type of an application?

Latest precice-config iteration:
precice-config.xml (3.4 KB)

I remembered where I have seen this before: Simulation gets stuck at same time window - #14 by rachaelesmith

Avoiding subcycling is always a good first debugging step! :+1:

Where do you see the divergence? You mean that it comes suddenly in the last time step and it never gets reported? It should still exit at the maximum number of iterations.

Definitely these are steps that can help the convergence, but I am just trying to clear any software or configuration related issues first. While of course we all want perfect mapping and acceleration in a production case, I would usually start from explicit coupling, nearest neighbor mapping, no subcycling, sockets communication, etc, for debugging purposes.

I checked this thread, and it reminded me of when I read it before. I have my CalculiX increment count set to 100000000, so I don’t think that is the exact problem.

In the run attempt that prompted this whole thread, I saw the CalculiX solver churning through its iterations at a particular time step. At the end of the time step, CalculiX automatically shortened its time step. The time step that CalculiX informed me it was shortening to was 0.00E+00 seconds, and then the solver just hung there. In all of my subsequent run attempts, I modified the CalculiX input file to have a minimum time step greater than 0.00 seconds (previously, I had no entry here and believed the default would work fine). Every time my analysis has diverged since then (either with or without sub-cycling), CalculiX has always finished solving and passed its data to OpenFOAM, and then OpenFOAM has been the solver to crash. I think the takeaway here is that something about the CalculiX solver let its time step be set to 0.00 sec (or at least within machine precision of 0.00), and one can prevent this by explicitly specifying a minimum time step in the CalculiX input file.

I agree. I followed your advice and ran a nearest-neighbor, serial-explicit analysis. Here are the input files:
precice-config.xml (2.3 KB)
config.yml (225 Bytes)
preciceDict.txt (489 Bytes)

And here is the result of that analysis:
fluid.log (147.2 KB)
fluid-precice.log (545.9 KB)
solid.log (14.2 KB)
solid-precice.log (25.4 KB)

To me, it seems that OpenFOAM starts out and converges to a solution. CalculiX appears to read the information it needs just fine, but the solution it returns includes relatively large displacements. When that data is passed back to OpenFOAM, the pimpleFoam solver crashes. This seems like a classic case for implicit solver coupling, does it not?

Here’s an image showing the initial solid mesh node locations (white) and the displaced nodes after one time step (black):

Summary so far: CalculiX reduces its timestep size all the way to 0.0, meaning it gets stuck.

@Mike_Tree indeed, this is a case for implicit coupling. What happens now if you use implicit coupling and the option to have a minimum time step size? Also, how/where did you set that option (useful for future readers)?

Hey there,

I want to make a quick note to help corner the problem.

I also encountered never-ending FSI simulations which were stuck in the middle of a CalculiX step. I used serial-implicit coupling. It is obvious that the simulation diverges on the CalculiX side. This is, however, not necessarily connected with the time step being reduced to zero.
In my case, I set the CalculiX time step to a constant value (with the DIRECT command) and I encountered the same problem, where the simulation stayed in that state for hours:

no convergence

 iteration 6

 Using up to 24 cpu(s) for the symmetric stiffness/mass contributions.

 Factoring the system of equations using the symmetric spooles solver
 Using up to 24 cpu(s) for spooles.

 Using up to 24 cpu(s) for the stress calculation.

 Using up to 24 cpu(s) for the energy calculation.

 average force= 0.030333
 time avg. forc= 0.007638
 largest residual force= 45.329442 in node 1604 and dof 2
 largest increment of disp= 4.761885e-04
 largest correction to disp= 2.246915e-04 in node 1499 and dof 2

 no convergence

 iteration 7

 Using up to 24 cpu(s) for the symmetric stiffness/mass contributions.

 Factoring the system of equations using the symmetric spooles solver
 Using up to 24 cpu(s) for spooles.

 Using up to 24 cpu(s) for the stress calculation.

 Using up to 24 cpu(s) for the energy calculation.

 average force= 45.175627
 time avg. forc= 11.293962
 largest residual force= 137661.648779 in node 1695 and dof 2
 largest increment of disp= 3.494407e-03
 largest correction to disp= 3.494407e-03 in node 1695 and dof 2

This suggests that CalculiX diverges but fails to send an error signal. Maybe it’s stuck in the computing process?

For me, reducing the fixed time step did the trick. But it would be nice to find out how we can make CalculiX spit something out instead of making us wait for nothing :wink:


I switched to using serial-implicit coupling with Aitken under-relaxation (starting value = 0.05). I also switched to nearest-projection mapping and dropped my time-window-size value from 0.001 to 0.0005. Both pimpleFoam and CalculiX are set at this same time step, so there is no solver sub-cycling. Here is the config file for this setup:
precice-config.xml (2.9 KB)

This setup results in pimpleFoam converging well for each coupling iteration in the 1st time window. CalculiX also converges well for each coupling iteration in the 1st time window. The coupling, however, never converges (100 iterations). The 2nd time window has started and is churning right now, but I can see the pimpleFoam continuity errors growing, and it is most certainly going to crash during this time window.

From here, I’m considering dropping the time-window-size value even lower. This should help my fluid solver convergence for the 2nd time window, but I’m led to believe this may hurt my coupling convergence. Is this true?

This option is set in the CalculiX input file. The CalculiX 2.16 manual’s explanation of the *DYNAMIC keyword covers the various options on pages 427-429. The first line of this keyword definition is used to define things like which solver CalculiX employs (PARDISO, SPOOLES, etc.) and whether CalculiX solves implicitly or explicitly in time. The second line of this keyword definition is a set of comma-separated floating-point numbers, which are:

  • Initial time increment
  • Total time period
  • Minimum time increment
  • Maximum time increment

So, for my setup, I used:
5e-4, 5e-3, 1e-8, 5e-4

Under this configuration, I’m allowing CalculiX to modify its time step between 1e-8 and 5e-4. Previously, I had not set a minimum or maximum time step and CalculiX was driving its own time step to 0.
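Put together, the keyword block in the CalculiX input deck looks like this (the SOLVER choice here is an assumption, not necessarily my setup):

```
*DYNAMIC,SOLVER=SPOOLES
5e-4, 5e-3, 1e-8, 5e-4
```

The four numbers are the initial time increment, total time period, minimum time increment, and maximum time increment, respectively.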

Hi @Luna,

Thanks for chiming in! The behavior you show here looks almost exactly like what I saw. The only difference is I wasn’t using the DIRECT command, so CalculiX was free to adjust its time step. When it did so, it drove its own time step to 0, but ultimately stalled out just like you show. It would just sit there for hours.

I thought this was solely due to my lack of specifying a minimum time step, but apparently it’s a bigger issue of CalculiX failing to error out at divergence.


Does the solution change in every implicit coupling iteration? Maybe you can simply look at the residuals in the convergence.log file. If the solution is the same in every iteration, then this can mean that something is wrong with checkpointing, but this should not be the case as both adapters are actively used by multiple users.

Maybe just uploading this file would help.

Yes, the solution for each solver is changing at each implicit coupling iteration. Here is a copy of a convergence.log file:
precice-Solid-convergence.log (5.0 KB)

Just an observation for now: the residuals oscillate, which I guess means that switching from Aitken to IQN will not really help (@uekerman correct me here, please).

Are you confident that the material parameters make sense? Just to make sure that the interaction is not stronger than intended.

Yes, unfortunately. I’ve checked, double-checked, and triple-checked them. The material parameters are correct, and they are certainly leading to a strong interaction.

Do you start from a rather converged (single-physics) fluid simulation?
I would always recommend this for such strongly-coupled problems.
Once Aitken converges, you should switch to IQN. Aitken could get terribly slow here.
Also be aware that small timestep sizes can lead to more stability problems for FSI with an incompressible fluid.

I started with a converged solution based on potential flow (using potentialFoam), but I am now running a steady-state sim based on my first time step (simpleFoam). I believe this will create a much better initialized flow field and will help significantly.

Do you mean stop the analysis once a coupling iteration converges and restart from the same time step using IQN, or do you mean simply verify that Aitken will converge and then start the entire analysis over using IQN?

Yes, this is a delicate balance between maintaining FSI stability (suffers if time step is small) and individual solver stabilities (suffers if time step is large), while trying to avoid solver sub-cycling.

I’ll report back with how my situation changes with using a more complete initialized flow solution.

I see significant improvement once I was able to start with a fully-converged, steady-state, viscous solution. I’m currently a couple of time windows into the analysis and the maximum number of coupling iterations required is 8. I’m still trying to run without any solver sub-iterations, so the key now is to keep pimpleFoam happy even though the Courant number is ~10. This is proving difficult.

I believe the key may be that my largest reported displacement from the CalculiX solver is on the order of the size of the fluid cells I’m using. This probably means that the mesh deformation is significantly degrading the cell quality and eventually causing the flow solver to diverge. I think a fluid mesh coarsening is probably in order. Any rule of thumb on wall displacement versus fluid mesh cell size? I’m thinking 10% displacement may be a good target value…

I’ve spent the day trying to study up on running pimpleFoam under high Courant numbers and made sure to increase the relevant iteration counts for its various loops, but if the mesh gets poor enough, there’s no set of solver-parameter magic in the world that can fix it. Any fluid solver suggestions in general, just in case I missed something?
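For concreteness, the loop counts in question live in the PIMPLE dictionary of `system/fvSolution`; a sketch with assumed (not tuned) values:

```
PIMPLE
{
    nOuterCorrectors         50;  // outer pressure-momentum iterations per time step
    nCorrectors              3;   // pressure corrector loops
    nNonOrthogonalCorrectors 2;   // extra corrections for non-orthogonal/deformed cells
}
```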

Here is the latest precice-config file and convergence log:
precice-config.xml (3.3 KB)
precice-Solid-convergence.log (797 Bytes)

I meant indeed the latter.

10% should definitely not be a problem. But be careful to always use a finer fluid mesh than solid mesh if you use a conservative NP mapping. Changing forces to stresses and the mapping from conservative to consistent could be necessary once you coarsen your fluid mesh too much.

What exactly do you mean by this? Keep in mind that the IQN acceleration requires well-converged solvers. Otherwise, the search directions become wrong.

My fluid mesh is still finer than my solid mesh. I’m using a nearest-projection mapping. From solid to fluid is consistent displacement, and from fluid to solid is conservative force. I’ll experiment with switching from conservative force to consistent stresses. I’m curious, why does my fluid mesh have to be finer? Does it need to be finer only if employing conservative mapping from fluid to solid? What about if I use consistent mapping instead?

I simply mean that the time step of my fluid and structure solvers is the same as the preCICE time-window-size value. I’m trying to keep the analysis this way to maintain low overall computational cost, but it has been a struggle. I may have to switch to running my solvers (particularly my fluid solver) at a much shorter time step, but keep the time-window-size where it is. As long as the solvers always converge well, is this still an acceptable approach? When I’ve tested this in the past, preCICE throws a warning that this sub-cycling is “not fully supported”. What pitfalls, if any, should I look out for here?

Sorry for the late reply.

For conservative mappings, your input mesh should be finer than your output mesh. Otherwise, you could get spatial oscillations. If you draw a nearest-neighbor mapping, this becomes obvious. But this has also been reported for RBF data mapping IIRC. For consistent mappings, you don’t get this problem.
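A toy 1-D sketch of this effect (a hypothetical helper, not preCICE code): a conservative nearest-neighbor mapping lumps each input value onto its nearest output node, so a coarse input mesh leaves most nodes of a fine output mesh empty, which is exactly the spatial oscillation described above.

```python
import numpy as np

def conservative_nn(x_in, f_in, x_out):
    """Toy conservative nearest-neighbor mapping: lump each input value
    onto its nearest output node (the total is preserved, not the shape)."""
    f_out = np.zeros_like(x_out)
    for xi, fi in zip(x_in, f_in):
        f_out[np.argmin(np.abs(x_out - xi))] += fi
    return f_out

coarse = np.linspace(0.0, 1.0, 5)   # coarse input mesh
fine = np.linspace(0.0, 1.0, 20)    # fine output mesh
mapped = conservative_nn(coarse, np.ones(5), fine)
# The total is preserved, but 15 of the 20 output nodes receive nothing,
# so the mapped field is a spiky, oscillatory pattern rather than a uniform load.
print(mapped.sum(), int((mapped == 0).sum()))
```

With a consistent mapping, each output node instead reads the value of its nearest input node, so no output node ends up empty.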

OK, I understand what you mean now.

This can deteriorate your temporal convergence order and also your stability. Use with care. We are working on a (much) better solution for this problem right now, but it will still take some time to be fully available (~ preCICE v3).

This error message comes from the OpenFOAM adapter. I guess it means that not all data fields the mesh mover requires are correctly check-pointed when OpenFOAM subcycles (@Makis please correct if I am wrong).

Summary: Avoid subcycling for FSI (for preCICE v2) as long as possible :see_no_evil:

Quick remark on your original question (CalculiX hangs where it should crash): I am afraid there is not much we can do from preCICE side here. If you use a job script, you could implement some timeout and kill mechanism as done here.
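A minimal sketch of such a watchdog, assuming a Python job script (the solver command line in the comment is only an example):

```python
import subprocess

def run_with_timeout(cmd, limit_s):
    """Run a solver with a wall-clock budget; return its exit code,
    or None if it hung past the limit and was killed."""
    try:
        return subprocess.run(cmd, timeout=limit_s).returncode
    except subprocess.TimeoutExpired:
        return None  # subprocess.run kills the child before raising

# Example use in a job script (command line is an assumption):
# rc = run_with_timeout(["ccx_preCICE", "-i", "solid",
#                        "-precice-participant", "Solid"], 12 * 3600)
# if rc is None: clean up and notify, instead of waiting indefinitely
```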


Thanks for the reply! This is all great information. I am very interested in the outcome of your waveform iteration work, and will definitely test it out when it hits the source code. I’m coupling OpenFOAM with CalculiX, so the fact that CalculiX only outputs restart files at the end of a step and that I should avoid subcycling either of my solvers means that I’m going to break the problem into multiple steps.

I found a Master’s thesis by Kyle Davis (@KyleDavisSA ?) that outlines the pattern. I’ll try and script up something to manage it for me, and I’ll definitely include a routine to check for a CalculiX crash.

Thanks again!
