precice-Fluid-iterations.log (1.6 KB)
precice-Solid-convergence.log (36.0 KB)
precice-Solid-iterations.log (2.8 KB)
calculix.out (2.5 MB)
Are these good?
I am trying to upload fluid.out, but the file is too big. I could share a link instead, if needed.
Hello @uekerman
I added the filter as you suggested; however, my simulation ran for a shorter time.
Without the filter: 0.0034 s
With the filter: 0.0017 s
Doesn’t look so bad. Please try the coupling scheme below and upload precice-Solid-convergence.log and precice-Solid-iterations.log again.
<coupling-scheme:serial-implicit>
<participants first="Fluid" second="Solid" />
<max-time-windows value="1000" />
<time-window-size value="1e-4" />
<exchange data="Force" mesh="Solid-Mesh" from="Fluid" to="Solid" />
<exchange data="DisplacementDelta" mesh="Solid-Mesh" from="Solid" to="Fluid" />
<max-iterations value="50" />
<relative-convergence-measure limit="1e-3" data="DisplacementDelta" mesh="Solid-Mesh" />
<relative-convergence-measure limit="1e-3" data="Force" mesh="Solid-Mesh" />
<acceleration:IQN-IMVJ>
<data name="DisplacementDelta" mesh="Solid-Mesh" />
<preconditioner type="residual-sum" />
<initial-relaxation value="0.1" />
<max-used-iterations value="100" />
<time-windows-reused value="1" />
<filter type="QR2" limit="1e-2" />
</acceleration:IQN-IMVJ>
</coupling-scheme:serial-implicit>
(I increased max-iterations and max-used-iterations.)
precice-Solid-convergence.log (2.9 KB)
precice-Solid-iterations.log (158 Bytes)
precice-config.xml (2.8 KB)
It runs for an even shorter time now.
@uekerman
Maybe it will be faster if I attach the case.
I would appreciate your help.
Hello
Any suggestions? Sorry, I don’t mean to rush you, but I am on a bit of a time crunch.
Also, I just wanted to confirm something: I am running the fluid side in parallel, so does that mean I have to use a parallel coupling scheme?
The error is on the solid side:
*ERROR: solution seems to diverge; please try
automatic incrementation; program stops
best solution and residuals are in the frd file
No, this is just a terminology issue. A parallel scheme just means that, after the participants update their coupling boundaries in advance(), they compute their next solution at the same time, without the one waiting for the other to complete. Both serial and parallel coupling schemes can include (MPI-)parallel participants. I have the impression that serial schemes may be more stable in some situations.
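Roughly, and keeping everything inside the scheme unchanged, the switch is only in the enclosing element. A sketch using the participant names from your config (not a complete scheme):
<!-- serial-implicit: within each coupling iteration, Fluid advances first, then Solid -->
<coupling-scheme:serial-implicit>
<participants first="Fluid" second="Solid" />
...
</coupling-scheme:serial-implicit>
<!-- parallel-implicit: both participants advance their solvers at the same time -->
<coupling-scheme:parallel-implicit>
<participants first="Fluid" second="Solid" />
...
</coupling-scheme:parallel-implicit>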
Looking at your foam.out:
---[precice] iteration: 2 of 30, time-window: 34 of 100, time: 0.0033, time-window-size: 0.0001, max-timestep-length: 0.0001, ongoing: yes, time-window-complete: no, read-iteration-checkpoint
Courant Number mean: 0.010637947557912 max: 0.12588645689223
Time = 0.0034
PIMPLE: iteration 1
smoothSolver: Solving for cellDisplacementx, Initial residual = 0.097894544594377, Final residual = 8.8271627686389e-18, No Iterations 1
smoothSolver: Solving for cellDisplacementy, Initial residual = 0.08677137813471, Final residual = 9.2944594364952e-18, No Iterations 1
smoothSolver: Solving for cellDisplacementz, Initial residual = 0.016216845125291, Final residual = 1.1636691663665e-17, No Iterations 1
GAMG: Solving for pcorr, Initial residual = 1, Final residual = 0.29714593369799, No Iterations 5
GAMG: Solving for pcorr, Initial residual = 0.085641393183388, Final residual = 0.0085547235549897, No Iterations 1
GAMG: Solving for pcorr, Initial residual = 0.017579571614725, Final residual = 0.0016194061051469, No Iterations 2
time step continuity errors : sum local = 0.00017166359570291, global = 9.529488175369e-06, cumulative = -0.00010822753630142
smoothSolver: Solving for Ux, Initial residual = 0.25705314834212, Final residual = 0.00024899040204514, No Iterations 2
smoothSolver: Solving for Uy, Initial residual = 0.21303849474934, Final residual = 0.00017596942014147, No Iterations 2
smoothSolver: Solving for Uz, Initial residual = 0.13977822139532, Final residual = 0.00015510388739576, No Iterations 2
GAMG: Solving for p, Initial residual = 0.43656118574304, Final residual = 0.10866882287469, No Iterations 5
GAMG: Solving for p, Initial residual = 0.10333277418144, Final residual = 0.003028802712372, No Iterations 2
GAMG: Solving for p, Initial residual = 0.018298353553837, Final residual = 0.0014407396521437, No Iterations 2
time step continuity errors : sum local = 0.0001340778109813, global = 8.1276992540998e-06, cumulative = -0.00010009983704732
GAMG: Solving for p, Initial residual = 0.083528004329256, Final residual = 0.018357704666049, No Iterations 5
GAMG: Solving for p, Initial residual = 0.082134376069442, Final residual = 0.0023942285413325, No Iterations 2
GAMG: Solving for p, Initial residual = 0.014499588811925, Final residual = 0.0010937836930918, No Iterations 2
time step continuity errors : sum local = 0.0001014551960498, global = 5.924182523764e-06, cumulative = -9.4175654523555e-05
GAMG: Solving for p, Initial residual = 0.083468385625284, Final residual = 0.018524552325605, No Iterations 5
GAMG: Solving for p, Initial residual = 0.082309003923538, Final residual = 0.0023970412000738, No Iterations 2
GAMG: Solving for p, Initial residual = 0.01452629142825, Final residual = 0.0010940263844635, No Iterations 2
time step continuity errors : sum local = 0.00010141743459601, global = 5.9270573651994e-06, cumulative = -8.8248597158356e-05
...
PIMPLE: iteration 37
smoothSolver: Solving for Ux, Initial residual = 6.4397697924827e-07, Final residual = 5.5633879659999e-11, No Iterations 3
smoothSolver: Solving for Uy, Initial residual = 7.4125053302811e-07, Final residual = 6.0625913795051e-11, No Iterations 3
smoothSolver: Solving for Uz, Initial residual = 1.1579999733834e-07, Final residual = 1.3225705372293e-11, No Iterations 3
GAMG: Solving for p, Initial residual = 5.8223920906919e-06, Final residual = 1.2944994329266e-07, No Iterations 2
GAMG: Solving for p, Initial residual = 8.5380009173202e-07, Final residual = 3.999447491963e-08, No Iterations 2
GAMG: Solving for p, Initial residual = 2.9332044972902e-07, Final residual = 4.1386046662666e-08, No Iterations 1
time step continuity errors : sum local = 1.8225405568824e-09, global = -1.2566975569089e-10, cumulative = -6.7337986934367e-05
GAMG: Solving for p, Initial residual = 4.6563262600409e-06, Final residual = 1.0407842077702e-07, No Iterations 2
GAMG: Solving for p, Initial residual = 6.8244865356703e-07, Final residual = 8.3700955566741e-08, No Iterations 1
GAMG: Solving for p, Initial residual = 2.4917718343258e-07, Final residual = 4.6186131564468e-08, No Iterations 1
time step continuity errors : sum local = 2.0339247400625e-09, global = -1.2292309154334e-10, cumulative = -6.7338109857458e-05
GAMG: Solving for p, Initial residual = 4.6565801748471e-06, Final residual = 1.0425098531479e-07, No Iterations 2
GAMG: Solving for p, Initial residual = 6.8254629422686e-07, Final residual = 8.3795823072414e-08, No Iterations 1
GAMG: Solving for p, Initial residual = 2.4930331011105e-07, Final residual = 1.8614457808324e-08, No Iterations 2
time step continuity errors : sum local = 8.1973538267987e-10, global = -4.6905556544095e-11, cumulative = -6.7338156763015e-05
PIMPLE: converged in 37 iterations
The Courant number seems to be low, and everything seems to be converging, albeit in rather many PIMPLE iterations. So, the Fluid simulation itself should be fine.
Looking into calculix.out:
---[precice] iteration: 2 of 30, time-window: 34 of 100, time: 0.0033, time-window-size: 0.0001, max-timestep-length: 0.0001, ongoing: yes, time-window-complete: no, read-iteration-checkpoint
Adapter reading checkpoint...
Adjusting time step for transient step
precice_dt dtheta = 0.010000, dtheta = 0.010000, solver_dt = 0.000100
Adapter reading coupling data...
Reading FORCES coupling data with ID '3'.
increment 34 attempt 2
increment size= 1.000000e-04
sum of previous increments=3.300000e-03
actual step time=3.400000e-03
actual total time=3.400000e-03
iteration 1
Using up to 1 cpu(s) for the stress calculation.
Using up to 1 cpu(s) for the energy calculation.
Using up to 1 cpu(s) for the symmetric stiffness/mass contributions.
Factoring the system of equations using the symmetric spooles solver
Using up to 1 cpu(s) for spooles.
Using up to 1 cpu(s) for the stress calculation.
Using up to 1 cpu(s) for the energy calculation.
average force= 0.019574
time avg. forc= 0.000882
largest residual force= 1.580678 in node 1439 and dof 1
largest increment of disp= 1.581901e-03
largest correction to disp= 1.581901e-03 in node 2002 and dof 1
no convergence
...
iteration 4
Using up to 1 cpu(s) for the symmetric stiffness/mass contributions.
Factoring the system of equations using the symmetric spooles solver
Using up to 1 cpu(s) for spooles.
Using up to 1 cpu(s) for the stress calculation.
Using up to 1 cpu(s) for the energy calculation.
average force= 38.392653
time avg. forc= 1.129502
largest residual force= 294233.084484 in node 1297 and dof 3
largest increment of disp= 1.399822e-02
largest correction to disp= 1.399822e-02 in node 5803 and dof 2
*ERROR: solution seems to diverge; please try
automatic incrementation; program stops
best solution and residuals are in the frd file
The average force seems to be diverging: in this attempt, it started at 0.019 and ended up at 38.4.
If you visualize the force results just before the simulation crashes (the CalculiX results and the preCICE exported VTK files), what do they look like? Are they smooth over the whole interface, or are there maybe spikes around the parallel boundaries? Just to make sure that we don’t have any issues there.
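In case exports are not already enabled, a minimal sketch for getting VTK files of the coupling mesh data is to add an export tag inside the Solid participant (the directory name is just an example):
<participant name="Solid">
...
<export:vtk directory="precice-exports" />
</participant>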
By the way, since this is a small case, and since it fails early, I would also try running it with OpenFOAM in serial, just to exclude any issues with the parallelization.
Thank you for your response @Makis
This is a snippet of the resultant force from the CalculiX frd file animation loop. I do see spikes; what does this mean?
This is the displacement:
Got it. I will make sure to run the next trials in serial.
I don’t see any spikes in value (good, the parallelization is not the issue), but I do see something unclear near the inlet. I assume this is just arrows pointing outwards, near the current location of the pulse.
Qualitatively, it looks normal.
The next question is whether the material parameters and all boundary conditions are correct. Light structures with high velocities are challenging.
You could also try a finer solid mesh. But maybe the issue is simpler and related to the CalculiX config.
The current material is hyperelastic, and the furthest it has run is 0.0034 s. When I set the material to elastic instead, it ran longer than the current setup (up to 0.005 s), but it also diverged.
Both of these runs were with the initial preCICE configuration (without the filter).
I should point out that the case, even though hyperelastic, has barely any displacement, yet the coupling is still quite tricky to achieve.
What else do you think could be an issue in CalculiX?
Hi @mishal49
I was able to run your case and to reproduce your problems.
With:
<coupling-scheme:parallel-implicit>
<participants first="Fluid" second="Solid" />
<max-time-windows value="100" />
<time-window-size value="5e-4" />
<exchange data="Force" mesh="Solid-Mesh" from="Fluid" to="Solid" />
<exchange data="DisplacementDelta" mesh="Solid-Mesh" from="Solid" to="Fluid" />
<max-iterations value="30" />
<relative-convergence-measure limit="1e-3" data="DisplacementDelta" mesh="Solid-Mesh" />
<relative-convergence-measure limit="1e-3" data="Force" mesh="Solid-Mesh" />
<acceleration:IQN-ILS>
<data name="DisplacementDelta" mesh="Solid-Mesh" />
<data name="Force" mesh="Solid-Mesh" />
<preconditioner type="residual-sum" />
<initial-relaxation value="0.1" />
<filter type="QR2" limit="1e-2" />
<max-used-iterations value="100" />
<time-windows-reused value="10" />
</acceleration:IQN-ILS>
</coupling-scheme:parallel-implicit>
(I did use a nearest-neighbor (NN) mapping instead of RBF to speed things up a bit.)
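For reference, a rough sketch of such a nearest-neighbor mapping pair on the Fluid side; the fluid mesh names here are placeholders, so use the ones from your own config:
<!-- Fluid writes forces to the solid interface mesh -->
<mapping:nearest-neighbor direction="write" from="Fluid-Mesh-Centers" to="Solid-Mesh" constraint="conservative" />
<!-- Fluid reads displacement deltas from the solid interface mesh -->
<mapping:nearest-neighbor direction="read" from="Solid-Mesh" to="Fluid-Mesh-Nodes" constraint="consistent" />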
The coupling is tough but converges nicely in the end; see precice-Solid-iterations.log:
TimeWindow TotalIterations Iterations Convergence QNColumns DeletedQNColumns DroppedQNColumns
1 30 30 0 28 1 0
2 46 16 1 43 0 0
3 60 14 1 56 0 0
4 74 14 1 67 2 0
5 89 15 1 81 0 0
6 103 14 1 94 0 0
7 120 17 1 100 0 10
8 145 25 1 100 0 24
9 162 17 1 100 0 16
10 178 16 1 100 0 15
11 193 15 1 100 0 14
12 206 13 1 100 1 11
13 218 12 1 100 1 10
14 230 12 1 100 3 8
15 243 13 1 100 1 11
16 256 13 1 100 3 9
17 269 13 1 100 1 11
18 279 10 1 100 3 6
19 289 10 1 100 0 9
In time step 20, however, CalculiX diverges.
With a larger time step size (5e-4, i.e., 5 times larger), the coupling strength is weaker (as expected for incompressible flow). Then, CalculiX diverges in time step 21 (i.e., t = 0.01).
Seeing all this, I expect that the CalculiX simulation is causing the problems here, not the coupling.
To test and debug your CalculiX model better in isolation, you could use ASTE and mimic (replay) a prescribed pressure wave of the fluid solver.
Then, if needed, you could also easily try and compare different mappings.
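As a rough sketch of the idea only (not a ready-to-run setup): the Solid participant and the data exchanges stay as they are, while the Fluid participant is driven by ASTE instead of OpenFOAM, providing its own mesh and writing a prescribed Force field. The fluid mesh name below is a placeholder:
<participant name="Fluid">
<!-- mesh provided from the exported/partitioned fluid mesh files -->
<use-mesh name="Fluid-Mesh" provide="yes" />
<use-mesh name="Solid-Mesh" from="Solid" />
<write-data name="Force" mesh="Fluid-Mesh" />
<mapping:nearest-neighbor direction="write" from="Fluid-Mesh" to="Solid-Mesh" constraint="conservative" />
</participant>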
Hi @uekerman
Thank you for taking the time to run my case!
I am currently working on ASTE and will keep you posted once I have an update.
I do have a question: did the case run until the end time for you with the elastic material (commented out in the .inp file)?
I was hoping to confirm whether the hyperelasticity is what is causing the issue, or something else in CalculiX.
The elastic case ran slightly longer for me.
Also, could I please get the precice-config file that you used (with the updated mapping and coupling scheme)? I want to make sure that I have no issues there during testing.
Hi @uekerman
I set up replay mode as you suggested, but it seems to give me an error.
I have fluid mesh node and face files. Can I not use those?
I would appreciate your guidance.
aste-CT.zip (1.9 MB)
I ran the case using only the fluid mesh nodes (no faces); however, the two sides don’t couple.
Hello @uekerman @Makis
While I work on ASTE, I just wanted to confirm that my case setup works fine, so I set up the elastic-tube-3d case again.
However, I got the following error:
The case was running fine but then diverges two time steps before the endTime.
How would you suggest solving this issue? Should I change the preconditioner? I tried residual and constant; the case ran for a shorter time with both.
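For reference, this is the line in the acceleration block that I am switching between (residual-sum is what the tutorial uses; the other two are the type names as I understand them):
<!-- what the tutorial uses -->
<preconditioner type="residual-sum" />
<!-- the two variants I tried -->
<preconditioner type="residual" />
<preconditioner type="constant" />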
I have version 2.3 installed.
I would appreciate your help.
@mishal49 sorry for the late reply. Are you still stuck at the same point?
Comparing your case with the elastic-tube-3d tutorial (using meld), I notice the following:
- You are using a nearest-neighbor mapping instead of a nearest-projection mapping, which would be more accurate. Why? The solver combination you are running supports it. Note that in the config.yml of CalculiX, you have removed the connectivity. (See the sketch after this list.)
- You are using a parallel-implicit scheme instead of a serial-implicit one, which should be more stable. Why? This has nothing to do with parallelization.
- In the tutorial we use IQN-IMVJ, while you are using IQN-ILS, as @uekerman suggested. This may be something to reconsider in the new context. You also currently accelerate both Force and DisplacementDelta, and reuse more iterations and time windows (which should be good).
- You have generated a new CalculiX case, which I don’t have the skills to evaluate. What was your goal there? Maybe one of the @calculix-users could help (join that group, if you are one).
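Regarding the first point, a sketch of what the nearest-projection pair could look like on the Fluid side (the fluid mesh names are placeholders; adapt them to your config). Nearest-projection needs connectivity on Solid-Mesh in both directions here, which is why the config.yml of CalculiX has to define the interface with connectivity again:
<!-- Fluid writes forces to the solid interface mesh -->
<mapping:nearest-projection direction="write" from="Fluid-Mesh-Centers" to="Solid-Mesh" constraint="conservative" />
<!-- Fluid reads displacement deltas from the solid interface mesh -->
<mapping:nearest-projection direction="read" from="Solid-Mesh" to="Fluid-Mesh-Nodes" constraint="consistent" />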
Hello
Update:
The collapsible-tube case ran to completion. I regenerated the solid mesh using PrePoMax. This, from what I remember, is the only thing that I changed in the case.
This really makes me wonder: are there specific software packages recommended for generating the solid mesh, or even for creating the geometry? This would definitely help in future work. Even with PrePoMax, a .step file led to successful mesh generation, while a .stl file gave errors.
Also, whenever I generated the mesh with Salome, it had negative Jacobians, and when I used Pointwise, the case diverged.
I have the same question, actually. Maybe this would be a nice thread to start in the Non-preCICE category of the preCICE Forum on Discourse.
Hi Makis,
I am running a similar case and am wondering why one should use DisplacementDelta over Displacement. I see that the elastic-tube-3d case uses the delta; however, the perpendicular-flap tutorial runs using Displacement. What would be the reason for choosing one over the other? Thanks in advance!