We could also experiment with a restart of the QN acceleration or a tuning of the number of columns. But this is something we have not yet implemented. Could be an interesting research question for @KyleDavisSA?
Regarding the very small value at the top of the beam: in this case the bottom of the beam seems to be pushed first, with the top following later, but the simulation ends before the top has a chance to move. That is something I think could be happening, but I am not entirely sure.
Looking at the log files, FASTEST seems to diverge. I am not familiar with FASTEST, but in the log file it says:
```
preparing geometry for Flow …
Check SCL (second order in time) …
| Current time is: Fri May 22 15:32:47 2020 |
| Current time is: Fri May 22 15:32:49 2020 |
```
when everything seems to converge, and:
```
preparing geometry for Flow …
Check SCL (second order in time) …
| Current time is: Fri May 22 15:32:49 2020 |
*** !!!Iterations diverged!!! ***
```
for the final time step. Is FASTEST running second order in time? It could help stability to run first order in time and see if that works. But I would first try different filters and the IQN-IMVJ method Benjamin suggested. Also, for strongly coupled problems, the coupling stability sometimes gets better when increasing the time step, at the expense of stability within each solver. If that is not a problem for the solvers, you could try that.
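In preCICE v2 syntax, switching to IQN-IMVJ is mostly a drop-in replacement for the IQN-ILS block. A rough sketch (the data name, mesh name, and all numeric values here are placeholders, not your actual configuration):

```xml
<acceleration:IQN-IMVJ>
  <!-- "Displacement" and "Solid-Mesh" are placeholder names; adapt them -->
  <data name="Displacement" mesh="Solid-Mesh"/>
  <preconditioner type="residual-sum"/>
  <filter type="QR2" limit="1e-3"/>
  <initial-relaxation value="0.1"/>
  <max-used-iterations value="50"/>
  <time-windows-reused value="10"/>
</acceleration:IQN-IMVJ>
```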
Not really, but if I use a stricter precision for the forces, the simulation takes a lot more time and the results are not better.
On the other hand, the parallel coupling is less stable than the serial coupling for this case. I was using IQN-ILS with time-windows-reused set to “0” and max-used-iterations set to 25, which makes the simulation more stable, but not enough to reach the required end time (t=0.85). See the video.
The problem starts when the beam should return and begins to oscillate.
Concerning the IQN-IMVJ acceleration, I tested it, but it appears less effective than IQN-ILS.
Regarding the acceleration methods: how can I correctly choose the value for the filter, the number of reused iterations, and the number of reused time windows?
I think this test case is very tricky: if I change something in the coupling configuration, I get the following problems:
If the time step is bigger, CalculiX converges very fast and the simulation is more stable, but FASTEST diverges because the multiphase model does not support very large time steps.
If the time step is smaller and appropriate for the multiphase fluid, CalculiX diverges after a few coupling iterations.
If the grids are coarse in both programs, I can simulate the test case, but the results are not accurate enough. Therefore, I am trying to simulate the case using finer grids. What would be a good proportion between the fluid and structure elements at the fluid-structure interface?
This test case is 2D (0.8m x 0.8m fluid domain, 0.004m x 0.09m rubber beam); I only have one volume in the z-direction (for FASTEST), and in the mapping setup z-dead="true". However, I do not know the appropriate width for the grid in the z-direction. On the one hand, if I use a very small value, for example 0.002 or 0.006, the grid aspect ratio for the fluid domain is okay, but the RBF mapping diverges; for it to work, at least 2 elements (C3D20R or C3D8I) must be defined in the z-direction of the structure. On the other hand, if the width is greater, e.g. 0.2m, the RBF mapping works with only one element in the z-direction of the structure, but I notice in the VTK files that the displacement differs between the parallel points, which is not logical if the case is 2D. My question is: how can I decide the appropriate width for a quasi-2D case, and how can I avoid these unequal displacements? To clarify my question, I attach the force and displacement values for two parallel points. The forces received are equal, but the unequal displacements result in a complicated deformation of the fluid grid.
Finally, in this case the structure is a rubber beam (E=3.5MPa, nu=0.5). I am still using the elastic model (with nu=0.49), and non-linear effects are considered (NLGEOM). Do you think that is adequate, or should a hyperelastic model be used? However, I do not have the complete set of parameters that hyperelastic models require. Do you have more information about the constants for a rubber material?
The link is unfortunately private. I guess you have to adjust some sharing option in Google Drive.
There is no simple answer to this. Some guidelines:
The more past information you use, the better the performance in general (as long as you don’t use too much; then the condition number gets bad). But then the coupling scheme also has more trouble reacting to sudden changes. An interesting future research direction would be to make IQN more adaptive.
The filter should work well, meaning it should filter out neither too little nor too much. You can check the number of filtered-out columns per time step in the “iterations” file of the second participant.
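As a concrete starting point, the filter and reuse settings live in the acceleration block of the preCICE configuration. A sketch (the data name, mesh name, and numeric values are placeholders; the reuse values follow the suggestion elsewhere in this thread):

```xml
<acceleration:IQN-ILS>
  <!-- "Displacement" and "Solid-Mesh" are placeholder names; adapt them -->
  <data name="Displacement" mesh="Solid-Mesh"/>
  <preconditioner type="residual-sum"/>
  <!-- vary the limit (e.g. between roughly 1e-1 and 1e-3) and check the
       number of filtered-out columns in the iterations file -->
  <filter type="QR2" limit="1e-2"/>
  <initial-relaxation value="0.1"/>
  <max-used-iterations value="50"/>
  <time-windows-reused value="10"/>
</acceleration:IQN-ILS>
```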
This sounds like an interesting use case for the waveform relaxation we want to implement in preCICE, @BenjaminRodenberg’s PhD project. Preliminary results here, but it might still take a year until we have this directly in preCICE. At the moment, there is no good solution for this.
Well, a hard question again. The black-box philosophy would say that both solvers should use what is best for them, so as to balance the error between both. The data mapping then just has to cope with those meshes. But, of course, the meshes also have an influence on the mapping error.
Maybe some hope here: we are currently working on an artificial testing environment for data mappings. So that you can test mappings between your meshes without needing to run the complete coupled simulation. Because in the end, things like this simply need a lot of testing and comparison.
This is odd. If you have z-dead="true", the RBF mapping should not see the width in the z-direction at all. Could CalculiX have trouble with too small aspect ratios?
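For reference, a quasi-2D RBF mapping in preCICE v2 syntax would look roughly like this (the mesh names are placeholders); with z-dead="true", the z-coordinate should be ignored when building the interpolation:

```xml
<mapping:rbf-thin-plate-splines
  direction="write" from="Fluid-Mesh" to="Solid-Mesh"
  constraint="conservative" z-dead="true"/>
```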
You do give force values in the z-direction. Then CalculiX also computes a deformation in the z-direction and, thus, you get different displacements along the z-axis. Could an easy solution be to set the z-components of the forces to 0?
That is, unfortunately, beyond my expertise. Maybe @KyleDavisSA knows more. Or you could ask on the CalculiX mailing list.
For this “default” set-up, my simulation runs until t=0.2
For the QR2 filter, it runs until t=0.0019.
For <time-windows-reused value="0"/>, it runs until t=0.003.
For <max-used-iterations value="15"/> <time-windows-reused value="0"/>, it runs until t=0.0039.
For <max-used-iterations value="50"/> <time-windows-reused value="0"/>, it runs until t=0.0010.
With both convergence measures set to 1e-6, it runs until t=0.002 (no particular reason to choose different values, just using the same as in the FSI flap tutorial).
And lastly, for the “default” scheme, just changing from serial to parallel, it runs until `t=0.0013`.
After this, I tried again the settings mentioned above, but it did not improve the results.
For the serial-explicit coupling scheme, I initially had <time-window-size value="1e-3"/>, with which I was able to run until t=0.28 (the time of wetting).
After decreasing the time-window size for preCICE to 1e-4, I am able to run the simulation until t=0.52, but as you can see, the results are really bad.
Over what ranges did you test the filter limit for both QR1 and QR2? The time-windows-reused should stay above 10 for a case like this, and max-used-iterations above 50. I think the performance could be improved with the filtering.
Would you be able to try running the problem with parallel-implicit coupling and include the Forces0 data in the acceleration? I wonder if the lack of knowledge about the forces on the interface is causing the QN method to have problems. Perhaps start with:
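A rough sketch of such a parallel-implicit scheme with both data sets in the acceleration (the mesh name and all numeric values are placeholders to adapt; the data names follow the Forces0/Displacements0 naming mentioned in this thread):

```xml
<coupling-scheme:parallel-implicit>
  <participants first="Fluid" second="Solid"/>
  <time-window-size value="1e-4"/>
  <max-time value="0.85"/>
  <max-iterations value="50"/>
  <exchange data="Forces0" mesh="Solid-Mesh" from="Fluid" to="Solid"/>
  <exchange data="Displacements0" mesh="Solid-Mesh" from="Solid" to="Fluid"/>
  <relative-convergence-measure limit="1e-4" data="Displacements0" mesh="Solid-Mesh"/>
  <relative-convergence-measure limit="1e-4" data="Forces0" mesh="Solid-Mesh"/>
  <acceleration:IQN-ILS>
    <!-- include both the displacement and the force data in the acceleration -->
    <data name="Displacements0" mesh="Solid-Mesh"/>
    <data name="Forces0" mesh="Solid-Mesh"/>
    <preconditioner type="residual-sum"/>
    <filter type="QR2" limit="1e-2"/>
    <initial-relaxation value="0.1"/>
    <max-used-iterations value="50"/>
    <time-windows-reused value="10"/>
  </acceleration:IQN-ILS>
</coupling-scheme:parallel-implicit>
```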
@KyleDavisSA I tried the settings; I think including Forces0 in the acceleration definitely helped.
Before including the forces in the acceleration, the forces applied on the flap were on the order of 1e+03.
Now they have come down to the order of 1e+02. I think I have to modify my OpenFOAM adapter further, since this seems to be a case similar to the post by Ulrich.
I have implemented the modifications, but I am not sure they are correct.
I will have a look at my adapter again.
For <time-window-size value="1e-3"/>, it ran until almost t=0.03, which is similar to what I got before.
For <time-window-size value="1e-4"/>, it ran until t=0.538698, which is better than the explicit coupling with a similar <time-window-size>.
Unfortunately, the actual results are the same as before.
I am trying a case where the solid is a kind of a ramp, might have better results.
Summary: With help of @KyleDavisSA, I was able to run the interFoam+CCX dam breaking simulation.
Initially, the Young’s Modulus of the material was 10 times less than what it actually should be.
I was also using a filter limit for QR2 that was too high. The forces were included in the acceleration.
In the interFoam case, you can see the implicit coupling applied; this time, including the forces in the acceleration helped the simulation a lot.
After decreasing the heavy fluid density to 100 kg/m3 and increasing the Young’s modulus, along with IQN-ILS acceleration, you can see the results for the simulation.
Next, I increased the density to match the density to that of water, keeping the rest of the settings the same.
The simulation runs only until t=1.33s, but the results are sensible in terms of mapping.
The results do not differ much when using IQN-IMVJ.
I think the simulation is failing on the OpenFOAM side due to an excessive Courant number. (Initially it ran until t=1.2; with some modifications in fvSchemes and fvOptions, I was able to push it a bit further.)
Your settings already look pretty good. Of course, you can always play around with them, but I guess there is no more significant improvement to expect.
The fundamental problem remains: due to the sudden wetting of the coupling interface, the history information that the quasi-Newton coupling uses becomes invalid from one timestep to another. @KyleDavisSA wanted to look into developing a kind of automatic restart mechanism of quasi-Newton, meaning to throw away all history if the coupling condition changes drastically. This could be a very nice future solution, but nothing for right now.
What you can always do is manually restart your coupled simulation when the wetting happens.
I have the same issue as you mentioned. The simulation crashes when I set the density of water to 1000 kg/m3, but the case runs fine with a density of 100 kg/m3. Did you solve this problem?
I’d like to know why the Young’s modulus of the material needs to be 10 times larger than it actually should be. I am currently validating the dam-break FSI case from the thesis of Vilmar Fuchs. The Young’s modulus of the material is 10^6 in the thesis, but the calculation crashed when it reached about 0.15s. No matter how I adjusted the settings, the case could only reach 0.15s. But the case ran to about 0.36s when I changed the Young’s modulus to 10^7. And there is another problem: the Poisson’s ratio cannot be set to 0.5. If it is 0.5, CalculiX will not run.