CalculiX adapter memory just keeps growing

I’m trying to run an FSI simulation using OpenFOAM v2006 and CalculiX 2.16. I have successfully run the 3D tube tutorial case and am modifying my geometry from there. Everything seems to work, but no matter what I do, the ccx_preCICE process accumulates resident memory (observed via top) until it hits a segmentation fault, usually when the resident memory reaches ~16 GB. The virtual memory doesn’t change much; it starts and stays at around 29 GB. The CalculiX model has about 61,600 elements (C3D10), and the fluid mesh has about 500,000 cells (mostly tets). I can’t see myself coarsening these much more without seriously compromising the accuracy I need. Are these meshes just too big, or am I doing something else wrong?
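For what it’s worth, it helps to log the growth instead of watching top by hand, so the memory curve can be correlated with the coupling time steps. A minimal sketch, assuming a Linux system with `ps`/`pgrep` available; the process name `ccx_preCICE` is from above, and `ccx_rss.log` is just a placeholder output file:

```shell
# Sample the resident set size (RSS, in kB) of a named process every
# 10 seconds until the process exits, one "timestamp rss" pair per line.
log_rss() {
  name="$1"
  while pid=$(pgrep -o -x "$name"); do
    printf '%s %s\n' "$(date +%s)" "$(ps -o rss= -p "$pid" | tr -d ' ')"
    sleep 10
  done
}

# Start this in a second terminal while the coupled run is going.
log_rss ccx_preCICE > ccx_rss.log
```

If the resulting curve grows by a roughly constant amount per coupling time window, that points at a per-step leak rather than a one-off allocation that is simply too large.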

Here are some relevant files:
fluid.log (17.5 KB)
run.log (5.4 KB)
solid.log (1.8 KB)
config.yml (225 Bytes)
precice-config.xml (2.7 KB)
aorta.inp (1.6 KB)
preciceDict.txt (703 Bytes)

I guess another possibility is that the solvers and their dependencies weren’t all compiled with the same compiler. I’ve spent the last several days checking this. Since I’m not a root user on the system I’m using, I opted for an established OpenMPI and GCC combination. I found that openmpi/4.0.0 on my system was compiled with gcc/5.4.0, so I went through and made sure that the following were all compiled with that combination:
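For reference, a few commands I find useful for this kind of toolchain audit (`mpicc` and `ompi_info` are standard Open MPI tools, `strings` is from binutils; the library path below is a placeholder, not from my setup):

```shell
# Which compiler do the MPI wrappers actually invoke?
mpicc --version 2>/dev/null | head -n 1

# Which compiler was Open MPI itself built with?
ompi_info 2>/dev/null | grep -i 'C compiler' | head -n 2

# GCC version strings embedded in an already-built shared library
# (placeholder path; works for typical glibc-linked ELF objects):
strings /path/to/libprecice.so 2>/dev/null | grep 'GCC:' | sort -u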

eigen/3.3.7 – though Eigen is header-only, so nothing is actually compiled here

This didn’t help the issue. I’m still getting the same error.

@Mike_Tree is your second post about the memory leak or about the installation?

Regarding the memory leak / segmentation fault, I don’t see anything helpful in the log files (the fluid log looks incomplete). Could you maybe run the two cases separately (not with Allrun) and send the logs of the two participants again?

If you have compiled preCICE in Debug mode, you can also enable debug output.
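In case it helps, a Debug build of preCICE follows the standard CMake pattern; the source/build/install paths and job count below are placeholders, not taken from this thread:

```shell
# Configure and build preCICE with debug symbols and assertions enabled.
# Adjust the placeholder paths to your checkout and install prefix.
cmake -S /path/to/precice -B /path/to/precice/build \
      -DCMAKE_BUILD_TYPE=Debug \
      -DCMAKE_INSTALL_PREFIX="$HOME/software/precice"
cmake --build /path/to/precice/build --target install -- -j 4
```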

You are using nearest-neighbor and nearest-projection mapping, so for preCICE itself the memory consumption should be fine. Note also that it is the Fluid participant that does the mapping, not the Solid.

My second post is about both. I wondered if compiling each of the dependencies with different compiler versions was the source of this seg fault. So, I checked to see what compiler was used for each dependency and re-compiled them if they weren’t using the gcc/5.4.0 and openmpi/4.0.0 combination. They are all consistent now, but the seg fault remains.

I did cut the previous fluid log short because I had specified A LOT of fluid sub-iterations and it was taking a long time for the fluid side to error out. So I switched the fluid solver to use only 5 sub-iterations and let fluid.log complete this time. I also enabled preCICE debug output. Finally, I tweaked the aorta.inp file to get rid of the NLGEOM warning. I started each of the solvers separately, as you requested.

Here are the solver logs:
fluid.log (13.7 KB)
(2.6 KB)
solid.log (6.9 KB)

The debug.log file is too large to upload, so here is a link.

Actually, there seems to be a known issue with the CalculiX adapter: Check for memory leaks · Issue #10 · precice/calculix-adapter · GitHub
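For anyone who wants to help track that leak down, a heap-checker run on the solid participant alone can localize the growing allocations. A sketch, assuming Valgrind is installed and the usual adapter invocation `ccx_preCICE -i <input> -precice-participant <name>` (check your adapter version’s README for the exact flags, and note that Valgrind slows the run down enormously, so a few coupling time windows are enough):

```shell
# Run the CalculiX adapter under Valgrind's leak checker and write the
# report to a file; input deck and participant name are from this thread.
valgrind --leak-check=full --show-leak-kinds=definite \
  --log-file=ccx_valgrind.log \
  ccx_preCICE -i aorta -precice-participant Solid
```

Allocations reported as “definitely lost” with a stack trace into the adapter code are the interesting ones for the linked GitHub issue.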

Let’s continue the discussion there. Any contribution to the CalculiX adapter is very welcome.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.