CalculiX parallelization

Hi everybody, I have a question regarding the parallelization of CalculiX. Usually, when I run an FSI problem with OpenFOAM and CalculiX, the OpenFOAM side runs in parallel, but the CalculiX side does not (it still runs on 1 core). Is there any way to run CalculiX with multiple cores? I am asking because my solid volume mesh is big, so OpenFOAM finishes a step and then waits a long time for the CalculiX calculation. Thanks and regards!

Hi @huyfanne!

CalculiX supports shared-memory parallelization, which also works in a coupled setup. It should work out of the box. IIRC, others have already used this feature for coupled simulations.

Let us know if it works.

Benjamin

May I know the correct way to do so? Do I need to recompile CalculiX with Makefile_MT? I am just starting to use preCICE, sorry for the inconvenience.

I am not sure, but maybe this helps: http://www.dhondt.de/ccx_2.14.pdf (page 10)

Just to clarify: CalculiX is not part of or related to preCICE; it is a completely separate software project. The best place to get answers to CalculiX-specific questions is the CalculiX mailing list.

In any case, if you find the answer, a summary here would be helpful for people with the same question. :slight_smile:

Dear huyfanne,
By default, CalculiX is compiled to run on a single core. You should compile it with Makefile_MT. Or just take my binaries here:
https://github.com/calculix/free_form_fortran#downloads (both for Windows and Linux).

Thank you for your comments, I will study the matter further.

I will try to recompile with CalculiX 2.15. If it doesn't work, I will try your binary (since it is 2.16). Thanks a lot.

Please note that our CalculiX adapter now also supports 2.16. To install the adapter, you essentially need to rebuild CalculiX, so you also need the CalculiX source code. This is what happens when you run make, which produces a modified copy of ccx named ccx_preCICE.

I am continuing this discussion because I encountered some issues related to the parallelization of CalculiX (see https://gitter.im/precice/Lobby?source=orgpage). I have now solved them, so here are my solutions.

  1. I had a problem with the parallelization of SPOOLES. I got this kind of message in the Solid.log file:

Using up to 4 cpu(s) for the stress calculation.

Using up to 4 cpu(s) for the symmetric stiffness/mass contributions.

Factoring the system of equations using the symmetric spooles solver
Using up to 1 cpu for spooles.

This means that the linear system will not be solved in parallel but on 1 CPU. I solved this problem by compiling only the parallel version (Makefile_MT); see the build sketch after this list. The problem seems to arise when you compile the serial version (Makefile) to get ccx_2.XX, and then compile the parallel version to get ccx_2.XX_MT. Using ccx_2.XX_MT does not lead to a parallel run in this case. So, compile only the parallel CalculiX version. I am not sure whether this happens for everybody.

  2. Restart does not work in CalculiX from version 2.15 onward. Actually, it works for static calculations but not for dynamic ones. If you need to run a large case with restart, use an earlier version (2.13 works for sure).

  3. It is not possible to perform parallel FSI calculations with CalculiX because some things are missing in the Makefile of the calculix-adapter repository. I succeeded in compiling and running a parallel simulation with the following modifications: I only added -DUSE_MT to the CFLAGS and $(SPOOLES)/MT/src/spoolesMT.a to the LIBS.

LIBS = \
        $(SPOOLES)/MT/src/spoolesMT.a \
        $(SPOOLES)/spooles.a \
        $(PKGCONF_LIBS) \
        -lstdc++ \
        -L$(YAML)/build -lyaml-cpp

CFLAGS = -Wall -O3 -fopenmp $(INCLUDES) -DARCH="Linux" -DSPOOLES -DARPACK -DMATRIXSTORAGE -DUSE_MT

Do not forget the tab indentation in LIBS.


The following may be of interest too: https://web.mit.edu/calculix_v2.7/CalculiX/ccx_2.7/doc/ccx/node3.html

I had the same problem. I followed this guide for CalculiX and it worked fine (you can also find more details in CalculiX's README.INSTALL):
https://www.libremechanics.com/?q=node/9

Even so, this doesn't work directly for the preCICE calculix-adapter Makefile; it needs a few more modifications. I put my final Makefile here, hope it helps:

# See our wiki for getting the CalculiX dependencies:
# https://github.com/precice/calculix-adapter/wiki/Installation-instructions-for-CalculiX
# Set the following variables before building:
# Path to original CalculiX source (e.g. $(HOME)/ccx_2.15/src )
CCX             = $(HOME)/App/Calculix/ccx_2.15/src
# Path to SPOOLES main directory (e.g. $(HOME)/SPOOLES.2.2 )
SPOOLES         = $(HOME)/App/Calculix/spooles
# Path to ARPACK main directory (e.g. $(HOME)/ARPACK )
ARPACK          = $(HOME)/App/Calculix/ARPACK
# Path to yaml-cpp prefix (e.g. $(HOME)/yaml-cpp, should contain "include" and "build")
YAML            = $(HOME)/App/Calculix/yaml-cpp

# Get the CFLAGS and LIBS from pkg-config (preCICE version >= 1.4.0).
# If pkg-config cannot find the libprecice.pc meta-file, you may need to set the
# path where this is stored into PKG_CONFIG_PATH when building the adapter.
PKGCONF_CFLAGS  = $(shell pkg-config --cflags libprecice)
PKGCONF_LIBS    = $(shell pkg-config --libs libprecice)

# Specify where to store the generated .o files
OBJDIR = bin

# Includes and libs
INCLUDES = \
	-I./ \
	-I./adapter \
	-I$(CCX) \
	-I$(SPOOLES) \
	-I$(SPOOLES)/MT/src \
	$(PKGCONF_CFLAGS) \
	-I$(ARPACK) \
	-I$(YAML)/include

LIBS = \
	$(SPOOLES)/MT/src/spoolesMT.a \
	$(SPOOLES)/spooles.a \
	$(PKGCONF_LIBS) \
	-lstdc++ \
	-L$(YAML)/build -lyaml-cpp

# OS-specific options
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Darwin)
	LIBS += $(ARPACK)/libarpack_MAC.a
else
	LIBS += $(ARPACK)/libarpack_ubuntu.a
	LIBS += -lpthread -lm -lc
endif

# Compilers and flags
#CFLAGS = -g -Wall -std=c++11 -O0 -fopenmp $(INCLUDES) -DARCH="Linux" -DSPOOLES -DARPACK -DMATRIXSTORAGE
#FFLAGS = -g -Wall -O0 -fopenmp $(INCLUDES)

CFLAGS = -Wall -O3 -fopenmp $(INCLUDES) -DARCH="Linux" -DSPOOLES -DARPACK -DMATRIXSTORAGE -DUSE_MT=1

CC = cc

FFLAGS = -Wall -O3 -fopenmp $(INCLUDES)
FC = gfortran
# FC = mpif90
# FC = gfortran

# Include a list of all the source files
include $(CCX)/Makefile.inc
SCCXMAIN = ccx_2.15.c

# Append additional sources
SCCXC += nonlingeo_precice.c CCXHelpers.c PreciceInterface.c
SCCXF += getflux.f getkdeltatemp.f



# Source files in this folder and in the adapter directory
$(OBJDIR)/%.o : %.c
	$(CC) $(CFLAGS) -I$(SPOOLES) -c $< -o $@
$(OBJDIR)/%.o : %.f
	$(FC) $(FFLAGS) -I$(SPOOLES) -c $< -o $@
$(OBJDIR)/%.o : adapter/%.c
	$(CC) $(CFLAGS) -I$(SPOOLES) -c $< -o $@
$(OBJDIR)/%.o : adapter/%.cpp
	g++ -std=c++11 -I$(SPOOLES) -I$(YAML)/include -c $< -o $@ $(LIBS)
	#$(CC) $(CFLAGS) $(INCLUDES) -c $< -o $@ $(LIBS)

# Source files in the $(CCX) folder
$(OBJDIR)/%.o : $(CCX)/%.c
	$(CC) $(CFLAGS) -c $< -o $@
$(OBJDIR)/%.o : $(CCX)/%.f
	$(FC) $(FFLAGS) -c $< -o $@

# Generate list of object files from the source files, prepend $(OBJDIR)
OCCXF = $(SCCXF:%.f=$(OBJDIR)/%.o)
OCCXC = $(SCCXC:%.c=$(OBJDIR)/%.o)
OCCXMAIN = $(SCCXMAIN:%.c=$(OBJDIR)/%.o)
OCCXC += $(OBJDIR)/ConfigReader.o



$(OBJDIR)/ccx_preCICE: $(OBJDIR) $(OCCXMAIN) $(OBJDIR)/ccx_2.15.a $(SPOOLES)/MT/src/spoolesMT.a 
	$(FC) -fopenmp -Wall -O3 -o $@ $(OCCXMAIN) $(OBJDIR)/ccx_2.15.a $(LIBS)

$(OBJDIR)/ccx_2.15.a: $(OCCXF) $(OCCXC) 
	ar vr $@ $?

$(OBJDIR):
	mkdir -p $(OBJDIR)

clean:
	rm -f $(OBJDIR)/*.o $(OBJDIR)/ccx_2.15.a $(OBJDIR)/ccx_preCICE

Hey @fsalmon,

Can you please provide a step-by-step guide for enabling ccx_preCICE to run in parallel?
Can you please provide a step by step guide for enabling ccx_preCICE to run in parallel ?