I’m new to MPI and parallel computing. The linked page says that
preCICE only initializes MPI if it actually needs it. Since our in-house solid solver doesn’t support MPI right now, could I run the fluid solver (OpenFOAM) in parallel on multiple nodes while running the solid solver (our in-house code) on a single node, and still do two-way coupling with preCICE?
Thanks in advance.
I understand your question as asking whether you can run OpenFOAM in parallel and couple it to your in-house solid solver running in serial. If that is correct, then the answer is yes: preCICE handles two-way coupling in this setup.
Thanks for your reply.
We have been doing two-way coupling using preCICE for some time, with both OpenFOAM and our in-house solver running in parallel on one computer, which means data is exchanged on the same machine (one node). In order to improve computing efficiency, we plan to use a cluster with multiple machines (multiple nodes) instead.
According to section 3.4.3 of the linked documentation, OpenFOAM supports parallel runs across multiple machines, but our in-house solver can only run in parallel within one machine. For example, our cluster has 16 nodes (each with a few CPUs). If I run OpenFOAM on 15 nodes with 16 CPUs each and run our in-house solver on 1 node with 16 CPUs, will preCICE still handle the two-way coupling as before?
Sorry for the delayed response, I have been flooded with work for a while.
Okay, now I understand your question much better. You are essentially asking whether preCICE can communicate across compute nodes. I remember this has been done before, but I cannot find an example or exact documentation for it. What I would suggest is trying it out directly, using a common file system: Distributed systems | preCICE - The Coupling Library. preCICE needs this shared file system so that both participants can find each other when establishing the connection.
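For reference, the inter-participant communication is configured in the m2n section of precice-config.xml. A minimal sketch, assuming preCICE v2 syntax; the participant names, exchange-directory path, and network interface name below are illustrative assumptions, not values from this thread:

```xml
<!-- Sketch of an m2n block in precice-config.xml (preCICE v2 syntax).
     Participant names "Fluid" and "Solid", the exchange-directory path,
     and the network interface name are illustrative assumptions. -->
<m2n:sockets from="Fluid" to="Solid"
             exchange-directory="/shared/cluster/fs/coupling-run"
             network="ib0" />
```

The exchange-directory must point to a location visible to both participants (the common file system mentioned above), since preCICE writes the connection information there; the network attribute selects the interface used for the socket connection between nodes (e.g. ib0 for InfiniBand, eth0 for Ethernet).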
Thank you for your suggestion. Since we don’t have direct access to the cluster yet, I will test this and report back once we get permission from the server room administrator.