Parallel computing with OpenFOAM

Hi, everyone,
I created a vector A in my OpenFOAM solver which is independent of the fluid mesh and needs to be sent to the structure solver by calling writeBlockVectorData and readBlockVectorData (constraint = conservative). The fluid field is divided into 28 subdomains and OpenFOAM uses mpirun -np 28 pimpleFoam -parallel to perform the parallel calculation. The structure solver only uses one CPU for its computation.
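
Roughly, the write call in my modified solver looks like this (a simplified sketch, not my full adapter code; the function and variable names are just placeholders, and "FluidMesh" and "Forces" are the mesh and data names from my precice-config.xml):

    #include <vector>
    #include "precice/SolverInterface.hpp"

    // Simplified sketch: in parallel, this runs on each of the 28 ranks at
    // every coupling time step. A holds the 3*n components of vector A and
    // vertexIDs comes from setMeshVertices() during the setup phase.
    void writeVectorA(precice::SolverInterface& interface,
                      std::vector<int>&    vertexIDs,
                      std::vector<double>& A)
    {
        const int meshID = interface.getMeshID("FluidMesh");
        const int dataID = interface.getDataID("Forces", meshID);
        interface.writeBlockVectorData(dataID,
                                       static_cast<int>(vertexIDs.size()),
                                       vertexIDs.data(),
                                       A.data());
    }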

The relevant part of my precice-config.xml is:

    <participant name="FluidSolver">
      <master:mpi-single/>
      <use-mesh name="FluidMesh" provide="yes"/>
      ...
    </participant>
    <participant name="StructureSolver">
      <use-mesh name="StructureMesh" provide="yes"/>
      ...
    </participant>

I ran into a problem: the structure solver receives nearly 28*A, while the fluid solver sends A. Does this mean that preCICE receives A 28 times in one time step and that the copies are added together and sent to the structure solver after the mapping?

Best,
Jun

Hi @jun_leng!

I created a vector A in my OpenFOAM solver which is independent of the fluid mesh

What do you mean exactly by this? Does every rank of OpenFOAM only define a part of the vector, or is the same vector, with the same elements, available on every rank?

I assume that you are not using the “official” adapter (or at least you modify it). How do you assign this vector to the interface mesh?

What does your m2n node in the precice-config look like? Are you using distribution-type="gather-scatter"? (See the wiki page on communication configuration.)

Also, which participant does the mapping?

Posting your complete precice-config.xml would help.

Hi, @Makis,
Actually, the vector A ("Forces" in precice-config.xml) is calculated by a function I added to the pimpleFoam solver. At every time step, this function uses the displacements and the structural mesh computed by the structure solver to obtain vector A (similar to forces), which is then used in the structure solver, as well as some other coefficients that enter the Navier-Stokes equations. It does not need any information from the fluid mesh during coupling and is not related to the parallel decomposition of pimpleFoam.


I added that part of the adapter directly into the pimpleFoam solver code, and it runs well as long as I do not run in parallel. When I run pimpleFoam in parallel without <master:mpi-single/>, however, it says:
(0) 17:21:52 [impl::SolverInterfaceImpl]:133 in configure: ERROR: A parallel participant needs either a master or a server communication configured.

When I run pimpleFoam in parallel with <master:mpi-single/>, the value of vector A increases with the number of ranks.

I would like to know what mpi-single is used for and whether it copies the data n times before communicating with the structure solver (n being the number of ranks). Additionally, I am using distribution-type="gather-scatter". Is there any documentation describing its theory and implementation?

Attached is my precice-config.xml (1.9 KB).

Best,
Jun

There are two steps where communication is needed:

  • between the two participants (fluid and solid), which we name “m2n” (M ranks of Fluid to N ranks of Solid). This is the inter-communication.
  • inside each parallel participant. This is the intra-communication.

From your config file:

<m2n:sockets from="FluidSolver" to="StructureSolver" distribution-type="gather-scatter"/>

As the XML node suggests, the gather-scatter is part of the m2n communication (with sockets). This means that, with this setting, the FluidSolver gathers all the mesh partitions on one rank and sends them to the StructureSolver, which can then scatter them to its other ranks (here: none). In a scenario where both participants are parallel, the default setting is to set up direct communication channels among all the ranks that need to communicate, but for serial solvers we currently need this workaround (which we are trying to replace with an automatic solution).

Let’s look also at the intra-communication:

    <participant name="FluidSolver">
      <master:mpi-single/>
       ...
    </participant>

Here, master means that one of the FluidSolver ranks will synchronize the rest of the (“slave”) processes (e.g. announce when everybody has finished communicating and when the simulation is ready to continue). Side note: in the early days, preCICE also had a “server mode”, just like other coupling tools. That server would then be the orchestrator.

The mpi means that MPI is used to send these additional messages among the ranks, and single means that a single MPI communicator (the one that is already set up by the solver) will be used. You can see the other alternatives in the XML reference.

You can find more information about these in the preCICE wiki and in the XML reference.

About your original question, I am still a bit confused about what exactly you are doing (seeing the source would help). The best way to debug it would be to use the export-to-VTK feature and visualize the meshes in ParaView, to see whether the problem is introduced on the solver side or during the mapping. The configuration file looks correct to me.

Hi, @Makis!
I am sorry for my late reply. I finally found the reason for my problem after I followed your advice and checked my VTK files. The vector that is sent to the structure solver is the same vector on every rank of OpenFOAM. Therefore, the vector became N times bigger than it was in OpenFOAM when I used distribution-type="gather-scatter" in the m2n part (N being the number of OpenFOAM ranks). To work around the problem, I divided the vector by N in the structure solver and the result turned out right. But I don’t think this is a good solution. Do you have any suggestions?

Thanks!

Hi @jun_leng

What is your vector A exactly? Why do you compute a copy of it on every OpenFOAM rank? Is it something like a force? Then a conservative mapping (as you use) does exactly what you describe: it tries to conserve the sum.
Two simple alternatives:

  • Only compute vector A on one rank of OpenFOAM. That’s probably the cleanest solution (see the sketch after this list).
  • Use a consistent mapping instead. Then you have to move the mapping in the preCICE config to the other participant, as explained in the wiki.
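
For the first alternative, a minimal sketch (I don’t know your adapter code, so the function and variable names are placeholders, and the exact preCICE signatures depend on your version) could look like this in the OpenFOAM solver:

    #include <vector>
    #include "Pstream.H"                    // OpenFOAM: Pstream::master()
    #include "precice/SolverInterface.hpp"

    // Sketch of the first alternative: write vector A from the master rank only,
    // so that the conservative mapping sums it exactly once instead of N times.
    // Assumes the coupling vertices were registered only on the master rank, too.
    void writeVectorAFromMasterOnly(precice::SolverInterface& interface,
                                    std::vector<int>&    vertexIDs, // empty on non-master ranks
                                    std::vector<double>& A)
    {
        if (!Foam::Pstream::master())
        {
            return;  // all other ranks write nothing
        }
        const int meshID = interface.getMeshID("FluidMesh");
        const int dataID = interface.getDataID("Forces", meshID);
        interface.writeBlockVectorData(dataID,
                                       static_cast<int>(vertexIDs.size()),
                                       vertexIDs.data(),
                                       A.data());
    }

The vertices of the coupling mesh then also need to be registered only on that rank; the other ranks simply own an empty partition.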

Hi, @uekerman
Thank you for your help.
Actually, vector A contains the lift and drag forces that will be applied on the turbines. It is not projected from the fluid mesh directly, but calculated from formulas based on the Actuator Line Method (ALM), which is faster and widely used in the wind-turbine field; the fluid velocity is needed in these formulas. This model combines a three-dimensional Navier-Stokes solver with a technique in which body forces are distributed radially along lines representing the blades of the wind turbine. More information can be found in the ALM literature.
preCICE is quite a useful and powerful tool for FSI simulation and it really helps my research a lot. Thank you! I will try it again.

So these forces are actually not mesh-based, correct? Then the best approach for the moment would be to write them only on one rank, as mentioned above, and to define the same auxiliary mesh on the solid side.
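
As a sketch (placeholder names again, and assuming the usual order of calls, i.e. mesh setup before initialize()), the fluid side could register the auxiliary mesh like this:

    #include <vector>
    #include "Pstream.H"                    // OpenFOAM: Pstream::master()
    #include "precice/SolverInterface.hpp"

    // Sketch: register the auxiliary mesh only on the master rank, before
    // interface.initialize(). The other ranks contribute an empty partition,
    // so vector A is gathered and mapped exactly once.
    std::vector<int> setupAuxiliaryMesh(precice::SolverInterface& interface,
                                        std::vector<double>& positions) // 3*n coordinates
    {
        std::vector<int> vertexIDs;
        if (Foam::Pstream::master())
        {
            const int meshID = interface.getMeshID("FluidMesh");
            const int n = static_cast<int>(positions.size()) / interface.getDimensions();
            vertexIDs.resize(n);
            interface.setMeshVertices(meshID, n, positions.data(), vertexIDs.data());
        }
        return vertexIDs;  // empty on all non-master ranks
    }

The StructureSolver then defines the same vertex positions for its auxiliary mesh on its single rank.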

Maybe interesting to know: it is on our roadmap to also support non-mesh-associated data: Handling global coupling data (not associated to a mesh) · Issue #202 · precice/precice · GitHub

If the forces are mesh-based (and I got the concept wrong), then I would expect them to also be partitioned as the mesh is.

Great! :smiley: Probably a good occasion to mention that we are always looking for more testimonials.

OK, I will give it a try! :grinning: