I have some general questions about fluid-fluid coupling. Using the partitioned pipe case as an example, I would like to make some changes to the variables being exchanged.

I would like to send both the velocity and pressure from the fluid 1 solver outlet to the fluid 2 solver inlet, and, vice versa, send both pressure and velocity from the fluid 2 solver inlet to the fluid 1 solver outlet.

Currently, the partitioned pipe example only sends velocity from solver 1 to solver 2 and pressure from solver 2 to solver 1. I would like to exchange both pressure and velocity in both directions.

In order to achieve that, do we need to modify the OpenFOAM adapter, or is it sufficient to change only the precice-config.xml and preciceDict files? Please advise.

I would like to add some additional information regarding my queries above. To allow the velocity and pressure from each domain to be transferred to the other, I realised that it is necessary to set up two more variables, Velocity2 and Pressure2, in the adapter. This is done so that each participant writes its velocity and pressure into one buffer while reading Velocity2 and Pressure2 from another buffer, which is obtained from the right domain inlet. I have set up the preciceDict accordingly for the left domain:
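(For illustration only, not the actual attached file: a minimal sketch of what such a preciceDict for the left participant might look like, assuming the participant and mesh names from the partitioned pipe tutorial, Fluid1 and Fluid1-Mesh, and the adapter's fluid-fluid module FF:)

```
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      preciceDict;
}

preciceConfig "../precice-config.xml";

participant Fluid1;

modules (FF);

interfaces
{
    Interface1
    {
        mesh        Fluid1-Mesh;
        patches     (outlet);

        // write the local fields, read the partner's copies
        writeData   (Pressure Velocity);
        readData    (Pressure2 Velocity2);
    };
};
```

On the right participant the read/write lists would be swapped, so that each side writes what the other reads.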

I was able to get the simulation running but it was giving some error as follows:

---[precice] relative convergence measure: relative two-norm diff of data "Pressure2" = inf, limit = 1.00e-06, normalization = 0.00e+00, conv = true
---[precice] relative convergence measure: relative two-norm diff of data "Velocity" = 9.90e+01, limit = 1.00e-06, normalization = 2.24e-02, conv = false
---[precice] WARNING: The coupling residual equals almost zero. There is maybe something wrong in your adapter. Maybe you always write the same data or you call advance without providing new data first or you do not use available read data. Or you just converge much further than actually necessary.
---[precice] ERROR: Attempting to add a zero vector to the quasi-Newton V matrix. This means that the residuals in two consecutive iterations are identical. If a relative convergence limit was selected, consider increasing the convergence threshold.

Please advise how to resolve this. I have attached the precice-config.xml file here.

Your approach with the additional data names is correct, and your configuration looks fine.

The reason you get the error "Attempting to add a zero vector to the quasi-Newton V matrix" is probably that you are starting with zero values in some of the data you are accelerating, and they do not change for a while. This is a case that the quasi-Newton schemes cannot easily treat.

You either need to ensure that "there is something happening" in your simulation (i.e., values change over time), or you can switch to another acceleration scheme, such as Aitken underrelaxation.
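For reference, a sketch of what switching to Aitken could look like inside the coupling-scheme block of your precice-config.xml (the data and mesh names here are assumptions and must match your own configuration):

```xml
<acceleration:aitken>
    <data name="Pressure2" mesh="Fluid2-Mesh" />
    <initial-relaxation value="0.5" />
</acceleration:aitken>
```

This replaces the quasi-Newton acceleration block (e.g., `acceleration:IQN-ILS`) that triggers the zero-vector error.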

Thank you for your reply. I have another question regarding the boundary condition setup in this case. The idea is that the velocity and pressure from the solver on each side should overwrite the boundary patch on the other side.

For the fluid 1 solver, should the outlet of U be set to fixedValue? And should the inlet of p of the fluid 2 solver be set to fixedValue as well? Is setting them to fixedValue with $internalField then correct?

Technically, yes: to read U and p values on both sides, both sides need to be set to fixedValue.
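A sketch of what that could look like, e.g., in the 0/U file of fluid 1 (the patch name "outlet" is an assumption; the adapter then overwrites the patch values each coupling step):

```
outlet
{
    type            fixedValue;
    value           $internalField;
}
```

The same pattern would apply to the p inlet patch on the fluid 2 side.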

But modeling-wise, such a Dirichlet-Dirichlet coupling probably does not make much sense. One would probably need to investigate other boundary conditions.

What is your motivation for exchanging both values, both ways?

If you recall, I previously showed you the case of coupling two interFoam solvers together. There was some wave reflection at the left boundary outlet, and we were not very satisfied with the solution.

Anyway, we have recently realised that what we really need is something like a processor boundary class in OpenFOAM at the interface (left domain outlet and right domain inlet), so that the OpenFOAM solvers see the two separate domains as one single domain. Imagine a single domain split into two using MPI: the communication at the processor boundary between the two processor domains is such that they behave as a single domain.

To be more specific, what we need is a way to compute the cell face values of the boundary mesh cells using information from the neighbouring processor boundary mesh cells; essentially, an interpolation formula that computes the boundary cell face value.

See slides 10 and 11, in particular the interpolation equation on slide 11, which uses the cell centre values of the boundary cells and the neighbouring processor boundary cells to compute the cell face value.

So the OpenFOAM adapter has to communicate on both sides to allow the cell face values on the boundary mesh to be computed. Instead of overwriting the boundary patch values, it should use the cell-centred information from the other domain to compute the boundary cell face values of the local domain.
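For reference, the face value on an internal (or processor) face in OpenFOAM is a linear blend of the two adjacent cell-centre values; a sketch of the kind of formula meant here (the exact weights on slide 11 may differ):

```latex
\phi_f = \lambda \,\phi_P + (1 - \lambda)\,\phi_N,
\qquad
\lambda = \frac{|\mathbf{x}_f - \mathbf{x}_N|}{|\mathbf{x}_f - \mathbf{x}_P| + |\mathbf{x}_f - \mathbf{x}_N|}
```

where \(\phi_P\) is the boundary-cell centre value on the local domain and \(\phi_N\) is the cell-centre value received from the neighbouring domain, so each side needs the other's cell-centre data, not a prescribed face value.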

I see… You essentially want to do fluid-fluid coupling that works in exactly the same way as domain decomposition in parallel simulations.

With the current approach, each OpenFOAM domain will still have a physical boundary condition, which may modify the data received. What you would need (and we probably need as well) is a boundary condition that works in the same way as the parallel boundaries.

A problem, however, is that boundary conditions are applied once per time step, because the function objects are called once per step. If you want tighter coupling, you could call preCICE directly from inside an OpenFOAM solver (without the adapter).

I'm a master's student, currently working on the fluid-fluid coupling together with @Makis.
I've been thinking in the same direction as you, I think. I also tried creating custom boundary conditions derived from the OpenFOAM "coupled" BC (the same base as the processor BC).

From what I can see, the main problem lies in the surface flux field phiHbyA, which is created in pEqn.H for most PIMPLE-based solvers. This flux field uses extrapolated BCs, meaning the values on the boundary faces are simply the same as in the cell centres next to them. Unfortunately, this field is not stored in OpenFOAM's objectRegistry, so I do not know how to modify it from outside (i.e., from preCICE).
These phiHbyA boundary faces are only modified by OpenFOAM when the underlying boundary patch is coupled as well. This means the boundary patches must be marked as coupled in the mesh itself. But when you use coupled boundary patches during mesh creation, OpenFOAM wants to know the partner patch of each coupled boundary. Using preCICE as a middleman, however, we have two OpenFOAM instances that do not know about each other's meshes.

I do not think it is appropriate to use the original "coupled" BC in OpenFOAM as a starting point, because the processor boundaries of the left and right domains live in two separate case setups. The communication between them therefore has to go through preCICE instead of the internal OpenFOAM MPI functions. Your approach is something like a cyclic boundary between the left domain outlet and the right domain inlet, but they are in two separate case folders.

You would only want to make the boundary mimic the way the flux is calculated for a processor boundary, and then the communication is done through preCICE. I have not yet figured out the full details; when I have more insights on the implementation, I will communicate with @Makis and you.