OpenFOAM interface patch in parallel computation

Hello,
My system:
OpenFOAMv2312
Ubuntu 22.04
preCICE 3.1.1
adapter 1.3.0

I am currently coupling OpenFOAM with a custom MATLAB solver using a custom version of the OpenFOAM adapter, and I am running into issues with OpenFOAM parallelization. I use the interface to transfer non-geometric data with a consistent mapping; on the MATLAB side there is a single dummy mesh vertex at (0 0 0). The simulation crashes (see below) whenever the decomposed domain has a processor that does not contain the patch used as the coupling interface. If I split the domain so that every processor contains a piece of every boundary patch, everything works.
Is there a good way of handling this situation, or do I have to take care of the interface splitting manually?

Thanks and regards
David

---[preciceAdapter] Loaded the OpenFOAM-preCICE adapter - v1.3.0.
---[preciceAdapter] Reading preciceDict...
-[precice] This is preCICE version 3.1.1
-[precice] Revision info: no-info [git failed to run]
-[precice] Build type: Release (without debug log)
-[precice] Configuring preCICE with configuration "../precice-config.xml"
-[precice] I am participant "Fluid"
-[precice] Connecting Primary rank to 1 Secondary ranks
-[precice] Setting up primary communication to coupling partner/s
-[precice] Primary ranks are connected
-[precice] Setting up preliminary secondary communication to coupling partner/s
-[precice] Prepare partition for mesh Fluid-Mesh
-[precice] Gather mesh Fluid-Mesh
-[precice] Send global mesh Fluid-Mesh
-[precice] Receive global mesh aocs-Mesh
-[precice] Broadcast mesh aocs-Mesh
-[precice] Mapping distance min:0.01675 max:0.31825 avg: 0.18109 var: 0.00452119 cnt: 400
-[precice] Filter mesh aocs-Mesh by mappings
-[precice] Feedback distribution for mesh aocs-Mesh
-[precice] Setting up secondary communication to coupling partner/s
-[precice] Secondary ranks are connected
-[precice] Computing "nearest-neighbor" mapping from mesh "aocs-Mesh" to mesh "Fluid-Mesh" in "read" direction.
-[precice] Mapping distance min:0.01675 max:0.31825 avg: 0.18109 var: 0.00452119 cnt: 400
-[precice] Mapping "AngularAcceleration" for t=0 from "aocs-Mesh" to "Fluid-Mesh" (skipped zero sample)
-[precice] Mapping "AngularVelocity" for t=0 from "aocs-Mesh" to "Fluid-Mesh" (skipped zero sample)
-[precice] Mapping "LinearAcceleration" for t=0 from "aocs-Mesh" to "Fluid-Mesh" (skipped zero sample)
[1] [stack trace]
[1] =============
[1] #1  Foam::sigSegv::sigHandler(int)-[precice] iteration: 1, time-window: 1, time: 0 of 0.005, time-window-size: 0.001, max-time-step-size: 0.001, ongoing: yes, time-window-complete: no, 
---[preciceAdapter] preCICE was configured and initialized
---[preciceAdapter] End of read reached
---[preciceAdapter] Setting the solver's endTime to infinity to prevent early exits. Only preCICE will control the simulation's endTime. Any functionObject's end() method will be triggered by the adapter. You may disable this behavior in the adapter's configuration.
 in /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so
[1] #2  ? in /lib/x86_64-linux-gnu/libc.so.6
[1] #3  preciceAdapter::FSI::LinearAcceleration::read(double*, unsigned int) in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #4  preciceAdapter::Interface::readCouplingData(double) in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #5  preciceAdapter::Adapter::readCouplingData(double) in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #6  preciceAdapter::Adapter::adjustSolverTimeStepAndReadData() in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #7  preciceAdapter::Adapter::configure() in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #8  Foam::functionObjects::preciceAdapterFunctionObject::read(Foam::dictionary const&) in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #9  Foam::functionObjects::preciceAdapterFunctionObject::preciceAdapterFunctionObject(Foam::word const&, Foam::Time const&, Foam::dictionary const&) in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #10  Foam::functionObject::adddictionaryConstructorToTable<Foam::functionObjects::preciceAdapterFunctionObject>::New(Foam::word const&, Foam::Time const&, Foam::dictionary const&) in ~/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so
[1] #11  Foam::functionObject::New(Foam::word const&, Foam::Time const&, Foam::dictionary const&) in /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so
[1] #12  Foam::functionObjectList::read() in /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so
[1] #13  Foam::Time::run() const in /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so
[1] #14  ? in /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/bin/interFoam
[1] #15  ? in /lib/x86_64-linux-gnu/libc.so.6
[1] #16  __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
[1] #17  ? in /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/bin/interFoam
[1] =============
[18c669fa19c0:00616] *** Process received signal ***
[18c669fa19c0:00616] Signal: Segmentation fault (11)
[18c669fa19c0:00616] Signal code:  (-6)
[18c669fa19c0:00616] Failing at address: 0x28cf031300000268
[18c669fa19c0:00616] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f64a5dcd520]
[18c669fa19c0:00616] [ 1] /lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f64a5e219fc]
[18c669fa19c0:00616] [ 2] /lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7f64a5dcd476]
[18c669fa19c0:00616] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f64a5dcd520]
[18c669fa19c0:00616] [ 4] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN14preciceAdapter3FSI18LinearAcceleration4readEPdj+0x41c)[0x7f649958bb7c]
[18c669fa19c0:00616] [ 5] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN14preciceAdapter9Interface16readCouplingDataEd+0xf2)[0x7f6499565f82]
[18c669fa19c0:00616] [ 6] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN14preciceAdapter7Adapter16readCouplingDataEd+0x36)[0x7f64995f1276]
[18c669fa19c0:00616] [ 7] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN14preciceAdapter7Adapter31adjustSolverTimeStepAndReadDataEv+0x1a6)[0x7f64995f17e6]
[18c669fa19c0:00616] [ 8] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN14preciceAdapter7Adapter9configureEv+0x1bef)[0x7f64995fdd7f]
[18c669fa19c0:00616] [ 9] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN4Foam15functionObjects28preciceAdapterFunctionObject4readERKNS_10dictionaryE+0x11)[0x7f64996352d1]
[18c669fa19c0:00616] [10] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN4Foam15functionObjects28preciceAdapterFunctionObjectC2ERKNS_4wordERKNS_4TimeERKNS_10dictionaryE+0x4d)[0x7f649963542d]
[18c669fa19c0:00616] [11] /home/user/OpenFOAM/user-v2312/platforms/linux64GccDPInt32Opt/lib/libpreciceAdapterFunctionObject.so(_ZN4Foam14functionObject31adddictionaryConstructorToTableINS_15functionObjects28preciceAdapterFunctionObjectEE3NewERKNS_4wordERKNS_4TimeERKNS_10dictionaryE+0x37)[0x7f6499635947]
[18c669fa19c0:00616] [12] /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so(_ZN4Foam14functionObject3NewERKNS_4wordERKNS_4TimeERKNS_10dictionaryE+0x312)[0x7f64a6a01b62]
[18c669fa19c0:00616] [13] /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so(_ZN4Foam18functionObjectList4readEv+0xa28)[0x7f64a6a0a2c8]
[18c669fa19c0:00616] [14] /sources/OpenFOAM-v2312/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so(_ZNK4Foam4Time3runEv+0x2c7)[0x7f64a6a322b7]
[18c669fa19c0:00616] [15] interFoam(+0x4d0b0)[0x56126e83e0b0]
[18c669fa19c0:00616] [16] /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f64a5db4d90]
[18c669fa19c0:00616] [17] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f64a5db4e40]
[18c669fa19c0:00616] [18] interFoam(+0x59225)[0x56126e84a225]
[18c669fa19c0:00616] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node 18c669fa19c0 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

Ok, after some debugging I have found a solution. The processes that do not border the interface patch crashed when trying to read from the interface patch, no surprise there. However, even if I had not accessed the boundaryField, the process would still have crashed once it tried to read data from its buffer, which is empty on those ranks. Since in my application the data mapped to the interface is identical at every vertex, it is enough for one process to successfully read the data from the interface and save it in the objectRegistry for later use. So I simply added a check whether the process has access to the interface and, if not, the reading is skipped.

if (!linearAccelerationField_->boundaryField()[patchID].empty())

A better solution might be to check whether the rank-local buffer has size zero and skip the read in that case.
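For reference, the guard in my modified adapter looks roughly like this (simplified sketch, not the full method; the member names are from my custom code):

void preciceAdapter::FSI::LinearAcceleration::read(double* buffer, const unsigned int dim)
{
    // Ranks that own no faces of the coupling patch have an empty preCICE
    // buffer, so accessing buffer[0] there is what caused the segfault.
    if (linearAccelerationField_->boundaryField()[patchID].empty())
    {
        return; // this rank does not border the coupling patch: skip the read
    }

    // All interface vertices carry the same value, so buffer[0] is enough.
    // The value is then stored in the objectRegistry for the fvOptions to pick up.
    // ...
}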

From what I have read about OpenFOAM/MPI, I could also have used gather/scatter to make the data available to all processors, but for my purpose this seemed simpler and faster.

@openFoe we might have a documentation gap here. Could you summarize whether there is a particular situation in which partitioning the interface fails?

Any suggestions on how to reproduce this with a tutorial would help fixing it.

What does non-geometric data mean? Does it mean the data is not attached to any face centers or points? Maybe it would help if you shared the critical part of your custom code.

In general, with code like

we ensure in the adapter that we only read on interfaces where we actually are a (rank-local) shareholder of the coupling interface.
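Purely as an illustration of the idea (not the actual adapter source): the read loops only over the rank-local faces of each coupled patch, so a rank that owns none of them iterates zero times and never touches the buffer.

// Illustrative sketch only, not the actual adapter source.
int bufferIndex = 0;
for (const Foam::label patchID : patchIDs_)
{
    Foam::vectorField& patchField = field_->boundaryFieldRef()[patchID];

    forAll(patchField, i)   // empty on ranks that do not own part of the patch
    {
        patchField[i] = Foam::vector
        (
            buffer[bufferIndex],
            buffer[bufferIndex + 1],
            buffer[bufferIndex + 2]
        );
        bufferIndex += 3;
    }
}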

I would not recommend using a size check for the read access (unless you know that your configuration is fine with this), see

In general, however, it should be the responsibility of the adapter to ensure that these data buffers have the correct size and are only accessed if they are available rank-locally.

Sorry for the late reply!
Maybe I can explain it better using this visualization. I am reading acceleration values from the MATLAB participant. It is a single vector value, which is written to the bottom patch marked in red. My adapter takes this single value (I just use buffer[0]) and writes it to the boundaryMesh database. I then use an fvOptions tool to read this value from the mesh and apply it correctly as a body force. This works for the left decomposition; it does not, however, work for the right configuration, because the top part did not receive data during the coupling. I now use a reduce command in the fvOptions to sync the data across ranks. This way, even if a rank does not have access to the interface patch that holds the coupled data in the boundaryMesh, it receives the data from the other ranks.
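The sync in the fvOptions looks roughly like this (simplified sketch, names are illustrative):

// Ranks that border the coupling patch contribute the value they read from
// the objectRegistry; all others contribute zero. Averaging over the number
// of contributors recovers the single vector on every rank.
Foam::vector acc(Foam::Zero);
Foam::label haveData = 0;

if (mesh_.boundary()[patchID].size() > 0)   // this rank borders the coupling patch
{
    acc = accelerationFromRegistry;          // value stored there by the adapter
    haveData = 1;
}

Foam::reduce(acc, Foam::sumOp<Foam::vector>());
Foam::reduce(haveData, Foam::sumOp<Foam::label>());

if (haveData > 0)
{
    acc /= haveData;   // the value is identical on all contributing ranks
}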

By “interface”, do you now mean coupling interface or parallel interface?
Is this actually specific to parallel simulations?

Could this just be related to the direction of your mapping? If it is a write mapping from MATLAB to the nearest neighbor in OpenFOAM, I could imagine that one part of the boundary would not get any values. If, instead, it was a read mapping in OpenFOAM, then all nodes should get values.

I assume you have modified the OpenFOAM adapter to do this, right?

By interface I mean the coupling interface. Writing data to the coupling interface works just fine. However, the acceleration is applied in a separate fvOptions utility: I adapted the tabulatedAccelerationSource to not read from a table but to request the acceleration values from the interface patch and write them to the internal cells. This last step requires each rank to have access to the coupling interface, which, in the right-hand decomposition, some ranks do not. This is fixed by collecting the data from all ranks inside my custom fvOptions. I am just starting to code in OpenFOAM/C++, so I am not sure whether there would be a better solution. Since my issue stems from the data being stored in the boundaryMesh, maybe I could also create a database without a mesh?

And yes, I have modified the adapter to read linear and angular accelerations, as well as the angular velocity.

Honestly, I think this whole situation could be remedied once preCICE has the global data option (see Handling global coupling data).

The buffer size is set according to the size of the coupling interface. For the right decomposition, the coupling interface has size zero on the top rank, hence you encounter the segfault. Of course you can scatter the vector information (received on one part of the interface) internally in OpenFOAM, but it sounds a bit artificial.

If you want to apply quantities as a body force, I would expect you to use the volume coupling functionality of the adapter (see Configure the OpenFOAM adapter | preCICE - The Coupling Library).

In principle, your scenario could also be treated properly using direct access to the MATLAB mesh (see Direct access to received meshes | preCICE - The Coupling Library).
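A rough sketch of how that could look with the preCICE v3 API (illustrative fragment, not adapter code; it assumes the received mesh "aocs-Mesh" is enabled for API access in your precice-config.xml, and that an access region containing the MATLAB vertex was set on every rank before initialize()):

#include <precice/Participant.hpp>
#include <vector>

void readAccelerationDirectly(precice::Participant& participant, double relativeReadTime)
{
    // Before initialize(), each rank defines a region that contains the
    // MATLAB vertex at (0 0 0), e.g.
    //   std::vector<double> box{-1, 1, -1, 1, -1, 1};
    //   participant.setMeshAccessRegion("aocs-Mesh", box);

    // After initialize(): query the received vertices and read from them directly.
    const int n = participant.getMeshVertexSize("aocs-Mesh");
    std::vector<precice::VertexID> ids(n);
    std::vector<double> coords(3 * n);
    participant.getMeshVertexIDsAndCoordinates("aocs-Mesh", ids, coords);

    std::vector<double> acceleration(3 * n);
    participant.readData("aocs-Mesh", "LinearAcceleration", ids, relativeReadTime, acceleration);
    // Every rank whose access region contains the vertex now holds the value locally,
    // without any extra MPI communication inside OpenFOAM.
}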

Yes, it is a bit artificial, but for now it works. I will try to revisit and improve this in the future. Even volume coupling would be overkill: since I only want to transfer a single vector per coupling variable, I do not need this value in each cell center. The fvOptions will distribute the acceleration values correctly. I will take a look at direct mesh access; maybe that is the better workaround for global data.