I tested RBF partition-of-unity mapping with the ISSM adapter for the first time and ran into a few issues. I'm not sure whether they are all related, so I put them all here; sorry if this gets a bit long. Overall, I found using RBF a lot more work than expected.
The general situation:
- ISSM mesh: unstructured triangles, between ~0.5 km and ~10 km resolution in interesting areas, up to 100 km resolution in uninteresting areas (uninteresting means far away from ice, so not normally included in any actual computation, but still part of the computational domain, e.g. to get a nice rectangle), somewhere between 1 and 10 million vertices in total
- CUAS mesh: homogeneous quad mesh, 0.6 km resolution, ~10 million vertices in total
- Run in parallel with 200-1000 tasks per participant on an HPC cluster
The meshes cover the same computational domain. All mappings are read-consistent. I attached the general preCICE config; the specific settings are described below.
precice-config.xml (4.5 KB)
Nearest-neighbor mapping works fine. Linear cell interpolation works as well, at least in one direction; the CUAS mesh does not currently include connectivity, so the other direction hasn't been tested, but connectivity can be added quickly and I don't expect any problems there.
Issue 1: RBF mapping from CUAS to ISSM causes crashes
The ISSM adapter crashes when I set up RBF mapping from the CUAS to the ISSM mesh. The crash is seemingly always in the same function (log message [partition::ReceivedPartition]:344 in filterByBoundingBox: eBroadcast mesh CUASMesh), but with different errors: if preCICE is a debug build, Slurm reports out of memory; if preCICE is a release build, I get a segfault (or an assertion if release asserts are enabled in preCICE). I'm not sure whether that is exactly the same error or the debug build runs out of memory for other reasons, but it is inside the same function. Below I attach logs from the debug and release builds. The debug log has more details, but I cut off the beginning for size. I can provide full logs with trace output if needed, but they are huge.
issm log relwithdebinfo.txt (339.3 KB)
issm log debug.txt (2.3 MB)
I tried different basis functions (compact polynomial C0 with r = 10 km, thin-plate splines) and different vertices-per-cluster values (between 50 and 500); it always crashes. Mapping with nearest-neighbor works fine.
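For reference, the mapping block on the ISSM side looks roughly like this (the ISSM mesh name and the exact values are placeholders for the combinations I tried; support-radius assumes mesh coordinates in metres):

```xml
<!-- read-consistent RBF-PUM mapping from the CUAS mesh onto the ISSM mesh -->
<mapping:rbf-pum-direct direction="read" from="CUASMesh" to="ISSMMesh"
                        constraint="consistent" vertices-per-cluster="100">
  <!-- also tried thin-plate-splines and vertices-per-cluster between 50 and 500 -->
  <basis-function:compact-polynomial-c0 support-radius="10000" />
</mapping:rbf-pum-direct>
```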
Debugging has not been successful so far because of the problem size.
I have two smaller setups where RBF mapping works fine. One is similar, with the same solvers and general meshes, but a different geometry and about half the size (~5 million vertices), running on the same HPC cluster. The other is tiny (~400 vertices) and used for testing, coupling ISSM and ASTE with a few MPI tasks on my workstation. So it is not a general problem of RBF or the adapters.
Issue 2: Choosing the basis function for RBF mapping from ISSM to CUAS
I struggle to pick the right basis function for mapping from ISSM to CUAS.
Global basis functions are expensive to initialize: I got it down to ~25 minutes by setting vertices-per-cluster to 500. With 1000 vertices per cluster I get a bad_alloc exception, and with fewer than 500 the Slurm job timed out after 2+ hours of computing the mapping, so I don't know how long it would have taken to complete. The long initialization is probably expected for a global method, but is there a way to optimize this?
Local basis functions are tricky because the ISSM mesh resolution varies so much. A radius of 10 km would probably be fine for the high-resolution areas (~20 vertices in each direction), but it covers only 1 or 2 vertices where ISSM has low resolution and none in the uninteresting areas, where I get many artifacts in places that are supposed to be just empty ocean; see the image below.
This is concerning even if the areas aren't interesting initially, because what is "interesting" is a dynamic property of the solvers, and artifacts in the wrong place can cause uninteresting areas to unexpectedly become interesting and influence the global solution. I guess the only solution would be to carefully mask the continent and not add the uninteresting areas to the coupling mesh?
Assuming the uninteresting areas are correctly masked somehow, the choice of radius or shape parameter is still not obvious, since there is still at least an order of magnitude between the fine and coarse resolutions. Too small a radius and the solution suffers; too big and the system becomes badly conditioned or the basis function is effectively global. Any recommendations are welcome.
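For concreteness, this is the kind of local configuration I am weighing for the ISSM-to-CUAS direction (again with placeholder mesh names and values, coordinates assumed to be in metres):

```xml
<!-- 10 km covers ~20 vertices at the finest ISSM resolution,
     but only 1 or 2 vertices in the coarse regions -->
<mapping:rbf-pum-direct direction="read" from="ISSMMesh" to="CUASMesh"
                        constraint="consistent" vertices-per-cluster="100">
  <basis-function:compact-polynomial-c0 support-radius="10000" />
  <!-- or a Gaussian, where choosing the shape parameter poses the same question:
       <basis-function:gaussian shape-parameter="0.001" /> -->
</mapping:rbf-pum-direct>
```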
Issue 3: ghost vertices
I guess this is more of a feature request? To enable RBF mapping, I had to modify the ISSM adapter to exclude ghost vertices at the boundaries between MPI ranks. I then also have to manually synchronize values after reading them from preCICE, because the solver requires values at ghost vertices and does not synchronize internally before solving. So now the adapter has two "modes": one with ghosts and mesh connectivity, and another without ghosts and without mesh connectivity (ghosts are required to define connectivity), but with manual synchronization. This feels like unnecessary work. Would it be possible for preCICE to ignore ghosts automatically in RBF mappings, maybe with some additional help from the adapter? Then the adapter could support all mappings with minimal code.
I looked at the documentation page for distributed meshes and considered the two-mesh approach, but unless I'm missing something, it has pretty much the same drawbacks as my current solution with two "modes": the adapter needs to handle both with and without ghosts, and it requires additional configuration by the user to define two meshes in the preCICE config.


