Highlights of the new preCICE release v3.2


Summary: Important long-term features see great progress with the new preCICE release v3.2, such as different options to handle dynamic meshes. Other features were finally wrapped up and integrated, for instance, performance improvements in quasi-Newton filtering and preconditioning that we published three years ago. Read this blog article to learn about the highlights.


Just-in-time data mapping

You can now read and write data at temporary coordinates instead of static vertex IDs. This should open the door for efficient and flexible mesh-particle coupling, but could also be helpful for other applications.

# Similar to direct-mesh access, you first have to define an access region
participant.set_mesh_access_region("Received-Mesh", bounding_box)
# After initialization, you can then read data at dynamic locations, triggering a just-in-time mapping
coords = [(1.0, 1.0)]
value = participant.map_and_read_data("Received-Mesh", "Scalar-Data", coords, dt)
# and write data, again triggering a just-in-time mapping
participant.write_and_map_data("Received-Mesh", "Other-Scalar-Data", coords, value)

In the config, you have to allow API access to the received mesh and define the just-in-time mappings. As we now use temporary coordinates, a just-in-time read mapping has no "to" mesh and a just-in-time write mapping has no "from" mesh:

<participant name="SolverOne">
  <receive-mesh name="Received-Mesh" from="SolverTwo" api-access="true" />
  <read-data name="Scalar-Data" mesh="Received-Mesh" />
  <write-data name="Other-Scalar-Data" mesh="Received-Mesh" />
  <mapping:rbf direction="read" from="Received-Mesh" constraint="consistent" />
  <mapping:rbf direction="write"  to="Received-Mesh" constraint="conservative" />
</participant>
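
Putting the API and the configuration together, a mesh-particle coupling could, for example, sample the received field at the current particle positions in every time step. This is only a minimal sketch: particles, its positions, and solve_particles are hypothetical, and the participant is assumed to be created and initialized as usual.

while participant.is_coupling_ongoing():
    dt = participant.get_max_time_step_size()
    # temporary coordinates: the current particle positions
    coords = [(p.x, p.y) for p in particles]
    # sample "Scalar-Data" from the received mesh at the particle positions
    values = participant.map_and_read_data("Received-Mesh", "Scalar-Data", coords, dt)
    # advance the particles with the sampled values (hypothetical solver step)
    new_values = solve_particles(particles, values, dt)
    # write the particle contributions back, again at temporary coordinates
    participant.write_and_map_data("Received-Mesh", "Other-Scalar-Data", coords, new_values)
    participant.advance(dt)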

The feature is still experimental. Not all mapping methods are supported. See the documentation for more details.

Remeshing

Instead of working with temporary locations, you can now also redefine a complete mesh during the simulation.

if my_solver_wants_to_remesh:
    # reset the complete mesh
    participant.reset_mesh("Provided-Mesh")
    # redefine the mesh
    new_ids = participant.set_mesh_vertices("Provided-Mesh", new_coords)
    # write data to the newly defined mesh
    participant.write_data("Provided-Mesh", "Data", new_ids, new_values)

# preCICE automatically handles the mesh change
participant.advance(dt)

No mesh-specific changes are necessary in the config, but the feature has to be enabled globally for performance reasons:

<precice-configuration ... allow-remeshing="true" />

This feature is also still experimental. Only parallel coupling schemes are supported so far. See the documentation for more details.

Waveform iteration

Waveform iteration has been available since preCICE v3.0; see the documentation for more details. We have now reached a further important milestone by extending the support to quasi-Newton and Aitken acceleration. For further reading, we can recommend the preprint by Kotarsky and Birken.

As we are now "feature complete" on a basic but sufficient level, we switched to substeps="True" as the default in all exchanges of implicit coupling schemes. Additional data from inside the coupling time window is now automatically exchanged if a solver subcycles. This drastically improves numerical accuracy, but also leads to higher computational effort in data mapping, acceleration, and communication.
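
For a subcycling solver, this means you can sample the exchanged data at the point in time each substep actually reaches. A minimal sketch, with placeholder mesh and data names, vertex_ids as returned by set_mesh_vertices, a solver time step size solver_dt, and an already initialized participant:

while participant.is_coupling_ongoing():
    # remaining time until the end of the current coupling time window
    max_dt = participant.get_max_time_step_size()
    dt = min(solver_dt, max_dt)
    # read "Data" interpolated at the end of this substep
    # (the read time is relative to the start of the substep)
    values = participant.read_data("Mesh", "Data", vertex_ids, dt)
    # ... solve the substep using values, write new data ...
    participant.advance(dt)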

Profiling

Quite a few improvements concerning performance profiling are part of this release. For example, we added profiling events for exports and for most API calls. You can now also track custom events in your adapter code yourself:

participant.start_profiling_section("computeForces")
compute_forces()
participant.stop_last_profiling_section()
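
Sections are stopped in reverse order of starting, as the name stop_last_profiling_section suggests, so you should be able to nest custom events to structure the timings. A sketch with hypothetical section names and functions:

participant.start_profiling_section("solveTimeStep")

participant.start_profiling_section("computeForces")
compute_forces()
participant.stop_last_profiling_section()  # stops "computeForces"

participant.start_profiling_section("solveStructure")
solve_structure()
participant.stop_last_profiling_section()  # stops "solveStructure"

participant.stop_last_profiling_section()  # stops "solveTimeStep"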

After the simulation, merge performance results from all participants as usual:

$ precice-profiling merge fluid-openfoam solid-fenics

And then, for example, inspect the much improved summary:

$ precice-profiling analyze Fluid

Reading events file profiling.json
Output timing are in us.
name                                   │ sum        count %      mean       min        max
───────────────────────────────────────┼─────────── ───── ────── ────────── ────────── ──────────
total                                  │ 1520885.00     1 100.00 1520885.00 1520885.00 1520885.00
  advance                              │  183083.00     1  12.04  183083.00  183083.00  183083.00
    m2n.receiveData                    │    1362.00     1   0.09    1362.00    1362.00    1362.00
    ...
  initialize                           │ 1333426.00     1  87.67 1333426.00 1333426.00 1333426.00
    m2n.acceptPrimaryRankConnection.A  │  810972.00     1  53.32  810972.00  810972.00  810972.00
    m2n.acceptSecondaryRanksConnection │   47853.00     1   3.15   47853.00   47853.00   47853.00
    ...
  solver.advance                       │    2278.00     1   0.15    2278.00    2278.00    2278.00
    computeForces                      │     925.00     1   0.06     925.00     925.00     925.00

See the documentation for more details.

Exceptions

An important usability improvement is that the C++ API now throws a precice::Error instead of aborting when running into a problem. In the Python bindings, for example, such an error no longer crashes a complete interactive session:

>>> import precice
>>> participant = precice.Participant("SolverOne", "./precice-config.xml", 0, 1)
preCICE: This is preCICE version 3.2.0
...
preCICE: I am participant "SolverOne"
>>> vertex_ids = participant.set_mesh_vertices("SolverOneMesh", [(1.0, 1.0, 1.0)])
preCICE:ERROR:  The mesh named "SolverOneMesh" is unknown to preCICE. This participant only knows mesh "SolverOne-Mesh".
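
In a script, you can catch the error and react to it. A minimal sketch, assuming the bindings surface the error as a regular Python exception:

try:
    vertex_ids = participant.set_mesh_vertices("SolverOneMesh", [(1.0, 1.0, 1.0)])
except Exception as error:
    # e.g. report the configuration problem and shut down cleanly
    print(f"preCICE reported an error: {error}")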

We also use the exceptions to test non-happy paths in the core library (some examples).

To prevent extensive stack traces, we recommend catching these exceptions in C++ adapters; see the OpenFOAM adapter for an example.

What else?

As usual, we improved the usability of preCICE with new checks and better error messages. For example, we now check that coupled participants use the same configuration and the same preCICE version. This has always been necessary, but was simply never checked before.

On the border between the core library and the ecosystem, we are now actively using our new system tests to catch numerical regressions in the library, adapters, and tutorials. These long tests are executed at least every night and help identify complex issues early on, especially in components not covered by other tests (e.g., some of the adapters). Caching reduces the runtime, which is otherwise expected to grow significantly as we cover more components and test cases.

See an example run, and read more in a recently published paper.


Version v3.2 is a minor release, which means that we do not break the API. Updating should be easy: there is nothing to change in your code, just link it against the latest preCICE. Seeing all the progress above, we highly recommend updating. :blush:

One last thing: do not forget to submit your research to the preCICE Workshop in Hamburg this September, and to register.
