Highlights of the new preCICE release v2.3

Just in: a fresh new preCICE feature release, v2.3. Let's see what we've got. :eyes:

Improved memory footprint of mesh data structures

Over the past decade, regularly changing usage requirements led to bloated mesh primitive structures in preCICE, resulting in unnecessary memory overhead and data fragmentation. With this release, we optimized these data structures: we removed unnecessary features such as edge normals, reduced the bookkeeping in triangles, compute normals on demand instead of storing them, and store the 2D/3D coordinates directly in the structs instead of allocating them on the heap.
Alongside the reduced memory overhead, this also improves data locality :rocket:.
The following table provides an overview of the memory improvements for all primitives.

Primitive    Old [Bytes]               New [Bytes]
             Obj    Dyn     Total      Obj    Dyn    Total
Vertex       48     2x24    96         48     0      48
Edge         40     24      64         24     0      24
Triangle     48     24      72         32     0      32

(Obj: size of the object itself, Dyn: dynamically allocated heap memory)
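
To illustrate the last point, here is a simplified sketch (not the actual preCICE source): with a fixed-size coordinate type, the data lives inside the object itself instead of behind extra heap allocations.

// Simplified sketch, not the actual preCICE code:
struct OldVertex {
  Eigen::VectorXd coords;  // runtime-sized, data allocated on the heap
  Eigen::VectorXd normal;  // a second heap allocation
  // ... further bookkeeping
};

struct NewVertex {
  Eigen::Vector3d coords;  // fixed-size, stored inline in the struct
  // the normal is now computed on demand and no longer stored
  // ... further bookkeeping
};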

Multi-coupling extension

Until now, the multi-coupling feature of preCICE was rather limited: it could only handle scenarios where one central participant is connected to all other participants – just like in the multiple perpendicular flaps tutorial, where the fluid participant is connected to both solid participants. The central participant can then control the coupling scheme:

<coupling-scheme:multi>
  <participant name="Fluid" control="yes" />
  <participant name="Solid1" />
  <participant name="Solid2" />
  ...
</coupling-scheme:multi>

With v2.3, we partly removed this restriction: now a non-central participant can also control the coupling scheme. Why is this important? Because we can now also handle scenarios where there simply is no central participant. Imagine four participants coupled in a row:

A ↔ B ↔ C ↔ D

As an academic example, this could be two channel flows separated by an elastic membrane, with an additional elastic flap in one of the channels :exploding_head:.

We can now, for example, make A the controller. If you want to see the complete configuration file, have a look at this integration test. We still have one restriction, however, at least for the moment: The controlling participant needs to run in serial.
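
For the chain above, the relevant part of the configuration could then look roughly like this (the participant names are placeholders; see the linked integration test for the complete file):

<coupling-scheme:multi>
  <participant name="A" control="yes" />
  <participant name="B" />
  <participant name="C" />
  <participant name="D" />
  ...
</coupling-scheme:multi>

Note that A now controls the scheme even though it is only connected to B.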

Scaled-consistent data mapping

There is now a third choice besides consistent and conservative data mapping: scaled-consistent. It is essentially a consistent mapping, followed by a scaling step that restores conservation of the integral value. This is, for instance, a great option for conserving the flow rate when mapping velocities. Currently, this mapping only works for serial participants, but we are working on a parallel version.
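
In the configuration, the new variant is selected via the mapping constraint, just like consistent and conservative. A minimal sketch with placeholder mesh names:

<mapping:nearest-neighbor
  direction="read"
  from="Fluid-Mesh"
  to="Solid-Mesh"
  constraint="scaled-consistent" />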

More small things

  • We changed all convergence measure output (screen log and log files) to scientific notation and made it more readable:
TimeWindow  Iteration  ResRel(Displacements)  ResRel(Forces)
 ...
     1      12  3.10171371e-03  1.27279630e-03
     1      13  1.22703631e-03  1.86290797e-04
     1      14  1.33979283e-04  1.47033325e-04
     2       1  7.30942365e-01  8.37346009e-01
     2       2  1.45023640e-01  1.82075455e-02
     2       3  1.33150983e-02  6.01315341e-03
     2       4  2.08715768e-03  2.92094817e-03
     2       5  5.15375046e-04  4.07557375e-04
     3       1  5.63476022e-01  7.05386938e-01
     3       2  7.89180248e-03  4.22791173e-03
...
  • We added a new API function to avoid unnecessary mesh connectivity computations in adapters (see the usage sketch at the end of this list):
bool isMeshConnectivityRequired(int meshID) const;
  • Over the next months and years, we will work on a few groundbreaking new features in preCICE (roadmap of preCICE). These features often require extensions or even modifications of the preCICE API. Some of them will eventually lead to preCICE v3.0. To be able to work on these upcoming features on the main trunk of preCICE and to test them already now, we added a few new experimental API functions with this release (one example is direct access to received meshes). They are not really ready to be used yet :construction: and they may still change. That's why you need to explicitly switch them on:
<solver-interface experimental="true" dimensions=... >
    ...
</solver-interface>    
  • RBF data mappings using PETSc now give more useful information when they don't converge: did the linear system diverge, or were simply all GMRES iterations used up? :thinking:

  • m2n:mpi now defaults to the (much) more efficient single-ports implementation :rocket:. To use the old variant, use m2n:mpi-multiple-ports.

  • Socket communication now finally works without a network connection :relieved:. Fingers crossed :crossed_fingers: that it stays this way.

  • A last important detail: CMake now defaults to BUILD_SHARED_LIBS=ON. Previously, you needed to enable this explicitly to make preCICE work in most scenarios. Since building a shared library is what the vast majority of users do anyway and since the previous default was not supported, we consider this a non-breaking change. Scripts that explicitly enable the shared library still work as expected, and we are working towards full support of static library builds.
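
As a small usage sketch for the connectivity query mentioned above (the surrounding code and all names are made up for illustration):

#include <precice/SolverInterface.hpp>

precice::SolverInterface interface("Fluid", "precice-config.xml", rank, size);
const int meshID = interface.getMeshID("Fluid-Mesh");
// ... set mesh vertices as usual ...

// Only build edges and triangles if the configured mapping needs them:
if (interface.isMeshConnectivityRequired(meshID)) {
  interface.setMeshTriangleWithEdges(meshID, v0, v1, v2);
}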
