First iteration of serial-explicit with multiple solvers behaves differently from subsequent iterations

I have a setup with multiple participants, arranged in two layers: solvers and “couplers” (a fixed-point iteration is done through the couplers).

First, the solvers take some initial values, then run their part and send data to the couplers, which suggest new data for the next iteration. In general, the setup works as expected in terms of data exchange and order of execution. However, during the first iteration (and only the first!), the solvers are executed one after the other instead of simultaneously.

Let’s break it down to a minimal working example derived from my actual setup (with a dummy solver to demonstrate and debug the behavior).

Participants:

  • f1
  • f2
  • c1

Coupling Scheme:

  • f1 → c1
  • f2 → c1

f1 and f2 can run simultaneously, using data from c1’s previous iteration (or initial values from c1).

What happens:

First iteration, t=0.0

  • f1 starts solving; f2 and c1 wait (both at the line precice.initialize(), the log reads `---[precice] Secondary ranks are connected`)
    • c1 waiting is expected, but f2 should already be running its solve here
  • f1 gets to precice.advance()
  • f2 starts solving
  • f2 gets to precice.advance()
  • c1 starts solving
  • c1 gets to precice.advance()
  • Time-window ends

Second iteration, t=1.0

  • f1 and f2 both start execution, c1 waits
    • this is different from the first iteration
  • f1 and f2 get to precice.advance()
  • c1 starts solving
  • c1 gets to precice.advance()
  • Time-window ends

All subsequent iterations behave like the second one.

What am I missing?

precice-config.xml

<?xml version="1.0" encoding="UTF-8" ?>
<precice-configuration>
  <log enabled="true">
    <sink type="stream" output="stdout"
      filter="%Severity% > debug and %Rank% = 0"
      format="---[precice] %ColorizedSeverity% %Message%"
      enabled="true" />
  </log>

  <data:scalar name="dmdt" />
  <data:scalar name="p_t" />

  <mesh name="f1_2" dimensions="2">
    <use-data name="p_t" />
    <use-data name="dmdt" />
  </mesh>

  <mesh name="f2_1" dimensions="2">
    <use-data name="p_t" />
    <use-data name="dmdt" />
  </mesh>


  <mesh name="c1_1" dimensions="2">
    <use-data name="p_t" />
    <use-data name="dmdt" />
  </mesh>
  <mesh name="c1_2" dimensions="2">
    <use-data name="p_t" />
    <use-data name="dmdt" />
  </mesh>


  <participant name="f1">
    <provide-mesh name="f1_2" />
    <receive-mesh name="c1_1" from="c1" />

    <read-data mesh="f1_2" name="p_t" />
    <write-data mesh="f1_2" name="dmdt" />

    <mapping:nearest-neighbor
      constraint="consistent"
      direction="read"
      to="f1_2"
      from="c1_1">
    </mapping:nearest-neighbor>
  </participant>

  <participant name="f2">
    <provide-mesh name="f2_1" />
    <receive-mesh name="c1_2" from="c1" />

    <read-data mesh="f2_1" name="p_t" />
    <write-data mesh="f2_1" name="dmdt" />


    <mapping:nearest-neighbor
      constraint="consistent"
      direction="read"
      to="f2_1"
      from="c1_2">
    </mapping:nearest-neighbor>
  </participant>

  <participant name="c1">
    <provide-mesh name="c1_1" />
    <provide-mesh name="c1_2" />
    <receive-mesh name="f1_2" from="f1" />
    <receive-mesh name="f2_1" from="f2" />

    <write-data mesh="c1_1" name="p_t" />
    <read-data mesh="c1_1" name="dmdt" />

    <write-data mesh="c1_2" name="p_t" />
    <read-data mesh="c1_2" name="dmdt" />

    <mapping:nearest-neighbor
      constraint="consistent"
      direction="read"
      to="c1_1"
      from="f1_2">
    </mapping:nearest-neighbor>
    <mapping:nearest-neighbor
      constraint="consistent"
      direction="read"
      to="c1_2"
      from="f2_1">
    </mapping:nearest-neighbor>
  </participant>


  <m2n:sockets connector="f1" acceptor="c1" exchange-directory=".." />
  <m2n:sockets connector="f2" acceptor="c1" exchange-directory=".." />

  <coupling-scheme:serial-explicit>
    <time-window-size value="1" />
    <max-time-windows value="3" />
    <participants first="f1" second="c1" />
    <exchange data="p_t" mesh="c1_1" from="c1" to="f1" initialize="yes" />
    <exchange data="dmdt" mesh="f1_2" from="f1" to="c1" initialize="yes" />
  </coupling-scheme:serial-explicit>

  <coupling-scheme:serial-explicit>
    <time-window-size value="1" />
    <max-time-windows value="3" />
    <participants first="f2" second="c1" />
    <exchange data="p_t" mesh="c1_2" from="c1" to="f2" initialize="yes" />
    <exchange data="dmdt" mesh="f2_1" from="f2" to="c1" initialize="yes" />
  </coupling-scheme:serial-explicit>


</precice-configuration>

dummysolver.py:

#!/usr/bin/env python
from precice import Participant


def dummysolver(solver_name, precice_config, meshes):
    precice = Participant(solver_name, precice_config, 0, 1)

    # set up mesh stuff from precice
    for mesh in meshes:
        mesh['dimensions'] = precice.get_mesh_dimensions(mesh['name'])
        mesh['vertex_ids'] = precice.set_mesh_vertices(mesh['name'], mesh['grid'])

    # write initial data
    if precice.requires_initial_data():
        for mesh in meshes:
            for varname in mesh['var_write']:
                precice.write_data(mesh['name'], varname, mesh['vertex_ids'], mesh['variables'][varname])

    # init precice
    print("init precice...")
    precice.initialize()

    # print initial values of variables
    print("after init:")
    for mesh in meshes:
        print(f"{mesh['name']}: {mesh['variables']}")


    # configure time step: fixed dt equal to the configured time-window-size
    # (precice.get_max_time_step_size() would be the more general choice)
    t = 0.
    dt = 1.0

    # LOOP DUMMY SOLVING
    while precice.is_coupling_ongoing():
        print(f"t={t}:")

        # read values
        for mesh in meshes:
            vertex_ids = mesh['vertex_ids']
            for varname in mesh['var_read']:
                mesh['variables'][varname] = precice.read_data(mesh['name'], varname, vertex_ids, dt)

        # output current iteration values
        for mesh in meshes:
            print(f"{mesh['name']}: {mesh['variables']}")


        # --------- DUMMY SOLVE ---------
        for mesh in meshes:
            for varname in mesh['var_write']:
                mesh['variables'][varname] += 0.1

        _ = input("Dummy solver paused. Press Enter to continue...")
        # --------- DUMMY SOLVE ---------



        # write values after solving
        for mesh in meshes:
            vertex_ids = mesh['vertex_ids']
            for varname in mesh['var_write']:
                precice.write_data(mesh['name'], varname, vertex_ids, mesh['variables'][varname])

        # next iteration
        t += dt
        print("advance, wait...")
        precice.advance(dt)
        print(f"loop done, t={t}")

    precice.finalize()

The three participants:

f1.py

#!/usr/bin/env python
import numpy as np
import os, sys
sys.path.append(os.path.abspath(".."))
from dummysolver import dummysolver

solver_name = "f1"
precice_config = "../precice-config.xml"
meshes = [
    {
        'name': "f1_2",
        'grid': np.array([[0.,0.]]),
        'dimensions': -1,
        'vertex_ids': [],
        'var_read' : ['p_t'],
        'var_write' : ['dmdt'],
        'variables' : {
            'p_t' : np.array([1.]),
            'dmdt' : np.array([1.]),
        }
    }
]

dummysolver(solver_name, precice_config, meshes)

f2.py:

#!/usr/bin/env python
import numpy as np
import os, sys
sys.path.append(os.path.abspath(".."))
from dummysolver import dummysolver

solver_name = "f2"
precice_config = "../precice-config.xml"
meshes = [
    {
        'name': "f2_1",
        'grid': np.array([[0.,0.]]),
        'dimensions': -1,
        'vertex_ids': [],
        'var_read' : ['p_t'],
        'var_write' : ['dmdt'],
        'variables' : {
            'p_t' : np.array([4.]),
            'dmdt' : np.array([4.]),
        }
    }
]

dummysolver(solver_name, precice_config, meshes)

c1.py:

#!/usr/bin/env python
import numpy as np
import os, sys
sys.path.append(os.path.abspath(".."))
from dummysolver import dummysolver

solver_name = "c1"
precice_config = "../precice-config.xml"
meshes = [
    {
        'name': "c1_1",
        'grid': np.array([[0.,0.]]),
        'dimensions': -1,
        'vertex_ids': [],
        'var_read' : ['dmdt'],
        'var_write' : ['p_t'],
        'variables' : {
            'p_t' : np.array([2.]),
            'dmdt' : np.array([2.]),
        }
    },

    {
        'name': "c1_2",
        'grid': np.array([[0., 0.]]),
        'dimensions': -1,
        'vertex_ids': [],
        'var_read': ['dmdt'],
        'var_write': ['p_t'],
        'variables': {
            'p_t': np.array([3.]),
            'dmdt': np.array([3.]),
        }
    }
]

dummysolver(solver_name, precice_config, meshes)

We have to wait for the preCICE experts to answer what is going on, but I am a bit curious. Is the observed behavior an issue for your use case, i.e., is there something to fix, or are you just curious yourself whether this is the intended behavior?

@ajaust The behavior is partially an issue. The data exchange is performed correctly and in order. However, the first iteration is a performance issue, since the solvers run sequentially instead of in parallel (CPU nodes idle). For now it is not a big problem, but it might become one once I scale up my meshes etc.

I’m somehow missing how the first iteration differs from all subsequent ones once initialize() is fully done. From my perspective, I would have expected that everything happens in initialize() and that every time-step loop is exactly the same.

I’ve done some more reading in the preCICE documentation; it describes that this behavior can be expected: Multi coupling configuration | preCICE - The Coupling Library
I played around a little with changing the first/second order of the solvers and with shifting or removing “initialize”. Once no initial data is sent, all solvers run in parallel even in the first iteration (see the variant below).
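
For reference, this is the variant of the first scheme I tested. As far as I can tell, the initialize attribute of exchange defaults to “no”, so simply dropping it disables the data initialization:

<coupling-scheme:serial-explicit>
  <time-window-size value="1" />
  <max-time-windows value="3" />
  <participants first="f1" second="c1" />
  <exchange data="p_t" mesh="c1_1" from="c1" to="f1" />
  <exchange data="dmdt" mesh="f1_2" from="f1" to="c1" />
</coupling-scheme:serial-explicit>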

However, all this does not solve my initial problem. Switching first/second does help at first in my case, but if I want to restart the simulation from iteration X, I am back at the same point once I load and distribute the results of the previous run.
Technically I could implement that outside of the main preCICE wrappers, but that goes against my instinct: I would rather set up my solvers during initialize() to either start fresh or load and exchange previous iteration results (see the sketch below). I want to keep the implementation of new adapters/solvers consistent.
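
To make this concrete, here is a rough sketch of what I have in mind inside the dummy solver. restart_file and the npz layout are placeholders of mine, the rest uses the same calls as dummysolver.py:

# sketch: feed restart values in as initial data (restart_file is a placeholder)
if precice.requires_initial_data():
    for mesh in meshes:
        if restart_file is not None:
            # load the last written values of the previous run
            saved = np.load(restart_file)  # assumed: maps variable names to arrays
            for varname in mesh['var_write']:
                mesh['variables'][varname] = saved[varname]
        for varname in mesh['var_write']:
            precice.write_data(mesh['name'], varname, mesh['vertex_ids'], mesh['variables'][varname])

precice.initialize()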

It seems like my best choice at the moment is to accept the first iteration as a kind of bootstrap and not do any actual solver run there, roughly as sketched below. But again, this goes against my instinct and logic, since I would rather do that in the actual initialize phase.
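
In the dummy solver, that bootstrap would look roughly like this (first_window is my own flag, do_actual_solve a placeholder for the real solver step):

# sketch: skip the solver run in the first (bootstrap) time window
first_window = True
while precice.is_coupling_ongoing():
    # ... read values as before ...
    if not first_window:
        do_actual_solve()  # placeholder for the real solver step
    first_window = False
    # ... write values and advance as before ...
    precice.advance(dt)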

Hi @Mattsch,

When I refactored the compositional coupling scheme to support remeshing, I split the data exchange into two phases, one per direction. This is required to support remeshing in the serial coupling scheme.

A positive side effect of this refactoring is that the coupling schemes all run these phases interleaved. This bundles the blocking communication, meaning that solvers in a compositional coupling are able to run in parallel. This is what you observe during normal iterations.

The data initialization, however, is still a single function that exchanges in both directions and therefore imposes sequential handling on the coupling schemes.
To fix this, we would need to also split the initialization into two phases and implement this in the compositional coupling scheme.
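
Schematically, the difference looks roughly like this (the names are illustrative, not actual preCICE internals):

# advance: the composition interleaves the two phases of each scheme
def advance_composition(schemes):
    for scheme in schemes:
        scheme.send_data()     # all sends are issued first
    for scheme in schemes:
        scheme.receive_data()  # blocking receives are bundled, solvers overlap

# initialize: still one monolithic call per scheme, both directions at once
def initialize_composition(schemes):
    for scheme in schemes:
        scheme.initialize_data()  # send + receive back-to-back, serializes participants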

Would you like to open a feature issue in the preCICE repository? It is always extremely helpful to know which features are needed by users. This feature could still make the April release.

Best
Frédéric

Dear @fsimonis,

thanks for the insight. So, as I understand it, the observed behavior in my setup could be fixed and is not really intentional (it is more or less a missing implementation)?
I’ll add my issue to the repo. However, I am currently in the exploratory phase of how preCICE can be used on our side, so there is no urgency.

Exactly. It’s expected behaviour that can be fixed with some implementation effort.

Happy exploring!

Best
Frédéric