MBDyn adapter: coupling sequence of operations

Hi, following @DavidSCN's suggestion about the order of operations during a coupling cycle in preCICE, I found a difference between how preCICE performs the coupling and the order of operations that MBDyn expects. MBDyn has a built-in feature to couple with external code (so that you don’t need to perform checkpointing yourself), but its order of operations is the following:

while(isCouplingOngoing){
    compute_motion_step();
    // if this is the first iteration of the current time step, this is a prediction of the motion at t+1;
    // MBDyn expects the fluid solver to use this motion to compute its time step and then the forces at the interface
    retrieve_forces();
    // the fluid solver sends the computed forces to MBDyn
    converged(true/false);
    // the fluid solver tells MBDyn whether it has converged (true), so MBDyn advances to the next time step,
    // or not (false), so MBDyn recomputes the same time step with the updated forces
}

In my thesis I tried to follow the MBDyn logic (in fact I cannot send forces to MBDyn before performing the motion_step) but:

  • basically I could only run solid-first, fluid-second analyses, which in hindsight makes sense
  • this setup becomes unstable when the mass ratio gets close to 1 (0.4 was the limit in my experiments)

In the current version of my adapter, the only way I found to overcome this is to perform two computations, i.e. another compute_motion_step() after retrieving the forces from the interface, and to send the displacements to the interface only after this second computation. This is sub-optimal, even if the structural computation is much faster than the fluid one.
This way I can run any kind of simulation (order of solvers, etc.) and it overcomes the added-mass instability problems.
I just wonder if there is a nicer way to do this.
Claudio

Hi @Claudio,

As you already pointed out, the order of the participants has been changed, so that the solid participant is now the first acting participant. However, I don’t understand how this avoids the checkpointing; it should be independent of the participant order. To me, it sounds a bit like MBDyn already has a built-in checkpointing. How do you determine the converged(true/false) part? Do you use preCICE here or not? In case you say ‘false’ at this point, you should recalculate the time step, i.e., reload the data in the solid solver.
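To make the ‘in case you say false’ part concrete, this is roughly the checkpointing pattern preCICE expects from an implicitly coupled participant (only a minimal sketch against the preCICE v2 C++ API; the solve/save/reload steps are placeholders for solver-specific code):

#include <precice/SolverInterface.hpp>

// precice is a fully configured precice::SolverInterface of the Solid participant
void runImplicitCoupling(precice::SolverInterface &precice)
{
    using namespace precice::constants;

    double dt = precice.initialize();

    while (precice.isCouplingOngoing()) {
        if (precice.isActionRequired(actionWriteIterationCheckpoint())) {
            // save the solver state at the beginning of the time window here
            precice.markActionFulfilled(actionWriteIterationCheckpoint());
        }

        // read coupling data, compute the time step with it, write coupling data ...

        dt = precice.advance(dt);

        if (precice.isActionRequired(actionReadIterationCheckpoint())) {
            // the window has not converged: reload the saved state and repeat it
            precice.markActionFulfilled(actionReadIterationCheckpoint());
        }
        // otherwise preCICE has accepted the window and moved on in time
    }
    precice.finalize();
}

The convergence decision itself is taken by preCICE from the convergence measures in the configuration; the solver only reacts to the two checkpoint actions, and this is where MBDyn’s own putForces(true/false) logic would have to be hooked in.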

If your problem is really that you cannot apply the forces in MBDyn before performing a motion step, what you could do is send the displacement at the end of the while loop and use it initially for the next time window. This obviously becomes redundant in case you already compute the motion at the end of your time step (after convergence). Still, I’m not sure I fully understand your case and how MBDyn handles it correctly.
Maybe you could explain the checkpointing and the convergence handling of the coupled system more in detail.

David

Hi @DavidSCN ,
I’m afraid I might have left too much implied, so I’ll try to explain a little better. MBDyn has its own API (named mbcNodal, referred to as mbc below) that can be used to exchange forces and displacements between the MBDyn structure and a given cloud of points, so one can set up an MBDyn case and connect it to some external software. MBDyn expects the following behavior from any connecting software:

while(true){
    // compute the motion step of the MBDyn structure
    mbc->GetMotion();
    // retrieve the displacements at the interface points and use them to compute the fluid domain, i.e.
    interface_motion = mbc->X(i,j);
    // the fluid advances with the retrieved information and computes the forces at the interface;
    // those forces are sent to MBDyn to be used for the next time step or iteration
    mbc->F(i,j) = interface_forces;
    // the external software must tell MBDyn whether it has converged or not (MBDyn does its own checkpointing)
    mbc->putForces(false); // no convergence, reload the state
    // mbc->putForces(true) would tell MBDyn to go to the next time step
}

As far as I can see, this behavior is compliant with the preCICE order of operations only if, in a staggered implicit coupling, Solid is first and Fluid is second. This was in fact my only way to get a convergent simulation with preCICE and MBDyn. Honestly, I haven’t experimented much with parallel coupling.

Lately I have tried to implement an adapter that again uses the preCICE and MBDyn APIs, trying to mix the two logics:

while(precice_interface->isCouplingOngoing()){
    mbc->GetMotion();
    precice_interface->readBlockVectorData(forceID, ..., interface_forces);
    mbc->F(i,j) = interface_forces;
    mbc->putForces(false);
    // now I'm sure to compute the motion with the forces of the latest fluid iteration
    mbc->GetMotion();
    interface_displacements = mbc->X(i,j);
    precice_interface->writeBlockVectorData(dispID, ..., interface_displacements);
    precice_interface->advance();
    if(precice_interface->isActionRequired(...)){
        // next iteration
        mbc->putForces(false);
    }else{
        // next time step
        mbc->putForces(true);
    }
}

This last version of the adapter has an apparently useless computation (the first GetMotion), but it makes a dramatic improvement in convergence, even in simulations where the mass ratio is 1, while previous versions of the adapter (or this one without the first GetMotion) diverged. The number of iterations also improves a lot, even if there is still room for improvement, but I’ll open another thread about that.

Claudio

Thanks for the description @Claudio. Let me rephrase it briefly: the initial mbc->GetMotion is not necessary, since you afterwards read the updated values from preCICE.
First of all, it is probably worth noting that you can also provide initial data using the precice_interface->initializeData() function and the configuration tag <exchange ... initialize="true" />.
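Applied to your case, where the solid side provides the initial displacements, the handshake on the solver side would look roughly like this (again only a sketch, assuming the preCICE v2 action constants; vertexSize and vertexIDs stand for your interface mesh data):

// after precice_interface->initialize()
if (precice_interface->isActionRequired(precice::constants::actionWriteInitialData())) {
    // write the initial interface displacements before the initial data exchange
    precice_interface->writeBlockVectorData(dispID, vertexSize, vertexIDs, interface_displacements);
    precice_interface->markActionFulfilled(precice::constants::actionWriteInitialData());
}
precice_interface->initializeData();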
If I now understand everything correctly, a potential solution would look as follows:

// Initialize preCICE internally
precice_interface->initialize();
// initialize your solver with data
precice_interface->initializeData();

// read initial data from preCICE if required for the first time step
// (probably only zeros here)
if (precice_interface->isReadDataAvailable())
    precice_interface->readBlockVectorData(forceID, ..., interface_forces);

// forces cannot be applied before a computation,
// so compute the initial 'useless' step outside the loop
mbc->GetMotion();
mbc->F(i,j) = interface_forces;
mbc->putForces(false);

// start the actual time loop
while(precice_interface->isCouplingOngoing()){
    // now the motion is computed with the forces of the latest fluid iteration
    mbc->GetMotion();
    interface_displacements = mbc->X(i,j);
    precice_interface->writeBlockVectorData(dispID, ..., interface_displacements);
    precice_interface->advance();
    precice_interface->readBlockVectorData(forceID, ..., interface_forces);

    if(precice_interface->isActionRequired(...)){
        // next iteration
        mbc->putForces(false);
    }else{
        // next time step
        mbc->putForces(true);
    }
}

The main point here is to pull the useless calculation out of the loop. Not sure if this is compliant with all restrictions. Let’s see…

David

Hi @DavidSCN ,
Thank you very much for the proposed solution. I do indeed use data initialization, and I like the idea of moving the first computation out of the loop. The loop structure you propose recalls the one I used before, but maybe, without that first computation, I missed or shifted an iteration. I will try and see.
It might be worthwhile mentioning that, with the redundant solution I am currently using, the FSI3 benchmark with the Solid-Fluid staggered coupling converges consistently with an average of 3 iterations (min 2, max 4), making the simulation amazingly fast by my previous standards.
Claudio


Hi Claudio,
keep me updated if it works:)

Very nice. We still need to discuss how exactly to do this in the future, but maybe you can share your case with the community at some point. There was great interest in benchmark cases during the workshop, in particular in the FSI3.
David

Hi @DavidSCN ,
Just a quick update: I tried the order of operations you suggested, but unfortunately it didn’t work. Convergence becomes dramatically slow, or the simulation even diverges. On the other hand, the “redundant” setup turns out to be dramatically fast, with no issues whatsoever even with large deformations. I have to double-check (or double-double-check) that my modifications were right. That said, I would rather keep everything like this: it means getting a simulation in 1 hour instead of in 2 days. I’m wondering if somehow the second computation refines the result, but for the moment that is a wild guess. I’m planning to check the residuals and dump the interface results after the two computations.
Claudio

@Claudio I understand that you probably solved these issues already. What was the fix?

@Makis I would say that, for the moment, I am keeping a (probably) unnecessary computation. I haven’t yet investigated other possible solutions; the current one converges fast, and this counterbalances the one redundant structural computation.
The comparison with the usual benchmarks makes me believe that the process is good enough.
I haven’t had the time to investigate other solutions, but I think that either we force MBDyn to be the first solver in a staggered computation (and I’m not sure what happens in a parallel computation) or we have to somehow change a bit of the MBDyn code; honestly, though, this is just a guess.
After I finish with some other experiments I’ll go back to work on this.
Claudio
