I am trying to replicate the Perpendicular Flap example on a Kubernetes (k8s) cluster that uses an Istio service mesh. I am running OpenFOAM with the preCICE adapter in one pod and CalculiX with the preCICE adapter in another pod in the same k8s namespace. In principle this is not too different from running the two components in separate Docker containers on a Docker network. Here is the problem: I am using Istio with strict mTLS, so by default all traffic is routed through the Istio sidecars. I do not want to change this setting, as it is a basic security feature of Istio.
To get the preCICE communication to work, I currently have to do two things:
- use the following m2n configuration setting in the precice-config.xml:
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../../pvc_shared" network="eth0" port="50061" enforce-gather-scatter="1" />
where enforce-gather-scatter="1" is required to restrict the communication to a single known port (50061). I understand that this limits performance, so I would prefer to remove it.
If I don't use this flag, the CalculiX participant fails with the following error:
Setting up preCICE participant Solid, using config file: config.yml
---[precice] This is preCICE version 2.5.0
---[precice] Revision info: no-info [git failed to run]
---[precice] Build type: Release (without debug log)
---[precice] Configuring preCICE with configuration "../precice-config.xml"
---[precice] I am participant "Solid"
Using quasi 2D-3D coupling
Set ID Found
2D-3D mapping results in a 2D mesh of size 247 from the 494 points in 3D space.
Read data 'Force' found with ID # '3'.
Write data 'Displacement' found with ID # '2'.
---[precice] Setting up primary communication to coupling partner/s
---[precice] Primary ranks are connected
---[precice] Setting up preliminary secondary communication to coupling partner/s
---[precice] Prepare partition for mesh Solid-Mesh
---[precice] Gather mesh Solid-Mesh
---[precice] Send global mesh Solid-Mesh
---[precice] Setting up secondary communication to coupling partner/s
---[precice] ERROR: Accepting a socket connection at failed with the system error: bind: Address already in use
Segmentation fault (core dumped)
Question: Could it be that preCICE tries to open additional communication ports, which are then blocked by Istio?
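For reference, here is a minimal sketch (my own, not preCICE code) of the OS-level failure behind that message: a second socket binding a port that is already in use raises EADDRINUSE, i.e. "Address already in use". My guess is that something similar happens if preCICE opens extra sockets once enforce-gather-scatter is removed and a port is already taken (e.g. by the Istio sidecar):

```python
import errno
import socket

# Minimal reproduction of "bind: Address already in use":
# bind a listening socket, then try to bind a second socket
# to the same address and port without SO_REUSEADDR.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))  # let the OS pick a free port
first.listen()
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
except OSError as exc:
    print(exc.errno == errno.EADDRINUSE)  # prints: True
finally:
    first.close()
    second.close()
```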
- I also have to open up port 50061 in the k8s container definition and explicitly exclude it from Istio's traffic redirection - see the k8s deployment definition below. Normally I would prefer to define the ports in a k8s Service linked to the deployment and route using a service name (e.g. openFOAM:50061) instead of a network interface (eth0). That aside, I think that if I could control which ports preCICE opens, I could update the ports in the deployment definition below accordingly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-deployment
  labels:
    app: python-deployment
  namespace: user
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-deployment
      version: alpha
  template:
    metadata:
      labels:
        app: python-deployment
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "50061"
        traffic.sidecar.istio.io/excludeOutboundPorts: "50061"
    spec:
      serviceAccountName: python-deployment
      securityContext:
        runAsUser: 1000
      containers:
        - name: python-deployment
          image: xxxxxxxxxxx
          imagePullPolicy: Always
          securityContext:
            runAsUser: 1000
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: task-storage
              mountPath: /app/precice-xxxxxxxxxxxxx-comp/editables/pvc_shared
          ports:
            - containerPort: 50061
      initContainers:
        - name: data-permission-fix
          image: busybox
          command: ["/bin/chmod", "-R", "u=rwX,g=rwX,o=rwX", "/data"]
          volumeMounts:
            - name: task-storage
              mountPath: /data
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: "1Gi"
      volumes:
        - name: task-storage
          persistentVolumeClaim:
            claimName: nfs-pvc
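For completeness, the kind of Service I would prefer to use instead of routing via eth0 would look roughly like this (the name openfoam and the selector label are assumptions on my part, and this only helps if the preCICE m2n socket can be pointed at a hostname rather than a network interface):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openfoam            # hypothetical name; would allow routing via openfoam:50061
  namespace: user
spec:
  selector:
    app: openfoam-deployment  # would need to match the Fluid pod's labels
  ports:
    - name: precice-m2n
      port: 50061
      targetPort: 50061
      protocol: TCP
```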
Any ideas?
PS: Great to meet you all at the conference in Chania last week.
Olivia