Validated on 31 Oct 2025 • Last edited on 4 Nov 2025
DigitalOcean Kubernetes (DOKS) is a Kubernetes service with a fully managed control plane, high availability, and autoscaling. DOKS integrates with standard Kubernetes toolchains and DigitalOcean’s load balancers, volumes, CPU and GPU Droplets, API, and CLI.
Only the 8-GPU configurations are multi-node capable, and they are available by contract only. For more information on the supported GPUs, see GPU Worker Nodes.
In a multi-node configuration, the 8-GPU worker nodes are connected through a dedicated high-speed networking fabric in the DOKS cluster. The fabric is exposed on the worker nodes as eight network interface controllers (NICs), named fabric0, fabric1, …, fabric7, which exist alongside the regular eth0 and eth1 interfaces. The eth0 interface provides public connectivity to the internet, and eth1 provides private connectivity to other nodes in the same VPC network. The fabric NICs let AI/ML workloads exchange data over the networking fabric with very low latency and high throughput. To achieve the best networking performance, we recommend using the Remote Direct Memory Access (RDMA) protocol for communication between the GPU nodes over the fabric NICs, which bypasses the operating system's CPU and kernel entirely during data transfers.
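To confirm that the fabric NICs are present on a worker node before you configure any workloads, you can list the node's interfaces from a temporary node debug pod, which runs in the host network namespace. The node name below is a placeholder, and busybox is only an example image; any image with the ip and grep utilities works. Delete the debug pod when you are done.

kubectl debug node/<your-node-name> -it --image=busybox -- sh -c 'ip link show | grep fabric'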
Additional drivers and plugins are required to enable the high-speed fabric for multi-node GPU networking. This guide covers the required components and how to configure them.
To use the high-speed fabric with container-based workloads, install the following driver and plugin on both AMD and NVIDIA GPU nodes:
doca-roce RDMA driver. This driver enables high-performance internode communication for multi-node GPUs.
Mellanox k8s-rdma-shared-dev-plugin. The plugin exposes the RDMA-related resources as Kubernetes resources, named rdma/fabric0, rdma/fabric1, rdma/fabric2, …, rdma/fabric7. You can manage these resources using resource requests and limits in your manifests.
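To confirm that the plugin is advertising these resources on a GPU worker node, you can inspect the node's allocatable resources. The node name is a placeholder:

kubectl describe node <your-node-name> | grep rdma/

Each rdma/fabricN resource appears in the node's Capacity and Allocatable sections when the plugin is running.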
Additionally, you need to install the Multus CNI plugin. Multus lets you move the fabric0, fabric1, ..., fabric7 NICs into the container network namespace via the host-device plugin. To install the CNI, run the following command:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml

eth0 and eth1 are not affected by the installation of the Multus CNI and continue to use Cilium.
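Before continuing, you can optionally check that the Multus daemonset pods are running on your nodes. The exact pod names and labels depend on the Multus release, so the grep filter below is only a convenient way to find them:

kubectl get pods -n kube-system | grep multus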
After installing the CNI plugin, you need to provision the host-device plugin.
Expose the RDMA-related resources managed by the Mellanox k8s-rdma-shared-dev-plugin to your workloads. To do this for AMD GPU nodes, add the following resource requests and limits to your Pod or Deployment manifest:
resources:
  requests:
    amd.com/gpu: 8
    rdma/fabric0: 1
    rdma/fabric1: 1
    rdma/fabric2: 1
    rdma/fabric3: 1
    rdma/fabric4: 1
    rdma/fabric5: 1
    rdma/fabric6: 1
    rdma/fabric7: 1
  limits:
    amd.com/gpu: 8
    rdma/fabric0: 1
    rdma/fabric1: 1
    rdma/fabric2: 1
    rdma/fabric3: 1
    rdma/fabric4: 1
    rdma/fabric5: 1
    rdma/fabric6: 1
    rdma/fabric7: 1

For NVIDIA GPU nodes, replace amd.com/gpu: 8 with nvidia.com/gpu: 8.
Create a config file of kind: NetworkAttachmentDefinition to provision the following host-device Multus Custom Resource (CR) objects for the fabric NICs:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: roce-net-fabric0
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "fabric0"
  }'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: roce-net-fabric1
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "fabric1"
  }'
---
...
...
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: roce-net-fabric7
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "fabric7"
  }'

These are namespaced resources and you must install them in your desired namespace using the following command:
kubectl apply -f <your-manifest>.yaml --namespace=<your-namespace>
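To confirm that the objects were created in the right namespace, you can list them. network-attachment-definitions is the resource name registered by the Multus CRD, and net-attach-def is its short name:

kubectl get network-attachment-definitions --namespace=<your-namespace>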
Next, make fabric0, fabric1, ..., fabric7 available in the containers. To do this, add the following annotation to your Pod or Deployment manifest:
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: >-
      roce-net-fabric0@fabric0,
      roce-net-fabric1@fabric1,
      roce-net-fabric2@fabric2,
      roce-net-fabric3@fabric3,
      roce-net-fabric4@fabric4,
      roce-net-fabric5@fabric5,
      roce-net-fabric6@fabric6,
      roce-net-fabric7@fabric7

Use kubectl apply to apply the updates.
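Putting the pieces together, the following is a minimal sketch of a Pod manifest for an AMD GPU node that combines the resource requests, limits, and network annotation described above. The pod name, namespace, image, and command are placeholders for your own workload; on NVIDIA GPU nodes, use nvidia.com/gpu instead of amd.com/gpu.

apiVersion: v1
kind: Pod
metadata:
  name: rdma-workload            # placeholder name
  namespace: <your-namespace>    # must match the NetworkAttachmentDefinition namespace
  annotations:
    k8s.v1.cni.cncf.io/networks: >-
      roce-net-fabric0@fabric0,
      roce-net-fabric1@fabric1,
      roce-net-fabric2@fabric2,
      roce-net-fabric3@fabric3,
      roce-net-fabric4@fabric4,
      roce-net-fabric5@fabric5,
      roce-net-fabric6@fabric6,
      roce-net-fabric7@fabric7
spec:
  containers:
    - name: workload
      image: <your-image>        # placeholder image containing your AI/ML workload
      command: ["sleep", "infinity"]
      resources:
        requests:
          amd.com/gpu: 8
          rdma/fabric0: 1
          rdma/fabric1: 1
          rdma/fabric2: 1
          rdma/fabric3: 1
          rdma/fabric4: 1
          rdma/fabric5: 1
          rdma/fabric6: 1
          rdma/fabric7: 1
        limits:
          amd.com/gpu: 8
          rdma/fabric0: 1
          rdma/fabric1: 1
          rdma/fabric2: 1
          rdma/fabric3: 1
          rdma/fabric4: 1
          rdma/fabric5: 1
          rdma/fabric6: 1
          rdma/fabric7: 1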
Once the fabric0, fabric1, ..., fabric7 NICs are available in the containers, high-speed RDMA networking is enabled between the GPU nodes.
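To verify from inside a running container that the interfaces were attached, you can list them with ip link. If your container image includes the rdma-core utilities, ibv_devinfo additionally shows the RDMA devices. The pod name is a placeholder, and both commands assume the corresponding tools exist in the image:

kubectl exec -it <your-pod> -- ip link show
kubectl exec -it <your-pod> -- ibv_devinfo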