How to Use NFS Storage with Kubernetes Clusters
Validated on 3 Oct 2025 • Last edited on 20 Oct 2025
DigitalOcean Kubernetes (DOKS) is a Kubernetes service with a fully managed control plane, high availability, and autoscaling. DOKS integrates with standard Kubernetes toolchains and DigitalOcean’s load balancers, volumes, CPU and GPU Droplets, API, and CLI.
You can connect your DOKS clusters to a DigitalOcean NFS Share and use the share for workloads such as AI/ML training jobs that need shared storage. For other persistent storage options, see Add Volumes to Kubernetes Clusters.
To use an NFS share with your DOKS cluster, you statically provision a PersistentVolume (PV), bind the PV to a PersistentVolumeClaim (PVC), and then mount the PVC to your workload.
Prerequisites
To connect an existing DOKS cluster to a DigitalOcean NFS share, you need to:
- Create an NFS share. You can provision one using either the DigitalOcean Control Panel or the API.
- Get the connection details once the share is active.
In the left menu of the control panel, click Network File Storage to open the Network File Storage page, which lists all of your NFS shares. In the Mount Path column, note the server IP address and mount path: the server IP address is the value before the colon and the mount path is the value after it. For example, if the value is 10.128.0.69:/123456/6160d138-60cb-4e61-9ff3-076eebed5c0f, then the server IP address is 10.128.0.69 and the mount path is /123456/6160d138-60cb-4e61-9ff3-076eebed5c0f.
To get the values using the API, send a GET request to the /v2/nfs endpoint (see the example request below). From the API response, note the host IP address and the mount path. For example:
...
"host": "10.128.0.69",
"mount_path": "/123456/38bc6f86-9927-491a-a7b5-c5627219a0d3",
...
The host value is the server IP address. The mount_path value is the path to use when configuring your Kubernetes cluster.
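As a minimal sketch, the API request can be made with curl, assuming your DigitalOcean API token is exported in an environment variable named DIGITALOCEAN_TOKEN (the variable name is illustrative):
curl -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  "https://api.digitalocean.com/v2/nfs"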
Create PersistentVolume
A PersistentVolume (PV) is a cluster-level resource that registers your DigitalOcean NFS Share with Kubernetes, making it available for use across the entire cluster.
To provision a PV for your NFS share, create the following config file named nfs-pv.yaml, replacing the values for server and path with the host and mount_path values of your NFS share. The size of the PV should ideally match your share's size, and the accessModes must be ReadWriteMany to allow multiple pods to read and write to the volume simultaneously.
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: do-nfs-pv
  labels:
    type: nfs-model-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: "10.128.0.69"
    path: "/123456/38bc6f86-9927-491a-a7b5-c5627219a0d3"
Use kubectl apply to create the PV:
kubectl apply -f nfs-pv.yaml
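To confirm that the PV registered correctly, list it with kubectl. A statically provisioned PV that is not yet bound to a claim typically shows a STATUS of Available:
kubectl get pv do-nfs-pv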
Create PersistentVolumeClaim
A PersistentVolumeClaim (PVC) is how your applications request access to the storage made available by the PV.
To provision a PVC for your NFS share, create the following config file named nfs-pvc.yaml. The selector in the PVC must match the label on your PV so that the PVC binds to that specific NFS PV, and the accessModes must be ReadWriteMany to allow multiple pods to read and write to the PVC simultaneously.
nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: do-nfs-pvc
  namespace: sammy-doks
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: nfs-model-storage
In the config file, the storageClassName field is set to an empty string. This instructs Kubernetes to skip dynamic provisioning, find the pre-existing, statically provisioned PV that matches the specified label, and link your PVC directly to your manually configured NFS share. DOKS has built-in StorageClass options, such as do-block-storage, that dynamically provision new storage volumes when a PVC requests them. However, in this case you have already provisioned the storage when creating the PV, so you do not need DOKS to dynamically provision one.
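For comparison only, a dynamically provisioned claim would look roughly like the following sketch; the name example-dynamic-pvc is hypothetical and is not used elsewhere in this guide, and because DigitalOcean block storage volumes attach to a single node, such claims use ReadWriteOnce rather than ReadWriteMany:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Hypothetical name, for illustration only.
  name: example-dynamic-pvc
  namespace: sammy-doks
spec:
  # Built-in DOKS StorageClass that provisions a new block storage volume.
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi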
Use kubectl apply to create the PVC:
kubectl apply -f nfs-pvc.yaml
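The PVC is created in the sammy-doks namespace, so that namespace must already exist (you can create it with kubectl create namespace sammy-doks). To verify that the claim bound to the PV, check its status; the STATUS column should show Bound:
kubectl get pvc do-nfs-pvc -n sammy-doks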
Mount PVC in Your Workload
After your PVC is bound to the PV, you can mount it to a workload such as a Deployment, Pod, or DaemonSet.
The following config file demonstrates how to mount the storage to a pod and write the current date to a log file on the NFS share every 5 seconds.
To mount the volume to the pod and reference your PVC, add a volumes section to the pod specification, where the claimName field must match the name you specified for your PVC. Next, add a volumeMounts section to the container, where the name field must match the volume name you specified earlier and the mountPath field specifies the path where the volume is mounted in the container's filesystem.
pod-with-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
  namespace: sammy-doks
spec:
  volumes:
    - name: my-nfs-share
      persistentVolumeClaim:
        claimName: do-nfs-pvc
  containers:
    - name: my-app-container
      image: busybox
      command: ["/bin/sh", "-c", "while true; do date >> /data/test.log; sleep 5; done"]
      volumeMounts:
        - name: my-nfs-share
          mountPath: "/data"
After you apply this manifest using kubectl apply -f pod-with-nfs.yaml, the pod reads from and writes to its /data directory, with all files persisting directly on your DigitalOcean NFS Share.
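To spot-check that the pod is writing to the share, you can read the log file from inside the pod, for example:
kubectl exec -n sammy-doks nfs-test-pod -- tail /data/test.log
Because the PV and PVC use ReadWriteMany, any other pod that mounts the same PVC sees the same files under its own mount path.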