How to Mount NFS File Shares on Bare Metal GPUs
Validated on 4 Nov 2025 • Last edited on 14 Nov 2025
DigitalOcean Gradient™ AI Bare Metal GPUs are dedicated, single-tenant servers with 8 GPUs of various models that can operate standalone or in multi-node clusters.
NFS, or Network File System, is a distributed file system protocol that lets multiple clients access the same storage over a network.
To add an NFS file share to your bare metal GPU, first contact sales. After provisioning, we send you:
- The mount path for your file share in the format /<id>/<share_name>, for example /1234567/example_name.
- The IP addresses you can use to mount the file share.
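As a minimal sketch of the path format, the share ID and share name can be split apart in the shell. The values below are the hypothetical example ones from above, not a real share:

```shell
# Hypothetical example mount path from above: /1234567/example_name
MOUNT_PATH="/1234567/example_name"

# Strip the leading slash, then take everything before the next slash (the ID)
SHARE_ID="${MOUNT_PATH#/}"; SHARE_ID="${SHARE_ID%%/*}"
# Everything after the last slash is the share name
SHARE_NAME="${MOUNT_PATH##*/}"

echo "id=${SHARE_ID} name=${SHARE_NAME}"
# → id=1234567 name=example_name
```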
Once you have this information, install the NFS client and stunnel to connect to the share over TLS.
Install Stunnel and NFS
Before installing stunnel and the NFS client, update your system packages:
sudo apt update
Afterwards, install the NFS client and stunnel packages:
sudo apt install stunnel nfs-common
DigitalOcean enforces NFS v4.1 with TLS. Stunnel establishes the TLS connection to the NFS server.
The installation is successful if apt lists the newly installed stunnel and NFS client packages.
Configure Stunnel
Create the /var/run/stunnel directory so stunnel can create its PID file:
sudo mkdir /var/run/stunnel
Download and save our root CA certificate to /root/nfs.crt:
sudo curl https://docs.digitalocean.com/products/bare-metal-gpus/how-to/mount-nfs-file-shares/nfs.crt -o /root/nfs.crt
This certificate signs the NFS server certificates. By trusting it, you trust all certificates it issues, allowing secure access to DigitalOcean’s NFS servers.
Create the stunnel configuration file at /etc/stunnel/stunnel.conf like this:
pid = /var/run/stunnel/stunnel.pid
CAfile = /root/nfs.crt
socket = r:TCP_NODELAY=1
[nfs4]
client = yes
accept = 127.0.0.1:49152
connect = <use_the_provided_ip_address>:2049
ciphers = ALL
sslVersion = TLSv1.2
Replace <use_the_provided_ip_address> with one of the IPs we provided. For best performance, use a different IP for each GPU node.
This configuration:
- Creates a listener on port 49152.
- Encrypts traffic with TLS and forwards it to port 2049 on the NFS server.
- Specifies the CA certificate and PID file locations.
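As a sketch of the per-node recommendation above: each GPU node keeps its own /etc/stunnel/stunnel.conf, identical except for the connect line, which points at a different one of the IPs we provided. The addresses below are hypothetical placeholders, not real server IPs:

```
# Node 1: /etc/stunnel/stunnel.conf (excerpt; 198.51.100.10 is a hypothetical placeholder IP)
[nfs4]
client = yes
accept = 127.0.0.1:49152
connect = 198.51.100.10:2049

# Node 2: same file on the second node, pointing at a different provided IP
# connect = 198.51.100.11:2049
```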
Restart stunnel to apply the configuration:
sudo systemctl restart stunnel4
Check that stunnel is running:
sudo systemctl status stunnel4
The service is running successfully if you see output similar to the following:
● stunnel4.service - LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons)
Loaded: loaded (/etc/init.d/stunnel4; generated)
Active: active (running) since Tue 2025-03-18 14:22:17 UTC; 5s ago
Docs: man:stunnel(8)
Process: 12345 ExecStart=/etc/init.d/stunnel4 start (code=exited, status=0/SUCCESS)
Tasks: 2 (limit: 18942)
Memory: 1.2M
CPU: 40ms
CGroup: /system.slice/stunnel4.service
├─12348 /usr/bin/stunnel4 /etc/stunnel/stunnel.conf
             └─12349 /usr/bin/stunnel4 /etc/stunnel/stunnel.conf
This confirms that the stunnel service is active (running) and using your /etc/stunnel/stunnel.conf configuration file.
Mount the NFS File Share
Once stunnel is active, create the mount point directory, then mount the NFS share through the local stunnel port. Replace /<id>/<share_name> with your mount path:
sudo mkdir -p /mountpoint
sudo mount -o port=49152,nfsvers=4.1,nconnect=16 127.0.0.1:/<id>/<share_name> /mountpoint/
The port option must match the accept port in your stunnel configuration. nconnect=16 opens up to 16 TCP connections to improve performance.
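If you want the share available after a reboot, one option is an /etc/fstab entry. This is a sketch, assuming the same /mountpoint path; replace /<id>/<share_name> with your own mount path. The noauto option keeps boot from blocking on a tunnel that is not yet up, so run sudo mount /mountpoint manually once stunnel is running:

```
# /etc/fstab entry (sketch) — replace /<id>/<share_name> with your mount path.
# noauto: do not mount at boot; mount manually after stunnel is running.
127.0.0.1:/<id>/<share_name>  /mountpoint  nfs  port=49152,nfsvers=4.1,nconnect=16,noauto  0  0
```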
Verify the mount:
df -h /mountpoint/
The mount is successful if you see output similar to the following:
Filesystem Size Used Avail Use% Mounted on
127.0.0.1:/1234567/example_name   10T  2.0T  8.0T  20%  /mountpoint
This confirms that the NFS file share is mounted at /mountpoint/, showing the total, used, and available space.