# How to Mount NFS File Shares on Bare Metal GPUs

DigitalOcean Gradient™ AI Bare Metal GPUs are dedicated, single-tenant servers with 8 GPUs of various models that can operate standalone or in multi-node clusters. [NFS](https://en.wikipedia.org/wiki/Network_File_System), or Network File System, is a distributed file system protocol that lets multiple clients access the same storage over a network.

To add an NFS file share to your bare metal GPU, first [contact sales](https://www.digitalocean.com/products/bare-metal-gpu?referrer=pdocs&utm_campaign=how-to-mount-nfs-file-shares-on-bare-metal-gpus#sales-form). After provisioning, we send you:

- The **mount path** for your file share in the format `/id/share_name`, for example `/1234567/example_name`.
- The **IP addresses** you can use to mount the file share.

Once you have this information, [install the NFS client and stunnel](#install-stunnel-and-nfs) to connect to the share over TLS.

## Install Stunnel and NFS

Before installing stunnel and the NFS client, ensure that your system packages are up to date:

```shell
sudo apt update
```

Afterwards, install the NFS client and stunnel packages:

```shell
sudo apt install stunnel nfs-common
```

DigitalOcean enforces NFS v4.1 with TLS. [Stunnel](https://en.wikipedia.org/wiki/Stunnel) establishes the TLS connection to the NFS server.

The installation is successful if you see a listing of the new packages installed for stunnel and the NFS client.

## Configure Stunnel

Create the `/var/run/stunnel` directory so stunnel can create its PID file:

```shell
sudo mkdir /var/run/stunnel
```

Download and save [our root CA certificate](https://docs.digitalocean.com/products/bare-metal-gpus/how-to/mount-nfs-file-shares/nfs.crt) to `/root/nfs.crt`:

```shell
sudo curl https://docs.digitalocean.com/products/bare-metal-gpus/how-to/mount-nfs-file-shares/nfs.crt -o /root/nfs.crt
```

This certificate signs the NFS server certificates.
By trusting it, you trust all certificates it issues, allowing secure access to DigitalOcean's NFS servers.

Create the stunnel configuration file at `/etc/stunnel/stunnel.conf`:

`/etc/stunnel/stunnel.conf`

```text
pid = /var/run/stunnel/stunnel.pid
CAfile = /root/nfs.crt
socket = r:TCP_NODELAY=1

[nfs4]
client = yes
accept = 127.0.0.1:49152
connect = <nfs_server_ip>:2049
ciphers = ALL
sslVersion = TLSv1.2
```

Replace `<nfs_server_ip>` with one of the IPs we provided. For best performance, use a different IP for each GPU node.

This configuration:

- Creates a listener on port `49152`.
- Encrypts traffic with TLS and forwards it to port `2049` on the NFS server.
- Specifies the CA certificate and PID file locations.

Restart stunnel to apply the configuration:

```shell
sudo systemctl restart stunnel4
```

Check that stunnel is running:

```shell
sudo systemctl status stunnel4
```

The service is running successfully if you see output similar to the following:

```text
● stunnel4.service - LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons)
     Loaded: loaded (/etc/init.d/stunnel4; generated)
     Active: active (running) since Tue 2025-03-18 14:22:17 UTC; 5s ago
       Docs: man:stunnel(8)
    Process: 12345 ExecStart=/etc/init.d/stunnel4 start (code=exited, status=0/SUCCESS)
      Tasks: 2 (limit: 18942)
     Memory: 1.2M
        CPU: 40ms
     CGroup: /system.slice/stunnel4.service
             ├─12348 /usr/bin/stunnel4 /etc/stunnel/stunnel.conf
             └─12349 /usr/bin/stunnel4 /etc/stunnel/stunnel.conf
```

This confirms that the stunnel service is `active (running)` and using your `/etc/stunnel/stunnel.conf` configuration file.

## Mount the NFS File Share

Once stunnel is active, mount the NFS share through the local stunnel port. Replace `/<id>/<share_name>` with your mount path:

```shell
sudo mount -o port=49152,nfsvers=4.1,nconnect=16 127.0.0.1:/<id>/<share_name> /mountpoint/
```

The `port` option must match the `accept` port in your stunnel configuration. `nconnect=16` opens up to 16 TCP connections to improve performance.
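The `mount` command above does not persist across reboots. One way to make the mount persistent (a sketch, not part of DigitalOcean's provisioning instructions) is an `/etc/fstab` entry that reuses the same options, where `/<id>/<share_name>` again stands for your mount path; stunnel must also start at boot, for example via `sudo systemctl enable stunnel4`:

`/etc/fstab`

```text
# Mount the NFS share through the local stunnel listener at boot.
# _netdev delays the mount until networking is available.
127.0.0.1:/<id>/<share_name>  /mountpoint  nfs  port=49152,nfsvers=4.1,nconnect=16,_netdev  0  0
```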
Verify the mount:

```shell
df -h /mountpoint/
```

The mount is successful if you see output similar to the following:

```text
Filesystem                       Size  Used Avail Use% Mounted on
127.0.0.1:/1234567/example_name   10T  2.0T  8.0T  20% /mountpoint
```

This confirms that the NFS file share is mounted at `/mountpoint/`, showing the total, used, and available space.
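As an optional final check, you can write a small file to the share and read it back. This is a minimal sketch: `TARGET` is a hypothetical variable (not part of the instructions above) that defaults to `/tmp` so the commands can be tried anywhere; point it at `/mountpoint/` to exercise the NFS share itself.

```shell
# Optional smoke test. TARGET is a hypothetical variable defaulting to
# /tmp; set TARGET=/mountpoint to test the mounted NFS share instead.
TARGET="${TARGET:-/tmp}"

# Write a small file and flush it to the server.
echo "nfs-smoke-test" > "$TARGET/nfs_test_file"
sync

# Read it back; prints "nfs-smoke-test" if the round trip worked.
cat "$TARGET/nfs_test_file"

# Clean up.
rm "$TARGET/nfs_test_file"
```

If the `cat` output matches what was written and `rm` succeeds, the share accepts reads, writes, and deletes from this node.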