NFS Droplet 1-Click allows you to create and maintain shared network storage for your applications running on the DigitalOcean cloud. In addition to this getting started tutorial, it includes the necessary scripts for you to customise for production deployment.
For best performance, we recommend using a Premium Dedicated Droplet with 10 Gbps networking for both your server and client(s).
Use the Volumes Block Storage tutorial to attach storage to your existing NFS Droplet, or to add it during Droplet creation.
Using Volume Block Storage decouples your shared data from the lifecycle of the NFS Droplet 1-Click, meaning your data is safe and persistent no matter what happens to the NFS server. Additionally, Volumes Block Storage can be easily moved between droplets in case you need to relocate your NFS server to another machine.
| Package | Version | License |
|---|---|---|
| nfs-kernel-server | 2.6.1 | GPL |
| Fail2Ban | 0.11.2 | GNU General Public License |
Click the Deploy to DigitalOcean button to create a Droplet based on this 1-Click App. If you aren’t logged in, this link will prompt you to log in with your DigitalOcean account.
In addition to creating a Droplet from the Droplet NFS Server 1-Click App using the control panel, you can also use the DigitalOcean API. As an example, to create a 4GB Droplet NFS Server Droplet in the SFO2 region, you can use the following curl command. You need to either save your API access token to an environment variable or substitute it into the command below.
curl -X POST -H 'Content-Type: application/json' \
-H "Authorization: Bearer $TOKEN" -d \
'{"name":"choose_a_name","region":"sfo2","size":"s-2vcpu-4gb","image": "sharklabs-dropletnfsserver"}' \
"https://api.digitalocean.com/v2/droplets"
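Because the request body is JSON embedded in shell quoting, it can be worth sanity-checking the payload before sending it. This is just a convenience suggestion, not part of the 1-Click itself:

```shell
# Validate that the request body parses as JSON before POSTing it
PAYLOAD='{"name":"choose_a_name","region":"sfo2","size":"s-2vcpu-4gb","image":"sharklabs-dropletnfsserver"}'
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"
```

If the quoting is broken, `json.tool` reports the parse error instead of printing "payload OK".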
The NFS server can be configured to be accessible from both VPC and public networks.
If you intend to use NFS between droplets in the same region, we highly recommend configuring the NFS server in a VPC network for an easy and secure setup.
On the other hand, if you intend to use NFS between droplets in different regions, Kubernetes droplets, or machines outside of DigitalOcean, your only option is configuring the NFS server on a public network.
Before configuring the NFS server, make sure that Volumes Block Storage is attached to the droplet. After you SSH into the NFS Server Droplet, list the contents of the /mnt folder; you should see a folder named volume_<region>_<number>:
$ ls /mnt
volume_sfo3_01
In the rest of this tutorial, we will use volume_sfo3_01; make sure to replace it with your volume name.
The first step is to change the ownership of the volume folder to allow any NFS client to use it:
$ chown -R nobody:nogroup /mnt/volume_sfo3_01/
Next, export the volume folder so the NFS server can share it. Open the /etc/exports file in your preferred editor:
$ nano /etc/exports
Append /mnt/volume_sfo3_01 *(rw,sync,no_subtree_check) to the end of the file. This tells the NFS server that any host can mount your volume with read and write access. You do not need to worry about unwanted access, because the next step restricts connections to the VPC network only.
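If you prefer not to rely on the firewall alone, the export itself can also be scoped to the VPC range instead of `*` (shown here with the example subnet used in the firewall step below; substitute your own):

```
/mnt/volume_sfo3_01 10.124.0.0/20(rw,sync,no_subtree_check)
```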
Save and exit the file, then restart the NFS server to apply the changes (alternatively, exportfs -ra re-exports without a full restart):
$ systemctl restart nfs-kernel-server
Next, create a firewall rule to allow access to NFS only from the VPC network.
In the Networking settings of your NFS Droplet, find the VPC IP range. In the commands below we use 10.124.0.0/20; make sure to replace this value with the VPC range of your network.
Create a firewall rule allowing access to the NFS server from any IP from your VPC network:
$ ufw allow in on eth1 from 10.124.0.0/20 to any port nfs
You can check the current status of UFW which should look like this:
$ ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] 22/tcp LIMIT IN Anywhere
[ 2] 2049 on eth1 ALLOW IN 10.124.0.0/20
[ 3] 22/tcp (v6) LIMIT IN Anywhere (v6)
At this point, you have successfully configured your NFS server to work with your VPC network. Below, you can find how to configure the NFS client.
Setting up NFS on a public network is necessary when you have Droplets, Kubernetes clusters, or other machines running in different regions whose applications need access to shared NFS storage.
For this setup, you will need to disable the UFW firewall on the Droplet and configure a DigitalOcean Firewall instead. The DigitalOcean Firewall allows you to control which Droplets have access to your NFS server based on their tags.
You can start by creating and configuring your NFS server the same way as described in the Configuring NFS Server in a VPC Network section, but instead of creating UFW rules, disable UFW entirely:
$ ufw disable
Now, create a tag that your NFS server and clients will share. In the NFS Droplet 1-Click settings, go to Tags and create a tag with a unique name:
Next, head to your NFS Droplet 1-Click settings, go to Networking -> Firewalls and click the Edit button:
Click Create Firewall and populate fields for your new firewall:
Click Create Firewall, and you have successfully configured the NFS server with the DigitalOcean firewall. To allow your NFS clients to access the NFS server, add the new tag to the client Droplets; the DigitalOcean firewall will then automatically allow the connections.
Note: The DigitalOcean firewall works only on the public network. Use the public IP of the NFS server to connect the clients.
After you SSH login to the NFS Client Droplet, install the NFS client.
Ubuntu / Debian:
$ apt install nfs-common
Fedora / CentOS / Rocky Linux / AlmaLinux:
$ yum install nfs-utils
Now, create a folder to which the NFS share will be mounted:
$ mkdir /mnt/nfs
Finally, mount the NFS export to the newly created folder. Use the server's VPC IP for a VPC setup (first example) or its public IP for a public setup (second example).
$ mount -t nfs 10.124.0.6:/mnt/volume_sfo3_01/ /mnt/nfs -o nconnect=8
$ mount -t nfs 134.209.175.72:/mnt/volume_sfo3_01/ /mnt/nfs -o nconnect=8
In both examples, we specified the nconnect mount option. This value, which can be between 1 and 16, controls how many TCP connections the client forms between itself and the NFS server. Some workloads, particularly those with small writes, may see an improvement in IOPS by setting this option.
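To make the mount persist across reboots, you can also add a matching entry to /etc/fstab (the address and paths shown are the examples from above; adjust them to yours):

```
10.124.0.6:/mnt/volume_sfo3_01 /mnt/nfs nfs defaults,nconnect=8 0 0
```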
At this point, your NFS client is configured with the NFS server. You can check it by using df -h and looking at the size of the mounted folder:
$ df -h
...
10.124.0.6:/mnt/volume_sfo3_01 100G 0 95G 0% /mnt/nfs
...
If you no longer need the NFS share, you can unmount it on the client using:
$ umount /mnt/nfs
DigitalOcean Droplets come with a single partition for the system (/dev/vda1, for example). While it is fine to use a directory on the root partition as the NFS share, there is no control over its usage, and you run the risk of saturating the root partition. If you are planning to use local storage, one option is to allocate a file of a particular size (e.g. 20GB) and use it as a filesystem to expose via NFS. We have included a script (local-partition.sh) that you can customize. It uses dd to create a blank file of a given size, then mkfs to create a filesystem on it, and finally mounts it to expose it as a separate partition.
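The script follows the pattern below. This is only a sketch of the approach, not the script's actual contents; the file name, size, and mount point are placeholders you would adjust:

```shell
IMG=/root/nfs-backing.img   # backing file (placeholder path)
MNT=/srv/nfs-local          # directory to export over NFS (placeholder)
SIZE_GB=20                  # size of the backing file

# 1. Allocate a blank file of the requested size
dd if=/dev/zero of="$IMG" bs=1M count=$((SIZE_GB * 1024))

# 2. Create a filesystem inside the file
mkfs.ext4 -F "$IMG"

# 3. Mount it via a loop device so it behaves like a separate partition
mkdir -p "$MNT"
mount -o loop "$IMG" "$MNT"
```

After mounting, you would export $MNT via /etc/exports just like a volume folder.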
We strongly recommend backing up your NFS files. This lets you roll back to a point in time, recover corrupted files, or even mirror the filesystem to another region for your application needs.
You can use SnapShooter (https://marketplace.digitalocean.com/add-ons/snapshooter, or https://app.snapshooter.com/) for backing up your server filesystem. After setting up SnapShooter, you will need to configure the target backup store (S3 compatible). Creating a backup job is just adding 1 command to set up SnapShooter on your NFS server, and then creating a backup task on the SnapShooter console using the “Server filesystem backup” recipe.
If you wish to test the performance of your NFS share from the client side, you can follow many of the same steps shown in the community guide for benchmarking volume performance: https://www.digitalocean.com/community/tutorials/how-to-benchmark-digitalocean-volumes