# How to Tune Network Performance on DigitalOcean Gradient™ AI GPU Droplets

DigitalOcean Droplets are Linux-based virtual machines (VMs) that run on top of virtualized hardware. Each Droplet you create is a new server you can use, either standalone or as part of a larger, cloud-based infrastructure.

GPU Droplets benefit from network tuning to maximize throughput, especially when accessing [Network File Storage](https://docs.digitalocean.com/products/nfs/index.html.md) shares or communicating between nodes.

Two optimizations improve throughput for GPU Droplets:

- Increasing TCP buffer sizes to improve throughput for high-bandwidth connections.
- Enabling jumbo frames on the VPC private interface (`eth1`) to reduce per-packet overhead.

These settings are specific to GPU Droplets and persist across reboots.

## Configure TCP Buffer Sizes

Create a `sysctl` configuration file to increase the maximum TCP send and receive buffer sizes to 16 MB.

Open a new file in your text editor:

```shell
sudo nano /etc/sysctl.d/99-gpu-network-tuning.conf
```

Add the following configuration:

```
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
```

Save and close the file, then apply the settings:

```shell
sudo sysctl --system
```

The file in `/etc/sysctl.d/` ensures the settings persist across reboots.

## Configure MTU

The VPC private interface, `eth1`, on GPU Droplets supports jumbo frames with an MTU (Maximum Transmission Unit) of 9000. The method for configuring MTU depends on which network management tool your distribution uses.

### Ubuntu/Debian (Netplan)

Ubuntu and Debian use [Netplan](https://netplan.io/) for network configuration. The AI/ML-ready image provided for GPU Droplets uses Netplan.

Open the Netplan configuration file in your text editor:

```shell
sudo nano /etc/netplan/50-cloud-init.yaml
```

Find the section for `eth1` (the VPC interface). Look for the line `mtu: 1500` under the `eth1` entry and change it to `mtu: 9000`.
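After the edit, the `eth1` entry should look roughly like the following sketch. The other keys under `eth1` (addresses, routes, match rules) vary per Droplet and are omitted here, so only the `mtu` line is shown as a concrete value:

```yaml
network:
  version: 2
  ethernets:
    eth1:
      # Change this value from 1500 to 9000 to enable jumbo frames.
      # Leave the rest of the eth1 entry (addresses, routes, etc.) unchanged.
      mtu: 9000
```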
This enables jumbo frames on the VPC interface. Then apply the change:

```shell
sudo netplan apply
```

Because this edits the Netplan configuration directly, the MTU setting persists across reboots.

### RHEL/Rocky/AlmaLinux/Fedora (NetworkManager)

Red Hat-based distributions, including Rocky Linux, AlmaLinux, and Fedora, use **NetworkManager** and the `nmcli` (Network Manager Command Line Interface) tool for network configuration. Instead of editing a YAML file, you use a terminal command to update the connection profile.

Set the MTU to 9000 on `eth1`:

```shell
sudo nmcli connection modify eth1 ethernet.mtu 9000
```

Apply the change:

```shell
sudo nmcli connection up eth1
```

NetworkManager saves this configuration in `/etc/NetworkManager/system-connections/`, so it persists across reboots.

## Verify

Confirm the `sysctl` values are active:

```shell
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```

Confirm the MTU is set to 9000 on `eth1`:

```shell
ip link show eth1 | grep mtu
```

## Automate with User Data

You can automatically configure jumbo frames when you create a GPU Droplet by using [user data](/products/droplets/how-to/provide-user-data/). The following `cloud-config` file sets the MTU to 9000 on the VPC interface at boot:

```yaml
#cloud-config
runcmd:
  - ip link set dev eth1 mtu 9000
```

You can enter this `cloud-config` file on the GPU Droplet creation page in the Control Panel, pass it to the API with the `user_data` field, or pass it to the CLI with the `--user-data-file` flag.
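Beyond checking the interface flags, you can confirm that jumbo frames actually traverse the VPC by pinging another Droplet in the same VPC with the Don't Fragment bit set. This sketch assumes a hypothetical peer at private IP `10.10.0.2`; substitute a real peer address from your VPC. The payload size is the MTU minus the 20-byte IP header and 8-byte ICMP header:

```shell
# Largest ICMP payload that fits in one 9000-byte frame:
# 9000 (MTU) - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"

# Ping a peer Droplet in the same VPC with Don't Fragment set.
# Replace 10.10.0.2 with a real peer's private IP before running.
# If any hop along the path has a smaller MTU, the ping fails, so a
# successful reply confirms jumbo frames work end-to-end:
# ping -M do -c 3 -s "$PAYLOAD" 10.10.0.2
```

If the large ping fails while a default-size ping succeeds, check that both endpoints (and any intermediate configuration) use an MTU of 9000 on the VPC interface.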