Recommended Drivers and Software for DigitalOcean Gradient™ AI GPU Droplets

Validated on 13 Aug 2025 • Last edited on 25 Aug 2025

DigitalOcean Droplets are Linux-based virtual machines (VMs) that run on top of virtualized hardware. Each Droplet you create is a new server you can use, either standalone or as part of a larger, cloud-based infrastructure.

We strongly recommend creating GPU Droplets using our AI/ML-ready images, which have drivers and software preinstalled and configured to help you get started.

GPU Droplets also work with other Droplet images (stock Linux images, backups, snapshots, and so on) but you need to manually install drivers and other software to use your Droplet’s GPUs.

If you have additional software you want to use across multiple GPU Droplets, one option is to start with our AI/ML-ready image, install any additional software, and then take a snapshot of the Droplet. You can then use the snapshot as the base image to create additional GPU Droplets.
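
For example, with doctl you can snapshot a configured Droplet and then use that snapshot as the base image for new GPU Droplets. The Droplet ID, snapshot name, size slug, region, and SSH key below are placeholders; run doctl compute size list and doctl compute region list to confirm the GPU plans and regions available to your account:

    # Take a snapshot of the configured Droplet (power it off first for a consistent image).
    doctl compute droplet-action snapshot <droplet-id> --snapshot-name my-gpu-base --wait

    # Find the snapshot's image ID.
    doctl compute image list-user

    # Create a new GPU Droplet from the snapshot (size and region are example values).
    doctl compute droplet create my-gpu-worker \
        --image <snapshot-id> \
        --size gpu-h100x1-80gb \
        --region nyc2 \
        --ssh-keys <ssh-key-fingerprint>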

AI/ML-Ready Image

For AMD GPU Droplets

For GPU Droplets with AMD GPUs, our AI/ML-ready image is based on Ubuntu 24.04 and is configured following the ROCm quick start installation guide. It includes the following packages (a manual setup sketch follows this list):

  • python3-setuptools
  • python3-wheel
  • rocm version 6.4.0
  • amdgpu-dkms version 6.12.12
  • linux-generic

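If you instead start from a stock Ubuntu 24.04 image, a minimal manual setup roughly mirrors AMD's ROCm quick start guide. The installer package URL and version below are assumptions based on ROCm 6.4.0 on Ubuntu 24.04; confirm the current values in AMD's documentation before running this:

    # Kernel headers and Python build tooling used by the ROCm quick start.
    sudo apt update
    sudo apt install -y python3-setuptools python3-wheel "linux-headers-$(uname -r)"

    # Add your user to the render and video groups for GPU access.
    sudo usermod -a -G render,video $LOGNAME

    # Install the amdgpu-install helper, then the kernel driver and ROCm stack.
    # (The package version below is an assumption for ROCm 6.4.0 on Ubuntu 24.04 "noble".)
    wget https://repo.radeon.com/amdgpu-install/6.4/ubuntu/noble/amdgpu-install_6.4.60400-1_all.deb
    sudo apt install -y ./amdgpu-install_6.4.60400-1_all.deb
    sudo apt update
    sudo apt install -y amdgpu-dkms rocm

    # Reboot, then verify the GPUs are visible.
    sudo reboot
    # After reboot:
    rocm-smi
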
You can choose this image when you create a GPU Droplet from the control panel, or specify it by its slug, gpu-amd-base, when you create a GPU Droplet using the API or doctl.
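
For example, with doctl the image slug is passed directly to the create command. The size and region slugs below are examples and may not match your plan; check doctl compute size list and doctl compute region list for current values:

    doctl compute droplet create my-amd-gpu-droplet \
        --image gpu-amd-base \
        --size gpu-mi300x1-192gb \
        --region atl1 \
        --ssh-keys <ssh-key-fingerprint>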

For NVIDIA GPU Droplets

For GPU Droplets with NVIDIA GPUs, our AI/ML-ready image is based on Ubuntu 22.04 and includes:

  • nvidia-container-toolkit=1.17.8-1
  • cuda-keyring_1.1-1
  • cuda-drivers-575
  • cuda-toolkit-12-9
  • bzip2 (8 GPU Droplets only)
  • MLNX_OFED_LINUX-23.10-4.0.9.1-ubuntu22.04-x86_64 (8 GPU Droplets only)
  • nvidia-fabricmanager-575 (8 GPU Droplets only)

You can choose this image when you create a GPU Droplet from the control panel or specify it by slug when you create a GPU Droplet using the API or doctl. For all single GPU Droplets, use gpu-h100x1-base (even for single-GPU plans using GPUs other than H100s). For 8 GPU Droplets, use gpu-h100x8-base.
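
For example, a direct API call passes the same slug in the image field. The size slug, region, and SSH key ID below are examples; adjust them for your plan and account:

    curl -X POST "https://api.digitalocean.com/v2/droplets" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
        -d '{
              "name": "ml-node-1",
              "region": "nyc2",
              "size": "gpu-h100x1-80gb",
              "image": "gpu-h100x1-base",
              "ssh_keys": ["<ssh-key-id>"]
            }'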

For manual setup on Debian-based systems like Ubuntu, you can install the CUDA drivers, the CUDA toolkit, and the NVIDIA Container Toolkit with APT. On 8 GPU Droplets, our image also installs additional software that requires more configuration, as noted above; we recommend following NVIDIA’s installation documentation for Fabric Manager and NVIDIA’s documentation for Mellanox OFED.
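
A minimal sketch of that manual install on Ubuntu 22.04, following NVIDIA's documented APT repository setup (the version pins match the packages listed above; the Fabric Manager step applies to 8 GPU Droplets only):

    # Add NVIDIA's CUDA repository via the cuda-keyring package.
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
    sudo dpkg -i cuda-keyring_1.1-1_all.deb
    sudo apt-get update

    # Install the driver and CUDA toolkit versions used by our AI/ML-ready image.
    sudo apt-get install -y cuda-drivers-575 cuda-toolkit-12-9

    # Add NVIDIA's container toolkit repository and install the toolkit for Docker GPU support.
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
        sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit

    # 8 GPU Droplets only: install and start Fabric Manager for NVLink/NVSwitch.
    sudo apt-get install -y nvidia-fabricmanager-575
    sudo systemctl enable --now nvidia-fabricmanager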

Inference-Optimized Image for NVIDIA GPU Droplets

Our inference-optimized image is designed for LLM setup and deployment. It is based on Ubuntu 24.04 and includes:

  • CUDA 12.9
  • NVIDIA driver version 575.51.03
  • NVIDIA Fabric Manager (8 GPU Droplets only)
  • Docker
  • vLLM (vllm-openai container v0.9.0)

You can choose this image when you create a GPU Droplet from the control panel.

After you create a GPU Droplet with this image, SSH into the Droplet as the root user and run the included run_model.sh script. The script walks you through selecting and configuring the models you want to use.
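
For example, assuming the Droplet's public IP is in DROPLET_IP and the script is in root's home directory (its exact path may differ), the workflow looks roughly like this. Once a model is serving, the vLLM container exposes an OpenAI-compatible API, conventionally on port 8000:

    # Connect to the Droplet and start the interactive model setup.
    ssh root@$DROPLET_IP
    ./run_model.sh

    # From the Droplet, verify the vLLM container is running and test the
    # OpenAI-compatible endpoint (port 8000 is vLLM's default and may differ here).
    docker ps
    curl http://localhost:8000/v1/models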
