Validated on 26 Sep 2024 • Last edited on 20 Nov 2024
DigitalOcean Droplets are Linux-based virtual machines (VMs) that run on top of virtualized hardware. Each Droplet you create is a new server you can use, either standalone or as part of a larger, cloud-based infrastructure.
GPU Droplets feature NVIDIA H100 GPUs in single-GPU or 8-GPU configurations. They also come with two kinds of storage: a boot disk for persistent data and a scratch disk for non-persistent data. Learn more about GPU Droplet plans and features.
We provide an AI/ML-ready image for GPU Droplets that has drivers and software from NVIDIA preinstalled, as well as preconfigured 1-Click Models powered by Hugging Face. You can also create GPU Droplets with existing Droplet images, but you need to manually install drivers and other software to use the GPUs.
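Like other Droplets, GPU Droplets can be created programmatically through the DigitalOcean API by POSTing to the v2 Droplets endpoint with a GPU size slug. The sketch below builds such a request with Python's standard library; the `gpu-h100x1-80gb` size slug, the `nyc2` region, and especially the `gpu-h100x1-base` image slug are assumptions, so check the control panel or API for the slugs available to your account.

```python
import json
import urllib.request

API_URL = "https://api.digitalocean.com/v2/droplets"

def build_gpu_droplet_request(name, token,
                              size="gpu-h100x1-80gb",   # assumed single-H100 size slug
                              image="gpu-h100x1-base",  # hypothetical AI/ML-ready image slug
                              region="nyc2"):
    """Assemble the POST request that creates a GPU Droplet."""
    payload = {"name": name, "region": region, "size": size, "image": image}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_gpu_droplet_request("ml-box", token="YOUR_API_TOKEN")
    # Uncomment to actually create the Droplet (this incurs charges):
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

Building the request separately from sending it lets you inspect the payload before committing to billable resources.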
GPU Droplets vs Bare Metal GPUs
DigitalOcean Bare Metal GPUs and GPU Droplets both provide GPU-based compute resources tailored to AI/ML workloads, but they’re each suited for different use cases.
| GPU Droplets | Bare metal GPUs |
| --- | --- |
| **Virtual machines.** GPU Droplets have the convenience and ease of deployment that comes with managed infrastructure, but VM configuration is constrained by the hypervisor and shared OS layer. | **Physical servers.** Bare metal GPUs are physical servers without virtualization, so you can set up advanced orchestration layers, containers, operating systems, and other deep configuration directly on the hardware. |
| **Shared infrastructure.** GPU Droplets share physical resources, so there may be minor resource fluctuations that don't significantly impact tasks like fine-tuning and inferencing. | **Single-tenant hardware.** Bare metal GPUs are in isolated environments, which are best for use cases requiring full data isolation or highly consistent performance. |
| **On-demand instances with per-hour billing.** Pricing for GPU Droplets is flexible and low commitment, so GPU Droplets are best for variable usage or rapid scalability. | **Contract-based billing and provisioning.** Pricing for bare metal GPUs is more cost effective, but meant for long-term use with intensive, prolonged workloads that need stable performance. |
GPU Droplets are best for small- to medium-scale tasks, including:
- Fine-tuning (adjusting models with specific data sets)
- Inference (running predictions with high-speed responses for production applications)
- Moderate data processing (lightweight analytics or video processing that benefits from GPU acceleration but doesn't demand full hardware dedication)
Bare metal GPUs are best for advanced and custom workloads, including:
- Model training at scale (training foundational models and handling large datasets with optimal performance)
- Complex inference needs (running real-time inference for high-throughput applications)
- Custom orchestration and HPC (like Kubernetes clusters, multi-node setups, or high-frequency trading)
How to Use GPU Droplets
In general, you can manage GPU Droplets like non-GPU Droplets, but some features and requirements are specific to GPU Droplets:
Install the NVIDIA Data Center GPU Manager (DCGM) and DCGM Exporter to enable health monitoring, diagnostics, and process statistics for NVIDIA GPUs on GPU Droplets.
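Once DCGM Exporter is running, it serves GPU metrics in Prometheus exposition format, by default on port 9400. As a minimal sketch, the snippet below scrapes that endpoint and extracts per-GPU utilization; the `localhost:9400` address assumes the exporter's default configuration, and the parser is deliberately simplistic (it splits each sample on the last space, so it doesn't handle label values containing spaces).

```python
import urllib.request

def parse_dcgm_metrics(text):
    """Parse Prometheus exposition text into {metric_name: [(labels, value), ...]}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        name_labels, _, value = line.rpartition(" ")
        name, _, labels = name_labels.partition("{")
        metrics.setdefault(name, []).append((labels.rstrip("}"), float(value)))
    return metrics

def scrape(url="http://localhost:9400/metrics"):  # DCGM Exporter's default port
    with urllib.request.urlopen(url) as resp:
        return parse_dcgm_metrics(resp.read().decode())

if __name__ == "__main__":
    # DCGM_FI_DEV_GPU_UTIL is the exporter's GPU utilization gauge.
    for labels, util in scrape().get("DCGM_FI_DEV_GPU_UTIL", []):
        print(f"{labels}: {util}% GPU utilization")
```

The same parsed dictionary exposes other DCGM gauges, such as `DCGM_FI_DEV_FB_USED` for framebuffer memory in use.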
This Community tutorial explains how to set up the NVIDIA container toolkit, run Docker for GPU workloads, and install Miniconda to manage Python environments on GPU Droplets.
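With the NVIDIA container toolkit installed, Docker exposes GPUs to containers through the `--gpus` flag. A small sketch of launching such a container from Python follows; the `nvidia/cuda:12.4.1-base-ubuntu22.04` image tag is an assumption, so substitute whatever CUDA base image matches your driver version.

```python
import subprocess

def gpu_run_cmd(image, command, gpus="all"):
    """Build a `docker run` invocation that exposes GPUs to the container."""
    return ["docker", "run", "--rm", f"--gpus={gpus}", image] + command

if __name__ == "__main__":
    # Runs nvidia-smi inside a CUDA base image (assumed tag) to confirm the
    # GPUs are visible from a container. Requires Docker and the NVIDIA
    # container toolkit to be installed on the GPU Droplet.
    cmd = gpu_run_cmd("nvidia/cuda:12.4.1-base-ubuntu22.04", ["nvidia-smi"])
    subprocess.run(cmd, check=True)
```

Passing a device specifier instead of `all` (for example, `gpus="device=0"`) restricts the container to a subset of GPUs on 8-GPU configurations.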