Droplet Features

Validated on 14 Aug 2025 • Last edited on 25 Aug 2025

DigitalOcean Droplets are Linux-based virtual machines (VMs) that run on top of virtualized hardware. Each Droplet you create is a new server you can use, either standalone or as part of a larger, cloud-based infrastructure.

Droplet Plans

The Droplet plan you choose determines the amount of resources (like CPU, RAM, disk storage, and network bandwidth) allocated to your Droplet. You can choose shared or dedicated CPUs.
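When creating a Droplet programmatically, the plan is selected by a size slug. The sketch below builds a create-Droplet request for the DigitalOcean API v2 `POST /v2/droplets` endpoint; the size, image, and region slugs shown are examples, and you should confirm current slugs against the API before use.

```python
import json
import urllib.request

API_BASE = "https://api.digitalocean.com/v2"

def build_droplet_request(token, name, size_slug, image_slug, region):
    """Build an authenticated POST request for creating a Droplet.

    The size slug (e.g. "s-1vcpu-1gb" for a Basic shared-CPU plan)
    selects the Droplet plan and therefore the vCPU/RAM allocation.
    """
    payload = {
        "name": name,
        "region": region,
        "size": size_slug,
        "image": image_slug,
    }
    return urllib.request.Request(
        f"{API_BASE}/droplets",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: a 1 vCPU / 1 GB Basic Droplet running Ubuntu in NYC3.
req = build_droplet_request("YOUR_TOKEN", "web-1", "s-1vcpu-1gb",
                            "ubuntu-24-04-x64", "nyc3")
# urllib.request.urlopen(req) would send it; omitted here.
```

Sending the request returns the new Droplet's metadata, including its ID and assigned IP addresses.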

CPU Droplets

We offer the following CPU Droplet plan types:

| Droplet Plan | CPU | vCPUs | Memory |
| --- | --- | --- | --- |
| Basic | Shared | 1 - 8 | 1 - 32 GB RAM |
| General Purpose | Dedicated | 2 - 48 | 8 - 240 GB RAM (4 GB RAM per vCPU) |
| CPU-Optimized | Dedicated | 2 - 48 | 4 - 120 GB RAM (2 GB RAM per vCPU) |
| Memory-Optimized | Dedicated | 2 - 32 | 16 - 384 GB RAM (8 GB RAM per vCPU) |
| Storage-Optimized | Dedicated | 2 - 32 | 16 - 384 GB RAM (8 GB RAM per vCPU, 146 - 225 GB SSD per vCPU) |

CPU Droplets can have Regular CPUs or Premium CPUs. For Premium CPUs, you can choose between Intel and AMD. Droplets with Premium CPUs are guaranteed to use one of the two latest CPU generations we offer; they also use NVMe SSDs and have higher network throughput.

Choosing the Right CPU Droplet Plan

We provide in-depth comparisons of available CPU Droplet plans, including their hardware and software, an explanation of shared and dedicated CPU plans, and guidance on making a data-driven decision about which plan is best for your use case.

GPU Droplets

We offer GPU Droplets with the following hardware configurations:

| AMD GPU | Slug | GPU Memory | Droplet Memory (NVMe) | Droplet vCPUs | Boot Disk | Scratch Disk | Transfer Allowance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MI300X | gpu-mi300x1-192gb | 192 GB | 240 GiB | 20 | 720 GB | 5 TiB | 15,000 GiB |
| MI300X (8x) | gpu-mi300x8-1536gb | 1,536 GB | 1,920 GiB | 160 | 2,046 GB | 40 TiB | 60,000 GiB |
| MI325X (8x) | By contract | 2,048 GB | 1,310 GiB | 160 | 720 GiB | 40 TiB | 60,000 GiB |

| NVIDIA GPU | Slug | GPU Memory | Droplet Memory (NVMe) | Droplet vCPUs | Boot Disk | Scratch Disk | Transfer Allowance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H100 | gpu-h100x1-80gb | 80 GB | 240 GiB | 20 | 720 GiB | 5 TiB | 15 TB |
| H100 (8x) | gpu-h100x8-640gb | 640 GB | 1,920 GiB | 160 | 2,046 GiB | 40 TiB | 60 TB |
| L40S | gpu-l40sx1-48gb | 48 GB | 64 GiB | 8 | 500 GiB | None | 10 TB |
| RTX 4000 | gpu-4000adax1-20gb | 20 GB | 32 GiB | 8 | 500 GiB | None | 10 TB |
| RTX 6000 | gpu-6000adax1-48gb | 48 GB | 64 GiB | 8 | 500 GiB | None | 10 TB |
| H200 | gpu-h200x1-141gb | 141 GB | 240 GiB | 24 | 720 GiB | 5 TiB | 15 TB |
| H200 (8x) | gpu-h200x8-1128gb | 1,128 GB | 1,920 GiB | 192 | 2,046 GiB | 40 TiB | 60 TB |

All GPU Droplets have a maximum bandwidth of 10 Gbps public and 25 Gbps private.

Like CPU Droplets, all GPU Droplets have a boot disk, which is a local, persistent disk on the Droplet to store data for software like the operating system and ML frameworks. Additionally, some GPU Droplets have a scratch disk, a local, non-persistent disk to store data for staging purposes, like inference and training. Non-GPU Droplets do not have a scratch disk.

Images

Linux Images

We offer a variety of Linux images you can use to deploy Droplets. You can select these images when you create a Droplet from the control panel or use the image IDs or slugs in API requests and CLI commands to create Droplets.

You can view the list of available Linux images to see the current distributions and versions we offer, as well as the slug and image ID of each image. You can also retrieve this list with the API's /v2/images endpoint or with doctl compute image list-distribution --public.
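As a sketch of working with that endpoint, the function below extracts distribution image slugs from a /v2/images response. The response shape assumed here (an "images" array whose entries carry "id", "slug", and "distribution" fields) follows the DigitalOcean API v2 reference, and the sample payload is illustrative, not live data.

```python
def list_distribution_slugs(images_response):
    """Extract image slugs from a parsed /v2/images API response.

    Entries without a slug (e.g. custom images) are skipped.
    """
    return sorted(
        img["slug"]
        for img in images_response.get("images", [])
        if img.get("slug")
    )

# A trimmed sample payload for illustration:
sample = {
    "images": [
        {"id": 101, "slug": "ubuntu-24-04-x64", "distribution": "Ubuntu"},
        {"id": 102, "slug": "debian-12-x64", "distribution": "Debian"},
        {"id": 103, "slug": None, "distribution": "Fedora"},
    ]
}
print(list_distribution_slugs(sample))  # → ['debian-12-x64', 'ubuntu-24-04-x64']
```

The same slugs are what you pass to the API or doctl when creating a Droplet from a stock distribution image.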

AI/ML-Ready Images

We provide AI/ML-ready images for AMD and NVIDIA GPU Droplets which have the necessary drivers and software preinstalled to use the GPUs.

Learn more about AI/ML-ready images.

Inference-Optimized Image

Our inference-optimized image for NVIDIA GPU Droplets is built for LLM setup and deployment. It includes Docker and vLLM, and has built-in support for:

  • Hugging Face model downloads
  • Multi-model concurrency: run one, two, or four models concurrently, with customizable tensor parallelism settings to optimize hardware utilization
  • Speculative decoding, including the use of draft models
  • Special handling for FP8 quantization for efficient, low-precision inference
  • Prompt caching

It supports the following models:

H100x8, single model:
  • meta-llama/Llama-3.1-8B-Instruct
  • meta-llama/Llama-3.3-70B-Instruct
  • meta-llama/Llama-4-Scout-17B-16E-Instruct
  • deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  • Any custom model available on Hugging Face

H100x8, two concurrent models:
  • meta-llama/Llama-3.1-8B-Instruct
  • meta-llama/Llama-3.3-70B-Instruct
  • deepseek-ai/DeepSeek-R1-Distill-Llama-70B

H100x8, four concurrent models:
  • meta-llama/Llama-3.1-8B-Instruct
  • meta-llama/Llama-3.3-70B-Instruct
  • deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  • meta-llama/Llama-3.3-70B-Instruct-FP8-Speculative-Decoding

H100x1:
  • meta-llama/Llama-3.1-8B
  • meta-llama/Llama-3.1-8B-FP8
  • mistralai/Mistral-Nemo-Instruct-2407
  • mistralai/Mistral-Nemo-Instruct-2407-FP8

RTX 4000:
  • meta-llama/Llama-3.1-8B-FP8

L40S:
  • meta-llama/Llama-3.1-8B
  • meta-llama/Llama-3.1-8B-FP8
  • mistralai/Mistral-Nemo-Instruct-2407
  • mistralai/Mistral-Nemo-Instruct-2407-FP8

RTX 6000:
  • meta-llama/Llama-3.1-8B
  • meta-llama/Llama-3.1-8B-FP8
  • mistralai/Mistral-Nemo-Instruct-2407
  • mistralai/Mistral-Nemo-Instruct-2407-FP8
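Because vLLM exposes an OpenAI-compatible HTTP API, a Droplet running this image can typically be queried with a standard chat completion request. The sketch below only builds the request payload; the endpoint path and port (http://<droplet-ip>:8000/v1/chat/completions) are assumptions based on vLLM's defaults, not something this image is documented here to guarantee.

```python
import json

def build_chat_payload(model, prompt, max_tokens=256):
    """Build an OpenAI-compatible chat completion payload.

    The model name must match one of the models the image serves,
    e.g. "meta-llama/Llama-3.1-8B-Instruct" from the list above.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_payload("meta-llama/Llama-3.1-8B-Instruct",
                             "Summarize what a scratch disk is.")
body = json.dumps(payload)  # POST this to the server's chat endpoint
```

Any OpenAI-compatible client library should also work by pointing its base URL at the Droplet.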

Learn more about our inference-optimized image.

Autoscale Pools

Droplet autoscale pools enable automatic horizontal scaling for a pool of Droplets based on resource utilization or a fixed size.
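To illustrate the utilization-based mode, the sketch below applies classic target-tracking scaling math: grow or shrink the pool so observed utilization approaches the target, clamped to the pool's size limits. This is an illustration of the concept only, not DigitalOcean's actual autoscaler implementation.

```python
import math

def desired_pool_size(current, observed_util, target_util,
                      min_size, max_size):
    """Target-tracking scale calculation (illustrative only).

    current:       number of Droplets in the pool now
    observed_util: average utilization across the pool (0.0 - 1.0)
    target_util:   utilization the pool should settle at (0.0 - 1.0)
    """
    if observed_util <= 0:
        # No load observed: keep the current size within bounds.
        return max(min_size, min(current, max_size))
    desired = math.ceil(current * observed_util / target_util)
    return max(min_size, min(desired, max_size))

# 4 Droplets averaging 90% CPU with a 60% target → scale up to 6.
print(desired_pool_size(4, 0.90, 0.60, min_size=2, max_size=10))  # → 6
```

The fixed-size mode is simpler: the pool is continuously reconciled to a constant Droplet count.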

Integration with Other DigitalOcean Resources

Droplets integrate natively with other DigitalOcean products and features:

  • Tags are custom labels you apply to Droplets and other resources. They have multiple uses: filtering, automatic inclusion in firewall rules and load balancer backend pools, and acting on multiple resources with a single API call.

  • DigitalOcean Reserved IPs are additional static IPv4 and IPv6 addresses you can use to access a Droplet without replacing or changing the Droplet’s original public IP addresses.

  • DigitalOcean Volumes Block Storage provides additional storage (in units called volumes) for your Droplets. You can move volumes between Droplets in the same region and increase a volume's size without powering down the Droplet it's attached to.

    Volumes are most useful when you need more storage space but don’t need the additional processing power or memory that a larger Droplet would provide.

  • DigitalOcean Cloud Firewalls are a free, network-based, stateful firewall service for DigitalOcean Droplets. They block all traffic that isn’t expressly permitted by a rule.

  • DigitalOcean Load Balancers are a fully managed, highly available load balancing service that distributes traffic to groups of Droplets.
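As an example of tag-based bulk operations, the helper below builds the URL for acting on every Droplet carrying a given tag in one API call; the tag_name query parameter on /v2/droplets follows the DigitalOcean API v2 reference (a DELETE request to this URL, for instance, removes all matching Droplets), but verify against the current docs before using it destructively.

```python
from urllib.parse import urlencode

API_BASE = "https://api.digitalocean.com/v2"

def droplets_by_tag_url(tag_name):
    """URL targeting all Droplets with a given tag in a single call.

    urlencode handles tags containing characters that need escaping.
    """
    return f"{API_BASE}/droplets?{urlencode({'tag_name': tag_name})}"

print(droplets_by_tag_url("staging"))
# → https://api.digitalocean.com/v2/droplets?tag_name=staging
```

The same URL with a GET request lists the tagged Droplets instead of deleting them.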
