How to Create Kubernetes Clusters

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service. Deploy Kubernetes clusters with a fully managed control plane, high availability, autoscaling, and native integration with DigitalOcean Load Balancers and volumes. You can add node pools built on shared CPUs, dedicated CPUs, or NVIDIA H100 GPUs in single-GPU or 8-GPU configurations. DOKS clusters are compatible with standard Kubernetes toolchains and the DigitalOcean API and CLI.


Create a Cluster Using Automation

The API and CLI require region, node size, and Kubernetes version values for many operations. You can retrieve a list of the available values using the /v2/kubernetes/options endpoint or the doctl kubernetes options command.
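
For example, you can list the currently available region, node size, and Kubernetes version slugs with doctl before creating a cluster:

doctl kubernetes options regions
doctl kubernetes options sizes
doctl kubernetes options versions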

How to Create a Kubernetes Cluster Using the DigitalOcean CLI
  1. Install doctl, the DigitalOcean command-line tool.

  2. Create a personal access token and save it for use with doctl.

  3. Use the token to grant doctl access to your DigitalOcean account.

              doctl auth init
              
  4. Finally, run doctl kubernetes cluster create. Basic usage looks like this, but you can read the usage docs for more details:

                doctl kubernetes cluster create <name> [flags]
              

    The following example creates a cluster named example-cluster in the nyc1 region with a node pool, using Kubernetes version 1.28.2-do.0:

                  doctl kubernetes cluster create example-cluster --region nyc1 --version 1.28.2-do.0 --maintenance-window saturday=02:00 --node-pool "name=example-pool;size=s-2vcpu-2gb;count=5;tag=web;tag=frontend;label=key1=value1;label=key2=label2;taint=key1=value1:NoSchedule;taint=key2:NoExecute"
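
    After creation finishes, you can confirm the cluster's status and look up its ID with doctl. For example, using the cluster name from the example above:

                  doctl kubernetes cluster get example-cluster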
                
How to Create a Kubernetes Cluster Using the DigitalOcean API
  1. Create a personal access token and save it for use with the API.

  2. Send a POST request to https://api.digitalocean.com/v2/kubernetes/clusters

    cURL

    Using cURL:

    curl -X POST \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
      -d '{"name": "prod-cluster-01","region": "nyc1","version": "1.14.1-do.4","tags": ["production","web-team"],"node_pools": [{"size": "s-1vcpu-2gb","count": 3,"name": "frontend-pool","tags": ["frontend"],"labels": {"service": "frontend", "priority": "high"}},{"size": "c-4","count": 2,"name": "backend-pool"}]}' \
      "https://api.digitalocean.com/v2/kubernetes/clusters"
                  

    Go

    Using Godo, the official DigitalOcean V2 API client for Go:

    package main

    import (
        "context"
        "fmt"
        "os"

        "github.com/digitalocean/godo"
    )

    func main() {
        token := os.Getenv("DIGITALOCEAN_TOKEN")

        client := godo.NewFromToken(token)
        ctx := context.TODO()

        createRequest := &godo.KubernetesClusterCreateRequest{
            Name:        "prod-cluster-01",
            RegionSlug:  "nyc1",
            VersionSlug: "1.14.1-do.4",
            Tags:        []string{"production", "web-team"},
            NodePools: []*godo.KubernetesNodePoolCreateRequest{
                {
                    Name:   "frontend-pool",
                    Size:   "s-2vcpu-2gb",
                    Count:  3,
                    Tags:   []string{"frontend"},
                    Labels: map[string]string{"service": "frontend", "priority": "high"},
                },
                {
                    Name:  "backend-pool",
                    Size:  "c-4",
                    Count: 2,
                },
            },
        }

        cluster, _, err := client.Kubernetes.Create(ctx, createRequest)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(cluster.ID)
    }
                  

    Ruby

    Using DropletKit, the official DigitalOcean V2 API client for Ruby:

    require 'droplet_kit'
    token = ENV['DIGITALOCEAN_TOKEN']
    client = DropletKit::Client.new(access_token: token)
    
    cluster = DropletKit::KubernetesCluster.new(
      name: 'prod-cluster-01',
      region: 'nyc1',
      version: '1.14.1-do.4',
      tags: ['production', 'web-team'],
      node_pools: [
        {
          name: 'frontend-pool',
          size: 's-2vcpu-2gb',
          count: 3,
          tags: ['frontend'],
          labels: {service: 'frontend', priority: 'high'}
        },
        {
          name: 'backend-pool',
          size: 'c-4',
          count: 2
        }
      ]
    )
    
    client.kubernetes_clusters.create(cluster)
                  

    Python

    Using pydo, the official DigitalOcean V2 API client for Python:

    import os
    from pydo import Client
    
    client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))
    
    req = {
      "name": "prod-cluster-01",
      "region": "nyc1",
      "version": "1.18.6-do.0",
      "node_pools": [
        {
          "size": "s-1vcpu-2gb",
          "count": 3,
          "name": "worker-pool"
        }
      ]
    }
    
    resp = client.kubernetes.create_cluster(body=req)
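
    Cluster creation through the API is asynchronous. To check on the new cluster after the create call returns, you can retrieve it by ID and inspect the status field in the response. For example, with cURL, where CLUSTER_ID stands for the id value returned in the create response:

    curl -X GET \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
      "https://api.digitalocean.com/v2/kubernetes/clusters/$CLUSTER_ID"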
                  

Create a Cluster with GPU Node Pools

You can also create a new cluster with GPU node pools.

Note
In rare cases, it can take several hours for a GPU Droplet to provision. If you have an unusually long creation time, open a support ticket.
How to Create a Cluster with a GPU Worker Node Using the DigitalOcean CLI

To create a cluster with GPU worker nodes, run doctl kubernetes cluster create and specify a GPU machine type for the node pool. The following example creates a cluster with a node pool of three worker nodes, each in a single-GPU configuration with 80 GB of GPU memory:

doctl kubernetes cluster create gpu-cluster --region tor1 --version latest --node-pool "name=gpu-worker-pool;size=gpu-h100x1-80gb;count=3"
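
Once the cluster is running, you can verify the GPU node pool's size and node count with doctl:

doctl kubernetes cluster node-pool list gpu-cluster
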
How to Create a Cluster with a GPU Worker Node Using the DigitalOcean API

To create a cluster with a GPU worker node, send a POST request to https://api.digitalocean.com/v2/kubernetes/clusters with the following request body:

curl --location 'https://api.digitalocean.com/v2/kubernetes/clusters' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
--data '{
    "name": "gpu-cluster",
    "region": "tor1",
    "version": "latest",
    "node_pools": [
        {
            "size": "gpu-h100x1-80gb",
            "count": 3,
            "name": "gpu-worker-pool"
        }
    ]
}'

This creates a cluster with a node pool of three worker nodes, each in a single-GPU configuration with 80 GB of GPU memory.

To run GPU workloads after you create a cluster, use the GPU-specific node labels and taint in your workload specifications so that the scheduler places matching pods on the GPU nodes. You can base your actual workloads on a spec similar to the following pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  restartPolicy: Never
  nodeSelector:
    doks.digitalocean.com/gpu-brand: nvidia
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule

The above spec shows how to create a pod that runs NVIDIA's CUDA sample image and uses the node label and taint toleration for GPU worker nodes.
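
To try the spec out, save it to a file and submit it with kubectl, then read the sample workload's output from the pod logs. The file name gpu-workload.yaml below is only an example:

kubectl apply -f gpu-workload.yaml
kubectl get pod gpu-workload
kubectl logs gpu-workload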

You can use the cluster autoscaler to scale the GPU node pool down to 1 or use the DigitalOcean CLI or API to manually scale the node pool down to 0. For example:

  doctl kubernetes cluster node-pool update <your-cluster-id> <your-nodepool-id> --count 0 

Create a Cluster with VPC-native Networking (Early Availability)

You can also create a new cluster with VPC-native networking. This feature is in early availability and is only available on new clusters using Kubernetes 1.31+ created via the CLI or API.

To create a cluster with VPC-native networking, you need to provide two network subnets in CIDR notation (for example, 192.168.0.0/16), one for the cluster and another for services running on the cluster. The subnets must follow RFC 1918 guidelines and must not overlap with each other or with any VPCs or VPC-native clusters within the same team.

The chosen subnet sizes determine how many nodes can run in a cluster and how many services can be created, and they cannot be extended later. Every node is assigned a /25 network out of the cluster subnet, and you can create one service for each IP address in the services subnet.

We recommend using /16 for the cluster and /19 for services, which results in a cluster that can have up to 512 nodes and 8192 services. Avoid choosing anything smaller than /20 for the cluster (32 nodes) or /22 for services (1024 services).
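
These limits follow from the subnet arithmetic: a cluster subnet with prefix length N contains 2^(32-N) addresses and each node consumes a /25 block of 128 addresses, so the node capacity is 2^(25-N), while the service capacity is simply the number of addresses in the services subnet. As an illustration, in bash:

# Node capacity for a /16 cluster subnet: 2^(25-16) = 512 nodes
CLUSTER_PREFIX=16
echo $(( 2 ** (25 - CLUSTER_PREFIX) ))

# Service capacity for a /19 services subnet: 2^(32-19) = 8192 services
SERVICE_PREFIX=19
echo $(( 2 ** (32 - SERVICE_PREFIX) ))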

How to Find a Suitable Unused Network Range

Choosing suitable network ranges for a new VPC-native cluster is currently a manual process. You must compile a list of all VPC and VPC-native DOKS cluster networks in use on your team, then choose a non-overlapping RFC 1918 range.

To gather this information using the control panel, visit the VPC page, which lists the subnet ranges of all VPCs and VPC-native Kubernetes Clusters in your team.

To use doctl, first print the network ranges of all existing VPCs on the team:

doctl vpcs list

Next, list all clusters and their cluster_subnet and service_subnet ranges. This command requires jq to be installed:

doctl kubernetes cluster list -o json | jq -r ".[] | [.id, .cluster_subnet, .service_subnet] | @tsv"

The RFC 1918 ranges are:

  • 10.0.0.0 – 10.255.255.255
  • 172.16.0.0 – 172.31.255.255
  • 192.168.0.0 – 192.168.255.255

Choose any non-overlapping subnets in these ranges.

When you have determined your subnet ranges, use the doctl CLI or the API to create a cluster.

How to Create a Cluster with VPC-Native Networking Using the DigitalOcean CLI

To create a cluster with VPC-native networking, use doctl version v1.115.0 or higher to run doctl kubernetes cluster create, specifying a Kubernetes --version of 1.31 or higher and the --cluster-subnet and --service-subnet in CIDR notation. The following example creates a VPC-native cluster:

doctl kubernetes cluster create vpc-native-cluster --region nyc1 --version 1.31 --node-pool "name=example-pool;size=s-2vcpu-2gb;count=3" --cluster-subnet "192.168.0.0/16" --service-subnet "172.16.0.0/19" 

For more information, read the doctl kubernetes cluster create reference.

How to Create a Cluster with VPC-Native Networking Using the DigitalOcean API

To create a cluster with VPC-native networking using the API, send a POST request to https://api.digitalocean.com/v2/kubernetes/clusters with the following request body:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  -d '{"name":"vpc-cluster-01","region":"nyc1","version": "1.31",
    "node_pools": [{"size": "s-1vcpu-2gb","count": 3,"name": "worker-pool"}],
    "cluster_subnet": "192.168.0.0/20",
    "service_subnet": "192.168.16.0/24"}' \
  "https://api.digitalocean.com/v2/kubernetes/clusters"

For more information, read the API reference for creating DOKS clusters.

Create a Cluster Using the Control Panel

You can create a DigitalOcean Kubernetes Cluster at any time from the DigitalOcean Control Panel by opening the Create menu in the top right.

The create menu

In the Create menu, click Kubernetes to go to the Create a cluster page. On this page, you choose a Kubernetes version, datacenter region, and cluster capacity for your cluster and then create it.

Choose a datacenter region

Choose the region for your cluster. Your cluster’s control plane and worker nodes are located in the same region.

If you plan to use GPU worker nodes or volumes for persistent data storage, choose a region with GPU worker nodes support or volumes support. If you add a DigitalOcean Load Balancer to your deployment, it is automatically placed in the same region as the cluster.

VPC Network

In the VPC Network section, choose a VPC network for the cluster. You can choose one you’ve created or use your default network for the datacenter region. A VPC network provides an additional network interface that can only be accessed by other resources within the same VPC network, which keeps traffic between Droplets and other applicable resources from being routed outside the datacenter over the public internet.

Select a Kubernetes version

The latest patch version of each of the three most recent minor versions of Kubernetes is available for new cluster creation. The latest stable release is selected by default.

Choose cluster capacity

A DOKS cluster has one or more node pools. Each node pool consists of a group of identical worker nodes. Worker nodes are built on Droplets.

To create a cluster, specify the following for the node pool:

  • Node pool name: Choose a name for the node pool. Nodes inside the pool inherit this naming scheme when they are created. If you rename the node pool later, the nodes only inherit the new naming scheme when they are recreated (when you recycle the nodes or resize the node pool).

  • Choose the machine type:

    • Shared CPUs are built on Basic Droplet plans.

    • Dedicated CPUs are built on General Purpose, CPU-Optimized (Regular Intel CPU or Premium Intel CPU), or Memory-Optimized Droplet plans.

    • GPUs are built on GPU Droplets which are powered by NVIDIA’s H100 GPUs and can be in a single GPU or 8 GPU configuration.

    All machine types, except the Basic nodes, are dedicated CPU Droplets.

    Select the node type from the Machine type (Droplet) dropdown list. The right plan depends heavily on your workload. See Choosing the Right Kubernetes Plan for more information.

  • Node plan: Choose the specific plan you want for your worker nodes. Each of the workers in a node pool has identical resources.

  • Select the Set node pool to autoscale option to enable autoscaling.

  • Nodes: For a fixed-size cluster, choose how many nodes to include in the node pool. By default, three worker nodes are selected.

  • Minimum nodes and Maximum nodes: For an autoscaling-enabled cluster, choose the minimum and maximum number of nodes the pool can scale between as load decreases or increases.

To take advantage of different resource capacities, you can add additional node pools with the Add Additional Node Pool button and assign pods to the node pools with the appropriate scheduling constraints.
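
DOKS labels each worker node with the name of its node pool, so you can steer pods to a specific pool with a nodeSelector or node affinity on that label. As a sketch, the commands below assume the doks.digitalocean.com/node-pool label and a pool named example-pool; check the labels on your own nodes to confirm the exact key and value:

kubectl get nodes --show-labels
kubectl get nodes -l doks.digitalocean.com/node-pool=example-pool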

Note
One-node clusters are an inexpensive option to begin learning Kubernetes or for a development environment where resiliency isn’t a concern. However, a stand-alone Droplet with a container runtime or minikube generally gives you better performance than a one-node cluster for the same cost.

Select additional options

Then, you can optionally select the following settings.

DOKS HA control plane

Get extra reliability for critical workloads

DigitalOcean Kubernetes provides a high availability (HA) option that increases uptime and is backed by a 99.95% uptime SLA for the control plane. In the Get extra reliability for critical workloads section, select the Add high availability checkbox. This option creates multiple backup replicas of each control plane component and provides extra reliability for critical workloads.

Note
Once enabled, you cannot disable high availability.

Automate database management

To integrate your Kubernetes cluster with a DigitalOcean database, we recommend adding the free database operator during creation, now in beta. The database operator allows you to create new databases, automatically link them to your Kubernetes cluster, and manage them. For detailed information on using the operator, see its documentation on GitHub and our how-to guide.

While the operator is free, any databases you create with the operator are billed normally.

Finalize

Finalize the cluster settings. You can specify a name, assign a project, and optionally add tags to the cluster.

Name

By default, cluster names begin with k8s, followed by the version of Kubernetes, the datacenter region, and the cluster ID. You can customize the cluster name, which is also used in the tag.

Project

The new cluster belongs to your default project. You can assign the cluster to a different project.

You can also change the project after you create the cluster. Go to the Kubernetes page in the control panel. From the cluster’s More menu, select Move to and select the project you want to move the cluster to.

Associated resources, such as load balancers and volumes, also move when you move the cluster to a different project.

Due to a known issue, resources associated with a DOKS cluster – such as load balancers, volumes, and volume snapshots – belong to the default project upon creation, regardless of which project the cluster belongs to. You can associate these resources with the correct project by reconciling the cluster or moving the cluster to a different project.

Tags

Clusters automatically have three tags:

  • k8s
  • The specific cluster ID, like k8s:EXAMPLEc-3515-4a0c-91a3-2452eEXAMPLE
  • The resource type, for example, k8s:worker

You can also add custom tags to a cluster and its node pools in the cluster’s Overview and Resources pages. Any custom tags you add to worker nodes in a node pool (for example, from the Droplets page) are deleted to maintain consistency between the node pool and its worker nodes.

At the bottom of this section, you see the Total monthly cost for your cluster based on the resources you’ve chosen. When you create the cluster, billing begins for each resource (for example, worker nodes, volumes, load balancers) as it is created and ends as it is destroyed.

Create Cluster

When you have entered your settings, create the cluster by clicking the Create Cluster button. It can take several minutes for cluster creation to finish.

Once your cluster is provisioned, you can view cluster information such as the cluster configuration, cost, cluster ID, and control plane high availability indicator in the Overview tab.

Overview tab

To manage the cluster, use kubectl, the official Kubernetes command-line client. See How to Connect to a DigitalOcean Kubernetes Cluster to get set up.
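
For example, with doctl installed, you can download the cluster's credentials into your local kubeconfig and verify connectivity (replace example-cluster with your cluster's name or ID):

doctl kubernetes cluster kubeconfig save example-cluster
kubectl get nodes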