How to Add Node Pools to a Cluster

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service. Deploy Kubernetes clusters with a fully managed control plane, high availability, autoscaling, and native integration with DigitalOcean Load Balancers and volumes. You can add node pools with shared or dedicated CPUs, or with NVIDIA H100 GPUs in single-GPU or 8-GPU configurations. DOKS clusters are compatible with standard Kubernetes toolchains and the DigitalOcean API and CLI.

A node pool is a group of nodes in a DOKS cluster with the same configuration.

All the worker nodes within a node pool have identical resources, but each node pool can have a different worker configuration. This lets you run different services on different node pools, where each pool has the RAM, CPU, and attached storage resources the service requires.
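
For example, you could give each service its own pool sized for its needs and steer pods to it with a node pool label. The following sketch uses doctl (set up in the next section); the pool names, plan slugs, and label are illustrative:

    # Hypothetical pools sized for different services; adjust slugs to your needs.
    doctl kubernetes cluster node-pool create example-cluster \
        --name api-pool --size c-4 --count 3 --label service=api
    doctl kubernetes cluster node-pool create example-cluster \
        --name cache-pool --size m-2vcpu-16gb --count 2 --label service=cache

Pods can then target a specific pool with a nodeSelector that matches the pool's label.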

You can create and modify node pools at any time. Worker nodes are automatically deleted and recreated when needed, and you can manually recycle worker nodes. Nodes inherit the node pool's naming scheme when you first create the pool; renaming the node pool later does not rename existing nodes. Nodes only pick up the new naming scheme when they are recreated, such as when you recycle them or resize the node pool.
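
For example, you can rename a pool with doctl; the cluster and pool names below are placeholders, and existing nodes keep their old names until they are recreated:

    # Rename a node pool; node names only change when nodes are recreated.
    doctl kubernetes cluster node-pool update example-cluster example-pool \
        --name renamed-pool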

You can add custom tags to a cluster and its node pools. DOKS removes any custom tags you add directly to individual worker nodes in a node pool (for example, from the Droplets page) to maintain consistency between the node pool and its worker nodes.
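
To tag every worker node in a pool, set the tag on the pool itself instead of on individual Droplets. A minimal sketch, assuming the --tag flag on node-pool create; the names and tag are illustrative:

    # Tags set at the pool level propagate to the pool's worker nodes.
    doctl kubernetes cluster node-pool create example-cluster \
        --name tagged-pool --size s-1vcpu-2gb --count 3 --tag env:staging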

Add a Node Pool to a Cluster Using Automation

How to Add a Node Pool to a Kubernetes Cluster Using the DigitalOcean CLI
  1. Install doctl, the DigitalOcean command-line tool.

  2. Create a personal access token and save it for use with doctl.

  3. Use the token to grant doctl access to your DigitalOcean account.

              doctl auth init
              
  4. Finally, run doctl kubernetes cluster node-pool create. Basic usage looks like this, but you can read the usage docs for more details:

                doctl kubernetes cluster node-pool create <cluster-id|cluster-name> [flags]
              

    The following example creates a node pool named example-pool in a cluster named example-cluster and applies two taints to its nodes:

                  doctl kubernetes cluster node-pool create example-cluster --name example-pool --size s-1vcpu-2gb --count 3 --taint "key1=value1:NoSchedule" --taint "key2:NoExecute"
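
After the command completes, you can confirm that the new pool and its nodes exist. For example:

    # List the cluster's node pools to verify the new pool was created.
    doctl kubernetes cluster node-pool list example-cluster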
                
How to Add a Node Pool to a Kubernetes Cluster Using the DigitalOcean API
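
To add a node pool using the API, send a POST request to the cluster's node_pools endpoint (shown in full in the GPU example later on this page). The following sketch creates a three-node pool; it assumes DIGITALOCEAN_TOKEN holds a valid personal access token and that you substitute your cluster's ID for {cluster_id}:

    curl --location --request POST 'https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
    --data '{
        "size": "s-1vcpu-2gb",
        "count": 3,
        "name": "example-pool"
    }'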

Add a GPU Worker Node to a Cluster

You can also add a GPU node pool to an existing cluster running version 1.30.4-do.0, 1.29.8-do.0, 1.28.13-do.0, or later.
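
To check which version your cluster is currently running, you can inspect the cluster's details. For example:

    # Show cluster details, including its Kubernetes version.
    doctl kubernetes cluster get example-cluster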

Note
In rare cases, it can take several hours for a GPU Droplet to provision. If you have an unusually long creation time, open a support ticket.
How to Add a GPU Worker Node to a Cluster Using the DigitalOcean CLI

To add GPU worker nodes to an existing cluster, run doctl kubernetes cluster node-pool create and specify a GPU machine size. The following example adds a node pool of four GPU worker nodes, each in single GPU configuration with 80 GB of GPU memory, to a cluster named gpu-cluster:

doctl kubernetes cluster node-pool create gpu-cluster --name gpu-worker-pool-1 --size gpu-h100x1-80gb --count 4

We automatically apply the nvidia.com/gpu:NoSchedule taint to all GPU nodes. You do not need to apply this taint manually.
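
You can confirm the taint is present with kubectl; the node name below is a placeholder:

    # Inspect a GPU node and check the Taints field in the output.
    kubectl describe node <gpu-node-name> | grep -i taints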

How to Add a GPU Worker Node to a Cluster Using the DigitalOcean API

To add GPU worker nodes to an existing cluster, send a POST request to https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools. The following example adds a node pool of four GPU worker nodes, each in single GPU configuration with 80 GB of GPU memory, to an existing cluster specified by its cluster ID:

curl --location --request POST 'https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
--data '{
    "size": "gpu-h100x1-80gb",
    "count": 4,
    "name": "new-gpu-worker-pool"
}'

We automatically apply the nvidia.com/gpu:NoSchedule taint to all GPU nodes. You do not need to apply this taint manually.

To run GPU workloads after you create a cluster, use the GPU node-specific labels and taint in your workload specifications so that matching pods are scheduled onto GPU nodes. You can use a pod spec similar to the one below for your actual workloads:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  restartPolicy: Never
  nodeSelector:
    doks.digitalocean.com/gpu-brand: nvidia
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule

The above spec shows how to create a pod that runs NVIDIA’s CUDA image and uses the labels and taint for GPU worker nodes.
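
To try it out, save the spec to a file, apply it, and confirm the pod landed on a GPU node; the filename is a placeholder:

    # Create the pod, see which node it was scheduled on, and read its output.
    kubectl apply -f gpu-workload.yaml
    kubectl get pod gpu-workload -o wide
    kubectl logs gpu-workload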

You can use the cluster autoscaler to scale the GPU node pool down to 1 or use the DigitalOcean CLI or API to manually scale the node pool down to 0. For example:

  doctl kubernetes cluster node-pool update <your-cluster-id> <your-nodepool-id> --count 0 
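
To let the autoscaler manage the pool's size instead of fixing it, you can enable autoscaling with minimum and maximum node counts. For example:

    # Let the cluster autoscaler manage the pool between 1 and 3 nodes.
    doctl kubernetes cluster node-pool update <your-cluster-id> <your-nodepool-id> \
        --auto-scale --min-nodes 1 --max-nodes 3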

Add a Node Pool to a Cluster Using the Control Panel

To add additional node pools to an existing cluster, open the cluster’s More menu and select View Nodes. Click Add Node Pool. On the Add node pool(s) page, specify the following for the node pool:

  • Node pool name: Choose a name for the node pool. Nodes inside this pool inherit this naming scheme when they are created. If you rename the node pool later, the nodes only inherit the new naming scheme when they are recreated (when you recycle the nodes or resize the node pool).

  • Choose the machine type:

    • Shared CPUs are built on Basic Droplet plans.

    • Dedicated CPUs are built on General Purpose, CPU-Optimized (Regular Intel CPU or Premium Intel CPU), or Memory-Optimized Droplet plans.

    • GPUs are built on GPU Droplets, which are powered by NVIDIA H100 GPUs and come in single-GPU or 8-GPU configurations.

    All machine types, except the Basic nodes, are dedicated CPU Droplets.

    Select the node type from the Machine type (Droplet) dropdown list. The right plan depends heavily on your workload; see Choosing the Right Kubernetes Plan for more information.

  • Node plan: Choose the specific plan you want for your worker nodes. Each of the workers in a node pool has identical resources.

  • Select the Set node pool to autoscale option to enable autoscaling.

  • Nodes: For a fixed-size cluster, choose how many nodes to include in the node pool. By default, three worker nodes are selected.

  • Minimum nodes and Maximum nodes: For an autoscaling-enabled cluster, choose the minimum and maximum number of nodes the pool can scale between as load decreases or increases.

Click Add Node Pool(s) to add additional node pools.

Click Save to save your changes and provision your new nodes.