How to Add Node Pools to a Cluster

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service. Deploy Kubernetes clusters with a fully managed control plane, high availability, autoscaling, and native integration with DigitalOcean Load Balancers and volumes. You can add node pools with shared or dedicated CPUs, or with NVIDIA H100 GPUs in single-GPU or 8-GPU configurations. DOKS clusters are compatible with standard Kubernetes toolchains and the DigitalOcean API and CLI.


Add a Node Pool to a Cluster Using Automation

How to Add a Node Pool to a Kubernetes Cluster Using the DigitalOcean CLI
  1. Install doctl, the DigitalOcean command-line tool.

  2. Create a personal access token and save it for use with doctl.

  3. Use the token to grant doctl access to your DigitalOcean account.

              doctl auth init
              
  4. Finally, run doctl kubernetes cluster node-pool create. Basic usage looks like this, but you can read the usage docs for more details:

                doctl kubernetes cluster node-pool create <cluster-id|cluster-name> [flags]
              

    The following example creates a node pool named example-pool in a cluster named example-cluster:

                  doctl kubernetes cluster node-pool create example-cluster --name example-pool --size s-1vcpu-2gb --count 3 --taint "key1=value1:NoSchedule" --taint "key2:NoExecute"
                
How to Add a Node Pool to a Kubernetes Cluster Using the DigitalOcean API
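
To add a node pool with the API, send a POST request to the cluster's node_pools endpoint. The following is a minimal sketch that reuses the pool name, size, and count from the CLI example above and assumes $DIGITALOCEAN_TOKEN holds a valid personal access token:

curl --location --request POST 'https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
--data '{
    "size": "s-1vcpu-2gb",
    "count": 3,
    "name": "example-pool"
}'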

Add a GPU Worker Node to an Existing Cluster

You can also add a GPU node pool to an existing cluster on versions 1.30.4-do.0, 1.29.8-do.0, 1.28.13-do.0, and later.
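
If you are not sure which version your cluster is running, one way to check is with doctl before adding the pool. This is a quick sketch that uses a placeholder cluster name:

doctl kubernetes cluster get example-cluster --format Name,Version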

Note
In rare cases, it can take several hours for a GPU Droplet to provision. If you have an unusually long creation time, open a support ticket.

How to Add a GPU Worker Node to a Cluster Using the DigitalOcean CLI

To add GPU worker nodes to an existing cluster, run doctl kubernetes cluster node-pool create and specify a GPU machine size. The following example adds a node pool of 4 GPU worker nodes, each in a single GPU configuration with 80 GB of GPU memory, to a cluster named gpu-cluster:

doctl kubernetes cluster node-pool create gpu-cluster --name gpu-worker-pool-1 --size gpu-h100x1-80gb --count 4
How to Add a GPU Worker Node to a Cluster Using the DigitalOcean API

To add GPU worker nodes to an existing cluster, send a POST request to https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools. The following example adds a node pool of 4 GPU worker nodes, each in a single GPU configuration with 80 GB of GPU memory, to an existing cluster specified by its cluster ID:

curl --location --request POST 'https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
--data '{
    "size": "gpu-h100x1-80gb",
    "count": 4,
    "name": "new-gpu-worker-pool"
}'
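
If you need to look up the cluster ID for the request, you can list your clusters with doctl (assuming doctl is already authenticated):

doctl kubernetes cluster list --format ID,Name,Version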

To run GPU workloads after you create the node pool, use the GPU-specific node labels and taint in your workload specifications so that your pods are scheduled onto the GPU worker nodes. You can use a configuration similar to the pod spec shown below for your actual workloads:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  restartPolicy: Never
  nodeSelector:
    doks.digitalocean.com/gpu-brand: nvidia
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule

The above spec shows how to create a pod that runs NVIDIA’s CUDA image and uses the labels and taint for GPU worker nodes.

You can use the cluster autoscaler to scale the GPU node pool down to 1 node, or use the DigitalOcean CLI or API to manually scale the pool down to 0 nodes. For example:

  doctl kubernetes cluster node-pool update <your-cluster-id> <your-nodepool-id> --count 0 
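
The equivalent API call is a PUT request to the node pool's endpoint. The following is a minimal sketch; it assumes $DIGITALOCEAN_TOKEN holds a valid token, uses placeholder cluster and node pool IDs, and includes the pool's existing name alongside the new count in the update payload:

curl --location --request PUT 'https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools/{node_pool_id}' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
--data '{
    "name": "gpu-worker-pool-1",
    "count": 0
}'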

Add a Node Pool to a Cluster Using the Control Panel

To add additional node pools to an existing cluster:

  1. Open the cluster’s More menu and select View Nodes.
  2. Click Add Node Pool.
  3. Select the name, type, size, and number of Droplets for the pool. To create more than one pool, click Add Node Pool(s) again.
  4. Click Save to save your changes and provision your new nodes.
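
After the new nodes finish provisioning, you can confirm that they joined the cluster with kubectl, assuming your kubeconfig already points at this cluster:

kubectl get nodes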