# How to Add Node Pools to a Cluster

DigitalOcean Kubernetes (DOKS) is a Kubernetes service with a fully managed control plane, high availability, and autoscaling. DOKS integrates with standard Kubernetes toolchains and DigitalOcean’s load balancers, volumes, CPU and GPU Droplets, API, and CLI.

A node pool is a group of nodes in a DOKS cluster that share the same configuration. All worker nodes within a node pool have identical resources, but each node pool can have a different worker configuration. This lets you run different services on different node pools, where each pool has the RAM, CPU, and attached storage resources the service requires.

You can create and modify node pools at any time. Worker nodes are automatically deleted and recreated when needed, and you can manually recycle worker nodes. Nodes inherit the node pool’s naming scheme when you first create the pool; however, renaming a node pool does not rename its existing nodes. Nodes inherit the new naming scheme when you recycle them or when resizing the node pool creates new nodes.

**Note**: You can scale a fixed node pool down to 0 nodes as long as you have another fixed node pool with at least 1 node or a GPU node pool with 0 nodes.

You can add custom tags to a cluster and its node pools. DOKS deletes any custom tags you add to worker nodes in a node pool (for example, from the [Droplets page](https://cloud.digitalocean.com/droplets/)) to maintain consistency between the node pool and its worker nodes.

## Add a Node Pool to a Cluster Using Automation

## How to Add a Node Pool to a Kubernetes Cluster Using the DigitalOcean CLI

1. [Install `doctl`](https://docs.digitalocean.com/reference/doctl/how-to/install/index.html.md), the official DigitalOcean CLI.
2. [Create a personal access token](https://docs.digitalocean.com/reference/api/create-personal-access-token/index.html.md) and save it for use with `doctl`.
3. Use the token to grant `doctl` access to your DigitalOcean account.
    ```shell
    doctl auth init
    ```

4. Finally, run `doctl kubernetes cluster node-pool create`. Basic usage looks like this, but you can [read the usage docs](https://docs.digitalocean.com/reference/doctl/reference/kubernetes/cluster/node-pool/create/index.html.md) for more details:

    ```shell
    doctl kubernetes cluster node-pool create [flags]
    ```

The following example creates a node pool named `example-pool` in a cluster named `example-cluster`:

```shell
doctl kubernetes cluster node-pool create example-cluster --name example-pool --size s-1vcpu-2gb --count 3 --taint "key1=value1:NoSchedule" --taint "key2:NoExecute"
```

## How to Add a Node Pool to a Kubernetes Cluster Using the DigitalOcean API

1. [Create a personal access token](https://docs.digitalocean.com/reference/api/create-personal-access-token/index.html.md) and save it for use with the API.
2. Send a POST request to [`https://api.digitalocean.com/v2/kubernetes/clusters/$K8S_CLUSTER_ID/node_pools`](https://docs.digitalocean.com/reference/api/digitalocean/index.html.md).

### Add a GPU Worker Node to a Cluster

You can also add a [GPU node pool](https://docs.digitalocean.com/products/kubernetes/details/supported-gpus/index.html.md) to an existing cluster on versions 1.30.4-do.0, 1.29.8-do.0, 1.28.13-do.0, and later.

**Note**: In rare cases, it can take several hours for a GPU Droplet to provision. If you have an unusually long creation time, [open a support ticket](https://cloudsupport.digitalocean.com).

## How to Add a GPU Worker Node to a Cluster Using the DigitalOcean CLI

To add a GPU worker node to an existing cluster, run `doctl kubernetes cluster node-pool create`, specifying the GPU machine type.
The following example creates a node pool named `gpu-worker-pool-1` with four NVIDIA GPU worker nodes, each in single-GPU configuration with 80 GB of memory, in a cluster named `gpu-cluster`:

```shell
doctl kubernetes cluster node-pool create gpu-cluster --name gpu-worker-pool-1 --size gpu-h100x1-80gb --count 4
```

We automatically apply the `nvidia.com/gpu:NoSchedule` taint to all NVIDIA GPU nodes. You do not need to apply this taint manually.

## How to Add a GPU Worker Node to a Cluster Using the DigitalOcean API

To add a GPU worker node to an existing cluster, send a `POST` request to `https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools`.

The following example creates a node pool with four NVIDIA GPU worker nodes, each in single-GPU configuration with 80 GB of memory, in an existing cluster specified by its cluster ID, `cluster_id`:

```shell
curl --location --request POST 'https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}/node_pools' \
  --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  --data '{
    "size": "gpu-h100x1-80gb",
    "count": 4,
    "name": "new-gpu-worker-pool"
  }'
```

We automatically apply the `nvidia.com/gpu:NoSchedule` taint to all NVIDIA GPU nodes. You do not need to apply this taint manually.

To run GPU workloads after you create a cluster, use the [GPU nodes-specific labels and taint](https://docs.digitalocean.com/products/kubernetes/details/managed/index.html.md#automatic-application-of-labels-and-taints-to-nodes) in your [workload](https://kubernetes.io/docs/concepts/workloads/) specifications to schedule pods that match.
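Before sending the `POST` request, it can help to verify that the request body is valid JSON, since a malformed payload is rejected by the API. The following sketch (using the example pool name and size from this section, and assuming `python3` is available) builds the body in a shell variable and checks that it parses before you pass it to `curl --data`:

```shell
# Request body for the node pool create call (values are the examples above).
PAYLOAD='{
  "size": "gpu-h100x1-80gb",
  "count": 4,
  "name": "new-gpu-worker-pool"
}'

# Validate the JSON locally before sending it with curl.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"
```

If the payload is malformed, `json.tool` prints a parse error instead of `payload OK`, which is much easier to debug than an API error response.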
You can use a configuration spec, similar to the pod spec shown below, for your actual workloads:

`cuda-pod.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  restartPolicy: Never
  nodeSelector:
    doks.digitalocean.com/gpu-brand: nvidia
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
```

The above spec shows how to create a pod that runs NVIDIA’s CUDA image and uses the labels and taint for GPU worker nodes.

You can use the [cluster autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#basics) to automatically scale the GPU node pool down to zero, or use the DigitalOcean CLI or API to manually scale the node pool down to zero. For example:

```shell
doctl kubernetes cluster node-pool update <cluster-id|cluster-name> <pool-id|pool-name> --count 0
```

## Add a Node Pool to a Cluster Using the Control Panel

To add additional node pools to an existing cluster, open the cluster’s **More** menu and select **View Nodes**. Click **Add Node Pool**.

On the **Add node pool(s)** page, specify the following for the node pool:

- **Node pool name**: Choose a name for the node pool. Nodes inside this pool inherit this naming scheme when they are created. If you rename the node pool later, the nodes only inherit the new naming scheme when they are recreated (when you recycle the nodes or resize the node pool).
- Choose the machine type:
  - **Shared CPUs** are built on Basic Droplet plans.
  - **Dedicated CPUs** are built on General Purpose, CPU-Optimized (Regular Intel CPU or Premium Intel CPU), or Memory-Optimized Droplet plans.
  - **GPUs** are built on GPU Droplets, which are powered by AMD and NVIDIA GPUs and can be in a single GPU or 8 GPU configuration.

  **Note**: A cluster must have at least one CPU node pool to be fully operational. This pool is needed to host essential DOKS-managed workloads, such as CoreDNS, preventing them from running on more expensive GPU nodes. For high availability of these workloads, we recommend a minimum of two CPU nodes.

  All machine types, except the Basic nodes, are [dedicated CPU Droplets](https://docs.digitalocean.com/products/droplets/concepts/choosing-a-plan/index.html.md#shared-vs-dedicated). Select the node type from the **Machine type (Droplet)** drop-down list. Choosing the right Kubernetes plan depends highly on your workload. See [Choosing the Right Kubernetes Plan](https://docs.digitalocean.com/products/kubernetes/concepts/choosing-a-plan/index.html.md) for more information.

- **Node plan**: Choose the specific plan you want for your worker nodes. Each of the workers in a node pool has identical resources. Some high-tier node plans are locked. To request access to those plans, click **Submit a request**. In the **Request access to more nodes or higher-tier plans** window, specify the reason for the request and the number of nodes you are requesting, then click **Submit**.
- Select the **Set node pool to autoscale** option to enable autoscaling.
- **Nodes**: For a fixed-size cluster, choose how many nodes to include in the node pool. By default, three worker nodes are selected.
- **Minimum nodes** and **Maximum nodes**: For an autoscaling-enabled cluster, choose the minimum and maximum number of nodes the pool can scale to as the load decreases or increases.

Click **Add Node Pool(s)** to add additional node pools. Click **Save** to apply your changes and provision your new nodes.
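The autoscaling bounds you set in the control panel can also be applied from the CLI with `doctl kubernetes cluster node-pool update` and its `--auto-scale`, `--min-nodes`, and `--max-nodes` flags. The sketch below only prints the command rather than running it (executing it requires `doctl` authentication and a real cluster), and the cluster and pool names are placeholders:

```shell
# Example autoscaling bounds (illustrative); the minimum must not exceed the maximum.
MIN_NODES=2
MAX_NODES=5

if [ "$MIN_NODES" -le "$MAX_NODES" ]; then
  # Printed rather than executed; substitute your own cluster and pool names.
  echo "doctl kubernetes cluster node-pool update example-cluster example-pool --auto-scale --min-nodes $MIN_NODES --max-nodes $MAX_NODES"
fi
```

Checking the bounds before issuing the command avoids a round trip to the API for a request that would be rejected.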