Kubernetes on DigitalOcean

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.

Plans and Pricing

The cost of a DigitalOcean Kubernetes cluster is based on the cluster’s resources:

  • The control plane is fully managed by DigitalOcean and included at no cost.

  • You are currently billed at the worker-node level. Worker nodes are built on Droplets and charged at the same rate as Droplets.

    Starting July 1, 2022, you will be billed at the node pool level instead. Your bill will be calculated based on the configuration and usage of your node pools during the month.

    • Billed hours for each worker node will be capped at 28 days, or 672 hours (28 days multiplied by 24 hours), per month. For example, the hourly price for a $100/month worker node is ($100)/(24 hours * 28 days) = $0.1488/hr. If your cluster has one node pool with three nodes, no autoscaling, and is up for 27 days, the bill is ($0.1488/hr * 24 hours * 27 days) * 3 = $289. Similarly, if the cluster is up for 29 days or for the entire month, the bill is ($0.1488/hr * 24 hours * 28 days) * 3 = $300. If your node pool remains active for more than 28 days or 672 hours in a month, you are only billed for 672 hours.

    • A node pool’s monthly bill is calculated based on its highest usage within the 28-day window. For example, for a cluster with one node pool configured to autoscale from three to six nodes, if the cluster uses three nodes for the first 15 days and six nodes for the last 15 days of the month, you are billed for six nodes for 15 days plus three nodes for 13 days. Similarly, if four nodes run for 28 days and six nodes for 2 days, you are billed for six nodes for 2 days plus four nodes for 26 days.

    • Billing will start when the node in the node pool is ready, even if it is unhealthy. However, nodes you create that do not join the cluster will not be billed.

    • Node pools will have per-second billing instead of being rounded up or down to the nearest hour. For example, your node pool can have a varying number of nodes over different time spans, and their usage will be summed with per-second granularity.

      If a node runs for less than a minute, you will be billed for at least one minute for that span.

    • For every hour the node pool exists during the month, it earns 1/672 of its total bandwidth allowance, up to that limit. Once it has been active for 672 hours, it has reached its full bandwidth allowance.

    • Surge upgrades will be available at no additional cost. Surge upgrades can create up to 10 additional nodes, and the extra nodes created during upgrades do not incur additional charges.

    Your monthly bill should stay almost the same. If your cluster configuration is fixed, the bill will be identical to what you currently receive. Otherwise, for a 30-day or 31-day month, you may see a small increase of roughly one and three days’ worth of usage, respectively.

    If you use the cluster for a partial month, such as 10 days, your price may increase very slightly because of the hourly pricing change (based on the 28-day cap).

  • Integration with DigitalOcean Load Balancers is charged at the same rate as DigitalOcean Load Balancers.

  • Integration with block storage volumes is charged at the same rate as volumes.
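As a rough illustration of the node pool billing rules above, the following Python sketch applies the 672-hour cap and bills the highest usage first. The function name and the (node count, hours) span representation are illustrative only, not part of any DigitalOcean API.

```python
def node_pool_monthly_cost(hourly_price, usage_spans, cap_hours=672):
    """Estimate a node pool's monthly bill under the 28-day (672-hour) cap.

    usage_spans: list of (node_count, hours) tuples describing how many
    nodes ran for how long during the month. Per the rules above, the
    spans with the highest node counts are billed first, and total billed
    hours are capped at cap_hours. Illustrative sketch only.
    """
    remaining = cap_hours
    cost = 0.0
    # Bill the spans with the most nodes first, up to the 672-hour cap.
    for nodes, hours in sorted(usage_spans, key=lambda s: s[0], reverse=True):
        billable = min(hours, remaining)
        cost += hourly_price * nodes * billable
        remaining -= billable
        if remaining <= 0:
            break
    return cost

# A $100/month worker node, i.e. $100 / 672 hours ≈ $0.1488/hr.
rate = 100 / 672
print(round(node_pool_monthly_cost(rate, [(3, 27 * 24)])))  # → 289
print(round(node_pool_monthly_cost(rate, [(3, 29 * 24)])))  # → 300 (capped)
```

The three-node examples reproduce the $289 and $300 figures from the pricing description above; passing multiple spans reproduces the highest-usage-first autoscaling example.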

All charges for Kubernetes clusters appear in the Kubernetes section of monthly invoices.


Outbound data transfer is shared between all Droplets, including Kubernetes worker nodes, so bandwidth for Kubernetes cluster worker nodes is charged at the same rate as Droplet bandwidth pricing.

You can view your accumulated monthly transfer allowance on your account’s billing page in the Droplet transfer section. For an in-depth description of how data transfer accrual works, read our detailed bandwidth billing page.

Starting July 1, 2022, a DOKS cluster accrues its free bandwidth allowance over the highest 28 days (672 hours) of usage. For example, if your Droplet has a monthly bandwidth quota of 5 TB, then you accrue free bandwidth at the rate of 5 TB/(24 hours * 28 days) = 7.44 GB/hr.
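The accrual arithmetic works out as follows. This is a quick sketch; the function name is ours, and it assumes the quota is measured in decimal terabytes (1 TB = 1000 GB).

```python
def hourly_accrual_gb(monthly_quota_tb, cap_hours=672):
    """Free-bandwidth accrual rate under the 28-day (672-hour) window."""
    # Convert the monthly quota to gigabytes and spread it over 672 hours.
    return monthly_quota_tb * 1000 / cap_hours

print(round(hourly_accrual_gb(5), 2))  # → 7.44
```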

Regional Availability

At least one datacenter in every region supports Kubernetes. Kubernetes is not offered in NYC2, AMS2, or SFO1.

Learn more in the regional availability matrix.


Kubernetes is a powerful open-source system for managing containerized applications in a clustered environment. Its focus is to improve how you manage related, distributed components and services across varied infrastructure.

DigitalOcean Kubernetes is a managed Kubernetes service that lets you deploy scalable and secure Kubernetes clusters without the complexities of administering the control plane. We manage the Kubernetes control plane and the underlying containerized infrastructure.

DigitalOcean Kubernetes provides administrator access to the cluster and full access to the Kubernetes API with no restrictions on which API objects you can create. We manage key services and settings on your behalf that you cannot or should not modify.

You retain full access to the cluster with existing toolchains. You have cluster-level administrative rights to create and delete any Kubernetes API objects through the DigitalOcean API and doctl.

There are no restrictions on the API objects you can create as long as the underlying Kubernetes version supports them. We offer the latest version of Kubernetes as well as earlier patch levels of the latest minor version for special use cases. You can also install popular tools like Helm, metrics-server, and Istio.

We only support features that are in the beta or general availability stage in upstream Kubernetes. See the Kubernetes documentation to check which features are in the alpha, beta, or general availability stage.

For updates on DOKS’s latest features and integrations, see the DOKS release notes. For a full list of changes for each available version of Kubernetes, including updates to the backend, API, and system components, see the DOKS changelog.

Conformance Certification

DOKS conforms to the Cloud Native Computing Foundation’s Kubernetes Software Conformance Certification program and is proud to be a CNCF Certified Kubernetes product.

In addition, we run our own extended suite of end-to-end tests on every DOKS release to ensure stability, performance, and upgradability.

Worker Nodes and Node Pools

Worker nodes are built on Droplets, but unlike standalone Droplets, worker nodes are managed with the Kubernetes command-line client kubectl and are not accessible with SSH. On both the control plane and the worker nodes, DigitalOcean maintains system updates, security patches, operating system configuration, and installed packages.

All the worker nodes within a node pool have identical resources, but each node pool can have a different worker configuration. This lets you have different services on different node pools, where each pool has the RAM, CPU, and attached storage resources the service requires.

You can create and modify node pools at any time. Worker nodes are automatically deleted and respawned when needed, and you can manually recycle worker nodes. Nodes inherit the node pool’s naming scheme when you first create the pool; however, renaming a node pool does not rename its existing nodes. Nodes adopt the new naming scheme only when they are recycled or when the node pool is resized and new nodes are created.

Kubernetes role-based access control (RBAC) is enabled by default. See Using RBAC Authorization for details.

Persistent Data

You can persist data in DigitalOcean Kubernetes clusters to block storage volumes using the DigitalOcean CSI plugin. (See the feature overview page to learn which block storage volume features are available on DigitalOcean Kubernetes.) We recommend against using HostPath volumes because nodes are frequently replaced and all data stored on the nodes will be lost.

You can also persist data to DigitalOcean object storage by using the Spaces API to interact with Spaces from within your application.

Load Balancing

The DigitalOcean Kubernetes Cloud Controller supports provisioning DigitalOcean Load Balancers.

VPC Networks

Clusters are added to a VPC network for the datacenter region by default. This keeps traffic between clusters and other applicable resources from being routed outside the datacenter over the public internet.

Cluster overlay networking is preconfigured with Cilium, which supports network policies.


Clusters are automatically tagged with k8s and the specific cluster ID, like k8s:EXAMPLEc-3515-4a0c-91a3-2452eEXAMPLE. Worker nodes are additionally tagged with k8s:worker.

You can add custom tags to a cluster and its node pools. Any custom tags added to worker nodes in a node pool (for example, from the Droplets page) are deleted to maintain consistency between the node pool and its worker nodes.


  • The control plane configuration is managed by DigitalOcean. You cannot modify the control plane files, feature gates, or admission controllers. See The Managed Elements of DigitalOcean Kubernetes for more specifics.

  • The number of Kubernetes clusters you can create is determined by your account’s Droplet limit. If you reach the Droplet limit when creating a new cluster, you can request to increase the limit in the DigitalOcean Control Panel.

  • The control plane is not highly available and may be temporarily unavailable during upgrades or maintenance. This does not affect running clusters and does not make the cluster workers or workloads unavailable. To reduce downtime, you can enable the high-availability (HA) control plane setting when creating clusters on DOKS versions beginning with 1.21.3-do.0.

    For clusters managed by multiple control plane nodes, a cpc-bridge-proxy service reserves port 443 on a worker node’s internal network interface. To minimize the potential for port conflicts, the service now reserves port 16443 in certain clusters.

  • Worker nodes are subject to Droplet limits. Similarly, the following managed resources are subject to their respective limits:

  • The number of open ports allowed in a Kubernetes service is limited to 250.

  • A cluster must have at least one worker node and cannot be scaled down to zero worker nodes.

  • You cannot manually resize DOKS nodes by using the control panel to edit the Droplets. The reconciler will view this as aberrant and revert such changes. To resize DOKS nodes, create a node pool of the desired size, and once it is fully provisioned, remove the old one.

  • The manual deletion of nodes using kubectl delete is not supported and will put your cluster in an unpredictable state. Instead, resize the node pool to the desired number of nodes, or use doctl kubernetes cluster node-pool delete-node.

  • The DigitalOcean support team has read-only access to your Kubernetes clusters when troubleshooting and cannot see your secret objects.

Resource Limits

  • Clusters can have up to 512 nodes.

  • A single worker node can have up to 110 pods.

  • All worker nodes for a cluster are provisioned in the same datacenter region.

  • Network throughput is capped at 2 Gbps per worker node.

For general information on the upper limits of Kubernetes cluster sizes and how large cluster sizes affect scaling behavior, see the official Kubernetes documentation on building large clusters and scalability validation of the release.

Allocatable Memory

The size of DOKS nodes determines the maximum amount of memory you can allocate to Pods. Because of this, we recommend using nodes with less than 2 GB of allocatable memory only for development purposes and not production. These distinctions are visible during the cluster creation process.

The following table describes the maximum allocatable memory available for scheduling pods.

Size Slugs                                           Node Memory (GiB)   Maximum Pod Allocatable Memory
s-1vcpu-2gb, s-2vcpu-2gb                             2                   1 GiB
s-1vcpu-3gb                                          3                   1.66 GiB
s-2vcpu-4gb, c-2                                     4                   2.5 GiB
s-4vcpu-8gb, g-2vcpu-8gb, gd-2vcpu-8gb, c-4          8                   6 GiB
s-6vcpu-16gb, g-4vcpu-16gb, gd-4vcpu-16gb, c-8       16                  13 GiB
s-8vcpu-32gb, g-8vcpu-32gb, gd-8vcpu-32gb, c-16      32                  28 GiB
s-12vcpu-48gb                                        48                  43 GiB
s-16vcpu-64gb, g-16vcpu-64gb, gd-16vcpu-64gb, c-32   64                  58 GiB
s-20vcpu-96gb                                        96                  88.5 GiB
s-24vcpu-128gb, g-32vcpu-128gb, gd-32vcpu-128gb      128                 119.5 GiB
g-40vcpu-160gb, gd-40vcpu-160gb                      160                 151 GiB
s-32vcpu-192gb                                       192                 182 GiB
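The table lends itself to a small lookup, for example when picking the smallest node size whose allocatable memory fits a pod’s memory request. The helper below is a hypothetical sketch using a subset of the rows above; the dictionary and function names are ours.

```python
# Allocatable memory in GiB, transcribed from a subset of the table above.
# Slugs are DigitalOcean size slugs; values are "Maximum Pod Allocatable Memory".
ALLOCATABLE_GIB = {
    "s-1vcpu-2gb": 1, "s-1vcpu-3gb": 1.66, "s-2vcpu-4gb": 2.5,
    "s-4vcpu-8gb": 6, "s-6vcpu-16gb": 13, "s-8vcpu-32gb": 28,
}

def smallest_fit(request_gib):
    """Return the smallest size slug whose allocatable memory covers the request."""
    fits = [(alloc, slug) for slug, alloc in ALLOCATABLE_GIB.items()
            if alloc >= request_gib]
    return min(fits)[1] if fits else None

print(smallest_fit(4))  # → s-4vcpu-8gb
```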

This memory reservation is due to the following processes running on DOKS nodes:

  • kubelet
  • kube-proxy
  • containerd/docker
  • cilium
  • cilium-operator
  • coredns
  • do-node-agent
  • konnectivity-agent
  • cpc-bridge-proxy
  • kubelet-rubber-stamp (older releases only)
  • The OS

In clusters running Kubernetes 1.16 or higher, the allocatable memory is encoded in the “Kube Reserved” and “System Reserved” values in kubelet. For more information, see Reserve Compute Resources for System Daemons in the Kubernetes Documentation.

Names and Tags

  • At creation time, the k8s prefix is reserved for system tags and cannot be used at the beginning of custom tags.

  • You cannot tag load balancers or block storage volumes.

  • Although it’s currently possible, we will not support tagging individual worker nodes in the future.

Feature Support

In DigitalOcean Kubernetes clusters, we do not yet support:

Known Issues

  • In addition to the cluster’s Resources tab, cluster resources (worker nodes, load balancers, and block storage volumes) are also listed outside the Kubernetes page in the DigitalOcean Control Panel. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.

  • The DigitalOcean autoscaler does not support a min_node size of 0; therefore, the minimum node size for an autoscaling group is 1.

  • Installing webhooks targeted at services within the cluster can cause Kubernetes version upgrades to fail because internal services may not be accessible during an upgrade.

  • The certificate authority, client certificate, and client key data in the kubeconfig.yaml file displayed in the control panel expires seven days after you download the file. If you use this file, you need to download a new certificate every week. To avoid this, we strongly recommend using doctl.

  • Kubernetes 1-Click Apps can be installed multiple times to a cluster and will be installed in the same namespace each time. This means that subsequent installations of a given 1-Click App will overwrite the previous instance of that 1-Click App, as well as the data that was associated with it.

  • If a Kubernetes 1-Click App is currently installing and a subsequent install request for the same App is made, the subsequent request is not processed. Only once the first request completes (Done or Failed) can you make another request to install the same Kubernetes 1-Click App on the same cluster.

  • Kubernetes 1-Click Apps that are deleted from a cluster still appear in the history of installed 1-Click Apps on the cluster’s Overview page. If a 1-Click App was installed on a cluster multiple times, it will be listed as installed multiple times regardless of whether the 1-Click App is currently present on the cluster.

  • Resources associated with a DOKS cluster such as load balancers, volumes, and volume snapshots, belong to the default project upon creation, regardless of which project the cluster belongs to. Resources get associated with the correct project when a cluster gets reconciled or is moved between projects.

  • Resources associated with a DOKS cluster are not visible within a project through the control panel or the public API despite belonging to that project.

  • When you renew a Let’s Encrypt certificate, DOKS gives it a new UUID and automatically updates all annotations in the certificate’s cluster to use the new UUID. However, you must manually update any external configuration files and tools that reference the UUID.

Latest Updates

15 March 2022

11 March 2022

12 October 2021

  • High-availability control plane is now in early availability in the following regions: ams3, nyc1, sfo3, and sgp1.

  • Released v1.65.0 of doctl, the official DigitalOcean CLI. This release includes a number of new features:

    • The --ha flag was added to the kubernetes cluster create sub-command to optionally create a cluster configured with a highly available control plane. This feature is in early availability.
    • The kubernetes cluster sub-commands now include a “Support Features” field when displaying version options.
    • The --disable-lets-encrypt-dns-records flag was added to the compute load-balancer create sub-command to optionally disable automatic DNS record creation for Let’s Encrypt certificates that are added to the load balancer.

For more information, see all Kubernetes release notes.