Kubernetes Limits

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains, integrate natively with DigitalOcean Load Balancers and volumes, and can be managed programmatically using the API and command line. For critical workloads, add the high-availability control plane to increase uptime with a 99.95% SLA.
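For example, you can create a cluster with a high-availability control plane using doctl. This is a minimal sketch; the cluster name, region, and node pool values are illustrative:

    # Create a DOKS cluster with an HA control plane (illustrative values).
    doctl kubernetes cluster create example-cluster \
        --region nyc1 \
        --ha \
        --node-pool "name=example-pool;size=s-2vcpu-4gb;count=3"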


Limits

  • The control plane configuration is managed by DigitalOcean. You cannot modify the control plane files, feature gates, or admission controllers. See The Managed Elements of DigitalOcean Kubernetes for more specifics.

  • The number of Kubernetes clusters you can create is determined by your account’s Droplet limit. If you reach the Droplet limit when creating a new cluster, you can request a limit increase in the DigitalOcean Control Panel. You can also check your current limit from the command line, as shown in the first example after this list.

  • The control plane is not highly available by default and may be temporarily unavailable during upgrades or maintenance. This does not affect running clusters or make the cluster’s worker nodes and workloads unavailable. To reduce downtime, you can enable the high-availability (HA) control plane setting when creating clusters on DOKS version 1.21.3-do.0 and later.

    Note
    For clusters managed by multiple control plane nodes, the cpc-bridge-proxy service previously reserved port 443 on each worker node’s internal network interface. To minimize the potential for port conflicts, the service now reserves port 16443 in certain clusters.

  • Worker nodes are subject to Droplet limits. Similarly, managed resources associated with a cluster, such as load balancers, volumes, and volume snapshots, are subject to their respective limits.

  • The number of open ports allowed in a Kubernetes service is limited to 250.

  • A cluster must have at least one worker node and cannot be scaled down to zero worker nodes.

  • You cannot manually resize DOKS nodes by using the control panel to edit the Droplets. The reconciler treats such changes as configuration drift and reverts them. To resize DOKS nodes, create a node pool of the desired size and, once it is fully provisioned, remove the old one, as shown in the second example after this list.

  • Manually deleting nodes using kubectl delete is not supported and puts your cluster in an unpredictable state. Instead, resize the node pool to the desired number of nodes, or use doctl kubernetes cluster node-pool delete-node.

  • The DigitalOcean support team has read-only access to your Kubernetes clusters for troubleshooting and cannot see your Secret objects.

  • The SCTP protocol enabled in DOKS 1.28 only works if the source and target pods communicating via SCTP are hosted on the same worker node. SCTP is disabled in DOKS 1.29 and later.

  • Service objects cannot share the same host port across different L4 protocols. For example, Service A (using TCP) and Service B (using UDP) cannot both use port 9090.
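As noted above, the number of clusters and worker nodes you can create is bounded by your account’s Droplet limit. Assuming doctl is installed and authenticated, you can check the current limit from the command line:

    # Show account details, including the current Droplet limit.
    doctl account get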
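To resize nodes without editing Droplets directly, replace the node pool instead. A sketch with doctl; the cluster, pool, and node names are illustrative:

    # 1. Create a new pool with the desired node size.
    doctl kubernetes cluster node-pool create example-cluster \
        --name new-pool --size s-4vcpu-8gb --count 3

    # 2. Once the new pool is fully provisioned, remove the old pool.
    doctl kubernetes cluster node-pool delete example-cluster old-pool

    # To remove a single node, use delete-node rather than kubectl delete.
    doctl kubernetes cluster node-pool delete-node example-cluster new-pool <node-id>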

Resource Limits

  • Clusters can have up to 512 nodes.

  • A single worker node can have up to 110 pods.

  • All worker nodes for a cluster are provisioned in the same datacenter region.

  • Network throughput is capped at 10 Gbps for CPU-Optimized worker nodes with Premium CPUs. For all other node types, the network throughput is capped at 2 Gbps per worker node.

For general information on the upper limits of Kubernetes cluster sizes and how large cluster sizes affect scaling behavior, see the official Kubernetes documentation on building large clusters and scalability validation of the release.

Allocatable Memory

The size of DOKS nodes determines the maximum amount of memory you can allocate to Pods. Because of this, we recommend using nodes with less than 2 GB of allocatable memory for development purposes only, not for production workloads. These distinctions are visible during the cluster creation process.

The following table describes the maximum allocatable memory that is available for scheduling pods.

Size Slugs | Node Memory (GiB) | Maximum Pod Allocatable Memory
s-1vcpu-2gb, s-2vcpu-2gb | 2 | 1 GiB
s-1vcpu-3gb | 3 | 1.66 GiB
s-2vcpu-4gb, c-2 | 4 | 2.5 GiB
s-4vcpu-8gb, g-2vcpu-8gb, gd-2vcpu-8gb, c-4 | 8 | 6 GiB
s-6vcpu-16gb, g-4vcpu-16gb, gd-4vcpu-16gb, c-8 | 16 | 13 GiB
s-8vcpu-32gb, g-8vcpu-32gb, gd-8vcpu-32gb, c-16 | 32 | 28 GiB
s-12vcpu-48gb | 48 | 43 GiB
s-16vcpu-64gb, g-16vcpu-64gb, gd-16vcpu-64gb, c-32 | 64 | 58 GiB
s-20vcpu-96gb | 96 | 88.5 GiB
s-24vcpu-128gb, g-32vcpu-128gb, gd-32vcpu-128gb | 128 | 119.5 GiB
g-40vcpu-160gb, gd-40vcpu-160gb | 160 | 151 GiB
s-32vcpu-192gb | 192 | 182 GiB

This memory reservation is due to the following processes that are running on DOKS nodes:

  • kubelet
  • kube-proxy
  • containerd/docker
  • cilium
  • cilium-operator
  • coredns
  • do-node-agent
  • konnectivity-agent
  • cpc-bridge-proxy
  • kubelet-rubber-stamp (older releases only)
  • The OS

In clusters running Kubernetes 1.16 or higher, the allocatable memory is encoded in the “Kube Reserved” and “System Reserved” values in kubelet. For more information, see Reserve Compute Resources for System Daemons in the Kubernetes Documentation.
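You can inspect a node’s capacity and allocatable resources directly with kubectl; the node name below is illustrative:

    # List each node's allocatable memory and pod capacity.
    kubectl get nodes -o custom-columns=NAME:.metadata.name,MEMORY:.status.allocatable.memory,PODS:.status.allocatable.pods

    # Show the full Capacity and Allocatable sections for a single node.
    kubectl describe node example-pool-abc12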

Names and Tags

  • At creation time, the k8s prefix is reserved for system tags and cannot be used at the beginning of custom tags; see the example after this list.

  • You cannot tag load balancers or volumes.

  • Although it is currently possible to tag individual worker nodes, we do not support this, and the ability may be removed in the future.
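For example, when creating a cluster with doctl, any custom tags you pass must avoid the reserved k8s prefix. The tag values here are illustrative:

    # Custom tags must not begin with the reserved "k8s" prefix.
    doctl kubernetes cluster create example-cluster --tag env:staging --tag team-web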

Feature Support

In DigitalOcean Kubernetes clusters, we do not yet support:

Known Issues

  • In addition to the cluster’s Resources tab, cluster resources (worker nodes, load balancers, and volumes) are also listed outside the Kubernetes page in the DigitalOcean Control Panel. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.

  • The new control plane architecture does not support using a Kubernetes NetworkPolicy to selectively allow access to the API server when a NetworkPolicy restricts it. You can instead use CiliumNetworkPolicies, as described in Upgrading to the New Control Plane and sketched in the first example after this list.

  • The DigitalOcean autoscaler does not support a min_nodes value of 0, so the minimum size for an autoscaling node pool is 1; see the second example after this list.

  • Installing webhooks targeted at services within the cluster can cause Kubernetes version upgrades to fail because internal services may not be accessible during an upgrade.

  • The certificate authority, client certificate, and client key data in the kubeconfig.yaml file displayed in the control panel expire seven days after download. If you use this file, you need to download a new certificate every week. To avoid this, we strongly recommend using doctl, as shown in the third example after this list.

  • A Kubernetes 1-Click App can be installed multiple times on a cluster and is installed in the same namespace each time. This means that each subsequent installation of a given 1-Click App overwrites the previous instance, along with the data associated with it.

  • If a Kubernetes 1-Click App is currently installing and a subsequent install request for the same App is made, the subsequent request is not processed. Only after the first request completes (Done or Failed) can you make another request to install the same 1-Click App on the same cluster.

  • Kubernetes 1-Click Apps that are deleted from a cluster still appear in the history of installed 1-Click Apps on the cluster’s Overview page. If a 1-Click App was installed on a cluster multiple times, it is listed as installed multiple times regardless of whether the 1-Click App is currently present on the cluster.

  • Resources associated with a DOKS cluster, such as load balancers, volumes, and volume snapshots, belong to the default project upon creation, regardless of which project the cluster belongs to. These resources become associated with the correct project when the cluster is reconciled or moved between projects.

  • Resources associated with a DOKS cluster are not visible within a project through the control panel or the public API despite belonging to that project.

  • When you renew a Let’s Encrypt certificate, DOKS gives it a new UUID and automatically updates all annotations in the cluster that reference the certificate to use the new UUID. However, you must manually update any external configuration files and tools that reference the UUID.
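For the API server access issue above, the following is a minimal CiliumNetworkPolicy sketch, assuming your cluster’s Cilium version supports the built-in kube-apiserver entity; the policy name is illustrative:

    # Allow pods in the current namespace to reach the Kubernetes API server.
    kubectl apply -f - <<'EOF'
    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: allow-apiserver-egress
    spec:
      endpointSelector: {}
      egress:
        - toEntities:
            - kube-apiserver
    EOF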
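Because the autoscaler’s minimum is 1, set autoscaling bounds accordingly. A sketch with doctl; the cluster and pool names are illustrative:

    # Enable autoscaling with a minimum of one node.
    doctl kubernetes cluster node-pool update example-cluster example-pool \
        --auto-scale --min-nodes 1 --max-nodes 5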
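Rather than downloading kubeconfig.yaml from the control panel and refreshing it weekly, you can fetch cluster credentials with doctl; the cluster name is illustrative:

    # Add the cluster's credentials to your local kubeconfig.
    doctl kubernetes cluster kubeconfig save example-cluster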