DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.
The cost of a DigitalOcean Kubernetes cluster is based on the cluster’s resources:
Nodes are built on Droplets. The control plane is fully managed by DigitalOcean and included at no cost. Worker nodes are charged at the same rate as Droplets. Basic Droplets (Regular, Premium Intel, and Premium AMD) and CPU-Optimized Droplet plans are available for worker nodes.
All charges for Kubernetes clusters appear in the Kubernetes section of monthly invoices.
Outbound data transfer is shared between all Droplets, including Kubernetes worker nodes, so bandwidth for Kubernetes cluster worker nodes is charged at the same rate as Droplet bandwidth pricing.
You can view your accumulated monthly transfer allowance on your account’s billing page in the Droplet transfer section. For an in-depth description of how data transfer accrual works, read our detailed bandwidth billing page.
At least one datacenter in every region supports Kubernetes. Kubernetes is not offered in the NYC2, AMS2, or SFO1 datacenters.
Learn more in the regional availability matrix.
Kubernetes is a powerful open-source system for managing containerized applications in a clustered environment. Its focus is to improve how you manage related, distributed components and services across varied infrastructure.
DigitalOcean Kubernetes is a managed Kubernetes service that lets you deploy scalable and secure Kubernetes clusters without the complexities of administrating the control plane. We manage the Kubernetes control plane and the underlying containerized infrastructure.
There are no restrictions on the API objects you can create as long as the underlying Kubernetes version supports them. We offer the latest version of Kubernetes as well as earlier patch levels of the latest minor version for special use cases. You can also install popular tools like Helm, metrics-server, and Istio.
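As a sketch of what installing one of these tools looks like, the following uses Helm v3 to install metrics-server from its official kubernetes-sigs chart repository. It assumes `helm` is installed locally and your kubeconfig points at the DOKS cluster; the release name and namespace are choices for illustration.

```shell
# Add the metrics-server chart repository and refresh the local index.
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update

# Install metrics-server into the kube-system namespace.
helm install metrics-server metrics-server/metrics-server \
  --namespace kube-system

# Once the deployment is ready, resource metrics become available.
kubectl top nodes
```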
DOKS conforms to the Cloud Native Computing Foundation’s Kubernetes Software Conformance Certification program and is proud to be a CNCF Certified Kubernetes product.
In addition, we run our own extended suite of end-to-end tests on every DOKS release to ensure stability, performance, and upgradability.
Worker nodes are built on Droplets, but unlike standalone Droplets, worker nodes are managed with the Kubernetes command-line client `kubectl` and are not accessible with SSH. On both the control plane and the worker nodes, DigitalOcean maintains the system updates, security patches, operating system configuration, and installed packages.
All the worker nodes within a node pool have identical resources, but each node pool can have a different worker configuration. This lets you have different services on different node pools, where each pool has the RAM, CPU, and attached storage resources the service requires.
You can create and modify node pools at any time. Worker nodes are automatically deleted and respawned when needed, and you can manually recycle worker nodes. Nodes inherit the node pool's naming scheme when you first create the pool; however, renaming a node pool does not rename existing nodes. Nodes adopt the new naming scheme only when they are recycled or when resizing the pool creates new nodes.
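Node pools can be managed from the command line with `doctl`. This is a sketch with hypothetical cluster and pool names (`example-cluster`, `compute-pool`); substitute your own, and pick any valid Droplet size slug.

```shell
# List clusters and the node pools in one of them.
doctl kubernetes cluster list
doctl kubernetes cluster node-pool list example-cluster

# Add a pool of CPU-Optimized nodes for a compute-heavy service.
doctl kubernetes cluster node-pool create example-cluster \
  --name compute-pool --size c-4 --count 3

# Scale an existing pool to five nodes.
doctl kubernetes cluster node-pool update example-cluster compute-pool \
  --count 5
```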
Kubernetes role-based access control (RBAC) is enabled by default. See Using RBAC Authorization for details.
You can persist data in DigitalOcean Kubernetes clusters to block storage volumes using the DigitalOcean CSI plugin. (See the feature overview page to learn which block storage volume features are available on DigitalOcean Kubernetes.) We recommend against using HostPath volumes because nodes are frequently replaced and all data stored on the nodes will be lost.
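A minimal sketch of dynamic provisioning through the CSI plugin: the `do-block-storage` StorageClass is the DigitalOcean CSI driver's default, while the claim name and size here are illustrative choices.

```shell
# Request a 5 GiB block storage volume via a PersistentVolumeClaim.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce          # block storage volumes attach to a single node
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi
EOF

# Check the claim; the volume is provisioned and bound per the
# storage class's binding mode.
kubectl get pvc example-pvc
```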
You can also persist data to DigitalOcean object storage by using the Spaces API to interact with Spaces from within your application.
Clusters are added to a VPC network for the datacenter region by default. This keeps traffic between clusters and other applicable resources from being routed outside the datacenter over the public internet.
Cluster logs are rotated when they reach 10 MB in size. The last 2 copies are retained in addition to the current active log.
Clusters are automatically tagged with `k8s` and the specific cluster ID, like `k8s:EXAMPLEc-3515-4a0c-91a3-2452eEXAMPLE`. Worker nodes are additionally tagged with `k8s:worker`. You can add custom tags to the cluster and worker nodes in the Tags field.
The control plane configuration is managed by DigitalOcean. You cannot modify the control plane files, feature gates, or admission controllers. See The Managed Elements of DigitalOcean Kubernetes for more specifics.
The number of Kubernetes clusters you can create is determined by your account’s Droplet limit. If you reach the Droplet limit when creating a new cluster, you can request to increase the limit in the DigitalOcean Control Panel.
The control plane is not highly available and may be temporarily unavailable during upgrades or maintenance. This does not affect running clusters: worker nodes and the workloads on them remain available.
You cannot manually resize DOKS nodes by editing the underlying Droplets in the control panel. The reconciler treats such changes as aberrant and reverts them. To resize DOKS nodes, create a node pool of the desired size and, once it is fully provisioned, remove the old one.
Manually deleting nodes with `kubectl delete` is not supported and puts your cluster in an unpredictable state. Instead, resize the node pool to the desired number of nodes, or use `doctl kubernetes cluster node-pool delete-node`.
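The supported workflows above look roughly like the following with `doctl`. Cluster, pool, and node names (`example-cluster`, `pool-small`, `pool-large`, `example-node`) are hypothetical placeholders.

```shell
# To resize nodes, create a replacement pool with the desired Droplet size...
doctl kubernetes cluster node-pool create example-cluster \
  --name pool-large --size s-4vcpu-8gb --count 3

# ...then, once its nodes are Ready, delete the old pool. Workloads are
# rescheduled onto the remaining nodes.
doctl kubernetes cluster node-pool delete example-cluster pool-small

# To remove a single node, use delete-node rather than kubectl delete.
doctl kubernetes cluster node-pool delete-node example-cluster \
  pool-large example-node
```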
Clusters can have up to 512 nodes.
A single worker node can have up to 110 pods.
All worker nodes for a cluster are provisioned in the same datacenter region.
Network throughput is capped at 2 Gbps per worker node.
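You can confirm the per-node pod limit and resource capacity on a running cluster; this sketch assumes a working kubeconfig for the cluster.

```shell
# Show each node's pod capacity and allocatable memory.
# On DOKS worker nodes, PODS reports 110, the per-node pod limit.
kubectl get nodes -o custom-columns=\
NAME:.metadata.name,PODS:.status.capacity.pods,MEM:.status.allocatable.memory
```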
For general information on the upper limits of Kubernetes cluster sizes and how large cluster sizes affect scaling behavior, see the official Kubernetes documentation on building large clusters and scalability validation of the release.
The size of DOKS nodes determines the maximum amount of memory you can allocate to Pods. Because of this, we recommend using nodes with less than 2 GB of allocatable memory only for development purposes and not production. These distinctions are visible during the cluster creation process.
The following table describes the maximum allocatable memory available for scheduling pods.

| Size Slugs | Node Memory (GiB) | Maximum Pod Allocatable Memory (GiB) |
|------------|-------------------|--------------------------------------|
This memory reservation is due to the following processes that are running on DOKS nodes:
In clusters running Kubernetes 1.16 or later, the memory reservation is encoded in kubelet's "Kube Reserved" and "System Reserved" values. For more information, see Reserve Compute Resources for System Daemons in the Kubernetes documentation.
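You can see the reservation on a live node by comparing total capacity with allocatable memory; the difference is the reserved amount described above. This assumes a working kubeconfig for the cluster.

```shell
# Pick the first node and print its capacity vs. allocatable memory.
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get node "$NODE" -o jsonpath='capacity:    {.status.capacity.memory}{"\n"}allocatable: {.status.allocatable.memory}{"\n"}'
```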
At creation time, the `k8s` prefix is reserved for system tags and cannot be used at the beginning of custom tags.
You cannot tag load balancers or block storage volumes.
Tagging individual worker nodes is currently possible, but we will not support it in the future.
In DigitalOcean Kubernetes clusters, we do not yet support:
In the DigitalOcean Control Panel, cluster resources (worker nodes, load balancers, and block storage volumes) are listed outside the Kubernetes page. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with `kubectl` or from the control panel's Kubernetes page.
The DigitalOcean autoscaler does not support a `min_node` size of `0`; therefore, the minimum node size for an autoscaling group is 1.
Installing webhooks targeted at services within the cluster can cause Kubernetes version upgrades to fail because internal services may not be accessible during an upgrade.
The certificate authority, client certificate, and client key data in the `kubeconfig.yaml` file displayed in the control panel expire seven days after download. If you use this file, you need to download a new certificate every week. To avoid this, we strongly recommend using `doctl`, which fetches renewed credentials automatically.
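Fetching credentials with `doctl` looks like the following; the cluster name `example-cluster` is a placeholder, and the command assumes `doctl` is already authenticated against your account.

```shell
# Write or update the cluster's context in ~/.kube/config with fresh
# credentials, avoiding the seven-day expiry of the downloaded file.
doctl kubernetes cluster kubeconfig save example-cluster

# Verify that kubectl now points at the cluster.
kubectl config current-context
```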
Kubernetes 1-Click Apps can be installed multiple times to a cluster and will be installed in the same namespace each time. This means that subsequent installations of a given 1-Click App will overwrite the previous instance of that 1-Click App, as well as the data that was associated with it.
If a Kubernetes 1-Click App is currently installing and a subsequent install request for the same App is made, the subsequent request is not processed. Only once the first request completes (Done or Failed) can a subsequent request be made to install the same Kubernetes 1-Click App on the same cluster.
Kubernetes 1-Click Apps that are deleted from a cluster still appear in the history of installed 1-Click Apps on the cluster’s Overview page. If a 1-Click App was installed on a cluster multiple times, it will be listed as installed multiple times regardless of whether the 1-Click App is currently present on the cluster.
You can now do the following on Kubernetes clusters:
Use surge upgrade when upgrading an existing cluster. Surge upgrade is enabled by default when you create a new cluster.
Move a Kubernetes cluster and its associated resources, such as Droplets, load balancers, and block storage volumes, to a project using the DigitalOcean control panel or the `doctl` command-line tool. You can also assign a project when you create a new cluster. If you do not specify a project, the cluster is assigned to the default project.
Due to capacity limits in the region, we have disabled the creation of new resources in SFO2 for new customers. Existing customers with resources in SFO2 are unaffected and can still create and destroy resources in SFO2.
On Kubernetes 1.19 and later, we provision two fully managed firewalls for each new Kubernetes cluster. One firewall manages connections between worker nodes and the control plane, and the other manages connections between worker nodes and the public internet.
For more information, see all Kubernetes release notes.