DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.
Choosing the right Kubernetes plan depends heavily on your workload. An oversized cluster underuses its resources and costs more, while an undersized cluster running at full CPU or memory suffers degraded performance or errors.
If this is your first time testing out Kubernetes, see our Build and Deploy Your First Image to Your First Cluster tutorial.
Nodes are built on Droplets. You can choose from the following Droplet plans for your nodes' machine type:
| Machine Type | CPU | vCPUs | Memory | Common Uses |
|---|---|---|---|---|
| Basic (Regular and Premium) | Shared | 1 - 8 | 1 - 16 GB RAM | Testing, Low-Traffic Servers |
| General Purpose | Dedicated | 2 - 40 | 8 - 160 GB RAM (4 GB RAM / vCPU) | Medium to High-Traffic Servers |
| CPU-Optimized | Dedicated | 2 - 48 | 4 - 96 GB RAM (2 GB RAM / vCPU) | CI/CD, Video Encoding, Batch Processing |
| Memory-Optimized | Dedicated | 2 - 32 | 16 - 256 GB RAM (8 GB RAM / vCPU) | High-Performance Databases, Caches, Indexing |
| Storage-Optimized | Dedicated | 2 - 32 | 16 - 256 GB RAM (8 GB RAM / vCPU); 150 - 225 GB SSD / vCPU | Data Storage, Monitoring, Analytics |
Node size and count determine the overall CPU, RAM, and storage of your cluster. The more powerful a node's hardware, the more pods it can run effectively before you need to add another node.
Node size determines the maximum amount of memory you can allocate to pods within it. For a full breakdown of memory available per pod, see the allocatable memory table.
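Pods reserve memory from a node's allocatable pool through the resource requests and limits in their spec; the scheduler only places a pod on a node whose free allocatable memory covers the request. A minimal sketch (the pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web          # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          memory: "256Mi"   # scheduler reserves this much allocatable memory on the node
          cpu: "250m"
        limits:
          memory: "512Mi"   # container is OOM-killed if it exceeds this
          cpu: "500m"
```

The sum of requests across a node's pods cannot exceed the node's allocatable memory, which is why node size caps how much memory you can allocate to any single pod.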
Because of this, we recommend using nodes with less than 2 GB of allocatable memory for development only, not production. For production clusters, we recommend sizing nodes large enough (2.5 GB of allocatable memory or more) to absorb the workload of a down node.
Larger nodes are easier to manage because there are fewer of them, are more cost-efficient, and can run more demanding applications; however, each node hosts more pods to manage, and a single node failure has a larger impact. If you later enable autoscaling for a node pool, also note that DigitalOcean only adds and removes nodes of the pool's chosen size, which results in larger swings in both capacity and cost.
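As a sketch, node-pool autoscaling bounds can be set with the doctl CLI; the cluster and pool names below are placeholders, and the flags reflect `doctl kubernetes cluster node-pool update` as we understand it:

```shell
# Enable autoscaling on an existing node pool. The autoscaler adds and
# removes whole nodes of the pool's Droplet size, staying within the
# min/max node counts.
doctl kubernetes cluster node-pool update my-cluster my-pool \
  --auto-scale \
  --min-nodes 2 \
  --max-nodes 5
```

Because scaling happens in whole-node increments, a pool of smaller nodes scales in finer steps than a pool of large ones.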
After creating a cluster, we recommend benchmarking and load testing your workload to see how it performs under simulated load. For bursty apps or batch jobs, look at resource usage when load is at its expected peak, especially when using shared CPU Basic nodes. If you notice that your app’s performance is too variable for your production needs, consider a machine type with dedicated vCPUs.
Using Kubernetes metrics, you can get more information on your cluster’s CPU load and memory usage:
- If your cluster has high CPU usage most of the time and also significant memory usage, consider scaling both vCPUs and memory and using balanced General Purpose nodes.
- If your cluster has high CPU usage most of the time but very low memory usage, you might be able to save money with CPU-Optimized nodes.
- If your cluster has high memory usage most of the time (potentially maxing out and swapping to disk) but low or moderate CPU usage, consider scaling memory and using Memory-Optimized nodes.
- If your cluster has low to moderate CPU or memory usage most of the time but sometimes bursts up and hits resource limits, consider shared CPU Basic nodes and scale the limiting resource accordingly.
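To see which of these profiles your cluster matches, the `kubectl top` commands report live CPU and memory usage; they assume the metrics-server add-on is running in the cluster:

```shell
# Per-node CPU and memory usage, to compare against node capacity
kubectl top nodes

# Per-pod usage across all namespaces, sorted by memory consumption
kubectl top pods --all-namespaces --sort-by=memory
```

Sampling these figures at peak load, rather than at an idle moment, gives the most useful signal for choosing a machine type.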
Nodes include unlimited free inbound data transfer and some amount of free outbound data transfer, depending on the Droplet instance type and size. Depending on your workload type and bandwidth usage, you could scale your nodes to take advantage of additional free outbound data transfer. For example, streaming and video applications require more bandwidth and network capabilities.
If you need additional storage, you can use network-attached block storage to attach additional volumes to a cluster.