DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service. Deploy Kubernetes clusters with a fully managed control plane, high availability, autoscaling, and native integration with DigitalOcean Load Balancers and volumes. You can add node pools using shared and dedicated CPUs, and NVIDIA H100 GPUs in a single GPU or 8 GPU configuration. DOKS clusters are compatible with standard Kubernetes toolchains and the DigitalOcean API and CLI.
The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster’s resource configuration file. Load balancers created in the control panel or via the API cannot be used by your Kubernetes clusters. The DigitalOcean Load Balancer Service routes load balancer traffic to all worker nodes on the cluster. Only nodes configured to accept the traffic pass health checks; any other nodes fail the checks and report as unhealthy, which is expected. Our community article, How to Set Up an Ingress-NGINX Controller with Cert-Manager on DigitalOcean Kubernetes, provides a detailed, practical example.
The example configuration below defines a load balancer and creates it if one with the same name does not already exist. Additional configuration examples are available in the DigitalOcean Cloud Controller Manager repository.
You can add an external load balancer to a cluster by creating a new configuration file or adding the following lines to your existing service config file. Both the type and ports values are required for type: LoadBalancer:
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
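Assuming the full service definition is saved in a file such as nginx-service.yaml (a placeholder filename), you can apply it to the cluster with kubectl:
kubectl apply -f nginx-service.yaml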
You can configure how many nodes a load balancer contains at creation by setting the size unit annotation. In the context of a service file, this looks like:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "3"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
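After you apply the file, you can read the size unit back from the service object to confirm the annotation took effect; a quick check, assuming the service is named nginx as above:
kubectl get service nginx -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/do-loadbalancer-size-unit}'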
You can control access to the load balancer service by adding firewall rules. Use the loadBalancerSourceRanges field to allow incoming connections and the do-loadbalancer-deny-rules annotation to block incoming connections. The following example configures the load balancer’s firewall to block incoming connections from the 198.51.100.0/16 IP block and to accept connections from the 203.0.113.24 and 203.0.113.68 addresses.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "3"
    service.beta.kubernetes.io/do-loadbalancer-deny-rules: "cidr:198.51.100.0/16"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  loadBalancerSourceRanges:
    - 203.0.113.24/32
    - 203.0.113.68/32
This is the minimum definition required to trigger creation of a DigitalOcean Load Balancer on your account, and billing begins once the creation completes. Currently, you cannot assign a reserved IP address to a DigitalOcean Load Balancer.
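If you have doctl installed and authenticated (an optional assumption; it is not required for any of the configuration above), you can also confirm the load balancer and its settings from the command line:
doctl compute load-balancer list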
The number of nodes a load balancer contains determines how many connections it can maintain at once. Load balancers with more nodes can maintain more connections and are more highly available. The number of nodes can be an integer between 1 and 200, and defaults to 1.
Your actual load balancer node limit is determined by your account’s limits. To request a limit increase, contact support.
After creation, you can resize the load balancer once per minute.
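One way to resize is to update the size unit annotation in place and let the cloud controller reconcile the change; a sketch, assuming the service above is named nginx and you want five nodes:
kubectl annotate service nginx --overwrite \
  service.beta.kubernetes.io/do-loadbalancer-size-unit="5"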
Once you apply the config file to a deployment, you can see the load balancer in the Resources tab of your cluster in the control panel.
Alternatively, use kubectl get services to see its status:
kubectl get services
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP      192.0.2.1     <none>        443/TCP        2h
sample-load-balancer   LoadBalancer   192.0.2.167   <pending>     80:32490/TCP   6s
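Provisioning can take a few minutes. If you want the output to update in place while you wait, you can watch the service instead:
kubectl get services -w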
When the load balancer creation is complete, the EXTERNAL-IP column displays the external IP address instead of <pending>. In the PORT(S) column, the first port is the incoming port (80), and the second port is the node port (32490), not the container port supplied in the targetPort parameter.
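If you want to script against the external address, you can read it from the service’s status once it is populated; a small sketch, assuming the service is named sample-load-balancer:
kubectl get service sample-load-balancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'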
You can view the load balancer with kubectl or from the control panel’s Kubernetes page. If the provisioning process for the load balancer is unsuccessful, you can access the service’s event stream to troubleshoot any errors. The event stream includes information on provisioning status and reconciliation errors.
To get detailed information about a single load balancer’s configuration, including the event stream at the bottom, use kubectl’s describe service command:
kubectl describe service <LB-NAME>
Name:                     sample-load-balancer
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"sample-load-balancer","namespace":"default"},"spec":{"ports":[{"name":"https",...
Selector:                 <none>
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.245.178.117
IPs:                      10.245.178.117
LoadBalancer Ingress:     203.0.113.86
Port:                     https  80/TCP
TargetPort:               443/TCP
NodePort:                 https  32490/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age               From                Message
  ----    ------                ----              ----                -------
  Normal  EnsuringLoadBalancer  3m (x2 over 38m)  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m (x2 over 37m)  service-controller  Ensured load balancer
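You can also filter cluster events by the service name to follow provisioning without the rest of the describe output; a sketch, again assuming the service is named sample-load-balancer:
kubectl get events --field-selector involvedObject.name=sample-load-balancer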
For more about managing load balancers, see:
Other Kubernetes Components in the DigitalOcean Community’s Introduction to Kubernetes
Kubernetes Services in the official Kubernetes Concepts guide