How to Configure Advanced Load Balancer Settings in Kubernetes Clusters

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that provides a fully managed control plane, high availability, autoscaling, and native integration with DigitalOcean Load Balancers and volumes. You can add node pools using shared and dedicated CPUs, as well as NVIDIA H100 GPUs in single-GPU or 8-GPU configurations. DOKS clusters are compatible with standard Kubernetes toolchains and the DigitalOcean API and CLI.

The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster’s resource configuration file. The same DigitalOcean Load Balancer limits apply to load balancers you add to DOKS clusters.

Warning
In addition to the cluster’s Resources tab, cluster resources (worker nodes, load balancers, and volumes) are also listed outside the Kubernetes page in the DigitalOcean Control Panel. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.

You can specify the following advanced settings in the metadata stanza of your configuration file under annotations. To prevent misconfiguration, an invalid value for an annotation results in an error.

Name

Available in: 1.14.10-do.3, 1.15.11-do.0, 1.16.8-do.0, 1.17.5-do.0 and later

This setting lets you specify a custom name for a DigitalOcean Load Balancer or rename an existing one. The name must:

  • Be less than or equal to 255 characters.
  • Start with an alphanumeric character.
  • Consist of alphanumeric characters, ‘.’ (dot), or ‘-’ (dash) characters, and must not end with a ‘-’ (dash).

If you do not specify a custom name, the load balancer defaults to a name made up of the character a followed by the Service UID.

The following example creates a load balancer with the name my.example.com:

. . .
metadata:
  name: name-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "my.example.com"
. . .


Protocol

Available in: 1.11.x and later (UDP: 1.21.11-do.1 and 1.22.8-do.1 or later)

This setting lets you specify the protocol for DigitalOcean Load Balancers. Options are tcp, http, https, and http2. Defaults to tcp.

If https, http2, or http3 is specified, then you must also specify either service.beta.kubernetes.io/do-loadbalancer-certificate-id, service.beta.kubernetes.io/do-loadbalancer-certificate-name, or service.beta.kubernetes.io/do-loadbalancer-tls-passthrough. For the load balancer to work properly, you must also set up a health check on a port that uses TCP, HTTP, or HTTPS.

The following example shows how to specify https as the load balancer protocol:

. . .
metadata:
  name: https-protocol-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
. . .


To use the UDP protocol with a load balancer, define the UDP port in the ports section of the load balancer service configuration file, as shown below:

spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: udp
      protocol: UDP
      port: 53
      targetPort: 53


A load balancer port cannot be shared between TCP and UDP due to a bug in Kubernetes.

Health Checks

Available in: 1.11.x and later (basic functionality)

In the Target section, you choose the Protocol (HTTP, HTTPS, or TCP), Port (80 by default), and Path (/ by default) that nodes should respond on.

In the Additional Settings section, you choose:

  • The Check Interval, which is how many seconds the load balancer waits between health checks.
  • The Response Timeout, which is how many seconds the load balancer waits for a response before marking the health check as failed.
  • The Unhealthy Threshold, which is how many consecutive times a node must fail a health check before the load balancer stops forwarding traffic to it.
  • The Healthy Threshold, which is how many consecutive times a node must pass a health check before the load balancer forwards traffic to it.

The success criteria for HTTP and HTTPS health checks is a status code response in the range 200 - 399. The success criteria for TCP health checks is completing a TCP handshake to connect.

Note
HTTP and HTTPS health checks may fail with Droplets running Apache on Rocky Linux because the default Apache page returns a 403 Forbidden HTTP response code. To fix this, either change the health check from HTTP/HTTPS to TCP or configure Apache to return a 200 OK response code by creating an HTML page in Apache’s root directory.

DigitalOcean Cloud Controller automatically sets a health check configuration suitable for identifying pod and node availability and facilitating graceful rotation of workload and node replacements. The specific values depend on the external traffic policy.

Note
In general, you should not explicitly change the health check annotations for port, path, and protocol.

Port

The load balancer performs health checks against a port on your service. The default is the first node port on the worker nodes as defined in the service. To specify your own value, you must specify an exposed port and not the NodePort, and also set the service.beta.kubernetes.io/do-loadbalancer-override-health-check annotation.

Path

The load balancer uses this path to check whether a backend Droplet is healthy. The default is /. To specify your own value, you must also set the service.beta.kubernetes.io/do-loadbalancer-override-health-check annotation.

Protocol

The load balancer uses this protocol to check whether a backend Droplet is healthy. The default is tcp. Other options are http and https. To specify your own value, you must also set the service.beta.kubernetes.io/do-loadbalancer-override-health-check annotation.

While UDP is not a supported health check protocol, if your load balancer has UDP service ports, you must configure a TCP service as a health check for the load balancer to work properly.
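
For example, a service that mainly exposes UDP ports can also expose a TCP port on the same service so that the load balancer has a TCP endpoint to health check. The following is a minimal sketch; the selector and port numbers are illustrative:

spec:
  type: LoadBalancer
  selector:
    app: udp-example
  ports:
    - name: udp
      protocol: UDP
      port: 53
      targetPort: 53
    # TCP port used by the load balancer's health check
    - name: health-check
      protocol: TCP
      port: 8080
      targetPort: 8080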

Override Health Check

To override the default values for health check port, path, and protocol with your own values, you must additionally set service.beta.kubernetes.io/do-loadbalancer-override-health-check to true.

Interval

The number of seconds between two consecutive health checks. The value must be between 3 and 300. The default value is 3.

Timeout

The number of seconds the load balancer instance waits for a response before marking a health check as failed. The value must be between 3 and 300. The default value is 5.

Threshold

The number of times a health check must fail for a backend Droplet before it is marked as unhealthy and removed from the pool for the given service. The value must be between 2 and 10. The default value is 3.

The following example shows how to configure health checks for a load balancer:

metadata:
  name: health-check-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-override-health-check: "true"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "80"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-check-interval-seconds: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-response-timeout-seconds: "5"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-healthy-threshold: "5"


External Traffic Policies and Health Checks

Load balancers managed by DOKS assess the health of the endpoints for the LoadBalancer service that provisioned them.

A health check’s behavior depends on the service’s externalTrafficPolicy, which can be set to either Local or Cluster. With a Local policy, a node only passes health checks if a destination pod is running locally on it, while a Cluster policy allows nodes to distribute requests to pods on other nodes within the cluster.

Services with a Local policy assess nodes without any local endpoints for the service as unhealthy. The kube-proxy /healthz endpoint on the separate health check node port, created explicitly by Kubernetes, indicates if the node has active pods.

Services with a Cluster policy can assess nodes as healthy even if they do not contain pods hosting that service. The kube-proxy /healthz endpoint available on each worker node indicates if the node is healthy.

Specifically, an external traffic policy of Local guarantees that worker nodes can be drained prior to their removal. You should choose the behavior-related health check parameters (such as frequency and thresholds) such that a node is marked unhealthy before the last target pod on the node stops accepting requests.

To change this setting for a service, run the following command with your desired policy:

kubectl patch svc myservice -p '{"spec":{"externalTrafficPolicy":"Local"}}'

Note
Because DigitalOcean load balancers act as a proxy and terminate client connections, setting externalTrafficPolicy to Local does not preserve the client source IP address. If your service requires retaining the request’s original IP address, see Preserving Client Source IP Address.
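
Alternatively, you can set the policy directly in the service manifest instead of patching the service. The following is a minimal sketch; the service name, selector, and ports are illustrative:

kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: my-app-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80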

Ports

You can specify which ports of the load balancer should use the HTTP, HTTP/2, or TLS protocol.

Note
Ports must not be shared between HTTP, TLS, and HTTP2 port annotations.

HTTP Ports

Available in: 1.14.10-do.3, 1.15.11-do.0, 1.16.8-do.0, 1.17.5-do.0 and later

Use this annotation to specify which ports of the load balancer should use the HTTP protocol.

Values are a comma separated list of ports (for example, 80, 8080).

The following example shows how to specify an HTTP port:

. . .
metadata:
  name: http-ports-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-http-ports: "80"
. . .


HTTP/2 Ports

Available in: 1.12.10-do.2, 1.13.9-do.0, 1.14.5-do.0 and later

Use this annotation to specify which ports of the load balancer should use the HTTP/2 protocol.

Values are a comma separated list of ports (for example, 443, 6443, 7443). If specified, you must also specify either service.beta.kubernetes.io/do-loadbalancer-tls-passthrough, service.beta.kubernetes.io/do-loadbalancer-certificate-id, or service.beta.kubernetes.io/do-loadbalancer-certificate-name.

If service.beta.kubernetes.io/do-loadbalancer-protocol is not set to http2, then this annotation is required for implicit HTTP/2 usage. Unlike service.beta.kubernetes.io/do-loadbalancer-tls-ports, no default port is assumed for HTTP/2 to retain compatibility with the semantics of implicit HTTPS usage.

The following example shows how to specify an HTTP/2 port:

. . .
metadata:
  name: http2-ports-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-http2-ports: "443,80"
. . .


HTTP/3 Ports

Available in: 1.25.4-do.0, 1.24.8-do.0, 1.23.14-do.0

Use this annotation to specify which port of the load balancer should use the HTTP/3 protocol. Unlike other annotations, you cannot specify multiple ports; you can only specify the HTTP/3 protocol for a single port on the cluster. As no default is assumed for HTTP/3, you must provide a port number.

To use the HTTP/3 protocol, you must provide the service.beta.kubernetes.io/do-loadbalancer-certificate-id or service.beta.kubernetes.io/do-loadbalancer-certificate-name annotation. Because the load balancer can only receive HTTP/3 traffic, you must also specify a protocol for the sending traffic by providing either a service.beta.kubernetes.io/do-loadbalancer-tls-ports or service.beta.kubernetes.io/do-loadbalancer-http2-ports annotation.
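
The following sketch combines an HTTP/3 port with TLS and a certificate. It assumes the HTTP/3 annotation is named service.beta.kubernetes.io/do-loadbalancer-http3-port; the port and certificate ID are illustrative:

. . .
metadata:
  name: http3-port-snippet
  annotations:
    # Assumed annotation name for the single HTTP/3 port
    service.beta.kubernetes.io/do-loadbalancer-http3-port: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
. . .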

TLS Ports

Available in: 1.11.x and later

Use this annotation to specify which ports of the load balancer should use the HTTPS protocol.

Values are a comma separated list of ports (for example, 443, 6443, 7443). If specified, you must also specify one of the following:

  • service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: Specifies whether the load balancer should pass encrypted data to backend Droplets. Options are true or false. Defaults to false.
  • service.beta.kubernetes.io/do-loadbalancer-certificate-id: Specifies the certificate ID used for the HTTPS protocol. To list available certificates and their IDs, install doctl and run doctl compute certificate list.
  • service.beta.kubernetes.io/do-loadbalancer-certificate-name (available in DOKS 1.26 and later): Specifies the certificate name used for the HTTPS protocol. We recommend using this annotation because the name of a Let’s Encrypt certificate does not change when it rotates, but the ID changes each time. The name of the certificate must be unique within an account. To list available certificates and their names, install doctl and run doctl compute certificate list.

If you don’t specify an HTTPS port but specify either service.beta.kubernetes.io/do-loadbalancer-tls-passthrough, service.beta.kubernetes.io/do-loadbalancer-certificate-id, or service.beta.kubernetes.io/do-loadbalancer-certificate-name, the load balancer uses port 443 for HTTPS unless service.beta.kubernetes.io/do-loadbalancer-http2-ports already specifies 443.

The following example shows how to specify a TLS port with passthrough:

. . .
metadata:
  name: tls-ports-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
. . .


HTTP Idle Timeout

Available in: 1.24.8-do.0 and later

Specifies the HTTP idle timeout configuration in seconds. The default is 60.

The following example specifies a timeout of 65 seconds:

. . .
metadata:
  name: http-idle-timeout
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-http-idle-timeout-seconds: "65"
. . .


Accessing by Hostname

Available in: 1.12.10-do.2, 1.13.9-do.0, 1.14.5-do.0 and later

Because of an existing limitation in upstream Kubernetes, pods cannot talk to other pods via the IP address of an external load balancer set up through a LoadBalancer-typed service.

As a workaround, you can set up a DNS record for a custom hostname (at a provider of your choice) and have it point to the external IP address of the load balancer. Then, instruct the service to return the custom hostname by specifying the hostname in the service.beta.kubernetes.io/do-loadbalancer-hostname annotation and retrieving the service’s status.Hostname field afterwards.

The workflow for setting up the service.beta.kubernetes.io/do-loadbalancer-hostname annotation is generally:

  1. Deploy the manifest with your service (example below).
  2. Wait for the service’s external IP to become available.
  3. Add an A or AAAA DNS record for your hostname pointing to the external IP.
  4. Add the hostname annotation to your manifest (example below). Deploy it.
kind: Service
apiVersion: v1
metadata:
  name: hello
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "hello.example.com"
spec:
  type: LoadBalancer
  selector:
    app: my-app-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .


SSL Certificates

Available in: 1.11.x and later

You can encrypt traffic to your Kubernetes cluster by using an SSL certificate with the load balancer. You need to create or upload an SSL certificate first, and then use one of the following annotations to reference the certificate in the load balancer’s configuration file:

  • service.beta.kubernetes.io/do-loadbalancer-certificate-id for the certificate ID.

  • service.beta.kubernetes.io/do-loadbalancer-certificate-name (available in DOKS 1.26 and later) for the certificate name. We recommend using this annotation because the name of a Let’s Encrypt certificate does not change when it rotates, but the ID changes each time. The name of the certificate must be unique within an account.

If you provide both, the certificate ID takes precedence.

To obtain the IDs of uploaded SSL certificates, use doctl compute certificate list or the /v2/certificates API endpoint.

To use the certificate, you must also specify HTTPS as the load balancer protocol using either the service.beta.kubernetes.io/do-loadbalancer-protocol or the service.beta.kubernetes.io/do-loadbalancer-tls-ports annotation.

Additionally, you can specify whether to disable automatic DNS record creation for the certificate upon the load balancer’s creation using the do-loadbalancer-disable-lets-encrypt-dns-records annotation. If you specify true, we do not automatically create a DNS A record at the apex of your domain to support the SSL certificate. This setting is available in versions 1.21.5, 1.20.11, and 1.19.15.

The example below creates a load balancer using an SSL certificate:

---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .


See the full configuration example.

Note

When you renew a Let’s Encrypt certificate, DOKS gives it a new UUID and automatically updates the annotations in the cluster that reference the certificate to use the new UUID. However, you must manually update any external configuration files and tools that reference the UUID.

For further troubleshooting, examine your certificates and their details with the doctl compute certificate list command, or contact our support team.

Forced SSL Connections

Available in: 1.11.x and later

The SSL option redirects HTTP requests on port 80 to HTTPS on port 443. When you enable this option, HTTP URLs are forwarded to HTTPS with a 307 redirect.

The example below contains the configuration settings that must be in place for the redirect to work.

. . .
metadata:
  name: https-with-redirect-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .


See the full configuration example for forced SSL connections.

Size Unit

Available in: 1.19.15-do.0, 1.20.11-do.0, 1.21.5-do.0 and later

This setting lets you specify how many nodes the load balancer is created with. The more nodes a load balancer has, the more simultaneous connections it can manage.

The value can be an integer between 1 and 200 and defaults to 1. Your actual load balancer node limit is determined by your account’s limits. To request an increase to your limits, contact support.

After creation, you can resize the load balancer once per minute.

The following example shows how to specify the number of nodes a load balancer contains:

. . .
metadata:
  name: nginx
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "3"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
. . .


Sticky Sessions

Available in: 1.11.x and later

Sticky sessions send subsequent requests from the same client to the same node by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client’s browser. This option is useful for application sessions that rely on connecting to the same node for each request.

  • Sticky sessions route consistently to the same nodes, not pods, so you should avoid having more than one pod per node serving requests.
  • Sticky sessions require your service to configure externalTrafficPolicy: Local to preserve the client source IP addresses when incoming traffic is forwarded to other nodes.

Use the do-loadbalancer-sticky-sessions-type annotation to explicitly enable (cookies) or disable (none) sticky sessions, otherwise the load balancer defaults to disabling sticky sessions:

metadata:
  name: sticky-session-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"


See a full configuration example for sticky sessions.
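
For quick reference, the following sketch combines the sticky sessions annotations with the required Local external traffic policy; the service name, selector, and ports are illustrative:

kind: Service
apiVersion: v1
metadata:
  name: sticky-sessions-example
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: my-app-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80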

PROXY Protocol

Available in: 1.11.x and later

Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your nodes. The software running on the nodes must be properly configured to accept the connection information from the load balancer.

Options are true or false. Defaults to false.

---
. . .
metadata:
  name: proxy-protocol
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
. . .


Preserving Client Source IP Address

DigitalOcean load balancers do not automatically retain the client source IP address when forwarding requests. To preserve the source IP address, do one of the following:

  • Enable PROXY protocol - This requires the receiving application or ingress provider to be able to parse the PROXY protocol header.

  • Use the X-Forwarded-For HTTP header - DigitalOcean load balancers automatically add this header. This option works only when the entry and target protocols are HTTP or HTTP/2 (except for TLS passthrough).

For more information, see Cross-platform support in the Kubernetes documentation.
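
For the X-Forwarded-For approach, it is enough for the service to use an HTTP-based protocol on the load balancer. The following is a minimal sketch; the service name, selector, and ports are illustrative:

. . .
metadata:
  name: xff-example
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: my-app-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
. . .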

Backend Keepalive

Available in: 1.14.10-do.3, 1.15.11-do.0, 1.16.8-do.0, 1.17.5-do.0 and later

By default, DigitalOcean Load Balancers ignore the Connection: keep-alive header of HTTP responses from Droplets to load balancers and close the connection upon completion. When you enable backend keepalive, the load balancer honors the Connection: keep-alive header and keeps the connection open for reuse. This allows the load balancer to use fewer active TCP connections to send and to receive HTTP requests between the load balancer and your target Droplets.

Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving web sites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.

The option applies to all forwarding rules where the target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, or to rules that use HTTPS or HTTP/2 passthrough.

There are no hard limits to the number of connections between the load balancer and each server. However, if the target servers are undersized, they may not be able to handle incoming traffic and may lose packets. See Best Practices for Performance on DigitalOcean Load Balancers.

Options are true or false. Defaults to false.

---
. . .
metadata:
  name: backend-keepalive
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "true"
. . .


Disown

Available in: 1.14.10-do.4, 1.15.12-do.0, 1.16.10-do.0, 1.17.6-do.0 and later

This setting lets you specify whether to disown a managed load balancer. The cluster no longer mutates a disowned load balancer in any way, including creation, updates, and deletion. You can use this setting to change ownership of a load balancer from one Service to another, including a Service in another cluster. For more information, see Changing ownership of a load balancer.

Options are true or false. Defaults to false. You must supply the value as a string, otherwise you may run into a Kubernetes bug that throws away all annotations on your Service resource.

Warning
Load balancers may not work correctly while disowned because necessary updates, such as changes to target nodes or configuration annotations, are no longer propagated to the load balancer. Similarly, the Service status field may no longer reflect the load balancer’s current state. Consequently, you should assign disowned load balancers to a new Service as soon as possible.

The following example shows how to disown a load balancer:

. . .
metadata:
  name: disown-snippet
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.kubernetes.io/do-loadbalancer-disown: "true"
. . .


Firewall Rules

Available in: 1.24.8-do.0 and later

Deny Rules

Specifies the firewall rules that block traffic from passing. Rules must be in the format {type}:{source}.

The following example shows how to add firewall rules to block incoming connections from the 198.51.100.0/16 IP block.

. . .
metadata:
  name: firewall-rules
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-deny-rules: "cidr:198.51.100.0/16"
. . .


Allow Rules

Note
The service.beta.kubernetes.io/do-loadbalancer-allow-rules annotation is deprecated. Use the loadBalancerSourceRanges field in the service configuration file instead. Values specified in the field take precedence over this annotation.

Specifies the firewall rules that allow traffic to pass. Rules must be in the format {type}:{source}.
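
For example, to allow traffic from a specific address range only, set loadBalancerSourceRanges in the service spec; the CIDR range below is illustrative:

spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - "203.0.113.0/24"
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80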

References

For more about managing load balancers, see: