DigitalOcean fully manages Regional Load Balancers and Global Load Balancers, ensuring they are highly available load balancing services. Load balancers distribute traffic to groups of backend resources in specific regions or across different regions, which prevents the health of a service from depending on the health of a single server, cluster, or region.
Using a load balancer as a gateway gives you the flexibility to change your backend infrastructure without affecting the availability of your services, enabling seamless scaling, rolling deployments, large architecture redesigns, and more.
Additionally, sharing the processing workload among a group of servers rather than relying on a single server prevents any one machine from being overwhelmed by requests.
Load balancing services like DigitalOcean Load Balancers give you the benefits of load balancing without the burden of managing the operational complexities.
We offer both regional load balancers (load balancers that span a single datacenter region) and global load balancers (load balancers that span multiple datacenter regions). Global load balancers are currently in beta.
Both global and regional load balancers have the following features:
All DigitalOcean Load Balancers automatically monitor their backend pools and only send requests to Droplets that pass health checks. You can define health check endpoints and set the parameters around what constitutes a healthy response. The load balancer automatically removes Droplets that fail health checks from rotation and adds them back when the health checks pass.
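As a sketch of what a backend might expose for these checks, the example below serves a health endpoint that returns a 2xx response. The /health path, port 8080, and "ok" body are illustrative choices, not values DigitalOcean requires; any 2xx response passes an HTTP health check.

```python
# Minimal health check endpoint a backend Droplet might expose.
# The /health path and "ok" body are illustrative; any 2xx response
# passes an HTTP health check and keeps the Droplet in rotation.
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_response(path: str) -> tuple[int, bytes]:
    """Return (status, body) for a request path."""
    if path == "/health":
        return 200, b"ok"      # healthy: stays in the rotation
    return 404, b"not found"   # a non-2xx response fails the check

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = health_response(self.path)
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve for real (blocks), point the load balancer's health check
# at this port and run:
# HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```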
Additionally, DigitalOcean Load Balancers with more nodes remain more highly available: when a node goes down, the load balancer distributes traffic among the remaining nodes.
There are two different ways to define backend Droplets for a load balancer:
Tags are custom labels you can apply to Droplets.
You can choose up to 10 backend Droplets by name. However, we recommend using tags as a more scalable, automated solution. If you need to add more than 10 Droplets to a load balancer, apply a tag to as many Droplets as needed, then add the tag to the load balancer. There is no limit to the number of Droplets to which you can apply a tag, and the load balancer updates automatically when you add or remove the tag from Droplets.
You can use one tag per load balancer.
The load balancer automatically connects to Droplets in its VPC network. If a Droplet’s private networking interface has been disabled, the load balancer connects to the Droplet over its public IP address instead. All Droplets created after 1 October, 2020 are added to a VPC network by default.
Load balancers send traffic to Droplets using dynamic backend IP addresses that are separate from the public IP addresses displayed in the control panel. Backend IP addresses may change at any time and should not be used to configure firewalls.
The load balancing algorithm assigns new connections and requests to the backends as equally as possible, while maintaining performance as more backends are introduced. In nearly all cases, the load balancing algorithm provides better performance and distribution than traditional round robin and least connections options.
Standard HTTP balancing directs requests based on standard HTTP mechanisms. The load balancer sets the X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to give the backend servers information about the original request.
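The header names above are standard; as a sketch, a backend can recover the original client details from them like this (the dict-based request shape is illustrative, not a specific framework's API):

```python
# Recover original client details from the X-Forwarded-* headers the
# load balancer sets. The dict-based request shape is illustrative.
def original_request_info(headers: dict[str, str]) -> dict[str, str]:
    # X-Forwarded-For may carry a comma-separated chain of addresses;
    # the leftmost entry is the original client.
    client_ip = headers.get("X-Forwarded-For", "").split(",")[0].strip()
    return {
        "client_ip": client_ip,
        "scheme": headers.get("X-Forwarded-Proto", "http"),
        "port": headers.get("X-Forwarded-Port", "80"),
    }

info = original_request_info({
    "X-Forwarded-For": "203.0.113.7",
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Port": "443",
})
print(info)  # {'client_ip': '203.0.113.7', 'scheme': 'https', 'port': '443'}
```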
If user sessions depend on the client always connecting to the same backend, you can enable sticky sessions, which send a cookie to the client so that the load balancer routes the client’s subsequent requests to the same backend.
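The sticky-session mechanic can be sketched as follows. The cookie name "DO-LB" and backend addresses are placeholders, not the actual cookie or selection algorithm DigitalOcean uses:

```python
# Sticky-session sketch: set a cookie on a client's first visit and
# route later requests bearing that cookie to the same backend.
# Cookie name and backend IPs are illustrative placeholders.
import random

BACKENDS = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
COOKIE = "DO-LB"

def route(cookies: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Return (backend, cookies_to_set) for a request."""
    backend = cookies.get(COOKIE)
    if backend in BACKENDS:
        return backend, {}                # stick to the pinned backend
    backend = random.choice(BACKENDS)     # first visit: pick a backend
    return backend, {COOKIE: backend}     # ...and pin it via a cookie

first, set_cookies = route({})            # new client: cookie gets set
again, _ = route(set_cookies)             # returning client
assert first == again                     # same backend both times
```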
You can balance secure traffic using either HTTPS or HTTP/2. Both protocols can be configured with:
SSL termination, which handles the SSL decryption at the load balancer after you add your SSL certificate and private key. Your load balancer can also act as a gateway between HTTP/2 client traffic and HTTP/1.0 or HTTP/1.1 backend applications this way.
SSL passthrough, which forwards encrypted traffic to your backend Droplets. This is a good option for end-to-end encryption and for distributing the SSL decryption overhead, but you need to manage the SSL certificates yourself.
You can configure load balancers to redirect HTTP traffic on port 80 to HTTPS or HTTP/2 on port 443. This way, the load balancer can listen for traffic on both ports but redirect unencrypted traffic for better security.
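The redirect behavior amounts to answering any plain-HTTP request with a 301 pointing at the HTTPS equivalent, as in this sketch. The load balancer performs this for you; the function here is purely illustrative:

```python
# Sketch of the HTTP-to-HTTPS redirect the load balancer performs when
# the redirect option is enabled: a request arriving on port 80 gets a
# 301 to the same host and path over HTTPS. Illustrative only; with the
# option enabled, backends never see the unencrypted request.
def redirect_to_https(host: str, path: str) -> tuple[int, dict[str, str]]:
    return 301, {"Location": f"https://{host}{path}"}

status, headers = redirect_to_https("example.com", "/login")
print(status, headers["Location"])  # 301 https://example.com/login
```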
DigitalOcean Load Balancer Let’s Encrypt certificates are fully managed and automatically renewed on your behalf every 60 days. You can use SSL certificates with HTTPS and HTTP/2.
You can set the amount of time a load balancer allows an HTTP connection to remain idle before timing out.
Regional load balancers have the following additional features:
You can configure a single DigitalOcean Load Balancer to handle multiple protocols and ports. You can control traffic routing with configurable rules that specify the ports and protocols that the load balancer should listen on, as well as the way that it should select and forward requests to the backend servers.
Because DigitalOcean Load Balancers are network load balancers, not application load balancers, they do not support directing traffic to specific backends based on URLs, cookies, HTTP headers, etc.
You can also configure load balancers to receive and balance HTTP/3 traffic.
HTTP/3 can only be used as an entry protocol, and only one HTTP/3 rule may be specified per load balancer. Any forwarding rules using HTTP/3 require an additional rule that specifies either HTTP/2 or HTTPS as the entry protocol.
For browser compatibility purposes, an additional HTTP or HTTP/2 forwarding rule is required when setting up forwarding rules using HTTP/3. Two forwarding rules may share the same entry port.
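The pairing constraint above can be modeled as a small validation over forwarding rules. The snake_case field names below mirror the style of the DigitalOcean API's forwarding-rule objects but are used here purely for illustration:

```python
# Sketch of a valid HTTP/3 rule set: at most one HTTP/3 rule, and a
# companion HTTPS or HTTP/2 rule (which may share the entry port).
# Field names are illustrative, modeled on API-style forwarding rules.
rules = [
    {"entry_protocol": "http3", "entry_port": 443,
     "target_protocol": "http", "target_port": 80},
    {"entry_protocol": "https", "entry_port": 443,   # companion rule for
     "target_protocol": "http", "target_port": 80},  # non-HTTP/3 clients
]

def valid_http3_rules(rules: list[dict]) -> bool:
    h3 = [r for r in rules if r["entry_protocol"] == "http3"]
    if len(h3) > 1:
        return False    # only one HTTP/3 rule per load balancer
    if h3 and not any(r["entry_protocol"] in ("https", "http2") for r in rules):
        return False    # HTTP/3 needs an HTTPS or HTTP/2 companion rule
    return True

print(valid_http3_rules(rules))  # True
```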
Responses from load balancers using HTTP/3 include an alt-svc header to indicate to the client that the endpoint is available over HTTP/3.
TCP balancing is available for applications that do not speak HTTP. For example, deploying a load balancer in front of a database cluster like Galera would allow you to spread requests across all available machines.
UDP balancing is available for applications that require more time-sensitive transmission, such as live broadcasts. Forwarding rules using UDP require you to set both the entry and target protocols to UDP. When using UDP, the load balancer requires that you set up a health check with a port that uses TCP, HTTP, or HTTPS to work properly.
Because UDP is a stateless protocol, the load balancer maintains its own session state in order to route return traffic from Droplets back to the client. The load balancer triggers a session timeout when it hasn’t detected any sending or receiving traffic for one minute.
When using UDP, the load balancer assigns incoming connections to healthy target Droplets using the source IP of the client. This means that all subsequent requests from the same client land on the same target Droplet. If a target Droplet becomes unhealthy or you add or remove target Droplets, the load balancer assigns clients to new target Droplets.
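Source-IP affinity of this kind can be sketched by hashing the client address over the set of healthy backends; the hash function and backend IPs below are illustrative, not DigitalOcean's internal algorithm:

```python
# Sketch of source-IP affinity for UDP: hash the client IP over the
# healthy backends so the same client consistently reaches the same
# Droplet. Hash choice and backend IPs are illustrative.
import hashlib

def pick_backend(client_ip: str, backends: list[str]) -> str:
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

backends = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
a = pick_backend("198.51.100.10", backends)
b = pick_backend("198.51.100.10", backends)
assert a == b   # same client IP always lands on the same Droplet

# If that Droplet fails health checks and is removed, the client is
# reassigned to one of the remaining backends.
c = pick_backend("198.51.100.10", [x for x in backends if x != a])
assert c != a
```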
DigitalOcean Load Balancers support the WebSocket protocol without any additional configuration.
When using WebSockets, the load balancer uses a special one-hour inactivity timeout instead of the default 60-second timeout.
The following forwarding rule configurations support WebSockets:
You can use WebSockets with or without backend keepalive enabled.
We test our load balancers against the industry-standard Autobahn|Testsuite for all configurations.
PROXY protocol is a way to send client connection information (like origin IP addresses and port numbers) to the final backend server rather than discarding it at the load balancer. This information can be helpful for use cases like analyzing traffic logs or changing application functionality based on geographical IP.
DigitalOcean Load Balancers support PROXY protocol version 1. After you enable it on your load balancer, configure your backend services to accept PROXY protocol headers.
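PROXY protocol version 1 is a human-readable header line prepended to the connection, which a backend reads before the application payload. The sketch below parses the addressed form of that line (it does not handle the "PROXY UNKNOWN" variant):

```python
# Sketch of parsing a PROXY protocol version 1 header line:
#   PROXY <TCP4|TCP6> <src-ip> <dst-ip> <src-port> <dst-port>\r\n
# A backend reads this line first to recover the original client
# address the load balancer would otherwise discard.
def parse_proxy_v1(line: bytes) -> dict[str, str]:
    if not line.startswith(b"PROXY ") or not line.endswith(b"\r\n"):
        raise ValueError("not a PROXY protocol v1 header")
    _, proto, src, dst, sport, dport = line[:-2].decode("ascii").split(" ")
    return {"proto": proto, "src": src, "dst": dst,
            "src_port": sport, "dst_port": dport}

hdr = b"PROXY TCP4 203.0.113.7 10.0.0.2 56324 443\r\n"
info = parse_proxy_v1(hdr)
print(info["src"], info["src_port"])  # 203.0.113.7 56324
```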
Global load balancers have the following additional features:
You can add resources across multiple regions to a global load balancer’s backend pool.
You can enable CDN caching on global load balancers to cache content closer to your users around the world. This can improve your application’s performance by reducing the number of requests to your origin servers.