DigitalOcean fully manages Regional Load Balancers and Global Load Balancers, ensuring they are highly available load balancing services. Load balancers distribute traffic to groups of backend resources in specific regions or across different regions, which prevents the health of a service from depending on the health of a single server, cluster, or region.
Here are some recommendations on how to get the best performance from your load balancers based on your use case and application architecture.
In most production workloads, HTTP/2 outperforms HTTP and HTTPS because it multiplexes requests over a single connection instead of relying on HTTP/1.x-style sequential requests. We recommend using it unless there is a clear case for HTTP or HTTPS.
HTTP/2 is a major update to the older HTTP/1.x protocol. It was designed primarily to reduce page load time and resource usage.
Its major features offer significant performance improvements; for example, HTTP/2 is binary (instead of text) and fully multiplexed, uses header compression, and has a prioritization mechanism for delivering files.
The IETF HTTP Working Group’s documentation on HTTP/2 is a good resource to learn more.
You can use HTTP/2 by setting your load balancer’s forwarding rules in the control panel. Additionally, load balancers can terminate HTTP/2 client connections, allowing them to function as gateways for HTTP/2 clients and HTTP/1.x applications. In other words, you can transition your existing applications without upgrading the backend apps on your Droplets from HTTP/1.x to HTTP/2.
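If you manage forwarding rules through the DigitalOcean API rather than the control panel, the HTTP/2-to-HTTP/1.x gateway pattern above corresponds to a rule whose entry protocol is `http2` and whose target protocol is `http`. The sketch below shows the shape of that rule as an API v2 payload; the certificate ID is a hypothetical placeholder.

```python
# Sketch of a forwarding rule that terminates HTTP/2 at the load balancer
# and forwards plain HTTP/1.x traffic to backend Droplets, following the
# DigitalOcean API v2 load balancer schema.
forwarding_rule = {
    "entry_protocol": "http2",   # clients connect over HTTP/2 (TLS)
    "entry_port": 443,
    "target_protocol": "http",   # backend apps keep speaking HTTP/1.x
    "target_port": 80,
    "certificate_id": "example-cert-id",  # hypothetical placeholder ID
}
```

Because the load balancer terminates HTTP/2 and re-originates requests as HTTP/1.x, the Droplets behind this rule need no changes.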
Monitoring provides critical performance insights and should be part of any production setup.
Oftentimes, performance issues are caused by a lack of resources on the backend rather than by the load balancer itself or its configuration. Monitoring enables you to identify the bottlenecks affecting your infrastructure’s performance, including when your workload is overloading your Droplets, so you can implement the most impactful changes.
There are a number of ways to monitor performance. One place to start is with DigitalOcean Monitoring, a free, opt-in service that gives you information on your infrastructure’s resource usage.
You can start by looking at the default Droplet Graphs and setting up the DigitalOcean Agent to get more information on CPU, memory, and disk utilization. If you find that you don’t have enough resources for your workload, you can scale your Droplets.
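Beyond the graphs in the control panel, you can pull the same metrics programmatically from the DigitalOcean Monitoring API. The sketch below builds (but does not send) a request for the last hour of a Droplet’s CPU metrics; the Droplet ID is a hypothetical placeholder, and a real API token would be required to execute the call.

```python
import time

# Sketch of a DigitalOcean Monitoring API request for Droplet CPU metrics
# (GET /v2/monitoring/metrics/droplet/cpu). No request is sent here.
DROPLET_ID = "12345"  # hypothetical Droplet ID

now = int(time.time())
url = "https://api.digitalocean.com/v2/monitoring/metrics/droplet/cpu"
params = {
    "host_id": DROPLET_ID,
    "start": str(now - 3600),  # one hour ago (UNIX timestamp)
    "end": str(now),
}
# headers = {"Authorization": "Bearer <your API token>"}  # required to send
```

The same endpoint family covers memory and disk utilization, so a small script polling these metrics can tell you when the backend pool, rather than the load balancer, is the bottleneck.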
If your backend Droplets don’t have enough resources to keep up with your workload, you should consider scaling up or out.
It doesn’t matter how your load balancer distributes work among your Droplets if the total workload is too large for them to handle, so verify that your backend Droplet pool has sufficient resources.
There are two ways to scale: horizontally, which distributes work over more servers, and vertically, which increases the resources available to existing servers. Although load balancers facilitate horizontal scaling, both kinds of scaling improve performance.
To scale horizontally, you can add more Droplets to your load balancer by navigating to a particular load balancer’s page in the control panel and clicking the Add Droplets button.
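The same operation is available through the API if you prefer to script horizontal scaling. The sketch below shows the request body for attaching Droplets to an existing load balancer (POST /v2/load_balancers/{lb_id}/droplets); the load balancer and Droplet IDs are hypothetical placeholders.

```python
# Sketch of attaching Droplets to an existing load balancer via the
# DigitalOcean API v2. IDs below are hypothetical placeholders.
LB_ID = "example-lb-id"

payload = {"droplet_ids": [111, 222, 333]}  # hypothetical Droplet IDs to add
endpoint = f"https://api.digitalocean.com/v2/load_balancers/{LB_ID}/droplets"
# Send with an authenticated POST to grow the backend pool.
```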
The kind of Droplets you use impacts performance as well, so make sure you choose the right Droplet for your application. For example, CPU-Optimized Droplets work best for computationally intensive workloads, like CI/CD and high performance application servers.
To scale vertically, you can resize your existing Droplets to give them more RAM and CPU.
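Resizing is also exposed as a Droplet action in the API. The sketch below shows the request body for a CPU/RAM-only resize (POST /v2/droplets/{id}/actions); the Droplet ID and target size slug are hypothetical placeholders.

```python
# Sketch of a Droplet resize action via the DigitalOcean API v2.
# A resize with "disk": False changes only CPU and RAM and is reversible;
# a disk resize is permanent.
DROPLET_ID = "12345"  # hypothetical Droplet ID

action = {
    "type": "resize",
    "size": "s-4vcpu-8gb",  # hypothetical target size slug
    "disk": False,          # CPU/RAM-only resize
}
endpoint = f"https://api.digitalocean.com/v2/droplets/{DROPLET_ID}/actions"
# Send with an authenticated POST while the Droplet is powered off.
```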
If you determine that your load balancer cannot maintain enough connections or distribute traffic quickly enough, you can scale the load balancer so that it can manage more connections at once.
Scale your load balancer when its connections or requests per second approach their maximum limits. You can monitor the load balancer’s traffic patterns using the Graphs tab.
You may also want to increase the load balancer’s number of nodes if you are expecting an increase in traffic.
You can change the number of nodes your load balancer contains in the Scaling Configuration section of its Settings tab.
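The node count can also be set through the API. In the API v2 load balancer schema, the number of nodes is the `size_unit` field of the load balancer’s configuration, updated with a PUT request; the sketch below shows only that field, and the target count of 3 is an assumption for illustration.

```python
# Sketch of the size_unit field used to scale a DigitalOcean load balancer's
# node count (PUT /v2/load_balancers/{lb_id}). A PUT replaces the full
# configuration, so in practice the existing name, region, and forwarding
# rules must be re-sent alongside this field.
update = {
    "size_unit": 3,  # hypothetical target node count for illustration
}
```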