DigitalOcean Managed Databases are a fully managed, high-performance database cluster service. Using managed databases is a powerful alternative to installing, configuring, maintaining, and securing databases by hand.
Clusters include daily backups with point-in-time recovery (PITR), standby nodes for high availability, and end-to-end SSL encryption. Managed databases are scalable and available in multiple regions, and their automated failover means even single-node plans add resiliency to your infrastructure. When you create a new managed database cluster, the cluster is placed in a VPC network by default.
We currently offer six database engines, which support the following features:
Feature | PostgreSQL | MySQL | Caching | MongoDB | Kafka | OpenSearch |
---|---|---|---|---|---|---|
High availability with automated failover | ■ | ■ | ■ | ■ | ■ | ■ |
End-to-end security | ■ | ■ | ■ | ■ | ■ | ■ |
Automatic updates | ■ | ■ | ■ | ■ | ■ | ■ |
Logs | ■ | ■ | ■ | ■ | ■ | ■ |
Metrics | ■ | ■ | ■ | ■ | ■ | ■ |
VPC | ■ | ■ | ■ | ■ | ■ | ■ |
Forking | ■ | ■ | ■ | ■ | | |
Read-only nodes | ■ | ■ | ■ | ■ | | |
Daily point-in-time backups | ■ | ■ | | ■ | | |
Eviction policies | | | ■ | | | |
Query insights | | | | ■ | | |
DigitalOcean Managed Databases offers three types of nodes:
- The primary node of a database cluster processes queries, updates the database, returns results to clients, and acts as the single source of data for all other nodes.
- Standby nodes are copies of the primary node that automatically take over if the primary node fails. Database clusters may have zero, one, or two standby nodes. Standby nodes can be added to an existing cluster at any time, except for single-node clusters on the smallest plan. Kafka and OpenSearch do not support standby nodes.
- Read-only nodes are copies of the primary node that process queries and return results but cannot make changes to the database itself. They provide geographically distinct, horizontal read scaling. Read-only nodes can be added to a cluster at any time; see the sketch below for a typical usage pattern.
A database cluster consists of one primary node and its standby nodes. Read-only nodes are not considered part of the cluster.
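As an illustration, the sketch below (Python with the psycopg2 driver; the hostnames, the `DB_PASSWORD` environment variable, and the `signups` table are placeholders, not values from your cluster) routes writes to the primary node's connection string and reads to a read-only node's connection string:

```python
import os
import psycopg2

# Placeholder hostnames and credentials: each node listed in the cluster's
# connection details has its own hostname, and DB_PASSWORD is assumed to
# hold the database password.
COMMON = (
    "port=25060 dbname=defaultdb user=doadmin "
    f"password={os.environ['DB_PASSWORD']} sslmode=require"
)
PRIMARY_DSN = f"host=primary-example.db.ondigitalocean.com {COMMON}"
READONLY_DSN = f"host=readonly-example.db.ondigitalocean.com {COMMON}"

# Reuse one connection per node rather than reconnecting on every call.
primary = psycopg2.connect(PRIMARY_DSN)
replica = psycopg2.connect(READONLY_DSN)

def record_signup(email):
    # Writes must go to the primary node.
    with primary, primary.cursor() as cur:
        cur.execute("INSERT INTO signups (email) VALUES (%s)", (email,))

def count_signups():
    # Reads can be served by a read-only node, offloading the primary.
    with replica, replica.cursor() as cur:
        cur.execute("SELECT count(*) FROM signups")
        return cur.fetchone()[0]
```

Because read-only nodes are replicas of the primary, recent writes may take a moment to appear on them.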
All database clusters have automated failover, meaning they automatically detect and replace degraded or failing nodes.
High availability requires redundancy in addition to automatic failover. Database clusters must have at least one standby node to be highly available because standby nodes provide redundancy for the primary node:
- Without standby nodes, the primary node is a single point of failure, so the cluster is not highly available.
  - If the primary node fails, the service becomes unavailable until the primary node’s replacement is reprovisioned. The amount of time it takes to reprovision a node depends on the amount of data being stored; larger databases require more time.
- With one standby node, the cluster is highly available.
  - If the primary node fails, the service remains available. The standby node is immediately promoted to primary and begins serving requests while a replacement standby node is provisioned in the background.
  - If both nodes fail simultaneously, the service becomes unavailable until at least one of the nodes is reprovisioned.
- With two standby nodes, the cluster is highly available and very resilient against downtime.
  - Even if two nodes fail simultaneously, the service remains available while two replacements are provisioned in the background.
  - The service only becomes unavailable in the unlikely event of all three nodes failing at the same time.
In other words, the effect of a primary node’s failure on service availability depends on the cluster configuration. Provisioning a new replacement node takes time, but failing over to a standby node is immediate.
Additional redundancy in the database cluster also minimizes the risk of data loss. If there are no running nodes to copy data from, the database cluster reprovisions nodes using the most recent backup and the write-ahead log, recovering the database to as close to the point of failure as possible. However, the write-ahead log is only backed up every five minutes, so the most recent writes to the database may be lost if the cluster needs to recover this way.
Platform maintenance updates, node failover, and other brief outages (5-10 seconds) can cause your applications to disconnect from your database nodes. If your application cannot connect during one of these events and is not configured to reconnect, your service may be disrupted even after the node is back online and ready to accept requests.
To maintain availability, you should configure your client applications to attempt reconnection a reasonable number of times before timing out and causing a service disruption.
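For example, a minimal reconnection sketch in Python with the psycopg2 driver might look like the following; the connection string is a placeholder, and the retry count and backoff should be tuned to your application’s tolerance:

```python
import os
import time
import psycopg2

# Placeholder connection string; substitute the values from your cluster's
# connection details. DB_PASSWORD is assumed to hold the database password.
DSN = (
    "host=example-cluster.db.ondigitalocean.com port=25060 "
    "dbname=defaultdb user=doadmin "
    f"password={os.environ['DB_PASSWORD']} sslmode=require"
)

def connect_with_retry(dsn, attempts=5, base_delay=1.0):
    """Retry the connection with exponential backoff so that brief failover
    or maintenance windows can pass before the application gives up."""
    for attempt in range(1, attempts + 1):
        try:
            return psycopg2.connect(dsn)
        except psycopg2.OperationalError:
            if attempt == attempts:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))

conn = connect_with_retry(DSN)
```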
PostgreSQL clusters now support pgvector v0.7.2. You can verify your access to this feature by running `\dx` from `psql`, or by querying `pg_extension` and locating `vector` in the output. If you do not have access to this pgvector version yet, update your PostgreSQL cluster. For a full list of supported extensions, see our guide Supported PostgreSQL Extensions.
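For example, a quick programmatic check (a sketch in Python with the psycopg2 driver; the connection string is a placeholder) queries `pg_extension` for the installed `vector` version:

```python
import os
import psycopg2

# Placeholder connection string; use your cluster's connection details.
DSN = (
    "host=example-cluster.db.ondigitalocean.com port=25060 "
    "dbname=defaultdb user=doadmin "
    f"password={os.environ['DB_PASSWORD']} sslmode=require"
)

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    # pg_extension lists the extensions installed in the current database.
    cur.execute("SELECT extversion FROM pg_extension WHERE extname = 'vector'")
    row = cur.fetchone()
    print(row[0] if row else "pgvector is not installed in this database")
```

Note that `vector` only appears in `pg_extension` after the extension has been created in the database (for example, with `CREATE EXTENSION vector;`); `pg_available_extensions` lists the versions available to install.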
Managed Redis is now called Managed Caching.
Managed Databases now supports log forwarding to OpenSearch, Elasticsearch, and Rsyslog. You can create and manage log sinks using the control panel or the DigitalOcean API. For more detailed steps, see our guides for MySQL, PostgreSQL, Redis, MongoDB, and Kafka.
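As a rough sketch of the API route, the request below is written in Python with the requests library; the endpoint path, payload field names, and every value shown are assumptions or placeholders, so confirm them against the current DigitalOcean API reference before relying on them:

```python
import os
import requests

# ASSUMPTIONS: the /v2/databases/{uuid}/logsink path and the payload field
# names below are based on our reading of the public API reference and may
# differ; check the current API documentation. All values are placeholders.
API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]   # placeholder token source
CLUSTER_UUID = "your-database-cluster-uuid"    # placeholder cluster UUID

resp = requests.post(
    f"https://api.digitalocean.com/v2/databases/{CLUSTER_UUID}/logsink",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "sink_name": "example-opensearch-sink",  # hypothetical sink name
        "sink_type": "opensearch",               # or "elasticsearch", "rsyslog"
        "config": {
            "url": "https://example-opensearch-host:25060",  # placeholder endpoint
            "index_prefix": "db-logs",                       # placeholder prefix
        },
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```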
For more information, see the full release notes.