DigitalOcean’s Managed Databases are a fully managed, high-performance database cluster service. Using managed databases is a powerful alternative to installing, configuring, maintaining, and securing databases by hand.
Clusters include daily backups with point-in-time recovery (PITR), standby nodes for high availability, and end-to-end SSL encryption. Managed databases are multi-region and scalable, and their automated failover means even single-node plans add resiliency to your infrastructure. When you create a new managed database cluster, the cluster is placed in a VPC network by default.
We currently offer four database engines: PostgreSQL, MySQL, Redis, and MongoDB.
[Plan feature table: high availability with automated failover; daily point-in-time backups]
DigitalOcean Managed Databases offers three types of nodes:
The primary node of a database cluster processes queries, updates the database, returns results to clients, and acts as the single source of data for all other nodes.
Standby nodes are copies of the primary node that automatically take over if the primary node fails. Database clusters may have zero, one, or two standby nodes. Standby nodes can be added to an existing cluster at any time, except on the smallest single-node plan.
Read-only nodes are copies of the primary node that process queries and return results but cannot make changes to the database itself. They provide geographically distinct, horizontal read scaling. Read-only nodes can be added to a cluster at any time.
A database cluster consists of one primary node and its standby nodes. Read-only nodes are not considered part of the cluster.
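As a sketch of how an application might take advantage of read-only nodes for read scaling, the following routes read queries to a replica connection and everything else to the primary. The connection objects and the read-detection heuristic are illustrative, not part of the DigitalOcean product:

```python
# Sketch: route read-only queries to a read-only node and all other
# statements to the primary. The connection objects here are placeholders
# for real client connections (e.g. from psycopg2 or mysql-connector).

def is_read_query(sql: str) -> bool:
    """Crude heuristic: treat statements starting with SELECT as reads."""
    return sql.lstrip().upper().startswith("SELECT")

class ReadWriteRouter:
    """Send reads to a replica connection, everything else to the primary."""

    def __init__(self, primary_conn, replica_conn):
        self.primary = primary_conn
        self.replica = replica_conn

    def connection_for(self, sql: str):
        # Writes, DDL, and anything ambiguous go to the primary, since
        # read-only nodes cannot make changes to the database.
        return self.replica if is_read_query(sql) else self.primary
```

A real router would need to account for replication lag: a read issued immediately after a write may not yet be visible on the replica.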
All database clusters have automated failover, meaning they automatically detect and replace degraded or failing nodes.
High availability requires redundancy in addition to automatic failover. Database clusters must have at least one standby node to be highly available because standby nodes provide redundancy for the primary node:
Without standby nodes, the primary node is a single point of failure, so the cluster is not highly available.
If the primary node fails, the service becomes unavailable until the primary node’s replacement is reprovisioned. The amount of time it takes to reprovision a node depends on the amount of data being stored; larger databases require more time.
With one standby node, the cluster is highly available.
If the primary node fails, the service remains available. The standby node is immediately promoted to primary and begins serving requests while a replacement standby node is provisioned in the background.
If both nodes fail simultaneously, the service becomes unavailable until at least one of the nodes is reprovisioned.
With two standby nodes, the cluster is highly available and very resilient against downtime.
Even if two nodes fail simultaneously, the service remains available while two replacements are provisioned in the background.
The service only becomes unavailable in the unlikely event of all three nodes failing at the same time.
In other words, the effect of a primary node’s failure on service availability depends on the cluster configuration. Provisioning a new replacement node takes time, but failing over to a standby node is immediate.
Additional redundancy in the database cluster also minimizes the risk of data loss. If there are no running nodes to copy data from, the database cluster reprovisions nodes using the most recent backup and the write-ahead log to recover the database to as close to the point of failure as possible. However, the write-ahead log is backed up every five minutes, so the most recent writes to the database may be lost if the cluster needs to recover this way.
Platform maintenance updates, node failovers, and other brief outages (typically 5-10 seconds) can cause your applications to disconnect from your database nodes. If your application is not configured to attempt a reconnect after a failed connection, this may disrupt your application’s service even after the node is back online and ready to accept requests.
To maintain availability, you should configure your client applications to attempt reconnection a reasonable number of times before timing out and causing a service disruption.
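A minimal reconnect loop might look like the following sketch. Here `connect` stands in for your client library’s connection call (for example, a wrapped `psycopg2.connect`), and the retry count and delay are illustrative defaults, not recommendations from DigitalOcean:

```python
import time

def connect_with_retry(connect, retries=5, delay=2.0):
    """Attempt to connect, retrying on failure before giving up.

    `connect` is any zero-argument callable that returns a connection
    or raises an exception on failure.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return connect()
        except Exception as exc:  # narrow this to your driver's error class
            last_error = exc
            time.sleep(delay)  # consider exponential backoff in production
    # All attempts failed: surface the last error so callers can handle it.
    raise last_error
```

In production you would typically narrow the caught exception to the driver’s connection error class and add jittered exponential backoff rather than a fixed delay.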
Released v1.63.0 of doctl, the official DigitalOcean CLI. This release includes a number of new features:
- `database firewall` sub-commands now support apps as trusted sources
- `monitoring alert` sub-commands for creating and managing alert policies
- A `--droplet-agent` flag was added to the `compute droplet create` sub-command to optionally disable installing the agent for the Droplet web console
MongoDB is now available as a managed database engine in the AMS3, BLR1, FRA1, LON1, NYC1, NYC3, SFO3, SGP1, and TOR1 regions.
The MongoDB database engine is now in general availability.
PostgreSQL 13 is now available for database clusters. You can also now perform in-place upgrades for PostgreSQL clusters to newer versions without any downtime. We currently support PostgreSQL 10, 11, 12, and 13.
For more information, see the full release notes.