Redis is an open source key-value database with an in-memory design that emphasizes speed. It supports rich data types, atomic operations, and Lua scripting.
When you create a Caching database cluster, you choose the cluster configuration, including the size of the nodes. The node size determines how much memory and how many vCPUs the nodes have.
| Plan Size | Available Memory (Adjusted) |
|-----------|-----------------------------|
| 1 GiB     | 0.5 GiB                     |
| 2 GiB     | 1.2 GiB                     |
| 4 GiB     | 2.5 GiB                     |
| 8 GiB     | 5.2 GiB                     |
| 16 GiB    | 11.5 GiB                    |
| 32 GiB    | 21.7 GiB                    |
| 64 GiB    | 43.7 GiB                    |
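As a quick way to see how the overhead scales, you can compute the adjusted-to-total ratio for each plan. The values below are taken directly from the table above; this is only a sanity check, not part of any Caching tooling.

```python
# Adjusted (available) memory as a fraction of each plan's total memory,
# using the values from the table above (all in GiB).
plans = {1: 0.5, 2: 1.2, 4: 2.5, 8: 5.2, 16: 11.5, 32: 21.7, 64: 43.7}

for total, available in plans.items():
    ratio = available / total
    print(f"{total:>2} GiB plan: {available} GiB available ({ratio:.0%})")
```

Note that the ratio grows with plan size: smaller nodes give up proportionally more memory to overhead.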
Some of the memory in Caching nodes is reserved for the database cluster’s normal operations. In other words, a Caching node’s amount of available memory is less than its total amount of memory. This memory overhead comes from how Caching handles forking and replication, which are part of a few common operations:
- **New standby nodes.** When a new standby node connects to the database cluster's existing main node, the main forks a copy of itself to send its current memory contents to the new node.
- **Data persistence.** Every 10 minutes, Caching persists its current state to disk, which requires similar forking.
- **High availability failover.** In high availability (HA) setups, if a node fails, the cluster needs to synchronize a new node with the current main by requesting a full copy of its state and then following its replication stream.
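Forking is memory-efficient because the child process shares pages with the parent (copy-on-write), but any data the parent rewrites while the fork is alive must be duplicated. A rough sketch of the worst-case peak, assuming a hypothetical `dirty_fraction` parameter (not a Redis setting) for the share of data rewritten during the fork:

```python
def fork_peak_memory_gib(used_gib, dirty_fraction):
    """Rough worst-case memory during a fork: the child shares pages with
    the parent via copy-on-write, so only pages the parent rewrites while
    the fork is alive get duplicated. A write-heavy workload pushes
    dirty_fraction toward 1.0, approaching double the memory."""
    return used_gib + used_gib * dirty_fraction

# A node using 5 GiB that rewrites 30% of its data during the fork
# peaks at roughly 6.5 GiB.
print(fork_peak_memory_gib(5.0, 0.3))
```

This is why write-heavy workloads need more headroom than the steady-state memory usage suggests.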
By default, 70% of a Caching node's total memory is allocated as available memory, and an additional 10% is allocated to the replication log. Because the duration of operations like backups and replication is proportional to the total amount of memory Caching uses, the overhead reserved for these operations to complete without using swap is sized proportionally as well.
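The default split can be expressed directly. The function below is illustrative only; note that the adjusted values in the plan table differ slightly from a flat 70%, so treat this as the nominal default rather than an exact per-plan figure.

```python
def memory_budget_gib(total_gib, available_fraction=0.70, replog_fraction=0.10):
    """Split a node's total memory using the nominal default allocation:
    70% available to the application, 10% for the replication log, and the
    remainder reserved for forking and other operational overhead."""
    available = total_gib * available_fraction
    replication_log = total_gib * replog_fraction
    overhead = total_gib - available - replication_log
    return available, replication_log, overhead

# For a nominal 8 GiB node: 5.6 GiB available, 0.8 GiB replication log,
# and 1.6 GiB of remaining operational overhead.
print(memory_budget_gib(8.0))
```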
If there isn't enough memory for the cluster to complete a fork, the cluster enters an out-of-memory (OOM) state and becomes unresponsive. Separately, if the amount of data in a node's initial sync is larger than the replication log, the new node can't follow the replication stream and must repeat the initial sync. If this keeps happening, the node repeatedly retries the sync and never fully connects to the main.
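The sync-retry failure mode can be sketched as a simple comparison; the function and parameter names here are illustrative, not part of any Caching API:

```python
def standby_sync_status(initial_sync_gib, replication_log_gib):
    """If the initial sync is larger than the replication log, writes made
    during the sync fall off the log before the standby can replay them,
    so the standby must restart the sync from scratch."""
    if initial_sync_gib > replication_log_gib:
        return "resync"      # log overflowed: repeat the full initial sync
    return "connected"       # standby can follow the replication stream

# A 40 GiB dataset against a 6.4 GiB replication log loops on resync;
# a 3 GiB dataset connects.
print(standby_sync_status(40.0, 6.4))
print(standby_sync_status(3.0, 6.4))
```

Keeping the dataset (and the write rate during syncs) small relative to the replication log is what breaks the loop.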
Avoid OOM node failures and standby node creation loops by configuring applications to keep writes to a minimum. You can also customize the cluster's eviction policy, which determines when and how Caching evicts old data once the database reaches its memory limit. Learn more about Caching memory usage and other performance metrics in the Redis documentation FAQ.
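To illustrate what an eviction policy such as Redis's `allkeys-lru` does conceptually, here is a minimal least-recently-used cache in pure Python. This class is a teaching sketch, not Caching's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal allkeys-lru-style cache: when the key limit is reached,
    the least recently used key is evicted to make room."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")           # touch "a" so it becomes most recently used
cache.set("c", 3)        # evicts "b", the least recently used key
print(list(cache.data))  # ['a', 'c']
```

An eviction policy trades data retention for availability: the cache keeps serving writes at its memory limit by silently dropping the keys least likely to be needed again.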