Memory Usage for Valkey Database Clusters
Valkey is a high-performance, open-source, in-memory key-value database designed for caching, message queues, and use as a primary database. Because it is fully compatible with Redis, Valkey serves as a drop-in replacement.
When you create a Valkey database cluster, you choose the cluster configuration, including the size of the nodes. The node size determines how much memory and how many vCPUs the nodes have.
| Plan Size | Available Memory |
|---|---|
| 1 GiB | 0.5 GiB |
| 2 GiB | 1.2 GiB |
| 4 GiB | 2.5 GiB |
| 8 GiB | 5.2 GiB |
| 16 GiB | 11.5 GiB |
| 32 GiB | 21.7 GiB |
| 64 GiB | 43.7 GiB |
Some of the memory in Valkey nodes is reserved for the database cluster’s normal operations, so a Valkey node’s available memory is less than its total memory. This overhead comes from how Valkey handles forking and replication, which are part of a few common operations:
- New standby nodes. When a new standby node connects to the database cluster’s existing main node, the main forks a copy of itself to send its current memory contents to the new node.
- Data persistence. Every 10 minutes, Valkey persists its current state to disk, which requires a similar fork.
- High availability failover. In high availability (HA) setups, if a node fails, the cluster needs to synchronize a new node with the current main by requesting a full copy of its state and then following its replication stream.
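You can check how much of a node’s memory is actually in use by reading the `used_memory` and `maxmemory` fields from `INFO memory`. The sketch below uses the `redis-py` client, which works with Valkey because the wire protocol is compatible; the host, port, and password are placeholders for your own cluster’s connection details.

```python
# Sketch: inspect a Valkey node's memory usage with the redis-py client.
# The host, port, and password below are placeholders for your cluster's
# connection details.
import redis

client = redis.Redis(
    host="your-valkey-host.example.com",  # placeholder
    port=25061,                           # placeholder
    password="your-password",             # placeholder
    ssl=True,
)

mem = client.info("memory")
used = mem["used_memory"]    # bytes currently used by the dataset and overhead
limit = mem["maxmemory"]     # configured memory limit (0 means no limit is set)

print(f"used_memory: {used / 1024**2:.1f} MiB")
if limit:
    print(f"maxmemory:   {limit / 1024**2:.1f} MiB ({used / limit:.0%} used)")
```

Keeping `used_memory` comfortably below the node’s available memory leaves headroom for forks, backups, and replication.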
By default, 70% of a Valkey node’s total memory is allocated as available memory, and an additional 10% is allocated to the replication log. The duration of operations like backups and replication is proportional to the amount of memory Valkey uses, so the overhead reserved for these operations to complete without using swap scales proportionally with node size.
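As a rough illustration of that split, the sketch below applies those percentages to each plan size. These are back-of-the-envelope figures based only on the percentages above; the table earlier in this article shows the actual adjusted values for each plan.

```python
# Rough illustration of the default memory split described above:
# ~70% of total memory is available to the database, ~10% goes to the
# replication log, and the remainder is reserved for forks and other
# operational overhead. See the table above for the actual adjusted values.
PLAN_SIZES_GIB = [1, 2, 4, 8, 16, 32, 64]

for total in PLAN_SIZES_GIB:
    available = total * 0.70
    repl_log = total * 0.10
    reserved = total - available - repl_log
    print(f"{total:>2} GiB plan: ~{available:.1f} GiB available, "
          f"~{repl_log:.1f} GiB replication log, ~{reserved:.1f} GiB reserved")
```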
If there’s not enough memory for the cluster to finish a fork, the cluster enters an out-of-memory (OOM) state and becomes unresponsive. If the amount of data in a node’s initial sync is larger than the replication log, the new node can’t follow the replication stream and must repeat the initial sync. If this keeps happening, the node repeatedly retries the sync without ever fully connecting to the main.
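One way to spot a node at risk of this sync loop is to compare the dataset size on the main node against the replication backlog. Another sketch using `redis-py`, again with placeholder connection details:

```python
# Sketch: compare the dataset size to the replication backlog on the main node.
# If used_memory is much larger than repl_backlog_size, a standby that falls
# behind during its initial sync may not be able to catch up from the backlog.
import redis

client = redis.Redis(
    host="your-valkey-host.example.com",  # placeholder
    port=25061,                           # placeholder
    password="your-password",             # placeholder
    ssl=True,
)

used_memory = client.info("memory")["used_memory"]
backlog_size = client.info("replication")["repl_backlog_size"]

print(f"dataset size:        {used_memory / 1024**2:.1f} MiB")
print(f"replication backlog: {backlog_size / 1024**2:.1f} MiB")
```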
To avoid OOM node failures and standby node creation loops during failover, configure your applications to keep writes to a minimum. You can also customize the cluster’s eviction policy, which determines when and how Valkey evicts old data when the database reaches its memory limit. Learn more about Valkey memory usage and other performance metrics in the Redis documentation FAQ.
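For reference, the sketch below sets an eviction policy on a self-managed Valkey node with `redis-py`. On a managed cluster, the `CONFIG` command is usually restricted, so you would change the eviction policy through your provider’s control panel or API instead; the connection details are placeholders.

```python
# Sketch: set an eviction policy on a self-managed Valkey node.
# On managed clusters the CONFIG command is often restricted; use your
# provider's control panel or API to change the eviction policy instead.
import redis

client = redis.Redis(
    host="your-valkey-host.example.com",  # placeholder
    port=6379,                            # placeholder
    password="your-password",             # placeholder
)

# Evict the least recently used keys (across all keys) once maxmemory is reached.
client.config_set("maxmemory-policy", "allkeys-lru")

print(client.config_get("maxmemory-policy"))
```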