Redis Software Developer Observability Playbook

Introduction#

This tutorial provides monitoring guidance for developers running applications that connect to Redis Software. In particular, this guide focuses on the systems and resources that are most likely to impact the performance of your application.

Figure 1. Dashboard showing relevant statistics for a Node

To effectively monitor a Redis Enterprise cluster, you need to observe core cluster resources and key database performance indicators.

Core cluster resources include:

  • Memory utilization
  • CPU utilization
  • Database connections
  • Network traffic
  • Synchronization

Key database performance indicators include:

  • Latency
  • Cache hit rate
  • Key eviction rate

Figure 2. Dashboard showing an overview of cluster metrics

In addition to manually monitoring these resources and indicators, we recommend setting up alerts.

Core cluster resource monitoring#

Memory#

Every Redis Software database has a maximum configured memory limit to ensure isolation in a multi-database cluster.

Figure 3. Dashboard displaying high-level cluster metrics - Cluster Dashboard

Memory Thresholds#

The appropriate memory threshold depends on how the application is using Redis.

  • Caching workloads, which permit Redis to evict keys, can safely use 100% of available memory.
  • Non-caching workloads do not permit key eviction and should be closely monitored as soon as memory usage reaches 80%.

Caching workloads#

For applications using Redis solely as a cache, you can safely let the memory usage reach 100% as long as you have an eviction policy in place. This will ensure that Redis can evict keys while continuing to accept new writes.

NB: When memory usage is at 100%, eviction increases write command latency, because Redis must clean up memory and objects before accepting a new write in order to prevent running out of memory.

While your Redis database is using 100% of available memory in a caching context, it’s still important to monitor performance. The key performance indicators include:

  • Latency
  • Cache hit ratio
  • Evicted keys

Read latency#

Latency has two important definitions, depending on context:

  • In the context of Redis itself, latency is the time it takes for Redis to respond to a request. See Latency for a broader discussion of this metric.
  • In the context of your application, latency is the time it takes for the application to process a request. This includes the time it takes to execute both reads and writes to Redis, as well as calls to other databases and services. Note that it's possible for Redis to report low latency while the application is experiencing high latency. This may indicate a low cache hit ratio, ultimately caused by insufficient memory.

You need to monitor both application-level and Redis-level latency to diagnose caching performance issues in production.
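
As an illustration of measuring latency at the application level, the following sketch uses the redis-py client (an assumption; any client with similar timing hooks works) to time a single read from the application's point of view, which you can then compare against the Redis-level latency reported in the dashboard. The key name and alert threshold are hypothetical.

import time
import redis

r = redis.Redis(host="localhost", port=6379)  # connection details are placeholders

def timed_get(key):
    """Return the value of `key` plus the application-observed latency in milliseconds."""
    start = time.perf_counter()
    value = r.get(key)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return value, elapsed_ms

value, elapsed_ms = timed_get("user:42:profile")  # hypothetical key
if elapsed_ms > 5:  # example threshold; tune for your application
    print(f"Slow read: {elapsed_ms:.2f} ms; compare with the Redis-side latency metrics")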

Cache hit ratio and eviction#

Cache hit ratio is the percentage of read requests that Redis serves successfully. Eviction rate is the rate at which Redis evicts keys from the cache. These metrics are often inversely correlated: a high eviction rate may cause a low cache hit ratio.

If the Redis server is empty, the hit ratio will be 0%. As the application runs and fills the cache, the hit ratio will increase.

When the entire cached working set fits in memory, the cache hit ratio will approach 100% while the percentage of used memory remains below 100%.

When the working set cannot fit in memory, the eviction policy will start to evict keys. The greater the rate of key eviction, the lower the cache hit ratio.

In both cases, keys may be manually invalidated by the application or evicted through the use of TTLs (time-to-live values) and an eviction policy.

The ideal cache hit ratio depends on the application, but generally, the ratio should be greater than 50%. Low hit ratios coupled with high numbers of object evictions may indicate that your cache is too small. This can cause thrashing on the application side, a scenario where the cache is constantly being invalidated.

The upshot here is that when your Redis database is using 100% of available memory, you need to measure the rate of key evictions.

An acceptable rate of key evictions depends on the total number of keys in the database and the measure of application-level latency. If application latency is high, check to see that key evictions have not increased.
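
As a minimal sketch of how to measure these values from the application side, the following example assumes the redis-py client and the keyspace_hits, keyspace_misses, and evicted_keys fields returned by INFO stats; the one-minute sampling window is arbitrary.

import time
import redis

r = redis.Redis(host="localhost", port=6379)

def hit_ratio(stats):
    hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
    total = hits + misses
    return hits / total if total else 0.0

before = r.info("stats")
time.sleep(60)  # sample over one minute
after = r.info("stats")

evictions_per_minute = after["evicted_keys"] - before["evicted_keys"]
print(f"Cache hit ratio: {hit_ratio(after):.1%}")
print(f"Keys evicted during the last minute: {evictions_per_minute}")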

Eviction policies#

Eviction policy guidelines#

  • Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests; that is, you expect a subset of elements to be accessed far more often than the rest. This is a good pick if you are unsure.
  • Use the allkeys-random policy if you have cyclic access where all keys are scanned continuously, or when you expect the distribution to be uniform.
  • Use the volatile-ttl policy if you want to provide hints to Redis about which keys are good candidates for expiration by setting different TTL values when you create your cache objects.

The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance both for caching and for a set of persistent keys. However, it is usually a better idea to run two Redis instances to solve such a problem.

NB: Setting an expire value on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since keys do not need an expire configuration in order to be evicted under memory pressure.
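
To illustrate the volatile-ttl guideline above, here is a small sketch (redis-py assumed; the key names, payloads, and TTL values are hypothetical) in which less valuable cache entries receive shorter TTLs, making them the preferred eviction candidates under volatile-ttl.

import redis

r = redis.Redis(host="localhost", port=6379)

# Expensive-to-rebuild reference data: long TTL, evicted last under volatile-ttl
r.set("cache:catalog:categories", "<serialized categories>", ex=3600)

# Cheap-to-recompute data: short TTL, a preferred eviction candidate
r.set("cache:search:latest-results", "<serialized results>", ex=60)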

Non-caching workloads#

If no eviction policy is enabled, then Redis will stop accepting writes once memory reaches 100%. Therefore, for non-caching workloads, we recommend that you configure an alert at 80% memory usage. Once your database reaches this 80% threshold, you should closely review the rate of memory usage growth.
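
As a hedged sketch of such a check (assuming the redis-py client and that the database's memory limit is exposed through the maxmemory field of INFO memory), you could periodically compare used memory against the limit and flag anything at or above 80%.

import redis

r = redis.Redis(host="localhost", port=6379)

mem = r.info("memory")
used, limit = mem["used_memory"], mem["maxmemory"]

if limit and used / limit >= 0.80:  # 80% threshold recommended for non-caching workloads
    print(f"Warning: memory usage is at {used / limit:.0%} of the configured limit")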

CPU#

Redis Software provides several CPU metrics:

  • Shard CPU: the CPU utilization of the Redis shard processes that make up the database
  • Proxy CPU: the CPU utilization of the proxy process that forwards client requests to shards
  • Node CPU: the overall CPU utilization of the node

To understand CPU metrics, it’s worth recalling how a Redis Software cluster is organized. A cluster consists of one or more nodes. Each node is a VM (or cloud compute instance) or a bare-metal server.

A database is a set of processes, known as shards, deployed across the nodes of a cluster.

In the dashboard, shard CPU is the CPU utilization of the processes that make up the database. When diagnosing performance issues, start by looking at shard CPU.

Figure 4. Dashboard displaying CPU usage - Database Dashboard

CPU Thresholds#

In general, we define high CPU as any CPU utilization above 80% of total capacity.

Shard CPU should remain below 80%. Shards are single-threaded, so a shard CPU of 100% means that the shard is fully utilized.

Figure 5. Display showing Proxy CPU usage - Proxy Dashboard

Proxy CPU should remain below 80% of total capacity. The proxy is a multi-threaded process that handles client connections and forwards requests to the appropriate shard. Because the total number of proxy threads is configurable, the proxy CPU may exceed 100%. A proxy configured with 6 threads can reach 600% CPU utilization, so in this case, keeping utilization below 80% means keeping the total proxy CPU usage below 480%.

Figure 6. Dashboard displaying an ensemble of Node CPU usage data - Node Dashboard

Node CPU should also remain below 80% of total capacity. As with the proxy, the node CPU is variable depending on the CPU capacity of the node. You will need to calibrate your alerting based on the number of cores in your nodes.

CPU Troubleshooting#

Connections#

The Redis Software database dashboard indicates the total number of connections to the database.

This connection count metric should be monitored with both a minimum and maximum number of connections in mind. Based on the number of application instances connecting to Redis (and whether your application uses connection pooling), you should have a rough idea of the minimum and maximum number of connections you expect to see for any given database. This number should remain relatively constant over time.
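
For example, a bounded connection pool keeps the per-instance connection count predictable, and the connected_clients field of INFO clients lets you compare the observed count against your expectations. This sketch assumes the redis-py client; the pool size is hypothetical.

import redis

# Cap the number of connections this application instance can open
pool = redis.ConnectionPool(host="localhost", port=6379, max_connections=20)
r = redis.Redis(connection_pool=pool)

connected = r.info("clients")["connected_clients"]
print(f"Connections currently open against the database: {connected}")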

Connections Troubleshooting#

Figure 7. Dashboard displaying connections - Database Dashboard

Network ingress/egress#

The network ingress/egress panel shows the amount of data being sent to and received from the database. Large spikes in network traffic can indicate that the cluster is under-provisioned or that the application is reading and/or writing unusually large keys. A correlation between high network traffic and high CPU utilization may indicate a large key scenario.

Unbalanced database endpoint#

One possible cause of high network traffic is that the database endpoint is not located on the same node as the master shards. In addition to the added network latency, if data plane internode encryption is enabled, CPU consumption can increase as well.

One solution is to use the optimal shard placement and proxy policy to ensure that endpoints are colocated on nodes hosting master shards. If you need to restore balance (for example, after a node failure), you can manually fail over shards with the rladmin CLI tool.

Extreme network traffic utilization may approach the limits of the underlying network infrastructure. In this case, the only remediation is to add additional nodes to the cluster and scale the database’s shards across them.

Synchronization#

In Redis Software, geographically-distributed synchronization is based on CRDT technology. The Redis Enterprise implementation of CRDT is called an Active-Active database (formerly known as CRDB). With Active-Active databases, applications can read and write to the same data set from different geographical locations seamlessly and with low latency, without changing the way the application connects to the database.

An Active-Active architecture is a data resiliency architecture that distributes the database information over multiple data centers via independent and geographically distributed clusters and nodes. It is a network of separate processing nodes, each with access to a common replicated database, so that all nodes can participate in a common application. This ensures local low latency, with each region able to run in isolation.

To achieve consistency between participating clusters, Redis Active-Active synchronization uses a process called the syncer.

The syncer keeps a replication backlog, which stores changes to the dataset that the syncer sends to other participating clusters. The syncer uses partial syncs to keep replicas up to date with changes, or a full sync in the event a replica or primary is lost.

Figure 8. Dashboard displaying connection metrics between zones - Synchronization Dashboard

CRDT provides three fundamental benefits over other geo-distributed solutions:

  • It offers local latency on read and write operations, regardless of the number of geo-replicated regions and their distance from each other.
  • It enables seamless conflict resolution (“conflict-free”) for simple and complex data types like those of Redis core.
  • Even if most of the geo-replicated regions in a CRDT database (for example, 3 out of 5) are down, the remaining geo-replicated regions are uninterrupted and can continue to handle read and write operations, ensuring business continuity.

Database performance indicators#

There are several key performance indicators that report your database's performance against your application's workload:

  • Latency
  • Cache hit rate
  • Key eviction rate

Latency#

Latency is the time it takes for Redis to respond to a request. Redis Software measures latency from the first byte received by the proxy to the last byte sent in the command’s response.

An adequately provisioned Redis database running efficient Redis operations will report an average latency below 1 millisecond. In fact, it's common to measure latency in terms of microseconds. Customers regularly achieve, and sometimes require, average latencies of 400-600 microseconds.

Figure 9. Dashboard display of latency metrics - Database Dashboard

The metrics distinguish between read and write latency. Understanding whether high latency is due to reads or writes can help you isolate the underlying issue.

Note that these latency metrics do not include network round trip time or application-level serialization, which is why it’s essential to measure request latency at the application, as well.

Figure 10. Display showing a noticeable spike in latency

Latency Troubleshooting#

High database latency can have several causes, including the data access anti-patterns described later in this guide: slow operations, hot keys, and large keys. Note that high database latency is just one possible cause of high application latency. Application latency can be caused by a variety of factors, including a low cache hit rate, a high rate of evictions, or a networking issue.

Cache hit rate#

Cache hit rate is the percentage of read operations that find the requested key. It is a composite statistic, computed by dividing the number of read hits by the total number of read operations. When an application tries to read a key that exists, this is known as a cache hit. Conversely, when an application tries to read a key that does not exist, this is known as a cache miss.

For caching workloads, the cache hit rate should generally be above 50%, although the exact ideal cache hit rate can vary greatly depending on the application and depending on whether the cache is already populated.

Figure 11. Dashboard showing the cache hit ratio along with read/write misses - Database Dashboard

Note: Redis Enterprise actually reports four different cache hit/miss metrics. These are defined as follows:

  • Read hits: read operations on keys that exist
  • Read misses: read operations on keys that do not exist
  • Write hits: write operations on keys that exist
  • Write misses: write operations on keys that do not exist

Cache hit rate troubleshooting#

Cache hit rate is usually only relevant for caching workloads. See Cache hit ratio and eviction for tips on troubleshooting cache hit rate.

Key eviction rate#

The key eviction rate is the rate at which keys are evicted from the database. If an eviction policy is in place for a database, eviction will begin once the database approaches its maximum memory capacity.

A high or increasing rate of evictions will negatively affect database latency, especially if the rate of necessary key evictions exceeds the rate of new key insertions.

See Cache hit ratio and eviction for a discussion of key eviction and its relationship with memory usage.

Figure 12. Dashboard displaying object evictions - Database Dashboard

Proxy performance#

Redis Software (RS) provides high-performance data access through a proxy process that manages and optimizes access to shards within the RS cluster. Each node contains a single proxy process. Each proxy can be active and take incoming traffic or it can be passive and wait for failovers.

Proxy policies#

Figure 13. Dashboard displaying proxy thread activity - Proxy Thread Dashboard

When needed, you can tune the number of proxy threads using the "rladmin tune proxy" command so that the proxy can use more CPU cores. However, cores used by the proxy won't be available for Redis itself, so you need to take into account the number of Redis shard processes running on the node and the total number of available cores.

The rladmin tune proxy command takes the following parameters:

  • <id|all> - you can either tune a specific proxy by its id, or all proxies.
  • <mode> - determines whether or not the proxy can automatically adjust the number of threads depending on load.
  • <threads> and <max_threads> - determine the initial number of threads created on startup, and the maximum number of threads allowed.
  • <scale_threshold> - determines the CPU utilization threshold that triggers spawning new threads. This CPU utilization level needs to be maintained for at least scale_duration seconds before automatic scaling is performed.

The following table indicates ideal proxy thread counts for the specified environments.

Data access anti-patterns#

There are three data access patterns that can limit the performance of your Redis database:

  • Slow operations
  • Hot keys
  • Large keys

This section defines each of these patterns and describes how to diagnose and mitigate them.

Slow operations#

Slow operations are operations that take longer than a few milliseconds to complete.

Not all Redis operations are equally efficient. The most efficient Redis operations are O(1) operations; that is, they have constant time complexity. Examples of such operations include GET, SET, SADD, and HSET.

These constant time operations are unlikely to cause high CPU utilization. Even so, it’s still possible for a high rate of constant time operations to overwhelm an underprovisioned database.

Other Redis operations exhibit greater levels of time complexity. O(n) (linear time) operations are more likely to cause high CPU utilization. Examples include HGETALL, SMEMBERS, and LREM. These operations are not necessarily problematic, but they can be if executed against data structures holding a large number of elements (e.g., a list with 1 million elements).

That said, the KEYS command should almost never be run against a production system, since returning a list of all keys in a large Redis database can cause significant slowdowns and block other operations. If you need to scan the keyspace, especially in a production cluster, always use the SCAN command instead.
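
For instance, with the redis-py client (an assumption; every mainstream client exposes an equivalent), scan_iter walks the keyspace incrementally instead of blocking the shard the way KEYS does. The match pattern and batch size are hypothetical.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Iterate the keyspace in small batches rather than calling KEYS
for key in r.scan_iter(match="session:*", count=500):
    print(key)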

Slow operations troubleshooting#

The best way to discover slow operations is to view the slow log. The slow log is available in the Redis Software and Redis Cloud consoles:

  • Redis Software slow log docs
  • Redis Cloud slow log docs

Figure 14. Redis Cloud dashboard showing slow database operations

Hot keys#

A hot key is a key that is accessed extremely frequently (e.g., thousands of times per second or more).

Each key in Redis belongs to one, and only one, shard. For this reason, a hot key can cause high CPU utilization on that one shard, which can increase latency for all other operations.

Hot keys troubleshooting#

You may suspect that you have a hot key if you see high CPU utilization on a single shard. There are two main ways to identify hot keys: using the Redis CLI and sampling the operations against Redis.

To use the Redis CLI to identify hot keys:

  1. First, confirm that you have enough available memory to enable an eviction policy.
  2. Next, enable the LFU (least-frequently used) eviction policy on the database.
  3. Finally, run redis-cli --hotkeys.

You may also identify hot keys by sampling the operations against Redis. You can do this by running the MONITOR command against the high-CPU shard. Since this is a potentially high-impact operation, you should only use this technique as a secondary option. For mission-critical databases, consider contacting Redis support for assistance.
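
The following is a hedged sketch of such sampling using the redis-py client's monitor() helper (an assumption; verify it is available in your client version). Because MONITOR adds significant load, the sampling window is deliberately short and the key extraction is deliberately crude.

import time
from collections import Counter
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
key_counts = Counter()

with r.monitor() as m:  # WARNING: MONITOR is expensive; sample briefly
    deadline = time.time() + 5  # five-second sampling window
    for event in m.listen():
        parts = event["command"].split()
        if len(parts) >= 2:
            key_counts[parts[1]] += 1  # crude: assumes the key is the first argument
        if time.time() > deadline:
            break

print(key_counts.most_common(10))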

Remediation#

Once you discover a hot key, you need to find a way to reduce the number of operations against it. This means understanding the application's access pattern and the reasons for such frequent access.

If the hot key operations are read-only, consider implementing an application-local cache so that fewer read requests are sent to Redis. For example, even a local cache that expires every 5 seconds can entirely eliminate a hot key issue.
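
A minimal sketch of such an application-local cache (plain Python, with the same hypothetical 5-second expiry) that absorbs repeated reads of a hot key:

import time
import redis

r = redis.Redis(host="localhost", port=6379)

_local_cache = {}  # key -> (value, expiry timestamp)
LOCAL_TTL_SECONDS = 5

def cached_get(key):
    """Serve repeated reads from process memory; fall back to Redis after expiry."""
    value, expires_at = _local_cache.get(key, (None, 0.0))
    if time.time() < expires_at:
        return value
    value = r.get(key)
    _local_cache[key] = (value, time.time() + LOCAL_TTL_SECONDS)
    return value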

Large keys#

Large keys are keys that are hundreds of kilobytes or larger. Large keys can cause high network traffic and high CPU utilization.

Large keys troubleshooting#

To identify large keys, you can sample the keyspace using the Redis CLI.

Run redis-cli --memkeys against your database to sample the keyspace in real time and potentially identify the largest keys in your database.
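
If you prefer to sample from application code instead, the following sketch (redis-py assumed; the 100 KB threshold and sample cap are arbitrary) uses MEMORY USAGE on a sample of keys to surface unusually large ones.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

LARGE_KEY_BYTES = 100 * 1024  # flag keys over roughly 100 KB
sampled = 0

for key in r.scan_iter(count=1000):
    size = r.memory_usage(key) or 0
    if size >= LARGE_KEY_BYTES:
        print(f"Large key: {key} ({size} bytes)")
    sampled += 1
    if sampled >= 10000:  # cap the sample to limit impact on the database
        break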

Remediation#

Addressing a large key issue requires understanding why the application is creating large keys in the first place. As such, it's difficult to provide general advice for solving this issue. Resolution often requires a change to the application's data model or the way it interacts with Redis.

Alerting#

The Redis Software observability package includes a suite of alerts and their associated tests for use with Prometheus. Not all the alerts are appropriate for all environments; for example, installations that do not use persistence have no need for storage alerts.

The alerts are packaged with a series of tests that validate the individual triggers. You can use these tests to validate your modifications to these alerts for specific environments and use cases.

To use these alerts, install Prometheus Alertmanager. For a comprehensive guide to alerting with Prometheus and Grafana, see the Grafana blog post on the subject.

Configuring Prometheus#

To configure Prometheus for alerting, open the prometheus.yml configuration file.

Uncomment the Alertmanager section of the file. The following configuration starts Alertmanager and instructs it to listen on its default port of 9093.

prometheus.yml
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - alertmanager:9093

The rule_files section of the config file instructs Prometheus to read specific rules files. If you pasted the 'alerts.yml' file into '/etc/prometheus', then the following configuration is required.

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "error_rules.yml"
  - "alerts.yml"

Once this is done, restart Prometheus.

The built-in configuration, error_rules.yml, has a single alert: Critical Connection Exception. If you open the Prometheus console, located by default at port 9090, and select the Alerts tab, you will see this alert, as well as the alerts in any other file you have included as a rules file.

Prometheus alerts

The following is a list of alerts contained in the alerts.yml file. There are several points to consider:

  • Not all Redis Software deployments export all metrics
  • Most metrics only alert if the specified trigger persists for a given duration

List of alerts#

Appendix A: Grafana dashboards#

Grafana dashboards are available for Redis Software and Redis Cloud deployments.

These dashboards come in three styles, which may be used in concert with one another to provide a holistic picture of your deployment.

  1. Classic dashboards provide detailed information about the cluster, nodes, and individual databases.
  2. Basic dashboards provide a high-level overview of the various cluster components.
  3. Extended dashboards require a third-party library to perform REST calls.

There are two additional sets of dashboards for Redis Software that provide drill-down functionality: the workflow dashboards.

Software#

Workflow#

Cloud#

NB: The 'workflow' dashboards are intended to be used as a package. Therefore, they should all be installed, as they contain links to the other dashboards in the group, permitting rapid navigation between the overview and the drill-down views.