
What are cache penetration, cache breakdown, and cache avalanche, and how do we solve them?

Before we begin

In the previous article, "[High Concurrency] How does Redis help the high concurrency spike system? After reading this, I totally understand!!", we used the inventory-deduction scenario of a high-concurrency spike (flash-sale) system as an example of how Redis supports such a system. Beyond that scenario, Redis is most often used as a system cache. And when it comes to caching, especially a distributed cache, a little carelessness in a real high-concurrency scenario can cause cache penetration, cache breakdown, and cache avalanche problems. So what is cache penetration? What is cache breakdown? What is a cache avalanche? What causes them, and how do we solve them? Today, we will work through these questions together.

Cache penetration

First, let's talk about cache penetration. The problem is closely tied to the cache hit rate: if the cache is designed poorly and the hit rate is very low, most of the data-access pressure lands on the back-end database.

What is cache penetration?

If a request finds no qualifying data in either the cache layer or the database layer, that is, neither layer yields a hit, the situation is called cache penetration.

We can use the following diagram to represent the phenomenon of cache penetration.

The cause of cache penetration is as follows. When we query the data for a certain key and there is no corresponding entry in the Redis cache, the query goes directly to the database. If the data does not exist in the database either, the database returns an empty result, and Redis does not cache that empty result. Every subsequent query for such a key therefore goes straight to the database, with the cache never helping at all. This is the cache penetration problem.
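To make the failure mode concrete, here is a minimal sketch of this naive cache-aside read in Python with redis-py. The connection details and the query_db() helper are hypothetical stand-ins for illustration, not part of any real system:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def query_db(user_id):
    return None   # hypothetical stand-in for a database lookup; None means no row

def get_user(user_id):
    key = f"user:{user_id}"
    value = r.get(key)
    if value is not None:
        return value                # cache hit
    value = query_db(user_id)       # cache miss: go to the database
    if value is not None:
        r.setex(key, 3600, value)   # only non-empty results are cached
    # when value is None nothing is cached, so every later request for
    # this missing key hits the database again: that is cache penetration
    return value
```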

How to solve the problem of cache penetration?

We now know the root cause of cache penetration: with no corresponding data in the cache, the query goes directly to the database; the database returns an empty result; and the cache never stores that empty result.

This leads naturally to the first solution: cache empty objects. When the first query against the database returns an empty result, we load an empty object into the cache for that key and set a reasonable expiration time. This protects the back-end database to a certain extent.
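A minimal sketch of the empty-object fix, under the same assumptions as the previous example (a local Redis instance and a hypothetical query_db() stub); the sentinel value and TTLs are arbitrary choices:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

NULL_SENTINEL = "__NULL__"   # marker meaning "known to be absent"

def query_db(user_id):
    return None   # hypothetical stand-in for a real database lookup

def get_user(user_id):
    key = f"user:{user_id}"
    value = r.get(key)
    if value == NULL_SENTINEL:
        return None                       # known-missing key, database is spared
    if value is not None:
        return value
    value = query_db(user_id)
    if value is None:
        r.setex(key, 60, NULL_SENTINEL)   # short TTL so real data can appear later
    else:
        r.setex(key, 3600, value)
    return value
```

The short TTL on the sentinel is deliberate: if the missing record is created later, the stale "absent" marker expires within a minute.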

The second solution to cache penetration is a Bloom filter, which is well suited to large volumes of regular key values. Whether a record exists is essentially a Boolean and can be stored in as little as one bit; a Bloom filter compresses this kind of yes/no information into a very compact data structure. Data like user gender, which we are all familiar with, is a good fit for this kind of bit-level processing.
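For illustration, here is a toy Bloom filter built on a Redis bitmap with SETBIT and GETBIT. A real deployment would normally use a tuned library or the RedisBloom module; the bit-array size and hash count below are arbitrary assumptions:

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379)

BITMAP_KEY = "bloom:user_ids"
NUM_BITS = 1 << 24      # about 16 million bits (~2 MB); sizing is an assumption
NUM_HASHES = 3          # number of hash functions; also an assumption

def _offsets(item):
    # derive NUM_HASHES positions in the bit array, one SHA-256 per seed
    for seed in range(NUM_HASHES):
        digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
        yield int(digest, 16) % NUM_BITS

def bloom_add(item):
    for offset in _offsets(item):
        r.setbit(BITMAP_KEY, offset, 1)

def bloom_might_contain(item):
    # False means definitely absent; True means probably present
    return all(r.getbit(BITMAP_KEY, offset) for offset in _offsets(item))

# Check the filter before touching the cache or the database, so that
# queries for keys that cannot exist are rejected immediately:
if not bloom_might_contain("user:12345"):
    print("not found, no cache or database access needed")
```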

Cache breakdown

If we set the same expiration time for most of the data in the cache, then at some moment the cached data will all expire in one batch.

What is cache breakdown?

If the cached data expires in batches at a certain moment, most user requests fall directly on the database. This phenomenon is called cache breakdown.

We can use the following diagram to represent the phenomenon of cache breakdown.

The main cause of cache breakdown is that we set an expiration time on cached data. If a large batch of data is loaded from the database at a certain time and all of it is given the same expiration time, the cached data becomes invalid at the same moment, causing cache breakdown.

How to solve the problem of cache breakdown?

For hot data, we can set the cached entries to never expire; we can also refresh an entry's expiration time whenever it is accessed; and for a batch of cache entries, we can assign each one a reasonable, staggered expiration time so that they do not all become invalid at the same moment.
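A minimal sketch of staggering expiration times with random jitter; the base TTL and jitter window are arbitrary assumptions:

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BASE_TTL = 3600   # one hour base lifetime; an arbitrary choice

def cache_with_jitter(key, value):
    # spread expirations across a five-minute window instead of one instant
    r.setex(key, BASE_TTL + random.randint(0, 300), value)

# truly hot data can instead be written with no TTL, so it never expires:
# r.set("hot:item:1001", "some value")
```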

Another solution is to use a distributed lock, ensuring that for each key only one thread queries the back-end service at a time. While that thread is querying the back end, the other threads, unable to acquire the distributed lock, simply wait. In high-concurrency scenarios, however, this solution puts relatively heavy pressure on the distributed lock itself.
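A minimal sketch of the distributed-lock approach using Redis's atomic SET NX EX, again with a hypothetical query_db() stub. Note that the check-and-delete on release is not atomic here; a production version would use a Lua script:

```python
import time
import uuid
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def query_db(key):
    return "value-from-db"   # hypothetical stand-in for a real database lookup

def get_with_lock(key, ttl=3600, lock_ttl=10):
    while True:
        value = r.get(key)
        if value is not None:
            return value                       # cache hit, no lock needed
        lock_key, token = f"lock:{key}", str(uuid.uuid4())
        # SET NX EX is atomic: only one client can create the lock key
        if r.set(lock_key, token, nx=True, ex=lock_ttl):
            try:
                value = query_db(key)          # only this client hits the DB
                r.setex(key, ttl, value)
                return value
            finally:
                # release only our own lock (non-atomic here; a Lua
                # script would make this check-and-delete safe)
                if r.get(lock_key) == token:
                    r.delete(lock_key)
        time.sleep(0.05)                       # others wait, then re-check cache
```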

Cache avalanche

If the caching system fails, all concurrent traffic goes directly to the database.

What is a cache avalanche?

If at some point a large portion of the cache expires at once, or the cache system itself fails, all concurrent traffic goes directly to the database. The call volume on the data storage layer spikes, and before long the database is overwhelmed by the heavy traffic. This kind of cascading service failure is called a cache avalanche.

We can use the following diagram to represent the phenomenon of cache avalanches.

The main causes of a cache avalanche are the mass expiration of cached entries or the failure of the cache service itself, after which the instantaneous surge of concurrent traffic overwhelms the database.

How to solve the problem of cache avalanche?

One of the most common solutions to the cache avalanche problem is to ensure the high availability of Redis. Deploying the Redis cache as a high-availability cluster (active-active across multiple regions, if necessary) can effectively prevent cache avalanches.
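As a sketch of the high-availability side, here is how a client might connect through Redis Sentinel with redis-py so reads and writes survive a master failover; the hostnames, ports, and the service name "mymaster" are placeholders:

```python
from redis.sentinel import Sentinel

# Sentinel addresses and the monitored service name are placeholders
sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)
master = sentinel.master_for("mymaster", socket_timeout=0.5)   # for writes
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # for reads

master.set("spike:stock:1001", 100)
print(replica.get("spike:stock:1001"))
```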

To absorb heavy concurrent traffic, we can also guard against cache avalanches with rate limiting and service degradation. For example, after cache entries become invalid, the number of threads that read from the database and write back to the cache can be controlled with locks or queues: for a given key, only one thread is allowed to query the data and write it to the cache, while the other threads wait. This effectively softens the impact of heavy concurrent traffic on the database.
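A minimal sketch of this throttling idea with in-process, per-key locks (a single-machine stand-in for the distributed version shown earlier); query_db() is again a hypothetical stub:

```python
import threading
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def query_db(key):
    return "value-from-db"   # hypothetical stand-in for a real database lookup

_guard = threading.Lock()
_key_locks = {}

def _lock_for(key):
    # create at most one lock object per key, even under concurrency
    with _guard:
        return _key_locks.setdefault(key, threading.Lock())

def get_throttled(key, ttl=3600):
    value = r.get(key)
    if value is not None:
        return value
    with _lock_for(key):        # one thread per key proceeds to the DB
        value = r.get(key)      # re-check: another thread may have filled it
        if value is None:
            value = query_db(key)
            r.setex(key, ttl, value)
        return value
```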

In addition, we can load data that is likely to be accessed heavily into the cache ahead of time through cache preheating. Before a burst of concurrent access arrives, we manually trigger the loading of the relevant data into the cache and assign the entries different expiration times, so that the invalidation moments are spread out as evenly as possible and the entries do not all expire at once.
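A minimal sketch of preheating, assuming a hypothetical load_hot_items() that returns the (key, value) pairs expected to be hot; a pipeline writes the whole batch in one round trip, with jittered TTLs:

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_hot_items():
    # hypothetical stand-in for a bulk query of the data expected to be hot
    return [(f"item:{i}", f"value-{i}") for i in range(1000)]

def preheat(base_ttl=3600, jitter=600):
    pipe = r.pipeline()
    for key, value in load_hot_items():
        # jitter the TTL so the preheated entries do not all expire at once
        pipe.setex(key, base_ttl + random.randint(0, jitter), value)
    pipe.execute()   # one round trip for the whole batch

preheat()
```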

