Many applications display only the latest or most popular records, but a paging navigation bar is still needed so that older records remain accessible. However, implementing paging efficiently with MySQL is a perennial headache. There is no single ready-made solution, but understanding what the database does under the hood helps when optimizing paging queries.
Let's take a look at a commonly used but poorly performing query pattern.
SELECT * FROM city
ORDER BY id DESC
LIMIT 0, 15;
This query takes 0.00 sec. So what's wrong with it? Actually, nothing is wrong with the statement or its parameters: it uses the primary key of the table below and reads only 15 records.
CREATE TABLE city (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  city varchar(128) NOT NULL,
  PRIMARY KEY (id)
);
The real problem arises when the offset (the page offset) is large, as in the following:
SELECT * FROM city
ORDER BY id DESC
LIMIT 100000, 15;
The above query takes 0.22 sec with 2M rows in the table. Looking at the execution plan with EXPLAIN shows that MySQL retrieved 100015 rows but returned only the last 15. A large paging offset inflates the working set: MySQL loads a large amount of data into memory that is never used. Even if we assume that most users only visit the first few pages, the small number of requests with large offsets still harms the whole system. Facebook is aware of this; rather than optimizing the database to handle more requests per second, Facebook focuses on reducing the variance of request response times.
For paging requests, another piece of information is also important: the total number of records. We can get it easily with the following query.
SELECT COUNT(*) FROM city;
However, the above SQL takes 9.28 sec with InnoDB as the storage engine. A tempting but incorrect optimization is SQL_CALC_FOUND_ROWS: it computes the number of matching records while executing the paging query itself, after which a simple SELECT FOUND_ROWS(); returns the total. But in most cases a shorter statement does not mean better performance. Unfortunately, this paging approach is used by many mainstream frameworks. Let's look at its query performance.
SELECT SQL_CALC_FOUND_ROWS *
FROM city
ORDER BY id DESC
LIMIT 100000, 15;
This statement takes 20.02 sec, about twice as long as running the paging query and the COUNT(*) separately. It turns out that using SQL_CALC_FOUND_ROWS for paging is a bad idea.
Let's look at how to optimize. The rest of the article is in two parts: first, how to get the total number of records efficiently; second, how to fetch the actual records.
Calculate the number of rows efficiently
If the engine is MyISAM, COUNT(*) is answered directly from the table's metadata, and the same is true for HEAP (memory) tables. If the engine is InnoDB, the situation is more complicated, because InnoDB does not store the exact row count of a table.
We can cache the row count and refresh it periodically through a daemon, or whenever some user action invalidates the cache, by re-executing the count:
SELECT COUNT(*) FROM city;
Now let's get to the most important part of this article: fetching the records to display on the page. As mentioned above, a large offset hurts performance, so we have to rewrite the query. To demonstrate, we create a new table "news", sorted by recency (the latest release at the top), and implement high-performance paging on it. For simplicity, we assume that the id of the latest news item is also the largest.
CREATE TABLE news (
  id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  title VARCHAR(128) NOT NULL
);
A more efficient approach is based on the id of the last news item the user has seen. The query for the next page is as follows; the last id displayed on the current page must be passed in.
SELECT *
FROM news WHERE id < $last_id ORDER BY id DESC LIMIT $perpage
The query for the previous page is similar, except that the first id of the current page is passed in and the sort order is reversed (the application then reverses the fetched rows back for display).
SELECT *
FROM news WHERE id > $last_id
ORDER BY id ASC
LIMIT $perpage
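The two queries above can be exercised end to end in a small, runnable sketch. SQLite stands in for MySQL here, but the SQL has the same shape; PERPAGE and the helper names are illustrative:

```python
import sqlite3

PERPAGE = 5

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
conn.executemany("INSERT INTO news (title) VALUES (?)",
                 [(f"story {i}",) for i in range(1, 21)])  # ids 1..20

def next_page(conn, last_id):
    # The page after (older than) last_id, newest first.
    return conn.execute(
        "SELECT id FROM news WHERE id < ? ORDER BY id DESC LIMIT ?",
        (last_id, PERPAGE)).fetchall()

def prev_page(conn, first_id):
    # The page before first_id: scan ascending, then reverse for display.
    rows = conn.execute(
        "SELECT id FROM news WHERE id > ? ORDER BY id ASC LIMIT ?",
        (first_id, PERPAGE)).fetchall()
    return rows[::-1]

page1 = conn.execute(
    "SELECT id FROM news ORDER BY id DESC LIMIT ?", (PERPAGE,)).fetchall()
page2 = next_page(conn, page1[-1][0])  # pass the last id shown on page 1
back = prev_page(conn, page2[0][0])    # pass the first id shown on page 2
print([r[0] for r in page1])  # [20, 19, 18, 17, 16]
print([r[0] for r in page2])  # [15, 14, 13, 12, 11]
print([r[0] for r in back])   # [20, 19, 18, 17, 16]
```

Because every query seeks directly to an indexed id, the cost is the same no matter how deep the user pages.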
The queries above are suitable for simple paging, where no page-number navigation is shown and only "previous page" and "next page" links are offered, for example in the footer of a blog. But real page-number navigation is harder to achieve this way, so let's look at another approach.
SELECT * FROM (
  SELECT id, ((@cnt:=@cnt + 1) + $perpage - 1) % $perpage cnt
  FROM news
  JOIN (SELECT @cnt:=0) T
  WHERE id < $last_id ORDER BY id DESC
  LIMIT $perpage * $buttons
) C WHERE cnt = 0;
Through the statement above, an id corresponding to its offset can be computed for each paging button. This method has another advantage: suppose a new article is published while a user is browsing; with plain offsets, every article shifts back one position, so a user who changes pages at that moment sees one article twice. If each button is bound to a fixed offset id, the problem disappears. Mark Callaghan published a similar approach that uses a composite index and two position variables, but the basic idea is the same.
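The effect of the user-variable query can be reproduced in a short sketch: number the rows below the current boundary id in descending order and keep the id of the first row of each page, which is exactly what the cnt = 0 filter selects. SQLite stands in for MySQL, and PERPAGE/BUTTONS are illustrative names:

```python
import sqlite3

PERPAGE, BUTTONS = 5, 3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
conn.executemany("INSERT INTO news (title) VALUES (?)",
                 [(f"story {i}",) for i in range(1, 31)])  # ids 1..30

def button_ids(conn, last_id):
    # Fetch the next PERPAGE * BUTTONS ids below last_id, newest first,
    # then keep every PERPAGE-th one: the first row of each page.
    rows = [r[0] for r in conn.execute(
        "SELECT id FROM news WHERE id < ? ORDER BY id DESC LIMIT ?",
        (last_id, PERPAGE * BUTTONS))]
    return rows[::PERPAGE]

print(button_ids(conn, 31))  # first ids of the next 3 pages: [30, 25, 20]
```

Each returned id can then be fed to the seek query from the previous section, so every button links to a fixed id rather than a shifting numeric offset.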
If records in the table are rarely deleted or modified, you can also store each record's page number in the table itself and build an index on that column. Then, whenever a new record is added, the following query must be executed to regenerate the page numbers.
SET @p:=0;
UPDATE news SET page=CEIL((@p:=@p + 1)/$perpage) ORDER BY id DESC;
Of course, you can also add a table dedicated to paging and have a background job maintain it.
SET @p:=0;
UPDATE pagination T
JOIN (
  SELECT id, CEIL((@p:=@p + 1)/$perpage) page
  FROM news
  ORDER BY id
) C ON C.id = T.id
SET T.page = C.page;
Now it is very simple to get the elements of any page:
SELECT *
FROM news A
JOIN pagination B ON A.id=B.id
WHERE B.page=$page;
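A runnable sketch of this precomputed-page-number idea follows, with SQLite standing in for MySQL (the rebuild is done in application code here rather than with user variables; PERPAGE and the helper names are illustrative):

```python
import sqlite3

PERPAGE = 5

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
conn.execute("CREATE TABLE pagination (id INTEGER PRIMARY KEY, page INTEGER)")
conn.execute("CREATE INDEX page_idx ON pagination(page)")
conn.executemany("INSERT INTO news (title) VALUES (?)",
                 [(f"story {i}",) for i in range(1, 21)])  # ids 1..20

def rebuild_pagination(conn):
    # Renumber all rows, newest first: the n-th row lands on page n//PERPAGE + 1.
    conn.execute("DELETE FROM pagination")
    ids = [r[0] for r in conn.execute("SELECT id FROM news ORDER BY id DESC")]
    conn.executemany("INSERT INTO pagination (id, page) VALUES (?, ?)",
                     [(i, n // PERPAGE + 1) for n, i in enumerate(ids)])

def fetch_page(conn, page):
    # The article's join: an indexed lookup on page, no LIMIT/OFFSET scan.
    return [r[0] for r in conn.execute(
        "SELECT A.id FROM news A JOIN pagination B ON A.id = B.id "
        "WHERE B.page = ? ORDER BY A.id DESC", (page,))]

rebuild_pagination(conn)
print(fetch_page(conn, 1))  # [20, 19, 18, 17, 16]
print(fetch_page(conn, 3))  # [10, 9, 8, 7, 6]
```

The trade-off is clear: reads become cheap indexed lookups, but every insert (or delete) requires renumbering, which is why the article restricts this technique to rarely modified tables.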
There is one more paging method, similar to the previous one, which is better suited when the data set is relatively small and no index is available, for example when paging through search results. On an ordinary server with 2M records, the query below takes about 2 sec. The idea is simple: create a temporary table that stores all the ids (this is also the most expensive step).
CREATE TEMPORARY TABLE _tmp (KEY SORT(random))
SELECT id, FLOOR(RAND() * 0x8000000) random
FROM city;

ALTER TABLE _tmp ADD OFFSET INT UNSIGNED PRIMARY KEY AUTO_INCREMENT, DROP INDEX SORT, ORDER BY random;
Next, you can execute the paging query as follows.
SELECT *
FROM _tmp
WHERE OFFSET >= $offset
ORDER BY OFFSET
LIMIT $perpage;
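The whole temporary-table trick can be sketched as follows, again with SQLite in place of MySQL. The position column is named pos here because OFFSET is a reserved word in SQLite; ORDER BY random() mimics the article's shuffle, and PERPAGE is illustrative:

```python
import sqlite3

PERPAGE = 5

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, city TEXT NOT NULL)")
conn.executemany("INSERT INTO city (city) VALUES (?)",
                 [(f"city-{i}",) for i in range(1, 21)])  # ids 1..20

# Materialize the result set once: pos is assigned 1, 2, 3, ... in the
# (here randomized) sort order, giving a dense, indexed position column.
conn.execute("CREATE TEMPORARY TABLE _tmp "
             "(pos INTEGER PRIMARY KEY AUTOINCREMENT, id INTEGER)")
conn.execute("INSERT INTO _tmp (id) SELECT id FROM city ORDER BY random()")

def fetch_page(conn, offset):
    # The article's final query: a cheap range scan on the position column.
    return [r[0] for r in conn.execute(
        "SELECT id FROM _tmp WHERE pos >= ? ORDER BY pos LIMIT ?",
        (offset, PERPAGE))]

page2 = fetch_page(conn, PERPAGE + 1)  # the frozen rows 6..10
print(page2)
```

Since the order is frozen in the temporary table, every page is stable for the lifetime of the table even though the underlying sort was random.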
Simply put, the optimization for paging boils down to one thing: when the amount of data is large, avoid scanning records you do not need.
This post is fairly long, so the translation is a bit rough; I will go over it again later. In my own tests some query times did not match the author's, but the original blog was written in 2011, so don't worry about the exact numbers; it's the idea that matters.