
30 commonly used methods for optimizing SQL statements in MySQL

1. To optimize queries, try to avoid full table scans. First, consider creating indexes on the columns used in where and order by clauses, for example as sketched below.
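As a minimal sketch, assuming a table t with a num column used for filtering and a createdate column used for sorting (both names are illustrative):
create index idx_t_num on t(num);
create index idx_t_createdate on t(createdate);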

2. Try to avoid using the != or <> operator in the where clause; otherwise the engine will abandon the index and perform a full table scan.
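If the excluded value allows it, the condition can sometimes be rewritten as two ranges that can each use the index. A sketch, assuming num is indexed and 10 is the value to exclude:
-- instead of: select id from t where num <> 10
select id from t where num < 10
union all
select id from t where num > 10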

3. Try to avoid null checks on a field in the where clause; otherwise the engine will abandon the index and perform a full table scan, such as:
select id from t where num is null
You can set a default value of 0 on num to ensure that the num column never contains null, and then query like this:
select id from t where num=0

4. Try to avoid using or to connect conditions in the where clause; otherwise the engine will abandon the index and perform a full table scan, such as:
select id from t where num=10 or num=20
You can query like this instead:
select id from t where num=10
union all
select id from t where num=20

5. The following query will also cause a full table scan, because the leading wildcard prevents the index from being used:
select id from t where name like '%abc%'
To improve efficiency, consider full-text search.
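As a sketch of the full-text alternative in MySQL (the index name ft_name is illustrative), keeping in mind that full-text search matches whole words rather than arbitrary substrings:
alter table t add fulltext index ft_name (name);
select id from t where match(name) against('abc');
-- a prefix pattern such as like 'abc%' can still use an ordinary index on name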

6. in and not in should also be used with caution; otherwise they may lead to a full table scan, such as:
select id from t where num in(1,2,3)
For continuous values, use between instead of in:
select id from t where num between 1 and 3

7. Using a local variable in the where clause will also cause a full table scan. Because SQL resolves local variables only at runtime, the optimizer cannot defer the choice of access plan until runtime; it must choose at compile time. However, when the access plan is built at compile time, the value of the variable is still unknown, so it cannot be used as an input for index selection. For example, the following statement will perform a full table scan:
select id from t where num=@num
You can force the query to use an index instead:
select id from t with(index(index_name)) where num=@num
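The with(index(...)) hint above is SQL Server-style syntax; in MySQL the equivalent is an index hint such as FORCE INDEX. A sketch, assuming an index named idx_num on t(num):
select id from t force index (idx_num) where num=@num;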

8. Try to avoid performing expression operations on fields in the where clause, which will cause the engine to abandon the index and perform a full table scan. Such as:
select id from t where num/2=100
should be changed to:
select id from t where num=100*2

9. Try to avoid performing function calls on fields in the where clause, which will cause the engine to abandon the index and perform a full table scan. Such as:
select id from t where substring(name,1,3)='abc' -- ids whose name starts with abc
select id from t where datediff(day,createdate,'2005-11-30')=0 -- ids created on '2005-11-30'
should be changed to:
select id from t where name like 'abc%'
select id from t where createdate>='2005-11-30' and createdate<'2005-12-1'

10. Do not perform functions, arithmetic operations or other expressions on the left side of the "=" in the where clause; otherwise the system may not be able to use the index correctly.
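In other words, keep the indexed column bare and move the arithmetic to the constant side. A sketch with a hypothetical indexed amount column:
-- the index on amount cannot be used:
select id from t where amount*2 > 100
-- rewritten so the column stays bare:
select id from t where amount > 100/2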

11. When using an indexed field as a condition, if the index is a composite index, the first field of the index must appear in the condition to ensure that the system uses the index; otherwise the index will not be used. The order of the condition fields should also match the index order as much as possible.
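A sketch of this leftmost-prefix rule, assuming a hypothetical composite index on (col_a, col_b):
create index idx_a_b on t(col_a, col_b);
select id from t where col_a=1 and col_b=2  -- can use idx_a_b
select id from t where col_a=1              -- can use idx_a_b (leftmost prefix)
select id from t where col_b=2              -- cannot use idx_a_b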

12. Do not write meaningless queries, for example to generate an empty table structure:
select col1,col2 into #t from t where 1=0
This type of code returns no result set but still consumes system resources; it should be changed to:
create table #t(...)

13. In many cases, using exists instead of in is a good choice:
select num from a where num in(select num from b)
Replace it with the following statement:
select num from a where exists(select 1 from b where num=a.num)

14. Not all indexes are effective for queries. SQL is optimized based on the data in the table; when a large proportion of the values in an indexed column are duplicated, the query may not use the index at all. For example, if a table has a sex field whose values are almost evenly split between male and female, building an index on sex will not help query efficiency.
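In MySQL you can get a rough idea of how selective an index is from the Cardinality column reported by SHOW INDEX; a cardinality far below the row count suggests the index is of little use. A sketch:
show index from t;  -- compare the Cardinality column against select count(*) from t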

15. More indexes are not always better. Although indexes improve the efficiency of the corresponding select, they reduce the efficiency of insert and update, because the indexes may have to be rebuilt during insert or update. How to build indexes therefore requires careful consideration of the specific circumstances. A table should preferably have no more than 6 indexes; if there are more, consider whether it is really necessary to index columns that are rarely used.

16. Avoid updating clustered index data columns as much as possible, because the order of the clustered index columns is the physical storage order of the table records. Once a column value changes, the order of the entire table's records has to be adjusted, which consumes considerable resources. If the application needs to update clustered index columns frequently, consider whether the index should be built as a clustered index at all.

17. Use numeric fields as much as possible. If a field contains only numeric information, try not to design it as a character type; doing so reduces the performance of queries and joins and increases storage overhead. The reason is that the engine compares strings character by character when processing queries and joins, whereas a numeric type needs only a single comparison.
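A small sketch of the difference, with hypothetical table and column names:
-- numeric key: one comparison per row, compact storage
create table orders_a (id int primary key, customer_id int);
-- character key: compared character by character, larger storage
create table orders_b (id int primary key, customer_id varchar(20));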

18. Use varchar/nvarchar instead of char/nchar as much as possible. First, variable-length fields use less storage space; second, for queries, searching within a smaller field is obviously more efficient.

19. Do not use select * from t anywhere; replace "*" with a specific list of fields, and do not return any fields that will not be used.

20. Try to use table variables instead of temporary tables. If a table variable contains a lot of data, note that its indexes are very limited (only the primary key index).
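Table variables, like the #t temporary tables used above, are SQL Server-style constructs; a sketch in that style, with illustrative column names:
declare @t table (id int primary key, num int)
insert into @t (id, num) select id, num from t where num < 100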

21. Avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.

22. Temporary tables are not unusable; using them appropriately can make certain routines more efficient, for example when you need to repeatedly reference a large table or a data set from a commonly used table. However, for one-off operations, it is better to use an export table.

23. When creating a new temporary table, if a large amount of data is inserted at once, you can use select into instead of create table to avoid generating a large amount of log and to increase speed; if the amount of data is small, then in order to ease the load on system table resources, create table first and then insert.

24. If temporary tables are used, all temporary tables must be explicitly deleted at the end of the stored procedure: first truncate table, then drop table, to avoid long-term locking of system tables.
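A sketch of the lifecycle described in points 23 and 24, in the same SQL Server style as the article's #t examples (table and column names are illustrative):
-- large data set: let select into create and fill the table in one step
select id, num into #t from t where num < 1000
-- small data set: create table #t (id int, num int) first, then insert into it
-- clean up explicitly at the end of the procedure
truncate table #t
drop table #t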

25. Try to avoid using cursors, because cursors are inefficient. If a cursor operates on more than 10,000 rows of data, you should consider rewriting the logic.

26. Before using a cursor-based or temporary-table method, look for a set-based solution to the problem first; set-based methods are usually more efficient.

27. Like temporary tables, cursors are not unusable. Using FAST_FORWARD cursors on small data sets is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that compute "totals" in the result set usually execute faster than doing the same with a cursor. If development time permits, try both the cursor-based and the set-based approach and see which works better.
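As a sketch of the trade-off (SQL Server-style syntax, hypothetical table and column names), the set-based form is usually preferred, with a FAST_FORWARD cursor as the row-by-row fallback:
-- set-based total:
select customer_id, sum(amount) as total from orders group by customer_id
-- row-by-row fallback:
declare c cursor fast_forward for select customer_id, amount from orders
-- open c, fetch next from c in a loop, then close c and deallocate c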

28. Set SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after each statement executed in a stored procedure or trigger.
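A minimal sketch of a stored procedure following this pattern (SQL Server-style syntax, the procedure name is illustrative):
create procedure usp_get_ids
as
begin
    set nocount on
    select id from t where num=10
    set nocount off
end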

29. Try to avoid returning large amounts of data to the client. If the data volume is too large, consider whether the underlying requirement is reasonable.

30. Try to avoid large transaction operations in order to improve system concurrency.
