
MySQL backup and recovery

This article discusses MySQL's backup and recovery mechanisms and how to maintain data tables, covering the two main table types: MyISAM and InnoDB. The MySQL version discussed in this article is 5.0.22.

This article introduces MySQL's own free backup tools; of course, you can also choose more convenient third-party tools to back up and restore MySQL databases.

Currently, the free backup tools that ship with MySQL are mysqldump and mysqlhotcopy. You can also back up with SQL syntax (BACKUP TABLE or SELECT INTO OUTFILE), back up the binary log (binlog), or directly copy the data files and the related configuration files. MyISAM tables are stored as individual files, so they are relatively easy to back up, and all of the methods above work for them. InnoDB tables are all stored in the same data file ibdata1 (possibly several files, or separate tablespace files), which makes them comparatively hard to back up. The free options are copying the data files, backing up the binlog, or using mysqldump.

1. mysqldump

1.1 Backup

mysqldump is a SQL-level backup mechanism: it exports data tables into SQL script files, which is relatively suitable when upgrading between different MySQL versions. It is also the most commonly used backup method.

Now let's talk about some main parameters of mysqldump:

--compatible=name

It tells mysqldump which database system or which older version of the MySQL server the exported data should be compatible with. Possible values are ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_tables_options, no_field_options, and so on. To use several values, separate them with commas. Of course, full compatibility is not guaranteed; it only tries to be as compatible as possible.

--complete-insert,-c

The exported data uses complete INSERT statements that include the field names. When this is combined with extended INSERT syntax (see --extended-insert, which is on by default), many values are written into a single statement; that improves insert efficiency, but the statement may then exceed max_allowed_packet and cause the import to fail. So use this with caution; at the very least, I don't recommend it.

--default-character-set=charset

Specifies which character set to use when exporting data. If a data table does not use the default latin1 character set, this option must be specified when exporting; otherwise you will get garbled characters after re-importing the data.

--disable-keys

Tells mysqldump to add /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */; statements before and after the INSERT statements. This greatly speeds up the import, because the indexes are rebuilt only after all the data has been inserted. This option applies only to MyISAM tables.

--extended-insert=true|false

By default, mysqldump enables --extended-insert mode (multiple rows per INSERT statement), so if you don't want that behavior, use this option and set its value to false.

--hex-blob

Exports binary string fields in hexadecimal format. This option must be used if there is binary data. The affected field types are BINARY, VARBINARY, and BLOB.

--lock-all-tables,-x

Before the export starts, this requests a lock on all tables in all databases to guarantee data consistency. This is a global read lock. This option is automatically turned off by the --single-transaction and --lock-tables options.

--lock-tables

Similar to --lock-all-tables, but it locks only the tables currently being exported rather than all tables in the server at once. This option applies only to MyISAM tables; for InnoDB tables, use the --single-transaction option instead.

--no-create-info,-t

Exports only the data, without CREATE TABLE statements.

--no-data,-d

Do not export any data, only the database table structure.

--opt

This is just a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at once. It lets mysqldump export data quickly and produces a dump that can also be imported back quickly. This option is enabled by default and can be disabled with --skip-opt. Note that if you run mysqldump without the --quick or --opt option, the entire result set is buffered in memory, which can cause problems when exporting large databases.

--quick,-q

This option is useful when exporting large tables: it forces mysqldump to write rows fetched from the server directly to the output instead of retrieving all rows and buffering them in memory first.

--routines,-R

Exports stored procedures and user-defined functions.

--single-transaction

This option issues a BEGIN statement before exporting the data. BEGIN does not block any application and guarantees a consistent snapshot of the database during the export. It applies only to transactional tables such as InnoDB and BDB.

This option and the --lock-tables option are mutually exclusive, because LOCK TABLES implicitly commits any pending transaction.

When exporting large tables, combine it with the --quick option.

--triggers

Also exports triggers. This option is enabled by default; use --skip-triggers to disable it.

Please refer to the manual for details on the other parameters. I usually use the following command to back up MyISAM tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
  --default-character-set=utf8 --opt --extended-insert=false \
  --triggers -R --hex-blob -x db_name > db_name.sql

Use the following command to back up InnoDB tables:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
  --default-character-set=utf8 --opt --extended-insert=false \
  --triggers -R --hex-blob --single-transaction db_name > db_name.sql

In addition, if you want an online backup, you can also use the --master-data parameter, as follows:

/usr/local/mysql/bin/mysqldump -uyejr -pyejr \
  --default-character-set=utf8 --opt --master-data=1 \
  --single-transaction --flush-logs db_name > db_name.sql

It only requests the table lock at the very beginning, then flushes the binlog, and writes a CHANGE MASTER statement into the exported file to record the binlog position at the time of the backup. If you want to restore this file onto a slave, this is the way to do it.
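For illustration, the top of a dump taken with --master-data=1 contains a line roughly like the following; the binlog file name and position here are made-up example values, not output from a real server:

```sql
-- made-up example of the CHANGE MASTER line that mysqldump writes
-- into the dump; your file name and position will differ
CHANGE MASTER TO MASTER_LOG_FILE='binlog.000003', MASTER_LOG_POS=98;
```

When the dump is replayed on a slave, this line tells it where in the master's binlog to start replicating from.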

Note: whether --extended-insert should be enabled or disabled depends on your situation; it has a considerable impact on the speed of data recovery.

1.2 Restore

The file backed up with mysqldump is a SQL script that can be imported directly. There are two ways to import the data.

Use mysql client directly

Example:

/usr/local/mysql/bin/mysql -uyejr -pyejr db_name < db_name.sql

Use SOURCE syntax

In fact, this is not standard SQL syntax but a command provided by the mysql client, for example:

SOURCE /tmp/db_name.sql;

You need to specify the absolute path of the file here, and it must be readable by the user mysqld runs as (such as nobody).

2. mysqlhotcopy

2.1 Backup

mysqlhotcopy is a Perl program originally written by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to back up a database quickly. It is the fastest way to back up a database or a single table, but it can only run on the machine where the database files (table definition, data, and index files) reside. mysqlhotcopy can only back up MyISAM tables, and it only runs on Unix-like and NetWare systems.

mysqlhotcopy supports copying multiple databases at once, and also supports regular expressions. Here are a few examples:

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name /tmp
(copies the database directory db_name to /tmp)

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name_1 ... db_name_n /tmp

root# /usr/local/mysql/bin/mysqlhotcopy -h=localhost -u=yejr -p=yejr db_name./regex/ /tmp

For more detailed usage, please refer to the manual, or call the following command to view the help of mysqlhotcopy:

perldoc /usr/local/mysql/bin/mysqlhotcopy

Note that to use mysqlhotcopy you must have the SELECT and RELOAD privileges (the latter to execute FLUSH TABLES), and you must have permission to read the datadir/db_name directory.

2.2 Restore

mysqlhotcopy backs up the whole database directory, so to restore you can copy it directly into the datadir specified by mysqld (here /usr/local/mysql/database/). Pay attention to permissions when doing so, as in the following example:

root# cp -rf db_name /usr/local/mysql/database/

root# chown -R nobody:nobody /usr/local/mysql/database/db_name
(change the owner of the db_name directory to the user mysqld runs as)

3. SQL syntax backup

3.1 Backup

BACKUP TABLE works much like mysqlhotcopy: it locks the table and then copies the data files. It can do an online backup, but the result is not ideal, so it is not recommended. It copies only the table structure file and the data file, not the index files, so recovery is slow.

Example:

BACKUP TABLE tbl_name TO '/tmp/db_name/';

Note that you must have the FILE privilege to execute this statement, the directory /tmp/db_name/ must be writable by the mysqld user, and the exported files must not overwrite existing files (to avoid security problems).

SELECT INTO OUTFILE exports the data into an ordinary text file, and you can customize the field separators to make the data easier to process.

Example:

SELECT * INTO OUTFILE '/tmp/db_name/tbl_name.txt' FROM tbl_name;

Note that you must have the FILE privilege to execute this statement, the file /tmp/db_name/tbl_name.txt must be writable by the mysqld user, and the exported file must not overwrite an existing file (to avoid security problems).

3.2 Recovery

The files backed up by the BACKUP TABLE method can be restored by running the RESTORE TABLE statement.

Example:

RESTORE TABLE tbl_name FROM '/tmp/db_name/';

Permission requirements are similar to those described above.

For files backed up by the Select INTO OUTFILE method, you can run the LOAD DATA INFILE statement to restore the data table.

Example:

LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' INTO TABLE tbl_name;

Permission requirements are similar to those described above. Before importing, the data table must already exist. If you are worried about duplicate data, you can add the REPLACE keyword to replace existing records, or the IGNORE keyword to skip them.
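For example, assuming tbl_name has a primary or unique key by which duplicates are detected, the two variants look like this:

```sql
-- overwrite rows that collide on a unique key
LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' REPLACE INTO TABLE tbl_name;

-- or keep the existing rows and skip the incoming duplicates
-- LOAD DATA INFILE '/tmp/db_name/tbl_name.txt' IGNORE INTO TABLE tbl_name;
```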

4. Enable the binary log (binlog)

The binlog approach is comparatively more flexible and low-maintenance, and it also supports incremental backup.

Enabling the binlog requires restarting mysqld. First stop mysqld, open my.cnf, and add the following lines:

server-id=1

log-bin=binlog

log-bin-index=binlog.index

Then start mysqld and that's it. As it runs, binlog.000001 and binlog.index will be generated: the former records all data update operations, and the latter is the index of all binlog files. Do not delete these casually. See the manual for more information on the binlog.

When a backup is needed, first execute a SQL statement to make mysqld stop writing to the current binlog; then the file can be backed up directly. This achieves incremental backup:

FLUSH LOGS;

If you are backing up a slave server in a replication setup, you should also back up the master.info and relay-log.info files.
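The incremental step above can be sketched as a small script: rotate the log, then copy every finished binlog while skipping the newest one (the file mysqld is still writing to). This is a hedged sketch; the directory names and the demo setup are assumptions so that it can run standalone, and the mysql call is commented out because it needs a live server.

```shell
#!/bin/sh
# Sketch of an incremental binlog backup step.
# DATADIR/BACKUPDIR are placeholder paths for illustration.
DATADIR=${DATADIR:-demo_datadir}
BACKUPDIR=${BACKUPDIR:-demo_backup}
mkdir -p "$DATADIR" "$BACKUPDIR"

# demo files standing in for real binlogs; in real use, skip this block
touch "$DATADIR/binlog.000001" "$DATADIR/binlog.000002" "$DATADIR/binlog.000003"

# real use: rotate the log first (requires a running server):
#   mysql -uyejr -pyejr -e 'FLUSH LOGS'

# the highest-numbered file is the one now being written; copy the rest
CURRENT=$(ls "$DATADIR"/binlog.* | grep -v '\.index$' | sort | tail -n 1)
for f in "$DATADIR"/binlog.*; do
  case "$f" in *.index) continue ;; esac
  [ "$f" = "$CURRENT" ] && continue
  cp "$f" "$BACKUPDIR/"
done
ls "$BACKUPDIR"
```

Running this demo copies binlog.000001 and binlog.000002 into the backup directory and leaves binlog.000003 alone.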

A backed-up binlog file can be viewed with mysqlbinlog, a tool provided by MySQL, for example:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001

This tool lets you display all the SQL statements for a specified database and can also limit the time range, which is quite convenient; see the manual for details.

When restoring, it can be done with a statement similar to the following:

/usr/local/mysql/bin/mysqlbinlog /tmp/binlog.000001 | mysql -uyejr -pyejr db_name

This executes the SQL statements output by mysqlbinlog directly as input to mysql.

If you have a spare machine, you may as well use this method for backups. Since a machine acting as a slave does not need high performance, the cost is low; you get incremental backup at low cost and can offload part of the query load as well, so why not?

5. Back up data files directly

Compared with the methods above, backing up the data files directly is the most direct, fast, and convenient, but its disadvantage is that incremental backup is basically impossible. To ensure data consistency, execute the following SQL statement before backing up the files:

FLUSH TABLES WITH READ LOCK;

That is, flush the data in memory to disk and lock the tables at the same time, so that no new data is written while the files are being copied. Data backed up this way is also very simple to restore: just copy the files back into the original database directory.
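As a hedged sketch, the whole procedure might look like the script below. The lock must be held by a session that stays open while the files are copied, so the mysql calls are commented out (they need a live server); the demo datadir and file are assumptions so the script can run standalone.

```shell
#!/bin/sh
# Sketch of a raw-file backup of one database directory.
DATADIR=${DATADIR:-demo_datadir2}   # placeholder for the real MySQL datadir
mkdir -p "$DATADIR/db_name"
echo demo > "$DATADIR/db_name/tbl_name.MYD"   # stand-in for a real data file

# real use (keep the locking session open until the copy finishes):
#   mysql -uyejr -pyejr -e 'FLUSH TABLES WITH READ LOCK'
tar czf db_name_files.tar.gz -C "$DATADIR" db_name
#   mysql -uyejr -pyejr -e 'UNLOCK TABLES'

tar tzf db_name_files.tar.gz
```

Restoring is the reverse: extract the archive back into the datadir and fix the ownership, as in the mysqlhotcopy example above.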

Note that for InnoDB tables you also need to back up the log files, i.e. the ib_logfile* files, because when an InnoDB table is damaged these log files may be needed for recovery.

6. Backup strategy

For a system with a medium business volume, the backup strategy can be: a full backup first, then an incremental backup every day plus a full backup every week, and so on. For important and busy systems, you may need a full backup every day and an incremental backup every hour, or even more often. To achieve online and incremental backups without affecting the online service, the best way is to use the master-slave replication mechanism and make the backups on the slave.
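Such a schedule could be wired up with cron on the backup slave. This is only a sketch: full_backup.sh and incr_backup.sh are hypothetical placeholder scripts (for example, a mysqldump wrapper and a binlog-copy script), not tools that ship with MySQL.

```shell
# hypothetical crontab on the backup slave
0 3 * * 0   /usr/local/bin/full_backup.sh   # full backup every Sunday at 03:00
0 3 * * 1-6 /usr/local/bin/incr_backup.sh   # incremental backup on the other days
```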

7. Data maintenance and disaster recovery

As a DBA (I am not one yet, hehe), one of the most important jobs is to ensure that the data tables can be used safely, stably, and quickly. So your tables need regular maintenance. The following SQL statements are useful:

CHECK TABLE or REPAIR TABLE, to check or repair MyISAM tables

OPTIMIZE TABLE, optimizing MyISAM tables

ANALYZE TABLE, analyzing MyISAM tables

Of course, all of the above commands can be done through the tool myisamchk, which will not be described in detail here.

InnoDB tables can be defragmented, which also speeds up index access, by executing the following statement:

ALTER TABLE tbl_name ENGINE=InnoDB;

This is in effect a null operation: on the surface it does nothing, but it actually rebuilds the table and thereby defragments it.

Broken MyISAM tables can usually be handled with the recovery methods mentioned above. If an index is broken, the myisamchk tool can rebuild it. For InnoDB tables it is not so straightforward, because InnoDB stores all its tables in one tablespace. However, InnoDB has a checkpoint mechanism called fuzzy checkpointing: as long as the log files are intact, errors can be repaired from them. You can add the following parameter to my.cnf to make mysqld check the log files at startup:

innodb_force_recovery=4

See the manual for information on this parameter.

8. Summary

To do a good job of data backup, you must have an appropriate backup strategy. That is only a small part of what a DBA does. Everything is hard at the beginning, so start now!
