If you have a very big database containing big tables to back up, you might run into the same issue I did: the web server appears to be dead while MySQL is dumping (mysqldump), and visitors get no response from the database for a long, long time.
[root@test ~]# mysqldump -u root -p big_database | gzip > /path-to-dumps/20151108.gz
Luckily, the official documentation has a cure that can throttle the mysqldump operation and leave room for visitors to get web pages: MySQL :: MySQL 5.7 Reference Manual :: 4.5.4 mysqldump — A Database Backup Program.
mysqldump can retrieve and dump table contents row by row, or it can retrieve the entire content from a table and buffer it in memory before dumping it. Buffering in memory can be a problem if you are dumping large tables.
To dump large tables, combine the --single-transaction option with the --quick option.
In other words, the --quick (or -q) option makes mysqldump dump tables row by row instead of buffering each whole table in memory; it is enabled by default because of --opt. You still need to combine it with --single-transaction.
Please note that these options might slow down the dumping process, but they allow your visitors to keep getting responses from the database while the dump runs.
Since --single-transaction is mutually exclusive with --lock-tables, you should add --skip-lock-tables to disable the table locks.
Let's see the options we added to the mysqldump command line.
[root@test ~]# mysqldump -u root -p -q --single-transaction --skip-lock-tables big_database | gzip > /path-to-dumps/20151108.gz
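If you run this from cron every night, the hard-coded 20151108.gz filename can be generated from the current date instead. A minimal sketch of building the target path (DUMP_DIR and the database name are placeholders matching the examples above; for cron you would also want credentials in ~/.my.cnf so -p does not prompt for a password):

```shell
# Build a date-stamped dump filename like /path-to-dumps/20151108.gz
DUMP_DIR=/path-to-dumps        # placeholder path, adjust for your server
STAMP=$(date +%Y%m%d)          # e.g. 20151108
TARGET="${DUMP_DIR}/${STAMP}.gz"
echo "$TARGET"
# The actual dump line would then be:
#   mysqldump -u root -p -q --single-transaction --skip-lock-tables big_database | gzip > "$TARGET"
```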
In practice, we pipe the output through gzip to reduce disk I/O and disk space, at the cost of more CPU. But I think the overall performance gain is worth it when dumping big databases. For more on this practice, see this post: How to Resolve Very Big Dump File of MySQL.
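The gzip stage of the pipeline is lossless, so the compressed dump restores byte for byte (restoring is just gunzip -c dump.gz | mysql). A quick round-trip check with a throwaway SQL snippet, using temporary file names invented for this example:

```shell
# Push a small SQL snippet through gzip and back, the same way the dump
# travels through the pipe; cmp confirms nothing was lost.
printf 'CREATE TABLE t (id INT);\n' > /tmp/sample.sql
gzip -c /tmp/sample.sql > /tmp/sample.sql.gz
gunzip -c /tmp/sample.sql.gz > /tmp/roundtrip.sql
cmp -s /tmp/sample.sql /tmp/roundtrip.sql && echo OK
```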