The util.dumpInstance() utility will back up an entire MySQL instance, util.dumpSchemas() lets you pick which schemas (or databases) to back up, and util.loadDump() is the restoration tool.
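In MySQL Shell's JavaScript mode, the three calls look roughly like this (the paths below are just placeholders for wherever you want the dump files to live):

JS > util.dumpInstance("/backups/instance")
JS > util.dumpSchemas(["demo"], "/backups/demo")
JS > util.loadDump("/backups/demo")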
Backing Up Instances
JS > util.dumpInstance("/tmp/instance", { "showProgress": true })
<some output omitted for brevity>
1 thds dumping - 100% (52.82K rows / ~52.81K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Duration: 00:00:00s
Schemas dumped: 4
Tables dumped: 26
Uncompressed data size: 3.36 MB
Compressed data size: 601.45 KB
Compression ratio: 5.6
Rows written: 52819
Bytes written: 601.45 KB
Average uncompressed throughput: 3.36 MB/s
Average compressed throughput: 601.45 KB/s
The above was performed on an old laptop with a spinning disk, limited RAM, and running the latest Fedora. I have used these utilities on much bigger instances and have found the performance to be outstanding.
This utility and the others featured in this blog have a lot of options, and I suggest setting a 'pager' to read through the online help with \h util.dumpInstance.
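For example, to page through the help with less and then preview a dump without writing anything (threads and dryRun are just two of the many options):

JS > \pager less
JS > \h util.dumpInstance
JS > util.dumpInstance("/tmp/instance", { "threads": 8, "dryRun": true })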
Schema Backup
I created a quick test database named demo with a table named x (don't you love the creativity here) filled with about one million records of four INTEGER columns plus an INTEGER primary key. And remember, the output below is from a ten-plus-year-old laptop.
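If you want to reproduce something similar, a table like that can be built along these lines (the column names and the row-doubling trick are just one way to do it, not necessarily how I filled mine):

CREATE DATABASE demo;
CREATE TABLE demo.x (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    a INT, b INT, c INT, d INT
);
INSERT INTO demo.x (a, b, c, d) VALUES (1, 2, 3, 4);
-- Each pass doubles the row count; repeat until you are near one million rows
INSERT INTO demo.x (a, b, c, d)
    SELECT FLOOR(RAND()*100000), FLOOR(RAND()*100000),
           FLOOR(RAND()*100000), FLOOR(RAND()*100000)
    FROM demo.x;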
JS > util.dumpSchemas(['demo'],"/tmp/demo")
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `demo`.`x`
Writing DDL for schema `demo`
Writing DDL for table `demo`.`x`
Data dump for table `demo`.`x` will be chunked using column `id`
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `demo`.`x` will be written to 1 file
1 thds dumping - 100% (1000.00K rows / ~997.97K rows), 577.69K rows/s, 19.89 MB/s uncompressed, 8.58 MB/s compressed
Duration: 00:00:01s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 34.44 MB
Compressed data size: 14.85 MB
Compression ratio: 2.3
Rows written: 999999
Bytes written: 14.85 MB
Average uncompressed throughput: 20.11 MB/s
Average compressed throughput: 8.67 MB/s
That is impressive performance. And yes, you can back up multiple schemas at one time by putting their names in the array passed as the first argument, as shown below.
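Something like this would dump three schemas in one pass (world and sakila here are stand-ins for whatever schemas you actually have):

JS > util.dumpSchemas(["demo", "world", "sakila"], "/tmp/multi")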
And Restoring
The best backups in the world are useless unless you can restore from them. I did a quick rename of table x to y and then restored the data from the dump.
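The rename is just a one-liner along the lines of:

RENAME TABLE demo.x TO demo.y;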
Be sure to have local_infile set to ON on the target server before proceeding.
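Since local_infile is a server-side system variable, a quick check and, if needed, one statement from a privileged account takes care of it:

SHOW GLOBAL VARIABLES LIKE 'local_infile';
SET GLOBAL local_infile = ON;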
JS > util.loadDump("/tmp/demo")
Loading DDL and Data from '/tmp/demo' using 4 threads.
Target is MySQL 8.0.21. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL script for schema `demo`
Executing DDL script for `demo`.`x`
[Worker003] demo@x@@0.tsv.zst: Records: 999999 Deleted: 0 Skipped: 0 Warnings: 0
Executing common postamble SQL
1 chunks (1000.00K rows, 34.44 MB) for 1 tables in 1 schemas were loaded in 20 sec (avg throughput 1.72 MB/s)
0 warnings were reported during the load.
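Like the dump utilities, util.loadDump() accepts an options dictionary as a second argument; threads and dryRun are available here too, and \h util.loadDump lists the rest. For example, a dry run of the same load might look like this:

JS > util.loadDump("/tmp/demo", { "threads": 8, "dryRun": true })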