Thursday Apr 03, 2014

Data Encryption with MySQL Enterprise Backup 3.10

Introduction

MySQL Enterprise Backup (MEB) 3.10 introduces support for encrypted backups by allowing backup images, or single-file backups, to be encrypted. However, backups stored as multiple files in a backup directory cannot be encrypted.

Any MEB command that produces a backup image can optionally be requested to encrypt it. The encrypted backup image can be stored in a file or on tape in the same way as an unencrypted backup image. Similarly, any MEB command that reads data from a backup image also accepts an encrypted backup image. This means that encrypted backups can be used in all the same situations as unencrypted backup images.

MEB encrypts data with the Advanced Encryption Standard (AES) algorithm in CBC mode with 256-bit keys. AES is a symmetric block cipher, which means that the same key is used both for encryption and decryption. The AES cipher has been adopted by the U.S. government and is now used worldwide.

A new format for the encrypted backup image is introduced. This is a proprietary format developed by Oracle and it allows efficient encryption and decryption in parallel.

Encryption keys

Encryption keys are strings of 256 bits (or 32 bytes) that are represented as strings of 64 hexadecimal digits. The simplest way to create an encryption key for MEB is to type 64 randomly chosen hexadecimal digits and save them in a file. Another method is to use a shell tool to generate a string of random bytes and encode it as hexadecimal digits. For example, one could use the OpenSSL shell command to generate a key as follows:

$ openssl rand -hex 32
8f3ca9b850ec6366f4a54feba99f2dc42fa79577158911fe8cd641ffff1e63d6

This command uses random data generated on the host for creating the key. Whichever method is used for the creation of the key, the essential point is that the resulting key consists of random bits.
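
If OpenSSL is not available, a key can also be produced straight from the kernel's random number generator. The following is only a sketch: it assumes a Linux-like host with the xxd utility installed, and the /backups/key path is just an example.

$ head -c 32 /dev/urandom | xxd -p -c 32 > /backups/key

The -c 32 option makes xxd print all 32 random bytes as a single line of 64 hexadecimal digits, which is exactly the key format MEB expects.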

The security of MEB encryption is based on two rules that apply not only to MEB but to all encryption schemes using symmetric block ciphers:

Rule 1: The encryption keys must be random.

Rule 2: The encryption keys must remain secret at all times.

When these rules are followed, it is very difficult for unauthorized persons to get access to the secure data.

Encryption keys can be specified either on the command line with the

--key=KEY

option, where KEY is a string of 64 hexadecimal digits, or in a file with the

--key-file=FILENAME

option, where FILENAME is the name of a file that contains a string of 64 hexadecimal digits.

It is important to note that specifying the key on the command line with the --key option is generally not secure, because the command line is usually visible to other users on the system and may even be saved in system log files that are accessible to unauthorized persons. Therefore, the --key-file option should be preferred over the --key option in all production environments, and the use of the --key option should be limited to testing and software development environments.
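
In practice it also pays to make sure that the key file itself cannot be read by other users on the host. A minimal sketch, assuming a Unix-like system and the example path /backups/key used elsewhere in this post:

$ umask 077                          # files created below are readable by the owner only
$ openssl rand -hex 32 > /backups/key
$ chmod 600 /backups/key             # belt and braces: owner read/write only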

Using encryption

Encryption is very simple to use. Any MEB command that produces a backup image can be requested to encrypt it by specifying the --encrypt option together with either the --key or the --key-file option. The following example shows how to make a compressed backup and store it as an encrypted backup image.


$ mysqlbackup --encrypt --key-file=/backups/key --compress --backup-dir=/full-backup  --backup-image=/backups/image.enc  backup-to-image

MySQL Enterprise Backup version 3.10.0 Linux-3.2.0-58-generic-i686 [2014/03/04]

Copyright (c) 2003, 2014, Oracle and/or its affiliates. All Rights Reserved.

 mysqlbackup: INFO: Starting with following command line ...

 /home/pekka/bzr/meb-3.10/src/build/mysqlbackup --encrypt

        --key-file=/backups/key --compress --backup-dir=/full-backup

        --backup-image=/backups/image.enc backup-to-image

 mysqlbackup: INFO:

IMPORTANT: Please check that mysqlbackup run completes successfully.

           At the end of a successful 'backup-to-image' run mysqlbackup

           prints "mysqlbackup completed OK!".

140306 21:40:33 mysqlbackup: INFO: MEB logfile created at /full-backup/meta/MEB_2014-03-06.21-40-33_compress_img_backup.log

 mysqlbackup: WARNING: innodb_checksum_algorithm could not be obtained from config or server variable and so mysqlbackup uses the default checksum algorithm 'innodb'.

--------------------------------------------------------------------

                       Server Repository Options:

--------------------------------------------------------------------

...

...

...

Backup Image Path = /backups/image.enc

 mysqlbackup: INFO: Unique generated backup id for this is 13941348344547471

 mysqlbackup: INFO: Uses LZ4 r109 for data compression.

 mysqlbackup: INFO: Creating 18 buffers each of size 16794070.

140306 21:40:36 mysqlbackup: INFO: Compress Image Backup operation starts with following threads

        1 read-threads    6 process-threads    1 write-threads

140306 21:40:36 mysqlbackup: INFO: System tablespace file format is Barracuda.

140306 21:40:36 mysqlbackup: INFO: Starting to copy all innodb files...

 mysqlbackup: INFO: Copying meta file /full-backup/backup-my.cnf.

 mysqlbackup: INFO: Copying meta file /full-backup/meta/backup_create.xml.

140306 21:40:36 mysqlbackup: INFO: Copying /sqldata/simple-5.6/ibdata1 (Barracuda file format).

140306 21:40:36 mysqlbackup: INFO: Found checkpoint at lsn 188642964.

...

...

...

140306 21:40:51 mysqlbackup: INFO: Compress Image Backup operation completed successfully.

 mysqlbackup: INFO: Image Path = /backups/image.enc

-------------------------------------------------------------

   Parameters Summary         

-------------------------------------------------------------

   Start LSN                  : 188642816

   End LSN                    : 188642964

-------------------------------------------------------------

mysqlbackup completed OK! with 2 warnings



The resulting encrypted backup image (the file "image.enc") can be used with all commands that accept a backup image, in the same way as an unencrypted backup image. For example, one could restore the server from the encrypted backup as follows:


$ mysqlbackup --decrypt --key-file=/backups/key --uncompress --backup-image=/backups/image.enc --backup-dir=/full-backup copy-back-and-apply-log

MySQL Enterprise Backup version 3.10.0 Linux-3.2.0-58-generic-i686 [2014/03/04]

Copyright (c) 2003, 2014, Oracle and/or its affiliates. All Rights Reserved.

 mysqlbackup: INFO: Starting with following command line ...

 /home/pekka/bzr/meb-3.10/src/build/mysqlbackup --decrypt

        --key-file=/backups/key --uncompress --backup-image=/backups/image.enc

        --backup-dir=/full-backup copy-back-and-apply-log

 mysqlbackup: INFO:

IMPORTANT: Please check that mysqlbackup run completes successfully.

           At the end of a successful 'copy-back-and-apply-log' run mysqlbackup

           prints "mysqlbackup completed OK!".

 mysqlbackup: INFO: Backup Image MEB version string: 3.10.0 [2014/03/04]

 mysqlbackup: INFO: The input backup image contains compressed backup.

140310 12:51:54 mysqlbackup: INFO: MEB logfile created at /full-backup/meta/MEB_2014-03-10.12-51-54_copy_back_cmprs_img_to_datadir.log

...

...

140310 12:52:14 mysqlbackup: INFO: We were able to parse ibbackup_logfile up to

          lsn 188642964.

140310 12:52:14 mysqlbackup: INFO: The first data file is '/home/pekka/sqldata/copyback-simple-5.6/ibdata1'

          and the new created log files are at '/home/pekka/sqldata/copyback-simple-5.6'

140310 12:52:14 mysqlbackup: INFO: Apply-log operation completed successfully.

140310 12:52:14 mysqlbackup: INFO: Full Backup has been restored successfully.

mysqlbackup completed OK!



In these examples we have used the --key-file option for specifying the encryption key because it is more secure than giving the key on the command line with the --key option.
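
Before a restore, it can be reassuring to check that the key at hand actually matches the image. Any command that reads the image will do; for example, listing the image contents with the same options as above:

$ mysqlbackup --decrypt --key-file=/backups/key --backup-image=/backups/image.enc list-image

If the key is wrong, the command fails with the decryption error discussed in the next section.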

Tips

This section describes two tips that may be useful when working with encrypted backups.

The "Wrong key" error

Encryption and decryption use the same key. If decryption is attempted with a key different from the encryption key, a wrong key error occurs. When this happens, MEB prints an error message like the one shown below.


MySQL Enterprise Backup version 3.10.0 Linux-3.2.0-58-generic-i686 [2014/03/04]

Copyright (c) 2003, 2014, Oracle and/or its affiliates. All Rights Reserved.

 mysqlbackup: INFO: Starting with following command line ...

        mysqlbackup --backup-image=/backups/image.enc --decrypt

        --key-file=/key-file2 list-image

 mysqlbackup: INFO:

IMPORTANT: Please check that mysqlbackup run completes successfully.

           At the end of a successful 'list-image' run mysqlbackup

           prints "mysqlbackup completed OK!".

 mysqlbackup: INFO: Creating 14 buffers each of size 16777216.

 mysqlbackup: ERROR: Failed to decrypt encrypted data in file /backups/image.enc : the file may be corrupted or a wrong encryption key was specified.



For the user, this can be problematic because the error message offers two possible reasons for the failure: either the backup is corrupted or a wrong key was supplied. This is not a bug or a peculiarity of MySQL Enterprise Backup; instead, it is a theoretical limitation imposed by the encryption scheme. It is not possible, even in theory, to distinguish with absolute certainty between these two explanations when decryption fails.

However, these two explanations are not always equally likely. If decryption fails at the very start without decrypting any data, then it is more likely that a wrong key was supplied. On the other hand, if the decryption fails later after some data was successfully decrypted, then it is very likely that the correct key was given but the encrypted backup is broken. Using these two rules it is possible to determine with high probability the cases where decryption fails because of a wrong key.

Recognizing encrypted backups

On Unix-like operating systems, "magic numbers" may be used for identifying the type of a file. Magic numbers are patterns that allow the type of a file to be recognized by examining its first bytes. Both unencrypted and encrypted backup images have magic numbers that shell tools can use to detect the file type. For example, by putting these lines into the /etc/magic file

0   string  MBackuP\n   MySQL Enterprise Backup backup image
0   string  MebEncR\n   MySQL Enterprise Backup encrypted backup


the file command identifies the backup images as follows:

$ file /backups/image1 /backups/image2
/backups/image1: MySQL Enterprise Backup backup image
/backups/image2: MySQL Enterprise Backup encrypted backup
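
The same magic strings can also be checked directly, without editing /etc/magic, by looking at the first bytes of the file (here the encrypted image from the example above):

$ head -c 8 /backups/image2
MebEncR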


Wednesday Apr 02, 2014

Offline checksum validation for directory and Image backup using MySQL Enterprise Backup

Data integrity:
-------------------
Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life cycle. Every organization, whether small or large, wants to make sure its data is consistent and error free. Data might be moved to other media or a different storage system for performance, speed, scalability, or other business reasons, so we want to make sure that the data is not corrupted during the migration or movement. Data integrity is a policy that an enterprise can enforce to be confident about its own data.

The overall intent of any data integrity technique is to ensure that data is recorded exactly as intended and, upon later retrieval, is the same as it was when originally recorded.


Objective:
---------------
The user should be able to verify the data integrity of the InnoDB data files of a backup that has been taken. During a backup, MEB performs an integrity check to ensure the consistency of the data it copies from the server data directory. This feature gives the user the flexibility to run an integrity check on the data at any time after the backup; in other words, it allows the data integrity of a backup directory or image to be checked offline.


Advantage:
----------------
A checksum mismatch will cause InnoDB to deliberately shut down a running server. It is preferable to use this command rather than to wait for a server in production to encounter the damaged data pages.

This feature is useful when a user has taken a backup and suspects that the data might be corrupt. It allows the user to verify the correctness of the backed-up data before a restore.

MEB's parallel architecture runs the integrity check in parallel, with multiple threads operating on different chunks of the .ibd data at the same time. The performance of the data integrity check is therefore far better than that of the single-threaded innochecksum offline utility.




Command-line options:

Existing "validate" command will be used to validate backup directory content. In option field, 

"--backup-dir=back_dir" we have to specify with validate.

e.g. ./mysqlbackup --backup-dir=back_dir validate


To validate a compressed backup directory, the following command line should be used:

e.g. ./mysqlbackup --backup-dir=comp_back_dir validate

To validate an image:

e.g. ./mysqlbackup --backup-image=back_image validate

The error message expected from a validate operation on a corrupt data file is:

 "mysqlbackup: ERROR: <filename> is corrupt and has : N corrupt pages"

In order to validate each page of an InnoDB data file, we need the name of the checksum algorithm that the server was using when the backup was taken.

The backup-my.cnf file in the backup directory contains the "innodb_checksum_algorithm" parameter along with other parameters. We read this parameter from "backup-my.cnf" and use it to initialize the server checksum algorithm for validating a backup directory.
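
To see which algorithm was recorded at backup time, one can simply look at the parameter in the backup directory (the value shown here is only an example):

$ grep innodb_checksum_algorithm back_dir/backup-my.cnf
innodb_checksum_algorithm=crc32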

There are several algorithms: none (which stores a magic value on each page), crc32, and innodb, each of which also has a strict variant. A strict algorithm validates the checksum with the given algorithm only; if the checksum of a page was calculated with some other algorithm, validation fails. If the given algorithm is not a strict one, validation tries all of the algorithms for each page.

The validate operation involves no write sub-operation, and hence no write threads are required.

PAGE_CORRUPT_THRESHOLD is an internal constant that specifies the upper limit of corrupt pages per .ibd file. It exists so that we avoid scanning through all the pages of a badly damaged file: when "validate" reaches this threshold, it skips the current .ibd file and moves on to the next one.


Limitations:
------------------
Due to the limitations of checksum algorithms in principle, 100% safe detection of each and every corruption is not guaranteed. But if MEB does not find a corruption, the server won't either, since MEB uses the same algorithm. Moreover, the algorithms used by the server have been proven theoretically solid in terms of detecting corruption.


MEB "validate" feature validates files of Innodb storage engine like .ibd, .par (Partitioned Innodb table file) etc. MEB can't validate Non-Innodb files as server don't have support of checksum for these files.


Reason for the above limitation:
--------------------------------------
For non-InnoDB files such as .frm, .MYD, and .MYI, no checksum is added by the server. InnoDB adds checksums before it writes data to disk, so the data is protected for its whole lifetime: written to disk by the server, sitting on disk, read from disk by backup, written to disk by backup, sitting on disk, read from disk by validate, and even through the copy-back cycle.

Monday Nov 25, 2013

MEB integration with Workbench

This blog talks about MySQL Enterprise Backup integration with Workbench and how the Workbench UI can be used to configure and operate MEB.

Sunday Sep 29, 2013

Backing up full server instance using MySQL Enterprise Backup

Introduction:

MySQL Enterprise Backup (MEB) takes fast, consistent backups of MySQL server data and helps restore a server to the source server's data as of the time of the backup. But most of the time it is just as important to have the source server's state (server configuration such as global variables and plugins) as its data. As backups become more frequent, server variables are modified, and plugins are added or removed, it becomes very difficult to keep track of the changing server state for every backup. MEB 3.9.0 helps the user by providing a complete backup so that the restored server can run with exactly the same state as the source server had at the time of the backup.

MEB 3.9.0 performs a full server instance backup which, on top of the log files and data files, also includes the details of all global variables and plugins (both internal and external). With this feature, backup-content.xml, a meta file under the "meta" folder of the backup directory, now additionally contains the details of all plugins, such as name, status, and type, under a <plugins> section. In addition, two new files are created under the backup directory:

  • server-my.cnf - contains all the global variables with non-default values for that server environment (MySQL server version, operating system, hardware architecture, etc.).
  • server-all.cnf - contains all the global variables, that is, the variables with non-default values plus the remaining global variables with their default values.

Advantages of Full Server Instance Backup:

  • Create a replica - The user can clone the source server state by using either the server-my.cnf or the server-all.cnf file from the backup as the defaults-file for starting the target server. Because the default values of most global variables depend on the server environment, the user can create a server with the same state even if the target environment differs from the source, by using server-all.cnf as the defaults file. If the target environment is the same as the source, either of the files can be used.
  • Keep a history of global variables - This new feature removes the user's burden of storing the state of the server, if it changed, before every full or incremental backup, so the user need not worry about keeping track of the server's global variables. With this feature, the non-default values of global variables for the running server can be found in server-my.cnf.
  • Full plugin information - With all the plugin information backed up, this information can be used when installing missing plugins on the restored server. Plugin details such as type, status, and library can be used to install missing plugins with the same configuration as on the source.

Using Full Server Instance Backup:

Backup:

From MEB 3.9 onwards, this feature is enabled by default for all kinds of backup (normal, incremental, image, compressed, etc.). That is, there is no need to turn on any feature or use any option: all backups are full server instance backups.

Note: Binary logs and *info files used for replication, and the InnoDB buffer pool details, which are also part of the server instance, are not included in backups. Server plugin details are copied, but the actual plugin binaries are not.

Restore:

After a copy-back operation, server-all.cnf and server-my.cnf are present in the restored data directory. If any external plugins existed on the source server, the copy-back operation prints a warning about the missing plugins that need to be installed.

Starting Server:

The files server-my.cnf and server-all.cnf can be used as the defaults-file for starting the server on the restored data directory. When the source and target environments are the same, restoring the server using server-my.cnf is easier than using server-all.cnf, because server-my.cnf has fewer global variables to verify or modify.
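
A minimal sketch of such a start, assuming the backup was restored into /restored-datadir (the path is illustrative, and in practice the server would usually be started through mysqld_safe or a service manager). Passing --datadir explicitly on the command line overrides any source-server path recorded in the file, which also guards against the caveat in the note below:

$ mysqld --defaults-file=/restored-datadir/server-my.cnf --datadir=/restored-datadir &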

Note: The user has to be careful when starting another server instance on the same host using the server-my.cnf or server-all.cnf files without changes. There is a possibility of modifying the source server's settings or data, as some file paths, such as innodb_log_group_home_dir, tmpdir, and general-log, refer to the source server.

Incremental Backup:

The files server-my.cnf and server-all.cnf reflect the state of the server at the time of an incremental backup, and it is desirable to have the same state after applying the incremental backup. Therefore, after an apply-incremental-backup operation, the full backup's server-my.cnf and server-all.cnf are overwritten by the corresponding files from the incremental backup.

Wednesday Sep 18, 2013

Skip Unused Pages with MySQL Enterprise Backup 3.9.0

Disclaimer
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

Introduction

There are database usage patterns where tables grow big at times and many rows are deleted from them later. InnoDB never shrinks a tablespace. In these cases we can end up with big data files that contain a lot of unused pages. It is a waste of disk and I/O resources to back them up.

Users have repeatedly requested that MySQL Enterprise Backup not back up unused InnoDB data pages. Some want smaller backups, some want less I/O, some want shrunken tablespaces.

MySQL Enterprise Backup 3.9.0 can help with smaller backups. The effect on I/O is not that remarkable. InnoDB data files must be expanded to their original size when they are restored. Backup cannot accomplish a shrinkage of InnoDB table spaces.

In the following I will try to explain how things work and why not all wishes can be satisfied. I will also try to show the complexity of the feature, which should clarify why it comes so late. Because my explanations might be a bit technical, I'll summarize the important facts in advance.

Administrative Summary

You can use the command line option --skip-unused-pages with any backup operation, but it will be ignored, with a warning, in the following cases:
  • backup-and-apply-log. The skipped pages must be re-inserted at the beginning of the apply-log operation. So it would be a waste of resources to do this when backup and apply-log are combined in one operation.
  • incremental-with-redo-log-only. This operation copies log pages only. No data pages are copied. So there is no data page to be skipped.
  • incremental backup: This is simply not implemented in MEB 3.9.0.
Depending on the number of unused pages in the tablespaces, the resulting backup can be much smaller. The saving in I/O resources is moderate; for the same reason, don't expect a reduced backup time.
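
For illustration, a compact single-file backup might be taken like this (a sketch only; the connection options and paths are invented for the example):

$ mysqlbackup --user=root --skip-unused-pages --backup-dir=/backup-tmp --backup-image=/backups/compact.mbi backup-to-image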

At the beginning of the apply-log operation, the skipped pages must be re-inserted. This means that every backed-up InnoDB data file must be copied over with the skipped pages stuffed back in, so the total apply-log operation will take at least as long as the backup operation took.

Be prepared for the backup directory to grow up to the original database size. At the end of processing each file, the compact backup file is removed, but right before that point the file exists in both its compact and its expanded form. You need to have the space for one extra copy of your biggest table.

You can combine --compress and --skip-unused-pages. However, in MEB 3.9.0, the decompression and expansion operations are executed in separate steps. Since both operations copy over all data files, this will take about twice as long as the backup operation took.

In MEB 3.9.0, the single-step restore operation copy-back-and-apply-log cannot be used on backups that have been taken with --skip-unused-pages.

Technical details

Prerequisite

The InnoDB tablespaces contain bitmap pages, which indicate whether pages are free or not free (that is, in use). The bitmap pages occur every page_size pages in the tablespace. For uncompressed tablespaces, that would be on pages 0, 16384, 32768, 49152, 65536, and so on. For a compressed tablespace with a 2K page size, the bitmap would be on pages 0, 2048, 4096, and so on. A bitmap page always covers the following page_size pages.

Nomenclature

In the following, the set of pages, which are covered by a bitmap page, is called a "map-set".

In the following, the terms "free page" and "unused page" are used interchangeably.

In the following, the term "zero page" means an InnoDB page, which has all its bytes set to zero.

The "free limit" is an InnoDB internal term. It is a page number. It is less or equal to the table space size in pages. If the free limit is less than the table space size in pages, itself and all pages above it are free.

An LSN is a Log Sequence Number. It is the offset in bytes from the logical start of the redo log. The LSN marks the start of a log entry.

The general problem

The task sounds simple. Read the bitmaps. Identify unused pages. Omit them from backup. Replace them with empty pages on restore. All over.

It would be almost that easy if we took backups only from cleanly shut down databases, that is, if all data files were in a consistent state and were not modified during the backup operation.

But if a backup is taken from a hot database, the data files are modified while the backup operation is reading them. Since we cannot read all pages at once, we likely read pages that are modified at different times. When we read a bitmap page and later the pages that are covered by this bitmap, they may not be consistent. The bitmap could declare some page as free, but while we read other pages in between, the page may no longer be free when we finally read it. And vice versa.

Another problem is the InnoDB data cache. Pages are written from the cache to disk at different times. The main constraint is that a page is never written before all log entries that describe its modifications are on disk. Another constraint is that at the time an InnoDB checkpoint is noted in the redo log, all page modifications up to that log entry are on disk. Since we start copying the redo log from the latest checkpoint, each page on disk can have any state between that checkpoint and the current state.

Sure, we have the redo log. We keep copying it in parallel with the data file copy. It should be able to replay all changes that are made to the data pages during the backup operation. But the replay algorithm requires each modified page to be in a certain state, that is, it expects certain data to be in the page. Each redo log entry describes a transformation of the page contents from one state to another. If the page doesn't have the expected contents, the algorithm fails.

This means that we need to take care about what page contents to restore for pages that were marked free when the corresponding bitmap page was copied. The following shows which problems needed to be resolved.

Empty pages

Above we said that we will replace skipped unused pages with empty pages. However, the term "empty page" does not mean a zero page. That won't work, for the following reasons.

The redo algorithm is idempotent. The idempotency is based on the page LSN. Every change to a data page is logged in a log entry of the redo log. That log entry's LSN is written into the page. When the redo log is applied to a database or backup, a log entry is applied to a page only if the page's LSN is lower than the log entry's LSN.

The redo algorithm does not use pure physical logging. Most log entries do not set a certain number of bytes at a certain offset in a page, but transform a page from one state to another. In other words, the algorithm relies on the correct page contents.

We copied the redo log from the latest checkpoint onward. Usually it contains log entries from before the backup started. So it could happen that there were log entries that modified a page before it became unused and before the backup started.

If the unused pages had been recreated as zero pages, their LSN would be zero. Every log entry's LSN is greater than zero, so the algorithm would try to apply every log entry for such a page. But the page contents are not what the log entries expect, and the redo log algorithm would fail.

If the unused pages had been recreated as zero pages, but with their LSN set to infinity, no log entry would be applied to unused pages. The apply-log algorithm would finish without errors or warnings. But if there were log entries that re-initialized and modified the page after the backup operation read the bitmap page, they would not be applied, due to the high LSN. The result would be an inconsistent tablespace.

The correct solution is to use empty pages with the LSN of the bitmap page that claims them to be unused. We know that the page was unused at that LSN. If a page gets freed or re-used, the bitmap is changed, and that creates a redo log entry for the bitmap page. The bitmap page gets that LSN. When we copy the bitmap page, we get the bitmap and the corresponding LSN into the backup. Our algorithm assures that each page is consistent in itself, so we have assured knowledge of which pages were free at that LSN. The next higher LSN that affects an unused page must consequently belong to a log entry that re-initializes the page. Any log entry with an LSN lower than the bitmap LSN is irrelevant to the pages that are marked free by that bitmap.

Besides the LSN, empty pages also have the page number that corresponds to their location in the tablespace, the tablespace id, and the checksums set. The remaining bytes of an empty page are all zero.

Non-free zero pages

The page number of a zero page is zero. Usually one would expect that pages with a page number of zero have never been used and are marked free (except for the page at offset zero, the tablespace header). But sometimes a page like this is not marked free.

One possible situation in which a page could have a zero page number and not be marked free is when a page became used for the first time shortly before the backup started and was not flushed to disk before backup read the page. When the page became used, the bitmap was updated and flushed, and the page was updated in memory but not flushed yet. It is possible, even probable, that there are redo log entries in the log file that manipulate the page. The corresponding log entries could have LSNs below the bitmap page's LSN if more changes were made to the bitmap page later. If we skipped the page, and thus effectively declared it free, the expansion algorithm would insert a page with the bitmap page's LSN, which could be too high.

To be safe, we include non-free zero pages in a backup. They are rare and thus don't make a big difference.

Skip unused pages when reading from the original data files

If we use the bitmaps to avoid reading unused pages, we turn sequential reading into random reading. Depending on the distribution of unused pages among used pages, this could even lower the read performance. Only if unused pages occur in big contiguous chunks could skipping them give a speed increase.

Since a bitmap page occurs every page_size pages, at best only page_size - 1 pages can be skipped in one go. Hence, the optimal performance enhancement cannot be reached. All bitmap pages are used pages and must be copied to the backup, even if they mark all "their" pages as free. After all, we could detect such a case only after having read the bitmap page. Anyway, it will be rare for multiple bitmap pages in a tablespace to mark all their pages free. In theory this could happen in a new tablespace that was created way too big for the data. But then InnoDB maintains the free limit. Bitmap pages at and above the free limit are not initialized and don't need to be backed up.

At the beginning of each map-set, only the bitmap page needs to be read. Then unused pages can be identified in contiguous chunks. If a chunk is big enough, then that chunk can be skipped from reading.

It does not seem desirable to read in different chunk sizes, so the read algorithm is designed so that the read size is the data buffer size. At the beginning of a map-set, a data buffer is always read. Skipping of pages is done in multiples of the data buffer size; a buffer can only be skipped if all its pages are unused. Since MEB currently uses a fixed buffer size of 16 MB, a buffer contains at least 1024 pages, depending on the table's page size. The probability of 1024 contiguous pages being free isn't that high. That's why we wouldn't reduce the I/O load much, and so we haven't implemented this part yet.

Skip unused pages when writing to the backup files

On the write side it is advantageous to suppress even single unused pages. It doesn't break sequential writes.

The algorithm is similar to incremental backup. From the read side we get a buffer, which may contain unused pages. For the write side we produce a buffer with only the used pages from the read buffer. For each page we decide independently whether to copy it to the output buffer. Every bitmap page needs to be included.

Compressed files must not be empty. If the backup files are compressed before they are written, we must ensure that they contain some contents; this is a requirement of our compression algorithm. Since we include all bitmap pages, this is not a problem for file-per-table tablespaces, nor for the first file of the system tablespace. But it can happen if a follow-up data file of the system tablespace has all its pages free and does not have a bitmap page below the free limit. To work around this problem, we always include the first page of a file.

A map-set can cover pages from multiple buffers. We need to keep them in memory until all covered buffers are written.

Restore

On restore, MEB has to recreate unused pages at the right places. The reason for this is explained below in the section "Backup cannot shrink table spaces". Since the contents of the inserted pages do not matter, except for the page number, space id, LSN, and checksums, empty pages are written.

In the following, the algorithm to recreate the skipped pages is called "expansion".

Please note that the expansion must take place before an apply-log operation. The apply-log algorithm works on a data file in which all pages are at their correct places. Apply-log can modify initially unused pages, so those must also be present at the right places in the data file and have the right LSN.

In MEB 3.9.0, a sequential algorithm is used, which detects skipped pages by a mismatch of a page number and the current write position in the expanded file. Empty pages are inserted until the page number matches the current write position.

If the last page from the backup file is below the free limit from the tablespace header, empty pages are appended up to the free limit. If the resulting size is still below the tablespace size from the tablespace header, zero pages are appended up to the tablespace size.

Backup cannot shrink table spaces

What users really want is a feature that removes unused pages from a tablespace and makes the data file(s) smaller.

MEB cannot help with it. Every InnoDB page has a page number, which corresponds to its position in a data file. The InnoDB tablespaces contain tree structures, where a page can reference one or more other pages. These references are done by page numbers. Hence it is vital that every page retains its position in a data file. If there are unused pages among used pages, the used pages cannot be simply shifted down in a file to take the place of an unused page.

If one wants to change a position of a page, one must assign it a new page number (the one that corresponds to its new place) and modify the page number in all places that reference the page. This means that a bunch of random-access page-read and page-write operations can be necessary for each shifted page.

If a table has a single-file tablespace (innodb-file-per-table=ON), then an OPTIMIZE TABLE statement creates a new tablespace with freshly constructed trees, which thus takes the minimum amount of space. Unfortunately that operation can put too big a burden on the database. With MEB, one could think of a workaround: take a backup of the table, restore it to a temporary place, run a server on it, run OPTIMIZE TABLE, and transport the resulting tablespace back to the original server.
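
For reference, the rebuild step of that workaround is just an ordinary statement run against whichever server holds the copy (the schema and table names here are made up):

$ mysql -u root -p -e "OPTIMIZE TABLE mydb.big_table"

For an InnoDB table this maps to a table rebuild, which recreates the .ibd file without the unused pages.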

Wednesday Jun 26, 2013

MySQL Enterprise Backup 3.8.2 has been released!

MySQL Enterprise Backup 3.8.2, a maintenance release of our online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.


  A. Functionality Added or Changed: 

  • MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could previously hang when the disk became full, rather than detecting the low space condition. mysqlbackup now monitors disk space when running backup commands, and users can specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
  • MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on the progress report options, refer here; a combined sketch of both new options follows this list.
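
As a rough illustration of how the two options above might be combined on a backup command (a sketch only: the paths are made up, and the exact permitted values for --on-disk-full and --show-progress should be checked against the 3.8.2 reference manual):

$ mysqlbackup --user=root --on-disk-full=warn --show-progress=stderr --backup-dir=/backup-tmp --backup-image=/backups/full.mbi backup-to-image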

  B. Bugs Fixed:

  • When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973) 
  •  MySQL Server failed to start after a backup was restored if  there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499) 
  • apply-incremental-backup might fail with an assertion error if  the InnoDB tables being backed up were created in Barracuda format and with their KEY_BLOCK_SIZE  values  different from the innodb_page_size . This fix ensures that different KEY_BLOCK_SIZE  values are handled properly during incremental backup and apply-incremental-backup operations. 
  • If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
  •  After a full backup, if a table was renamed and modified,  apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
  • The value of the binary log position in backup_variables.txt  could be different from the output displayed during the   backup-and-apply-log operation. (This issue did not occur if  the backup and apply-log steps were done separately.) (Bug  #16195529)
  • When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations.   This fix makes sure the paths for the temporary files are  correct. (Bug #14787324)
  • A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)

Please post your questions / comments about Backup in forums.

Thanks,

MEB Team


Friday May 03, 2013

MEB : The journey so far 2010-2013

MySQL Enterprise Backup (MEB) was born 3 years ago as a newly branded avatar of InnoDB Hot Backup. I wanted to share what has gone on so far: how we at Oracle think about backup, the milestones that we have achieved, and the road ahead. The idea for this blog came to me after looking at Mikael's latest blog. While Mikael talks about MySQL, I want to talk about MEB.

When we started with InnoDB Hot backup the first challenge was to have it adhere to the development, quality and release processes for MySQL. This meant creating a quality plan, getting it into the development trees of MySQL and ensuring that each piece of new code went through architecture and code review. Though the initial implementer and architect of Hot backup continues to work with the MEB team, there were a host of new engineers to be trained. We also needed to ensure that the new (at that time) Barracuda InnoDB file format and incremental backup was supported. MEB 3.5.1 was the release which got these things along with the adherence to the development and quality model of MySQL.

The next challenge we faced was that of ensuring that MEB was on equal footing for both Linux and Windows. InnoDB Hot Backup consisted of 2 programs - ibbackup and innobackup; innobackup is a Perl module. The main issue with using the program on Windows was the requirement to install Perl. With multiple Perl implementations and changing Perl versions, we did not want to check MEB compatibility for every implementation and new version of Perl when it was released. Even though the problem is similar for Linux, the users of Linux are used to hacking around, changing paths and managing multiple versions of software like Perl. Windows users, however, expect things to just work. So we set about removing the Perl code altogether. This meant that the innobackup functionality had to be re-coded as a C program. Merging these 2 programs meant a major re-think on how the combined command line interface needed to look. The solution we came up with was to let the ibbackup and innobackup command line syntax remain as is, while the combined program had a similar but more logical "mysqlbackup" command line syntax. We were very happy with the new syntax because it freed us from history, and the MEB syntax became very much in line with the syntax of other MySQL clients. With the release of 3.6 we had a single C program, a more logical syntax, and a product which was easier to install and worked exactly the same on all platforms.

We were getting to 2012 and database sizes were commonly approaching the 1 TB size. Such large databases meant the backup should ideally be streamed to tape. Interfacing with tape drives is a complicated and specialized activity. We had neither the bandwidth nor the expertise to handle tapes in MEB. The best solution was to adhere to a good common standard interface that was adopted by software which dealt with tapes. The interface we decided to support was Oracle's System Backup to Tape (SBT). MEB was modified to be able to stream the backup output to this interface. A common requirement for these interfaces is that they ideally want to deal with the backup as a single file. A single file can be streamed and restored by any software that speaks SBT. There is a whole ecosystem around SBT because it is the preferred way to back up the Oracle database. Changing MEB to think streaming instead of random-access directory output was the challenge we overcame with the release of version 3.7 of MEB. With version 3.7, MEB could interface with Oracle Secure Backup, Symantec NetBackup, Tivoli Storage Manager and any other backup software that understood SBT.

After having resolved what we saw as the "basic" requirements for backup, our customers were demanding more performance and usability. We took up the challenge of performance for the 3.8 version of MEB. MEB was a monolithic single-threaded program. We decided to internally break up MEB into 3 separate modules: the read phase, the process phase and the write phase. Each of these 3 phases could be multi-threaded. The number of threads dedicated to each phase was also made user-configurable. All backup operations, including "apply-log" and "copy-back", were made multi-threaded. Read more details about this design approach and the performance gains in my blog - Truly Parallel backup. Meanwhile the new release of the MySQL Server, 5.6, was also out. It was an interesting challenge to ensure that MEB understood the new MySQL 5.6 features and was able to take advantage of them. As of today, MEB 3.8.1 is the only online backup solution that is compatible with the new features of MySQL 5.6.

Backup is like buying insurance. When all else fails you need to be sure that there is a working backup that is available to bring back your database. Backup is not something that can fail when it is needed. It is required that we are surefooted when dealing with such a critical activity. We take your trust in our solution very seriously. Thanks for being a part of the MEB journey (and for reading this blog) so far. The MySQL landscape is ever changing and we know that you desire more usability, performance and flexibility from MEB. We will try and ensure that we meet these expectations with the best possible quality. With every new MEB release you will see a more usable, flexible and performant MEB.

Friday Feb 08, 2013

Truly Parallel backup (MySQL Enterprise Backup 3.8 and later)

How do you implement a parallel algorithm for a software which needs to be streamed to tapes?
How do you ensure that you have the capability to be able to tune the level of parallelism for varying input and output devices and varying levels of load?
These were some of the questions that we needed to answer when we were trying to implement multi-threading capability for MySQL Enterprise Backup (MEB).
The trivial way of achieving parallelism is to have multiple threads pick up different files (in a file-per-table scenario). But this did not seem adequate because:
a) The sizes of these files (corresponding to the tables) could be different and then one large file would limit the level of parallelism since it would be processed by a single thread.
b) If you have to stream the backup how do you reconcile these multiple files being streamed by separate threads? Large backups are streamed directly to tape so it is better to have a single file being output and not multiple files.
c) If you buffer each file and wait for a file to be completely processed and then push it to tape then it is not true streaming because you are using intermediate disk space to save the incomplete portions of all the files.
The answer that we found was to implement the parallel algorithm using a horizontal strategy instead of a vertical strategy.


In the vertical strategy, each thread acts on a separate file. This limits streaming since the file sizes can vary.
In the horizontal strategy, each file is broken into sections (denoted by multiple colors). A separate thread is assigned to operate on a single section.
Parallel operations are then possible for reading , processing and writing of these file subsections because no two threads will be operating on the same section of the file.
This setup is especially useful when using compression since there can be multiple threads performing compression while the read and write continues in parallel.
There may be additional overhead in ensuring that the buffers are in the correct order when they are written out, but since most of the buffers are of the same size and have similar operations performed on them, the overhead is minimal.
You get truly serialized output that is streamed to tape as it gets processed. If you are streaming to a remote host or to tape, there is almost no additional space required on your main server. We call this new mechanism parallel backup because we achieve parallelism, thereby making the backup faster. Indeed, parallel backup may achieve up to 10 times the speed of a normal backup in certain scenarios.
The graph below shows the time it took for backup for MEB 3.7.1 v/s MEB 3.8 using varying number of threads.



Note: This is a 16 GB, 2 x 2000 MHz machine with 2 RAID disks (1027 GB, 733.9 GB), running Oracle Linux.

As you can see above, MEB 3.8 provides options to configure the number of threads used for reading, writing and processing. Let's denote by RT, PT and WT the number of read, process and write threads respectively. The default values in MEB 3.8 are RT=3, PT=3, WT=3, which change in MEB 3.8.1 to RT=1, PT=6, WT=1.

This is close to the fastest backup we get in the graph above. The reason for not choosing RT=1, PT=12, WT=1 (which is the fastest) is that the CPU gets very heavily utilized in the 1,12,1 configuration.

Remember, the read and write throughput depends on your input and output devices. It is possible that multiple threads do not give you better read or write performance than a single thread.

There are also options available to have a configurable number of buffers used by these threads.

Each buffer is 16 MB in size. You should have at least RT + PT + WT + MAX(RT, PT, WT) buffers so that you get optimal parallelism.

For example, if RT=1, PT=6, WT=1, then you should configure 1 + 6 + 1 + 6 = 14 buffers (the default in MEB 3.8.1).
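
Put together, a backup tuned to those defaults might be invoked roughly as follows (a sketch only; the connection options and paths are invented for the example):

$ mysqlbackup --user=root --read-threads=1 --process-threads=6 --write-threads=1 --number-of-buffers=14 --limit-memory=300 --backup-dir=/backup-tmp --backup-image=/backups/full.mbi backup-to-image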

If, for example, you configure multiple threads but only 1 buffer, then your backup does not take advantage of parallelism at all. The read thread reads into the single buffer, the buffer is then processed, written and freed, and the read thread waits for a buffer to become free to read into; so it is effectively a serial process.

One more thing to note is that the number of buffers is limited by the memory limit configured for the backup (300 MB by default). Please ensure that you configure enough memory to be able to distribute it among the buffers you have configured. If the configured memory limit is less than what is required for the configured number of buffers, MEB will automatically decrease the number of buffers to fit within the memory limit. Based on the default values, if you configure more than 18 buffers you will need to increase the memory limit.

Please look at the previous 3.8 blog for detailed configuration examples :

https://blogs.oracle.com/mysqlenterprisebackup/entry/parallel_backup_in_mysql_enterprise

or into our documentation of this feature at

http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-capacity-options.html

Cheers 

and remember the wise DBA advice:

If you don't verify your backups periodically it is like not having backups at all


Thursday Feb 07, 2013

MySQL Enterprise Backup 3.8.1 release for 5.6 Server

The main goal of the MySQL Enterprise Backup 3.8.1 release was to support the MySQL 5.6 server. Beyond that primary goal, the MEB team also added some valuable new options and features to ensure you get the most from the new features in 5.6. At a glance, here are some of the highlights.

MEB copy of InnoDB undo log tablespaces

MySQL 5.6 introduces a new feature that stores undo logs in separate files, called undo tablespaces, for improved performance. These undo tablespaces are logically part of the system tablespace. All the MEB commands - "backup", "apply-log" and "copy-back" - now take care of the undo tablespaces in the same way as they process the system tablespace. MEB now supports the innodb_undo_directory, innodb_undo_logs and innodb_undo_tablespaces option variables. When a backup is executed, the undo datafiles (up to the number specified by innodb_undo_tablespaces) are stored in the same directory as the datafiles of the system tablespace. During copy-back, the files can be stored in a location specified by the user with the --innodb-undo-directory option.

MEB support for Global transaction ID's

The GTID feature is newly introduced in the MySQL 5.6 server. GTIDs help to track the data being replicated, particularly for automatic slave promotion when a master fails.
When the server is started with GTIDs enabled and a backup is performed on the master server, mysqlbackup produces a new file called gtid_executed.sql in the meta directory of the backup. This file is used after restoring the backup data on a slave server and contains a GTID_PURGED setting. It captures the replication position of the server at the end of the backup, thereby ensuring that replication starts from the point in time when the backup was taken.
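
A minimal sketch of how that file might be used on the freshly restored slave (the backup path is illustrative, and the slave is assumed to start with an empty GTID execution history):

$ mysql -u root -p < /full-backup/meta/gtid_executed.sql

After that, the slave can be pointed at the master with CHANGE MASTER TO ... MASTER_AUTO_POSITION=1 so that replication resumes from the GTIDs recorded at backup time.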

UNC Path name support

MEB now supports UNC path names, which specify the location of a network resource such as a shared file, directory or printer. This feature makes it possible to start backups from the Windows task scheduler when shared drives cannot be mapped to a drive letter. Support for UNC path names also allows MEB to take backups when no user is logged in.
eg: ./mysqlbackup --defaults-file=/home/my/my.cnf  --backup-dir="\\mysql\\testmeb\" backup

where testmeb is a shared network directory on Windows.

When the share name is invalid, MEB detects this when it tries to access the files pointed to by the path and prints an error message.

MEB support for different page size settings for InnoDB

The InnoDB page size is a server parameter that applies to all the InnoDB tablespaces in a MySQL instance. By default its value was 16K in versions before MySQL 5.6, but from MySQL 5.6 onward this option is user-configurable to 4k, 8k, 16k, etc. Starting with MEB 3.8.1, backup works successfully when the server is started with different innodb_page_size values. The innodb-page-size option can also be specified on the mysqlbackup command line, but MEB ignores the option as long as a connection to the server is available. If the innodb-page-size option is not specified on the command line, or if no connection to the server is available, the value of innodb-page-size is read from the header of the InnoDB data files.

 InnoDB Checksum Algorithm Support

MEB 3.8.1 introduces support for the new --innodb-checksum-algorithm option in MySQL 5.6. This option can also be specified on the command line. A default value is used if it is not specified on the command line and the connection to the server is not available. Without support for this new option, MEB could not start the server after a sequence of backup, apply-log and restore operations. Two things to be noted here are:
a. A server backed up with the strict_crc32, strict_innodb or strict_none checksum algorithm should be restored with the same algorithm.
b. A server backed up with mixed algorithms should not be restored to a server with strict_* algorithms.

Backup of system tablespace with fractional megabyte.

It sometimes happens that the InnoDB engine extends the datafiles of the system tablespace by a few megabytes. But if the disk is full, the system tablespace may actually extend by a fractional megabyte. In such cases, MEB performs a consistency check on the sizes of the InnoDB datafiles, and if the recorded size does not match the size of the file on disk, a warning is reported. That is, MEB does not back up the fractional datafile in the system tablespace.

Backup and restore of file-per-table tablespaces at different locations.

In MySQL 5.6, it is possible to create a new InnoDB table with a per-table tablespace outside of the data directory, so that the .ibd file is created at a user-specified location instead of the default location in the database subdirectory. For each such .ibd file, an .isl file is created in the database subdirectory containing the absolute path name, acting like a symbolic link to the actual tablespace file. All MEB operations can now read the .isl files to locate the .ibd files during backup. During backup, both the .isl and .ibd files are copied to the backup directory, but the .isl file is renamed to a .bl file. During copy-back, the .ibd files are copied to the location specified in the .bl file. If the restore should be performed to a different target location, one needs to manually edit the .bl file before the restore and specify the absolute path name where the .ibd files should go.
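
To make that last step concrete, here is one way the edit might look. This is only a sketch: the database and table names are invented, and it assumes that the backup directory mirrors the server data directory under a datadir folder and that the .bl file simply contains the absolute path of the .ibd file:

$ cat /full-backup/datadir/mydb/big_table.bl
/data/outside/mydb/big_table.ibd
$ echo "/new/location/mydb/big_table.ibd" > /full-backup/datadir/mydb/big_table.bl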

The above are the new features added to the backup code, but this release also includes various bug fixes; please take a look at the MEB 3.8.1 reference manual for more details.

The MEB team has put a great deal of effort into ensuring that the latest release, MEB 3.8.1, is compliant with the MySQL 5.6 server. Please try this new MEB 3.8.1 version with the MySQL 5.6 server and, as always, send us your feedback and comments here. MEB 3.8.1 is now available on the My Oracle Support site and will very soon be available on Oracle's Software Delivery Cloud site.

Once again, I would like to thank the entire MEB team for delivering this release on time and with many valuable new additions.


Friday Oct 05, 2012

MySQL Connect Conference: My Experience

It was a great experience to attend the MySQL Connect conference for the first time ever. Personally I was very much enthralled to present "How to Make MySQL Backups", besides attending different sessions to absorb more knowledge about the technical prospects of MySQL. One of the agenda items in my presentation was the functionality and features of MySQL Enterprise Backup. There were a total of 40 attendees in the session, who were very interested in the MySQL Enterprise Backup product and gave positive feedback as well as areas of improvement for our product. Some of our features brought a lot of excitement and smiles to our customers, including:

1. Performance improvements in MEB 3.8.0
2. The incremental base option from MEB 3.7.1, where there is no need to specify the directory name of the previous backup to fetch the LSN values; instead they can be fetched directly from the backup_history table using --incremental-base=history:last_backup
3. The only-innodb-with-frm option introduced in MEB 3.7 - a true online hot backup of InnoDB tables.

I also attended a session on a similar topic, "MEB Best Practices", conducted by Sanjay Manwani, where he went into detail on all the features and best strategies of backup & restore. I also got an opportunity to attend other sessions, including:

* Enabling the new generation of web and cloud services with MySQL 5.6 replication
* Getting the most out of MySQL with MySQL Workbench
* InnoDB compression for OLTP
* Scaling for the Web and Cloud with MySQL replication.

Above all, I had some special moments at the conference, including meeting some of the executives and colleagues face to face for the first time. On the whole, the first MySQL Connect conference was a great success in terms of showcasing the features of our products, getting direct feedback from customers, and team building. We also had some applause-filled moments when Tomas Ulin announced different releases, including the MySQL 5.6 RC, Connector/Python 1.0 and ODBC 5.2 releases, MySQL Cluster 7.3, additions to the MySQL Enterprise edition, etc.

Saturday Sep 29, 2012

MEB Support to NetBackup MMS

In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tapes via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle Secure Backup (OSB). Other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0.

In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If you are using tapes for backup, ensure that the tape library and tape drives are compatible.

Test Setup

1. Install the NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
2. Install MEB 3.8, also on Linux.
3. Install the NB admin console on your Windows desktop and configure the NB master server from there.

Note: Ensure that you have root user permission to install NetBackup.

Configuration Steps for MEB and NB

Once MEB and NB are installed,
  • Ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path.
  • Configure the NB master server from the Windows console, that is, configure the storage units by specifying the storage unit name, disk type, media server name, etc.
  • Create NetBackup policies that are user selectable, but make sure that the policy type is "Oracle".
  • Define the clients where MEB will be executed. Sometimes this will be a different host where MEB runs, and sometimes it is the same media server to which NB and the tapes are attached.

Now once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution.

MEB should be run as a single-file backup using the --backup-image option with the sbt: prefix (a tag that tells MEB to stream the backup image through the SBT interface), which sends the image to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up.

The following diagram shows how MEB interacts with the MMS through the SBT interface.

Backup

The following parameters should also be ready for the execution,
    --sbt-lib-path : Path to the SBT library specific to the NetBackup MMS. The SBT library for NetBackup is /usr/openv/netbackup/bin/libobk.so64.
    --sbt-environment : Environment variables that must be defined specifically for NetBackup. In our example below, we use
    NB_ORA_SERV=myserver.com,
    NB_ORA_CLIENT=myserver.com,
    NB_ORA_POLICY=NBU-MEB
    ORACLE_HOME = /export/home2/tmp/hema/mysql-server/

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Once backup is completed successfully, this should appear in Activity Monitor in NetBackup Console.

For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64  --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir
-----------------------------------------------------------------------------------------------------------------------------------
Now apply the logs as usual, shut down the server, perform the restore, then restart the server and check the data contents.

  • ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
  • ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/NBMEB/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

The NB console should show the "Restore" job as done. If you don't see that, there is something wrong with MEB or NetBackup.

You can also refer to more detailed steps for MEB and NB integration in the whitepaper here.



