If you find yourself in the role of "accidental MySQL DBA", this post is for you. I assume you know almost nothing about MySQL or its derivatives, such as MariaDB and Percona Server, but that you do have a working knowledge of MS SQL Server, so I'll compare and contrast the two database managers as we go. This is just a braindump, not meant to be definitive: a quick primer with enough of the basics that you can move around MySQL (on Linux) and run informed google searches for more detailed information as needed.
- a MySQL instance can contain many databases/schemas.
- the terms "database" and "schema" are used interchangeably. This is unlike SQL Server, where the hierarchy is Instance --> Database --> Schema; MySQL collapses the database and schema levels into one container.
- Therefore two-part naming (database.table) works fine in queries, but three-part naming does not.
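For example (a sketch; the database and table names here are made up):

```shell
# Two-part naming works in MySQL:
mysql -e "SELECT * FROM mydb.mytbl;"

# The SQL Server style three-part name (db.schema.table) is a syntax error:
# mysql -e "SELECT * FROM mydb.dbo.mytbl;"
```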
- MySQL has many pluggable "storage engines", whereas MS SQL Server and Oracle each have a single built-in one. The storage engine handles more than just basic data persistence: it also determines transaction support, the types of indexes that can be used, and even whether data is transient (doesn't survive a restart) or permanent. New storage engines are constantly being written for different, specific use cases. Here is a short list:
- MyISAM: the original default engine. It doesn't support transactions or crash recovery, so most folks will never use it for a production system.
- InnoDB: probably the most popular and supports the most features. When in doubt, this is probably your best choice.
- performance_schema: I've written about this before in [[Introduction to the MySQL Performance Schema]]. It's not just how you monitor MySQL performance, it's actually its own storage engine that is optimized for performance, yet the bulk of its metrics are not persisted to disk.
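To see which engines your server supports, or to pick one explicitly, you can run something like the following (a sketch against a live server; the database and table names are made up):

```shell
# List available storage engines and which one is the default
mysql -e "SHOW ENGINES;"

# Choose the engine explicitly at table-creation time
mysql mydb -e "CREATE TABLE orders (id INT NOT NULL PRIMARY KEY) ENGINE=InnoDB;"

# Check which engine an existing table uses
mysql mydb -e "SHOW TABLE STATUS LIKE 'orders';"
```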
Basic Configuration Management
|Question|Answer|
|---|---|
|What is the default MySQL listener port?|3306|
|Where are my configuration files?|Run `mysql --help` and look for the line "Default options are read from the following files in the given order"; typically `/etc/my.cnf`, then `/etc/mysql/my.cnf`, then `~/.my.cnf`|
|How do I start/stop the service?|Depends on your distro, but generally `service mysql start`/`service mysql stop` or `/etc/init.d/mysql start`/`stop` will work|
|Where are my error logs?|Ask the server with `SHOW VARIABLES LIKE 'log_error';` and check that path; commonly `/var/log/mysql/error.log` or a `<hostname>.err` file in the data directory|
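For example, these commands (a sketch, assuming the mysql client is on your PATH and can reach the server) answer most of the questions above:

```shell
# Print the config files read, in order
mysql --help | grep -A 1 "Default options"

# Ask the running server where its error log lives
mysql -e "SHOW VARIABLES LIKE 'log_error';"

# Confirm the listener port (default 3306)
mysql -e "SHOW VARIABLES LIKE 'port';"
```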
I'm stuck at a command line in Linux without any GUI tools, what do I do? You want to use the `mysql` cli. Here are some useful commands:
|Command|What it does|
|---|---|
|`mysql -u <user> -p <db>`|connect to the given db using your MySQL user/password|
|`\c`|cancels the current command|
|`Ctrl + C`|attempts to cancel the currently executing query|
|`mysql <db> < file.sql`|run a .sql file (or any file with valid SQL statements)|
|`source file.sql`|run a .sql file while the mysql cli is running|
|`gunzip < file.sql.gz \| mysql <db>`|run a zipped .sql file|
|`mysql --tee=output.txt`|start the mysql client and log all commands and output to output.txt|
|`mysql <db> < file.sql > output.txt`|run a file and pipe the output to a file|
|`mysql -e "<query>" <db>`|run a query against the given db|
MySQL backups are much like Oracle backups. There are two types:
- logical backups/dumps. Essentially you are creating a series of SQL statements in a file that will reconstruct your database as of the point-in-time when the logical backup was taken.
- "raw backups". Essentially you are safely backing up the actual data files. Oracle calls these "hot backups".
This is much different from MS SQL Server where backups are binary files and do not map to the underlying mdf/ldf/ndf files.
There are various ways you can do this. I suggest, in order of preference:
- Percona XtraBackup. Free and open source.
- if using InnoDB, try ibbackup (InnoDB Hot Backup, a commercial tool).
- innobackup is another helper script that wraps ibbackup.
- mylvmbackup: useful if you are using LVM (logical volume manager) partitions in Linux. Essentially this is a Perl script that automates backups using LVM snapshots.
- ZRM (http://www.zmanda.com). Very comprehensive.
- Roll your own. I suggest you google something like: MySQL hot backup shell script. Assuming you are using Linux you will want to use a crontab to schedule the backup.
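As a sketch of the roll-your-own approach (the schedule, database name, and backup path are made-up assumptions), a crontab entry for a nightly compressed dump might look like:

```shell
# Every night at 02:30, dump 'mydb' to a dated, compressed file
# (in crontab, % must be escaped as \%)
30 2 * * * mysqldump mydb | gzip > /backup/mydb-$(date +\%F).sql.gz
```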
Let's take a step back. In SQL Server we have the transaction log and in Oracle we have the redo logs. MySQL is no different; the term is "binary logs". You'll want to back these up so you can perform point-in-time recovery if needed. The binary logs store the SQL statements needed to roll your database forward if you, for instance, restore the previous day's raw backups. The name "binary log" is a bit of a misnomer because, depending on your configuration options, the contents are human-readable. You can apply binary logs at any time, to any database, if you really want to. So, unlike SQL Server, you can recover a single table by restoring last night's copy of its file (with InnoDB, one file = one table, usually, if innodb_file_per_table is enabled), searching the binary logs for the relevant SQL statements, and then replaying them. This is quite flexible. Individual commands in the binary logs are timestamped.
mysqlbinlog is the tool that helps you analyze and replay the binary logs.
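For instance (a sketch; the binary log file name and the datetimes are made-up assumptions), inspecting and replaying a window of activity looks like this:

```shell
# Inspect the statements in a binary log, with timestamps
mysqlbinlog /var/lib/mysql/mysql-bin.000042 | less

# Replay only the statements between two points in time
mysqlbinlog --start-datetime="2015-06-01 00:00:00" \
            --stop-datetime="2015-06-01 09:55:00" \
            /var/lib/mysql/mysql-bin.000042 | mysql
```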
A Recovery Scenario Using Raw Backups
Let's say someone wiped out a table and you need to recover it. Here is one way to do that, without using the binary logs. This is merely illustrative of how raw backups work.
/etc/init.d/mysql stop ##stop mysql processes
mv /var/lib/mysql/file /var/lib/mysql/file.tmp ##rename the innodb file, just in case
cp -a /backup/file /var/lib/mysql ##copy your backup file to the correct location
vi /etc/my.cnf ##add the entries that essentially start MySQL in "recovery" mode
/etc/init.d/mysql start ##start mysql in recovery mode
##check error logs to ensure everything is back up.
##replay bin logs if desired
##consider running SHOW TABLE STATUS
##check error logs again
vi /etc/my.cnf ##remove the recovery entries you added above
##stop and start mysql again
Most database managers have the concept of a "logical" backup, which essentially means that the database is scripted out as standard SQL. A recovery using a logical backup is merely a matter of editing the dump file and running it.
mysqldump is the binary that allows you to "dump", or generate the SQL commands, for the given database(s) and options. There are tons of options, too many to go over here. There are even options to script your MySQL database into a format suitable for loading into PostgreSQL. Gotta love open source!!
For the most flexibility, IMHO you should script each table into its own file. It makes life a lot easier later.
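A sketch of the one-file-per-table approach (the database name and output directory are made-up assumptions):

```shell
# Dump each table of 'mydb' into its own .sql file
# (-N suppresses the column-name header row)
for t in $(mysql -N -e "SHOW TABLES FROM mydb"); do
    mysqldump mydb "$t" > "/backup/mydb/$t.sql"
done
```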
Some example commands to whet your appetite:
mysqldump --all-databases > dump.sql ##dumps all databases/schemas to a single .sql file
mysqldump --databases db > dump.sql ##dumps a single database/schema to a file.
Now that you know a little bit about how these tools work, and if you understand just a tad about Linux pipes and redirection, you can start to do really creative things that you really can't do in MS SQL Server. For instance, this command copies a table from one server to another:
mysqldump --host=Server1 MyDB SomeTable | mysql --host=Server2 MyDB
There are primarily two ways to bulk-import data natively in MySQL.
The equivalent of "bcp in" in MySQL is the mysqlimport command. Example:
mysqlimport --local MyDB datafile.txt ##loads datafile.txt into table "datafile"; the table name is derived from the file name
The alternative SQL language extension, equivalent to BULK INSERT, is LOAD DATA. Example:
mysql> LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytbl;
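LOAD DATA also accepts FIELDS and LINES clauses to describe the file format. A sketch for a comma-delimited, quoted file (the file and table names are made up):

```shell
mysql mydb -e "LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE mytbl
               FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
               LINES TERMINATED BY '\n';"
```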
The native MySQL method to bulk-export data is SELECT ... INTO OUTFILE ... FROM. This is similar to MS SQL Server's SELECT...INTO syntax except it writes to text files. Example:
SELECT col1,col2 INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM mytbl;
Hopefully the purpose of these commands is obvious:
SELECT version(), database();
SHOW TABLES FROM <database>;
SHOW TABLE STATUS FROM <database>;
If you find yourself working with MySQL and you need to get up-to-speed quickly on the basics, then this post was for you. I hope you found it useful.