Tag archives: MySQL

MySQL SELECT query with LIKE case sensitive?

Today at work I helped an intern with an interesting problem that I'd like to share.
He was running this kind of query on a MySQL server:

SELECT description FROM service WHERE description LIKE '%cloud%';

It returned these rows:

cloud customer 1
cloud customer 2

but it did not return these two rows he was expecting:

new Cloud infra
Cloud customer 2

LIKE should be case-insensitive… so what was wrong?
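One quick thing worth checking in this situation (a generic sketch reusing the table name from the query above, not necessarily the answer from the full article) is the column's collation, since a binary or case-sensitive collation (_bin, _cs) makes comparisons, including LIKE, case-sensitive:

-- Look at the Collation column of the output:
SHOW FULL COLUMNS FROM service;
-- Force a case-insensitive collation for one query (assuming a utf8 column):
SELECT description FROM service
WHERE description LIKE '%cloud%' COLLATE utf8_general_ci;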

Continue reading MySQL SELECT query with LIKE case sensitive?

Story of a MySQL crash (memory issue)

Today we faced a serious issue with MySQL 5.5.
The server stalled after a TRUNCATE on a big table was aborted. Yes, aborting a TRUNCATE is a bad idea 😉 But that is not the issue I want to talk about. The server crashed, that's the point. The mysqld process restarted automatically and started loading data back into memory (the workload makes massive use of InnoDB tables). The server has 128GB of memory, so the buffer pool was set to 100GB (yes, that's quite huge). The data set is ~100GB. After a few minutes, the server crashed again, but this time complaining about memory.
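The sizing lesson behind this: the buffer pool is not mysqld's only memory consumer, so it cannot take nearly the whole box. A rough way to look at it (a sketch; the comments are rules of thumb, not figures from this incident):

-- Buffer pool size in GB:
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;
-- Per-connection buffers, the data dictionary, the adaptive hash index,
-- etc. all come on top of the buffer pool. 100GB out of 128GB leaves
-- only ~28GB for everything else (including the OS), which can prove
-- too tight under load.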

Continue reading Story of a MySQL crash (memory issue)

Migrating a MySQL database to another server

A customer asked me to copy a whole database from one MySQL server to another.
A few years ago, I would have gone with the classic mysqldump + import solution, but it is very slow, especially the import part (because the MySQL insert buffer is single-threaded). One can also use mysqlimport (LOAD DATA INFILE), but it is still quite slow… With a standard SQL dump, I measured an import speed of 1MB/sec. Quite long if you have gigabytes of data!
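For reference, the classic approach this post replaces boils down to the following (host names and credentials are placeholders):

# Dump on server A, then replay the INSERTs on server B (single-threaded, hence slow):
mysqldump --single-transaction --host=serverA --user=root --password=xxx mydb > mydb.sql
mysql --host=serverB --user=root --password=xxx mydb < mydb.sql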

So I tested xtrabackup for this. This tool is already under heavy testing internally but, to my mind, is not quite ready for production yet. Still, let's try it for this specific task of migrating a database.

First, you need to install xtrabackup and all the other binaries it ships with (especially xbstream). You'll also need at least version 5.5.25 on the remote host (you'll see why). And last but not least, InnoDB must run with

innodb_file_per_table=1

In this blog post, I'll call the source "server A" and the destination "server B".
My MySQL data is stored in /srv/mysql.
On server B, create a destination folder, for example /tmp/test.
This is because the data must first land in a temporary directory; the files are moved into the live data directory later on, table by table.
On server A, launch the following command:

time innobackupex --export --databases 'mydb' --no-lock \
--stream=xbstream --tmpdir=/tmp --use-memory=128MB \
--user=backup --password=XXXXX --parallel=4 /srv/mysql \
| ssh root@10.0.0.224 "xbstream --directory=/tmp/test -x"

A few explanations:

  • --export: adds specific data that will be needed at reimport
  • --databases: specifies the database to copy.
  • --no-lock: by default, a FLUSH TABLES WITH READ LOCK is emitted to ensure the whole backup is consistent. I chose not to use it, as I do not care about the binary log position of the backup (used for replication).
  • --stream=xbstream: use xbstream as the streaming format (more powerful than tar)
  • --use-memory: memory that can be used for several tasks
  • --parallel=4: dump 4 tables in parallel
  • I pipe the stream to ssh and execute xbstream on the remote host.
  • I do not use --compress as the network is not the bottleneck in my case.

The dump is complete! I achieved a rate of 11.3 MB/sec with this: far better than mysqldump / mysqlimport! I'm sure it can go faster with a bit of tuning.
Take care of file ownership: the copy was done as root, but your MySQL data is probably owned by someone else (or it should be!).
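For example (a one-liner, assuming your data files should belong to the mysql user):

chown -R mysql:mysql /tmp/test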

Then, prepare the data for export:

xtrabackup_55 --prepare --export --target-dir=/tmp/test

This will "prepare" the data (applying the InnoDB log, …) and, most importantly, will create the .exp files that contain table metadata. These are mandatory for the import in the next step!

We can see xtrabackup working:

[...]
xtrabackup: export option is specified.
xtrabackup: export metadata of table 'mydb/performer' to file `./mydb/performer.exp` (1 indexes)
xtrabackup: name=PRIMARY, id.low=443, page=3
[...]

OK, now we need to create the new database and import the schema.
On server B, do:

CREATE DATABASE mydb;

We also need a temporary database (you'll see why later…):

CREATE DATABASE test;

Then, we have two choices:

  1. create all tables, discard all tablespaces, and import everything at once
  2. do the same operations, but one table at a time

I first thought there was a bug with method 1 (https://bugs.launchpad.net/percona-server/+bug/1052960), but it appears this bug also hits with method 2. Nevertheless, I've written a script that handles method 2 all by itself.

NOTE THAT YOU NEED Percona Server 5.5.25 v27.1 if you don't want to hit the bug I was talking about. This bug crashes MySQL and leaves it in a state where it cannot start again… You've been warned.
First, let's dump all the table schemas. On server A, do:

mysqldump --routines --triggers --single-transaction --no-data \
--host=serverA --user=root --password=xxx \
--result-file=schemas.sql mydb

And import it INTO THE TEMPORARY DATABASE:

cat schemas.sql | mysql -uroot -hSERVER_B -p test

Execute the following statement in MySQL (on server B) to tell MySQL that we are about to import data:

SET GLOBAL innodb_import_table_from_xtrabackup = 1;
/* If you run a version < 5.5.10, despite what I just
   said about the minimum version required, the query is: */
SET GLOBAL innodb_expand_import = 1;

Then, let's use the following bash script: https://www.olivierdoucet.info/blog/wp-content/uploads/2012/09/expand_import.sh.txt

A few explanations:

For each table, the script does the following (a full sketch of the loop follows below):

ALTER TABLE xxx DISCARD TABLESPACE;
mv xxx.ibd xxx.exp /srv/mysql/mydb;
ALTER TABLE xxx IMPORT TABLESPACE;

This bash script uses the .my.cnf file in your home directory for credentials (and other default values). Please make sure you can reach the destination database with these credentials.
ALL STEPS ARE REQUIRED. If you miss one (the SET GLOBAL, chown, chmod, …) you will probably get an error (like 'Got error -1 from storage engine'). At that point, you'd better start over (and drop the destination database, which is incomplete).
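For reference, here is a minimal sketch of the kind of loop the script implements (my own reconstruction, not the script itself; it assumes credentials in ~/.my.cnf, the prepared backup in /tmp/test/mydb, the schemas already loaded into the temporary database test, and a datadir of /srv/mysql):

#!/bin/bash
# Per-table import loop (method 2), run on server B.
SRC=/tmp/test/mydb
DST=/srv/mysql/mydb
for ibd in "$SRC"/*.ibd; do
    table=$(basename "$ibd" .ibd)
    # Recreate the table structure from the temporary database...
    mysql mydb -e "CREATE TABLE \`$table\` LIKE \`test\`.\`$table\`;"
    # ...throw away its empty tablespace...
    mysql mydb -e "ALTER TABLE \`$table\` DISCARD TABLESPACE;"
    # ...move the backed-up .ibd/.exp files in place, owned by mysql...
    mv "$SRC/$table.ibd" "$SRC/$table.exp" "$DST/"
    chown mysql:mysql "$DST/$table.ibd" "$DST/$table.exp"
    # ...and import the tablespace.
    mysql mydb -e "ALTER TABLE \`$table\` IMPORT TABLESPACE;"
done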

If the import works, you will see lines like these in the log:

[...]
InnoDB: Import: The extended import of mydb/mytable is being started.
InnoDB: Import: 2 indexes have been detected.
InnoDB: Progress in %: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 done.
[...]

Note that when importing huge InnoDB tables, there is (was) a lock on the dictionary while the .ibd file (the data file) is scanned, so the whole server may be locked… and if the operation took too long, the server crashed. This is bug https://bugs.launchpad.net/percona-server/+bug/684829
This bug has since been fixed, and as you need at least version 5.5.25 anyway, you should not hit this problem.


Conclusion

This method is 10 times faster than mysqldump / mysqlimport. But as you can see, it carries real risks and bugs remain. The dump part is really safe, so I would recommend testing the import on a dev server first, before doing this in production.

Xtrabackup is really an amazing tool, but it still suffers from some nasty bugs. I'm sure I'll use it in production in a few months, once it is perfectly stable for all tasks.


Customer case: finding an unusual cause of max_user_connections

The last few days were very busy dealing with a problem on a customer's MySQL server. My company offers fully managed hosting services, so it was up to us to investigate the trouble. I'll try to explain some of the checks I did; maybe this can give you some ideas when you deal with MySQL troubleshooting yourself.
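As a starting point, when max_user_connections fires, the first reflex is usually to see who is holding the connections (a generic sketch, not the specific checks from the full post):

-- Connection count per user right now:
SELECT user, COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY user
ORDER BY connections DESC;

-- And the limits currently in effect:
SHOW VARIABLES LIKE 'max_user_connections';
SHOW VARIABLES LIKE 'max_connections';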

Continue reading Customer case: finding an unusual cause of max_user_connections

Storing decimal numbers in MySQL

MySQL offers several column types to store a decimal number (a floating-point value). But beware: they are not all equivalent. A simple demonstration:

  • Take a 'test' table with, among other columns, a field of type FLOAT(8,2).
  • Execute the following query:

INSERT INTO `test` (id, flottant) VALUES(4, '446351.74');

  • Then read the row back:

SELECT * FROM `test` WHERE id=4;

Here is the result:

446351.75

What?? .75 and not .74 as I asked? Indeed, and that is expected given the storage method MySQL uses.
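If you need exact storage, DECIMAL behaves differently (a quick sketch; the table name is made up):

CREATE TABLE test_exact (id INT, flottant DECIMAL(8,2));
INSERT INTO test_exact VALUES (4, '446351.74');
SELECT flottant FROM test_exact WHERE id = 4;
-- Returns 446351.74: DECIMAL stores the exact decimal value, whereas a
-- 4-byte FLOAT only keeps about 7 significant digits, so 446351.74
-- (8 significant digits) is rounded to the nearest representable
-- value, 446351.75.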
Continue reading Storing decimal numbers in MySQL

The Falcon storage engine for MySQL

To store your tables, MySQL uses what is called a "storage engine". This engine defines how your data is laid out on disk and in memory, and above all how MySQL accesses it (for reads, updates, and deletes). The best-known engines are MyISAM and InnoDB. Only the latter is "transactional", meaning it can apply a series of updates to the database as a single unit, or roll changes back. But the goal of this post is not to explain all that; it is to go much further and tell you how Falcon works overall.
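As a quick illustration of what "transactional" means (a generic sketch, not taken from the post; the accounts table is made up):

-- With a transactional engine such as InnoDB (or Falcon), a group of
-- changes is applied atomically, or not at all:
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- make both changes permanent
-- Had anything gone wrong before COMMIT, ROLLBACK would have undone
-- both updates; a MyISAM table offers no such guarantee.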
Continue reading The Falcon storage engine for MySQL