Category Archive: Databases

Author: human

Installing and configuring MySQL Cluster on CentOS 7

MySQL Cluster is designed to provide a MySQL-compatible database with high availability and low latency. The MySQL Cluster technology is implemented through the NDB (Network DataBase) and NDBCLUSTER storage engines and provides shared-nothing clustering and auto-sharding for MySQL database systems. In the shared-nothing architecture, each node has its own memory and disk; the use of shared storage such as NFS or SANs is neither recommended nor supported.

To implement a MySQL Cluster, we have to install three types of nodes. Each node type will be installed on its own server. The components are:

1. Management Node — NDB_MGMD/MGM
The cluster management server is used to manage the other nodes of the cluster. From the management node we can create and configure new nodes, and restart, delete, or back up nodes in the cluster.

2. Data Nodes — NDBD/NDB
This is the layer where data is synchronized and replicated between nodes.

3. SQL Nodes — MySQLD/API
The interface servers that are used by the applications to connect to the database cluster.

In this tutorial, I will guide you through the installation and configuration of a MySQL Cluster on CentOS 7. We will configure the management node, two data nodes, and two SQL nodes.

Prerequisites

  • The OS is CentOS 7 — 64bit.
  • 5 CentOS servers or virtual machines. I will use the hostnames and IP addresses as shown below:
    • Management Node
      db1 = 192.168.1.120
    • Data Nodes
      db2 = 192.168.1.121
      db3 = 192.168.1.122
    • SQL Nodes
      db4 = 192.168.1.123
      db5 = 192.168.1.124

Step 1 — Setup Management Node

The first step is to create the "Management Node" on the CentOS 7 server db1 with IP 192.168.1.120. Make sure you are logged into the db1 server as the root user.

A. Download the MySQL Cluster software

I'll download it from the MySQL site with wget. I'm using the "Red Hat Enterprise Linux 7 / Oracle Linux 7 (x86, 64-bit), RPM Bundle" here, which is compatible with CentOS 7. Then extract the tar file.

cd ~
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar
tar -xvf MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar


B. Install and Remove Packages

Before you install the rpm packages for MySQL Cluster, you need to install perl-Data-Dumper, which is required by the MySQL-Cluster server, and remove mariadb-libs, which conflicts with MySQL Cluster.

yum -y install perl-Data-Dumper
yum -y remove mariadb-libs

C. Install MySQL Cluster

Install MySQL Cluster package with these rpm commands:

cd ~
rpm -Uvh MySQL-Cluster-client-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-server-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-shared-gpl-7.4.10-1.el7.x86_64.rpm

Make sure there are no errors.

D. Configure MySQL Cluster

Create a new directory for the configuration files. I will use the "/var/lib/mysql-cluster" directory.

mkdir -p /var/lib/mysql-cluster

Then create a new configuration file for cluster management named "config.ini" in the mysql-cluster directory.

cd /var/lib/mysql-cluster
vi config.ini

Paste the configuration below:
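
The configuration itself was lost when this page was archived. A minimal sketch for the five-node layout above (the node IDs, DataMemory, and IndexMemory values are illustrative assumptions, not the author's original values):

```ini
[ndbd default]
# Two data nodes, so two replicas of each fragment
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[ndb_mgmd]
# Management node (db1)
NodeId=1
HostName=192.168.1.120
DataDir=/var/lib/mysql-cluster

[ndbd]
# Data node (db2)
NodeId=2
HostName=192.168.1.121
DataDir=/var/lib/mysql-cluster

[ndbd]
# Data node (db3)
NodeId=3
HostName=192.168.1.122
DataDir=/var/lib/mysql-cluster

[mysqld]
# SQL node (db4)
NodeId=4
HostName=192.168.1.123

[mysqld]
# SQL node (db5)
NodeId=5
HostName=192.168.1.124
```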

Save the file and exit.

E. Start the Management Node

Next start the management node with the command below:

ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini

The result should be similar to this:

MySQL Cluster Management Server mysql-5.6.28 ndb-7.4.10
2016-03-22 19:26:08 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2016-03-22 19:26:08 [MgmtSrvr] INFO     -- Successfully created config directory

The management node is started, now you can use the command "ndb_mgm" to monitor the node:

ndb_mgm
show

Check cluster status.

You can see the management node has been started with mysql-5.6.28 and ndb-7.4.10.

Step 2 — Setup the MySQL Cluster Data Nodes

We will use 2 CentOS servers for the Data Nodes.

  1. db2 = 192.168.1.121
  2. db3 = 192.168.1.122

A. Login as root user and download the MySQL Cluster software

Login to the db2 server with ssh:

ssh root@192.168.1.121

Then download the MySQL Cluster package and extract it:

cd ~
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar
tar -xvf MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar

B. Install and Remove Packages

Install perl-Data-Dumper and remove the mariadb-libs:

yum -y install perl-Data-Dumper
yum -y remove mariadb-libs

C. Install MySQL Cluster

Now we can install the MySQL Cluster packages for the Data Nodes with these rpm commands:

cd ~
rpm -Uvh MySQL-Cluster-client-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-server-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-shared-gpl-7.4.10-1.el7.x86_64.rpm

Make sure there are no errors.

D. Configure Data Node

Create a new configuration file in the /etc directory with the vi editor:

vi /etc/my.cnf

Paste the configuration below:
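
The my.cnf contents are missing from this archive. A minimal data-node sketch, assuming the management node at 192.168.1.120 as configured in step 1:

```ini
[mysqld]
# Enable the NDB storage engine on this node
ndbcluster
# IP address of the management node (db1)
ndb-connectstring=192.168.1.120

[mysql_cluster]
# Connectstring used by the ndbd daemon
ndb-connectstring=192.168.1.120
```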

Save the file and exit.

Then create the new directory for the database data that we defined in the management node config file "config.ini".

mkdir -p /var/lib/mysql-cluster

Now start the data node/ndbd:

ndbd

The result:

2016-03-22 19:35:56 [ndbd] INFO     -- Angel connected to '192.168.1.120:1186'
2016-03-22 19:35:56 [ndbd] INFO     -- Angel allocated nodeid: 2

MySQL cluster node is online.

Data Node db2 is connected to the management node at 192.168.1.120.

E. Repeat steps 2.A to 2.D on the db3 server.

As we have two data nodes, please repeat steps 2.A to 2.D on our second data node.

Step 3 — Setup SQL Node

This step contains the setup for the SQL Nodes that provide applications access to the database. We use two CentOS servers for the SQL Nodes:

  1. db4 = 192.168.1.123
  2. db5 = 192.168.1.124

A. Log in and Download MySQL Cluster

Login to the db4 server as root user:

ssh root@192.168.1.123

And download MySQL Cluster package:

cd ~
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-7.4/MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar
tar -xvf MySQL-Cluster-gpl-7.4.10-1.el7.x86_64.rpm-bundle.tar

B. Install and Remove Packages

Install perl-Data-Dumper and remove the mariadb-libs package that conflicts with MySQL Cluster.

yum -y install perl-Data-Dumper
yum -y remove mariadb-libs

C. Install MySQL Cluster

Install the MySQL Cluster server, client and shared package with the rpm commands below:

cd ~
rpm -Uvh MySQL-Cluster-client-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-server-gpl-7.4.10-1.el7.x86_64.rpm
rpm -Uvh MySQL-Cluster-shared-gpl-7.4.10-1.el7.x86_64.rpm

D. Configure the SQL Node

Create a new my.cnf file in the /etc directory:

vi /etc/my.cnf

And paste the configuration below:
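
The config contents are missing here as well. A minimal SQL-node sketch, again assuming the management node at 192.168.1.120 (the default_storage_engine line is my assumption, added so new tables are clustered by default):

```ini
[mysqld]
# Enable the NDB storage engine for this MySQL server
ndbcluster
# IP address of the management node (db1)
ndb-connectstring=192.168.1.120
# Assumption: make ndbcluster the default engine for new tables
default_storage_engine=ndbcluster

[mysql_cluster]
ndb-connectstring=192.168.1.120
```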

Save the file and exit the editor.

Start the SQL Node by starting the MySQL server:

service mysql start

E. Redo step 3.A — 3.D on db5 server.

Please redo the steps 3.A — 3.D on the second SQL server (db5).

Step 4 — Monitor the Cluster

To see the cluster status, we have to log into the management node db1.

ssh root@192.168.1.120

We can use the ndb_mgm command to see the cluster status:

ndb_mgm
ndb_mgm> show

Check the NDB cluster state.

Another useful command is:

ndb_mgm -e "all status"
ndb_mgm -e "all report memory"

Step 5 — Testing the Cluster

To perform a test on our new MySQL Cluster, we have to log into one of the SQL Nodes, the db4 or db5 server.

Login to the db4 server:

ssh root@192.168.1.123

Change the default MySQL password that is stored in the ".mysql_secret" file in root's home directory:

cd ~
cat .mysql_secret

This is my sample:

# The random password set for the root user at Tue Mar 22 19:44:07 2016 (local time): qna3AwbJMuOnw23T

Now change the password with command below:

mysql_secure_installation

Type your old MySQL password, then type the new one, and press enter to confirm the remaining prompts.

If all is done, you can login to the MySQL shell with your password:

mysql -u root -p

After you have logged in, create a new root user with host "%", so that we can access MySQL from outside:

CREATE USER 'root'@'%' IDENTIFIED BY 'aqwe123';

Replace aqwe123 with your own secure password! Now you can see the new root user with host "%" in the MySQL user list:

select user, host, password from mysql.user;

And grant the new root user read and write access from the remote node:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '*94CC7BF027327993D738E11…(Encrypted PASSWORD)' WITH GRANT OPTION;

Grant privileges.

Now try to create a new database from db4 server and you will see the database on db5 too.
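
As a sketch of such a test (the cluster_test name and table layout are made up for illustration; the table must use the ndbcluster engine so it is stored on the data nodes and visible from every SQL node):

```sql
-- On db4: create a database and an NDB table, then insert a row
CREATE DATABASE cluster_test;
CREATE TABLE cluster_test.t1 (
    id INT NOT NULL PRIMARY KEY,
    note VARCHAR(32)
) ENGINE=ndbcluster;
INSERT INTO cluster_test.t1 VALUES (1, 'written on db4');

-- On db5: the same database, table, and row should be visible
SHOW DATABASES;
SELECT * FROM cluster_test.t1;
```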

This is just a sample result for testing the cluster data replication.

All nodes are online.

The MySQL Cluster has been setup successfully on CentOS 7 with 5 server nodes.

Conclusion

MySQL Cluster is a technology that provides high availability and redundancy for MySQL databases. It uses NDB or NDBCLUSTER as the storage engine and provides shared-nothing clustering and auto-sharding for MySQL databases. To implement the cluster, we need three components: a Management Node (MGM), Data Nodes (NDB), and SQL Nodes (API). Each node must have its own memory and disk; using network storage such as NFS is not recommended. To install MySQL Cluster on a minimal CentOS 7 system, we have to remove the mariadb-libs package, which conflicts with MySQL-Cluster-server, and install the perl-Data-Dumper package, which MySQL-Cluster-server requires. A MySQL Cluster is easy to install and configure on multiple CentOS servers.

Author: human

Fault-tolerant MySQL solutions: an overview


Option 1: Pacemaker + Heartbeat + DRBD + MySQL

Option 2: Master-Master Manager

Option 3: MHA (Master High Availability Manager and Tools for MySQL)

Option 4: GTID (Global Transaction ID)

Option 5: MySQL Cluster

Galera Cluster for MySQL vs MySQL (NDB) Cluster: A High Level Comparison

 
Author: human

Installing and configuring MySQL Router on Debian/Ubuntu: a MySQL load balancer

Installing MySQL Router on Debian/Ubuntu

What is mysqlrouter
As the name suggests, mysqlrouter is a load-balancing solution for MySQL databases. With it you can build a fault-tolerant setup that routes queries across your databases and, depending on the config, reads and writes or only reads data from them. There is little information available about this solution, so I decided to write a short tutorial on building it and getting it running quickly.

Installing mysql router

Install the dependencies:

sudo apt-get install build-essential cmake doxygen libpolarssl-dev libnl-utils libmysqlclient-dev checkinstall

Download, build, and install mysqlrouter.

Building and installing mysqlrouter
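
The actual build commands were not preserved in this archive. A sketch of a typical out-of-source CMake build matching the dependencies installed above (the tarball name and download URL are assumptions for version 2.0.3 and may differ from what the author used):

```shell
# Assumption: source tarball name/URL; adjust to the version you actually download
wget https://dev.mysql.com/get/Downloads/MySQL-Router/mysql-router-2.0.3-src.tar.gz
tar -xzf mysql-router-2.0.3-src.tar.gz
cd mysql-router-2.0.3

# Out-of-source CMake build
mkdir build && cd build
cmake ..
make

# checkinstall builds and installs a .deb instead of a plain "make install",
# so the package can later be removed cleanly with apt/dpkg
sudo checkinstall
```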

The installation is finished.
Check the version:

human@router-2-74:~$ mysqlrouter --version
MySQL Router v2.0.3 on Linux (64-bit) (GPL community edition)

Configuring mysqlrouter

Create a config file in the /etc/mysql directory:
sudo nano /etc/mysql/mysqlrouter.ini

Paste the following config there:
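
The config itself is missing from this archive. A sketch reconstructed from the description below, using the MySQL Router 2.0 [routing:*] section format (the section names and the read-write/read-only modes are my assumptions):

```ini
[logger]
level = INFO

[routing:direct]
# Port 7001 forwards straight to mysql-1-81
bind_address = 0.0.0.0:7001
destinations = mysql-1-81:3306
mode = read-write

[routing:balancing]
# Port 7002 balances across both nodes
bind_address = 0.0.0.0:7002
destinations = mysql-1-81:3306,mysql-1-82:3306
mode = read-only
```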

Where:
bind_address is the address and port on which MySQL will be available. The config specifies 0.0.0.0, meaning it is reachable from the network, and there are two ports, 7001 and 7002: on the IP of the mysqlrouter machine (say, 10.4.1.74), port 7001 exposes mysql-1-81, while 10.4.1.74:7002 balances between the two servers mysql-1-81 and mysql-1-82.
destinations lists the nodes running MySQL; I put their names in the hosts file (/etc/hosts).

Starting mysqlrouter

Start it from the console:

sudo mysqlrouter -c /etc/mysql/mysqlrouter.ini
This starts mysqlrouter with the config file /etc/mysql/mysqlrouter.ini, so you can run and test mysqlrouter with a different config simply by pointing it at another file.

To connect to the database through a port on the mysqlrouter server, use:
mysql -uroot --port 7001 --protocol tcp

Author: human

Setting up MySQL master-master replication



#1. Environment configuration

We have 2 nodes:
10.4.1.81 (mysql-1-81)
10.4.1.82 (mysql-1-82)
The operating system is Debian 8 Jessie x64.

#2. Installing MySQL 5.5

Install MySQL itself:

sudo su
aptitude install mysql-server -y

Edit the config to allow connections from the network:

nano /etc/mysql/my.cnf

Comment out the line:

bind-address = 127.0.0.1

Find the following lines in the config and bring them to this form: server-id = 1 on the first server and server-id = 2 on the second; everything else is identical.

[…]
[mysqld]

# Unique Server ID
server-id = 1

# Do not replicate the following databases
binlog-ignore-db = mysql
replicate-ignore-db = mysql

# Auto increment offset
auto-increment-increment = 2

# Do not replicate sql queries for the local server ID
replicate-same-server-id = 0

# Start auto-increment values at 1
auto-increment-offset = 1

# Delete binlog data after 10 days
expire_logs_days = 10

# Max binlog size
max_binlog_size = 500M

# Binlog file path
log_bin = /var/log/mysql/mysql-bin.log

[…]

Restart MySQL (`service mysql restart`).


#3. Setting up replication.

Server #1: connect to MySQL.
mysql -u root -p
create user 'replicator'@'%' identified by 'password';
create database example_DB;
grant replication slave on *.* to 'replicator'@'%';
FLUSH PRIVILEGES;
show master status;
Write down the output:
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000010 |     1034 |              | mysql            |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
quit
Don't forget to replace password with your own password.


Server #2: connect to MySQL.
mysql -u root -p
create user 'replicator'@'%' identified by 'password';
create database example_DB;
grant replication slave on *.* to 'replicator'@'%';
FLUSH PRIVILEGES;
show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000011 |      514 |              | mysql            |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
stop slave;
CHANGE MASTER TO MASTER_HOST = '10.4.1.81', MASTER_USER = 'replicator', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000010', MASTER_LOG_POS = 1034;
start slave;
quit
Don't forget to replace password with your own password.


Server #1: connect to MySQL again.
mysql -u root -p
stop slave;
CHANGE MASTER TO MASTER_HOST = '10.4.1.82', MASTER_USER = 'replicator', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000011', MASTER_LOG_POS = 514;
start slave;
Check the replication status:
show slave status\G
We get:

*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.4.1.82
Master_User: replicator
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000012
Read_Master_Log_Pos: 107
Relay_Log_File: mysqld-relay-bin.000003
Relay_Log_Pos: 253
Relay_Master_Log_File: mysql-bin.000012
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB: mysql
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 107
Relay_Log_Space: 556
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
1 row in set (0.00 sec)

The other server will show similar output.

Replication setup is now complete.

#4. Notes.
We assume that replication is being set up on "clean" servers without databases. Otherwise, before configuring, you must stop the first server, dump its database, and import the dump to the second server. Important: do not unlock the database until replication is configured. Below are a few commands that will help you copy all the data to the new server if your server already had data.

FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;
Dump the data into all_databases.sql:

mysqldump --defaults-file=/etc/mysql/debian.cnf -cCeQ --hex-blob --quote-names --routines --events --triggers --all-databases -r all_databases.sql

Now import the dump on the second server:

mysql --defaults-file=/etc/mysql/debian.cnf < all_databases.sql

If something went wrong, check the logs in /var/log/syslog.

# Moving on to configuring load balancing with mysql-proxy.



On the mysql-proxy1 and mysql-proxy2 servers, install the required components (here and below, all commands are executed as the superuser).

sudo su
apt-get install mysql-proxy

nano /etc/default/mysql-proxy

ENABLED="true"
OPTIONS="--proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua --proxy-address=0.0.0.0:3307 --proxy-backend-addresses=10.4.1.81:3306,10.4.1.82:3306"

$ sudo chmod 0660 /etc/mysql/mysql-proxy.cnf

Start mysql-proxy:

$ sudo /etc/init.d/mysql-proxy start
* Starting MySQL Proxy daemon… [ OK ]

Test port 3307:

$ mysql -hlocalhost -P3307 -uroot -p

The mysql-proxy package contains the following files:

$ dpkg -L mysql-proxy
/.
/etc
/etc/default
/etc/default/mysql-proxy
/etc/init.d
/etc/init.d/mysql-proxy
/usr
/usr/share
/usr/share/mysql-proxy
/usr/share/mysql-proxy/active-queries.lua
/usr/share/mysql-proxy/active-transactions.lua
/usr/share/mysql-proxy/admin-sql.lua
/usr/share/mysql-proxy/admin.lua
/usr/share/mysql-proxy/analyze-query.lua
/usr/share/mysql-proxy/auditing.lua
/usr/share/mysql-proxy/commit-obfuscator.lua
/usr/share/mysql-proxy/histogram.lua
/usr/share/mysql-proxy/load-multi.lua
/usr/share/mysql-proxy/ro-balance.lua
/usr/share/mysql-proxy/ro-pooling.lua
/usr/share/mysql-proxy/rw-splitting.lua
/usr/share/mysql-proxy/xtab.lua
/usr/share/doc
/usr/share/doc/mysql-proxy
/usr/share/doc/mysql-proxy/README.TESTS.gz
/usr/share/doc/mysql-proxy/README
/usr/share/doc/mysql-proxy/copyright
/usr/share/doc/mysql-proxy/changelog.Debian.gz
/usr/lib
/usr/lib/libmysql-chassis-glibext.so.0.0.0
/usr/lib/libmysql-chassis-timing.so.0.0.0
/usr/lib/libmysql-chassis.so.0.0.0
/usr/lib/libmysql-proxy.so.0.0.0
/usr/lib/mysql-proxy
/usr/lib/mysql-proxy/lua
/usr/lib/mysql-proxy/lua/proxy
/usr/lib/mysql-proxy/lua/proxy/auto-config.lua
/usr/lib/mysql-proxy/lua/proxy/balance.lua
/usr/lib/mysql-proxy/lua/proxy/commands.lua
/usr/lib/mysql-proxy/lua/proxy/parser.lua
/usr/lib/mysql-proxy/lua/proxy/tokenizer.lua
/usr/lib/mysql-proxy/lua/proxy/test.lua
/usr/lib/mysql-proxy/lua/admin.lua
/usr/lib/mysql-proxy/lua/lfs.so
/usr/lib/mysql-proxy/lua/glib2.so
/usr/lib/mysql-proxy/lua/chassis.so
/usr/lib/mysql-proxy/lua/mysql.so
/usr/lib/mysql-proxy/lua/lpeg.so
/usr/lib/mysql-proxy/lua/posix.so
/usr/lib/mysql-proxy/plugins
/usr/lib/mysql-proxy/plugins/libadmin.so
/usr/lib/mysql-proxy/plugins/libproxy.so
/usr/lib/mysql-proxy/plugins/libreplicant.so
/usr/lib/mysql-proxy/plugins/libdebug.so
/usr/lib/pkgconfig
/usr/lib/pkgconfig/mysql-proxy.pc
/usr/lib/pkgconfig/mysql-chassis.pc
/usr/bin
/usr/bin/mysql-binlog-dump
/usr/bin/mysql-myisam-dump
/usr/bin/mysql-proxy
/usr/include
/usr/include/network-mysqld.h
/usr/include/network-mysqld-lua.h
/usr/include/network-mysqld-proto.h
/usr/include/network-mysqld-binlog.h
/usr/include/network-mysqld-packet.h
/usr/include/network-mysqld-masterinfo.h
/usr/include/network-conn-pool.h
/usr/include/network-conn-pool-lua.h
/usr/include/network-queue.h
/usr/include/network-socket.h
/usr/include/network-socket-lua.h
/usr/include/network-address.h
/usr/include/network-address-lua.h
/usr/include/sys-pedantic.h
/usr/include/chassis-plugin.h
/usr/include/chassis-log.h
/usr/include/chassis-keyfile.h
/usr/include/chassis-mainloop.h
/usr/include/chassis-path.h
/usr/include/chassis-filemode.h
/usr/include/chassis-limits.h
/usr/include/chassis-event-thread.h
/usr/include/chassis-gtimeval.h
/usr/include/glib-ext.h
/usr/include/glib-ext-ref.h
/usr/include/string-len.h
/usr/include/lua-load-factory.h
/usr/include/lua-scope.h
/usr/include/lua-env.h
/usr/include/network-injection.h
/usr/include/network-injection-lua.h
/usr/include/chassis-shutdown-hooks.h
/usr/include/chassis-exports.h
/usr/include/network-exports.h
/usr/include/network-backend.h
/usr/include/network-backend-lua.h
/usr/include/disable-dtrace.h
/usr/include/lua-registry-keys.h
/usr/include/chassis-stats.h
/usr/include/chassis-timings.h
/usr/include/chassis-frontend.h
/usr/include/chassis-options.h
/usr/include/chassis-win32-service.h
/usr/include/chassis-unix-daemon.h
/usr/include/my_rdtsc.h
/usr/lib/libmysql-chassis-glibext.so.0
/usr/lib/libmysql-chassis-glibext.so
/usr/lib/libmysql-proxy.so
/usr/lib/libmysql-chassis-timing.so.0
/usr/lib/libmysql-chassis-timing.so
/usr/lib/libmysql-proxy.so.0
/usr/lib/libmysql-chassis.so.0
/usr/lib/libmysql-chassis.so