How to Configure a Galera Cluster with MySQL


Introduction

Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are already available to continue serving.

Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.

In this blog post, we will set up an active-active MySQL Galera cluster. For demonstration purposes, we will configure and test three nodes, the smallest recommended cluster size.

 

Prerequisites

To follow along, you will need three Ubuntu servers, each with a non-root user with sudo privileges.

Step 1 — Adding the Galera Repository to All Servers

MySQL, patched to include Galera clustering, isn’t included in the default Ubuntu repositories, so we’ll start by adding the external Ubuntu repository maintained by the Galera project to all three of our servers.

First, we need to add the repository key with the apt-key command, which apt-get will use to verify that the package is authentic.
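The key ID below is the one published for the Galera repository at the time of writing; verify the current ID on the Galera Cluster downloads page before using it:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv BC19DDBA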

 

Now create a file at /etc/apt/sources.list.d/galera.list (for example, with vim /etc/apt/sources.list.d/galera.list) and add the repository entries to it.
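The repository paths below assume the Galera 3 and MySQL-wsrep 5.6 series on Ubuntu 16.04 (xenial); substitute the versions and release codename you are targeting:

deb http://releases.galeracluster.com/mysql-wsrep-5.6/ubuntu xenial main
deb http://releases.galeracluster.com/galera-3/ubuntu xenial main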

Next, create one more file at /etc/apt/preferences.d/galera.pref (for example, with vi /etc/apt/preferences.d/galera.pref) and add the following lines to it.
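This pin tells apt to prefer packages from the Galera repository over the distribution's own MySQL packages:

# Prefer the Galera Cluster repository
Package: *
Pin: origin releases.galeracluster.com
Pin-Priority: 1001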

Now update the package index on all three servers so apt can see the new repository:
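sudo apt-get update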

Once the repositories are updated on all three servers, we’re ready to install MySQL and Galera.

 

Step 2 — Installing MySQL and Galera on All Servers

Run the following command on all three servers to install a version of MySQL patched to work with Galera, as well as Galera and several dependencies:
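The package names below assume the Galera 3 and MySQL-wsrep 5.6 series from the repository added in Step 1; adjust them to match the versions you are installing:

sudo apt-get install galera-3 galera-arbitrator-3 mysql-wsrep-5.6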

During the installation, you will be asked to set a password for the MySQL administrative user. No matter what you choose, this root password will be overwritten with the password from the first node once replication begins.

 

We should have all of the pieces necessary to begin configuring the cluster, but since we’ll be relying on rsync in later steps, let’s make sure it’s installed on all three, as well.
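Run the following on each server:

sudo apt-get install rsync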

This will confirm that the newest version of rsync is already available or prompt you to upgrade or install it.

Once we have installed MySQL on each of the three servers, we can begin configuration.

 

Step 3 — Configuring the First Node

Each node in the cluster needs to have a nearly identical configuration. Because of this, we will do all of the configuration on our first machine, and then copy it to the other nodes.

By default, MySQL is configured to check the /etc/mysql/conf.d directory for additional configuration settings in files ending in .cnf. We will create a file in this directory with all of our cluster-specific directives:

sudo nano /etc/mysql/conf.d/galera.cnf

Copy and paste the following configuration into the file. You will need to change the cluster addresses and the node-specific values for your own servers. We’ll explain what each section means below.

/etc/mysql/conf.d/galera.cnf on the first node
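The file below is a sketch that follows the sections explained next; the wsrep_provider path may differ by distribution, and the addresses and node name are placeholders to replace with your own values:

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://first_node_ip,second_node_ip,third_node_ip"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="this_node_ip"
wsrep_node_name="this_node_name"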

 

The first section modifies or re-asserts MySQL settings that will allow the cluster to function correctly. For example, Galera Cluster won’t work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound to the IP address for localhost. You can learn about the settings in more detail on the Galera Cluster system configuration page.

 

The “Galera Provider Configuration” section configures the MySQL components that provide a write-set replication API. This means Galera in our case, since Galera is a wsrep (write-set replication) provider. We specify the general parameters to configure the initial replication environment. This doesn’t require any customization, but you can learn more about Galera configuration options.

 

The “Galera Cluster Configuration” section defines the cluster, identifying the cluster members by IP address or resolvable domain name and creating a name for the cluster to ensure that members join the correct group. You can change the wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the addresses of your three servers. If your servers have private IP addresses, use them here.

 

The “Galera Synchronization Configuration” section defines how the cluster will communicate and synchronize data between members. This is used only for the state transfer that happens when a node comes online. For our initial setup, we are using rsync, because it’s commonly available and does what we need for now.

 

The “Galera Node Configuration” section clarifies the IP address and the name of the current server. This is helpful when trying to diagnose problems in logs and for referencing each server in multiple ways. The wsrep_node_address must match the address of the machine you’re on, but you can choose any name you want in order to help you identify the node in log files.

Step 4 — Configuring the Remaining Nodes

On each of the remaining nodes, open the configuration file:

sudo nano /etc/mysql/conf.d/galera.cnf

Paste in the configuration you copied from the first node, then update the “Galera Node Configuration” to use the IP address or resolvable domain name for the specific node you’re setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

/etc/mysql/conf.d/galera.cnf
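For example, on the second node that section might look like this, with placeholder values:

# Galera Node Configuration
wsrep_node_address="second_node_ip"
wsrep_node_name="galera_node_2"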

We’re almost ready to bring up the cluster, but before we do, we’ll want to make sure that the appropriate ports are open.
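Galera uses four ports: 3306 for MySQL client connections, 4567 for cluster replication traffic (TCP and UDP), 4568 for incremental state transfer, and 4444 for snapshot state transfer. If you are using ufw, a minimal sketch for opening them on each server:

sudo ufw allow 3306,4567,4568,4444/tcp
sudo ufw allow 4567/udp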

 

Step 5 — Starting the Cluster

To begin, we need to stop the running MySQL service so that our cluster can be brought online.

Stop MySQL on all Three Servers:

Use the command below on all three servers to stop mysql so that we can bring them back up in a cluster:

 

sudo systemctl stop mysql

 

Bring up the First Node:

 

The way we’ve configured our cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. A normal systemctl start mysql would fail because there are no nodes running for the first node to connect with, so we need to pass the --wsrep-new-cluster parameter to the first node we start. However, neither systemd nor service will properly accept the --wsrep-new-cluster argument at this time, so we’ll need to start the first node using the startup script in /etc/init.d. Once you’ve done this, you can start the remaining nodes with systemctl.
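On Ubuntu, that looks like:

sudo /etc/init.d/mysql start --wsrep-new-cluster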

 

 

When this command succeeds, the node is registered as part of the cluster, and we can see it with the following command:
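mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

With only the first node online, the output should report a cluster size of 1:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+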

 

On the remaining nodes, we can start mysql normally. Each one will search the cluster address list for a member that is online, and when it finds one, it will join the cluster.

 

Bring up the Second Node:

Start mysql:

sudo systemctl start mysql

 

We should see our cluster size increase as each node comes online:
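Run the wsrep_cluster_size query again on any node:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

The Value column should now read 2.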

 

Bring up the Third Node:

 

Start mysql:

sudo systemctl start mysql

 

If everything is working well, the cluster size should be set to three:
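mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+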

 

At this point, the entire cluster should be online and communicating. Before we test replication, however, there’s one more configuration detail to attend to.

 

Step 6 — Testing Replication
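A quick way to verify replication is to write on one node and read the same data from another. The database and table names below are arbitrary examples.

On the first node, create a test database and insert a row:

mysql -u root -p -e 'CREATE DATABASE playground; CREATE TABLE playground.equipment (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, type VARCHAR(50)); INSERT INTO playground.equipment (type) VALUES ("slide");'

Then, on the second or third node, read the data back:

mysql -u root -p -e 'SELECT * FROM playground.equipment;'

If replication is working, the row inserted on the first node will appear on every node in the cluster.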

 

Thank you for giving your valuable time to read the above information. Please subscribe for further updates.

KTEXPERTS is active on the social media platforms below.

Facebook : https://www.facebook.com/ktexperts/
LinkedIn : https://www.linkedin.com/company/ktexperts/
Twitter : https://twitter.com/ktexpertsadmin
YouTube : https://www.youtube.com/c/ktexperts
Instagram : https://www.instagram.com/knowledgesharingplatform

Note: Please test these scripts in a non-production environment before running them in production.
