High Availability WordPress Dedicated Server Hosting

October 31, 2015 – 12:51 pm

In this example we are going to look at building a highly available, resilient setup for hosting a busy website, such as a WordPress site. Our environment will consist of 5 servers: 2 will be used for load balancing with failover, and the other 3 will be used to build a MariaDB Galera cluster and a distributed, replicated filesystem using GlusterFS. You can use as many servers as you require.

CentOS 7 – This guide is for CentOS 7 only; the steps will not translate cleanly to older versions.

#Begin
We will start with the cluster.
The first thing we need to do is update the systems and ensure you are running the latest patches.
NOTE: Until further notice, the following steps need to be repeated on every node you intend to run in your cluster, but not on your load balancers; we will cover the load balancers later.

#Update the systems
yum -y update

#Install the repositories
Once the system is up to date you need to install the required repositories.
The first repository we need is the EPEL repo. At the time of writing, it can be installed with the following command:
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

The next repository we need is the MariaDB repo. We can add this ourselves with the below command:

echo "[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1" > /etc/yum.repos.d/MariaDB.repo
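You can confirm that yum now sees the new repository before continuing:

yum repolist enabled | grep -i mariadb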

#Install the required software
Now we are ready to install the software.
The following command should get the job done in one shot:

yum -y install socat MariaDB-Galera-server MariaDB-client rsync galera glusterfs-server glusterfs glusterfs-libs glusterfs-fuse php httpd mod_ssl php-gd php-mbstring php-xml php-common php-mysqlnd php-pecl net-tools

#Prepare the environment
Now that all software is installed we can begin our basic configuration tasks, starting with the environment. NOTE: We are still repeating these steps on each node.

#Security
First we need to set SELinux to permissive mode. Execute the following command:
setenforce 0
You should also edit the config file in /etc/selinux/config to ensure that permissive mode is retained through reboots. Failure to do so will result in a non-functional setup after a reboot.
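For example, assuming the stock /etc/selinux/config that ships with CentOS 7, a one-liner takes care of it:

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config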

#Firewall
To ensure everything can communicate we need to add a few rules. NOTE: This assumes the firewalld configuration on your freshly installed system is still at its defaults. Use the following commands to open the required ports.

firewall-cmd --permanent --zone=public --add-port=22/tcp #Change this to match your SSH port
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=3306/tcp
firewall-cmd --permanent --zone=public --add-port=4567/tcp
firewall-cmd --permanent --zone=public --add-port=4568/tcp
firewall-cmd --permanent --zone=public --add-port=4444/tcp
firewall-cmd --permanent --zone=public --add-port=111/tcp
firewall-cmd --permanent --zone=public --add-port=111/udp
firewall-cmd --permanent --zone=public --add-port=2049/tcp
firewall-cmd --permanent --zone=public --add-port=24007/tcp
firewall-cmd --permanent --zone=public --add-port=38465-38469/tcp
firewall-cmd --permanent --zone=public --add-port=49152/tcp #GlusterFS brick

NOTE: Each brick can use its own port, so be sure to adjust this when required. If you have a problem with communication, disable the firewall and use the gluster volume status command (shown later in this guide) to determine the correct brick ports before enabling the firewall again.

To make these permanent rules take effect at runtime we need to restart the firewall service:

systemctl restart firewalld
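You can verify that the ports are now open with:

firewall-cmd --zone=public --list-ports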

#Network
Now we need to ensure that each of our systems can quickly find each other without the need for any external resolver queries. To do this we need to edit the /etc/hosts file. Below is an example of what you can do.
echo "10.10.0.1 server1 server1.example.com
10.10.0.2 server2 server2.example.com
10.10.0.3 server3 server3.example.com" >> /etc/hosts
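You can confirm that each name now resolves locally, for example:

getent hosts server2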

#Database
To start we first need to initialise and secure MariaDB. Make sure the service is running on each node (systemctl start mysql), then run the following command and answer yes to all the questions:

/usr/bin/mysql_secure_installation

Once the above is complete you will need to configure the user privileges for the MariaDB cluster. You can do this from the MySQL console by typing the following commands. NOTE: You need to do this on each system.

mysql -u root --password=thesecurepassyoujustmade mysql
DELETE FROM mysql.user WHERE user='';
GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'asecurepass';
GRANT USAGE ON *.* TO 'sst_user'@'%' IDENTIFIED BY 'asecurepass';
GRANT ALL PRIVILEGES ON *.* TO 'sst_user'@'%';
FLUSH PRIVILEGES;
quit

NOTE: You are free to restrict the hosts (%) from which the users can connect. Bear in mind that the GRANT to 'root'@'%' above re-enables the remote root logins that the secure installation step disabled, so consider limiting both accounts to your cluster subnet.
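For example, to restrict both accounts to the private range used in this guide (a sketch; adjust the network to match your own):

mysql -u root -p -e "GRANT ALL ON *.* TO 'root'@'10.10.0.%' IDENTIFIED BY 'asecurepass';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'sst_user'@'10.10.0.%' IDENTIFIED BY 'asecurepass';"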

The next step is to configure the actual MariaDB server variables where we can define the settings required for our cluster to work properly. Stop MariaDB-server on each node with the following command:

systemctl stop mysql

#Server1 (10.10.0.1) – Acting as the master MariaDB server in this phase of the setup.
Type the following command to write the below config (a quoted heredoc is used so that the nested quotes end up in the file literally). You are free to edit the config as you see fit; however, this will get you up and running. NOTE: Adjust the IPs, names and password to the correct settings for your environment.

cat >> /etc/my.cnf.d/server.cnf <<'EOF'
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.10.0.1,10.10.0.2,10.10.0.3"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.10.0.1'
wsrep_node_name='server1'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:asecurepass
EOF

Now that the config is set you need to start the server. To bootstrap the cluster, the first node must be started with the --wsrep-new-cluster flag. Note that systemctl does not pass extra arguments through to the service, so use the init script via the service command:

service mysql start --wsrep-new-cluster
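At this point the node should report a cluster of one. You can check using the root password you set earlier:

mysql -u root --password=yoursecurepass -e "show status like 'wsrep_cluster_size'"

The value returned should be 1.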

#Server2
The only differences on the next two servers are the IP addresses, node names and the initial startup of MariaDB.

cat >> /etc/my.cnf.d/server.cnf <<'EOF'
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.10.0.1,10.10.0.2,10.10.0.3"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.10.0.2'
wsrep_node_name='server2'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:asecurepass
EOF

Now start MariaDB in the normal way:

systemctl start mysql

NOTE: The startup process will take longer than usual whilst MariaDB joins the cluster.
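You can confirm the join succeeded from either node; wsrep_cluster_size should now report 2:

mysql -u root --password=yoursecurepass -e "show status like 'wsrep_cluster_size'"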

#Server3

cat >> /etc/my.cnf.d/server.cnf <<'EOF'
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.10.0.1,10.10.0.2,10.10.0.3"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.10.0.3'
wsrep_node_name='server3'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:asecurepass
EOF

systemctl start mysql

Now that we have completed the setup of the MariaDB Galera cluster we should do a little testing to ensure that it is working as expected. Run the following command from the server1 console:

mysql -u root --password=yoursecurepass -e "show status like 'wsrep%'"

The output we are interested in is in the following lines:

wsrep_local_state_comment | Synced
wsrep_incoming_addresses | 10.10.0.1:3306
wsrep_cluster_size | 3
wsrep_ready | ON

As you can see above, the cluster size states 3, which means each of the nodes we prepared has joined and is operational. If you are not seeing output like this then it is likely that there is a communication issue between the servers. Check the firewall.

#Filesystem - GlusterFS
Creating our distributed filesystem is a relatively painless task. With the required software already installed we simply need to execute a few commands.
Firstly start the GlusterFS daemon on each box:

systemctl start glusterd

On server1 type the following commands:

gluster peer probe server2
gluster peer probe server3

This should have found the second and third peers. To confirm type:

gluster peer status

It will show you the number of peers. If it does not, check your firewall configuration to ensure the required ports are open. You can disable the firewall temporarily if required to allow you to probe the peers and determine the ports they are using.

Now that we have our peers we can proceed to creating a GlusterFS volume. The following command can be edited to suit your own naming conventions. NOTE: The next set of commands relating to volume creation should only be run on server1.

gluster volume create wordpress replica 3 transport tcp server1:/wordpress server2:/wordpress server3:/wordpress force

NOTE: I have used the force flag, which enables the creation of the volume inside the root (/) partition. Omit the force flag if you intend to create a partition specifically for this volume. It is up to you.

Now that we have created the volume we need to start it.

gluster volume start wordpress
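To verify that the volume is online, and to see which port each brick is listening on (useful when tightening the firewall rules from earlier), run:

gluster volume status wordpress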

We should now consider security and add an ACL.

gluster volume set wordpress auth.allow 10.10.0.1,10.10.0.2,10.10.0.3

In order to use the newly created volume we first need to mount it to a directory. The following command should be typed on each of servers 1, 2 and 3.

mount -t glusterfs localhost:/wordpress /home/your_user_folder/webroot

NOTE: change the path according to your preference.

Now that we have an active distributed filesystem, any of the files we put into the mount directory will replicate to the other nodes.
In order to ensure these mounts come back automatically upon reboot you should add them to the /etc/fstab file on each node. It will look something like this:
localhost:/wordpress /home/your_user_folder/webroot glusterfs _netdev,fetch-attempts=10 0 0
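A quick way to confirm replication is working is to create a test file on one node and look for it on another (the filename here is just an example):

touch /home/your_user_folder/webroot/replication-test #On server1
ls /home/your_user_folder/webroot #On server2 or server3; the file should appear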

#Setup your website
Now is the time to set up your website. Configure Apache (on each node) in the normal way to ensure it is pointing to the webroot directory from the step above. On server1, log in to MariaDB and create/upload your WordPress database in the usual way. Be sure that your WordPress database is configured to use the InnoDB engine; if it is not, phpMyAdmin has a decent little conversion utility. Use it and save yourself some time and pain.

NOTE: When placing your website files and folders on server1 you may need to ensure that permissions are set up properly on your folders. I also strongly recommend you disable any WordPress cache and database cache plugins at this stage. To ensure that files and folders have the correct permissions, go to server1, cd to the mount directory and type the following commands:

find . -type d -exec chmod 755 {} +
find . -type f -exec chmod 644 {} +
chmod 777 wp-content/uploads

NOTE: These commands can take a considerable amount of time to complete depending on the number and size of the files in the webroot. Be patient.

This completes the guide for the cluster.

Next we will need to build the load balancers.
In this scenario we are going to use HAProxy as the load balancing software.

#Load balancer servers

To start with, get the servers up to date:

yum -y update

Now you need to install the EPEL repository again:

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

#HAProxy

To install it type the following:

yum -y install haproxy

Next we need to configure it.
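As a starting point, a minimal sketch of the relevant sections of /etc/haproxy/haproxy.cfg, balancing HTTP across the three web nodes built above, might look like this (the frontend/backend names are illustrative and the health checks will need tuning for your site):

frontend http_in
    bind *:80
    mode http
    default_backend wordpress_nodes

backend wordpress_nodes
    mode http
    balance roundrobin
    server server1 10.10.0.1:80 check
    server server2 10.10.0.2:80 check
    server server3 10.10.0.3:80 check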
