Moving a WordPress site – much ado about nothing!

Have you noticed there's all kinds of advice on the internet about the best way to move WordPress websites? There are myriad ways to achieve this. One of the methods I read on wp.com was:

Changing Your Domain Name and URLs

Moving a website and changing your domain name or URLs (i.e. from http://example.com/site to http://example.com, or http://example.com to http://example.net) requires the following steps - in sequence.

    Download your existing site files.
    Export your database - go in to MySQL and export the database.
    Move the backed up files and database into a new folder - somewhere safe - this is your site backup.
    Log in to the site you want to move and go to Settings > General, then change the URLs. (ie from http://example.com/ to http://example.net ) - save the settings and expect to see a 404 page.
    Download your site files again.
    Export the database again.
    Edit wp-config.php with the new server's MySQL database name, user and password.
    Upload the files.
    Import the database on the new server.

I mean, these are truly horrifying steps to take, and I don’t see the point at all. This is how I achieved it for one of my customers.

1. Take a dump of the customer database
2. Edit the dump, searching for 'siteurl' with vi:

vi mysqldump.sql
:?siteurl

And just swap out the values, confirming after editing the file:

[root@box]# cat somemysqldump.sql  | grep siteurl -A 2
(1, 'siteurl', 'https://www.newsiteurl.com', 'yes'),
(2, 'home', 'https://www.newsiteurl.com', 'yes'),
(3, 'blogname', 'My website name', 'yes'),
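
If you’d rather not hand-edit the dump, a one-liner does the same swap. A minimal sketch (the old URL here is hypothetical, and this assumes the URL doesn’t appear inside serialized PHP strings, where changing its length would corrupt the data):

# swap every occurrence of the old URL for the new one across the whole dump
sed -i 's|https://www.oldsiteurl.com|https://www.newsiteurl.com|g' mysqldump.sql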

Job done, no stress (cf. https://codex.wordpress.org/Moving_WordPress).

There might be additional bits, but this is certainly enough for them to access the wp-admin panel. If you have problems, add this line to the wp-config.php file:

define('RELOCATE',true);

Just before the line that says:

/* That’s all, stop editing! Happy blogging. */

And then just do the import/restore as normal:

mysql -u newmysqluser -p newdatabase_to_import_to < old_database.sql

Simples! I really have no idea why it’s made out to be so complicated on other hosting sites and platforms.

How to limit the amount of memory httpd is using on CentOS 7 with Cgroups

CentOS 7 introduced something called cgroups, or control groups, a kernel feature that was developed from around 2006 and has been in the mainline kernel since 2008. A systemd unit is made of several parts, and systemd unit resource controllers can be used to limit the memory usage of a service, such as the httpd control group.

# Set memory limits for a systemd unit
systemctl set-property httpd MemoryLimit=500M

# Get limits for a systemd unit
systemctl show httpd -p CPUShares
systemctl show httpd -p MemoryLimit
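
Under the hood, set-property writes a persistent drop-in under /etc/systemd/system/. You can achieve the same thing by hand with a drop-in file; a minimal sketch (the file name is arbitrary):

# /etc/systemd/system/httpd.service.d/limits.conf
[Service]
MemoryLimit=500M

Then reload systemd and restart the service for it to take effect:

systemctl daemon-reload
systemctl restart httpd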

Please note that OS-level support is not generally provided with the managed infrastructure service level; however, I wanted to help where I could here, and it shouldn’t be that difficult, because the new functionality introduced with systemd and cgroups is much more powerful and convenient than using ulimit or similar.

Aaron Mehar’s CBS to VHD solution for Rackspace Cloud

Hey. So another one of my colleagues put together this really awesome article. Although I was aware this could be done, he’s done a really good job of putting together the procedure for turning your CBS BFV (boot from (network) volume) disk into a VHD file.

Rackspace CBS disks work over iSCSI and are presented via the network. The difference between the instance store on the hypervisor (utilised by cloud-server images) and the disk store on CBS is that the CBS disk is not a VHD, but a disk presented over the network via iSCSI.

So, to take a VHD, or an equivalent of a cloud-server image snapshot, you need to image the disk manually and convert it to VHD yourself.

Taking an image of a CBS volume directly is not possible, and such an image would not be downloadable anyway. However, there are some workarounds.

*** Please NOTE ***
This is not supported, and we cannot assist beyond these instructions. I could provide some clarity if required; however, my colleagues may not be able to help should I become unavailable.

If you just want the data, then you could simply download it to your local machine; however, if you need a VHD to create a local VM, then the below instructions will achieve this.

Steps

Please take special care: a mistake when working with the partitioner can wipe all your data

1. Shut down the server
2. Clone the disk by starting a volume clone, then start the server back up
3. Attach the newly created clone to the server
4. Create another new CBS volume of a slightly larger size (+5GB is OK)

Now that is done, we can image the disk. You will need to ensure you have the correct disks: the second disk with the data should be xvdb, and the new CBS volume should be xvdc.

Create a partition and filesystem on xvdc. Please see this guide: https://support.rackspace.com/how-to/prepare-your-cloud-block-storage-volume/
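
If you need a hand with that step, a minimal sketch (assuming the new volume really is /dev/xvdc, and that you are happy to use the whole disk without a partition table):

# format the new volume and mount it where the image will be written
mkfs.ext4 /dev/xvdc
mkdir -p /mnt/cbsvolume1
mount /dev/xvdc /mnt/cbsvolume1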

Then image xvdb to xvdc:

   # copy the whole source disk into an image file on the mounted destination volume
   dd if=/dev/xvdb of=/mnt/cbsvolume1/myimage.dd

Then download the image to your workstation, install VirtualBox, and run the below command:

   VBoxManage convertfromraw myimage.dd myimage.vhd --format VHD
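
Alternatively, if you’d rather not install VirtualBox, qemu-img can do the same conversion (‘vpc’ is qemu-img’s name for the VHD format):

   qemu-img convert -f raw -O vpc myimage.dd myimage.vhd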


Help! I can’t log in to my cloud-server even though I’ve reset my root password

The most common cause of this is that PermitRootLogin is set to no in sshd_config, although there might be other causes, like a really broken sshd_config rather than just one variable. The procedure for looking into this is pretty much the same regardless of the breakage that has occurred. Here’s the full procedure:

1) Put the server into rescue mode.
2) Log in to the cloud-server over SSH; please note rescue mode gives you a new temporary root password, allowing you to reset the password for SSH on the ‘original disk’.
3) Once logged in, mount the /dev/xvdb device (this may be /dev/xvdb1 or /dev/xvdb2, but is usually /dev/xvdb1) and chroot to it (change root to the ‘original disk’):

# Mount old disk
mount /dev/xvdb1 /mnt

# Change to the ‘old disk’
chroot /mnt

# Set the new password for root on the old disk:

passwd
# enter the new password when prompted

and specifically ensure that /etc/ssh/sshd_config has this line:

PermitRootLogin no

changed to:

PermitRootLogin yes
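
If you prefer a one-liner, a quick sketch to flip the setting while still inside the chroot:

# change PermitRootLogin from no to yes in place
sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config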

Your developer or sysadmin won’t be able to log in until you reset the root password here. If you do not know a username you can su to root from, it is absolutely critical to perform this work; otherwise you won’t be able to access the server.

Also, once you have allowed root login and changed the password to something you recognise, you will be able to exit rescue mode through the control panel and log in to the machine as normal.

For more detail about how to do this (although pretty much all the steps are here), please see:

https://support.rackspace.com/how-to/rackspace-cloud-essentials-rescue-mode-on-linux-cloud-servers/

I hope this helps you folks out some,

DISASTER RECOVERY! Exporting a Broken Cloud-server image VHD from Rackspace and attempting to recover data

Thanks to my colleague Marcin for this guestmount tools protip.

I wrote a previous guide which explains how to download/export a Cloud Server image VHD from Rackspace Cloud when it is failing to build. That might allow you to perform data recovery even if the image can’t be booted. I’m guessing someone is going to run into this sooner or later and will be pleased to see this article: it will at least give you the best shot at reading the VHD and recovering the data, since, as you might know already, just because the boot process or kernel is broken doesn’t mean that the data isn’t there!

# A better article to use if you want to download via commandline
https://community.rackspace.com/products/f/25/t/3583

# My article doing this thru a web-browser which might be useful too for some customers
https://community.rackspace.com/products/f/25/t/7089

Once the image has been downloaded to your new cloud instance, you can use the ‘libguestfs-tools’ package (same name on Ubuntu and CentOS), which contains the tools necessary for mounting .vhd image files.

The command would be (read-only mode):

guestmount -a {image-name}.vhd -i --ro {mount-point}
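
A minimal example session (the image name and mount point here are hypothetical):

# mount the exported VHD read-only and browse it
mkdir -p /mnt/recovery
guestmount -a exported-image.vhd -i --ro /mnt/recovery
ls /mnt/recovery
# copy out what you need, then clean up
guestunmount /mnt/recovery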

Fixing 'InnoDB registration as a STORAGE ENGINE failed'

So you have a nice WordPress (or similar) site running, but then all of a sudden you get a nasty message telling you that the InnoDB registration as a storage engine failed.

# tail /var/log/mariadb/mariadb.log
InnoDB: If that is the case, please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/error-creating-innodb.html
160914  6:45:29 [ERROR] Plugin 'InnoDB' init function returned error.
160914  6:45:29 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
160914  6:45:29 [ERROR] Failed to initialize plugins.
160914  6:45:29 [ERROR] Aborting
160914  6:45:29 [Note] /usr/libexec/mysqld: Shutdown complete

160914 06:45:29 mysqld_safe mysqld from pid file /var/lib/mysql/wpstack.localdomain.pid ended

# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/mariadb.service.d
           └─limits.conf
   Active: failed (Result: exit-code) since Wed 2016-09-14 06:25:38 UTC; 2min 32s ago
  Process: 1434 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=1/FAILURE)
  Process: 1433 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
  Process: 1305 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 1433 (code=exited, status=0/SUCCESS)

Sep 14 06:25:35 wpstack.localdomain systemd[1]: Starting MariaDB database se....
Sep 14 06:25:37 wpstack.localdomain mysqld_safe[1433]: 160914 06:25:37 mysqld...
Sep 14 06:25:37 wpstack.localdomain mysqld_safe[1433]: 160914 06:25:37 mysqld...
Sep 14 06:25:38 wpstack.localdomain systemd[1]: mariadb.service: control pro...1
Sep 14 06:25:38 wpstack.localdomain systemd[1]: Failed to start MariaDB data....
Sep 14 06:25:38 wpstack.localdomain systemd[1]: Unit mariadb.service entered....
Sep 14 06:25:38 wpstack.localdomain systemd[1]: mariadb.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@wpstack ~]# systemctl start mariadb
Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details.
[root@wpstack ~]# systemctl start mariadb
Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details.

This is naturally a problem, since until it is resolved you have no database service running. It is fairly easy to resolve in most cases, by moving the InnoDB log files out of /var/lib/mysql:

# move the InnoDB redo logs aside
mv /var/lib/mysql/ib_logfile0{,.bak}
mv /var/lib/mysql/ib_logfile1{,.bak}
# ibdata1 is the system tablespace itself, not a log; only move it as a last resort
mv /var/lib/mysql/ibdata1{,.bak}

These commands simply move the files out of the way, which allows InnoDB to recreate them and continue operating. Please note that this may not always fix your issue, and in some situations it might result in data loss, so it is advisable to take a backup of the database filesystem before proceeding.
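
Once the files are moved aside, start the service back up and confirm InnoDB is registered again; a quick check, assuming the MariaDB setup shown above:

systemctl start mariadb
mysql -e "SHOW ENGINES;" | grep -i innodb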

Testing Rackspace Cloud-server Service-net Connectivity and creating an alarm

So, over the last few weeks my colleagues and I have been noticing a couple of issues with the cloud-servers’ ServiceNet interface. Unfortunately, for some customers utilising DBaaS instances, this means that their cloud-server is often unable to communicate with their database backend.

The solution is a custom monitoring script that my colleague Marcin kindly put together for another customer of his.

The plugin script that goes on the server:

Create file:

vi /usr/lib/rackspace-monitoring-agent/plugins/servicenet.sh

Paste into file:

#!/bin/bash
# Ping a target over the ServiceNet interface (eth1) and emit a
# Rackspace Cloud Monitoring plugin metric.
ping="/usr/bin/ping -W 1 -c 1 -I eth1 -q"

if [ -z "$1" ]; then
   # no target host supplied
   echo -e "status CRITICAL\nmetric ping_check uint32 1"
   exit 1
else
   $ping "$1" &>/dev/null
   if [ "$?" -eq 0 ]; then
      echo -e "status OK\nmetric ping_check uint32 0"
      exit 0
   else
      echo -e "status CRITICAL\nmetric ping_check uint32 1"
      exit 1
   fi
fi
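
Make the plugin executable so the monitoring agent can run it:

chmod +x /usr/lib/rackspace-monitoring-agent/plugins/servicenet.sh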

Create an alarm that utilises the below metric:

if (metric["ping_check"] == 1) {
    return new AlarmStatus(CRITICAL, 'what?');
}
if (metric["ping_check"] == 0) {
    return new AlarmStatus(OK, 'eee?');
}

Of course, for this to work, the primary requirement is a Rackspace cloud-server with the Rackspace Cloud Monitoring agent already installed.

Thanks again Marcin, for this golden nugget.

Checking requests to apache2 webserver during downtime

A customer of ours was having some serious disruptions to his webserver, with 15-minute outages happening here and there. He said he couldn’t see an increase in traffic and therefore didn’t understand why it reached MaxClients. Here was a quick way to prove whether traffic really increased or not: directly grep the access logs for the day and hour in question, use wc -l to count the requests, and use a for loop to step through the minutes of the hour between the events.

Proud of this simple one… much simpler than a lot of other scripts I’ve seen out there that do the same thing!

root@anonymousbox:/var/log/apache2# for i in `seq 01 60`;  do  printf "total visits: 13:$i\n\n"; grep "12/Jul/2016:13:$i" access.log | wc -l; done

total visits: 13:1

305
total visits: 13:2

474
total visits: 13:3

421
total visits: 13:4

411
total visits: 13:5

733
total visits: 13:6

0
total visits: 13:7

0
total visits: 13:8

0
total visits: 13:9

0
total visits: 13:10

30
total visits: 13:11

36
total visits: 13:12

30
total visits: 13:13

29
total visits: 13:14

28
total visits: 13:15

26
total visits: 13:16

26
total visits: 13:17

32
total visits: 13:18

37
total visits: 13:19

31
total visits: 13:20

42
total visits: 13:21

47
total visits: 13:22

65
total visits: 13:23

51
total visits: 13:24

57
total visits: 13:25

38
total visits: 13:26

40
total visits: 13:27

51
total visits: 13:28

51
total visits: 13:29

32
total visits: 13:30

56
total visits: 13:31

37
total visits: 13:32

36
total visits: 13:33

32
total visits: 13:34

36
total visits: 13:35

36
total visits: 13:36

39
total visits: 13:37

70
total visits: 13:38

52
total visits: 13:39

27
total visits: 13:40

38
total visits: 13:41

46
total visits: 13:42

46
total visits: 13:43

47
total visits: 13:44

39
total visits: 13:45

36
total visits: 13:46

39
total visits: 13:47

49
total visits: 13:48

41
total visits: 13:49

30
total visits: 13:50

57
total visits: 13:51

68
total visits: 13:52

99
total visits: 13:53

52
total visits: 13:54

92
total visits: 13:55

66
total visits: 13:56

75
total visits: 13:57

70
total visits: 13:58

87
total visits: 13:59

67
total visits: 13:60

root@anonymousbox:/var/log/apache2# for i in `seq 01 60`; do printf "total visits: 12:$i\n\n"; grep "12/Jul/2016:12:$i" access.log | wc -l; done
total visits: 12:1

169
total visits: 12:2

248
total visits: 12:3

298
total visits: 12:4

200
total visits: 12:5

341
total visits: 12:6

0
total visits: 12:7

0
total visits: 12:8

0
total visits: 12:9

0
total visits: 12:10

13
total visits: 12:11

11
total visits: 12:12

30
total visits: 12:13

11
total visits: 12:14

11
total visits: 12:15

13
total visits: 12:16

16
total visits: 12:17

28
total visits: 12:18

26
total visits: 12:19

10
total visits: 12:20

19
total visits: 12:21

35
total visits: 12:22

12
total visits: 12:23

19
total visits: 12:24

28
total visits: 12:25

25
total visits: 12:26

30
total visits: 12:27

43
total visits: 12:28

13
total visits: 12:29

24
total visits: 12:30

39
total visits: 12:31

35
total visits: 12:32

25
total visits: 12:33

22
total visits: 12:34

33
total visits: 12:35

21
total visits: 12:36

31
total visits: 12:37

31
total visits: 12:38

22
total visits: 12:39

39
total visits: 12:40

11
total visits: 12:41

18
total visits: 12:42

11
total visits: 12:43

28
total visits: 12:44

19
total visits: 12:45

27
total visits: 12:46

18
total visits: 12:47

17
total visits: 12:48

22
total visits: 12:49

29
total visits: 12:50

22
total visits: 12:51

31
total visits: 12:52

44
total visits: 12:53

38
total visits: 12:54

38
total visits: 12:55

41
total visits: 12:56

38
total visits: 12:57

32
total visits: 12:58

26
total visits: 12:59

31
total visits: 12:60
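
One caveat with the quick loop above: seq 01 60 produces unpadded single digits, while Apache zero-pads the minutes in its timestamps, so grep "13:1" actually matches minutes 13:10 through 13:19 (the 305 above is exactly the sum of those ten per-minute counts), and 13:6 through 13:9 match nothing at all. A corrected sketch using zero-padded minutes:

for i in $(seq -w 0 59); do
    printf "total visits 13:%s: " "$i"
    grep -c "12/Jul/2016:13:$i" access.log
done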

Checking file integrity with Cloud Files, post upload

So, as you may already be aware, I am working on a lightweight backup script called ‘obscene redundancy’: redundant backup software capable of making 18 replicas of data to the Rackspace Cloud Files API service. It’s so redundant… it’s obscene redundancy.

For more details visit the project URL:
https://github.com/aziouk/obsceneredundancy/

Today I was discussing with my colleague that it was all very well uploading your tar to Cloud Files, but wouldn’t you really like to know that the file you uploaded is completely identical, in number of bits and in order? Enter Cloud Files ‘HEAD’ and Etag, our MD5 friend.

What I did to improve the obscene redundancy script was quite simple:

# We define a variable that takes the 'Etag' (MD5 sum) value for the Cloud Files archive
cfmd5sum=$(swiftly --conf swiftly-configs/swiftly-${SHORT_REGION,,}.conf head \
"${BACKUP_DEST}/${FILE}" | grep -i Etag | awk '{print $2}')

# We define a variable that generates an MD5 sum for the local file archive
localmd5sum=$(md5sum "$BACKUP_DIR"/"$FILE")

echo "Checking Data integrity of Cloud Files upload to $REGION"
echo "Cloud Files Archive MD5:  $cfmd5sum  ....... Local File Archive MD5: $localmd5sum"

# Compare the two values; string comparison needs !=, not the numeric -ne
if [[ "$cfmd5sum" != "$localmd5sum" ]];
then
echo "VALUES NOT EQUAL"
echo "$REGION CRC missing, in error, or NOT OK..."
else
echo "VALUES EQUAL"
echo "$REGION CRC OK..."
fi

After all this I found that the script wasn’t working properly… so I did some debugging, checking, first of all, the length of each variable.

   if [[ "$cfmd5sum" == "$localmd5sum" ]]; then
                        echo "VALUES EQUAL, (local md5sum length given first)"
                        echo "$localmd5sum"| wc -L
                        echo "$cfmd5sum"| wc -L


                        echo "$REGION CRC OK..."
                else
                        echo "VALUES NOT EQUAL"
                        echo "$localmd5sum"|wc -L
                        echo "$cfmd5sum"|wc -L
                        echo "$REGION CRC missing, in error, or NOT OK..."
                fi

The output showed me that the variable lengths were different. At this stage I had no idea why, but will add updates here. I’m going to commit this to obsceneredundancy anyway, because the proof of concept is working and valid, as shown by the output of the script (i.e. the method is fine; it’s just the way the strings are compared in the if statement, and I suspected it was to do with special or \n characters, as I’d seen before). So, when I made this addition to the multi-dc-backup.sh script, the output looked like:

Creating Container in LON for obsceneredundancy

LON: Backing up ...
Source: /var/www/ ---> Dest: cloudfiles://LON/obsceneredundancy/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz

Checking Data integrity of Cloud Files upload to BACKUP_TO_LON
Cloud Files Archive MD5:  65147eb66f8bbeff03a229570b0a1be7  ....... Local File Archive MD5: 65147eb66f8bbeff03a229570b0a1be7  /var/backup/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
VALUES NOT EQUAL
107
32
BACKUP_TO_LON CRC missing, in error, or NOT OK...
lon: COMPLETED OK 15504796/15504796
ORD: Not backing up ...



Creating Container in IAD for obsceneredundancy

IAD: Backing up ...
Source: /var/www/ ---> Dest: cloudfiles://IAD/obsceneredundancy/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz

Checking Data integrity of Cloud Files upload to BACKUP_TO_IAD
Cloud Files Archive MD5:  65147eb66f8bbeff03a229570b0a1be7  ....... Local File Archive MD5: 65147eb66f8bbeff03a229570b0a1be7  /var/backup/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
VALUES NOT EQUAL
107
32
BACKUP_TO_IAD CRC missing, in error, or NOT OK...
iad: COMPLETED OK 15504796/15504796
DFW: Not backing up ...

As we can see, the 107 (local md5sum length) and the 32 (Cloud Files md5sum length) are different! I had no idea why at the time, since when echoing the variables they look the same. I suspected gremlins and trolls. A fresh head tomorrow will probably solve this in a few minutes!
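(In fact, the output above gives the answer away: md5sum prints the checksum followed by the file path, so $localmd5sum holds the 32-character hash plus the path, 107 characters in total. Keeping only the first field should make the comparison work; a one-line sketch:)

# take only the hash itself, discarding the trailing file path
localmd5sum=$(md5sum "$BACKUP_DIR"/"$FILE" | awk '{print $1}')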

Cheers &
Best wishes,
Adam