Setting X-Frame-Options HTTP Header to allow SAME or NON SAME ORIGINS

It’s possible to increase the security of a webserver running a website by having it send the X-Frame-Options header to the browser, which tells the browser which origins (servers) are permitted to embed the site inside a frame or iframe. This prevents other sites from framing your pages and overlaying their own content on top of them. An admirable option for those who wish to increase their server security.

Naturally, there are some reasons why you might want to relax this, and in the proper context it can still be done securely. Always be sure to discuss such considerations with your pentester or PCI compliance officer before proceeding, and if you cannot use SAMEORIGIN, make sure you use the most secure option that still allows the required task. Always check whether there is a better way to achieve what you’re trying to do before making such changes to your server configuration.

Insecure X-Frame-Options: allows any remote, non-matching origin to frame the site. (Note that ALLOWALL is non-standard; sending it is effectively the same as sending no header at all.)

Header always append X-Frame-Options ALLOWALL

Secure X-Frame-Options: tells the browser not to render the page inside a frame served from any other origin, which can prevent clickjacking and similar attacks.

Header always append X-Frame-Options SAMEORIGIN
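Once the header is in place and Apache has been reloaded, it is worth confirming the browser actually receives it. A quick check with curl, assuming your site answers at https://www.example.com (a hypothetical hostname):

# Request only the response headers and filter for X-Frame-Options
curl -sI https://www.example.com | grep -i x-frame-options

# Expected output:
# X-Frame-Options: SAMEORIGIN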

Moving a WordPress site – much ado about nothing!

Have you noticed there is all kinds of advice on the internet about the best way to move WordPress websites? There are literally myriad ways to achieve this. One of the methods I read on wp.com was:

Changing Your Domain Name and URLs

Moving a website and changing your domain name or URLs (i.e. from http://example.com/site to http://example.com, or http://example.com to http://example.net) requires the following steps - in sequence.

    Download your existing site files.
    Export your database - go in to MySQL and export the database.
    Move the backed up files and database into a new folder - somewhere safe - this is your site backup.
    Log in to the site you want to move and go to Settings > General, then change the URLs. (ie from http://example.com/ to http://example.net ) - save the settings and expect to see a 404 page.
    Download your site files again.
    Export the database again.
    Edit wp-config.php with the new server's MySQL database name, user and password.
    Upload the files.
    Import the database on the new server.

I mean, these are truly horrifying steps to take, and I don’t see the point at all. This is how I achieved it for one of my customers.

1. Take a dump of the customer's database (a minimal sketch of this is shown just below)
2. Edit the dump, searching for 'siteurl' with vi
vi mysqldump.sql
:?siteurl
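For step 1, the dump is just a standard mysqldump; the database name and credentials here are hypothetical placeholders:

# Dump the existing WordPress database to a file we can edit
mysqldump -u olddbuser -p olddatabase > mysqldump.sql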

And just swap out the values, confirming after editing the file;

[root@box]# cat somemysqldump.sql  | grep siteurl -A 2
(1, 'siteurl', 'https://www.newsiteurl.com', 'yes'),
(2, 'home', 'https://www.newsiteurl.com', 'yes'),
(3, 'blogname', 'My website name', 'yes'),
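If you'd rather not edit the dump by hand, a sed one-liner does the same swap (the old URL here is a hypothetical placeholder). One caveat worth knowing: siteurl and home are plain strings, but URLs embedded in serialized PHP data (widgets, some plugin settings) are length-prefixed, so blindly replacing every occurrence can corrupt those values:

# Swap the old site URL for the new one throughout the dump
sed -i 's|https://www.oldsiteurl.com|https://www.newsiteurl.com|g' mysqldump.sql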

Job done, no stress. (The official documentation lives at https://codex.wordpress.org/Moving_WordPress.)

There might be additional bits, but this is certainly enough for them to access the wp-admin panel. If you have problems, add this line to the wp-config.php file;

define('RELOCATE',true);

Just before the line which says

/* That’s all, stop editing! Happy blogging. */

And then just do the import/restore as normal;

mysql -u newmysqluser -p newdatabase_to_import_to < old_database.sql
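Once imported, you can sanity-check that the new URLs took effect straight from the command line. This assumes the default wp_ table prefix, which may differ on your install:

# Confirm siteurl and home now point at the new domain
mysql -u newmysqluser -p newdatabase_to_import_to \
  -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl','home');"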

Simples! I really have no idea why it is made to be so complicated on other hosting sites or platforms.

Less Ghetto Log Parser for Website Hitcount/Downtime Analysis

Yesterday I created a proof-of-concept script, which basically goes off and identifies the hitcounts of a website, and can show a technician, within a short duration of time (minutes instead of hours), exactly when hits are arriving and in what volume.

This is kind of a tradeoff, between a script that is automated, and one that is flexible.

The end goal is to provide a hitcount vs memory commit metric value. A NEW TYPE OF METRIC! HURRAH! (This is needed by the industry IMO).

It would also be nice to generate graphing and mean, average, ranges, etc., so it can provide output like the ‘stat’ tool. Here is how I have progressed;

#!/bin/bash
#
# Author: 	Adam Bull, Cirrus Infrastructure, Rackspace LTD
# Date: 	March 20 2017
# Use:		This script automates the analysis of webserver logs hitcounts and
# 		provides a breakdown to indicate whether outages are caused by website visits
#		In correlation to memory and load avg figures


# Settings

# What logfile to get stats for
logfile="/var/log/httpd/google.com-access.log"

# What year, month and day are we scanning for minute/hour hits
# (day is two digits, so that e.g. 9 doesn't also match the 19th and 29th)
year=2017
month=Mar
day=09

echo "Total HITS: MARCH"
grep "/$month/$year" "$logfile" | wc -l;

# Hours
for i in 0{0..9} {10..23};

do echo "      > $day $month $year, hits this $i hour";
grep "$day/$month/$year:$i" "$logfile" | wc -l;

        # break down the minutes in a nested, visual way that's awesome

# Minutes
for j in 0{0..9} {10..59};
do echo "                  >>hits at $i:$j";
grep "$day/$month/$year:$i:$j" "$logfile" | wc -l;
done

done

Thing is, after I wrote this, I wasn’t really happy, so I refactored it a bit more;

#!/bin/bash
#
# Author: 	Adam Bull, Cirrus Infrastructure, Rackspace LTD
# Date: 	March 20 2017
# Use:		This script automates the analysis of webserver logs hitcounts and
# 		provides a breakdown to indicate whether outages are caused by website visits
#		In correlation to memory and load avg figures


# Settings

# What logfile to get stats for
logfile="/var/log/httpd/someweb.biz-access.log"

# What year, month and day are we scanning for minute/hour hits
# (day is two digits, so that e.g. 9 doesn't also match the 19th and 29th)
year=2017
month=Mar
day=09

echo "Total HITS: $month"
grep "/$month/$year" "$logfile" | wc -l;

# Hours
for i in 0{0..9} {10..23};

do
hitsperhour=$(grep "$day/$month/$year:$i" "$logfile" | wc -l;);
echo "    > $day $month $year, hits this $ith hour: $hitsperhour"

        # break down the minutes in a nested, visual way that's awesome

# Minutes
for j in 0{0..9} {10..59};
do
hitsperminute=$(grep "$day/$month/$year:$i:$j" "$logfile" | wc -l);
echo "                  >>hits at $i:$j  $hitsperminute";
done

done

Now it’s pretty leet.. well, simple, but functional. Here is the output of the more nicely refined script; I’m really satisfied with the tabulation.

[root@822616-db1 automation]# ./list-visits.sh
Total HITS: Mar
6019301
    > 09 Mar 2017, hits this 01 hour: 28793
                  >>hits at 01:01  416
                  >>hits at 01:02  380
                  >>hits at 01:03  417
                  >>hits at 01:04  408
                  >>hits at 01:05  385
^C
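One refinement worth noting: the grep approach rescans the whole log once per hour and again per minute, which gets slow on large logs. A one-pass awk version of the same per-minute breakdown does it all in a single scan; this is a minimal sketch assuming the standard combined log format, where the fourth field looks like [09/Mar/2017:01:01:52 (the logfile path and date are hypothetical):

#!/bin/bash
# One-pass per-minute hitcounts for a single day
logfile="/var/log/httpd/someweb.biz-access.log"
date="09/Mar/2017"

# Trim each timestamp down to dd/Mon/yyyy:HH:MM, then count each minute
awk -v d="$date" '{
    ts = substr($4, 2, 17)
    if (index(ts, d) == 1) count[ts]++
}
END { for (m in count) print m, count[m] }' "$logfile" | sort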

Configuring SFTP without chroot (the easy way)

So, I wouldn’t normally recommend this to customers. However, there are secure ways to add SFTP access without the SFTP subsystem having to be modified. It’s also possible to achieve a similar setup in a location like /home/john/public_html.

Let’s assume that public_html and everything underneath it is chowned john:john. So john:john has all the access, and apache runs with its own uid/gid. This was a pretty strange setup, and you don’t see it every day. But actually, it allowed me to solve another problem that I’ve been seeing customers have for a long time: the problem of effectively and easily managing permissions. Once I figured this out, it was a serious ‘aha!’ moment. Here’s why.

Inside /etc/group, we find the customer’s developer has done something tragic:

[root@web public_html]# cat /etc/group | grep apache
apache:x:48:john,bob

But fine.. we’ll run with it.

We can see all the files inside their /home/john/public_html, and the sight is not good:

]# ls -al 
total 232
drwxrwxr-x 27 john john  4096 Dec 20 15:56 .
drwxr-xr-x 12 john john  4096 Dec 15 11:08 ..
drwxrwxr-x 10 john john  4096 Dec 16 09:56 administrator
drwxrwxr-x  2 john john  4096 Dec 14 11:18 bin
drwxrwxr-x  4 john john  4096 Nov  2 15:05 build
-rw-rw-r--  1 john john   714 Nov  2 15:05 build.xml
drwxrwxr-x  3 john john  4096 Nov  2 15:05 c
drwxrwxr-x  3 john john 45056 Dec 20 13:09 cache
drwxrwxr-x  2 john john  4096 Dec 14 11:18 cli
drwxrwxr-x 32 john john  4096 Dec 14 11:18 components
-rw-rw-r--  1 john john  1863 Nov  2 15:05 configuration-live.php
-rw-r--r--  1 john john  3173 Dec 15 11:08 configuration.php
drwxrwxr-x  3 john john  4096 Nov  2 15:05 docs
drwxrwxr-x  8 john john  4096 Dec 16 17:17 .git
-rw-rw-r--  1 john john  1734 Dec 14 11:21 .gitignore

It gets worse..

# cat /etc/passwd | grep john
john:x:501:501::/home/john:/bin/sh

Now, adding an SFTP user into this might look like a nightmare, but actually, with a little thought, it was really easy.

Solving this mess:

Install Scponly

yum install scponly
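If yum can’t find the package, scponly typically ships in the EPEL repository on CentOS/RHEL rather than the base repos (an assumption worth checking against your distro):

# Enable EPEL first, then retry the install
yum install epel-release
yum install scponly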

Create the new ‘SFTP’ user; its /etc/passwd entry should end up looking like this:

scponlyuser:x:504:505::/home/john:/usr/bin/scponly
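To actually create that user, something like the following does the job (the UID/GID in the example above will be assigned automatically and may differ):

# Create the user with john's home dir, no new home, and scponly as the shell
useradd -d /home/john -M -s /usr/bin/scponly scponlyuser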

Create a password for user scponlyuser

 
passwd scponlyuser

Solution to john:john permissions

[root@web public_html]# cat /etc/group | grep john
apache:x:48:john,bob
john:x:501:scponlyuser

We simply make scponlyuser part of the john group by adding the second line there. That way, scponlyuser will have read/write access to the same files as the shell user, without exposing anything additional.
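Rather than editing /etc/group by hand, the same result can be had with usermod, which is equivalent to the manual edit above:

# Append scponlyuser to john's supplementary groups
usermod -aG john scponlyuser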

This was a cool solution to this customer’s insecure setup, which they wanted to keep the way they had it, and it was also a great way to add an SFTP account without requiring a root jail. Whether it’s better than the root jail is really debatable; however, scponly enforces that this account can be used only for SCP/SFTP, without needing a jail.

I was proud of this achievement.. it goes to show Linux permissions are really more flexible than we can imagine. Whether you really want to flex those permissions muscles, though, should be a consideration. I advised this customer to change this setup and remove the /bin/sh shell, among other things..

We finally test that SFTP is working as expected with the new scponlyuser:
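For completeness, the session below was opened with a plain sftp connection; the hostname here is hypothetical:

sftp scponlyuser@web.example.com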


sftp> rmdir test
sftp> get index.php
Fetching /home/john/public_html/index.php to index.php
/home/john/public_html/index.php                                                                                     100% 1420     1.4KB/s   00:00
sftp> put index.php
Uploading index.php to /home/john/public_html/index.php
index.php                                                                                                                100% 1420     1.4KB/s   00:00
sftp> mkdir test
sftp> rmdir test

Just replace ‘scponlyuser’ with whatever username you’re setting up. The only part you need to keep is the /usr/bin/scponly at the end; that is the environment being logged into. Apologies that scponly is so similar to scponlyuser ;-D

scponlyuser:x:504:505::/home/john:/usr/bin/scponly

I was very pleased with this! Hope that you find this useful too!

Automating Backups in Public Cloud using Cloud Files

Hey folks, I know it’s been a little while since I put an article together. However, I have been putting together a really detailed article explaining how to write bespoke backup systems for the Rackspace Community. It’s a proof of concept/demonstration/tutorial as opposed to a production application. However, people looking to create custom cloud backup scripts may benefit from the experience of reading through it.

You can see the article at the below URL:

https://community.rackspace.com/products/f/25/t/7857