Setting X-Frame-Options HTTP Header to allow SAME or NON SAME ORIGINS

It’s possible to increase the security of a webserver running a website by making sure the X-Frame-Options header is sent to the browser, telling it which origins are allowed to embed the site in a frame or iframe. With it set, the browser will refuse to render the page inside a frame served from a non-matching origin, so other sites can’t wrap your content in their own pages. An admirable option for those who wish to increase their server security.

Naturally, there are some reasons why you might want to disable this, and in the proper context it can still be secure. Always be sure to discuss such considerations with your pentester or PCI compliance officer before proceeding, and if you do not want to use SAMEORIGIN, make sure you always use the most secure option for the required task. Always check whether there is a better way to achieve what you’re trying to do when making such changes to your server configuration.

Insecure: this X-Frame-Options value allows remote, non-matching origins to frame the site

Header always append X-Frame-Options ALLOWALL

Secure: this X-Frame-Options value instructs the browser not to allow framing from any non-matching origin for the domain, which can prevent clickjacking and other attacks.

Header always append X-Frame-Options SAMEORIGIN
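Once the directive is in place and Apache has been reloaded, you can confirm the header is actually being returned. A quick check (example.com is just a placeholder for your own site):

curl -sI https://example.com | grep -i x-frame-options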

Increasing the Limits of PHP-FPM

It’s important to know how to increase the limits for the php-fpm www pool, or any other named pools you might have set up.

You might see an error like

tail -f /var/log/php7.1-fpm.log
[24-Apr-2017 11:23:09] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 11 total

or

[24-Apr-2017 10:51:38] WARNING: [pool www] server reached pm.max_children setting (5), consider raising it

The solution is quite simple: we just need to go in and edit the php-fpm pool configuration on the server (typically /etc/php/7.1/fpm/pool.d/www.conf on Debian/Ubuntu, or /etc/php-fpm.d/www.conf on RHEL/CentOS) and increase these values to safe ones that are supported by the available RAM (a rough sizing sketch follows the sample values below).

pm.max_children = 15

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 2

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 1

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 8
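As a rough sanity check on pm.max_children, you can measure how much memory each php-fpm worker actually uses and divide your spare RAM by that figure. A sketch (matching on the process name php-fpm is an assumption; on some distros it will be php-fpm7.1 or similar):

# Count php-fpm workers and report their average and total resident memory (RSS is in KB)
ps --no-headers -o rss,comm -e | awk '/php-fpm/ {sum+=$1; n++} END {if (n) printf "workers: %d  avg RSS: %.0f MB  total: %.0f MB\n", n, sum/n/1024, sum/1024}'

If each worker averages, say, 60 MB and you can spare 1 GB of RAM for PHP, then roughly 1024 / 60 ≈ 17 children is a sensible ceiling.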

Then monitor the site with

tail -f /var/log/php7.1-fpm.log

To ensure no further limits are being hit.

Obviously, if you are using a different version of php-fpm, your log location might be different.

Setting 404 Error page in nginx

It’s easy to set up a custom error page in nginx. Just create a file in your document root, like /404.html; so if your document root is /var/www/html, create a file called 404.html there. Then go into your /etc/nginx/nginx.conf, or /etc/nginx/conf.d/mysite.conf, and add this to your configuration inside the server { } block.

Remember, if you’re running both https and http you will need the directive in each server block.

error_page 404 /404.html;

It’s also possible to define a fully custom error page location securely:


        error_page 404 /cust_404.html;
        location = /cust_404.html {
                root /usr/share/nginx/html;
                internal;
        }


Just make sure you don’t also have a conflicting file called 404.html at /404.html in your document root, as I’m not sure this would work correctly otherwise.
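After saving the configuration, test it and reload nginx to apply the change (systemctl assumes a systemd-based distribution; use service nginx reload otherwise):

nginx -t && systemctl reload nginx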

Migrating a Plesk site after moving keeps going to default plesk page

So today a customer had this really weird issue where a website domain that had been moved from one server to a new Plesk server wasn’t loading correctly. It actually turned out to be simple: when trying to access a file on the domain directly, like below, I would get the phpinfo.php file just fine.

curl http://www.customerswebsite.com/info.php 

This suggested to me that the website document root was working, and the only thing missing was probably the index. This is indeed what it turned out to be.

I wanted to test, though, that info.php really was in this document root, and not in some other virtualhost’s document root, so I renamed info.php to randomnumbers12313.php and the page still loaded under the new name. Adding or renaming a file on the filesystem like this confirms you have found the correct site, which is important when troubleshooting vast configurations.

I also found a really handy one-liner for troubleshooting which virtualhost a request actually lands in. It might not be great on a really busy server, but you can still grep for your own IP address to narrow it down.

Visit the broken/affected website we will troubleshoot

curl -I somecustomerswebsite.com

Watch all visitors to all Apache websites in real time whilst we visit the site ourselves for testing

tail -f /var/log/httpd/*.log 

This will show us which virtualhost and/or path is being accessed, from where.

Show only the visits to all Apache websites coming from a given IP

tail -f /var/log/httpd/*.log  | grep 4.2.2.4

Where 4.2.2.4 is the IP address you’re using to visit the site. If you don’t know what your IP is, type icanhazip into Google, or ‘what is my ip’, job done.
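You can also grab it straight from the command line; icanhazip.com simply echoes back the requesting address:

# Force IPv4, since that is usually the address that shows up in the access log
curl -4 icanhazip.com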

Fixing the Plesk website without a directory index

[root@mehcakes-App1 conf]# plesk bin domain --update somecustomerswebsite.com -nginx-serve-php true -apache-directory-index index.php

Simple enough… but it could be a pain if you don’t know what you’re looking for.

Site keeps on going down because of spiders

So a Rackspace customer was consistently having an issue with their site going down, even after the number of workers was increased. In this customer’s case it looked like they were being hit really hard by Yahoo Slurp, Googlebot, the Ahrefs bot, and many many others.

So I checked the hour the customer was affected, and found that over that hour Yahoo Slurp and Googlebot alone accounted for 415 requests. That made up roughly 25% of all the requests to the site, so it was certainly possible the max workers were being reached due to spikes in bot traffic in parallel with potential spikes in usual visitors.

[root@www logs]#  grep '01/Mar/2017:10:' access_log | egrep -i 'www.google.com/bot.html|http://help.yahoo.com/help/us/ysearch/slurp' |  wc -l
415

It wasn’t a complete theory, but it was the best one available with all the information I had, since everything else had been checked. The only thing that remained was the number of retransmits for that machine. All in all it was a victory, and this was so awesome that I’m now thinking of making a tool that will do this in a more automated way.

I don’t know if this is the best way to find google bot and yahoo bot spiders, but it seems like a good method to start.
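A more general starting point is to count requests per user agent for the affected hour, rather than grepping for specific bots. A sketch assuming the standard combined log format, where the user agent is the sixth double-quote-delimited field:

# Top user agents for that hour, busiest first
grep '01/Mar/2017:10:' access_log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head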

Installing Ioncube Zend Extension for PHP

A lot of customers note that, sometimes, the exact version of ionCube is not available for their specific version of PHP in their OS’s repository.

This isn’t really a big deal, and is actually something that can be manually installed.

cd ~
# Work out the running PHP version as major.minor, e.g. 7.1
_php=$(php -r "echo PHP_MAJOR_VERSION.'.'.PHP_MINOR_VERSION;"); echo $_php
# Download the ionCube loader bundle and extract only the loader matching this PHP version
wget http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
tar -zxf ioncube_loaders_lin_x86-64.tar.gz ioncube/ioncube_loader_lin_$_php.so
chown -R root. ioncube
# Install the loader into the PHP modules directory and load it early via its own ini file
\mv ioncube/ioncube_loader_lin_$_php.so /usr/lib64/php/modules/
echo "zend_extension=/usr/lib64/php/modules/ioncube_loader_lin_$_php.so" > /etc/php.d/01a-ioncube-loader.ini 

Thanks to Alex Drapkin for this.

Enabling Automatic Security Updates in CentOS 6, 7 and RHEL 6 and RHEL 7 (and Debian and Ubuntu too)

yum -y install yum-cron

This can also be done on Debian and Ubuntu systems if you are feeling left out:

apt-get -y install unattended-upgrades
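On the Debian/Ubuntu side you also want the periodic runs switched on; the low-priority debconf prompt below writes the required APT::Periodic settings to /etc/apt/apt.conf.d/20auto-upgrades:

dpkg-reconfigure -plow unattended-upgrades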

Configuration on the CentOS/RHEL side is:

da da da da da da da ! Actually that’s all you need to do to install it, but there are a lot of things you can customise in /etc/yum/yum-cron.conf (and see the note after the file for enabling the service itself).

[commands]
#  What kind of update to use:
# default                            = yum upgrade
# security                           = yum --security upgrade
# security-severity:Critical         = yum --sec-severity=Critical upgrade
# minimal                            = yum --bugfix update-minimal
# minimal-security                   = yum --security update-minimal
# minimal-security-severity:Critical =  --sec-severity=Critical update-minimal
update_cmd = default

# Whether a message should be emitted when updates are available,
# were downloaded, or applied.
update_messages = yes

# Whether updates should be downloaded when they are available.
download_updates = yes

# Whether updates should be applied when they are available.  Note
# that download_updates must also be yes for the update to be applied.
apply_updates = no

# Maximum amout of time to randomly sleep, in minutes.  The program
# will sleep for a random amount of time between 0 and random_sleep
# minutes before running.  This is useful for e.g. staggering the
# times that multiple systems will access update servers.  If
# random_sleep is 0 or negative, the program will run immediately.
# 6*60 = 360
random_sleep = 360


[emitters]
# Name to use for this system in messages that are emitted.  If
# system_name is None, the hostname will be used.
system_name = None

# How to send messages.  Valid options are stdio and email.  If
# emit_via includes stdio, messages will be sent to stdout; this is useful
# to have cron send the messages.  If emit_via includes email, this
# program will send email itself according to the configured options.
# If emit_via is None or left blank, no messages will be sent.
emit_via = stdio

# The width, in characters, that messages that are emitted should be
# formatted to.
ouput_width = 80


[email]
# The address to send email messages from.
email_from = root@localhost

# List of addresses to send messages to.
email_to = root

# Name of the host to connect to to send email messages.
email_host = localhost


[groups]
# NOTE: This only works when group_command != objects, which is now the default
# List of groups to update
group_list = None

# The types of group packages to install
group_package_types = mandatory, default

[base]
# This section overrides yum.conf

# Use this to filter Yum core messages
# -4: critical
# -3: critical+errors
# -2: critical+errors+warnings (default)
debuglevel = -2

# skip_broken = True
mdpolicy = group:main

# Uncomment to auto-import new gpg keys (dangerous)
# assumeyes = True
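One more thing on the CentOS/RHEL side: the yum-cron service itself needs to be enabled and running for any of the above to take effect:

# RHEL/CentOS 7
systemctl enable yum-cron && systemctl start yum-cron

# RHEL/CentOS 6
chkconfig yum-cron on && service yum-cron start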

Checking a Website or Rackspace Load Balancers Supported SSL Ciphers

So, you may have recently had an audit performed, or been warned about the dangers of SSLv3, the POODLE attack, Heartbleed, etc. You want to understand exactly which ciphers you’re using on the Load Balancer, cloud-server, or dedicated server. It’s actually very easy to do this with nmap. Install it first, naturally.

# CentOS / RedHat
yum install nmap

# Debian / Ubuntu
apt-get install nmap

# Check for SSL ciphers

# nmap hostnamegoeshere.com --script ssl-enum-ciphers -p 443

Starting Nmap 6.47 ( http://nmap.org ) at 2016-10-11 09:12 UTC
Nmap scan report for 134.213.236.167
Host is up (0.0017s latency).
PORT STATE SERVICE
443/tcp open https
| ssl-enum-ciphers:
| SSLv3: No supported ciphers found
| TLSv1.0:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - strong
| compressors:
| NULL
| TLSv1.1:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - strong
| compressors:
| NULL
| TLSv1.2: No supported ciphers found
|_ least strength: strong

Nmap done: 1 IP address (1 host up) scanned in 1.57 seconds

In this case we can see that only TLS v1.1 and TLS v1.0 are supported. No TLSv1.2 and no SSLv3.
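If you just want to poke at a single protocol version rather than the whole matrix, openssl s_client can do a quick spot check too (a sketch; the hostname is the same placeholder as above, and -tls1_2 requires an OpenSSL build with TLS 1.2 support):

# If the handshake succeeds, the negotiated protocol and cipher are shown; if the server refuses TLS 1.2 you get a handshake failure instead
openssl s_client -connect hostnamegoeshere.com:443 -tls1_2 < /dev/null | grep -E 'Protocol|Cipher'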

Cheers &
Best wishes,
Adam

Troubleshooting Rackspace CDN not serving files

A customer came to us with a strange issue with their CDN. I wanted to document this so that it is understood why it happened.

The customer is using two TLS origins and HTTP/2; why that is a problem will become clear shortly. What follows is also a general method of troubleshooting: replicating the behaviour of the CDN against the origin using host headers. It can be applied no matter what the problem, to understand the HTTP code given by the origin, which at least half of the time turns out to be the cause. The origin is the cloud-server your CDN is backed by.

Question
Hi,
We are currently experiencing some issues with the Cloud CDN. We are using this for our CSS and images and now everything is getting a HTTP/503 SERVICE UNAVAILABLE. If you want to test, you may test this url:
https://cdn.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css

This is supposed to deliver this file:
https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css

Is something mis-configured or are there some issues on the appliance?

Answer

First we confirm the origin is UP

# curl -I https://originserver.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css
HTTP/1.1 200 OK
Date: Tue, 11 Oct 2016 08:45:42 GMT
Server: Apache
Last-Modified: Tue, 11 Oct 2016 06:57:53 GMT
ETag: "ed26-53e91653c61d0"
Accept-Ranges: bytes
Content-Length: 60710
Vary: Accept-Encoding
Cache-Control: max-age=31536000, public
Expires: Wed, 11 Oct 2017 08:45:42 GMT
Access-Control-Allow-Origin: *
X-Frame-Options: SAMEORIGIN
Content-Type: text/css

The origin is the cloud-server the CDN pulls from, and as we can see the site is up. So what is causing the issue? The way the CDN works is that it sends a Host header for the CDN domain when it pulls from the origin, so the origin site has to answer for both hostnames. The reason is that the CDN uses CNAME hostnames to identify which CDN mapping is which, i.e. which path like /media/ directs to which static origin subdomain.

The best way to look further at the situation now is to query the origin (the subdomain that you’ve associated with the CDN subdomain Rackspace gives you) while sending the Host header for the CDN URL. When we do that, we get:

root@myweb:~# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: cdn.cusomerdomain.no'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:17:38 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

As we can see we get this odd HTTP 421 misdirected request.

# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: mycdnname1.scdn4.secure.raxcdn.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:18:06 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

~# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: cdncustomercname.cusomerdomain.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:17:45 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

https://httpd.apache.org/docs/2.4/mod/mod_http2.html

Looking at the mod_http2 documentation, this issue was caused by different TLS configurations for your domains combined with HTTP/2 trying to reuse the same connection, which will not work if the TLS configurations are not the same on the origin cloud-server side.

You just need to disable HTTP/2, or make the TLS configuration the same for both domains on the Apache2 side. I hope that this clarifies and makes sense to you; of course, if you have additional questions, comments or concerns please don't hesitate to reach out to us, we are here to help!
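As a sketch of the first option, assuming HTTP/2 was switched on via the Protocols directive in the global config or the vhost, removing h2 from it stops HTTP/2 being negotiated at all:

# Only offer HTTP/1.1; with "h2" removed, mod_http2 connection reuse no longer applies
Protocols http/1.1

Follow that with an apachectl configtest and a graceful reload.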

As you can see, it is important to debug a CDN by sending the host header the CDN uses to the origin, to replicate the issue the customer was experiencing. Essentially, the CDN edge nodes (the machines around the world that pull from the origin to distribute content worldwide) weren't able to retrieve the files from the origin using the host header domain defined in the control panel.

In this case the customer needed to adjust their Apache2 configuration; the problem was likely introduced by an Apache2 update or similar.