Finding all of the php.ini files and what the memory_limit is in each of them

Simple as it says.

root@aea-php-slave:~# grep -rnw '/etc' -e 'memory_limit'
/etc/php/5.6/cli/php.ini:393:memory_limit = -1
/etc/php/5.6/apache2/php.ini:393:memory_limit = 128M
/etc/php5/fpm/php.ini:406:memory_limit = 128M
/etc/php5/fpm/pool.d/www.conf.disabled:392:;php_admin_value[memory_limit] = 32M
/etc/php5/fpm/pool.d/site.conf:392:;php_admin_value[memory_limit] = 32M
/etc/php5/cli/php.ini:406:memory_limit = -1
/etc/php5/apache2/php.ini:406:memory_limit = 128M
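
To double-check which limit actually applies at runtime, you can ask PHP itself. The CLI commands below are standard; for the Apache value, a phpinfo() page is the surest check:

# Show which configuration file the CLI actually loaded
php --ini | grep "Loaded Configuration"

# Print the effective memory_limit for the CLI
php -r 'echo ini_get("memory_limit") . "\n";'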


Setting a 404 Error Page in nginx

It’s easy to set up a custom error page in nginx. Just create a file in your document root like /404.html; so if your document root is /var/www/html, create a file called 404.html there. Then go into your /etc/nginx/nginx.conf, or /etc/nginx/conf.d/mysite.conf, and add this to your configuration inside the server { } directive.

Remember, if you’re running both HTTPS and HTTP, you will need the directive in each server { } block; there’s a sketch of this at the end of this section.

error_page 404 /404.html;

It’s also possible to define a fully custom location securely;


        error_page 404 /cust_404.html;
        location = /cust_404.html {
                root /usr/share/nginx/html;
                internal;
        }


Just make sure you don’t also have a real file called 404.html sitting at /404.html, as I’m not sure this would work otherwise.
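
To illustrate the earlier point about HTTP and HTTPS both needing the directive, here is a minimal sketch of the two server blocks; the server name, document root and certificate lines are assumptions, not from a real config:

server {
        listen 80;
        server_name mysite.com;
        root /var/www/html;
        error_page 404 /404.html;
}

server {
        listen 443 ssl;
        server_name mysite.com;
        root /var/www/html;
        # ssl_certificate and ssl_certificate_key lines omitted for brevity
        error_page 404 /404.html;
}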

Plesk website running FastCGI: Gateway Timeout Errors and Slow Loads

So my friend Paul showed me how to troubleshoot FastCGI on Plesk boxes, very easy..

# pstree | grep cgi
     |       `-httpd---20*[php-cgi]

We can see that 20 php-cgi processes are running. If we check the maximum set for FastCGI in the Apache configuration file..

# cat /etc/httpd/conf.d/fcgid.conf  | grep MaxProc
  FcgidMaxProcesses 20

We see 20 is the maximum, so it’s definitely hitting the FastCGI limit. We need to increase the limit, so we just edit the file and raise the value of that directive;

# Edit the existing FcgidMaxProcesses line and raise it to 50
vi /etc/httpd/conf.d/fcgid.conf

# Or remove the old line and append the new value instead
echo "FcgidMaxProcesses 50" >> /etc/httpd/conf.d/fcgid.conf

Setting a separate memory limit for phpMyAdmin from the rest of the sites

A common issue I see Rackspace customers with is their phpMyAdmin not having enough memory. Often I’ll see countless tickets where the memory_limit is increased for phpMyAdmin, and then, when one of their virtualhosts falls over, it is decreased again for all of the sites, until someone wants to use phpMyAdmin again.

Not very good really, is it? Actually, fixing this is quite easy. Let’s provide a php.ini for phpMyAdmin that only phpMyAdmin uses;


# Copy original php configuration
cp /etc/php.ini /usr/share/phpMyAdmin/php.ini

# Modify /usr/share/phpMyAdmin/php.ini so that the following variable is set as a higher value
memory_limit = 256M

Naturally, if you now go to customerwebsite.com/phpmyadmin/php.ini you’ll see a nice php file waiting for you… not good. We need to protect the php.ini file, as it can expose details we don’t want to be seen; let’s make it as hard as possible to find out the server configuration and hide php.ini altogether.

# The file to edit may differ, but it can be any file inside conf.d; make one if you like, call it phpini.conf or something
vi /etc/httpd/conf.d/php.conf
<Files php.ini>
          Order allow,deny
          Deny from all
</Files>
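
Note that Order/Deny is Apache 2.2 syntax; if you’re on Apache 2.4, the equivalent block uses mod_authz_core:

<Files php.ini>
          Require all denied
</Files>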

Don’t forget the most important step

# Check apache syntax
apachectl -t

# Restart the apache process
apachectl graceful
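
To verify the block took effect, request the file and check you get a 403 back; the URL here is just the example from above:

# Expect "HTTP/1.1 403 Forbidden" in the response headers
curl -I http://customerwebsite.com/phpmyadmin/php.ini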

Another pretty simple thing to do. That isn’t pretty simple until you do it.

Controlling MySQL Queries, and Killing Them if Necessary

Today we had a customer call in who had a problem with their Magento site. What had happened was they had reached the maximum number of connections for MySQL.

| Id     | User     | Host           | db          | Command | Time | State                        | Info                                                                 |
| 250481 | someuser | localhost:7777 | somewebsite | Query   | 2464 | Writing to net               | SELECT /*!40001 SQL_NO_CACHE */ * FROM `mg_sales_flat_quote_address` |
| 250486 | someuser | localhost      | somewebsite | Query   | 2459 | Waiting for table level lock | INSERT INTO `mg_sales_flat_quote_address` (`quote_id`, `created_at`, `updated_at`, `customer_id`, `a |

.. and a whole load more ‘Waiting for table level lock’ entries were underneath.

The solution is simple; from the MySQL client:

mysql> KILL 250481;

250481 is the process ID (the Id column in the processlist) of the SELECT query causing these issues. I suspect the locks will clear eventually, but in the meantime we want to make sure max connections don’t get reached, otherwise the website will be down again!
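
If a whole pile of queries are stuck behind the lock, a sketch like this can generate the KILL statements in bulk. This assumes MySQL 5.1+ where information_schema.PROCESSLIST exists, and you should review the generated list before running any of it:

mysql> SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE state = 'Waiting for table level lock';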

How Rackspace CDN works and what are the requirements?

The Rackspace CDN isn’t too complicated to set up. If you wish to configure the CDN with an origin dedicated server, there is really only one main requirement;

For example, if the CDN domain Rackspace gives you for use with the CDN product is cdn.customer.com.cdn306.raxcdn.com, you will be required to update the website (origin) configuration for Apache on this server.

i.e. you need to include the CDN domain cdn.customer.com.cdn306.raxcdn.com as a ServerAlias for Apache, or in the server_name for nginx. That server alias/virtualhost then receives and handles the requests for the CDN.
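
As a sketch, the Apache and nginx versions of that look something like this; the primary domain and document root are assumptions:

<VirtualHost *:80>
        ServerName customer.com
        ServerAlias cdn.customer.com.cdn306.raxcdn.com
        DocumentRoot /var/www/html
</VirtualHost>

server {
        listen 80;
        server_name customer.com cdn.customer.com.cdn306.raxcdn.com;
        root /var/www/html;
}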

Basically, a CDN request comes in to the Rackspace raxcdn.com domain, and if there is no cached version on a local edge node, the CDN makes a call to the origin server IP you configured, sending the hostname in an HTTP Host header. This is indeed why you need to set up a ServerAlias for the raxcdn.com hostname once it is created.

Files must also be accessible on the IP address of your origin dedicated server, e.g.

http://4.2.2.4/mycdn/images/profile.jpg

When a request comes in to http://cdn.customer.com.cdn306.raxcdn.com, the CDN will, if the file is not cached, fetch it from the origin server.

If you wish, you can add an additional CNAME of your own for your own domain.

Example only:
i.e. cdn.mydomain.com CNAME cdn.customer.com.cdn306.raxcdn.com
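
Once the CNAME is in place, you can verify it resolves as expected; the domain is the hypothetical one from the example above:

# Should return cdn.customer.com.cdn306.raxcdn.com.
dig +short cdn.mydomain.com CNAME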

This allows customers to make requests to cdn.mydomain.com/mycdn/images/profile.jpg, which resolve to the Rackspace CDN; if the CDN has no cached copy, it forwards the request to the origin server and caches the file across the edge nodes. The edge nodes are transparent, and the only thing you really need to worry about is configuring the virtualhost correctly.

The only reason you require the virtualhost is that the CDN uses the Host header to allow your server to identify the virtualhost serving the CDN. This naturally allows you to serve the CDN on the same IP address. Naturally, if you intend to use SSL as well, you may wish to consider a dedicated IP, and you may find this community article of use.

https://support.rackspace.com/how-to/create-a-rackspace-cdn-service/

Rackspace can help you configure the virtualhost, and may potentially even be able to help you configure the CDN product with your dedicated server origin as well.

For pricing, please see the CDN calculator at;

www.rackspace.com/pricing
https://support.rackspace.com/how-to/rackspace-cdn-faq/

Cheers &
Best wishes,
Adam

Release: The following signatures were invalid: KEYEXPIRED

I’d recommend trying to remove the key, and letting the GPG key get installed automatically when you are prompted for the new key. The exact fix depends on the output of the error; for me it was:

W: GPG error: http://download.opensuse.org ./ Release: The following signatures were invalid: KEYEXPIRED 1489340837
sudo apt-key del 1489340837
sudo apt-get clean
sudo apt-get update
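
If you are not sure which key is the culprit, you can list the installed keys first; expired ones are marked in the output (the exact format varies by release):

sudo apt-key list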

Less Ghetto Log Parser for Website Hitcount/Downtime Analysis

Yesterday I created a proof-of-concept script, which basically goes off and identifies the hitcounts of a website, and can give a technician, within a short duration of time (minutes instead of hours), exactly where hitcounts are coming from and when.

This is kind of a tradeoff, between a script that is automated, and one that is flexible.

The end goal is to provide a hitcount vs memory commit metric value. A NEW TYPE OF METRIC! HURRAH! (This is needed by the industry IMO).

It would also be nice to generate graphing, and mean, average, and ranges, etc., so it can provide output like the ‘stat’ tool. Here is how I have progressed;

#!/bin/bash
#
# Author: 	Adam Bull, Cirrus Infrastructure, Rackspace LTD
# Date: 	March 20 2017
# Use:		This script automates the analysis of webserver log hitcounts and
# 		provides a breakdown to indicate whether outages are caused by website visits,
#		in correlation to memory and load avg figures


# Settings

# What logfile to get stats for
logfile="/var/log/httpd/google.com-access.log"

# What year month and day are we scanning for minute/hour hits
year=2017
month=Mar
day=9

echo "Total HITS: MARCH"
grep "/$month/$year" "$logfile" | wc -l;

# Hours
for i in {00..23};

do echo "      > 9th March 2017, hits this $i hour";
grep "$day/$month/$year:$i" "$logfile" | wc -l;

        # break down the minutes in a nested visual way that's awesome

# Minutes
for j in {00..59};
do echo "                  >>hits at $i:$j";
grep "$day/$month/$year:$i:$j" "$logfile" | wc -l;
done

done

Thing is, after I wrote this, I wasn’t really happy, so I refactored it a bit more;

#!/bin/bash
#
# Author: 	Adam Bull, Cirrus Infrastructure, Rackspace LTD
# Date: 	March 20 2017
# Use:		This script automates the analysis of webserver log hitcounts and
# 		provides a breakdown to indicate whether outages are caused by website visits,
#		in correlation to memory and load avg figures


# Settings

# What logfile to get stats for
logfile="/var/log/httpd/someweb.biz-access.log"

# What year month and day are we scanning for minute/hour hits
year=2017
month=Mar
day=9

echo "Total HITS: $month"
grep "/$month/$year" "$logfile" | wc -l;

# Hours
for i in {00..23};

do
hitsperhour=$(grep "$day/$month/$year:$i" "$logfile" | wc -l)
echo "    > $day $month $year, hits in hour $i: $hitsperhour"

        # break down the minutes in a nested visual way that's awesome

# Minutes
for j in {00..59};
do
hitsperminute=$(grep "$day/$month/$year:$i:$j" "$logfile" | wc -l)
echo "                  >>hits at $i:$j  $hitsperminute";
done

done

Now it’s pretty leet.. well, simple, but functional. Here is the output of the more nicely refined script; I’m really satisfied with the tabulation.

[root@822616-db1 automation]# ./list-visits.sh
Total HITS: Mar
6019301
    > 9 Mar 2017, hits in hour 01: 28793
                  >>hits at 01:01  416
                  >>hits at 01:02  380
                  >>hits at 01:03  417
                  >>hits at 01:04  408
                  >>hits at 01:05  385
^C
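
Since the script’s own comments talk about correlating hits with memory and load average figures, it’s worth keeping a copy of the output to line up against sar data later. A hedged example; the output path is hypothetical and sar needs the sysstat package installed:

# Save the breakdown, then compare the busy minutes against memory figures
./list-visits.sh > /tmp/hitcounts-9-Mar-2017.txt
sar -r | less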