Setting a separate memory limit for phpMyAdmin from the rest of the sites

A common issue I see with Rackspace customers is their phpMyAdmin not having enough memory. Often I'll see countless tickets where the memory_limit is increased for phpMyAdmin, and then, when one of their virtualhosts falls over, it is decreased again for all of the sites, until someone wants to use phpMyAdmin again.

Not very good really, is it? Actually, fixing this is quite easy. Let's provide a php.ini for phpMyAdmin that only phpMyAdmin uses;


# Copy original php configuration
cp /etc/php.ini /usr/share/phpMyAdmin/php.ini

# Modify /usr/share/phpMyAdmin/php.ini so that the following variable is set to a higher value
memory_limit = 256M
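
If you want to make that change non-interactively, a quick sed one-liner along these lines should do it (just a sketch; the 256M value and the phpMyAdmin path are the ones from the example above):

# Bump memory_limit in the phpMyAdmin-only php.ini
sed -i 's/^memory_limit = .*/memory_limit = 256M/' /usr/share/phpMyAdmin/php.ini

# Confirm the change took
grep ^memory_limit /usr/share/phpMyAdmin/php.ini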

Naturally, if you now go to customerwebsite.com/phpmyadmin/php.ini you'll see a nice php.ini file waiting for you… not good. We need to protect the php.ini file, as it can expose things we don't want to be seen; let's make it as hard as possible to find out the server configuration and hide php.ini altogether.

# The file to edit may differ, but it can be any file inside conf.d; make one if you like and call it phpini.conf or something
vi /etc/httpd/conf.d/php.conf
<Files php.ini>
          Order allow,deny
          Deny from all
</Files>
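
Note the Order/Deny syntax above is for Apache 2.2; on Apache 2.4 the equivalent block (using mod_authz_core) would look something like this:

<Files php.ini>
          Require all denied
</Files>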

Don't forget the most important step

# Check apache syntax
apachectl -t

# Restart the apache process
apachectl graceful

Another pretty simple thing to do. That isn't pretty simple until you do it.

Controlling MySQL Queries, and killing if necessary.

Today we had a customer call in with a problem on their Magento site. What had happened was that they had reached the maximum number of connections on their MySQL server.

| 250481 | someuser | localhost:7777 | somewebsite | Query   | 2464 | Writing to net               | SELECT /*!40001 SQL_NO_CACHE */ * FROM `mg_sales_flat_quote_address`                                 |
| 250486 | someuser | localhost      | somewebsite | Query   | 2459 | Waiting for table level lock | INSERT INTO `mg_sales_flat_quote_address` (`quote_id`, `created_at`, `updated_at`, `customer_id`, `a |

.. and a whole load more queries waiting for a table level lock were underneath.

The solution is simple; from the MySQL prompt, kill the offending query thread:

kill 250481

250481 is the process ID of the SELECT query causing these issues. I suspect it will go away eventually, but in the meantime I want to make sure max_connections doesn't get reached, otherwise the website will be down again!
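
For reference, here is the whole workflow as a quick sketch from the shell (the thread ID 250481 is just the one from the example above):

# Show everything MySQL is doing right now, including each thread Id
mysql -e "SHOW FULL PROCESSLIST;"

# Kill the long-running SELECT by its Id
mysql -e "KILL 250481;"

# Keep an eye on the connection count afterwards
mysql -e "SHOW STATUS LIKE 'Threads_connected';"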

How the Rackspace CDN works and what the requirements are

The Rackspace CDN isn't too complicated to set up. If you wish to configure the CDN with an origin dedicated server, there is really only one main requirement;

For example, if the CDN domain rackspace gives you for use with the CDN product is cdn.customer.com.cdn306.raxcdn.com, you will be required to update the website configuration (origin) for Apache on this server.

i.e. you need to include the CDN domain cdn.customer.com.cdn306.raxcdn.com as a ServerAlias for Apache, or as a server_name for Nginx. That server alias/virtualhost then receives and handles the requests for the CDN.

Basically, a CDN request comes in to the Rackspace raxcdn.com domain, and if there is no cached version on a local edge node, the CDN makes a call to the origin server IP you configured, sending the hostname in the HTTP Host header. This is indeed why you need to set up a ServerAlias for the raxcdn.com hostname once it is created.
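
As a rough sketch, assuming Apache and the example CDN hostname above (adjust the domain and document root to your own setup), the virtualhost might look something like this:

<VirtualHost *:80>
    ServerName customer.com
    # The CDN hostname Rackspace issued for this origin
    ServerAlias cdn.customer.com.cdn306.raxcdn.com
    # Example document root; yours will differ
    DocumentRoot /var/www/vhosts/customer.com/httpdocs
</VirtualHost>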

Provided that the files can be accessed on the IP address of your origin dedicated server, i.e.

http://4.2.2.4/mycdn/images/profile.jpg

then when a request comes in to http://cdn.customer.com.cdn306.raxcdn.com, the CDN will, if there is no cached copy, fetch the file from the origin server.
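
You can check that the origin answers for the CDN hostname before the CDN or any DNS is involved, with something like this (using the example IP and path from above):

# Ask the origin IP for the file directly, sending the CDN hostname as the Host header
curl -I -H "Host: cdn.customer.com.cdn306.raxcdn.com" http://4.2.2.4/mycdn/images/profile.jpg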

If you wish, you can also add a CNAME for your own domain.

Example only:
i.e. cdn.mydomain.com CNAME cdn.customer.com.cdn306.raxcdn.com

This allows a customer to make a request to cdn.mydomain.com/mycdn/images/profile.jpg, which forwards to the Rackspace CDN; if the CDN has no cached copy of the file, it forwards the request to the origin server and caches the file around the edge nodes. The edge nodes are transparent, and the only thing you really need to worry about is configuring the virtualhost correctly.

The only reason you require the virtualhost is that the CDN uses the Host header to allow your server to identify the virtualhost serving the CDN. This naturally allows you to serve the CDN on the same IP address. If you intend to use SSL as well, you may wish to consider a dedicated IP, and may find this community article of use.

https://support.rackspace.com/how-to/create-a-rackspace-cdn-service/

Rackspace can help you configure the virtualhost, and may even be able to help you configure the CDN product with your dedicated server origin as well.

For pricing, please see the CDN calculator at;

www.rackspace.com/pricing
https://support.rackspace.com/how-to/rackspace-cdn-faq/

Cheers &
Best wishes,
Adam

Release: The following signatures were invalid: KEYEXPIRED

I'd recommend removing the key and letting the GPG key get installed automatically when you are prompted for the new key. The exact key will depend on the output of the error; for me it was:

W: GPG error: http://download.opensuse.org ./ Release: The following signatures were invalid: KEYEXPIRED 1489340837
sudo apt-key del 1489340837
sudo apt-get clean
sudo apt-get update
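
If the number in the KEYEXPIRED message doesn't correspond to a key on your system, you can list the installed keys and pick out the expired one yourself; a rough sketch (the key ID will obviously be specific to your box):

# List all trusted apt keys and look for the one marked expired
sudo apt-key list

# Delete the expired key by its ID, then refresh the package lists
sudo apt-key del <KEYID>
sudo apt-get update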

Less Ghetto Log Parser for Website Hitcount/Downtime Analysis

Yesterday I created a proof of concept script which basically goes off and identifies the hit counts of a website, and can show a technician, within a short duration of time (minutes instead of hours), exactly when hits are coming in and where they are coming from.

This is kind of a tradeoff, between a script that is automated, and one that is flexible.

The end goal is to provide a hit count vs memory commit metric. A NEW TYPE OF METRIC! HURRAH! (This is needed by the industry, IMO.)

It would also be nice to generate graphing and mean, average, ranges, etc., so it can provide output like the 'stat' tool. Here is how I have progressed so far;

#!/bin/bash
#
# Author: 	Adam Bull, Cirrus Infrastructure, Rackspace LTD
# Date: 	March 20 2017
# Use:		This script automates the analysis of webserver logs hitcounts and
# 		provides a breakdown to indicate whether outages are caused by website visits
#		In correlation to memory and load avg figures


# Settings

# What logfile to get stats for
logfile="/var/log/httpd/google.com-access.log"

# What year month and day are we scanning for minute/hour hits
year=2017
month=Mar
day=9

echo "Total HITS: MARCH"
grep "/$month/$year" "$logfile" | wc -l;

# Hours
for i in 0{0..9} {10..23};

do echo "      > 9th March 2017, hits this $i hour";
grep "$day/$month/$year:$i" "$logfile" | wc -l;

        # break down the minutes in a nested visual way that's AWsome

# Minutes
for j in 0{0..9} {10..59};
do echo "                  >>hits at $i:$j";
grep "$day/$month/$year:$i:$j" "$logfile" | wc -l;
done

done

Thing is, after I wrote this, I wasn’t really happy, so I refactored it a bit more;

#!/bin/bash
#
# Author: 	Adam Bull, Cirrus Infrastructure, Rackspace LTD
# Date: 	March 20 2017
# Use:		This script automates the analysis of webserver logs hitcounts and
# 		provides a breakdown to indicate whether outages are caused by website visits
#		In correlation to memory and load avg figures


# Settings

# What logfile to get stats for
logfile="/var/log/httpd/someweb.biz-access.log"

# What year month and day are we scanning for minute/hour hits
year=2017
month=Mar
day=9

echo "Total HITS: $month"
grep "/$month/$year" "$logfile" | wc -l;

# Hours
for i in 0{0..9} {10..23};

do
hitsperhour=$(grep "$day/$month/$year:$i" "$logfile" | wc -l);
echo "    > $day $month $year, hits for hour $i: $hitsperhour"

        # break down the minutes in a nested visual way that's AWsome

# Minutes
for j in 0{0..9} {10..59};
do
hitsperminute=$(grep "$day/$month/$year:$i:$j" "$logfile" | wc -l);
echo "                  >>hits at $i:$j  $hitsperminute";
done

done

Now it's pretty leet… well, simple, but functional. Here is the output of the more nicely refined script; I'm really satisfied with the tabulation.

[root@822616-db1 automation]# ./list-visits.sh
Total HITS: Mar
6019301
    > 9 Mar 2017, hits this  hour: 28793
                  >>hits at 01:01  416
                  >>hits at 01:02  380
                  >>hits at 01:03  417
                  >>hits at 01:04  408
                  >>hits at 01:05  385
^C
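
The hit count vs memory commit idea from earlier could be bolted on roughly like this, assuming sysstat is installed and sar has been collecting data for the day in question (the hour here is just an example):

# Hits for a given hour, reusing the variables from the script above
hour=13
hits=$(grep "$day/$month/$year:$hour" "$logfile" | wc -l)
echo "Hour $hour: $hits hits"

# Memory committed (kbcommit / %commit) over the same hour, from sar
# (for a previous day, point sar at the right /var/log/sa/saDD file with -f)
sar -r -s ${hour}:00:00 -e ${hour}:59:59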

Resetting trusted CA certificates in Redhat

I found this today, a very handy link that explains how to regenerate the CAs.

https://access.redhat.com/solutions/1549003
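
For reference, on RHEL/CentOS 6 and 7 the regeneration described there boils down to something like the following (a sketch; see the article for the authoritative steps):

# Enable the ca-trust tooling and rebuild the consolidated CA bundles
update-ca-trust force-enable
update-ca-trust extract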

This didn't actually resolve my error, because my customer had a CAFile that contained a certificate, but the CAPath was empty… I've seen weird NSS -8172 errors like this before.

I'll update this post once I figure out what is actually causing it.

Ghetto but simple Log Parser for testing website performance

So… I got fed up with constantly writing my own stuff for basic things. I'm going to turn this into something more spectacular that accepts command-line input and also allows you to define days, months, ranges, and stuff like that.

It’s a no-frills-ghetto log parser.

#!/bin/bash

echo "Total HITS: MARCH"
grep "/Mar/2017" /var/log/httpd/somesite.com-access_log | wc -l;

for i in 0{0..9} {10..23};

do echo "      > 9th March 2017, hits this $i hour";
grep "09/Mar/2017:$i" /var/log/httpd/somesite.com-access_log | wc -l;

        # break down the minutes in a nested visual way that's AWsome
for j in 0{0..9} {10..59};
do echo "                  >>hits at $i:$j";
grep "09/Mar/2017:$i:$j" /var/log/httpd/somesite.com-access_log | wc -l;
done

done

It’s not perfect, it’s just a proof of concept, really.
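
As a very rough sketch of the command-line version I have in mind (the argument handling here is just an assumption of how it might look):

#!/bin/bash
# Usage: ./hits.sh <logfile> <day> <month> <year>
# e.g.   ./hits.sh /var/log/httpd/somesite.com-access_log 09 Mar 2017
logfile="$1"; day="$2"; month="$3"; year="$4"

echo "Total HITS for $month $year:"
grep "/$month/$year" "$logfile" | wc -l

for i in 0{0..9} {10..23}; do
    echo "      > $day $month $year, hits in hour $i:"
    grep "$day/$month/$year:$i" "$logfile" | wc -l
done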

Migrating a Plesk site after moving keeps going to the default Plesk page

So today a customer had this really weird issue where we could see that a website domain that had been moved from one server to a new Plesk server wasn't loading correctly. It actually turned out to be simple; when trying to access a file on the domain like info.php, I would get the phpinfo output.

curl http://www.customerswebsite.com/info.php 

This suggested to me that the website document root was working, and the only thing missing was probably the index. This is what it actually turned out to be.

I wanted to check that info.php really was in this document root, and not in some other virtualhost's document root, so I renamed info.php to randomnumbers12313.php and the page still loaded. Being able to add a file on the filesystem and see it served confirms that I had found the correct site, which is important when troubleshooting vast configurations.

I also found a really handy one-liner for troubleshooting which logfile a request ends up in. This might not be great on a really busy server, but you could still grep for your IP address as well.

Visit the broken/affected website we are troubleshooting

curl -I somecustomerswebsite.com

Show all visits to all Apache websites happening right now, whilst we visit the site ourselves for testing

tail -f /var/log/httpd/*.log 

This will show us which virtualhost and/or path is being accessed, and from where.

Show only the visits to Apache websites coming from a given IP

tail -f /var/log/httpd/*.log  | grep 4.2.2.4

Where 4.2.2.4 is the IP address you're using to visit the site. If you don't know what your IP is, type icanhazip into Google, or 'what is my ip', job done.
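
Or straight from the shell, something like:

# Print your public IP address (icanhazip is the service mentioned above)
curl -4 icanhazip.com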

Fixing the Plesk website without a directory index

[root@mehcakes-App1 conf]# plesk bin domain --update somecustomerswebsite.com -nginx-serve-php true -apache-directory-index index.php

Simple enough… but it could be a pain if you don't know what you're looking for.
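
For context, outside of Plesk this is just the DirectoryIndex directive in the site's Apache configuration; a plain-Apache sketch would be the line below, although on a Plesk box you should let the plesk utility manage it for you as above.

# Inside the virtualhost for the site
DirectoryIndex index.php index.html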