Renaming files in a folder en masse

I figured out a way to rename the files in a folder en masse, which took me a few seconds instead of about an hour of clicking.

Linux automation is a win.

# mmv "Documentary - National Geometry series-*.avi" "#1.avi"

I had some old TV series, but every episode was prefixed with the same string ‘Documentary - National Geometry series’ before its volume number and title. The command above strips that out for all 100 of them in a single go. Basically, #1 in the target pattern refers to whatever the * matched in the source pattern, so everything after the fixed prefix is kept as the new filename.
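If you want to sanity-check the pattern before letting it loose on 100 files, mmv can do a dry run first; the -n flag should just print the renames rather than performing them (a quick sketch using the same filenames as above):

mmv -n "Documentary - National Geometry series-*.avi" "#1.avi"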

It’s a really nice utility, and on BSD systems I prefer it to Perl’s rename, in my opinion.

Rapidly troubleshooting mail not arriving at its destination

Today I had a customer whose mail was not arriving at its destination. In my customer’s case, they knew it was arriving at the destination but going into the spam folder, which is most likely due to blacklisting. However, some ISPs or clients don’t know whether it’s reaching the spam folder at the other end, and since the most common causes are a missing SPF (Sender Policy Framework) record, MX record or DKIM record, it is possible to rapidly check the DNS for each using dig, if the sender domain is known.
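For example, assuming the sending domain is example.com and a DKIM selector of ‘default’ (both just placeholders here), the relevant records can be checked in a few seconds:

# MX records for the sending domain
dig +short MX example.com

# SPF lives in a TXT record at the domain apex
dig +short TXT example.com | grep -i spf

# DKIM public key lives at <selector>._domainkey.<domain>
dig +short TXT default._domainkey.example.com

# PTR (reverse DNS) for the sending mail server's IP
dig +short -x 203.0.113.10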

To check for IP blacklistings on a mail server, use its IP in one of the many blacklist checkers;

https://mxtoolbox.com/SuperTool.aspx?action=blacklist

In other cases the problem might be caused by email bounces, due to the PTR, MX or DKIM records, with mail not even getting into the inbox. You can see that on the sending mail server using a simple grep command;

cat /var/log/maillog | grep -i status=bounced

You probably want to save the output to a file as well;

cat /var/log/maillog | grep -i status=bounced > bouncedemail.txt 

If you’ve ensured all the sending domains are correctly configured in DNS, you might then want to know which destination domains bounced the email..

[root@api ~]# cat /var/log/maillog | grep -i status=bounced | awk '{print $7}'
to=<pi.magnum@generationgame.com>,
to=<john.partridge@boeeb.com>,

You could use sed to extract which domains failed..
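A rough sketch of that, building on the awk output above (the field position assumes the same maillog format as before):

grep -i status=bounced /var/log/maillog | awk '{print $7}' \
  | sed -e 's/^to=<//' -e 's/>,$//' \
  | awk -F@ '{print $2}' | sort | uniq -c | sort -rn

That strips the to=< > wrapper, keeps only the part after the @, and counts how many bounces each destination domain produced.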

Then you could use whois against those domains to reach out to their email contact with some automation that explains ‘we’ve checked DKIM, MX and SPF and all are configured correctly, so we believe this is an error on your end, etc, blah blah’..
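whois output formats vary a lot between registrars, so this is only a rough sketch, but something along these lines would pull out anything that looks like a contact address for a bounced domain:

# generationgame.com is one of the bounced recipient domains from the log above
whois generationgame.com | grep -iE 'abuse|email'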

Such a thing would be a good idea to implement for some large email providers, and I’m sure you could automate the DNS checking as well. You could likely automate the whole thing just by watching the logs and DNS records, with some intelligent grep and awk.

Not bad.

Lsyncd fails with Error: Terminating since out of inotify watches.

This is a remarkably easy problem to solve.

From the man page:

The inotify API provides a mechanism for monitoring filesystem events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.

The problem is a kernel tunable that limits how many inotify watches (and therefore how much kernel memory) processes such as lsyncd can use. Let’s double-check what it is currently set to:

Check the current inotify max_user_watches value

[root@server php]# cat /proc/sys/fs/inotify/max_user_watches
100000

Check the amount of available memory on the server to ensure it can deal with the increase

[root@server-new php]# free -m
             total       used       free     shared    buffers     cached
Mem:          7358       6577        781          1        376       2427
-/+ buffers/cache:       3773       3584
Swap:            0          0          0

The cached memory can effectively be counted as free; Linux uses otherwise idle memory for caching and hands it back when processes need it.

Each inotify watch uses roughly 1 kB of kernel memory on 64-bit systems, and about half that on 32-bit, so raising the limit to 200,000 watches costs at most around 200 MB, which this server can comfortably absorb.

Increase Inotify watches to a higher value

sysctl fs.inotify.max_user_watches=200000
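Note that sysctl on its own only changes the running kernel; to make the new value survive a reboot you would also want to persist it, for example (the exact file can vary by distribution, so treat this as a sketch):

echo "fs.inotify.max_user_watches=200000" >> /etc/sysctl.conf
sysctl -p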

Magento Rewrite Woes … really woes

I had a customer this week who had some terrible rewrite woes with their Magento site. They knew that a whole ton of their images were getting 404s, most likely because the rewrite wasn’t pointing at the correct filesystem path where the files resided. This was due to their cache being broken and their second developer not creating a proper rewrite rule.

As sysadmins our job is not a development role, we are a support role, and in order to enable the developer to fix the problem, the developer needs to be able to see exactly what it is; that is the sysadmin’s task. I wrote this really ghetto script, which essentially hunts the nginx error log for requests that failed with ‘No such file’ and then qualifies them by grepping for jpg file types. This is not a perfect way of doing it, however it is really effective at identifying the broken links.

Then I have a separate routine that strips each of the file URIs down to the filename, locates the file on the filesystem, and pairs the filesystem path the rewrite should be going to with the incorrect path the rewrite is presently sending the URL to. See the script below:

#!/bin/bash

# Author: Adam Bull
# Company: Rackspace LTD, Hayes
# Purpose:
#          This customer has a difficulty with nginx rewriting to the incorrect file
# this script goes into the nginx error.log and finds all the images which have the broken rewrite rules
# then after it has identified the broken rewrite rule files, it searches for the correct file on the filesystem
# then correlates it with the necessary rewrite rule that is required
# this could potentially be used for in-place upgrade by developers
# to ensure that website has proper redirects in the case of bugs with the ones which exist.

# This script will effectively find all 404's and give necessary information for forming a rewrite rule, i.e. the request url, from nginx error.log vs the actual filesystem location
# on the hard disk that the request needs to go to, but is not being rewritten to file path correctly already

# that way this data could be used to create rewrite rules programmatically, potentially
# This is a work in progress


# These are used for display output
cat /var/log/nginx/error.log /var/log/nginx/error.log.1 | grep 'No such file' | awk '{print "URL Request:",$21,"\nFilesystem destination missing:",$7"\n"}'
zcat /var/log/nginx/*error*.gz  | grep 'No such file' | awk '{print "URL Request:",$21,"\nFilesystem destination detected missing:",$7"\n"}'

# These below are used for variable population for locating actual file paths of missing files needed to determine the proper rewrite path destination (which is missing)
# we qualify this with only *.jpg files

cat /var/log/nginx/error.log /var/log/nginx/error.log.1 | grep 'No such file' | awk '{print $7}' | sed 's/\"//g' |  sed 's/.*\///' | grep jpg > lost.txt
zcat /var/log/nginx/*error*.gz  | grep 'No such file' | awk '{print $7}' | sed 's/\"//g' |  sed 's/.*\///' | grep jpg >> lost.txt

# FULL REQUEST URL NEEDED AS WELL
cat /var/log/nginx/error.log /var/log/nginx/error.log.1 | grep 'No such file' | awk '{print "http://mycustomerswebsite.com",$21}' | sed 's/\"//g' | grep jpg > lostfullurl.txt
zcat /var/log/nginx/*error*.gz  | grep 'No such file' | awk '{print "http://customerwebsite.com/",$21}' | sed 's/\"//g' | grep jpg >> lostfullurl.txt

# The below section is used for finding the lost files on filesystem and pairing them together in variable pairs
# for programmatic usage for rewrite rules


while true
do
  read -r f1 <&3 || break
  read -r f2 <&4 || break
  printf '\n\n'
  printf 'Found a broken link getting a 404 at: %s\n' "$f1"
  printf 'Locating the correct link of the file on the filesystem:\n'
  find /var/www/magento | grep "$f2"
done 3<lostfullurl.txt 4<lost.txt

I was particularly proud of the last section, which uses a ‘dual loop over two input files’ in a single while statement, allowing me to achieve the pairing described above.

Output is in the form of:


Found a broken link getting a 404 at :
http://customerswebsite.com/media/catalog/product/cache/1/image/800x700/9df78eab33525d08d6e5fb8d27136e95/b/o/image-magick-file-red.jpg
Locating the correct link of the file on the filesystem:
/var/www/magento/media/catalog/product/b/o/image-magick-file-red.jpg

As you can see, the path on the filesystem is different from the URL the rewrite is sending the request to, hence the 404s this customer is getting.

This could be a really useful script, and I see no reason why it could not generate the rewrite rules programmatically from the 404 failures it finds; it could actually create the rules necessary to fix the problem. Now, this is not an ideal fix, however the script will give you an overview, either to fix this properly as a developer or to patch it up with new rewrite rules as a sysadmin.
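As a very rough sketch of that idea, reusing the lostfullurl.txt and lost.txt files produced above (the hostname and docroot are the same placeholders as in the script, and any generated rules would obviously need reviewing before going anywhere near production):

#!/bin/bash
# Hypothetical follow-on: emit one nginx rewrite rule per broken image found above
while true
do
  read -r url <&3 || break
  read -r file <&4 || break
  # where the file actually lives on disk
  target=$(find /var/www/magento -name "$file" | head -n 1)
  [ -z "$target" ] && continue
  # strip the scheme, host and any separator, leaving the bare request path
  path=$(printf '%s\n' "$url" | sed 's|^http://[^/ ]*[/ ]*||')
  # rewrite the broken request path to the file's real docroot-relative path
  printf 'rewrite ^/%s$ %s permanent;\n' "$path" "${target#/var/www/magento}"
done 3<lostfullurl.txt 4<lost.txt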

I’m really proud of this one, even though not everyone may see a use for it. There really, really is, and this customer is stoked. Think of it like this: how can a developer fix it if he doesn’t have a clear idea of what is broken? That is the sysadmin’s job.

Cheers &
Best wishes,
Adam

Recovering Corrupt RPM DB

I had a support specialist today with an open yum task they couldn’t kill gracefully; after a kill -9, this was happening:

[root@mybox home]# yum list | grep -i xml
rpmdb: Thread/process 31902/140347322918656 failed: Thread died in Berkeley DB library
error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 -  (-30974)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed
[root@mybox home]#

The fix is to manually remove the stale __db* environment files from the RPM database and then rebuild it;


[root@mybox home]# rm -f /var/lib/rpm/__*
[root@mybox home]# rpm --rebuilddb
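Afterwards it’s worth sanity-checking that the database is readable again, for example:

rpm -qa | wc -l
yum clean all
yum list | grep -i xml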

Finding all of the php.ini files and what memory_limit is set to in each

Simple as it says.

root@aea-php-slave:~# grep -rnw '/etc' -e 'memory_limit'
/etc/php/5.6/cli/php.ini:393:memory_limit = -1
/etc/php/5.6/apache2/php.ini:393:memory_limit = 128M
/etc/php5/fpm/php.ini:406:memory_limit = 128M
/etc/php5/fpm/pool.d/www.conf.disabled:392:;php_admin_value[memory_limit] = 32M
/etc/php5/fpm/pool.d/site.conf:392:;php_admin_value[memory_limit] = 32M
/etc/php5/cli/php.ini:406:memory_limit = -1
/etc/php5/apache2/php.ini:406:memory_limit = 128M


Site keeps on going down because of spiders

So a Rackspace customer was consistently having an issue with their site going down, even after the number of workers was increased. In this customer’s case it looked like they were being hit really hard by Yahoo Slurp, Googlebot, the Ahrefs bot, and many many others.

So I checked the hour the customer was affected and found that, in that hour, Yahoo Slurp and Googlebot alone accounted for 415 requests. That made up around 25% of all the requests to the site, so it was certainly possible that max workers were being reached due to spikes in bot traffic in parallel with potential spikes in the usual visitors.

[root@www logs]#  grep '01/Mar/2017:10:' access_log | egrep -i 'www.google.com/bot.html|http://help.yahoo.com/help/us/ysearch/slurp' |  wc -l
415

It wasn’t a complete theory, but it was the best one available with the information I had, since everything else had been checked; the only thing that remained was the number of retransmits for that machine. All in all it was a victory, and this was so awesome that I’m now thinking of making a tool to do it in a more automated way.

I don’t know if this is the best way to find the Googlebot and Yahoo Slurp spiders, but it seems like a good place to start.
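Another angle (a sketch assuming the combined log format, where the user agent is the sixth quote-delimited field) is to rank every user agent for that hour and see which crawlers dominate:

grep '01/Mar/2017:10:' access_log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -20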

Using omconfig to add a RAID 1 device for a Perc 6/i Dell Raid Controller

So, I’ve been provisioning disks and suchlike recently.. this is how I did it on a Dell. Quite an easy thing to do!

omconfig storage controller action=createvdisk controller=0 raid=r1 size=max readpolicy=ara pdisk=0:0:2,0:0:3
Command successful!

In this case the two disks newly added were 0:0:2 and 0:0:3 on the SAS ‘bus’.
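The partitioning step itself isn’t shown below, but a sketch of it (assuming the new virtual disk appears as /dev/sdb; confirm with fdisk -l first) might look like this:

# Label the new vdisk and create a single primary partition spanning it
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext3 1MiB 100%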

An additional primary partition, sdb1, was created on this device, and a filesystem of the same kind (ext3) as the system disk was created;

 mkfs.ext3 /dev/sdb1

mke2fs 1.39 (29-May-2006)
....
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

You will naturally need to mount the partition and create an fstab entry to make this permanent;

mount /dev/sdb1 /mnt/backup

echo "/dev/sdb1               /mnt/backup             ext3    defaults        1 1" >> /etc/fstab

You may wish to add the above line to fstab manually instead; using echo for it isn’t a great idea in case you make a mistake ;-D

Cheers &
Best wishes,
Adam

Block all the IP’s from country

So, I wrote a nice little one-liner for one of our customers who wanted to blanket-ban Russia (even though I said it wasn’t a good idea, and only marginally effective at stopping attacks). It might help with spam or other stuff though, and anyway, the customer is always ‘wrong’; it’s up to us to make sure they do it wrongly right. ;-D

curl http://www.ipdeny.com/ipblocks/data/countries/ru.zone -o russia_ips_all.txt; cat russia_ips_all.txt | xargs -i echo /sbin/iptables -I INPUT -s {} -j DROP
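Note that, as written, the xargs … echo is effectively a dry run: it prints the iptables commands rather than executing them. Once you’re happy with the output, you could apply the rules by dropping the echo, something like:

# Actually insert the DROP rules (at your own risk)
xargs -i /sbin/iptables -I INPUT -s {} -j DROP < russia_ips_all.txt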

That is how I achieved it above; it bans all the IPs allocated to Russia. But if you aren’t very equal-opportunities :(, you can ban all kinds of other countries:

http://www.ipdeny.com/ipblocks/

Just take a look at that list and change the URL accordingly. It doesn’t matter what the filenames say (even if they say russia, just change the URL directly after curl). For instance:

curl http://www.ipdeny.com/ipblocks/data/countries/pl.zone -o ips_all.txt; cat ips_all.txt | xargs -i echo /sbin/iptables -I INPUT -s {} -j DROP

I was really quite happy with this little oneliner. 😀

Cheers &
Best wishes,
Adam

Help! I can’t login to my cloud-server even though I’ve reset my root password

The most common cause of this is that PermitRootLogin is set to no, although there might be other causes, like a badly broken sshd_config rather than just that one directive. The procedure for looking into it is pretty much the same regardless of the breakage that has occurred.

Here’s the full procedure:

1) Put server into rescue mode.
2) Log in to the cloud server on the SSH port; note that rescue mode gives you a new temporary root password, allowing you to reset the password for SSH on the ‘original disk’.
3) Once logged in, mount the /dev/xvdb device. This may be /dev/xvdb1 or /dev/xvdb2 but is usually /dev/xvdb1, then chroot into it (change root to the ‘original disk’).

# Mount old disk
mount /dev/xvdb1 /mnt

# Change to the ‘old disk’
chroot /mnt

# Set the new password for root on the old disk:

passwd
# enter the new password when prompted

and specifically ensure that /etc/ssh/sshd_config has this line:

PermitRootLogin no

changed to:

PermitRootLogin yes
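If you prefer to make that edit non-interactively while still chrooted, a one-liner along these lines should do it (assuming the directive is present and uncommented, as shown above):

sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config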

Your developer or sysadmin won’t be able to log in until you reset the root password here, and if you do not know a username you can su to root from, it is absolutely critical to perform this work, otherwise you won’t be able to access the server.

Also, once you have allowed root login and changed the password to something you recognise, you will be able to exit rescue mode through the control panel and log in to the machine as normal.

For more detail about how to do this (although pretty much all the steps are here), please see:

https://support.rackspace.com/how-to/rackspace-cloud-essentials-rescue-mode-on-linux-cloud-servers/

I hope this helps you folks out some,