TCPDUMP command packet capture Usage

So, it’s been a little while since my last update. We’ve been quite busy recently, but for those interested in learning more about tcpdump and actually capturing packets, here are some useful examples.

List interfaces that can be captured with tcpdump

tcpdump -D

Listen on Interface eth0

tcpdump -i eth0

Listen to XenServer domain 16 on the public network

tcpdump -i vif16.0 

Listen on any interface

tcpdump -i any

Super duper High verbosity tcpdump

tcpdump -vvvv -i eth0 

Be verbose and print data of each packet in both hex and ASCII

tcpdump -v -X -i eth0

Be less verbose

tcpdump -q 

Limit the capture of packets to 100

tcpdump -c 100 -i eth0 

Display IP addresses and port numbers instead of domain and service names when capturing packets (note: on some systems you need to specify -nn to display port numbers):

tcpdump -n

Capture any packets where the destination host is 192.168.1.1. Display IP addresses and port numbers:

tcpdump -n dst host 192.168.1.1

Capture any packets where the source host is 192.168.1.1. Display IP addresses and port numbers:

tcpdump -n src host 192.168.1.1
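If you want to keep a capture for later analysis (in Wireshark, say) rather than watching it scroll by, the standard -w and -r flags do the job; the filename and filter below are just an example:

# write a filtered capture to disk, then read it back later
tcpdump -n -i eth0 -w capture.pcap 'host 192.168.1.1 and port 80'
tcpdump -n -r capture.pcap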

http://www.rationallyparanoid.com/articles/tcpdump.html

Creating 200 cloud servers using openstack Nova

Had a question from a customer today on how to do this.
It is possible to create a large number of cloud servers very quickly, with something like:

#!/bin/sh
for i in `seq 1 200`;
do
nova boot --image someimageidhere --flavor '2GB Standard Instance' "Server-$i"
sleep 5
done

So simple, but it could build out many servers (a small farm) in just an hour or so :D

Update

So my colleague tells me that backticks are bad, i.e. deprecated. Which they are, and I expected to hear this from someone, as my knowledge is a little old school. Here is what my friend recommends.

for i in {1..200}; do
nova boot --image someimageidhere --flavor '2GB Standard Instance' "Server-$i"
sleep 5
done
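If you want to keep an eye on the builds as they come up, something along these lines works with the same nova client (the status strings grepped for are just an illustration):

# wait until nothing is still in BUILD, reporting progress as we go
while nova list | grep -q BUILD; do
  echo "$(nova list | grep -c ACTIVE) servers ACTIVE so far..."
  sleep 30
done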

Using CBS boot from volume with Rackspace HEAT Orchestration

So, a customer reached out to us today concerning ways to use HEAT to build servers that boot from CBS. They were starting from something like this:

  blk_server:
    type: "Rackspace::Cloud::Server"
    properties:
      flavor: 15 GB Memory v1
      image: { get_param: image }
      name: "blk"
      user_data:
...

The problem is that using this format they get an error:

ERROR: Image Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM) requires 20 GB minimum disk space. Flavor 15 GB Memory v1 has only 0 GB.

This is happening because the Memory flavors don’t use the hypervisor’s local instance store; they boot from Cloud Block Storage instead, hence the ‘0 GB’.
Thanks to my friend Aaron I dug out the documentation for building boot-from-volume (CBS) server flavors. Here is how it would be done.

parameters:
  nodesize:
    type: number
    label: Nodes Disk Size
    description: Size of each node's primary disk.
    default: 50
    constraints:
      - range: { min: 50, max: 1024 }
        description: Must be between 50 and 1024 GB.

  nodeimage:
    label: Operating system
    description: |
      Server image. Defaults to 'CentOS 7 (PVHVM)'.
    type: string
    default: CentOS 7 (PVHVM)
    constraints:
      - allowed_values:
          - CentOS 7 (PVHVM)
          - Red Hat Enterprise Linux 7 (PVHVM)
        description: Must be a supported operating system.

  
  elk_server:
    type: "Rackspace::Cloud::Server"
    properties:
      flavor: 15 GB Memory v1
      block_device_mapping: [{ device_name: "vda", volume_id: { get_resource: cinder_volume }, delete_on_termination: "true" }]
      name: "elk"
      user_data:

  cinder_volume:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: nodesize }
      image: { get_param: nodeimage }
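Once the template is saved, the stack can be launched with the heat client in the usual way; the stack name and file name below are just examples:

heat stack-create elk-bfv -f boot-from-volume.yaml -P nodesize=50 -P "nodeimage=CentOS 7 (PVHVM)"
heat stack-list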

Fixing driveclient filling hard disk

So there was an issue with the Rackspace Cloud Backup agent on an incremental release, where on some clients the disk would fill. Here is how to fix it by updating to the version released on the 7th of March 2016.

 apt-get update

 apt-get install --reinstall --assume-yes driveclient
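To double check the agent actually picked up the newer build, and that the disk has stopped filling, something like this does the trick (the version output will obviously vary):

dpkg -l driveclient
df -h /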

Sending test email at commandline to test mailserver

I was migrating some mail today to a new mailserver, and I needed to test mail quickly.

So I ran this on an external server:

echo "Hello world" | mail -s "meh" [email protected]

I then ran a tail -f /var/log/mail.log on my local mail server:

tail -f /var/log/mail.log

Mar 10 17:42:22 mymail-7-wheezy postfix/cleanup[14592]: 9EF95D42A5: message-id=<[email protected]>
Mar 10 17:42:22 mymail-7-wheezy postfix/qmgr[4691]: 9EF95D42A5: from=, size=1097, nrcpt=1 (queue active)
Mar 10 17:42:22 mymail-7-wheezy postfix/virtual[14604]: 9EF95D42A5: to=, relay=virtual, delay=0.01, delays=0/0/0/0, dsn=2.0.0, status=sent (delivered to maildir)
Mar 10 17:42:22 mymail-7-wheezy postfix/qmgr[4691]: 9EF95D42A5: removed
Mar 10 17:42:22 mymail-7-wheezy amavis[14463]: (14463-01) Passed CLEAN {RelayedOpenRelay}, [37.1.1.1]:46386 [37.1.1.1]  -> , Queue-ID: 347DDD429D, Message-ID: <[email protected]>, mail_id: Y9fimRqJrWtV, Hits: 1.693, size: 650, queued_as: 9EF95D42A5, 1448 ms
Mar 10 17:42:22 mymail-7-wheezy postfix/smtp[14596]: 347DDD429D: to=, relay=127.0.0.1[127.0.0.1]:10024, delay=1.5, delays=0.01/0/0.01/1.4, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 9EF95D42A5)
Mar 10 17:42:22 mymail-7-wheezy postfix/qmgr[4691]: 347DDD429D: removed

As we can see, email is working rather nicely after my DNS updates!

Another good way to test this:

Check port is open over some time

for x in {0..50};do nmap -sT -p 22  134.213.31.84 | grep ssh;done

This works with or without the sleep:

for x in {0..50};do nmap -sT -p 22  134.213.31.84 | grep ssh; sleep 1; done

Thanks to my colleague Marcin for this.
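If nmap isn’t to hand, a rough bash-only equivalent using /dev/tcp does much the same job (a quick sketch against the same host and port):

for x in {0..50}; do
  if timeout 2 bash -c 'cat < /dev/null > /dev/tcp/134.213.31.84/22'; then
    echo "$(date +%T) port 22 open"
  else
    echo "$(date +%T) port 22 closed"
  fi
  sleep 1
done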

Finding Stuff quick and dirty way

Hey. So my good friend, who is a support engineer, was asking me how he could find a mail log that wasn’t in the traditional location, and he was scratching his head.
So I put this together (which, by the way, is really bad, not in a harmful way, it could just be more elegant). But since he is still learning, this seemed like a good time to introduce him to xargs.

find / | grep mail | grep log | xargs -i ls -al {}

Nice and simple though, and pretty much straight to the point, if the grep pipes are forgiven (and I wouldn’t blame you if they were not 🙂 ).
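For what it’s worth, a slightly tidier version of the same idea lets find do the name matching instead of the grep pipes:

find / -type f -path '*mail*' -name '*log*' -exec ls -al {} \; 2>/dev/null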

Upgrading Xen Tools to 6.0 and/or 6.2

# Upgrade to 6.0.2 first, then upgrade to 6.2.0

Xen Tools 6.0.2 Download

http://63773473543190a035ec-a897bd33ba42b6c03ac54566871e97ca.r54.cf2.rackcdn.com/xs-tools-6.0.2-58937.zip

Xen Tools 6.2.0 Download

 
http://8d268c176171c62fbd4b-7084e0c7b53cce27e6cc2142114e456e.r30.cf1.rackcdn.com/xstools-6.2.zip

# To upgrade straight to 6.2, use the above guide but replace the download link with this one.

http://8d268c176171c62fbd4b-7084e0c7b53cce27e6cc2142114e456e.r30.cf1.rackcdn.com/xstools-6.2.zip

Thanks to my colleague Aaron for finding these links.

General Instructions:
https://support.rackspace.com/how-to/installing-xenserver-tools-on-next-generation-windows-cloud-servers/

A new way of Deploying CBS for Large Clusters, using the TOR method 5600% to 12800% faster

So, I was thinking about the problem with cloning CBS volumes: what if you want to make 64 or more copies of a CBS disk in a short space of time? What happens is that they are built sequentially and queued; they are copied one at a time. So when a Windows customer approached us, a colleague reached out to me to see if there was any other way of doing this through snapshots or clones. In fact there was. Cinder is to be considered a fox, fast and cunning and unseen, but it is trapped inside a cage called Glance.

This is about overcoming those limitations. Introducing TOR-CBS:
Parallel CBS building with OpenStack Cinder

This is all about making the best of the infrastructure that is there. Cinder is massively distributed, so building 64 parallel copies is achievable at a much higher aggregate bandwidth, and for those reasons it is a ‘tor like’ system. A friend of mine compared it to cellular division. There is a kind of organic nature to the method, as all children are used as new parents for copying. This explains the efficiency and speed of the system: the more servers you want to build, the more time you save.

When this actually worked for the first time I had to take a step back. It really meant that building 64 CBS would take an hour, and building 128 of them would take 1 hour and 10 minutes. Damn, that’s fast!

The idea is simple: clone 1 disk to create a second disk. Clone both the first and the second disk to make four disks. Clone the four to make 8 in total. Clone 8 to make 16 in total. Then 32, 64, 128, 256, 512, 1024, 2048. Your cluster can double in size in roughly 10 minutes a go, provided the Cinder service has the infrastructure in place. This appears to be a new, potentially revolutionary way of building out in the cloud.
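To put rough numbers on the doubling, here’s a purely illustrative sketch; it just prints how the volume count grows per roughly-10-minute step, starting from the 2 initial clones:

# illustrative only: volume count doubles each TOR step
total=2
for step in $(seq 1 10); do
  total=$((total * 2))
  echo "after step $step (~$((step * 10)) minutes): $total volumes"
done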

See the diagram below for a proper illustration and explanation.

[Diagram: rapiddeploy-tor-cbs]

As you can see, by the 9th or 10th step this is tens of thousands of percent more efficient than one-for-one copying! The reason is that a CBS clone is a one-to-one copy, and even if you ask for 50 volumes from a single source volume ID, they are built incrementally, one by one.

My system works the same way, except it uses all of the disks already built in the previous n steps as sources, giving an exponential amplification of efficiency per step; in other words, ‘something for nothing’. It also properly utilizes the distributed nature of CBS and very many network ports, instead of relying on the single port of the source volume, which is ultimately the bottleneck that restricts spinning up large cloud solutions.

I am absolutely delighted. IT WORKS!!

The Code

build-cbs.sh

#!/bin/bash

USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
API_ENDPOINT="https://lon.blockstorage.api.rackspacecloud.com/v1/$ACCOUNT_NUMBER/volumes"
MASTER_CBS_VOL_ID="MY-MASTER-VOLUME-ID-HERE"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "Using MASTER_CBS_VOL_ID $MASTER_CBS_VOL_ID.."
sleep 2

# Populate CBS
# No longer using $1 and $2 as unnecessary now we have cbs-fork-step
for i in `seq 1 2`;
do

echo "Generating CBS Clone #$i"
curl -s -vvvv  \
-X POST "$API_ENDPOINT" \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-H "Content-Type: application/json" -d '{"volume": {"source_volid": "'$MASTER_CBS_VOL_ID'", "size": 50, "display_name": "win-'$i'", "volume_type": "SSD"}}'  | jq .volume.id | tr -d '"' >> cbs.created.newstep
done

echo "Giving CBS 15 minute grace time for 50 CBS clone"

z=0
spin() {
   local -a marks=( '/' '-' '\' '|' )
   while [[ $z -lt 500 ]]; do
     printf '%s\r' "${marks[i++ % ${#marks[@]}]}"
     sleep 1
     let 'z++'
   done
 }

spin

echo "Listing all CBS Volume ID's created"
cat cbs.created.newstep
# Ensure all of the initial created cbs end up in the master file
cat cbs.created.newstep >> cbs.created.all

echo "Initial Copy completed"

So the first bit is simple: the above uses the OpenStack Cinder API endpoint to create two copies of the master. The initial process takes a bit longer, but if you’re building 64 to an effectively unlimited number of servers, this is going to be the most efficient and fastest way to do it. The thing is, we want to recursively build CBS in steps.
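One refinement worth mentioning: rather than a fixed sleep, the same v1 volumes endpoint can be polled until each new clone reports ‘available’. A sketch, reusing the $API_ENDPOINT, $TOKEN and $ACCOUNT_NUMBER variables from above:

# wait until every volume listed in cbs.created.newstep is 'available'
for vol in $(cat cbs.created.newstep); do
  while true; do
    status=$(curl -s "$API_ENDPOINT/$vol" \
      -H "X-Auth-Token: $TOKEN" \
      -H "X-Project-Id: $ACCOUNT_NUMBER" \
      -H "Accept: application/json" | jq -r .volume.status)
    [ "$status" = "available" ] && break
    echo "Volume $vol is still $status, waiting.."
    sleep 15
  done
done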

Enter cbs-fork-step.sh

cbs-fork-step.sh

#!/bin/bash

USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
API_ENDPOINT="https://lon.blockstorage.api.rackspacecloud.com/v1/$ACCOUNT_NUMBER/volumes"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

z=0
spin() {
   local -a marks=( '/' '-' '\' '|' )
   while [[ $z -lt 400 ]]; do
     printf '%s\r' "${marks[i++ % ${#marks[@]}]}"
     sleep 1
     let 'z++'
   done
 }

count=$1

#count=65;
while read n; do
echo ""
# Populate CBS TOR STEPPING

echo "Generating TOR CBS Clone $count::$n"
date
curl -s  \
-X POST "$API_ENDPOINT" \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-H "Content-Type: application/json" -d '{"volume": {"source_volid": "'$n'", "size": 50, "display_name": "win-'$count'", "volume_type": "SSD"}}' | jq .volume.id | tr -d '"' >> cbs.created.newstep


((count=count+1))

done < cbs.created.all

cat cbs.created.newstep > cbs.created.all
echo "Waiting 8 minutes for Clone cycle to complete.."
spin

As you can see from the above, the master volume ID disappears; we’re now using the two CBS volume IDs that were initially copied by build-cbs.sh. From now on, we iterate while reading the n lines of the cbs.created.newstep file. For redundancy, cbs.created.all is used as well. The problem is that this is a fixed iterative loop; what about controlling how many times it runs?

Also, we obviously need to keep count and track of each CBS, so we call them win-'$count'; the single quotes terminate and reopen the JSON string so the shell can expand the variable inside the double-quoted JSON. This allows each CBS to get the correct logical name based on the sequence, but in order for this to work properly we need to put it all together in a master.sh file, the master forker, which adds an extra loop traversal to the design.
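The quoting trick in that -d payload is worth a second look: the JSON is written inside single quotes, and each variable is spliced in by closing the single quote, letting the shell expand it, then reopening. A minimal illustration:

count=7
echo '{"volume": {"display_name": "win-'$count'"}}'
# prints: {"volume": {"display_name": "win-7"}}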

Putting it all together

master.sh

# Master Controller file

# Number of copy steps: minimum 2, maximum 9
# Each step doubles the number of volumes in play: 2 after build-cbs.sh,
# then 4, 8, 16, 32, 64, 128, 256 ... as the steps complete
# The steps variable determines how many identical Tor-copies of the CBS you wish to make
steps=6

rm cbs.created.all
rm cbs.created.newstep

touch cbs.created.all
touch cbs.created.newstep

figlet TOR CBS
echo 'By Adam Bull, Rackspace UK'
sleep 2

echo "This software is alpha"
sleep 2

echo "Initiating initial Copy using $MASTER_CBS_VOLUME_ID"
# Builds first copy
./build-cbs.sh

count=4
for i in `seq 1 $steps`; do
let 'count--'
./cbs-fork-step.sh $count
let 'count = (count * 2)'
done

echo "Attaching CBS and Building Nova Compute.."
./build-nova.sh

This code is still alpha, but it works really nicely. The output of the script looks like:

# ./master.sh
 _____ ___  ____     ____ ____ ____
|_   _/ _ \|  _ \   / ___| __ ) ___|
  | || | | | |_) | | |   |  _ \___ \
  | || |_| |  _ <  | |___| |_) |__) |
  |_| \___/|_| \_\  \____|____/____/

By Adam Bull, Rackspace UK
This software is alpha
Initiating initial Copy using
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   5013    114  0:00:01  0:00:01 --:--:--  5017

Generating TOR CBS Clone 3::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:25:26 UTC 2016

Generating TOR CBS Clone 4::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:25:27 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4942    113  0:00:01  0:00:01 --:--:--  4948

Generating TOR CBS Clone 5::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:32:10 UTC 2016

Generating TOR CBS Clone 6::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:32:11 UTC 2016

Generating TOR CBS Clone 7::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:32:12 UTC 2016

Generating TOR CBS Clone 8::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:32:12 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   5186    118 --:--:-- --:--:-- --:--:--  5183

Generating TOR CBS Clone 9::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:38:56 UTC 2016

Generating TOR CBS Clone 10::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:38:56 UTC 2016

Generating TOR CBS Clone 11::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:38:57 UTC 2016

Generating TOR CBS Clone 12::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:38:58 UTC 2016

Generating TOR CBS Clone 13::42145009-33a7-4fc4-9865-da7a82e943c1
Wed Mar  2 12:38:58 UTC 2016

Generating TOR CBS Clone 14::58db8ae2-2e0e-4629-aad6-5c228eb4b342
Wed Mar  2 12:38:59 UTC 2016

Generating TOR CBS Clone 15::d0bf36cb-6dd5-4ed3-8444-0e1d61dba865
Wed Mar  2 12:39:00 UTC 2016

Generating TOR CBS Clone 16::459ba327-de60-4bc1-a6ad-200ab1a79475
Wed Mar  2 12:39:00 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4953    113  0:00:01  0:00:01 --:--:--  4958

Generating TOR CBS Clone 17::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:45:44 UTC 2016

Generating TOR CBS Clone 18::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:45:45 UTC 2016

Generating TOR CBS Clone 19::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:45:45 UTC 2016

Generating TOR CBS Clone 20::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:45:46 UTC 2016

Generating TOR CBS Clone 21::42145009-33a7-4fc4-9865-da7a82e943c1
Wed Mar  2 12:45:46 UTC 2016

Generating TOR CBS Clone 22::58db8ae2-2e0e-4629-aad6-5c228eb4b342
Wed Mar  2 12:45:47 UTC 2016

Generating TOR CBS Clone 23::d0bf36cb-6dd5-4ed3-8444-0e1d61dba865
Wed Mar  2 12:45:48 UTC 2016

Generating TOR CBS Clone 24::459ba327-de60-4bc1-a6ad-200ab1a79475
Wed Mar  2 12:45:48 UTC 2016

Generating TOR CBS Clone 25::9b10b078-c82d-48cd-953e-e99d5e90774a
Wed Mar  2 12:45:49 UTC 2016

Generating TOR CBS Clone 26::0692c7dd-6db0-43e6-837d-8cc82ce23c78
Wed Mar  2 12:45:50 UTC 2016

Generating TOR CBS Clone 27::f2c4a89e-fc37-408a-b079-f405e150fa96
Wed Mar  2 12:45:50 UTC 2016

Generating TOR CBS Clone 28::5077f4d8-e5e1-42b6-af58-26a0b55ff640
Wed Mar  2 12:45:51 UTC 2016

Generating TOR CBS Clone 29::f18ec1c3-1698-4985-bfb9-28604bbdf70b
Wed Mar  2 12:45:52 UTC 2016

Generating TOR CBS Clone 30::fd96c293-46e5-49e4-85d5-5181d6984525
Wed Mar  2 12:45:52 UTC 2016

Generating TOR CBS Clone 31::9ea40b0d-fb60-4822-a538-3b9d967794a2
Wed Mar  2 12:45:53 UTC 2016

Generating TOR CBS Clone 32::ea7e2c10-d8ce-4f22-b8b5-241b81dff08c
Wed Mar  2 12:45:54 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
/

Resolving Broken or Crashed Tables in Mariadb (MySQL)

So we had someone with a lot of errors like this in MariaDB:

160225  6:24:49 [Note] Server socket created on IP: '0.0.0.0'.
160225  6:24:49 [Note] Event Scheduler: Loaded 0 events
160225  6:24:49 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.44-MariaDB'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MariaDB Server
160225  6:24:49 [ERROR] mysqld: Table './enovie_lad/wp_options' is marked as crashed and should be repaired
160225  6:24:49 [Warning] Checking table:   './enovie_lad/wp_options'
160225  6:28:18 [ERROR] mysqld: Table './enovie_lad/wp_lad_course_assign' is marked as crashed and should be repaired
160225  6:28:18 [Warning] Checking table:   './enovie_lad/wp_lad_course_assign'
160225  6:28:18 [ERROR] mysqld: Table './enovie_lad/wp_lad_course_attendence' is marked as crashed and should be repaired
160225  6:28:18 [Warning] Checking table:   './enovie_lad/wp_lad_course_attendence'
160225  6:28:18 [ERROR] mysqld: Table './enovie_lad/wp_lad_userlog' is marked as crashed and should be repaired
160225  6:28:18 [Warning] Checking table:   './enovie_lad/wp_lad_userlog'
160227 02:31:55 mysqld_safe Number of processes running now: 0
160227 02:31:55 mysqld_safe mysqld restarted
160227  2:31:55 [Note] /usr/libexec/mysqld (mysqld 5.5.44-MariaDB) starting as process 17264 ...

You could fix this using phpMyAdmin’s repair function (or just ask Google).

Or alternatively you could use mysqlcheck to repair the database(s).

./client/mysqlcheck [OPTIONS] database [tables]
./client/mysqlcheck [OPTIONS] --databases DB1 [DB2 DB3...]

OR

./client/mysqlcheck [OPTIONS] --all-databases

Inside the options you need to define -r, for repair. So if you have a database called db1 and a table called wp_lad_userlog, you would run something like:

./client/mysqlcheck -r db1 wp_lad_userlog

For all databases to be repaired (take care):

 
./client/mysqlcheck -r --all-databases
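The same thing can also be done from inside the MariaDB client with REPAIR TABLE, which is handy when only one or two tables are marked as crashed (table names here follow the example above):

mysql -u root -p

USE db1;
CHECK TABLE wp_lad_userlog;
REPAIR TABLE wp_lad_userlog;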