Creating a Generative Adversarial Network and Visions of the Future


The people in these generated images do not exist. They are synthetic creations of the Generative Adversarial Network.

As many will know, over the last few years I’ve been engaged in research into blockchain, particularly Ravencoin’s X16R, X16RV2 and KAWPOW algorithms, as well as the many blockchain explorers/trackers/scanners that I have written.

Recently I’ve become a little bit obsessed with GANs, a class of machine learning frameworks invented by Ian Goodfellow and his colleagues in 2014, and developed further by researchers at Nvidia.[1] Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent’s gain is another agent’s loss).

Stylegan 1024px early model with --size 256 parameter

Naturally, given that modern GPU power has traditionally been used for physics simulations, gaming, computational problems and things like cryptocurrency mining, it was a novel idea that such technology could be used to create new, novel data, or be a fundamental resource of great power and adversarial adaptation in the playing of a game.

“I can’t believe it. The computer beaten by flesh & blood.” ~Doctor Pulaski (Peak Performance) #StarTrek #Strategema

Naturally, the machine lacks many inherent qualities of a human grandmaster knowledgeable in practice and study, but under the right circumstances or rule-set the machine, in this case Data, is able to use its advantages of great speed, raw compute power, stamina and perfect reproduction of a strategy to beat the grandmaster at “Strategema”. This, surely, is practice over study. It shows that machines with the right knowledge have an advantage, and that human beings with the right machines have great power, with their novel advantage, to direct them.

I was very impressed by deepfakes and by the speed at which deep learning, machine learning and related technology has grown. Although I was generally disinterested in early AI, such as “ALICE” and other polymorphic approaches to computer programming, I was particularly captivated by the deep learning of Nvidia’s GAN. It appeared that, through the correct processing of the image boundaries of a very large amount of “data”, a neural network, much like Data’s in Star Trek, really was capable of producing extremely novel applications in science and technology. For example, there is no reason why a similar approach could not be used to improve designs, or even build an entire product from start to finish without any human intervention. It would seem, then, that the cosmic idea of a “universal constructor”, popularised by the game “Deus Ex”, is not such a strange idea; certainly not when it is possible to apply the same methodology used for face mixing and latent tracing in Nvidia’s GAN to chemical structures. Theoretically, such a machine could conceive an indefinite number of combinations, but also discriminatingly qualify them in a similar way to a human being. An impressive feat.

Stylegan 1024px early model with --size 512 parameter.
The “main” sample.png file generated by the GAN modeller. The images above are used for “mixing” (see below).

In its early stages, the adversarial network produces very basic images that do not yet defeat human perception (a viewer can still tell whether the image is real or not).

Predicting the Beginning and the End

To those that worry about technology of the future destroying human ingenuity and practicality: I think the transformative power and capability of GANs, and technology like them, should allow us to create self-improving machines that will soon become our guardians of the Earth and the extended galaxy. A far-fetched idea to some, but this technology makes it seem inevitable to me.

Stylegan 1024px latest model with --size 1024 parameter.
Although the trained GAN model has a few issues in some of the images, its production is nearly flawless.

It may not happen very soon, but from what I can already see and imagine, the possibilities for this technology are truly endless; it may very well be used for exploring the universe from home. This technology is so simple at present that more complex forms of its application could theoretically create entire universes, and with sufficient compute and energy it might be possible to discover many things about our universe without actually studying them. Simply provide a few simple rules, and the rest can be generated. Theoretically, anyway. Perhaps, then, we might be nearing a real explanation for Hawking’s paradox; perhaps some thermodynamic problems, such as the total energy available at the beginning of the universe, could be approached in a similar way through GAN-type neural compute, using data from the present “middle” and latent images of very distant stars from the “beginning”, or past (it takes a long time for light to reach the planet Earth, so most cosmic light is ancient). Using this data, a new kind of fundamental GAN might not just shape engineering, novel artistic insight or design, or some chemistry simulation; it may indeed allow us to predict nearly all things, and to create a new type of computer system quite different from the one we are familiar with.

When we mix the trained GAN model’s generations we get new sets of variations:

Stylegan 1024px early model with --size 1024 parameter. This image shows particularly well the adaptive nature of the Generative Adversarial Network, and how the deep learning algorithm effectively “learns” faces and can “mix” any attributes using the data from its previous training. To me it is very impressive.

A New Computer System

This new computer system would, theoretically, make efficiencies everywhere we do not: in adequate and measurable metrics, data storage, redundancy, and things like satellite imagery and weather reporting. The neural net device should theoretically be linkable to human consciousness, and to a greater system, creating a new type of VR highway, which I predict will one day exist, optimising many frequent challenges of modern society that did not exist until the abundance of data came along some 30-40 years ago. GAN is a result of that abundance of data, and perhaps of certain fundamental societal and technological evolutions in civilisation. Technology like GAN and blockchain might just be an inevitable byproduct, or end product, of having more data than we can humanly handle. Finding a way to use the data we have more efficiently, and to track it properly with automation (such as with cloud compute), is key. Really, the secret to understanding the future of technology lies in understanding the derivation of technology, society, and art, and the manner in which humanity interacts with them over time. This reveals how science and art, and the society that practices them, must change: rather than change being applied to society, society very much applies itself to the change.

Creating the Neural Network on Nvidia/CUDA

Creating the network is simple enough to do, and it can be done without a Docker container; I’d recommend an Ubuntu 20.04 LTS system. You can also use a Docker container, but an Ubuntu 20.04 LTS system with the reference Nvidia drivers and a venv environment should be sufficient for our needs. It’s worth noting that I had some difficulty installing torch v1 on my Linux system simply because I was running Python 3.8 (older torch v1 wheels don’t support it); it should work fine in a venv with Python 3.7 or similar. Because this configuration can break a lot of things, it is highly recommended to use Docker, venv, or both.
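If you do need Python 3.7 on Ubuntu 20.04, one route that should work (assuming you’re happy adding the third-party deadsnakes PPA) is to install 3.7 alongside the system Python and build the venv from it:

# Ubuntu 20.04 ships Python 3.8; the deadsnakes PPA provides 3.7
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.7 python3.7-venv

# build the venv on 3.7 specifically, then activate it
python3.7 -m venv venv
source venv/bin/activate
python --version   # should report Python 3.7.x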

Installing and Preparing the Datasets

# install pip and virtualenv
sudo apt-get install python3-pip
pip3 install virtualenv

# do not do this as root; create a user for it [or use your regular user]
adduser someuser

# create and activate the venv (activation must be done where the venv was created)
virtualenv venv -p python3
source venv/bin/activate

# clone my repo
git clone https://github.com/ravenlandpush/sbgan

# cd to the repo and download the CelebA dataset
# (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)
cd sbgan
python helper.py

# prepare the JPEG data (--out <dataset output dir> <path to search subdirs for images>)
python prepare_data.py --out data .

# install the necessary dependencies (note that torchvision 2 should be OK;
# I use it fine here, so you can skip pinning 1.x for this example)
pip install torch pillow tqdm torchvision lmdb

# CUDA package names, for thoroughness; these shouldn't be necessary if you
# have the third-party nvidia drivers installed by Ubuntu 20.04 LTS.
# DANGEROUS step if you don't know what you're doing:
# apt-get install nvidia-cuda-dev nvidia-cuda-toolkit nvidia-cuda-toolkit-gcc

# start training the deep learning Generative Adversarial Network on your dataset
python3 train.py --mixing /home/adam/GAN/sbgan/data
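
# optional sanity check before committing to a long run: confirm torch
# imports and can actually see a CUDA GPU (prints version and True/False)
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"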

Congratulations! You’ve got this far and your GAN is now training. You’ll notice, though, that it’s probably running quite slowly. For really decent performance you’ll want a number of GPUs. I’d recommend running on Amazon were it not so expensive; you can get multiple-GPU systems for between $8 and $15 an hour, which is relatively not bad considering Tesla P100 GPUs can set you back thousands apiece. Those that mean business, and the many that work on GANs more or less full time, seem to be using the DGX-1, which has 8 GPUs built in and is very small. Unfortunately it costs about $129,000. Although this is still quite a specialist field, it reminds me of where big data was 15-20 years ago. The same could be said for enterprise Linux.

Things do change. The last step now, after many weeks, would be to run generation against the models you’re producing.

A sample is saved every 100 iterations in the sample directory.
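If you want to keep an eye on progress without babysitting the terminal, something like watch does the job (assuming the default ./sample output directory):

# refresh a listing of the newest samples every 60 seconds
watch -n 60 'ls -lt sample | head -n 6'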

Generating from Modelling

Once your GAN has been “trained”, it should be possible to generate some really amazing mixes of images. I was taken aback by how effective some modern software has become at identifying things; even when the software does not know what something is, detecting the boundary and “putting things in the right place” is all that matters to us. Very cool.

# Generate from the trained models created in the checkpoint folder (happens as training goes by)
# Use size 8, 16, 32, 64, 128, 256, 512, 1024, etc.,
# depending how far along the training is

python3 generate.py /home/adam/GAN/sbgan/checkpoint/train_step-4.model --size 64 --n_row 8 --n_col 8

The checkpoints for the GAN are generated in ./checkpoint; this allows you to retrain from any specific point, and to compare or merge certain image sets later on if you wish to experiment with greater complexity.
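In principle that also means you can resume an interrupted run from the latest checkpoint. The exact flag depends on the train.py you’re running, so treat this as a sketch and check python3 train.py --help first; --ckpt here is my assumption, not a confirmed option:

# hypothetical resume invocation -- verify the flag name in your copy of train.py
python3 train.py --mixing --ckpt checkpoint/train_step-4.model /home/adam/GAN/sbgan/data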

The Final Results

I really love GANs now 😀

Just for fun, I wrote this script, which automatically pulls data in and out of a Docker container, generating a new batch of faces each pass.

#!/bin/bash
# this script indefinitely makes a new face every x moments
# deep fake y'all, nvidia cuda stylee

length=1000000
for (( i = 0; i <= length; i += 4 )); do
    # four consecutive seeds per batch
    j=$((i + 1))
    k=$((j + 1))
    l=$((k + 1))
    echo "Processing face $i,$j,$k,$l"

    # run stylegan2-ada generate.py inside the container, mounting the
    # current directory as /scratch so the output lands on the host
    docker run --gpus all -it --rm -v "$(pwd)":/scratch --user "$(id -u):$(id -g)" stylegan2ada:latest bash -c \
        "(cd /scratch && DNNLIB_CACHE_DIR=/scratch/.cache python3 generate.py --trunc=1 --seeds=$i,$j,$k,$l --outdir=out --network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metfaces.pkl)" > /dev/null

    sleep 10
done

Simple but cool. As you can see, this one uses the stylegan2-ada pretrained metfaces .pkl model from nvlabs. Not bad for a quick poke around at a new subject.
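To eyeball a batch at once, you can tile the outputs into a contact sheet with ImageMagick. This assumes generate.py writes files named seedNNNN.png into out/ (the usual stylegan2-ada behaviour) and that montage is installed:

# tile the generated faces into a single grid image
montage out/seed*.png -tile 4x -geometry +2+2 contact-sheet.png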

Compiling Ravencoin sgminer branch for x16r algorithm on ethOS

So, I was asked by a friend to compile some software for him using: https://github.com/aceneun/sgminer-gm-x16r/releases

I have to say that user’s branch was exactly that: ‘aceneun’.

In the end I thought I’d go to the source, so I reached out to the Ravencoin developer of this branch:
https://github.com/brian112358/sgminer-x16r

Compiling on ethOS is a tough time because the environment is all a bit weird, and it required extra dependencies; do an apt-cache search for the extra dev libs. Namely:

sudo apt-get install ocl-icd-opencl-dev

That package pulls in a lot of dependencies, and on ethOS the command is:

sudo apt-get-ubuntu install ocl-icd-opencl-dev

Upgrading all those packages didn’t seem to break anything on my ethOS, anyway. Now on to compiling Brian’s branch of sgminer for the X16R algorithm.

Download the repo:

git clone https://github.com/brian112358/sgminer-x16r

Prepare for compilation: check out the dev branch, update the submodules, and run autogen.sh:

cd sgminer-x16r
git checkout dev

git submodule update --init --recursive
./autogen.sh

Then we configure the build:

CFLAGS="-Os -Wall -march=native -I/opt/AMDAPPSDK-3.0/include" LDFLAGS="-L/opt/amdgpu-pro/lib/x86_64-linux-gnu" ./configure --disable-git-version --disable-adl --prefix=/opt/sgminer-5.5.5

Finally, run make to compile the software:

make

You don’t have to install it, but you can with:

make install
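Before pointing it at a pool, it’s worth checking that the fresh binary can actually enumerate your GPUs. sgminer inherits cgminer’s -n flag for this; if your build differs, check ./sgminer --help:

# display the number of detected GPUs and OpenCL platform info, then exit
./sgminer -n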

Make a proper file for our config:

vi mylauncher.sh

#!/bin/bash

export GPU_FORCE_64BIT_PTR=1
export GPU_MAX_HEAP_SIZE=100
export GPU_USE_SYNC_OBJECTS=1
export GPU_MAX_ALLOC_PERCENT=100
export GPU_SINGLE_ALLOC_PERCENT=100



ADDRESS="RBFthisiswhereyourravencoinaddressgoes"
POOL="stratum+tcp://miningpanda.site:3666"
PASSWORD="x"
INTENSITY="19"

./sgminer -k x16r -o $POOL -u $ADDRESS -p $PASSWORD -I $INTENSITY

chmod +x mylauncher.sh
./mylauncher.sh

A very simple task, and a pleasure as an avid proponent of Ravencoin!


Diagnosing a sick website getting 500,000 to 1 million views a day

So today I had a customer that had some woes. I don’t even think they were aware they were getting 504s, but I had to come up with some novel ways to:

A) show them where the failure happened
B) show them the pages that failed to load (i.e. got a 504 gateway timeout)
C) show them the number of requests, and how they changed between the day of the outage and a ‘regular, normal’ day
D) show them the specific types of pages which were failing, to give a better idea of where the failure was

In this case a lot of the failures were .html pages, so it could be that a cache was being triggered too much, or that their application was really inefficient; in many cases they were catalog search requests, which no doubt would scratch the DB pretty nastily if the database or the query wasn’t refactored or designed with scalability in mind.

With all that in mind, I explained to the customer that even the most worrisome (or woesome) of applications and frameworks, and even the most grizzly of expensive MySQL queries, can be combated simply by a more adaptable or advanced cache mechanism. With all of that out of the way, I said to them that it’s important to understand the nature of the problem with the application, since in their case we were seeing a load average of over 600.

I don’t have their solution; I have the solution to showing them the problem. Enter the sysad, blazing armour, etc. etc. Well, that’s the way it’s _supposed_ to happen!

cat /var/log/httpd/access_log | grep '26/Mar' | grep 'HTTP/1.1" 50' | wc -l
26081

cat /var/log/httpd/access_log | grep '27/Mar' | grep 'HTTP/1.1" 50' | wc -l
2

So we can see 504s were a huge issue on the 26th but not the 27th; but how many requests did the site get on each day, comparatively?

[root@host httpd]# cat access_log | grep '26/Mar' | wc -l
437598
[root@host httpd]# cat access_log | grep '25/Mar' | wc -l
339445

The box received 25% more traffic, but based on the figures in the sar data, CPU load had gone up 1500%, far beyond what the 32 cores on their server could handle. Crazy. It must be because requests are building up in a queue: so many requests reach Apache, each hitting MySQL, that either MySQL forms a bottleneck and might need more memory, or, at this scale, a larger or smaller (probably larger) packet size for the request; this can significantly speed up how fast the memory bucket fills and empties, and how quickly the request queue is cleared. Allowing it to build up is going to be a disaster, because it will mean not just slow queries get a 504 gateway timeout, but also normal requests to regular HTML pages (or even cached pages), since at that stage the CPU is completely overwhelmed.
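A quick way to see when the load built up is to bucket requests by hour; this assumes the default Apache timestamp format, where the hour is the field after the first colon:

# requests per hour on the bad day
grep '26/Mar' access_log | cut -d: -f2 | sort | uniq -c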

I wrote a script. To find the majority of the 504s for 26 Mar, you can use this piece:

cat access_log | grep '26/Mar' | grep 'HTTP/1.1" 50' | awk {'print $7'}

To generate a unique list of failed pages for your developer/team, you can run:

cat access_log | grep '26/Mar' | grep 'HTTP/1.1" 50' | awk {'print $7'} | sort | uniq
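And to rank the worst offenders rather than just list them, tack on a count; the top of this list is usually where the refactoring effort should go:

cat access_log | grep '26/Mar' | grep 'HTTP/1.1" 50' | awk {'print $7'} | sort | uniq -c | sort -rn | head -20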

To be honest, somewhere in the simplicity of this post is a stroke of inspiration (if not ingenuity). It’s also kind of hacky and crap, but it does work, and it is effective for doing the job.

And that is what counts.

Fixing phpMyAdmin “Connection for controluser as defined in your configuration failed.”

This happens when the phpmyadmin package is installed, but for some reason or another the automation that the package manager and phpMyAdmin use to set up the phpmyadmin user and database doesn’t properly apply the table schema from /usr/share. Here is the process of fixing this error for those that hit it.

Create a database called phpmyadmin

create database phpmyadmin;

You can actually call the database anything, as long as you remember what you changed it to later.

Create a database user

MariaDB [(none)]> GRANT ALL PRIVILEGES ON phpmyadmin.* to 'phpmyadmin'@'localhost' identified by 'AVERYSECUREpasswordgoeshere98123123sdabcsd123' ;
Query OK, 0 rows affected (0.00 sec)

Locate the create_tables.sql file copied by the package manager (or from the zip if installing from source):

[root@host phpMyAdmin]# find /usr/share | grep create_table
/usr/share/phpMyAdmin/sql/create_tables.sql
/usr/share/phpMyAdmin/sql/create_tables_drizzle.sql
/usr/share/phpMyAdmin/libraries/display_create_table.lib.php
/usr/share/phpMyAdmin/test/libraries/PMA_display_create_table_test.php

Import the database schema

# Check the file is correct
[root@host phpMyAdmin]# vi /usr/share/phpMyAdmin/sql/create_tables.sql

# Import it
[root@host phpMyAdmin]# mysql -u root -p < /usr/share/phpMyAdmin/sql/create_tables.sql
Enter password:
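If the import worked, the new database should now contain the pma tables (the exact prefix, pma_ or pma__, varies between phpMyAdmin versions):

# quick check that the schema landed
mysql -u root -p -e 'SHOW TABLES IN phpmyadmin;'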

Afterwards you will need to make phpMyAdmin aware of the credentials in /etc/phpMyAdmin/config.inc.php:

vi /etc/phpMyAdmin/config.inc.php

Confirm your changes

[root@host phpMyAdmin]# cat /etc/phpMyAdmin/config.inc.php | grep -A3 phpmyadmin
 * wiki <http://wiki.phpmyadmin.net>.
 */

/*
--
$cfg['Servers'][$i]['controluser']   = 'phpmyadmin';          // MySQL control user settings
                                                    // (this user must have read-only
$cfg['Servers'][$i]['controlpass']   = 'AVERYSECUREpasswordgoeshere98123123sdabcsd123';          // access to the "mysql/user"

$cfg['Servers'][$i]['pmadb']         = 'phpmyadmin';

Your work is done, and that pesky error is gone now that phpMyAdmin has its DB. This tutorial has been a long time coming, as I see this all the time.

Adding a User with Sudo Access using visudo

Use the command visudo to access the /etc/sudoers file.

visudo

Uncomment this line:

## Allows people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL

So it looks like:

## Allows people in group wheel to run all commands
 %wheel        ALL=(ALL)       ALL

Save the file, then run this command for your user:

usermod -aG wheel usernameforsudoaccesshere
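You can confirm the group change took with id; note the new group only applies to fresh logins, not already-open sessions:

id usernameforsudoaccesshere   # should now list wheel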

You’re done.

But test it:

su usernamewithsudoaccess
sudo yum history

Any root-only command is a good enough test for this. The command should run successfully after re-providing your user’s password for sudo access.

Disable/Enable TLS v1.0, v1.1 and v1.2 for Plesk

This actually applies to any website, but is specifically aimed at Plesk. Today a customer complained that we’d disabled both TLS 1.0 and 1.1; they wanted 1.1 kept for compatibility in the meantime, so it requires doing one of two things.

plesk bin server_pref -u -ssl-protocols 'TLSv1.1 TLSv1.2'

Alternatively, it can be done directly within Plesk’s ssl.conf at /etc/httpd/conf.d/ssl.conf; this also applies to httpd users not using Plesk.

[root@host ~]# cat /etc/httpd/conf.d/ssl.conf | grep TLS
#SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
##     This exports the standard SSL/TLS related `SSL_*' environment variables.
##   The safe and default but still SSL/TLS standard compliant shutdown
##     the SSL/TLS standard but is needed for some brain-dead browsers. Use
##     alert of the client. This is 100% SSL/TLS standard compliant, but in
SSLProtocol +TLSv1.1 +TLSv1.2
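If you edit ssl.conf by hand, remember that httpd needs a restart before the change applies, and you can verify the handshake behaviour with openssl (substitute your real hostname for the stand-in yourdomain.example below):

systemctl restart httpd

# TLSv1.0 should now be refused, TLSv1.1 accepted
openssl s_client -connect yourdomain.example:443 -tls1   < /dev/null
openssl s_client -connect yourdomain.example:443 -tls1_1 < /dev/null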

A pretty simple operation here.

Redirect HTTP to HTTPS

It’s pretty simple, after adding an HTTPS site in Apache, to forward your existing HTTP website traffic to HTTPS. There might be reasons why you don’t forward everything, but in this case today I was asked to forward everything. Here is how I achieved it:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L] 

It could also be configured for a specific directory, though:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?somedir/(.*) https://%{SERVER_NAME}/secure/$1 [R,L] 
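A quick way to confirm the rewrite is doing what you expect is to look at the response headers; yourdomain.example is a stand-in for the real site:

# expect a 30x status and an https:// Location header
curl -sI http://yourdomain.example/somedir/page.html | head -n 5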

Pretty simple stuff.

Retrieving SMART status from a SDA disk attached to a MegaRAID card

Today I realised that manually checking the SMART status of a disk attached to a MegaRAID card requires a bit more than the usual smartctl invocation.

[root@host ~]# smartctl -a -d megaraid,0 /dev/sda
smartctl 5.43 2016-09-28 r4347 [x86_64-linux-2.6.32-696.16.1.el6.x86_64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

Vendor:               SEAGATE
Product:              ST3146356SS
Revision:             HS10
User Capacity:        146,815,733,760 bytes [146 GB]
Logical block size:   512 bytes
Logical Unit id:      -----
Serial number:        ----
Device type:          disk
Transport protocol:   SAS
Local Time is:        Thu Mar 15 05:18:57 2018 CDT
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature:     32 C
Drive Trip Temperature:        68 C
Elements in grown defect list: 15
Vendor (Seagate) cache information
  Blocks sent to initiator = 3694557980
  Blocks received from initiator = 4259977977
  Blocks read from cache and sent to initiator = 2859908284
  Number of read and write commands whose size <= segment size = 1099899109
  Number of read and write commands whose size > segment size = 0
Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 65098.07
  number of minutes until next internal SMART test = 23

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   105645673        6         0  105645679   105645679      65781.538           0
write:         0        0        38        38         45      48511.618           7
verify: 48452245        7         0  48452252   48452259      43540.092           7

Non-medium error count:       48

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background long   Completed                  16       1                 - [-   -    -]
# 2  Background short  Completed                  16       0                 - [-   -    -]

In order to retrieve this detail you need to use -d megaraid,N, where N is the disk ID number. Try 0, 1, 2, 3, etc., or use the MegaRAID CLI to get a list of all the disks. I thought it was worth mentioning at least: it always pays to check this if a customer is having weird I/O troubles, since quite a lot of detail is provided about the errors the disk encounters. So even when the overall SMART health status is OK, looking here gives you an idea of whether any test is failing for the disk.
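If you’re not sure how many disks sit behind the card, a quick sweep of the first few IDs with the health-only flag saves wading through the full output:

# health status only, across megaraid IDs 0-3
for n in 0 1 2 3; do
  echo "=== megaraid,$n ==="
  smartctl -H -d megaraid,$n /dev/sda
done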