So I thought about a cool way to back up my files without using anything too fancy, and I started thinking about ZFS. Don't know why I didn't before, because it's ultra, ultra resilient. Cheers Oracle (and Sun before them). This is on Debian 7 Wheezy.
Step 1. Install ZFS
# apt-get install lsb-release
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
# dpkg -i zfsonlinux_6_all.deb
# apt-get update
# apt-get install debian-zfs
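The debian-zfs package builds the kernel module through DKMS, so the install can take a few minutes. If you want the module loaded automatically at boot, the standard Debian mechanism of listing it in /etc/modules should do it (Step 5 below shows loading it by hand with modprobe):

# echo zfs >> /etc/modules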
Step 2. Create a mirrored disk config with zpool
Here I'm using 4 x 75GB SATA Cloud Block Storage devices to keep 4 copies of the same data, guarded by ZFS's great error-checking abilities:
zpool create -f noraidpool mirror xvdb xvdd xvde xvdf
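In a 4-way mirror every block is stored on every disk, so the usable capacity is that of a single 75GB device. A quick way to sanity-check the layout and capacity (standard zpool/zfs commands; the exact figures will vary per system):

# zpool list noraidpool
# zfs list noraidpool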
Step 3. Write a little disk write utility
#!/bin/bash
# write.sh - append an incrementing counter to file.txt, ~50 times a second
x=0
while :
do
    echo "Testing." $x >> file.txt
    sleep 0.02
    x=$(( $x + 1 ))
done
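Save it to the pool as write.sh and make it executable:

# chmod +x write.sh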
Step 4 (Optional). Start killing the disks with fire, kill the iSCSI connection, etc., and see if file.txt is still tailing.
./write.sh & tail -f /noraidpool/file.txt
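When you're done, the background writer can be stopped with kill (job %1 in the same shell, or pkill by name):

# kill %1    # or: pkill -f write.sh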
Step 5. Observe that as long as one of the 4 disks still has its virtual block device connection, your data stays up. So it will be OK even if up to three of the disks throw I/O errors simultaneously. Not baaaad.
root@zfs-noraid-testing:/noraidpool# /sbin/modprobe zfs
root@zfs-noraid-testing:/noraidpool# lsmod | grep zfs
zfs                  2375910  1
zunicode              324424  1 zfs
zavl                   13071  1 zfs
zcommon                35908  1 zfs
znvpair                46464  2 zcommon,zfs
spl                    62153  3 znvpair,zcommon,zfs
root@zfs-noraid-testing:/noraidpool# zpool status
  pool: noraidpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        noraidpool  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xvdb    ONLINE       0     0     0
            xvdd    ONLINE       0     0     0
            xvde    ONLINE       0     0     0
            xvdf    ONLINE       0     0     0

errors: No known data errors
Step 6. Some more benchmark tests
time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"
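Bear in mind that dd from /dev/zero largely measures cached writes; the trailing sync catches the flush, but it isn't included in dd's own transfer-rate figure. GNU dd can fold the flush into its timing with conv=fdatasync, which is a fairer number (I'd avoid oflag=direct here, as ZFS on Linux of this vintage didn't support O_DIRECT):

time dd if=/dev/zero of=ddfile bs=8k count=250000 conv=fdatasync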
Step 7. Some concurrent fork tests
#!/bin/bash
# forktest.sh - fork a ~2GB dd write into the background every 2 seconds
# (they pile up!) while logging to file.txt and polling pool I/O stats
while :
do
    time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
    echo "Testing." $x >> file.txt
    sleep 2
    x=$(( $x + 1 ))
    zpool iostat
    clear
done
or better:
#!/bin/bash
# Kick off three concurrent dd writes with different block sizes,
# then loop logging to file.txt and polling zpool iostat
time sh -c "dd if=/dev/zero of=ddfile bs=128k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=24k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=16k count=250000 && sync" &
while :
do
    echo "Testing." $x >> file.txt
    sleep 2
    x=$(( $x + 1 ))
    zpool iostat
    clear
done
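One caveat: all three dd processes write to the same ddfile, so they clobber each other's output. For three genuinely independent write streams, point each at its own file (ddfile1/2/3 are just names I've made up for the sketch):

time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile2 bs=24k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile3 bs=16k count=250000 && sync" &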
bwm-ng 'elegant' style output of disk I/O, using zpool iostat:
#!/bin/bash
# stat.sh - refresh pool I/O statistics every 2 seconds
time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
while :
do
    clear
    zpool iostat
    sleep 2
done
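You can get much the same effect without the loop: zpool iostat takes an interval argument itself, and watch(1) from procps is on a default Wheezy install:

# zpool iostat 2        # built-in interval, no loop needed
# watch -n 2 zpool iostat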
To test the resiliency of ZFS I removed 3 of the disks, completely unlatching them:
        NAME                      STATE     READ WRITE CKSUM
        noraidpool                DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            1329894881439961679   UNAVAIL      0     0     0  was /dev/xvdb1
            12684627022060038255  UNAVAIL      0     0     0  was /dev/xvdd1
            4058956205729958166   UNAVAIL      0     0     0  was /dev/xvde1
            xvdf                  ONLINE       0     0     0
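When the block devices come back, the pool can be told to pick them up again. zpool online (for the same disk returning) or zpool replace (for a fresh disk, addressed via the GUID shown above) are the standard commands; device names here are from my test box:

# zpool online noraidpool xvdb
# zpool replace noraidpool 1329894881439961679 xvdb   # if it's a new disk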
And I noticed that, with just one remaining Cloud Block Storage device, I was still able to access the data on the pool as well as create new data:
# cat file.txt | tail
Testing. 135953
Testing. 135954
Testing. 135955
Testing. 135956
Testing. 135957
Testing. 135958
Testing. 135959
Testing. 135960
Testing. 135961
Testing. 135962
# mkdir test
root@zfs-noraid-testing:/noraidpool# ls -a
.  ..  ddfile  file.txt  forktest.sh  stat.sh  test  writetest.sh
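Once the disks are reattached, a scrub makes ZFS verify every block against its checksums and resilver anything stale, then zpool status shows the result:

# zpool scrub noraidpool
# zpool status noraidpool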
That’s pretty flexible.