Setting up an Encrypted RAID with Ubuntu-ZFS+LUKS

Reference Documentation

Ubuntu-zfs for Linux: https://wiki.ubuntu.com/ZFS

This explains the different RAID levels: http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/

I decided to go with RAID10: http://www.zfsbuild.com/2010/06/03/howto-create-striped-mirror-vdev-pool/

http://www.taeuber.org/how-to-use-zfs-and-encryption-on-a-ubuntu-home-server/

http://www.linux-mag.com/id/6371/

If the RAID needs to be restored in the event the base (OS) drive crashes:

http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s06.html

Basically, open the encrypted devices and then use zpool import raid01, as sketched below.
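
In practice that means opening the LUKS devices on the replacement system first, then importing. A minimal sketch, assuming the key file is reachable and the same device names used later in this page:

# Open each LUKS member so the pool devices appear under /dev/mapper
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdb1 crypt1
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdc1 crypt2
# (repeat for the remaining members)

# Point the import at /dev/mapper so it finds the decrypted devices
zpool import -d /dev/mapper raid01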

Here is a good analysis of ZFS: http://www.mouldy.org/what-i-learned-from-setting-up-zfs-on-my-fileserver

This talks about backing up the LUKS headers: https://help.ubuntu.com/community/encryptedZfs

Prepare the disks for encryption

Use frandom rather than urandom; it is much faster. Make sure the build dependencies are installed first.

Alternatively, once the RAID is built, create a file from /dev/zero that fills the pool; because the zeros pass through the encryption layer, the underlying disks end up covered in random-looking ciphertext (see the post-RAID alternative section below). Which approach you choose comes down to patience versus instant gratification.

apt-get install kernel-package build-essential

Get the kernel headers

apt-get install linux-headers-$(uname -r)

Compile frandom http://www.billauer.co.il/frandom.html

cd /source
tar xvf frandom-1.1.tar.gz
cd /source/frandom-1.1
make
install -m 644 frandom.ko /lib/modules/$(uname -r)/kernel/drivers/char/
depmod -a
modprobe frandom

This will create /dev/frandom
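
Before wiping anything, it is worth a quick sanity check that the module loaded and that it really is faster than urandom (a rough sketch; throughput varies by machine):

lsmod | grep frandom                             # confirm the module is loaded
dd if=/dev/frandom of=/dev/null bs=1M count=256  # frandom throughput
dd if=/dev/urandom of=/dev/null bs=1M count=256  # urandom, for comparison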

Wipe the drive with entropy

dd if=/dev/frandom of=/dev/sdX
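
dd defaults to 512-byte blocks, which is slow. A sketch with a larger block size, plus a way to check progress from a second terminal:

dd if=/dev/frandom of=/dev/sdX bs=1M   # larger blocks cut per-call overhead

# From another terminal: SIGUSR1 makes dd print how far it has gotten
kill -USR1 $(pidof dd)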

Alternative entropy writing post RAID config

Alternatively, once the RAID is built, create a file from /dev/zero that fills the pool. The zeros are encrypted on the way to disk, so the underlying drives end up filled with random-looking ciphertext.

dd if=/dev/zero of=/mnt/raid01/somefile
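
The dd run ends with a "No space left on device" error once the pool is full; that is expected. Remove the file afterwards to reclaim the space:

rm /mnt/raid01/somefile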

Setup Truecrypt

Install and set up Truecrypt to house the encrypted key file.

Reference: Installing and using Truecrypt

Get Ubuntu-zfs

Add the repository

add-apt-repository ppa:zfs-native/stable
apt-get update
apt-get upgrade
apt-get install ubuntu-zfs

Get RAID applications

apt-get install parted cryptsetup

Partition devices

Create a GPT label and a single full-size partition on each member disk (a looped version follows the commands):

parted --align optimal /dev/sdX
mklabel gpt
mkpart non-fs 1 100%
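
The same label and partition have to be created on every member disk. In parted's script mode that can be looped; a sketch assuming the five disks used below (double-check the device names first, this is destructive):

for d in sdb sdc sdd sde sdf; do
    parted --align optimal --script /dev/$d mklabel gpt mkpart non-fs 1 100%
done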

Make Key file

Reference: http://www.howtoforge.com/automatically-unlock-luks-encrypted-drives-with-a-keyfile

I made the key file (indigo.txt) and put it inside a Truecrypt volume (/mnt/safe) to add another layer of security.

dd if=/dev/urandom of=/mnt/safe/indigo.txt bs=1024 count=4
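
It is worth locking the key file down so only root can read it. A quick sketch, assuming the filesystem inside the Truecrypt volume honors Unix permissions:

chown root:root /mnt/safe/indigo.txt  # owned by root
chmod 0400 /mnt/safe/indigo.txt       # read-only, owner only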

Create Encrypted RAID with LUKS

In this case I am keeping the master key file inside a Truecrypt volume.

Format devices using the key file.

cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdb1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdc1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdd1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sde1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdf1

Open the encrypted devices with key file.

cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdb1 crypt1
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdc1 crypt2
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdd1 crypt3
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sde1 crypt4
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdf1 crypt5
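
Since the format and open steps differ only in the partition and mapper name, both can be collapsed into one loop. A sketch over the same five partitions; --batch-mode suppresses luksFormat's interactive YES confirmation, so be sure the device list is right:

i=1
for part in sdb1 sdc1 sdd1 sde1 sdf1; do
    # Format, then open under a numbered mapper name
    cryptsetup --batch-mode luksFormat --key-file /mnt/safe/indigo.txt /dev/$part
    cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/$part crypt$i
    i=$((i + 1))
done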

RAID10

Create the zpool in a RAID10 configuration

zpool create -m /mnt/raid01 raid01 mirror /dev/mapper/crypt1 /dev/mapper/crypt2
zpool add raid01 mirror /dev/mapper/crypt3 /dev/mapper/crypt4

RAIDz

Or create the zpool in a RAIDZ configuration. (Note: RAIDZ is less flexible if you have to add drives later, but it is my new favorite.)

zpool create -m /mnt/raid01 raid01 raidz2 /dev/mapper/crypt1 /dev/mapper/crypt2 /dev/mapper/crypt3 /dev/mapper/crypt4 /dev/mapper/crypt5

RAID10 with L2ARC+ZIL (SSD Caching)

Create the zpool in a RAID10, log, and cache configuration

zpool create -m /mnt/raid01 raid01 mirror /dev/mapper/crypt3 /dev/mapper/crypt4 log /dev/mapper/crypt1 cache /dev/mapper/crypt2
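
The log and cache devices do not have to be part of the initial create; they can be attached to (and removed from) an existing pool later. A sketch with the same mapper names:

zpool add raid01 log /dev/mapper/crypt1    # attach a ZIL (log) device
zpool add raid01 cache /dev/mapper/crypt2  # attach an L2ARC (cache) device

# Log and cache devices can be removed again without data loss
zpool remove raid01 /dev/mapper/crypt1 /dev/mapper/crypt2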


Check the zpool status

RAID10

root@Holland:/dev/mapper# zpool status
  pool: raid01
 state: ONLINE
 scrub: none requested
config:
 
        NAME               STATE     READ WRITE CKSUM
        raid01             ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            mapper/crypt1  ONLINE       0     0     0
            mapper/crypt2  ONLINE       0     0     0
          mirror-1         ONLINE       0     0     0
            mapper/crypt3  ONLINE       0     0     0
            mapper/crypt4  ONLINE       0     0     0
 
errors: No known data errors

RAIDz

root@Holland:~# zpool status
  pool: raid01
 state: ONLINE
 scrub: scrub stopped after 0h1m with 0 errors on Tue Sep  2 18:21:54 2014
config:
 
        NAME                           STATE     READ WRITE CKSUM
        raid01                         ONLINE       0     0     0
          raidz2-0                     ONLINE       0     0     0
            disk/by-id/dm-name-crypt1  ONLINE       0     0     0
            disk/by-id/dm-name-crypt2  ONLINE       0     0     0
            disk/by-id/dm-name-crypt3  ONLINE       0     0     0
            disk/by-id/dm-name-crypt4  ONLINE       0     0     0
            disk/by-id/dm-name-crypt5  ONLINE       0     0     0
 
errors: No known data errors

RAID10 with L2ARC+ZIL (SSD Caching)

root@Synod:~# zpool status
  pool: raid01
 state: ONLINE
 scrub: none requested
config:
 
        NAME               STATE     READ WRITE CKSUM
        raid01             ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            mapper/crypt3  ONLINE       0     0     0
            mapper/crypt4  ONLINE       0     0     0
        logs
          mapper/crypt1    ONLINE       0     0     0
        cache
          mapper/crypt2    ONLINE       0     0     0
 
errors: No known data errors

Script to start the RAID

I keep the keys, which unlock all devices, inside a Truecrypt volume.

truecrypt --mount /home/fyzix/random.txt /mnt/safe
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdf1 crypt1
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdg1 crypt2
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdh1 crypt3
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdi1 crypt4
service zfs-fuse restart
truecrypt -d
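
A matching shutdown script is handy: export the pool first, then close the LUKS mappings. A sketch mirroring the startup script above:

zpool export raid01      # cleanly detach the pool
cryptsetup luksClose crypt1
cryptsetup luksClose crypt2
cryptsetup luksClose crypt3
cryptsetup luksClose crypt4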

Backup the LUKS Headers

We make a backup of the LUKS headers in case something overwrites them (rare, but without a backup the data would be unrecoverable).

mkdir -p /root/luks_headers
cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file /root/luks_headers/luks_sdb1_header_backup
cryptsetup luksHeaderBackup /dev/sdc1 --header-backup-file /root/luks_headers/luks_sdc1_header_backup
cryptsetup luksHeaderBackup /dev/sdd1 --header-backup-file /root/luks_headers/luks_sdd1_header_backup
cryptsetup luksHeaderBackup /dev/sde1 --header-backup-file /root/luks_headers/luks_sde1_header_backup
cryptsetup luksHeaderBackup /dev/sdf1 --header-backup-file /root/luks_headers/luks_sdf1_header_backup
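
If a header ever does get damaged, the backup is restored with the matching luksHeaderRestore command. Restore each file only to the disk it came from, since restoring overwrites the on-disk header:

cryptsetup luksHeaderRestore /dev/sdb1 --header-backup-file /root/luks_headers/luks_sdb1_header_backup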

Performance

RAID10

hdparm results

root@Holland:/dev# hdparm -t /dev/mapper/crypt1vg-raid01 
 
/dev/mapper/crypt1vg-raid01:
 Timing buffered disk reads: 122 MB in  3.04 seconds =  40.19 MB/sec
root@Holland:/dev# hdparm -t /dev/mapper/crypt2vg-raid02
 
/dev/mapper/crypt2vg-raid02:
 Timing buffered disk reads: 144 MB in  3.03 seconds =  47.50 MB/sec

Sequential read/write results

time sh -c "dd if=/dev/zero of=ddfile bs=4k count=2000000 && sync"
 
Holland:/mnt/raid01# time sh -c "dd if=/dev/zero of=ddfile bs=4k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB) copied, 268.078 s, 30.6 MB/s
 
root@Holland:/mnt/raid01# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
16384000000 bytes (16 GB) copied, 431.057 s, 38.0 MB/s

RAIDz

root@Holland:/mnt/raid01# time sh -c "dd if=/dev/zero of=ddfile bs=4k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB) copied, 290.392 s, 28.2 MB/s
 
real    4m50.787s
user    0m1.064s
sys     0m25.506s
root@Holland:/mnt/raid01# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
16384000000 bytes (16 GB) copied, 479.859 s, 34.1 MB/s

Samba network performance

Using a 32-bit system I got between 12 MB/s and 20 MB/s.

For a second RAID

As before, I am keeping the master key file inside a Truecrypt volume.

cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sda1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdb1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdc1
cryptsetup --verbose luksFormat --key-file /mnt/safe/indigo.txt /dev/sdd1

Open the encrypted devices.

cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sda1 crypt5
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdb1 crypt6
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdc1 crypt7
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdd1 crypt8

Create the zpool in a RAID10 configuration

zpool create -m /mnt/raid02 raid02 mirror /dev/mapper/crypt5 /dev/mapper/crypt6
zpool add raid02 mirror /dev/mapper/crypt7 /dev/mapper/crypt8

Runraid script for both RAIDs

truecrypt --mount /home/fyzix/random.txt /mnt/safe
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdf1 crypt1
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdg1 crypt2
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdh1 crypt3
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdi1 crypt4
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sda1 crypt5
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdb1 crypt6
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdc1 crypt7
cryptsetup luksOpen --key-file /mnt/safe/indigo.txt /dev/sdd1 crypt8
zpool clear raid01
zpool clear raid02
zfs mount -a
truecrypt -d
mount --bind /mnt/raid01/www /var/www
echo "...........---===[Raids Online]===---............."
df -h

Samba write performance averages around 27.6 MB/s and tops out around 37 MB/s. Reads average around 34 MB/s and top out around 49 MB/s.