Creating an encrypted filesystem on a partition
Here's a best-known method for creating an encrypted filesystem on a disk partition on a Debian GNU/Linux system.
In order to mount this filesystem, you'll need a passphrase. Either the system must not automatically mount the filesystem at boot time, or you'll need access to the system's console during the boot process to type or cut-and-paste the passphrase when cryptsetup prompts for it.
These instructions also apply to creating an encrypted filesystem on an md RAID device.
It's also possible to create an encrypted filesystem on an entire disk, rather than just a partition, but the LUKS maintainers recommend that you instead create a single partition spanning the whole disk and use that. Many disk and filesystem tools expect a partition table, even if the contents of the partitions are encrypted. This does not weaken the encrypted filesystem in any way. (Note that this recommendation doesn't appear to apply to md RAID devices: the RAID device has metadata of its own and probably doesn't need a partitioning scheme on top, so feel free to create the encrypted filesystem on the entire md device.)
Install the parted, cryptsetup, openssl and dcfldd packages:
# apt-get install parted cryptsetup openssl dcfldd
The following steps assume that the partition to be encrypted is named /dev/sdz1, and that the filesystem will be mounted on /foobar. Change these names as needed.
Unmount the partition
If the partition is currently mounted, you must first unmount it (after backing up any data on the partition that needs to be saved, of course):
# umount /dev/sdz1
Partition the disk
Create a new disk label and partition on the disk. Here I'm using the GPT partitioning scheme.
Many new high-capacity spinning storage hard drives use a 4096-byte sector size to get around limitations in the LBA addressing scheme. For performance reasons, it's crucial when using these drives that all partitions are aligned to 4KB boundaries. Most GNU/Linux tools have been updated to deal with the new size (previously all spinning storage drives used 512-byte sectors). However, for backward compatibility with Windows XP, some of these 4K sector drives report that they're using a 512-byte sector size. The only way to know for sure is to consult your drive manufacturer's documentation, or to comb the web in search of this information. It's probably safe to assume that most new drives circa September 2010 with a capacity of at least 1.5TB are using 4K sectors.
As of version 2.3, parted appears to do the right thing in both cases.
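As a rough check on a running system, you can also ask the kernel what sector sizes it sees via sysfs. Here's a sketch (the sector_sizes helper name is mine, and the sysfs root is a parameter only to make the function easy to exercise); note that a drive emulating 512-byte sectors for compatibility may still report 512B physical here, so the manufacturer's documentation remains authoritative:

```shell
# Print the logical and physical sector sizes the kernel reports for a
# drive, read from sysfs. Drives that emulate 512-byte sectors may
# report 512 for both values, so treat this as a hint, not proof.
sector_sizes() {
    dev=$1
    sys=${2:-/sys/block}
    printf 'logical=%s physical=%s\n' \
        "$(cat "$sys/$dev/queue/logical_block_size")" \
        "$(cat "$sys/$dev/queue/physical_block_size")"
}
```

For example, `sector_sizes sdz` would print something like `logical=512 physical=4096` for a 4K-sector drive.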
Partitioning the drive
# parted --align optimal /dev/sdz
GNU Parted 2.3
Using /dev/sdz
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdz will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) mkpart non-fs 1 100%
(parted) p
Model: ATA ST2000DL003-9VT1 (scsi)
Disk /dev/sdz: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name    Flags
 1      1049kB  2000GB  2000GB               non-fs

(parted) q
Information: You may need to update /etc/fstab.
#
Fill the partition with random data
This may take a while; on typical x86 hardware circa 2008, this step proceeds at the rate of about 8MB/s.
# dcfldd if=/dev/urandom of=/dev/sdz1
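To estimate how long the fill will take, divide the partition size by the write throughput. A quick sketch (the fill_hours helper name is mine; the 8MB/s figure is the one quoted above):

```shell
# Rough time, in whole hours, to fill a device with random data:
# size in GB, times 1024 MB/GB, divided by MB/s, divided by 3600 s/h.
fill_hours() {
    size_gb=$1
    mb_per_s=$2
    echo $(( size_gb * 1024 / mb_per_s / 3600 ))
}
```

For a 2TB drive at 8MB/s, `fill_hours 2000 8` works out to roughly 71 hours, so plan accordingly.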
Generate a strong key
Use a cryptographically strong key to encrypt the filesystem. When creating an encrypted filesystem on a local machine, you might use a USB flash drive to store the key (protected by a passphrase) and connect it only when mounting the filesystem, but obviously you don't have that option in the case of a system that needs to be managed remotely. In fact, if you're creating this filesystem on a remote machine, make sure that either a) the boot sequence does not depend on this drive being mounted at boot time or b) you have some kind of secure access to the console during boot (e.g., sshing into a Xen dom0 and running xm console on the domU during its boot sequence) so that you can type or cut-and-paste the passphrase when prompted by cryptsetup. The key should be stored in an encrypted file (e.g., encrypted with GPG), should not be stored on the system, and should consist of printable characters if you're going to be typing or cut-and-pasting the key over a secure remote session. Using printable characters weakens the key compared to an equal-length key consisting of random bytes, but using such a key would require storing the key in an encrypted file somewhere on the system; that file, in turn, would need to be encrypted with a different passphrase, which itself would need to consist of printable characters so that it could be decrypted for use with cryptsetup, and so on.
pwsafe can generate good, printable, random keys. If you use it, create a key with at least 256 bits of entropy.
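If pwsafe isn't available, a printable key with comparable entropy can be drawn straight from /dev/urandom using standard tools. A sketch (the gen_key helper name is mine; each alphanumeric character carries about 5.95 bits of entropy, so 44 characters gives roughly 260 bits):

```shell
# Generate a printable random key with roughly 256 bits of entropy:
# filter /dev/urandom down to alphanumerics and keep 44 characters.
gen_key() {
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 44
    echo
}
```

Remember to store the result in an encrypted file off the system, as described above.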
Create the LUKS partition
LUKS is the Linux Unified Key Setup, the standard for GNU/Linux disk encryption. It supports multiple keys per encrypted filesystem, which is good for allowing several users to mount the filesystem without needing to share a secret, and for backup keys in case a primary key is lost.
As of August 2010, the default cipher for LUKS partitions is aes-cbc-essiv:sha256; it has provisions to protect against watermarking attacks and is a good, proven cipher. Recently, LUKS has added support for XTS-based ciphers, which may eventually become the default mode. The default key size is 256 bits. Use the --key-size= option to specify a different size.
See http://en.wikipedia.org/wiki/Disk_encryption_theory for more details, but this recipe will assume that the defaults are good enough.
Regular disk device
If the device to be encrypted is a regular disk and not an md RAID array, use this command to create the new LUKS partition:
# cryptsetup --verbose luksFormat --verify-passphrase /dev/sdz1
When prompted, supply the key you created in the step above.
md RAID array
If the device to be encrypted is an md RAID array, you should use the --align-payload= option to ensure that crypto blocks are aligned on RAID stripes. This option takes as an argument the number of 512-byte sectors in a full RAID stripe. To calculate this value, multiply your RAID chunk size in bytes by the number of data disks in the array (N/2 for RAID 1, N-1 for RAID 5 and N-2 for RAID 6), and divide by 512 bytes per sector. In the example below, /dev/md0 is a RAID 6 device with 4 data disks and a chunk size of 128 kbytes: 128 * 1024 * 4 / 512 = 1024 sectors.
# cryptsetup --verbose luksFormat --verify-passphrase --align-payload=1024 /dev/md0
When prompted, supply the key you created in the step above.
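The --align-payload value can be computed mechanically from the chunk size and the data-disk count. A sketch (the align_payload helper name is mine):

```shell
# Sectors per full RAID stripe for cryptsetup's --align-payload=:
# chunk size in KiB, times 1024 bytes/KiB, times the number of data
# disks, divided by 512 bytes per sector.
align_payload() {
    chunk_kib=$1
    data_disks=$2
    echo $(( chunk_kib * 1024 * data_disks / 512 ))
}
```

`align_payload 128 4` prints 1024, matching the example above.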
Mount the LUKS partition
Mount the LUKS partition. This command creates a new device-mapper device at /dev/mapper/foobar.
# cryptsetup luksOpen /dev/sdz1 foobar
Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.
Once again, when prompted, supply the key you created previously.
You can now access that device as if it were a standard block storage device. Either create a filesystem on the encrypted device (/dev/mapper/foobar in this case) or use it as a physical volume for LVM (see next step).
Use the device as an LVM PV (optional)
You can use the encrypted device as an LVM physical volume (PV), in which case all of the volumes on the device will also be encrypted. However, because the PV isn't visible at boot time due to the encryption layer, there are a few manual steps required whenever you boot the machine (unlocking the LUKS device with cryptsetup luksOpen, activating the VG with vgchange -ay, and mounting the filesystems).
Create the PV
Assuming our new device is named /dev/mapper/foobar, let's make it a PV.
Regular disk device
If the encrypted device is a regular disk:
# pvcreate --metadatatype 2 /dev/mapper/foobar
  Physical volume "/dev/mapper/foobar" successfully created
md RAID array
If the encrypted device is an md RAID array, we want to ensure that LVM creates volumes on full stripe boundaries. To begin with, we need to tell LVM to allocate its metadata block size as a multiple of the full stripe width. The --metadatasize flag takes as an argument the size of the metadata block; the default is 192 kbytes. If that's a multiple of your stripe size, you don't need to use the --metadatasize flag. Otherwise, calculate your stripe size (chunk size times number of data disks in the array), round it down by a couple of kbytes (the --metadatasize flag is a little quirky) and specify that size. Here's an example using a RAID 6 array with 6 disks (i.e., 4 data disks) using a chunk size of 128k: the stripe size is 128 * 1024 * 4 = 512 kbytes.
# pvcreate --metadatatype 2 --metadatasize 500k /dev/mapper/foobar
  Physical volume "/dev/mapper/foobar" successfully created
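The --metadatasize value can be derived the same way as the other stripe parameters. A sketch (the metadata_size_kib helper name is mine, and the 12 KiB rounding margin simply mirrors the 512k-to-500k example above; it is not a documented LVM rule):

```shell
# Suggested --metadatasize in KiB for an md array: the stripe width
# (chunk size in KiB times the number of data disks), rounded down by
# a small margin to accommodate pvcreate's quirky size handling.
metadata_size_kib() {
    chunk_kib=$1
    data_disks=$2
    echo $(( chunk_kib * data_disks - 12 ))
}
```

`metadata_size_kib 128 4` prints 500, the value used in the command above.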
To verify that the first physical extent (PE) of the PV is aligned properly, we can use the pvs command:
# pvs /dev/mapper/foobar -o+pe_start
  PV          VG   Fmt  Attr PSize PFree 1st PE
  /dev/dm-8        lvm2 --   5.46T 5.46T 512.00K
In this example, the first PE starts at 512K, a full stripe boundary, so we can proceed.
Create the VG
Creating the volume group (VG) is most likely the same regardless of the device type (disk or md array), because the only stripe-sensitive parameter is the physical extent size. It's 4MB by default in LVM2, so if 4MB is a multiple of your stripe width, you don't need to do anything special; otherwise, look into the --physicalextentsize parameter in the vgcreate man page.
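A quick way to check whether the default 4MB PE size lands on full-stripe boundaries for your array (a sketch; the pe_aligned helper name is mine):

```shell
# The default 4 MiB (4096 KiB) physical extent is stripe-aligned
# whenever the stripe width (chunk KiB x data disks) divides it evenly.
pe_aligned() {
    chunk_kib=$1
    data_disks=$2
    stripe_kib=$(( chunk_kib * data_disks ))
    if [ $(( 4096 % stripe_kib )) -eq 0 ]; then
        echo yes
    else
        echo no
    fi
}
```

For the 128k-chunk, 4-data-disk array used in these examples, `pe_aligned 128 4` prints yes, so the default PE size is fine.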
# vgcreate foobarvg /dev/mapper/foobar
  Volume group "foobarvg" successfully created
Create one or more LVs
Now create one or more logical volumes (LVs) just like you would for any other VG. There are no special options needed here for stripe alignment on an md array; LV sizes are dictated by the VG PE size.
# lvcreate --name baz --size 5T foobarvg
Create the filesystem
When creating a filesystem on an md array, we should again consider the chunk and stripe sizes. This example shows how to account for them on an ext3 filesystem. The parameters are identical for ext4, but different for XFS or other non-ext-based filesystems; consult the man pages for details.
The relevant options for ext3 are stride and stripe-width. stride is identical to the md array chunk size, and stripe-width is identical to the array stripe width, except that both options are specified in units of filesystem blocks instead of bytes. The default ext3 (and ext4) block size is 4096 bytes, so simply divide your chunk size and stripe width by 4096 to get the proper values for these parameters. Here's an example using a RAID 6 array with 6 disks (i.e., 4 data disks) using a chunk size of 128k (stripe size is therefore 512 kbytes):
# mke2fs -t ext3 -E stride=32,stripe-width=128 -L baz /dev/foobarvg/baz
The simpler

# mke2fs -t ext3 -L baz /dev/foobarvg/baz

command will suffice if the underlying device is not an md array.
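The stride and stripe-width values can be derived from the chunk size, data-disk count, and filesystem block size. A sketch (the ext_raid_params helper name is mine):

```shell
# ext3/ext4 stride and stripe-width, in filesystem blocks: stride is
# the md chunk size divided by the block size, and stripe-width is
# stride times the number of data disks.
ext_raid_params() {
    chunk_kib=$1
    data_disks=$2
    block_bytes=${3:-4096}
    stride=$(( chunk_kib * 1024 / block_bytes ))
    echo "stride=$stride,stripe-width=$(( stride * data_disks ))"
}
```

`ext_raid_params 128 4` prints stride=32,stripe-width=128, the values passed to mke2fs in the md example above.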
Add the filesystem to /etc/crypttab and /etc/fstab (optional)
If you did not use the device as an LVM PV and instead created a filesystem directly on the encrypted device, you can configure the filesystem to be decrypted at boot time by adding the following line to /etc/crypttab:
foobar /dev/sdz1 none luks,checkargs=ext3
Note that this line will cause the boot process to hang until the key is supplied. If you want the mount attempt to time out and let the boot continue without mounting the filesystem when the key is not supplied in time, use the following line instead (assuming a 20-second timeout; adjust as needed):
foobar /dev/sdz1 none luks,timeout=20,checkargs=ext3
To automatically mount the decrypted filesystem at boot, add the following line to /etc/fstab:
/dev/mapper/foobar /foobar ext3 defaults 0 2
If the filesystem doesn't have a line in /etc/crypttab, or if you specify a timeout there, add the noauto option to the /etc/fstab line so that the boot doesn't fail when the device isn't mapped.
/dev/mapper/foobar /foobar ext3 noauto 0 2