First make sure to create partitions aligned to your SSD erase block size (in my case 512k):
sudo fdisk -H32 -S32 /dev/sdb
You can check with fdisk -lu /dev/sdb
that the start sector of each partition is divisible by 1024 (1024 sectors × 512 bytes = 512k, the erase block size).
Then initialize the desired partition for use with LVM2, passing the --dataalignment parameter:
pvcreate --dataalignment 512k /dev/sdb1
Make sure the devices section of your /etc/lvm/lvm.conf
contains the following options:
md_chunk_alignment = 1
data_alignment_detection = 1
data_alignment = 0
data_alignment_offset_detection = 1
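To verify the alignment took effect, you can check where the first physical extent starts (a quick sanity check, assuming the PV is /dev/sdb1 as above):
pvs -o +pe_start --units k /dev/sdb1
The "1st PE" column should be a multiple of 512k.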
Now you can use vgcreate
to create your volume group, and then lvcreate
to create the logical volumes.
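For example (a minimal sketch; the volume group and LV names match the device used below, and the size is just a placeholder):
vgcreate vg1 /dev/sdb1
lvcreate -L 10G -n test vg1
This produces the /dev/mapper/vg1-test device used in the following steps.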
When creating ext4 filesystems (with TRIM support), use the following command:
mkfs.ext4 -O extent -b 4096 -E stride=128,stripe-width=128 /dev/mapper/vg1-test
stride and stripe-width are calculated as erase block size / filesystem block size
= 512k / 4k = 128
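You can check that the values were stored in the superblock afterwards with:
tune2fs -l /dev/mapper/vg1-test | grep -iE 'stride|stripe'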
When mounting ext4 filesystems, use the 'discard' option to enable TRIM support:
mount -o discard,noatime,nodiratime /dev/mapper/vg1-test /mnt/
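To make this permanent across reboots, the corresponding /etc/fstab entry could look like this (same options, mount point /mnt as above):
/dev/mapper/vg1-test  /mnt  ext4  discard,noatime,nodiratime  0  2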
Extra tip: for more speed you can consider turning off journaling (to avoid double-write overhead), at the cost of a filesystem that is more easily corrupted.
Check if journaling is enabled: dumpe2fs /dev/mapper/vg1-test | grep 'Filesystem features'
Disable journaling: tune2fs -O ^has_journal /dev/mapper/vg1-test
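Note that the journal can only be removed while the filesystem is unmounted (or mounted read-only); a possible sequence, using the device and mount point from above, is:
umount /mnt
tune2fs -O ^has_journal /dev/mapper/vg1-test
e2fsck -f /dev/mapper/vg1-test
mount -o discard,noatime,nodiratime /dev/mapper/vg1-test /mnt/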
What is the improvement from this setup? Do you have any benchmarks to compare, or at least some data about the actual performance? In the end, where is the improvement?
Due to how SSDs work, it is better to make sure your filesystem writes and partitions on your SSD are aligned to multiples of your SSD’s erase block size.
I don’t have any benchmarks, but you can read more here:
http://blog.nuclex-games.com/2009/12/aligning-an-ssd-on-linux/
http://wiki.freeswitch.org/wiki/SSD_Tuning_for_Linux
I tried this setup vs. the defaults, and the defaults actually gave me almost 2x the performance…
#Defaults…
[root@nas VMs]# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.93733 s, 273 MB/s
#Following the article above…
[root@nas VMs2]# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 7.30458 s, 147 MB/s