First, the bash script above creates ZFS vdevs with a given size, ashift, volblocksize and redundancy strategy (raidz1, raidz2, raidz3, mirror and so on). Second, it starts fio with a number of parameters and repeats each test 5 times. Third, the results are collected in a .csv file for analysis.
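A minimal sketch of such a loop is shown below; the pool layout, device names, fio parameters and CSV layout are illustrative assumptions, not the original script.

#!/bin/bash
# Sketch of the benchmark loop: create a pool, run fio 5 times, log to CSV.
# Device names, layout and fio settings are assumed, not the original values.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"
LAYOUT=raidz1
RESULTS=results.csv

zpool create -o ashift=12 bench $LAYOUT $DISKS

for run in 1 2 3 4 5; do
    # --minimal prints one semicolon-separated result line per run
    fio --name=seqwrite --directory=/bench --rw=write --bs=1M \
        --size=8G --numjobs=1 --minimal |
        sed "s/^/$LAYOUT;write;$run;/" >> "$RESULTS"
done

zpool destroy bench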
Because of the time it takes to perform adequate random IOPS tests on ZFS, only sequential read/write benchmarks are covered here; random IOPS benchmarks are covered in a separate article.
In OpenOffice, each set of 5 measurements was averaged. For writes, the first of the 5 results was omitted as an outlier and the remaining 4 measurements were averaged. In some charts, because of jitter, the standard deviation for writes is shown as an error bar.
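The same averaging step can also be expressed outside the spreadsheet; the sketch below uses made-up throughput values purely for illustration.

# Drop the first of 5 write results as an outlier, average the remaining 4.
# The MB/s values are invented example numbers.
runs=(389 512 498 505 501)
avg=$(printf '%s\n' "${runs[@]:1}" | awk '{ sum += $1 } END { print sum / NR }')
echo "average write throughput: $avg MB/s"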
The results are adjusted for data disks, so, for example, a 5-disk raidz1 appears as 4 data disks in the charts.
The y-axis shows scaling instead of throughput, to make the results less machine specific. Scaling means: how much better the pool performs relative to a single disk. For example, a 4-disk stripe should perform 4x better than a single disk.
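To illustrate how a point on the y-axis is derived, the sketch below divides a pool result by the single-disk baseline; both numbers are made-up example values.

# Scaling = pool throughput / single-disk throughput (example values only).
single=130   # MB/s, averaged single-disk result
pool=460     # MB/s, averaged result for a 5-disk raidz1 (4 data disks)
awk -v p="$pool" -v s="$single" 'BEGIN { printf "scaling: %.2fx\n", p / s }'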
The zpool was created with -o ashift=12 -o volblocksize=64k -o primarycache=metadata.
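As a hypothetical example of how these settings could be applied (pool name, device names and zvol size are assumptions): ashift is given at pool creation, while volblocksize and primarycache are properties of the zvol/dataset.

# Hypothetical commands; tank, the device names and the 50G size are assumptions.
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zfs create -V 50G -o volblocksize=64k -o primarycache=metadata tank/bench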
The mdraid array was created with mdadm using --chunk=512; after creation, the stripe cache was enlarged with:
echo 8192 > /sys/block/md0/md/stripe_cache_size
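A hypothetical full command sequence for such an array might look as follows; the RAID level, member count and device names are assumptions.

# Assumed RAID level 5, 8 members and device names sdb..sdi.
mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=512 /dev/sd[b-i]
echo 8192 > /sys/block/md0/md/stripe_cache_size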
Comparison of an 8-disk ZFS stripe with checksum=off versus checksum=on.
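The checksum property could have been toggled between runs with commands like the following; the pool name is an assumption.

# Assumed pool name "tank"; run the benchmark after each setting.
zfs set checksum=off tank
zfs set checksum=on tank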