Boost ZFS Performance with a Special VDEV in TrueNAS

Curious about how a special metadata VDEV can boost ZFS performance, especially with spinning disks? In this video, I walk through what it is, why it matters, and how to set it up in TrueNAS, plus a few steps from the terminal. I’ll also share real-world benchmarks comparing pools with and without a special VDEV, so you can see the difference for yourself.

📺 Watch Video

Testing

I created a test script that you can find here: https://github.com/timothystewart6/zfs-tools

The test script will create pools from three drives: HDD1, HDD2, and NVME_SPECIAL. You can modify these to match your disk IDs.

You can find your disk IDs by running:

```shell
ls -l /dev/disk/by-id/
```
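
If the listing is long, you can filter out the partition symlinks (the `-partN` suffix) to see only whole disks. A minimal sketch, using hypothetical sample IDs in place of piping in the real `ls` output:

```shell
#!/bin/sh
# Keep only whole-disk IDs by dropping "-partN" partition symlink names.
# The sample IDs below are hypothetical; in practice, pipe in:
#   ls /dev/disk/by-id/
printf '%s\n' \
  'ata-ST14000NM001G-2KJ103_ZL2ABCDE' \
  'ata-ST14000NM001G-2KJ103_ZL2ABCDE-part1' \
  'nvme-Samsung_SSD_990_PRO_2TB_S6Z1NX0T123456A' |
  grep -v -- '-part'
```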

You can also adjust the number of test files by changing TEST_COUNT; however, I found that 100,000 is a good number to get consistent results.

Update the script with your disk IDs.

You can then run the script:

```shell
chmod +x zfs-metadata.sh  # make it executable
./zfs-metadata.sh         # run the script
```

It will create two pools, test-1 and test-2; test-2 has the special vdev.

This will take a long time to run depending on your system, disks, and TEST_COUNT.

When it’s done, you will find the results in /mnt/test-results/
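
Once the runs finish, you can skim the result files for the headline numbers. A minimal sketch; the sample lines mimic fio's summary format, and the exact file names under /mnt/test-results/ depend on the script:

```shell
#!/bin/sh
# Pull the IOPS summary line out of a fio-style result file.
# Sample lines stand in for a real file under /mnt/test-results/.
printf '%s\n' \
  'randread: (groupid=0, jobs=1): err= 0' \
  '  read: IOPS=4331, BW=16.9MiB/s (17.7MB/s)' \
  '  lat (usec): min=52, max=9000, avg=865.88' |
  grep 'IOPS='
```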

If you want to check the small blocks values for your pool:

List pool

```shell
zpool list -v test-1  # change based on your pool name
```
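
With a special vdev attached, `zpool list -v` groups it under a `special` class heading. A minimal sketch of pulling that section out; the sample output is abridged and the column values are illustrative:

```shell
#!/bin/sh
# Find the special-class vdev in `zpool list -v`-style output.
# Sample lines stand in for real output; sizes are illustrative.
printf '%s\n' \
  'NAME          SIZE  ALLOC   FREE' \
  'test-2       14.6T  26.3G  14.5T' \
  '  sdb        12.7T  24.3G  12.7T' \
  'special          -      -      -' \
  '  nvme0n1    1.86T  2.01G  1.86T' |
  grep -A1 '^special'
```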

Setting Small Blocks & Record Size

You can change your small blocks value like this; however, it’s a good idea to check your record size first:

```shell
zfs get recordsize test-1
```

You should see something like:

```
NAME    PROPERTY    VALUE    SOURCE
test-1  recordsize  128K     default
```

You always want to be sure that your record size is greater than your small blocks size; otherwise, every block will be written to the special vdev.
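
That check can be sketched as a quick sanity script. The `zfs get` flags in the comment are real, but the sample values and the `to_bytes` helper are mine:

```shell
#!/bin/sh
# Sanity-check that recordsize > special_small_blocks.
# Sample values stand in for real output from:
#   zfs get -H -o value recordsize test-1
#   zfs get -H -o value special_small_blocks test-1
recordsize="128K"
small_blocks="64K"

# Convert a ZFS size like 64K or 1M to bytes (helper of mine, not a ZFS tool).
to_bytes() {
  case "$1" in
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

if [ "$(to_bytes "$recordsize")" -gt "$(to_bytes "$small_blocks")" ]; then
  echo "OK: recordsize > special_small_blocks"
else
  echo "WARNING: every block would go to the special vdev"
fi
```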

Get value

```shell
zfs get special_small_blocks test-1 -r  # change based on your pool name
```

Set value

```shell
zfs set special_small_blocks=64k test-1  # change based on your pool name and the small block value you want to use
```
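
Keep in mind that `special_small_blocks` only applies to newly written blocks; existing data stays on the HDDs until it is rewritten. A minimal sketch of rewriting files in place, demoed on a temp directory (in practice you’d point it at your dataset’s mountpoint, and you should test on a copy first):

```shell
#!/bin/sh
# Rewriting files creates new blocks, which then honor special_small_blocks.
# Demo on a temp dir; substitute your dataset's mountpoint in practice.
src="$(mktemp -d)"
echo "hello" > "$src/file.txt"
cp -a "$src" "$src.new"   # copying rewrites every block
rm -rf "$src"
mv "$src.new" "$src"
cat "$src/file.txt"       # prints: hello
rm -rf "$src"
```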

If you want to use a small blocks value of 128K or higher, you’ll also need to raise the record size on the dataset, like this:

```shell
zfs set recordsize=256K test-1/yourdataset
zfs set special_small_blocks=128K test-1/yourdataset
```

My Test Results

Random Read (4K, iodepth=1)

Here are my results from the video.

  • test-1 pool was a single 14TB EXOS drive
  • test-2 pool was a single 14TB EXOS drive + a Samsung 990 Pro NVMe
| Metric | test-1 (HDD only) | test-2 (Special VDEV) | Improvement |
| --- | --- | --- | --- |
| IOPS | 4,331 | 57,200 | +1,220% |
| Bandwidth | 16.9 MiB/s | 223 MiB/s | +1,220% |
| Avg Latency | 865.88 µs | 66.35 µs | −92% |
| 99% Latency | 8.0 µs | 2.8 µs | −65% |
| Read IO Completed | 2.0 GiB | 26.2 GiB | +1,210% |

Random Write (4K, iodepth=1)

| Metric | test-1 (HDD only) | test-2 (Special VDEV) | Change |
| --- | --- | --- | --- |
| IOPS | 209 | 195 | −6.7% |
| Bandwidth | 838 KiB/s | 782 KiB/s | −6.7% |
| Avg Latency | 18.8 ms | 20.2 ms | +7.4% (worse) |
| 99% Latency | 10.18 µs | 10.82 µs | Slightly worse |

Random Write (4K, iodepth=16)

| Metric | test-1 (HDD only) | test-2 (Special VDEV) | Improvement |
| --- | --- | --- | --- |
| IOPS | 210 | 229 | +9% |
| Bandwidth | 840 KiB/s | 919 KiB/s | +9% |
| Avg Latency | 303.8 ms | 277.4 ms | −8.7% |
| 99% Latency | 701 ms | 376 ms | −46.4% |
| 99.95% Latency | 776 ms | 443 ms | −43% |
| Max Latency | 842 ms | 516 ms | −38.7% |

Random Read/Write (4K, iodepth=16)

| Metric | test-1 (HDD only) | test-2 (Special VDEV) | Improvement |
| --- | --- | --- | --- |
| Read IOPS | 196 | 247 | +26% |
| Write IOPS | 196 | 246 | +25% |
| Read Bandwidth | 788 KiB/s | 989 KiB/s | +25.5% |
| Write Bandwidth | 786 KiB/s | 986 KiB/s | +25.5% |
| Avg Read Latency | 160.15 ms | 127.45 ms | −20% |
| Avg Write Latency | 163.93 ms | 130.24 ms | −20.5% |
| 99% Read Latency | 502 ms | 207 ms | −58.8% |
| 99% Write Latency | 506 ms | 209 ms | −58.7% |

Metadata – Random Access (20,000 files)

| Pool | Duration (s) | Improvement |
| --- | --- | --- |
| test-1 | 71.91 | |
| test-2 | 65.73 | +8.6% faster |

Metadata – Sequential Access (20,000 files)

| Pool | Duration (s) | Improvement |
| --- | --- | --- |
| test-1 | 139.80 | |
| test-2 | 81.62 | +41.6% faster |

📦 Products in this video 📦

While enterprise gear is great for businesses, I have found that if you have a good warranty, redundancy, and understand the trade-offs, consumer gear works great for home.

Join the conversation

🛍️ Check out the new Merch Shop at https://l.technotim.live/shop

⚙️ See all the hardware I recommend at https://l.technotim.live/gear

🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files

🤝 Support me and help keep this site ad-free!

This post is licensed under CC BY 4.0 by the author.