NVMe Parity vs HDD Parity in Unraid

This is a very simple test to see whether using an NVMe drive for parity offers any major improvement over a hard drive for parity. Spoiler: between the two different NVMe drives I tested with, there were no major improvements in write speeds.

Grains of Salt

You should be aware that we are only testing two different NVMe drives, and they are arguably not very good ones. While they are fast and provide great IOPS (Input/Output Operations Per Second), they are TLC OEM drives made more for administrative workloads than server workloads. Also, due to limited hardware availability, this test does not represent the entire market; more testing is required to get a better idea of whether there is or is not something to be gained from using NVMe drives in parity. Okay, now that we have that out of the way, let's get started.

IMG_7248.jpg

Hardware

As per usual, I always start with the hardware. For this test we used the following drives in parity:

  • Toshiba KXG50PNV1T02 (NVMe, TLC)

  • LITEON CX2 GB1024 Q11 (NVMe, TLC)

  • Seagate 7200rpm hard drive with 128MB cache

Not much information is available on the LITEON drive, but in benchmarks it is spectacularly below average, with the Toshiba drive barely any better. Both are TLC based and very affordable on the used market because they are OEM drives for HP and Dell. The Seagate hard drive is, well, a hard drive with a decent 128MB cache and a 7200rpm spindle speed, more than adequate for what we are testing.
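If you want to confirm exactly which NVMe models and firmware are sitting in your own box, a couple of standard tools will do it. A quick sketch, assuming nvme-cli and smartmontools are installed on the server (the /dev/nvme0n1 device name is just an example):

# list every NVMe device along with model, serial, firmware and capacity
nvme list

# pull detailed identify and health data for one drive (example device name)
smartctl -a /dev/nvme0n1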

Other Noteworthy Hardware

Between the two Unraid servers we are using the following hardware to assist us and to remove any potential bottlenecks.

Server 1, our data transmitter/sender, aka “Transcencia”, is rocking the following:

Server 2, our data receiver, aka “PNAS”, is sporting the following:

Sitting on the networking front between the two servers is the infamous Ubiquiti UniFi US-16-XG, which means we have 10Gb Ethernet capabilities between both servers. No network bottlenecks for us; it all comes down to drive architecture. Now let's put this all together with a picture.

topology.png

MTU 9000

It should be noted that the 10 gigabit switch has jumbo frames enabled and both servers have their MTU set to 9000. This gives us the maximum headroom with our network equipment. What this translates to is the possibility of hitting 1 gigabyte per second transfers, assuming our drives could write faster than 130MB/s.
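For reference, here is roughly how you can sanity-check the jumbo-frame setup from a shell on either server. This is just a sketch: eth0 is an assumed interface name (in Unraid the MTU itself is normally set under Settings > Network Settings), 172.16.1.10 is PNAS's address from the script later on, and the iperf3 check only applies if you have it installed.

# confirm the interface is actually running with an MTU of 9000 (eth0 is an assumption)
ip link show eth0 | grep -o 'mtu [0-9]*'

# send non-fragmenting jumbo pings to the other server
# (8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000)
ping -M do -s 8972 -c 4 172.16.1.10

# optional raw throughput check: run "iperf3 -s" on PNAS first, then from Transcencia:
iperf3 -c 172.16.1.10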


The Array

So now we need to set the stage, if you will, and talk about how our array is set up. We are not using any cache drives because I wanted to control as much as possible and keep the write speeds comparable by writing directly to the array. Having a cache would artificially inflate our results, and I wanted results that were closer to ground truth.

Test 1

  • No cache drives

  • 3 hard drives as data drives

  • 1 hard drive as parity

Test 2

  • No cache drives

  • 3 hard drives as data drives

  • Toshiba KXG50PNV1T02 as Parity

Test 3

  • No cache drives

  • 3 hard drives as data drives

  • LITEON CX2 GB1024 Q11 as Parity

Here is a screenshot of all the drives we will be testing with. There are some extras in here, but they weren't used in testing.

Screen Shot 2020-09-19 at 9.02.32 AM.png

Test Setup

The test setup is remarkably similar to the previous testing we did in “All NVMe SSD Array in Unraid”, where I will literally be borrowing the same script to do all of our write testing. The only difference here is that we will not be trying to fill the array to the brim; instead we will do 41 transfers of the same video file in order to capture the total time and speed of each.

Execution Steps

The steps to execute this test are simple.

  1. Run the script for each of the tests listed above

  2. After each test is done, delete all of the files from the array on PNAS (a quick cleanup sketch follows this list)

  3. Rename or edit the script as necessary so it does not overwrite the CSV file that gets created
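For what it's worth, steps 2 and 3 boil down to something like the sketch below. The paths come from the script; the renamed CSV filename is just an example, so adjust it per test run.

# on PNAS: remove the copies left over from the previous run
rm /mnt/user/downloads/*-5700XT.mov

# on Transcencia: set the previous run's results aside before starting the next test
mv /tmp/resultsNVME.csv /tmp/resultsNVME-test1.csv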

The Script

This is the script I am using to execute the test. It simply copies over our 45GB video file 41 times with a 30 second break between each secure copy. Each time a transfer takes place, the results get written to a CSV file that we can later analyze. You should know that secure copy is not a great benchmarking tool, but for our purposes it works fine.

#!/bin/bash
# Copy the same 45GB video file to the receiving server 41 times,
# logging each run's scp output (including transfer speed) to a CSV.
i=0
while [ "$i" -lt 41 ]
do
    # "script" captures the scp progress output so it can be parsed later
    script -q -c "echo run-$i;scp -r /mnt/user/Videos/Completed/2020/5700XT.mov 172.16.1.10:/mnt/user/downloads/$i-5700XT.mov" >> /tmp/resultsNVME.csv
    i=$((i+1))
    # give the array a 30 second break between transfers
    sleep 30
done

Sample output

Screen Shot 2020-09-19 at 9.19.59 AM.png
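Because the script command captures scp's live progress meter (carriage returns and all), the raw CSV needs a little cleanup before you can read the speeds out of it. Here is a rough sketch, assuming scp's default progress output and the /tmp/resultsNVME.csv path from the script above:

# turn carriage returns into newlines, keep only the completed (100%) lines,
# then print each run's final average transfer speed (second-to-last field)
tr '\r' '\n' < /tmp/resultsNVME.csv | grep '100%' | awk '{print $(NF-1)}'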

Test Results

Finally, the stuff you came looking for. In a big surprise, there isn't much variance between a hard drive and NVMe in parity, at least none that I think warrants the purchase of expensive flash storage for your parity disk.

Screen Shot 2020-09-19 at 12.56.57 PM.png
Screen Shot 2020-09-19 at 12.56.07 PM.png

Test Sources

Here is a link that will allow you to view the CSV files and the converted files, if you so choose.

Questions?

Why did you do this?

The simple answer is curiosity. In theory, because of the way Unraid works (every write to the array also has to read and update the parity drive), there should be something to gain from having a faster drive in parity. The better IOPS of an NVMe should have given me a bigger difference. I believe, however, that the reason we aren't seeing much of a difference is the type of NVMe we are using. I would like to do this again with SLC and MLC drives, or even TLC drives aimed more at prosumers or servers.
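To make the theory a little more concrete: with single parity, updating the array means reading the old data and old parity, XORing in the change, and writing the new parity back, so the parity drive is touched on every write. A toy illustration of that update, with made-up byte values:

#!/bin/bash
# Toy single-parity (XOR) update: the operation the parity drive absorbs on
# every array write. The byte values below are made up purely for illustration.
old_data=0xA7     # byte currently stored on the data disk
new_data=0x3C     # byte being written
old_parity=0x5B   # byte currently stored on the parity disk

# new parity = old parity XOR old data XOR new data
new_parity=$(( old_parity ^ old_data ^ new_data ))

printf 'new parity byte: 0x%02X\n' "$new_parity"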

Should I do this?

Right now it's not worth it in the slightest. If you have DEEP pockets, you are probably better off with a different solution altogether, or simply using MLC NVMe drives that are 8TB (or whatever is ridiculous) as cache in your array.

Is there a better way?

God yes, I will die on this hill if I have to. NVMe in cache is by far the best way to go. From my experience, you will want a 1TB NVMe drive at a minimum. Why? Well, the 1TB drives tend to sustain higher write speeds for longer. We got to see this in action when I was able to hit sustained 1GB/s transfers, as can be seen here.

What about reading from the “Data Receiver”?

Honestly, there are a ton of variables here, and while setting up a test for this is possible, I personally didn't see the value; more often than not, people tend to store data on their servers rather than pull from them. Yes, there are plenty of exceptions.



Conclusion

Honestly, I think this could use a second or third look with different brands and different types of drives. Unfortunately, these are the only drives I could get my hands on to test with, so this information is inherently flawed. Hopefully one day we can take a look at this again with more and different drives.