RESET Forums (homeservershow.com)

Starting my N54L journey - discussion appreciated.



Since you guys have been so great in helping me out so far I figured I would post what I am up to:


  • HP N54L
  • 1x 120GB SanDisk SSD (rear eSATA port)
  • 5x 3TB Seagate Barracudas
  • 2x sticks of the stock memory (I have another N54L that I am using as a SAN for my lab)
  • 1x 5.25" hot swap bay
  • Intel Pro/1000 NIC, used off eBay (can be had soooo cheap). NICs teamed in an LACP dynamic aggregation in the Intel drivers
  • Windows Storage Server 2012

Total cost? Just under $1k. This should give me many years of storage.


Right now I am playing with performance and deciding how I am going to cut up 13.6TB of raw storage. What is great about Storage Spaces compared to my messing with a ZFS or LVM build is that I can cut the raw pool into virtual disks with independent levels of redundancy in the same drive pool. The major downside is that WSS2k12 parity spaces' write speeds are very lackluster. However, I can mount two ~20GB VHDXs from the SSD and set them as journaling disks, and I get 40-50 MB/s writes in real-world transfers. Whatever, I digress:
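For anyone wanting to try the journal trick, this is roughly what it looks like in PowerShell. Treat it as a sketch: the pool name, paths, and sizes are placeholders for my setup, and `New-VHD`/`Mount-VHD` come from the Hyper-V PowerShell module (diskpart's `create vdisk`/`attach vdisk` does the same job if you don't have it installed).

```powershell
# Create two thin (dynamic) ~20GB VHDXs on the SSD.
# Paths and the pool name "Pool" are placeholders for my setup.
New-VHD -Path C:\Journals\JD1.vhdx -SizeBytes 20GB -Dynamic
New-VHD -Path C:\Journals\JD2.vhdx -SizeBytes 20GB -Dynamic

# Attach them so Windows sees them as raw disks
Mount-VHD -Path C:\Journals\JD1.vhdx
Mount-VHD -Path C:\Journals\JD2.vhdx

# Grab the newly poolable disks and add them as dedicated journal disks
$jd = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $jd -Usage Journal
```

`-Usage Journal` is the key bit: it tells Storage Spaces to use those disks only for the parity write journal, not for data.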


What am I going to be using this all for, you ask? Well, right now:


  • Replacing an aging Mediasonic RAID-in-a-box direct-attached unit.
  • Central repository for pretty much everything: music, movies, documents, photography, applications, machine backups, VMs, etc.
  • Centralize and simplify offsite backups by keeping my CrashPlan license on this box.
  • Use CrashPlan to back up my laptop and such directly to a partition.
  • Access media even when my main desktop computer is off.
  • VMs on my main server can use resources here, so for example I can run a Subsonic VM.
  • A few hundred GB of storage for the family.
  • Manage it headlessly with RSAT/RDP.

So yeah, fun times. I have enough experience with Server 2012 that I don't need to play with it much more, but I still need to settle in my head on how I am going to handle the storage. Right now I plan on using thinly provisioned virtual disks for each category of storage. Most likely:


  • Parity drive for video files - the lowish writes will sting a little on the initial upload, but the absurd read speeds will be fine for streaming
  • Mirror for music - Foobar can be finicky with network streaming, and the additional redundancy overhead is offset by the smaller file sizes
  • Parity for photos - seriously, how often do you write large quantities of pictures?
  • Three-way mirror for local backups - these probably will not be stored offsite, so two drives of redundancy and good speeds
  • Simple drive for temporary iSCSI targets as needed. All speed, no redundancy needed
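The plan above maps pretty directly onto `New-VirtualDisk` calls. This is only a sketch of what I mean: the pool name and sizes are placeholders, not my final numbers.

```powershell
# Placeholder pool name and sizes - adjust to taste
$pool = "Pool"

# Parity space for video: capacity-efficient, great reads, slow writes
New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "Video" `
    -ResiliencySettingName Parity -ProvisioningType Thin -Size 6TB

# Two-way mirror for music
New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "Music" `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 500GB

# Three-way mirror for local backups: Mirror resiliency with
# PhysicalDiskRedundancy 2 = survives two drive failures
New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "Backups" `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 `
    -ProvisioningType Thin -Size 2TB

# Simple (striped) space for throwaway iSCSI targets - no redundancy
New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "Scratch" `
    -ResiliencySettingName Simple -ProvisioningType Thin -Size 1TB
```

All thin provisioned, so the 13.6TB of raw capacity only gets consumed as data actually lands on each space.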


Benchmarks and screenshots to come, but for now I figured typing this all out would both get it straight in my mind and invite discussion and ideas from you guys, while adding more info to the forums.


So, thanks to RDP I had the spare time to run some benchmarks throughout my day. Stick with me here, I was comprehensive.


Before we start, let's see how the SSD is doing in the N54L:



Looks about right for SATA II and a mediocre SSD. Just FYI, the SSD is plugged into the rear eSATA port.


First off I wanted to see what sort of overhead I was going to get with Storage Spaces vs. a single drive. So I created a volume on a drive with Disk Management and ran a test. I then took the same drive, put it in a test pool, and made a fixed, full-size simple volume out of it.
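For reference, the Storage Spaces half of that test setup looks something like this in PowerShell (the pool and disk names here are placeholders, not what I actually typed):

```powershell
# Take one poolable drive and build a single-disk test pool from it
$disk = Get-PhysicalDisk -CanPool $true | Select-Object -First 1
$ss   = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"

New-StoragePool -FriendlyName "TestPool" `
    -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $disk

# Fixed, full-size, simple (no resiliency) virtual disk on that one drive -
# the closest apples-to-apples comparison to a bare NTFS volume
New-VirtualDisk -StoragePoolFriendlyName "TestPool" -FriendlyName "Bench" `
    -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize
```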


[screenshots: bare-drive benchmark vs. single-drive simple storage space]



Looks like across the board we are seeing a 5-6% performance hit right off the bat for using Storage Spaces.


Now I am going to work my way up through the various disk redundancy modes. As I go I will keep adding drives to the pool:


Two drive mirror:



1n writes and 2n reads, looks about right.


Two drive simple (striped):



2n/2n. However, some weirdness in the tests on the last two runs. I ran this a few times but the results were the same.


3 disk parity, no journal:



Yikes. 2n-1 reads, but those write speeds are ~1/6th of what the drive can do. I think this is a lot of people's sticking point when it comes to Storage Spaces, and why a lot of people kick it to the curb during testing. But we shall carry on.


"3-way" mirror, 5 drives:



Looks like 1n/1n, until we get to the last two tests and the reads spike. Might need more testing. However, I don't plan on using this in production.


5 Drive Stripe



Yeaaaaaaah boy. Too bad this is a statistical horror-show to use in real life, eh? Like I said, I might use this to roll out temporary iSCSI disks. However, gigabit will be my master there.


Edited by Fantasysage

Forgive the weirdness, I lost the middle third of this post due to this forum not giving me a warning about it:


Now we get to the interesting part, journaled on the right, none on the left:




So you can see that adding two journaling disks, at least in my tests, provided a pretty solid boost in performance. As for the disks I am using: two thinly provisioned VHDXs on the SSD, mounted and added to the storage pool as journal disks via PowerShell.
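If you want to confirm the journal disks actually took, something like this should do it (again, "Pool" is a placeholder for your pool's friendly name):

```powershell
# List every disk in the pool with its assigned role.
# Journal disks should report Usage = Journal; the spinners
# typically show AutoSelect.
Get-StoragePool -FriendlyName "Pool" | Get-PhysicalDisk |
    Select-Object FriendlyName, Size, Usage
```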


While the tests show up to a 100% boost, in the real world it is more like 50%.


Also worth noting is the jump in non-journaled performance just from adding the extra two drives to the pool.



Now, as far as real-world performance? This is from one 2k12 box to the storage server, a 50GB VHD:




Speed starts out fast, then the buffers die and it bottoms out, only to settle in around the high-50 MB/s range.


And as for the performance of the n54l during the writes?




35-40% CPU during the writes - all that parity calculation has to come from somewhere. 




The box is still responsive and usable during a file transfer. Server Manager is traversable, but it can be a little sluggish when loading a new snap-in.


Read speed from the parity drive is obviously snappy; it saturated the network connection and was much lighter on the box as well:





Also some ATTO tests to the network share as a mapped drive:





So yeah, at least in my case, I think a parity volume is more than sufficient for my video files. Read speeds won't be a problem, and write speeds - while a little disappointing - are acceptable. And while the write speed is a definite con, it is made up for by having the ability to run Windows applications natively, and by permissions being easy to manage.


One thing about the journaling is that it absolutely slaughters the SSD that it is on. Also, there is no more room in this box for another drive! So I am left with the small VHDs on the SSD. I won't be writing and re-writing data on this drive, so it isn't too bad, but it makes me pause and think. Also, VHDs do not natively re-mount on boot in Windows, so I need to work my way around that with either a startup script or a task scheduler hack. Not sure what I am going to do yet.
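One idea for the re-mount-on-boot problem, sketched but not yet tested by me: register a startup scheduled task that reattaches the VHDXs. Paths and the task name are placeholders; `Mount-VHD` needs the Hyper-V module, so a diskpart script (`select vdisk file=...` / `attach vdisk`) is the module-free alternative.

```powershell
# Startup task that reattaches the journal VHDXs before anyone logs in.
# Runs as SYSTEM, elevated. Paths/names are placeholders for my setup.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-NoProfile -Command "Mount-VHD C:\Journals\JD1.vhdx; Mount-VHD C:\Journals\JD2.vhdx"'
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "MountJournalVHDs" -Action $action `
    -Trigger $trigger -User "SYSTEM" -RunLevel Highest
```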


Anyway, I hope all of this can be of help to someone! I will keep you guys posted as I go; I might try thumb drives for the journal disks, maybe a USB3 card in the spare PCIe x1 slot with two USB3 thumb drives?


Something to think about anyway.


Interesting tidbit to note concerning journaling disks that I have not read elsewhere:


So here's the deal. If you are using VHDs on an SSD as I am, they will NOT mount on boot unless you script it somehow (something I have not yet done). Another thing to note is that your entire storage pool will then fail, and none of your virtual disks in the pool will mount either. If you re-attach the VHDs in Disk Management (the local system's Disk Management, not Storage Spaces) and hit F5, everything is there. This is somewhat scary, as I am now predicating the stability of ALL my storage on two virtual disks on one drive. So I played around a bit. I made a copy of the two virtual disks (JD1 and JD2) and moved them somewhere else. I then copied 1TB of actual data to the server. I rebooted the system, and instead of attaching the primary VHDs, I attached the backups. Note that the backups were some 250MB smaller. And boom, no problems! Everything mounted fine and all the data was readable. This makes sense, as nothing is stored for long on the JDs, but who knew how Microsoft implemented it?


So I think I will be incorporating weekly copies of the JDs into my backup and recovery plan. Something to keep in mind.
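The weekly copy itself is trivial; the only catch is that a VHDX is locked while it is attached, so the copy has to happen while the journals are detached (or via a shadow copy). A rough sketch, with placeholder paths:

```powershell
# Weekly journal-disk backup sketch. The VHDXs must be detached
# (or snapshotted via VSS) first - they are locked while mounted.
# Per my experiment above, even a stale copy is enough to bring
# the pool back, since nothing lives on the journals for long.
$stamp = Get-Date -Format yyyyMMdd
Copy-Item C:\Journals\JD1.vhdx "D:\JDBackups\JD1-$stamp.vhdx"
Copy-Item C:\Journals\JD2.vhdx "D:\JDBackups\JD2-$stamp.vhdx"
```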


Also, while we're talking about them: I have copied just less than 4TB of actual data to the server so far, and both VHDs are hovering around 1.3GB:



These are expandable to 30GB, but I doubt they will ever see that. Also, in writing all this data, my SSD has seen a ton of writes. Depending on which value you read, around 8TB worth:




But that is going to taper off quickly now that the initial copy is done. I am not particularly worried, especially since I can use an old copy of the VHDs if I ever have to rebuild the storage pool.


More to come as I play.
