SSD Caching on Hyper-V

• August 20, 2011

Written By: Timekills

Does SSD caching, as enabled by the Intel Z68 chipset, speed up operations in a Hyper-V (virtualized) client on a Server 2008R2 host?

The argument against using an SSD as a cache when using a motherboard with Intel’s new Z68 chipset is difficult to rebut. If you are using the system as a workstation, why not just install the OS to the SSD and reap the full speed benefits of the drive all the time? You can choose what additional programs you want on the SSD as well – certainly you know what programs you use most often, whether it be Photoshop or a game – and you’d want them to stay on that SSD so you can use the speed you shelled out the big bucks for, right? Intel will tell you that this is why they set the minimum cache size at only 20GB: so you can use a cheap, small SSD. Well, one, cheap 20GB SSDs don’t really exist at this time. Two, the smaller the cache, the less likely what you want cached is actually going to be, well, cached. And three, Intel’s own example 20GB SSDs are hardly inexpensive. So when would this be a benefit?

What if you could use that single SSD for speed across multiple OS’s/systems? Instead of getting SSD speed benefits on one system, you could share that cache across four, five, or more PCs, with the speed benefit shared by them all. Sure, none of them would have full SSD speed, but all of them could see the significant improvements Intel claims for caching. Magic? No. The way to this computing nirvana is to use the cache on a drive that holds multiple virtual hard drives on a Z68 chipset host – in this case, Windows Server 2008R2 with Hyper-V enabled.

Environment:

Motherboard: Gigabyte GA-Z68A-D3H-B3

CPU: Intel 2500K

RAM: 16GB G.SKILL Ripjaws F3-12800CL9D

OS drive: 2x Seagate Momentus 7200.4 ST9160412AS 160GB 7200 RPM 16MB Cache 2.5″ SATA 3.0Gb/s in a RAID1 config (enclosed in IcyDock 2-in-1 RAID enclosure)

Data drive(s): 5x 2TB Samsung EcoGreen F4 HD2040UI in RAID 5 attached to HighPoint RocketRaid 2720 (6Gb/sec PCI-E 2.0 x8)

VHD drive: 600GB WD6000BLHX 10K RPM 2.5” Velociraptor

SSD: 60GB OCZ Vertex 2

OS (Host): Windows Server 2008R2 with Hyper-V enabled

OS (Guests):

- Windows Home Server 2011 (4GB RAM, statically allocated)

- Windows 7 Ultimate x 3 (one for HTPC/TV recording, one for Minecraft server, one for editing)

A little background on Z68 caching as currently implemented: you have the option of using one SSD of 20GB or more (up to a 64GB maximum for the cache, although you can use any size SSD) to cache a drive attached to an onboard Intel SATA port. As I am using a 60GB SSD, I had the option of having the caching software partition it into a 20GB cache and leave 40GB usable as an additional hard drive, but I chose to use the entire SSD for caching.

An obvious limitation is that the SSD can only cache the drive attached to a single SATA port. In a workstation environment this would typically mean one large drive serving as both boot and storage drive, with the SSD caching it. The question then is: why not just install the OS on the SSD? In theory, a smaller SSD used as a cache could benefit more than just the OS. But how would this be used in a server environment – especially a virtual host?

I originally thought to cache the host OS drive, figuring the Hyper-V Manager, as well as the host OS itself, would see increased speeds. Below are some CrystalDiskMark tests showing the results of caching the server OS drive (using only a 20GB cache for these tests). The blue (first test) is no caching; the green is with caching:

[CrystalDiskMark screenshots: host OS drive (Momentus RAID 1) without and with SSD caching]

The laptop drives are a few years old, but they are 7200 RPM and no slouch; even so, the speed increases are quite impressive – not full Gen2 SSD speeds, certainly, but very fast. Especially interesting are the dramatic increases in 4K read and write speeds.

But what good is this, really? The whole reason I used the laptop drives in RAID 1 for the host OS is that host drive access shouldn’t impact virtual OS use; the host OS is rarely touched except for occasional maintenance. Is the speed increase cool? Yes. Does it help? I did some empirical and subjective testing of virtual OS boot-up and access times and found that it did indeed speed up access to the Hyper-V Manager, even when managed remotely from a separate workstation, but it had little measurable and no noticeable impact on actual guest OS use or speeds.

The second round of tests was where I hoped, and found, the results to be substantial and worthwhile for virtual OS use. This was accomplished by caching the drive holding the virtual hard disks – in this case the Velociraptor – with the intent that the SSD would then effectively be caching all the virtual OS’s. This is also the reason I went with the full 60GB on the SSD (along with some concerns about TRIM, which I will address later).

Of course, the Velociraptor is a speedy little drive in its own right. If I had it to do over again, I’d not spend the money on the Velociraptor, as its speed is wasted when used with caching. It is much more cost effective to use a standard SATA drive with SSD caching, as the benchmarks show. Again, the blue is no caching and the green is with caching.

[CrystalDiskMark screenshots: Velociraptor VHD drive without and with SSD caching]

The write speeds really threw me off. Certainly the read speeds are impressively fast, even given the Velociraptor’s impressive showing by itself, but why are the write speeds so low? In multiple tests I saw write speeds as high as 145MB/sec sequential and around 90MB/sec in 512K. On average the speeds were higher than shown here, but even so, only about what you’d get natively from the Velociraptor. However, the 4K speeds are still much better, and let’s remember this is completely random data – not exactly what caching is designed for. So how else to measure improvements, if any?

There are many different benchmarking suites; none are universally accepted. More challenging, the full-suite Windows OS benchmarks all require video capabilities that a virtual machine just doesn’t have. So how does one empirically test the advantage of a cache? A common test is boot-up time. What defines the complete boot process is debatable, but for my test I chose the Minecraft server guest. It automatically starts the Minecraft server Java program upon logon (auto-logon), and I also wait for all system tray icons to finish loading: Splashtop Streamer, MS Security Essentials, and network connectivity. Times are as follows:

No Cache: 39.50 sec, 38.41 sec, 38.45 sec – average: 38.77 seconds

Cache: 31.96 sec, 29.90 sec, 28.83 sec – average: 30.23 seconds

It may not seem like much, but that is a 22% decrease in boot time.
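If you want to double-check that figure, here is a quick Python sketch that simply averages the stopwatch runs above and works out the percentage reduction; nothing in it is specific to my setup beyond the measured times:

    # Average the three stopwatch runs for each case and compute the reduction.
    no_cache = [39.50, 38.41, 38.45]   # seconds, SSD caching disabled
    cached = [31.96, 29.90, 28.83]     # seconds, SSD caching enabled

    avg_no_cache = sum(no_cache) / len(no_cache)
    avg_cached = sum(cached) / len(cached)
    reduction = (avg_no_cache - avg_cached) / avg_no_cache * 100

    print(f"No cache: {avg_no_cache:.1f} s   Cached: {avg_cached:.1f} s")
    print(f"Boot time reduced by about {reduction:.0f}%")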

Another example is file transfer time – a real-world illustration of why the CrystalDiskMark synthetic speeds aren’t indicative of how much faster the system is with caching enabled. While moving a WHS V1 ISO (a PP3-plus-updates version I have) of about 2.2GB from my RAID array to the V: (cache-enabled) drive to use as an install source for a new virtual machine, I averaged 320 MB/sec! Obviously it was writing from the RAID directly to the cache, but the file was instantly usable.
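To put that 320 MB/sec in perspective, here is a rough back-of-the-envelope sketch in Python; the 145 MB/sec figure is just the best uncached sequential write discussed above, so treat the comparison as approximate:

    # Rough copy-time math for the 2.2GB ISO transfer described above.
    size_mb = 2.2 * 1024      # ISO size in MB (2.2GB)
    cached_rate = 320         # MB/s observed writing to the cache-enabled drive
    native_rate = 145         # MB/s, roughly the Velociraptor's native sequential write

    print(f"With cache:    ~{size_mb / cached_rate:.0f} seconds")   # ~7 s
    print(f"Without cache: ~{size_mb / native_rate:.0f} seconds")   # ~16 s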

TRIM. The bane of SSDs is the garbage that collects on them, slowing them down over time. That’s bad enough on a dedicated OS drive, where there is relatively little change, but a cache? Isn’t that exactly the type of use that will quickly kill a non-TRIM-enabled SSD? And the SATA ports have to be set to RAID mode to use caching, which has historically been a no-go for TRIM. Fear not. Server 2008R2 reports DisableDeleteNotify=0, which means TRIM is enabled, even though the SATA ports must be set to a RAID state. After all the testing, OCZ’s Toolbox showed just over 520GB of data written to and over 480GB read from the SSD, with 100% life remaining, and I saw no decrease in speeds during the testing. Admittedly I haven’t had the system in use long enough to say for sure TRIM is working, but I have every reason to believe it is. Time will tell. Even if it isn’t, since I am using the full 60GB for cache, it is only a matter of disabling the acceleration in the IRST software on the host OS and running an SSD cleaning tool to bring the drive back to new – less than 10 minutes of work. I don’t believe that will be necessary.
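If you want to check the TRIM setting on your own host, the DisableDeleteNotify value comes from the built-in Windows fsutil utility. Here is a small Python wrapper around that command as a sketch; running "fsutil behavior query DisableDeleteNotify" directly from an elevated command prompt works just as well:

    # Query the Windows TRIM setting via the built-in fsutil tool.
    # Run from an elevated prompt; needs Python 3.7+ for capture_output.
    import subprocess

    result = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())

    # A value of 0 means delete notifications (TRIM) are passed down to the drive.
    if "DisableDeleteNotify=0" in result.stdout.replace(" ", ""):
        print("TRIM is enabled.")
    else:
        print("TRIM is disabled.")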

Ultimately it still comes down to whether you want to spend the money on an SSD to use it for caching. The argument still stands that if you are buying it for a workstation, you’d be better off just installing the OS on the SSD and getting the full speed benefits of the drive. However, I think a virtual environment such as the one tested is the perfect use. Rather than getting great speed on one OS and normal speed on the others, now multiple OS’s can benefit and see speed improvements from one relatively low-cost SSD. Worth it? Yes.

Category: BYOB Hardware, User Builds

Comments (6)

  1. @welchwerks says:

    Can I ask – is the SSD on the same 6Gb controller as the Raptor, or on the 3Gb one? Very nice write-up.

  2. Timekills says:

    The SSD is on a 3Gb controller, while the Raptor is on one of the 6Gb controllers. I wanted to see if it would work across different controllers as long as they were all Intel.

  3. Joe_Miner says:

    Thanks TK for the nice write-up of your analyses. This is very useful to know.

  4. fredp1 says:

    Nice write-up. Just one question: my understanding is the SSD can accelerate a port or a volume on the Z68. So if you have a RAID volume on the Z68, you can use the SSD to accelerate that volume. From what I have read, you must let the RAID volume complete its initialisation process first. Once that is done, the option to accelerate the volume is activated.

  5. tinkererguy says:

    This is a great write up! You inspired me to do some testing on my Z68 motherboard, so I typed up the results, where I compare using a 3 year old SSD to a new SATA3 SSD, which made a big difference for my 3x1TB RAID0: http://tinkertry.com/ssdscompared4srt
