RESET Forums (homeservershow.com)

Passing SATA ports thru to VM's


ikon


This post is probably directed at timekills more than anyone else, but please, anyone with info, chime in.

 

I'm thinking of setting up a small VM system just to run SpinRite. The idea would be to set up the same number of VMs as there are SATA ports on the mobo, less 1 for the host OS drive and VM files. IOW, if a mobo has 6 SATA ports, I would set up 5 VMs. Each VM would have a single SATA port passed through to it.

 

With the size of drives today, it can take a long time to SpinRite one at level 4, which is what I use to 'certify' a drive for production. If I can do 5 or more drives at once, it would shorten the average time per drive considerably.

 

Here's my question: is it possible to pass SATA ports transparently thru to a VM? SpinRite hooks INT 13 to get low-level access to the drive, and I'm concerned that a VM won't provide the level of access SpinRite needs to do its job properly. For example, SpinRite will work on a drive connected via USB, even in a VM, but it won't have access to the drive's SMART data because USB doesn't support it.
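
For what it's worth, one rough way to gauge how much low-level access a guest really gets, short of booting SpinRite itself, is to check whether SMART data is readable from inside the VM. A minimal sketch, assuming a Linux guest with smartmontools installed and the passed-through drive showing up as /dev/sdb (both are assumptions, not details from this thread); it's only a proxy check, not the same thing as SpinRite's INT 13 hook:

import subprocess

# Ask smartmontools for the identity and SMART attribute data of a disk.
# If the hypervisor only exposes a virtualized or USB-style disk, smartctl
# typically reports that SMART is unavailable for the device.
def smart_visible(device="/dev/sdb"):
    result = subprocess.run(
        ["smartctl", "-i", "-A", device],
        capture_output=True, text=True
    )
    output = result.stdout + result.stderr
    print(output)
    # "SMART support is: Enabled" only appears when the guest can actually
    # talk to the drive's SMART interface.
    return "SMART support is: Enabled" in output

if __name__ == "__main__":
    print("SMART reachable from this guest:", smart_visible())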


In Server 2008 R2 with the Hyper-V role you can pass an HDD through, so I would imagine that would work as you envision. My question is: at what point, if at all, does the HDD controller get saturated so that SpinRite slows down because too much data is being transferred?

 

In the end, the average time per drive won't change, unless saturation is an issue, but the total time could be reduced.
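
For the saturation question, a back-of-envelope estimate gives a feel for when concurrent level-4 scans would start competing for the shared controller/DMI link. The throughput figures below are assumptions for illustration, not measurements:

# Back-of-envelope: do N concurrent surface scans saturate the controller?
# All numbers below are assumptions for illustration, not measurements.
drives = 5
per_drive_mb_s = 100          # assumed average sequential read rate of one drive
controller_limit_mb_s = 1000  # assumed usable bandwidth of the shared SATA/DMI link

demand = drives * per_drive_mb_s
print(f"Aggregate demand: {demand} MB/s vs. limit: {controller_limit_mb_s} MB/s")

if demand <= controller_limit_mb_s:
    print("Scans should run at full speed; total time is about one drive's time.")
else:
    slowdown = demand / controller_limit_mb_s
    print(f"Controller-bound: each scan slows by roughly {slowdown:.1f}x.")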



An interesting question for sure. Makes for a pretty interesting experiment, I think.

 

The way I was looking at average time was this: if 1 drive takes 5 days, and I can do 5 drives in the same time, then the average time per drive is 1 day. Of course, it can also be looked at as: 5 drives would normally take 25 days but now only take 5 days. Either way, it's a huge saving... if it works ;)
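
Putting rough numbers on that (5 days per drive, 5 drives, and assuming the controller doesn't saturate):

# Serial vs. parallel wall-clock time for the same batch of drives.
days_per_drive = 5
drive_count = 5

serial_total = days_per_drive * drive_count    # 25 days, one drive after another
parallel_total = days_per_drive                # 5 days, all drives at once (no saturation)
avg_per_drive = parallel_total / drive_count   # 1 day of wall-clock time per drive

print(f"Serial: {serial_total} days, parallel: {parallel_total} days, "
      f"average per drive: {avg_per_drive} day(s)")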


I don't know. I'm not sure just how passed-through the hardware control really is, and it also differs depending on how VT-d compliant your setup is. If you have a setup that is ESXi certified, then you have the best chance of true VT-d. Even then, I just don't know if the INT 13 hook is transparently passed through or if there is HAL interplay from the host.

 

For example, all the drives have to be offline in the host before they can be accessed by the guest. In your situation there probably isn't much interplay, but for a hardware array you have to set up the array in the host before you can pass it through to the guest - which implies that drive control is not completely transparent to the host. Also, I have had challenges with optical drive pass-through, although others have not.
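
For reference, that "offline in the host first" step is normally just a diskpart operation on the Hyper-V host before the physical disk is attached to the guest. A minimal sketch (Python only as a wrapper around diskpart; the disk number 2 is a placeholder, not something from this thread - confirm it against "list disk" and run elevated):

import os
import subprocess
import tempfile

def offline_disk(disk_number: int) -> None:
    # Build a diskpart script that takes the given physical disk offline so
    # the hypervisor can attach it to a guest as a pass-through disk.
    script = f"select disk {disk_number}\noffline disk\n"
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        # diskpart must be run from an elevated prompt on the Windows host.
        subprocess.run(["diskpart", "/s", path], check=True)
    finally:
        os.remove(path)

if __name__ == "__main__":
    offline_disk(2)  # placeholder disk number - verify with "list disk" first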

 

Bottom line - great question, great idea, and I'm sorry but I don't know the answer. I'll admit I'm skeptical.



Never hurts to ask, right? I don't have access to ESXi (that is the non-free, enterprise version, right?), so I will probably have to go with Hyper-V, or with whatever the free edition of VMware is (ESXi is a bit pricey for small home use) :)



 

Okay, I follow you now. I was looking at it as: no matter how you slice it, it is still 5 days per drive. Doing more than one drive at a time just reduces the total days.

 

When you pass a disk through, it shows as offline to the host in Disk Management. The host definitely sees it, but it can't access it. no-control may be able to help here. He seems to be the guy with the most VM experience on the board. Well, outside of wodysweb (I think that is his ID), but I haven't seen him around in a while.


To clarify, I didn't mean go with ESXi - just that using ESXi-certified equipment would improve VT-d compatibility with any host, including Hyper-V.



Ah, gotcha. Hmmm, is that a limited list, or is there generally lots of equipment that's ESXi certified?


Hi Ikon, the list of documented working consumer-class motherboards with ESXi support is small and old:

http://vm-help.com/e...hitebox_HCL.php

 

and I suspect what you're trying to do won't work, on ESXi anyway. That doesn't stop many of us from getting creative to save a buck (or many bucks), say, for personal self-teaching use, on such "whiteboxes". Here's what I've found so far to help explain why it probably won't work, after having done some "unsupported" testing of ESX 4.1 Update 1 recently.

 

The entire LSI RAID adapter PCI device is what I configured VMDirectPath (passthrough) for, and that works fine - see the screenshots at http://tinkertry.com/vmdirectpath. But I can't get granular, that is, make single/individual SATA port assignments.
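
That granularity limit is expected: VT-d passthrough hands over whole PCI devices/functions, and all the SATA ports hang off a single AHCI function, so the smallest unit you can give a guest is the whole controller. As an illustration (on a Linux host with sysfs, not ESXi specifically), the controller - never the individual ports - is what shows up as a passthrough-able PCI function:

import glob

# List PCI functions whose class code marks them as SATA controllers
# (class 0x01 = mass storage, subclass 0x06 = SATA). Each entry is one
# assignable unit; individual SATA ports never appear separately.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(f"{dev}/class") as f:
        class_code = f.read().strip()      # e.g. "0x010601" for an AHCI controller
    if class_code.startswith("0x0106"):
        address = dev.rsplit("/", 1)[-1]   # e.g. "0000:00:1f.2"
        print(f"{address}  SATA controller (class {class_code})")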

 

More info here:

http://tinkertry.com/vzilla

http://tinkertry.com...ikesanddislike/

 

On the Z68's Sandy Bridge SATA ports, similarly, I also can't choose just a single SATA port. In fact, even when I chose "Intel Corporation Cougar Point 6 port SATA AHCI Controller", rebooted ESXi, and then assigned that PCI device to a single virtual machine, I couldn't get visibility of the Intel RST RAID array I had configured in the BIOS - but I didn't try too hard yet either (it's really software RAID, so I'd need to install a fresh Windows 7 with F6 at install to choose the driver for the Intel RAID). Given it's an integral part of the Z68 chipset and not a standalone PCI card, it's quite possible I'd never get that working anyway.

 

I could see individual AHCI devices in normal non-RAID mode attached to the motherboard's SATA ports, where ESX sees the individual drives in a pool, but that's virtualized storage - I seriously doubt SpinRite would run like that.

 

Granted, my brief tests were a couple of months ago, and I plan to retry some of this experimenting once VMware ESXi 5.0 arrives and see how that goes.

 

But, given that ESXi 5.0 will have native USB 3 support, full >2TB drive support, and >2TB volume support (nice for a large RAID array), the importance of passing stuff through to my WHS VMs will likely fade a bit - I don't know yet. I'd prefer to have the versatility to attach USB 3 and RAID storage to any virtual machine. For now, I'm likely sticking with the LSI RAID adapter (which is on VMware's supported storage adapter list) for that pool of storage anyway, especially if I can get good speed with large RAID array caching using SSDs; we'll see:

http://tinkertry.com/goodraidcontrollerswithssdcachingandesxsupport

 

Just my 2 cents worth.

Edited by tinkererguy


Thanks very much. I guess my first observation is that you mention RAID quite a bit. I'm not trying to pass thru any RAID devices, just the raw mobo SATA ports. As you say, you can see the individual AHCI devices in non-RAID mode. I'm just hoping you're wrong about SpinRite not working like that :)

