ESXi RDM problem


k-smirnov

Hello, I need the community's help!

 

 

My config is an HP MicroServer Gen8, E3-1220L v2, 16 GB RAM, 4x4 TB WD40EFRX, and a 180 GB Intel SSD 530 in the ODD bay.
The B120i is in AHCI mode because I want to use RDM for all four 4 TB disks in OpenMediaVault.
I successfully installed VMware-ESXi-6.0.0-Update1 for HP servers and chose the SSD for the datastore.
I downgraded the hpvsa driver to scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.
Then I made four RDM .vmdk's for the 4 TB disks (wd1.vmdk-wd4.vmdk) and put them in DataStore/RDM.
I made a VM (Debian 6 x64 for OpenMediaVault) and attached the new RDM disks to a new SCSI controller.
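(The mapping files were created with vmkfstools, roughly like this -- the naa.* device identifiers below are placeholders, not my real ones, and the datastore path is as described above:)

# placeholder device IDs -- list the real ones first with: ls -l /vmfs/devices/disks/
vmkfstools -z /vmfs/devices/disks/naa.<disk1-id> /vmfs/volumes/DataStore/RDM/wd1.vmdk
vmkfstools -z /vmfs/devices/disks/naa.<disk2-id> /vmfs/volumes/DataStore/RDM/wd2.vmdk
vmkfstools -z /vmfs/devices/disks/naa.<disk3-id> /vmfs/volumes/DataStore/RDM/wd3.vmdk
vmkfstools -z /vmfs/devices/disks/naa.<disk4-id> /vmfs/volumes/DataStore/RDM/wd4.vmdk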
But the VM won't start; it fails with this message:

 

"Failed to start the virtual machine.
Module DiskEarly power on failed.
Cannot open the disk '/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3.vmdk' or one of the snapshot disks it depends on."

 

 

The conflicting disk can be any of the four RDM disks, and after some time it can change to any of the other three.
There are no .lck files in the VM folder.

 

Part of the VM log:

 

 

2015-11-02T20:30:55.771Z| Worker#3| I120: OBJLIB-FILEBE : FileBEOpen: can't open '/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3-rdmp.vmdk' : Failed to lock the file (262146).
2015-11-02T20:30:55.771Z| Worker#3| I120: DISKLIB-VMFS : "/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3-rdmp.vmdk" : failed to open (Failed to lock the file): ObjLib_Open failed. Type 10
2015-11-02T20:30:55.771Z| Worker#3| I120: DISKLIB-LINK : "/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3.vmdk" : failed to open (Failed to lock the file).
2015-11-02T20:30:55.771Z| Worker#3| I120: DISKLIB-CHAIN : "/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3.vmdk" : failed to open (Failed to lock the file).
2015-11-02T20:30:55.771Z| Worker#3| I120: DISKLIB-LIB : Failed to open '/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3.vmdk' with flags 0xa Failed to lock the file (16392).
2015-11-02T20:30:55.771Z| Worker#3| I120: DISK: Cannot open disk "/vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3.vmdk": Failed to lock the file (16392).

 

I ran lsof and saw the processes hpHelper-main and smartd running.
I think they are the cause of the conflict, because the disk associated with these processes is the wd3 RDM (you can see it in the screenshot).
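In case it helps anyone reproduce the check, this is roughly how the lock can be inspected from the ESXi shell (the path is the one from my log above):

# vmkfstools -D prints the lock mode and owner for a file on VMFS:
vmkfstools -D /vmfs/volumes/562fad39-ee2c7a4c-9be4-d0bf9c461484/RDM/wd3-rdmp.vmdk
# lsof, filtered for the affected disk, is how hpHelper-main and smartd showed up:
lsof | grep wd3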

 

 

Does anybody know how I can resolve this problem so that I can use all four RDM disks?
Sorry for my English ((

 

[Screenshot attached]

 


How did you create the RDMs? Here is how I created mine (2 x 3TB and 1 x 6TB), and they work flawlessly with my Windows 2012 R2 VM:

 

List attached disks:
ls -l /vmfs/devices/disks/
    e.g. "naa.600508b1001c47a3fc4a3a1322803d20"
 
Create RDM pass-through:
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c47a3fc4a3a1322803d20 /vmfs/volumes/local-CrucialC300/<VMNAME>/rdm_Data.vmdk
 
Attach existing virtual disk to the VM.
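For reference, after attaching you should end up with entries roughly like these in the VM's .vmx (the controller number and type here are just an example, not necessarily what your VM uses):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "rdm_Data.vmdk"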

 

EDIT #1:

 

Ah, I think you are running in AHCI mode. I recommend you switch to RAID mode; you will get better temperatures (my fan spins at 6% -- it is dead silent). Just create each disk as a single RAID 0 array (there is even a wizard that will create them all for you in one hit). You also get a much better queue depth in RAID mode. I tested both, and RAID mode gives better performance; the lower fan speed is the absolute icing on the cake.
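If you prefer the CLI over the wizard, creating one single-disk RAID 0 logical drive per physical disk would look roughly like this -- the 1I:1:x bay addresses are assumptions, so verify yours with the "pd all show" commands in EDIT #2 below:

# bay addresses below are placeholders -- check with: ctrl slot=0 pd all show status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:4 raid=0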


EDIT #2:

 

I've also collected a list of CLI commands to manage the array controller (assuming you are using the HP drivers and tools installed inside ESXi):

 

HP SmartArray CLI commands on ESXi
 
Show configuration
/opt/hp/hpssacli/bin/hpssacli ctrl all show config
 
Controller status
/opt/hp/hpssacli/bin/hpssacli ctrl all show status
 
Show detailed controller information for all controllers
/opt/hp/hpssacli/bin/hpssacli ctrl all show detail
 
Show detailed controller information for controller in slot 0
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 show detail
 
Rescan for New Devices
/opt/hp/hpssacli/bin/hpssacli rescan
 
Physical disk status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show status
 
Show detailed physical disk information
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd all show detail
 
Logical disk status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld all show status
 
View Detailed Logical Drive Status
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 show
 
Enable Drive Write Cache
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify dwc=enable forced
 
Disable Drive Write Cache
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify dwc=disable forced
 
Create New RAID 0 Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
 
Create New RAID 1 Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
 
Create New RAID 5 Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,2I:1:6,2I:1:7,2I:1:8 raid=5
 
Delete Logical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 delete
 
Add New Physical Drive to Logical Volume
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 add drives=2I:1:6,2I:1:7
 
Add Spare Disks
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 array all add spares=2I:1:6,2I:1:7
 
Erase Physical Drive
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 pd 2I:1:6 modify erase
 
Turn on Blink Physical Disk LED
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 modify led=on
 
Turn off Blink Physical Disk LED
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 modify led=off
 
Modify smart array cache read and write ratio (cacheratio=readratio/writeratio)
/opt/hp/hpssacli/bin/hpssacli ctrl slot=0 modify cacheratio=100/0
Edited by rotor

Hi,

try this:

- click on the SCSI controller in the VM settings and make sure SCSI bus sharing is set to none

- if you have no .lck files, delete the *-ctk.vmdk files instead (or, better, move them ;-) ) -- see the sketch below
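From the ESXi shell that could look something like this (the datastore and folder paths are just examples -- use wherever your .vmx and RDM descriptors actually live):

# confirm SCSI bus sharing in the .vmx -- if present, it should be "none" (none is also the default):
grep -i sharedBus "/vmfs/volumes/DataStore/<VMNAME>/<VMNAME>.vmx"
# move (don't delete) the change-tracking files out of the folder holding the RDM descriptors:
mkdir /vmfs/volumes/DataStore/RDM/ctk-backup
mv /vmfs/volumes/DataStore/RDM/*-ctk.vmdk /vmfs/volumes/DataStore/RDM/ctk-backup/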

Edited by danyxp

I'm in the same boat as you.

I have no problem creating and adding RDM disks to a VM, but once in a while a random disk fails to lock, so the VM won't start. It will start, though, if I remove that disk (not delete it) from the VM.

So I started testing the latest update of each HP-customized ESXi image: 6.0 U1, 5.5 U3, 5.1 U3, in that order. Still failures!

Then I stumbled upon this post: VM guest will not boot with pass through device enabled after upgrading from 5.1 to 5.5

It's about PCI passthrough of a network or storage device not working on recent versions (GPU passthrough still works).

I thought it would apply to our RDM passthrough too, so I'm testing the latest known version without that restriction: VMware-ESXi-5.1.0-Update1-1065491-HP-5.61.2-Sep2013.iso

So far not a single failure after a couple of test reboots.

Can you try this ISO and report back, please? Perhaps I was just lucky this time with that very old version, or HP let us down by restricting us to GPU passthrough only, as that article suggests.


It's about PCI passthrough of a network or storage device not working on recent versions (GPU passthrough still works).

I thought it would apply to our RDM passthrough too, so I'm testing the latest known version without that restriction: VMware-ESXi-5.1.0-Update1-1065491-HP-5.61.2-Sep2013.iso

RDM has nothing to do with DirectPath I/O. The article you found is not related to your issue. If downgrading fixed it for you, then it was something else.

 

DirectPath I/O for all devices on a ProLiant server is supported in ESXi 5.5 Patch 4 and in all versions of ESXi 6.0.


Thanks for clarifying this. Still, I don't understand how I solved the issue. I didn't change anything in the hardware or software configuration except ESXi on the microSD card. On every attempt I formatted all four of my SATA drives and the SSD holding the datastore for the VMs.

Edited by heliox

Thanks for clarifying this. Still, I don't understand how I solved the issue. I didn't change anything in the hardware or software configuration except ESXi on the microSD card. On every attempt I formatted all four of my SATA drives and the SSD holding the datastore for the VMs.

RDM is just one of those features that is very quirky under ESXi, in my opinion. The ZFS community doesn't like RDM because of all the quirks and bugs. If you can get RDM to work, don't mess with the configuration again. It's similar to USB device passthrough in ESXi. If you've got a VT-d-capable CPU and enough slots, just passing through the entire controller is much better, but you do have some restrictions, such as the inability to suspend the VM, etc.
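As a rough starting point for whole-controller passthrough (assuming you have shell access; the actual enable step is done in the vSphere client under the host's advanced/DirectPath I/O configuration), you can identify the controller's PCI address like this:

# list PCI devices and look for the SATA / Smart Array controller entry:
esxcli hardware pci list
# a shorter overview of the same information:
lspci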
