ID #1101

vmfsRawDeviceMap

Example for vmfsRawDeviceMap:


# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=0dd6c4a4
parentCID=ffffffff
createType="vmfsRawDeviceMap"

# Extent description
RW 156301488 VMFSRDM "iscsi-lun-rdm.vmdk"

# The Disk Data Base
#DDB

ddb.virtualHWVersion = "7"
ddb.longContentID = "32d4a0950fb635a71add77e10dd6c4a4"
ddb.uuid = "60 00 C2 93 7d 93 ff 85-b2 c0 9a 49 9b aa 70 88"
ddb.geometry.cylinders = "9729"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic" 
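The RW extent line above gives the disk size in 512-byte sectors. As a quick sanity check (a sketch, assuming the standard 512-byte sector size), the capacity can be related to the CHS geometry in the DDB section:

```shell
# Size of the RW extent from the descriptor, in 512-byte sectors
sectors=156301488

# Total capacity, assuming 512-byte sectors: ~74.5 GiB
echo $(( sectors * 512 ))

# The CHS geometry (cylinders * heads * sectors-per-track) rounds down
# to whole cylinders, so it is slightly smaller than the extent size
echo $(( 9729 * 255 * 63 ))
```

The small difference between the two numbers is normal: the geometry only covers whole cylinders, while the extent records the exact sector count of the LUN.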


note: optional parameters (shown in green in the original color-coded listing) can be skipped
note: to change the virtual hardware version, simply edit the ddb.virtualHWVersion value and remove any parameters that do not exist in this example

"vmfsRawDeviceMap"


physical disk used by ESX
compatibilty mode: physical


Required files:
test.vmdk
test-rdm.vmdk

The -rdm.vmdk file is a link to the physical disk.
In the datastore browser or WinSCP it appears to have the same
size as the capacity of the physical device. In the virtual hardware editor of ESX this type is
listed as "Mapped Raw LUN".

To create this type, click "Add Disk", select the LUN,
and choose "physical" compatibility mode.
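The same mapping can also be created from the service console with vmkfstools. A sketch, where the device path and the datastore path are placeholders you must replace with your own values:

```shell
# Create a physical-compatibility RDM pointer file (-z) on a VMFS datastore.
# The vml.* device path and the target .vmdk path below are placeholders,
# not values from this FAQ's example.
vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX \
    /vmfs/volumes/datastore1/testvm/test-rdm.vmdk
```

Using -r instead of -z would create a virtual-compatibility RDM; the descriptor's createType would then differ from the physical one shown above.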

Tags: vmfsRawDeviceMap


Comment of ypdxAtVPaHw:
I will say, since I wrote this post I've been running over 20 production VMs on NFS on my NetApps and another 20 on Fibre Channel. A couple of things here: NFS runs great and the de-duplication has been very useful. It's also been nice for running systems off an ESX server that is not within fibre distance of my SAN. A couple of cons:

1. NFS doesn't transfer as quickly during my non-disruptive filer upgrades. It works, the systems stall for a short period during the cluster failover process, and the VMs are happy, but it does give the system a short period (maybe 10 - 30 seconds) of downtime. Amazingly, our streaming media VM only stalled on the non-buffered video.

2. It's easy to oversubscribe your NFS share if you're using de-dup and you provision too many systems too quickly. We filled up a volume with systems and de-dup was saving over 80%, but once changes came to the individual VMs we filled up the volume and eventually caused VMs to fail to start. I'd recommend only using up to 60% of the de-dup savings, as it will eventually catch up with you.
Added at: 2012-04-01 03:41