Adding a disk to a Windows 2008 Failover Cluster using cluster.exe commands

This isn’t specific to multi-site clustering, but I’ve certainly had to use this many times when adding devices to my multi-site clusters. Adding disks to a multi-site Windows 2008 cluster is not as easy as it should be. In Windows 2008, Microsoft added some new “logic” around adding disk resources to a cluster: when you attempt to “Add a disk” through the Cluster Administrator GUI, the cluster does a quick check on the available disks to ensure they are present on all nodes of the cluster before offering them as available disks in the GUI. This can be bad for geo-clusters, where the disks are unlikely to be read/write enabled at all sites, so the cluster GUI displays an error message instead of listing the disks.

You may also run into this same behavior when adding a disk resource to a 2008 cluster that you only want available to a single node or a subset of nodes, or if you deleted a cluster disk resource from your multi-site cluster and attempted to add it back in through the cluster GUI. Because of this behavior, we need to work a little harder to add a disk into the cluster in these situations. To work around this issue, you have a couple of options. The first option is to evict the offending node(s) from the cluster and then add the storage using the Cluster Administrator GUI. Yes, this might be a bit painful for some, but if your environment can handle evicting and re-adding nodes without impact, this is probably the easiest way to get these disks into the cluster.
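If you go this route, the evict itself can also be done from the command line; NODE2 here is just a stand-in for whatever your remote node is named:

C:\>cluster node NODE2 /evict

Re-joining the node afterwards is done through the Add Node wizard in the cluster GUI.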

After evicting the remote nodes, the cluster only checks the disks from your local storage system on the local node and sees that the disks are viable for cluster use. Now when you attempt to add a disk through the cluster GUI, the error message no longer appears and you are presented with the option to add the disks into the cluster. Once you’ve added the disks, you would then re-join the other nodes to the cluster.

If evicting a node isn’t an option, you can manually add the disk into the cluster using cluster.exe commands. I documented how to do this for Windows 2000/2003 in MSKB 555312, but there are some slight differences in Windows 2008: Microsoft renamed just about all of the cluster’s physical disk private properties for Longhorn, so my KB isn’t quite accurate for 2008. To manually add a disk using cluster.exe in Windows 2008, you would do the following:

First, we create the empty resource with no private properties…this is the same first step as documented in 555312:

C:\>cluster res "Disk Z:" /create /type:"Physical Disk" /group:"Available Storage"

This creates a resource of the Physical Disk type in the group named “Available Storage” with no private properties. Next up is my favorite secret hidden private property from 2000/2003, Drive, which has been renamed to DiskPath in Windows 2008. It is no longer a hidden property, so it isn’t top secret anymore. If you look at the private properties of a physical disk resource you’ll see:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk Z:              DiskIdType                     5000 (0x1388)
D  Disk Z:              DiskSignature                  0 (0x0)
S  Disk Z:              DiskIdGuid
D  Disk Z:              DiskRunChkDsk                  0 (0x0)
B  Disk Z:              DiskUniqueIds                  ... (0 bytes)
B  Disk Z:              DiskVolumeInfo                 ... (0 bytes)
D  Disk Z:              DiskArbInterval                3 (0x3)
S  Disk Z:              DiskPath
D  Disk Z:              DiskReload                     0 (0x0)
D  Disk Z:              MaintenanceMode                0 (0x0)
D  Disk Z:              MaxIoLatency                   1000 (0x3e8)
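One quick sanity check before going further, since DiskPath resolves everything from the mount point: make sure the volume really is mounted at the drive letter you intend to use. The standard mountvol tool will confirm it (Z: here is just the drive letter used in this example):

C:\>mountvol Z: /L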

So now I can set this DiskPath value, and Windows will magically figure out all of the other gory private properties for my disk based on the mount point I specify. Notice in the output above that the DiskSignature, DiskUniqueIds and DiskVolumeInfo fields are empty after creating the “empty” physical disk resource; DiskPath is what fills them in. I’ve mounted this disk as my Z: drive, so here’s my command using the DiskPath parameter:

C:\>cluster res "Disk Z:" /priv DiskPath="Z:"

At this point, you bring the disk online in the cluster, which fills out the rest of the private property values for the disk. You can do that from the same prompt:
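C:\>cluster res "Disk Z:" /online

After bringing the disk online, when you look at the resource’s private properties, it shows: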

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource             Name                           Value
-- -------------------- ------------------------------ -----------------------
D  Disk Z:              DiskIdType                     0 (0x0)
D  Disk Z:              DiskSignature                  4198681706 (0xfa42cc6a)
S  Disk Z:              DiskIdGuid
D  Disk Z:              DiskRunChkDsk                  0 (0x0)
B  Disk Z:              DiskUniqueIds                  10 00 00 00 ... (132 bytes)
B  Disk Z:              DiskVolumeInfo                 01 00 00 00 ... (48 bytes)
D  Disk Z:              DiskArbInterval                3 (0x3)
S  Disk Z:              DiskPath
D  Disk Z:              DiskReload                     0 (0x0)
D  Disk Z:              MaintenanceMode                0 (0x0)
D  Disk Z:              MaxIoLatency                   1000 (0x3e8)

Notice that DiskSignature, DiskUniqueIds and DiskVolumeInfo are now filled in for this disk. You’ll also notice that the DiskPath value has automatically been cleared. I’m not sure why this occurs, but it seems that once DiskPath has resolved the other properties, it is cleared. If you check the resource’s properties before bringing the disk online, you’ll see the DiskPath value set; after bringing the resource online, DiskPath is cleared and the signature, unique ID and volume info fields are populated.
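One last note for the subset-of-nodes scenario mentioned earlier: since the disk went in manually, nothing has restricted which nodes the cluster will try to bring it online on. You can trim the possible owners list from the same prompt; NODE3 here is just a stand-in for a node that can’t see the disk:

C:\>cluster res "Disk Z:" /listowners
C:\>cluster res "Disk Z:" /removeowner:NODE3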
