Category: EMC

Know-how on EMC

EMC Navisphere CLI Command Examples with NaviSecCLI

Navisphere CLI is a command line interface tool for EMC storage system management.

You can use it for storage provisioning and managing array configurations from any one of the managed storage systems on the LAN.

It can also be used to automate the management functions through shell scripts and batch files.
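
As a minimal sketch of that idea (assuming Secure CLI is installed, a security file has already been created for the account running the script, and H1_SPA is a placeholder SP address), a small shell script could collect basic array information into a dated report:

#!/bin/sh
# Hypothetical example: dump agent, LUN and RAID group details to a dated report file.
SP=H1_SPA                                  # SP address or name - adjust for your array
OUT=/var/reports/navi_$(date +%Y%m%d).txt

naviseccli -h $SP getagent  >  "$OUT"      # SP / agent revision details
naviseccli -h $SP getlun    >> "$OUT"      # properties of all LUNs
naviseccli -h $SP getrg     >> "$OUT"      # properties of all RAID groups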

CLI commands for many functions are server based and are provided with the host agent.

The remaining CLI commands are web-based and are provided with the software that runs in storage system service processors (SPs).

Configuring and managing a storage system using Navisphere CLI:

The following steps are involved in configuring and managing the storage system (CX series, AX series) using CLI:

  • Install the Navisphere CLI on the host that is connected to the storage. This host will be used to configure the storage system.
  • Configure the Service Processor (SP) agent on each SP in the storage system.
  • Configure the storage system with the CLI.
  • Configure and manage remote mirrors (the CLI is not the preferred way to manage mirrors).

There are two types of Navisphere CLI:

  1. Classic CLI is the older version and does not support new features, but it will still get typical storage-array jobs done.
  2. Secure CLI is the more secure and preferred interface. It includes all of the Classic CLI commands plus additional features, and provides role-based authentication, audit trails of CLI events, and SSL-based data encryption (see the credentials example below).
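
For example, Secure CLI can authenticate on each command with explicit credentials, or reuse a security file created once per user. The SP name H1_SPA, the admin account, and the password placeholder below are illustrative assumptions, not values from any particular array:

naviseccli -h H1_SPA -User admin -Password <password> -Scope 0 getagent

naviseccli -AddUserSecurity -User admin -Password <password> -Scope 0

Once the security file exists, later commands against the array can omit the -User, -Password, and -Scope switches.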

Navisphere CLI is available for various operating systems, including Windows, Solaris, Linux, AIX, and HP-UX.

The two EMC CLARiiON Navisphere CLI commands are:

  1. naviseccli (Secure CLI) sends storage-system management and configuration requests to a storage system over the LAN.
  2. navicli (Classic CLI) sends storage-system management and configuration requests to an API (application programming interface) on a local or remote server.

In a storage subsystem (CLARiiON, VNX, etc.), it is very important to understand the following IDs:

  • LUN ID – The unique number assigned to a LUN when it is bound. When you bind a LUN, you can select the ID number. If you do not specify a LUN ID, IDs are assigned by default starting at 0, then 1, and so on.
  • Unique ID – Usually refers to storage systems, SPs, HBAs, and switch ports. It is the WWN (World Wide Name) or WWPN (World Wide Port Name).
  • Disk ID – The bus_enclosure_disk address of a disk. Disk ID 000 (or 0_0_0) indicates the first bus or loop, first enclosure, and first disk, and disk ID 100 (1_0_0) indicates the second bus or loop, first enclosure, and first disk (see the example below).
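
As a quick illustration of the bus_enclosure_disk notation (H1_SPA is the same placeholder SP name used in the commands that follow), you can query a single disk by that address:

naviseccli -h H1_SPA getdisk 0_0_0

Running getdisk without an argument lists every disk in the array.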

1. Create RAID Group

The command below shows how to create RAID group 0 from disks 0 through 3 in the Disk Processor Enclosure (DPE).

naviseccli -h H1_SPA createrg 0 0_0_0 0_0_1 0_0_2 0_0_3

In this example, -h specifies the IP address or network name of the targeted SP on the desired storage system. If you omit this switch, the default is localhost.

Since each SP has its own IP address, you must specify the address of the SP you want to target. Also, a new RAID group has no RAID type (RAID 0, 1, 5) until a LUN is bound on it. You can create more RAID groups (1, 2, and so on) using the commands below:

naviseccli -h H1_SPA createrg 1 0_0_4 0_0_5 0_0_6

naviseccli -h H1_SPA createrg 2 0_0_7 0_0_8

This is similar to how you create a RAID group from the Navisphere GUI.

2. Bind LUN on a RAID Group

In the previous example, we created a RAID group, but did not create a LUN with a specific size.

The following example shows how to bind a LUN on a RAID group:

naviseccli -h H1_SPA bind r5 6 -rg 0 -sq gb -cap 50

In this example, we are binding a LUN with LUN number/LUN ID 6 and RAID type 5 on RAID group 0, with a size of 50 GB. -sq indicates the size qualifier (mb or gb). You can also use the -rc (read cache) and -wc (write cache) options, set to 1 or 0, to enable or disable caching, as in the example below.
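
As a variation on the command above (hypothetical LUN ID 7 and RAID group 1; verify that your FLARE release accepts the cache switches on bind), explicitly enabling both read and write cache might look like this:

naviseccli -h H1_SPA bind r5 7 -rg 1 -sq gb -cap 100 -rc 1 -wc 1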

3. Create Storage Group

The next several examples show how to create a storage group and connect a host to it.

First, create a storage group:

naviseccli -h H1_SPA storagegroup -create -gname SGroup_1

4. Assign LUN to Storage Group

In the following example, -hlu is the host LUN number, which is the number the host will see on its end. -alu is the array LUN number, which is the number the storage system uses on its end.

naviseccli -h H1_SPA storagegroup -addhlu -gname SGroup_1 -hlu 12 -alu 5

5. Register the Host

Register the host as shown below by specifying its name. In this example, the host server is elserver1.

naviseccli -h H1_SPA elserver1 register

6. Connect Host to Storage Group

Finally, connect the host to the storage group using the -connecthost option as shown below. Be sure to specify the storage group name appropriately.

naviseccli -h H1_SPA storagegroup -connecthost -host elserver1 -gname SGroup_1

7. View Storage Group Details

Execute the following command to verify the details of an existing storage group.

naviseccli -h H1_SPA storagegroup -list -gname SGroup_1

Once you complete the above steps, your hosts should be able to see the newly provisioned storage.
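
If you want to double-check a LUN from the array side as well, a simple property query works; LUN ID 6 matches the bind example above, and the exact fields printed depend on your FLARE/Navisphere release:

naviseccli -h H1_SPA getlun 6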

8. Expand RAID Group

To expand a RAID group with a new set of disks, use the command shown in the example below.

naviseccli -h H1_SPA chgrg 2 -expand 0_0_9  0_1_0 -lex yes -pri high

This expands the RAID group with ID 2 using the new disks 0_0_9 and 0_1_0, with LUN expansion set to yes and priority set to high.

9. Destroy RAID Group

To remove or destroy a RAID group, use the command below.

naviseccli -h H1_SPA destroyrg 2 0_0_7 0_0_8 0_0_9 0_1_0 -rm yes -pri high

This is similar to how you destroy a RAID group from the Navisphere GUI.

10. Display RAID Group Status

To display the status of the RAID group with ID 2, use the command below.

naviseccli -h H1_SPA getrg 2 -lunlist

11. Destroy Storage Group

To destroy a storage group called SGroup_1, use the following command:

naviseccli -h H1_SPA storagegroup -destroy -gname SGroup_1

12. Copy Data to Hotspare Disk

The naviseccli command initiates the copying of data from a failing disk to an existing hot spare while the original disk is still functioning.

Once the copy is made, the failing disk will be faulted and the hotspare will be activated. When the faulted disk is replaced, the replacement will be copied back from the hot spare.

naviseccli -h H1_SPA copytohotspare 0_0_5 -initiate

13. LUN Migration

LUN migration is used to move data from a source LUN to a destination LUN with improved performance characteristics.

naviseccli migrate -start -source 6 -dest 7 -rate low

The numbers 6 and 7 in the above example are the source and destination LUN IDs.

To display the current migration sessions and their properties:

naviseccli migrate -list

14. Create MetaLUN

A metaLUN is a type of LUN whose maximum capacity is the combined capacity of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN into a larger unit called a metaLUN. Like a LUN, a metaLUN can belong to a storage group and can be used for SnapView, MirrorView, and SAN Copy sessions.

You can expand a LUN or metaLUN in two ways — stripe expansion or concatenate expansion.

A stripe expansion takes the existing data on the LUN or metaLUN, and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding.

The stripe expansion may take a long time to complete. A concatenate expansion creates a new metaLUN component that includes the new LUNs and appends this component to the end of the existing LUN or metaLUN. There is no restriping of data between the original storage and the new LUNs, so the concatenate operation completes immediately.

To create a metaLUN or expand an existing one, use the command below.

naviseccli -h H1_SPA metalun -expand -base 5 -lun 2 -type c -name newMetaLUN -sq gb -cap 50

This creates a new metaLUN named "newMetaLUN" with metaLUN ID 5, using LUN ID 2, as a 50 GB concatenated expansion (see the stripe variation below).
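
If you wanted a stripe expansion instead, the same command shape should take -type s in place of -type c; treat this as an assumption to verify against the naviseccli reference for your release, and note that the LUN ID 3 used here is purely illustrative:

naviseccli -h H1_SPA metalun -expand -base 5 -lun 3 -type s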

15. View MetaLUN Details

To display the information about MetaLUNs, do the following:

naviseccli -h H1_SPA metalun -info

The following command will destroy a specific metaLUN. In this example, it will destroy metaLUN number 5.

naviseccli -h H1_SPA metalun -destroy -metalun 5

How do I deal with 712d841a alerts generated by custom templates?

We received many of the alerts below this weekend because FAST VP relocations failed when less than 10% free space was left in the pool.

The alerts read like "FAST VP relocations fail with error code 712d841a with extended code 0xe12d8709."

Time Stamp: 05/22/12 10:39:04 (GMT)
Event Number: 712d841a
Severity: Error
Host: SPB
Storage Array: FCN00120xxxxxx
SP: N/A
Device: N/A
Description: Internal Information Only. Could not complete operation Relocate 0xB00000D5A allocate slice failed because 0xe12d8709.

00000400 03002c00 d3040000 1a842de1 1a842de1 00000000 00000000 00000000 00000000 00000000 712d841a

SAN : EMC VNX5300 / VNX Operating Environment (OE) 7.x

Solution:

The following steps fixed the email alert issue:

  1. Stop using the custom template.
  2. Export the template.
  3. Modify the template manually with a text editor and replace this block:
    Event
    {
    Min 0x712d8000
    Max 0x712dbfff
    Threshold 1
    Interval 1
    }
    with these two blocks, which exclude event 0x712d841a from the alert range:
    Event
    {
    Min 0x712d8000
    Max 0x712d8419
    Threshold 1
    Interval 1
    }
    Event
    {
    Min 0x712d841b
    Max 0x712dbfff
    Threshold 1
    Interval 1
    }
  4. Delete the old template.
  5. Import the new template.

Apply the new template.

Adding a disk to a Windows 2008 Failover Cluster using cluster.exe commands

This isn't specific to multi-site clustering, but I've certainly had to use this many times when adding devices to my multi-site clusters. Adding disks to a multi-site Windows 2008 cluster is not as easy as it should be. In Windows 2008, Microsoft added some new "logic" when adding disk resources to a cluster: when you attempt to "Add a disk" through the Cluster Administrator GUI, the cluster does a quick check on the available disks to ensure they are present on all nodes of the cluster before presenting them as available disks in the GUI. This can be bad for geo-clusters, because the disks are unlikely to be read/write enabled on all sites, causing the cluster GUI to display an error message.

You may also experience this same behavior when adding a disk resource to a 2008 cluster that you want available to only a single node or a subset of nodes. This issue could also occur if you deleted a cluster disk resource from your multi-site cluster and attempted to add it back in through the cluster GUI. Because of this behavior, we need to work a little harder to add a disk into a cluster in these situations. To work around this issue, you have a couple of options. The first option is to evict the offending node(s) from the cluster and then add the storage using the Cluster Administrator GUI. Yes, this might be a bit painful for some, but if your environment can handle evicting and re-adding nodes without impact, this is probably the easiest way to get these disks into the cluster.

After evicting the remote nodes, the cluster only checks the disks from your local storage system on the local node and sees that the disks are viable for cluster use. Now, when you attempt to add a disk in the cluster GUI, the error message no longer appears and you are presented with the option to add the disks into the cluster. Once you've added the disks, you re-join the other nodes to the cluster.

If evicting a node isn’t an option, you can manually add the disk into the cluster using cluster.exe commands. I wrote a little MSKB about how to do this for Windows 2000/2003 in MSKB 555312, and there are some slight differences in Windows 2008. Microsoft has renamed just about all of the cluster’s physical disk private properties for Longhorn so my KB isn’t quite accurate for 2008. To manually add a disk using cluster.exe in Windows 2008, you would do the following:

First, we create the empty resource with no private properties…this is the same first step as documented in 555312:

C:\>cluster res "Disk Z:" /create /type:"Physical Disk" /group:"Available Storage"

This creates a resource of the Physical Disk type in the group named "Available Storage" with no private properties. Next, my favorite secret hidden private property from 2000/2003, Drive, has been renamed in Windows 2008. It is now called DiskPath and is no longer a hidden property, so it isn't top secret anymore. If you look at the private properties of a physical disk resource you'll see:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource  Name             Value
-- --------  ---------------  -----------------------
D  Disk Z:   DiskIdType       5000 (0x1388)
D  Disk Z:   DiskSignature    0 (0x0)
S  Disk Z:   DiskIdGuid
D  Disk Z:   DiskRunChkDsk    0 (0x0)
B  Disk Z:   DiskUniqueIds    ... (0 bytes)
B  Disk Z:   DiskVolumeInfo   ... (0 bytes)
D  Disk Z:   DiskArbInterval  3 (0x3)
S  Disk Z:   DiskPath
D  Disk Z:   DiskReload       0 (0x0)
D  Disk Z:   MaintenanceMode  0 (0x0)
D  Disk Z:   MaxIoLatency     1000 (0x3e8)

So now I can use this DiskPath value and Windows will magically figure out all of the other gory private properties for my disk using the mount point I specify in the DiskPath parameter. Notice in the above output the DiskSignature, DiskUniqueIds and DiskVolumeInfo fields are empty after creating the “empty” physical drive resource. Now when I use the DiskPath parameter, Windows will magically figure out these fields based on the mount point info provided. I’ve mounted this disk as my Z: drive, so here’s my command using the DiskPath parameter:

C:\>cluster res "Disk Z:" /priv DiskPath="Z:"

At this point, you would bring the disk online in the cluster, which fills out the rest of the private property values for the disk.
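
Bringing the resource online can be done from the same prompt with the usual cluster.exe online switch (the resource name matches the example above):

C:\>cluster res "Disk Z:" /online

After bringing the disk online, looking at the resource's private properties again shows: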

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource  Name             Value
-- --------  ---------------  -----------------------
D  Disk Z:   DiskIdType       0 (0x0)
D  Disk Z:   DiskSignature    4198681706 (0xfa42cc6a)
S  Disk Z:   DiskIdGuid
D  Disk Z:   DiskRunChkDsk    0 (0x0)
B  Disk Z:   DiskUniqueIds    10 00 00 00 ... (132 bytes)
B  Disk Z:   DiskVolumeInfo   01 00 00 00 ... (48 bytes)
D  Disk Z:   DiskArbInterval  3 (0x3)
S  Disk Z:   DiskPath
D  Disk Z:   DiskReload       0 (0x0)
D  Disk Z:   MaintenanceMode  0 (0x0)
D  Disk Z:   MaxIoLatency     1000 (0x3e8)

Notice that the DiskSignature, DiskUniqueIds and DiskVolumeInfo are now filled in for this disk. You’ll also notice that the DiskPath value has automatically been cleared…not sure why this occurs, but it seems that after the DiskPath value has resolved the other properties, the DiskPath is cleared. If you check the resource properties before bringing the disk online, you’ll see the DiskPath value set, but after bringing the cluster resource online, the DiskPath value is cleared and the signature, ID and volume fields are populated.

LUNs presented from a CLARiiON array to a host are read only

Knowledgebase Solution   

Environment:  Product: CLARiiON
Environment:  EMC SW: Navisphere Manager
Environment:  EMC SW: Replication Manager
Problem:  LUNs are read only when allocated to a host from Navisphere.
Problem:  LUNs presented to host are read only.
Change:  Customer added the LUN to a Storage Group from Navisphere.
Root Cause:  When a Replication Manager job runs, it leaves attributes on a LUN. If the LUN is mounted again using Replication Manager, it will be correctly presented to the host. If the LUN is mounted using Navisphere, the attributes are not cleared and may present problems to hosts accessing the LUNs.

Fix:  Follow these steps:
Run diskpart from a command prompt:

C:\> diskpart
DISKPART> select disk 4

(Select the appropriate disk, which in this case is 4.)
 
DISKPART> detail disk

PowerDevice by PowerPath
Disk ID: 9F0B09CD
Type : FIBRE
Bus : 0
Target : 1
LUN ID : 3

Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
----------  ---  -----------  ----  ---------  -----  -------  ----
Volume 5    M    SQL 2005 MD  NTFS  Partition  10 GB  Healthy
 
Verify the Disk ID is set correctly. Note the Volume number as well and use it in the following command:

DISKPART> select volume 5
DISKPART> detail volume

Disk ###  Status  Size   Free  Dyn  Gpt
--------  ------  -----  ----  ---  ---
* Disk 4  Online  10 GB  0 B

Readonly : Yes
Hidden : No
No Default Drive Letter : Yes
Shadow Copy : Yes
 
If any of the Read Only, Hidden, or No Default Drive Letter attributes are set to Yes, clear them with the following command:

DISKPART> att vol clear readonly hidden nodefaultdriveletter

Volume attributes cleared successfully.
DISKPART>exit
 
Notes:  The att vol clear readonly hidden nodefaultdriveletter command clears the attributes set for the LUN presented under Windows.
Notes:  The drive needed to be rescanned in Device Manager before it was presented back to the host (a diskpart-based alternative is sketched below).
Notes:  The LUNs should be added to a Storage Group using Replication Manager.
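
If you prefer to stay inside diskpart instead of switching to Device Manager, a rescan can also be triggered from the same session; this is a generic diskpart capability rather than a step from the original KB:

DISKPART> rescan

The rescan re-enumerates the buses and should pick up the volume with its cleared attributes.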

Enabling jumbo frames in VMware ESX 4

First, create the vSwitch and change its MTU to 9000. In this example, vSwitch9 is the vSwitch name; replace it with your own.

esxcfg-vswitch -a vSwitch9

Then, set the MTU of the vSwitch.

esxcfg-vswitch -m 9000 vSwitch9

esxcfg-vswitch -l will list all the vSwitches so you can confirm your changes and settings.

Switch Name  Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch9     64         1           64                9000

As you can see, the Uplinks field is blank, so you must add a physical NIC to the virtual switch, but first we’ll do a few other things. iSCSI access is controlled by a VMkernel interface and assigned to a port group on the vSwitch.

To create the portgroup:

esxcfg-vswitch -A <portgroup_name> vSwitch9

Then create the VMkernel interface:

esxcfg-vmknic -a -i <ip_address> -n <netmask> -m 9000 <portgroup_name>

If you named the port group "iSCSI" and your IP is 172.16.0.1/24, the command would look like this:

esxcfg-vmknic -a -i 172.16.0.1 -n 255.255.255.0 -m 9000 iSCSI

esxcfg-vmknic -l will confirm your settings.

The last step is to add a physical NIC to the vSwitch. This can be done via the GUI, but we are having fun here, so let’s do it with commands.

esxcfg-vswitch -L <vmnic_name> vSwitch9

You’re done! You can now refresh the Networking portion of the Configuration tab in the vSphere Client to see the new virtual switch components. Keep in mind that you must enable jumbo frames on your physical switch for this to work.
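
As an optional sanity check (assuming vmkping is available in your ESX 4 service console; the target IP below is just an example on the same subnet as earlier), you can send a large, non-fragmenting ping from the VMkernel interface to confirm jumbo frames work end to end:

vmkping -d -s 8972 172.16.0.2

The 8972-byte payload plus IP and ICMP headers fills a 9000-byte frame, so if this ping fails while a normal vmkping succeeds, something in the path is not passing jumbo frames.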

How to Backup NAS (Network Attached Storage) EMC Shares in Backup Exec without the NDMP Option

Problem: How to back up NAS (Network Attached Storage) EMC shares in Backup Exec without the NDMP Option.

Solution: To be able to back up NAS shares without the NDMP (Network Data Management Protocol) Option with an EMC filer, disable NDMP on the EMC Data Mover by removing the following line from the /nas/server/slot_x/netd file:

ndmp port=10000

To activate the change afterward, reboot that Data Mover.
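
For example, assuming the Data Mover lives in slot 2 and you are working from the Control Station (adjust slot_x and the server name for your system), you could confirm the line is gone and then reboot that Data Mover:

grep ndmp /nas/server/slot_2/netd

server_cpu server_2 -reboot now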

EMC Knowledge Base ID: emc158412