Sun Microsystems, Inc.  Sun System Handbook - ISO 4.1 October 2012 Internal/Partner Edition

Asset ID: 1-71-1397928.1
Update Date:2012-08-27
Keywords:

Solution Type  Technical Instruction

Solution  1397928.1 :   Pillar Axiom: How To Perform Fiber Channel Pathing Verification for VMware ESX Server 3.x to Eliminate Non Optimized Access to Axiom Storage  


Related Items
  • Pillar Axiom 600 Storage System
  • Pillar Axiom 500 Storage System
Related Categories
  • PLA-Support>Sun Systems>DISK>Pillar Axiom>SN-DK: Ax600
  • .Old GCS Categories>Sun Microsystems>Storage - Disk>Pillar Axiom




In this Document
Goal
Fix


Applies to:

Pillar Axiom 600 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Pillar Axiom 500 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Information in this document applies to any platform.

Goal


A procedure to verify that ESX server node communication is not causing non-optimized access to the Axiom.

Hardware and Software Versions

Hardware              Software
Axiom 500/600         All
VMware ESX Server     3.x

Fix


The following articles are related to Fiber Channel Pathing Verification for VMware ESX Server:

An explanation of non-optimized access
Prerequisites

A host with network access to the Axiom GUI.
Console access to all ESX server nodes, or VI Client access to the server(s).
Notes

This action plan is non-disruptive to data access.

It is not advisable to modify any ESX FC communication paths without an adequate understanding of the possible disruption. This procedure is for verification purposes only.
Identify all LUNs associated with ESX Server nodes


Log into the Axiom GUI.
Click on the Storage tab at the top of the window.
On the left-hand side of the Storage window, under SAN, click Hosts.
Click on the first ESX server host in the list presented. The Modify Host Information sub-window should appear.
In the sub-window, click the LUN Connections tab near the bottom.
Record the names and LUN numbers of all LUNs listed.
Click Cancel.
Repeat this section for all remaining ESX server nodes.
Identify the owning node for all LUNs associated with ESX nodes

In the Storage screen, on the left-hand side under SAN, click LUNs.
Click on the name of the first LUN recorded in the first section. The Modify LUN sub-window should open.
In the sub-window, under the LUN Assignment section, record the value of Current Slammer Location.
Click Cancel.
Repeat this section for all remaining LUNs recorded in the last section.
Verify pathing configuration for all ESX nodes

NOTE: This process can be accomplished through either CLI or VI Client application

CLI Method:

1.       On an ESX server node console, execute the following command:
i.         esxcfg-mpath
2.       This command will produce output similar to the following:
i.  Disk vmhba1:2:6 /dev/sds (191624MB) has 4 paths and policy of Fixed
    FC 28:0.0 500110a00017b4b0<->2100000b08043030 vmhba1:2:6 On active preferred
    FC 28:0.0 500110a00017b4b0<->2400000b08043030 vmhba1:3:6 On
    FC 28:0.1 500110a00017b4b2<->2300000b08043030 vmhba2:2:6 On
    FC 28:0.1 500110a00017b4b2<->2200000b08043030 vmhba2:3:6 On

   Disk vmhba1:2:7 /dev/sdt (271424MB) has 4 paths and policy of Fixed
    FC 28:0.0 500110a00017b4b0<->2100000b08043030 vmhba1:2:7 On active preferred
    FC 28:0.0 500110a00017b4b0<->2400000b08043030 vmhba1:3:7 On
    FC 28:0.1 500110a00017b4b2<->2300000b08043030 vmhba2:2:7 On
    FC 28:0.1 500110a00017b4b2<->2200000b08043030 vmhba2:3:7 On

   Disk vmhba1:2:8 /dev/sdu (293999MB) has 4 paths and policy of Fixed
   FC 28:0.0 500110a00017b4b0<->2100000b08043030 vmhba1:2:8 On active preferred
   FC 28:0.0 500110a00017b4b0<->2400000b08043030 vmhba1:3:8 On
   FC 28:0.1 500110a00017b4b2<->2300000b08043030 vmhba2:2:8 On
   FC 28:0.1 500110a00017b4b2<->2200000b08043030 vmhba2:3:8 On
3.       Verify that the pathing policy for all LUNs is set to Fixed.
4.       Using information recorded in previous steps, match each LUN number of a SCSI device to its appropriate LUN name on the Axiom and verify that the Active/Preferred path is set to address a WWPN of the correct Slammer Node.
i.         NOTE: All WWPNs beginning with 21 or 23 reside on CU0 of a SAN Slammer, and all WWPNs beginning with 22 or 24 reside on CU1 of a SAN Slammer.
5.       If any changes are needed, note them to bring to the attention of the ESX server administrator after the action plan is complete.
6.       Repeat this section for all remaining ESX server nodes.
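The checks in steps 3 and 4 can be scripted rather than done by eye. The sketch below is a minimal, unofficial example (the function and mapping names are hypothetical, not part of any Pillar or VMware tool): it parses `esxcfg-mpath` output in the format shown above and flags any disk whose policy is not Fixed, or whose active preferred path does not land on the Slammer control unit that owns the LUN, using the WWPN-prefix rule from step 4.

```python
import re

# Slammer control-unit ownership by target WWPN prefix, per the note in step 4:
# WWPNs starting 21/23 are on CU0; WWPNs starting 22/24 are on CU1.
CU_BY_PREFIX = {"21": "CU0", "23": "CU0", "22": "CU1", "24": "CU1"}

def parse_mpath(output):
    """Parse esxcfg-mpath text into a list of per-disk dicts:
    {'disk': 'vmhba1:2:6', 'policy': 'Fixed', 'preferred_wwpn': '2100...'}"""
    disks = []
    current = None
    for line in output.splitlines():
        m = re.match(r"\s*Disk (\S+) \S+ \(\d+MB\) has \d+ paths "
                     r"and policy of (\w+)", line)
        if m:
            current = {"disk": m.group(1), "policy": m.group(2),
                       "preferred_wwpn": None}
            disks.append(current)
            continue
        # The target WWPN is the token after "<->" on the preferred path line.
        m = re.match(r"\s*FC \S+ \S+<->(\S+) \S+ On active preferred", line)
        if m and current is not None:
            current["preferred_wwpn"] = m.group(1)
    return disks

def check_disk(disk, expected_cu):
    """Return a list of problems for one disk, given the CU that owns its LUN
    (recorded earlier as Current Slammer Location in the Axiom GUI)."""
    problems = []
    if disk["policy"] != "Fixed":
        problems.append(f"{disk['disk']}: policy is {disk['policy']}, expected Fixed")
    wwpn = disk["preferred_wwpn"]
    if wwpn is None:
        problems.append(f"{disk['disk']}: no active preferred path found")
    elif CU_BY_PREFIX.get(wwpn[:2]) != expected_cu:
        problems.append(f"{disk['disk']}: preferred path {wwpn} is not on {expected_cu}")
    return problems
```

Any problems reported would then be noted for the ESX server administrator, as in step 5; the script only reads command output and changes nothing.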
VI Client Method

1.       Select the ESX Server you are modifying in the GUI
2.       Select the Configuration Tab
3.       Select Storage from the listed fields
4.       Select the LUN you are working with
5.       Select the Properties link on the far right side of the page (it is easy to miss)
6.       Select the Manage Paths button on the lower right of the pop-up screen

[Screenshot: Storage Properties page]
NOTE: This is also where the Data Store Name can be changed to match the LUN name.

[Screenshot: Manage Paths dialog]

7.       Ensure the Policy is Fixed.
8.       Ensure the first two characters of the SAN Identifier match the desired owning Slammer node and port, as verified above.
9.       After all modifications are complete, run GetLunPerf to confirm that all NonOptimizedAccess issues have been resolved.
For More Information

Refer to the following articles for more information about Fiber Channel Pathing Verification for VMware ESX Server:

An explanation of non-optimized access
VMware support


Attachments
This solution has no attachment
  Copyright © 2012 Sun Microsystems, Inc.  All rights reserved.