Asset ID: 1-71-1001837.1
Update Date: 2012-07-31
Keywords:
Solution Type: Technical Instruction
Solution: 1001837.1
Sun Storage 3510 FC JBOD Array: How to Replace a Disk
Related Items:
- Sun Storage 3510 FC Array

Related Categories:
- PLA-Support>Sun Systems>DISK>Arrays>SN-DK: SE31xx_33xx_35xx
- .Old GCS Categories>Sun Microsystems>Storage - Disk>Modular Disk - 3xxx Arrays
Previously Published As: 202514
Applies to:
Sun Storage 3510 FC Array - Version Not Applicable and later
All Platforms
Goal
This document explains how to replace a failed disk drive within a Sun Storage 3510 FC JBOD (Just a Bunch of Disks) array.
Fix
Steps to Follow
1. For a full list of 3510 FC array supported configurations, including updates to the support levels below, check the latest versions of:

- Sun StorEdge 3510 FC and 3511 SATA Array Release Notes (817-6597)
- Sun StorEdge 3000 Family Installation, Operation, and Service Manual, Appendix B: "Using a Standalone JBOD Array (3510 FC Array Only)"

The Sun StorEdge 3510 JBOD expansion unit is supported in a single (standalone) array configuration under the following conditions:

- Volume servers only: 220R, 250, 420R, 450, V120, V280, and V880
- Solaris 8, 9, and 10 Operating Systems only
- Veritas Volume Manager (VxVM) 3.5, 4.0, 4.1, 5.0 or later, and Sun Solaris Volume Manager (SVM)/Solstice DiskSuite (SDS)
- Multipathing and/or load balancing between a server and a single array via VxVM DMP only (no MPxIO support)
- No daisy-chaining of JBODs; a single JBOD connected to single or dual 2Gb FC HBAs only
- No hub or switch between server and JBOD
- Data only; no booting from an FC JBOD
- No cluster support, neither VCS nor Sun Cluster

2. Ensure all packages and patches are installed according to the release notes section "Installing Sun StorEdge SAN Foundation Software". Alternatively, you can find the correct SAN Foundation 4.x patches to download by reviewing the Sun StorEdge SAN Foundation Software 4.4 Installation Guide (817-3671). In most cases, only the patches for qlc, luxadm, and cfgadm are needed; a quick check is sketched below.
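As a minimal sketch of that check on a Solaris host: the package names SUNWsan, SUNWqlc, SUNWluxop, and SUNWcfpl are assumptions based on a typical SAN Foundation install, so confirm the exact package names and patch IDs against the release notes for your Solaris release.

# Confirm the SAN Foundation base package is installed
# pkginfo -l SUNWsan

# List installed patches delivered against the qlc driver and the
# luxadm/cfgadm utilities, then compare revisions with the release notes
# showrev -p | egrep 'SUNWqlc|SUNWluxop|SUNWcfpl'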
3. Remove the disk configuration from the system. Important: never use the luxadm remove_device command.

Place the device offline for the failed disk using luxadm(1M):

# /usr/sbin/luxadm -e offline /dev/rdsk/cxtyd0s2

Remove the stale entries from /dev using devfsadm(1M):

# /usr/sbin/devfsadm -Cv

Note: With the latest luxadm patch, this procedure should no longer fail with the following message when the 3510 JBOD disk is under SDS or SVM:

# /usr/sbin/luxadm -e offline /dev/rdsk/cxtyd0s2
devctl: I/O error

4. Now replace the failed disk. For the physical replacement, there is no way to make the array locate the failed disk in the box, so keep the chart below in mind for an array with a boxid set to 0 (targets as seen from the front, three rows of four slots):

targets
0  3  6   9
1  4  7  10
2  5  8  11

If the boxid is modified, just subtract (16 * boxid) from the target to get the position in the chart:

boxid = 1: targets 16 to 27
boxid = 2: targets 32 to 43
boxid = 3: targets 48 to 59
boxid = 4: targets 64 to 75
Note: Remember that the boxid can be set by the hidden switch under the front left
plastic cover.
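As a small illustration of that arithmetic, here is a Bourne shell sketch assuming the row/column layout in the chart above; the target and boxid values are examples only.

#!/bin/sh
# Example values only: target 42 on a box with boxid 2
target=42
boxid=2
pos=`expr $target - 16 \* $boxid`   # 42 - 32 = 10
row=`expr $pos % 3`                 # 0 = top row
col=`expr $pos / 3`                 # 0 = leftmost column
echo "target $target is row $row, column $col (counting from 0, front view)"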
After disk replacement, the devfsadmd(1M) daemon should automatically recognize the new disk and create the device nodes and links so that the disk is visible to format(1M). If not, use the following commands to diagnose the issue:

# /usr/sbin/luxadm -e port
/devices/pci@1f,0/pci@1/SUNW,qlc@2/fp@0,0:devctl   CONNECTED
Note: If you get a "NOT CONNECTED" status on the path used by the 3510, check the fibre connections on the box and the server using cfgadm(1M).

# /usr/sbin/cfgadm -al
Ap_Id                  Type         Receptacle   Occupant      Condition
c1                     scsi-bus     connected    configured    unknown
c1::/dev/lus           unknown      connected    configured    unknown
c1::rmt/0              tape         connected    configured    unknown
c2                     scsi-bus     connected    configured    unknown
c2::lus1               unknown      connected    configured    unknown
c3                     fc-private   connected    configured    unknown
c3::2100000c5020555d   disk         connected    configured    unknown
c3::2100000c50205653   disk         connected    configured    unknown
c3::2100000c50205a3f   disk         connected    configured    unknown
c3::2100000c50205aad   disk         connected    configured    unknown
c3::2100000c50205d18   disk         connected    configured    unknown
c3::215000c0ff002f2a   ESI          connected    configured    unknown
c4                     fc           connected    unconfigured  unknown
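As a quick triage sketch, assuming the column layout shown above, this one-liner flags any FC attachment point that is not both connected and configured:

# /usr/sbin/cfgadm -al | awk '$2 ~ /^fc/ && ($3 != "connected" || $4 != "configured") {print "check:", $0}'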
If cfgadm(1M) doesn't show a result like the example output, more work is required.

If the controller isn't seen, or is seen as unconfigured, run the following:

# /usr/sbin/cfgadm -c configure cx

If drives appear with the condition set to "unusable", issue the following (luxadm -e port, as shown above, gives the pathname):

# /usr/sbin/luxadm -e forcelip /devices/pci@1f,0/pci@1/SUNW,qlc@2/fp@0,0:devctl
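As a minimal sketch, assuming the two-column luxadm -e port output shown earlier, the following loop forces a LIP on every connected qlc port rather than on a single hand-typed path:

for p in `/usr/sbin/luxadm -e port | awk '$2 == "CONNECTED" {print $1}'`
do
        echo "forcing LIP on $p"
        /usr/sbin/luxadm -e forcelip $p
done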
If you receive the devctl error documented in step 3 above, refer to BugID 5075852 for a workaround.
Special note regarding Step #3
According to BugID 4921470 and BugID 6376642
"The luxadm(1M) utility is not supported for monitoring and managing Sun
StorEdge 3000 family arrays. However, certain luxadm arguments and
options can be used, including display, probe, dump_map, and rdls."
The revision will appear in section B.4 of the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Change History
Date: 2011-02-11
User Name: [email protected]
Action: Currency & Update
Attachments
This solution has no attachment