Sun Microsystems, Inc.  Sun System Handbook - ISO 4.1 October 2012 Internal/Partner Edition

Asset ID: 1-71-1444933.1
Update Date:2012-08-27
Keywords:

Solution Type  Technical Instruction

Solution  1444933.1 :   Pillar Axiom: iSCSI Implementation Best Practices  


Related Items
  • Pillar Axiom 300 Storage System
  • Pillar Axiom 600 Storage System
  • Pillar Axiom 500 Storage System
Related Categories
  • PLA-Support>Sun Systems>DISK>Pillar Axiom>SN-DK: Ax600




In this Document
Goal
Fix


Applies to:

Pillar Axiom 300 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Pillar Axiom 500 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Pillar Axiom 600 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Information in this document applies to any platform.

Goal

This paper provides guidelines for iSCSI network best practices not covered in the iSCSI Integration Guides or the Administrator's Guide.

Fix

1. TOPOLOGY BEST PRACTICES



The logical topology above will aid in creating an optimized configuration. The blue paths show the logical connections between the initiator and targets. This sort of topology is best employed when the server OS supports AxiomONE Path Manager software or has built-in ALUA support (such as VMware), where preferred paths can be configured and persisted.
If the server OS does not have a supported AxiomONE Path Manager solution or tested ALUA support, and is currently on the Pillar Data Systems support matrix, the server should be attached in a single-path configuration.
Please refer to the Pillar Data Systems OS-specific iSCSI best practices documentation and the Administrator's Guide whenever possible.
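The multipath-versus-single-path decision above can be sketched in Python. This is only an illustration of the policy, not part of any Pillar tooling; the function name, portal addresses, and `multipath_supported` flag are all invented for the example:

```python
from itertools import cycle

def session_paths(portals, multipath_supported):
    """Return an iterator of target portals for new iSCSI sessions.

    With APM or tested ALUA support, round-robin across all preferred
    portals; otherwise fall back to a single-path configuration.
    """
    if multipath_supported:
        return cycle(portals)      # rotate across redundant paths
    return cycle(portals[:1])      # single path only

# Hypothetical Slammer portal addresses.
paths = session_paths(["192.168.10.11:3260", "192.168.10.12:3260"],
                      multipath_supported=True)
print(next(paths), next(paths), next(paths))
# rotates: .11, .12, then back to .11
```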



2. ISCSI BEST PRACTICES

2.1 AXIOM ISCSI BEST PRACTICES

In this section, the goal is to configure the Axiom iSCSI ports. Refer to the Axiom Administrator's Guide for more details and setting options to configure the Axiom iSCSI ports for access and authentication. The basic configuration requires assigning an IP address to 2 or 4 of the Slammer ports. In the Axiom GUI, click Storage > Slammer Ports and, using the pull-down menu at the bottom, select "Modify iSCSI Port Settings". Enter the IP information for each iSCSI Slammer port. The result should list an IP for each port. Note: Hosts with APM installed should expose all 4 Slammer ports and use round-robin access to the preferred paths.
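As a sketch of the end state to aim for — each Slammer iSCSI port holding a unique address on the SAN subnet — the following Python check validates a hypothetical port-to-IP assignment. The port labels, addresses, and subnet are invented for illustration:

```python
import ipaddress

# Hypothetical assignments entered via "Modify iSCSI Port Settings".
port_ips = {
    "CU0-Port0": "192.168.10.11",
    "CU0-Port1": "192.168.10.12",
    "CU1-Port0": "192.168.10.13",
    "CU1-Port1": "192.168.10.14",
}
san_subnet = ipaddress.ip_network("192.168.10.0/24")

# Every port should have its own address, all on the SAN subnet.
assert len(set(port_ips.values())) == len(port_ips), "duplicate IP"
for port, ip in port_ips.items():
    assert ipaddress.ip_address(ip) in san_subnet, f"{port} outside SAN subnet"
print("all Slammer ports have unique SAN-subnet IPs")
```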



The Axiom's iSCSI HBAs can handle 512 outstanding I/O requests per port at any one time. For HA considerations, iSCSI initiators should not exceed 256 outstanding I/Os per port, so that in a failover situation there is available bandwidth to service traffic from the failed-over port. If HA is not a concern, limit the aggregated execution throttle to a maximum of 512 for hosts connected to any single port.

Example: Say you had 8 LUNs configured to access Slammer1 CU0 Port0/Port1 in round-robin access on two SAN hosts. The execution throttle for the host should be set to 256/8, i.e. an execution throttle of 32 on each host. You may fine-tune this value through trending as necessary.

Follow the HBA vendor's or iSCSI software initiator's configuration instructions to set the execution throttle or queue depth to the appropriate value.
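The throttle arithmetic above can be sketched as follows. The 256 (HA) and 512 (non-HA) per-port limits come from the text; the function name and its interface are illustrative:

```python
def execution_throttle(luns, ha=True):
    """Per-host execution throttle for LUNs sharing one Axiom iSCSI port.

    With HA, keep total outstanding I/Os per port at or below 256 so a
    failed-over partner port's traffic can still be serviced; without HA,
    the hardware limit of 512 outstanding I/Os per port applies.
    """
    port_limit = 256 if ha else 512
    return port_limit // luns

# Worked example from the text: 8 LUNs round-robined across two ports
# gives 256 / 8 = 32 on each host.
print(execution_throttle(8))            # 32
print(execution_throttle(8, ha=False))  # 64 if HA is not a concern
```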


2.2 NETWORKING BEST PRACTICES

- Use non-blocking switches and auto-negotiate speed on the switch ports for the Axiom iSCSI ports.

- DHCP can be used if the leases are essentially perpetual.

- Disable unicast storm control on iSCSI ports on the switches. Most switches have unicast storm control disabled by default; if your switch has it enabled, disable it on the ports connected to iSCSI hosts and targets to avoid packet loss.

- Enable flow control on network switches and adapters; flow control lets a receiver make the sender pace its transmission speed and is important in avoiding data loss.

- Ensure the spanning tree loop-detection algorithm is turned off on iSCSI ports; loop detection introduces a delay before a port becomes usable for data transfer and may lead to application timeouts.

- Segregate SAN and LAN traffic. iSCSI SAN interfaces should be separated from other corporate network traffic (LAN). Servers should use dedicated NICs for SAN traffic when not using dedicated HBAs. Deploying iSCSI disks on a separate network helps minimize network congestion and latency. Additionally, iSCSI volumes are more secure when…

- SAN and LAN traffic can be segregated using port-based VLANs or physically separate networks. The Axiom cannot currently configure specific VLAN tags for its interfaces and relies on the switch to segregate VLAN traffic.

- Configure additional paths for high availability; use AxiomONE Path Manager, where the OS is supported, with additional NICs or HBAs in the server to create additional connections to the iSCSI storage array through redundant Ethernet switch fabrics.

- For Microsoft SAN hosts without HBAs, unbind File and Print Sharing from the iSCSI NIC.

- Use Gigabit Ethernet connections for high-speed access to storage. Congested or lower-speed networks can cause latency issues that disrupt access to iSCSI storage and applications running on iSCSI devices. iSCSI is also suitable for WAN and lower-speed implementations, including replication, where latency and bandwidth are less of a concern.

- Use server-class NICs when HBAs are not an option. NICs designed for enterprise networking and storage applications are recommended.

- Use CAT6 rated cables for Gigabit Network Infrastructures.

- Use jumbo frames if supported by your network infrastructure. Jumbo frames allow more data to be transferred with each Ethernet transaction and reduce the number of frames. The larger frame size reduces the overhead on both your servers and iSCSI targets. For end-to-end support, every device in the network path must support jumbo frames, including the NICs/HBAs and Ethernet switches.
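The per-frame overhead saving is easy to quantify. The sketch below compares frame counts for a 1 MiB transfer at a standard 1500-byte MTU versus a typical 9000-byte jumbo MTU; it deliberately ignores IP/TCP/iSCSI header overhead, so the numbers are illustrative only:

```python
import math

def frames_needed(payload_bytes, mtu):
    """Ethernet frames required to carry a payload at a given MTU
    (header overhead ignored for simplicity)."""
    return math.ceil(payload_bytes / mtu)

one_mib = 1024 * 1024
print(frames_needed(one_mib, 1500))  # standard frames: 700
print(frames_needed(one_mib, 9000))  # jumbo frames: 117
```

Roughly six times fewer frames per transfer means correspondingly fewer interrupts and less per-frame processing on both the server and the iSCSI target.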

- Ensure physical security

- Use CHAP authentication; it ensures each host has its own password.

- Use iSNS for discovery.

 

NOTE: On Solaris machines using iSCSI, a PDU (protocol data unit) size of 8 KB can fill the iSCSI buffers quickly, causing Slammer Control Unit WarmStart events. The recommendation for Solaris hosts using iSCSI is to set the PDU size to a larger value (such as 64 KB), which helps alleviate resource constraints and potential Slammer Control Unit WarmStart events.
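To see why the PDU size matters, count the buffer slots one transfer consumes at each size. The function below is a simple illustration of that arithmetic, not Solaris or Pillar tooling:

```python
def pdus_per_transfer(transfer_bytes, pdu_size):
    """Number of iSCSI PDUs (and hence buffer slots) one transfer consumes."""
    return -(-transfer_bytes // pdu_size)  # ceiling division

one_mib = 1024 * 1024
print(pdus_per_transfer(one_mib, 8 * 1024))   # 8 KB PDUs:  128 per MiB
print(pdus_per_transfer(one_mib, 64 * 1024))  # 64 KB PDUs:  16 per MiB
```

At 8 KB, each 1 MiB transfer ties up eight times as many buffer slots as at 64 KB, which is why the smaller setting can exhaust Slammer buffers under load.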


Attachments
This solution has no attachment
  Copyright © 2012 Sun Microsystems, Inc.  All rights reserved.