Asset ID: 1-71-1213723.1
Update Date: 2012-05-30
Solution Type: Technical Instruction
Sun Storage 7000 Unified Storage System: Configuration and tuning for CIFS performance (Doc ID 1213723.1)
Related Items
- Sun Storage 7410 Unified Storage System
- Sun Storage 7310 Unified Storage System
- Sun ZFS Storage 7120
- Sun ZFS Storage 7320
- Sun ZFS Storage 7420
- Sun Storage 7110 Unified Storage System
- Sun Storage 7210 Unified Storage System
Related Categories
- PLA-Support>Sun Systems>DISK>NAS>SN-DK: 7xxx NAS
- .Old GCS Categories>Sun Microsystems>Storage - Disk>Unified Storage
Applies to:
Sun Storage 7110 Unified Storage System - Version Not Applicable to Not Applicable [Release N/A]
Sun Storage 7210 Unified Storage System - Version Not Applicable to Not Applicable [Release N/A]
Sun Storage 7410 Unified Storage System - Version Not Applicable to Not Applicable [Release N/A]
Sun Storage 7310 Unified Storage System - Version Not Applicable to Not Applicable [Release N/A]
Sun ZFS Storage 7120 - Version Not Applicable to Not Applicable [Release N/A]
7000 Appliance OS (Fishworks)
NAS head revision : [not dependent]
BIOS revision : [not dependent]
ILOM revision : [not dependent]
JBODs Model : [not dependent]
CLUSTER related : [not dependent]
Goal
To provide information to help the user configure the system for optimal CIFS performance.
Fix
When configuring the Series 7000 Unified Storage System appliance, the following factors have the greatest influence on the overall CIFS performance of the system:
- The choice of the ZFS pool RAID level
- Number of disks configured in the ZFS pool(s)
- Provision of Write-optimized SSDs (logzillas)
- Provision of Read-optimized SSDs (readzillas)
- Matching the ZFS filesystem recordsize to the client workload I/O size
- Size of (DRAM) Memory
- Number/speed of CPUs
In general, the biggest causes of CIFS performance problems on the Series 7000 appliance when configuring/sizing the system are:
- The 'wrong' choice of RAID level
- No log SSDs, or too few. Even though CIFS writes are asynchronous by default, applications writing to the CIFS filesystem may use the O_DSYNC option to force synchronous writes.
- Not enough disks configured in each pool
The choice of the ZFS pool RAID level
This is the most important decision when configuring the system for performance.
Choosing a RAID level:
- Double Parity RAID is the default 'Data Profile' type on the BUI storage configuration screen -> this is NOT a good choice for random workloads.
- If tuning for performance and a high number of IOPS, always choose 'Mirrored'.
- For random and/or latency-sensitive workloads:
- use a mirrored pool and configure sufficient Read SSDs and Log SSDs in it.
- budget for 'disk IOPS + 30%' to allow for cache warmup.
- RAIDZ2/Z3 provide great usable capacity for archives and filestores, but don't use these RAID levels for random workloads.
RAID recommendations:
- For random and/or latency-sensitive workloads use mirrors (R1) and assume 100 IOPS per vdev.
- RAIDZ2/Z3 provide great usable capacity but poor performance for random workloads unless working set can be held in ARC/L2ARC.
- RAIDZ1 with narrow stripes is a reasonable compromise between mirrors and RAIDZ2/Z3, but is more vulnerable to data loss on disk failure.
- More than two storage pools per appliance are supported as of the 2010.Q1 release, so a number of RAID levels can be configured per node.
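The sizing arithmetic above can be sketched in a few lines. This is an illustrative estimate only, assuming the document's rule of thumb of roughly 100 IOPS per mirrored vdev plus the 30% cache-warmup headroom; the function name is hypothetical:

```python
import math

def mirrored_vdevs_needed(target_iops, iops_per_vdev=100, warmup_overhead=0.30):
    """Estimate the mirrored vdev count for a random-I/O target.

    Rule of thumb from this document: ~100 IOPS per mirrored vdev,
    plus 30% headroom for cache warmup.
    """
    required = target_iops * (1 + warmup_overhead)
    return math.ceil(required / iops_per_vdev)

# A 2000-IOPS random workload needs 2600 IOPS of raw disk capability,
# i.e. 26 mirrored vdevs (52 disks).
print(mirrored_vdevs_needed(2000))  # -> 26
```

Actual per-vdev throughput depends on the drive type and workload, so treat the result as a starting point for sizing, not a guarantee.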
Number of disks configured in the ZFS pool(s)
For good performance we need as many drives and Log SSDs as possible per pool, so avoid creating lots of small pools. If the customer wants to create separate pools for different projects/departments, create separate projects in a single pool instead.
Multiple pools may be required if applications/usage dictates different RAID levels on the underlying storage.
Pool/disks recommendations:
- The more disks configured in a pool, the more IOPS are available -> keep the number of storage pools to a minimum (FEWER, LARGER pools are better).
Provision of Write-optimized SSDs (logzillas)
Write-optimized (log) SSDs accelerate Synchronous Writes. Synchronous writes do not complete until the data is stored in non-volatile storage.
Log SSD vs DRAM:
- Sync Writes are buffered in the DRAM and the Log SSD.
- Non-sync writes are just buffered in DRAM.
- Written data is buffered in memory for up to 5 seconds before being flushed to disk.
Applications & Protocols that use Synchronous Writes:
- Databases
- Email systems
- Writes over CIFS are NOT synchronous writes on the appliance unless the application requests them (O_DSYNC option).
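To illustrate how an application forces synchronous writes, here is a minimal sketch using the O_DSYNC open flag on a POSIX system that exposes it. A temporary local file stands in for a file on a CIFS share; on the appliance, writes opened this way become synchronous and engage the log SSDs:

```python
import os
import tempfile

# O_DSYNC asks the OS to complete each write() only after the data
# has reached stable storage. Over CIFS this turns the write into a
# synchronous write on the appliance. A temp file stands in here
# for a file on a mounted CIFS share.
path = os.path.join(tempfile.mkdtemp(), "example.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o644)
try:
    os.write(fd, b"data that must reach stable storage\n")
finally:
    os.close(fd)
print(os.path.getsize(path))  # -> 36
```

An application issuing many small writes this way generates a steady stream of synchronous writes, which is exactly the load log SSDs are provisioned for.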
'Mirror' or 'Stripe' log SSD configuration?
- If you have one Log SSD and it fails, there is potential data loss if you have a second failure before ZFS flushes the data to disk.
- Mirroring Log SSDs can maintain synchronous write performance after the failure of one Log SSD device.
- Striping Log SSD devices aggregates the performance of the SSDs together, but the 'stripe' will fail if one SSD device fails.
Log SSD recommendations:
- One log SSD is needed per 100 MB/s of synchronous writes. A logzilla can sustain approximately 3300 IOPS at a 4KB I/O size.
- Log SSDs are required when applications or protocols performing synchronous writes are going to be used - omit them and performance issues may result.
- Log SSDs might be needed for CIFS when O_DSYNC option is used on application side.
- Always configure a minimum of TWO log SSDs in a ZFS Storage appliance.
- When sizing, take into consideration whether the log SSDs are mirrored or striped.
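The log SSD recommendations above can be sketched as a sizing estimate. This assumes the document's figures (one device per 100 MB/s of synchronous writes, minimum of two devices); the doubling for mirrored configurations is an illustrative assumption, since each mirror pair delivers roughly the throughput of a single device:

```python
import math

def log_ssds_needed(sync_write_mbps, mirrored=False,
                    mbps_per_ssd=100, minimum=2):
    """Estimate log SSD count from sustained synchronous-write MB/s.

    Rule of thumb from this document: one log SSD per 100 MB/s of
    synchronous writes, never fewer than two devices. Mirroring the
    log devices (assumed here to halve usable throughput per device)
    doubles the count needed for the same write rate.
    """
    count = max(minimum, math.ceil(sync_write_mbps / mbps_per_ssd))
    return count * 2 if mirrored else count

print(log_ssds_needed(250))                 # striped  -> 3
print(log_ssds_needed(250, mirrored=True))  # mirrored -> 6
```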
Provision of Read-optimized SSDs (readzillas)
The aim of read-optimized SSDs is to enhance read performance by accelerating ZFS caching.
Read SSD recommendations:
- Read SSDs are not supported in the 7110, 7210, or 7120.
- Read SSDs can be added (non-disruptively) to a 7310/7410/7320/7420.
- Each read SSD can service approximately 3100 IOPS at an 8KB I/O size.
- While a read SSD is being written to (populated from the ARC), it is not available for reading -> two read SSDs are needed to ensure at least one is available for reads.
- Configure a minimum of two read SSDs in 7310/7410/7320/7420.
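Using the figures above, a rough read SSD count can be estimated the same way. This is an illustrative sketch only, assuming the document's ~3100 IOPS per device at 8KB and the two-device floor:

```python
import math

def read_ssds_needed(target_read_iops, iops_per_ssd=3100, minimum=2):
    """Estimate read SSD (L2ARC) device count for a cached-read target.

    Uses this document's figure of ~3100 IOPS per device at an 8KB
    I/O size, with a floor of two devices so that one is always
    available for reads while the other is being populated.
    """
    return max(minimum, math.ceil(target_read_iops / iops_per_ssd))

print(read_ssds_needed(10000))  # -> 4
```

This only covers IOPS; capacity also matters, since the read SSDs should be large enough to hold the working set that does not fit in DRAM.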
Matching the ZFS filesystem recordsize to the client workload I/O size
Notes on recordsize and file re-writing:
- Desktop applications typically re-write files completely when saving them.
- Some applications repeatedly update individual blocks within files, e.g. databases.
- In that case, if the I/O size is smaller than the filesystem recordsize, read-modify-writes can occur (regardless of RAID level), e.g. to write 64KB to a file with a 128KB filesystem recordsize, the appliance must read the 128KB block before updating and re-writing it.
Blocksize recommendations:
- When configuring the ZFS pool/shares recordsize, attempt to match it to the actual client workload I/O size.
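The read-modify-write cost described above can be made concrete with a small illustrative model (the function and its return shape are hypothetical, for explanation only):

```python
def rmw_amplification(io_size_kb, recordsize_kb):
    """Model the extra I/O of a partial-record update.

    When the client I/O size is smaller than the filesystem
    recordsize, the whole record must be read before it can be
    updated and re-written.
    """
    if io_size_kb >= recordsize_kb:
        # Whole-record write: no read needed first.
        return {"read_kb": 0, "written_kb": io_size_kb}
    # Partial-record update: read the full record, write it back.
    return {"read_kb": recordsize_kb, "written_kb": recordsize_kb}

# Writing 64KB into a share with a 128KB recordsize forces a 128KB
# read plus a 128KB write - far more raw I/O than the 64KB update.
print(rmw_amplification(64, 128))   # {'read_kb': 128, 'written_kb': 128}
print(rmw_amplification(128, 128))  # {'read_kb': 0, 'written_kb': 128}
```

This is why matching the share's recordsize to the dominant client I/O size (e.g. the database block size) avoids the penalty entirely.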
Size of (DRAM) Memory
Memory is used as a 'cache' for data blocks (ARC).
Memory recommendations:
- Attempt to size the memory configuration so that the application 'working set' fits into the appliance memory (only applicable where the application has a 'working set' eg. databases, VDI).
- 64 GB should be viewed as a minimum memory configuration for production use.
Number/speed of CPUs
CPU recommendations:
- Minimum of two CPUs recommended.
- Software features such as compression and replication make heavy use of CPU cores.
- Additional CPUs may need to be added to make the maximum number of memory slots available.
Do we have a hardware performance 'bottleneck' ?
In some instances, the 'limiting factor' in system performance may be a component of the actual system configuration, i.e. the number/size/speed of:
- CPU
- DRAM memory
- Network hardware
- Read SSDs (readzilla) configured on the server
- Write SSDs (logzilla) configured in the JBODs
- Disks
See "Do we have a hardware performance 'bottleneck' ?" section of <Document 1213725.1> for manually observing hardware bottlenecks in Analytics.
Back to <Document 1331769.1> Sun Storage 7000 Unified Storage System: How to Troubleshoot Performance Issues.
References
<NOTE:1213725.1> - Sun Storage 7000 Unified Storage System: Configuration and tuning for NFS performance
<NOTE:1331769.1> - Sun Storage 7000 Unified Storage System: How to Troubleshoot Performance Issues