Asset ID: 1315536.1
Update Date: 2012-01-13
Keywords:
Solution Type: Problem Resolution Sure Solution
1315536.1: Sun Storage 7000 Unified Storage System: RAIDZ2 Performance Issues With High I/O Wait Queues
Related Items:
- Sun Storage 7410 Unified Storage System
- Sun ZFS Storage 7320
- Sun Storage 7310 Unified Storage System
- Sun Storage 7210 Unified Storage System
- Sun Storage 7110 Unified Storage System
- Sun ZFS Storage 7120
- Sun ZFS Storage 7420
Related Categories:
- PLA-Support>Sun Systems>DISK>NAS>SN-DK: 7xxx NAS
- .Old GCS Categories>Sun Microsystems>Storage - Disk>Unified Storage
In this Document
Symptoms
Cause
Performance with raidz2 profile
Performance with mirror profile
Solution
Created from <SR 3-3256848516>
Applies to:
Sun Storage 7410 Unified Storage System - Version: Not Applicable and later [Release: N/A and later]
Sun Storage 7110 Unified Storage System - Version: Not Applicable and later [Release: N/A and later]
Sun Storage 7310 Unified Storage System - Version: Not Applicable and later [Release: N/A and later]
Sun Storage 7210 Unified Storage System - Version: Not Applicable and later [Release: N/A and later]
Sun ZFS Storage 7120 - Version: Not Applicable and later [Release: N/A and later]
Information in this document applies to any platform.
NAS head revision : [not dependent]
BIOS revision : [not dependent]
ILOM revision : [not dependent]
JBODs Model : [not dependent]
CLUSTER related : [not dependent]
Symptoms
Lower than expected random read I/O performance on the 7000 series with a raidz2 pool profile.
This is especially noticeable in pools with fewer than 15 spindles assigned to a single pool, but it applies to all raidz2 pools.
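Where OS-level data is available (for example, iostat output captured in a support bundle, or a generic Solaris host used for comparison), the symptom shows up as deep device wait queues in extended I/O statistics. A minimal sketch, not appliance-specific:

    # Extended per-device statistics, refreshed every 5 seconds
    iostat -xn 5
    # Sustained non-zero values in the "wait" and "%w" columns mean I/O
    # requests are queuing behind saturated spindles.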
Cause
A zpool is constructed of one or more virtual devices (vdevs). These vdevs are themselves constructed of block devices, which in the case of the ZFS Storage Appliance are always entire disks (spindles).
Performance with raidz2 profile
    NAME        STATE     READ WRITE CKSUM
    TestPool    ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        c0t1d0  ONLINE       0     0     0
        c0t2d0  ONLINE       0     0     0
        c0t3d0  ONLINE       0     0     0
        c0t4d0  ONLINE       0     0     0
        c0t5d0  ONLINE       0     0     0
        c0t6d0  ONLINE       0     0     0
        c0t7d0  ONLINE       0     0     0
        c0t8d0  ONLINE       0     0     0
        c0t9d0  ONLINE       0     0     0
        c0t10d0 ONLINE       0     0     0
        c0t11d0 ONLINE       0     0     0
        c0t12d0 ONLINE       0     0     0
        c0t13d0 ONLINE       0     0     0
        c0t14d0 ONLINE       0     0     0
    spares
      c0t15d0   AVAIL
      c0t16d0   AVAIL
Here, all data drives (spindles) belong to a single top-level vdev. Because each ZFS block is striped across all of the data disks in the raidz2 vdev, a random read has to touch every disk in the vdev, so the whole vdev delivers roughly the random read IOPS of a single disk.
In the worst case, with an extremely random workload, a disk can sustain about 150 IOPS. TestPool is therefore limited to: 1 vdev * 150 IOPS = 150 IOPS.
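For illustration, an equivalent 14-disk raidz2 layout could be created with the zpool command on a generic Solaris host as follows. This is a sketch only; on the appliance itself, pools are configured through the storage configuration screens of the BUI or CLI, not with zpool directly.

    # One raidz2 vdev spanning all 14 data disks, plus two spares
    zpool create TestPool \
        raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
               c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 \
        spare c0t15d0 c0t16d0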
Performance with mirror profile
    NAME        STATE     READ WRITE CKSUM
    TestPool    ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c0t1d0  ONLINE       0     0     0
        c0t2d0  ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        c0t3d0  ONLINE       0     0     0
        c0t4d0  ONLINE       0     0     0
      mirror-2  ONLINE       0     0     0
        c0t5d0  ONLINE       0     0     0
        c0t6d0  ONLINE       0     0     0
      mirror-3  ONLINE       0     0     0
        c0t7d0  ONLINE       0     0     0
        c0t8d0  ONLINE       0     0     0
      mirror-4  ONLINE       0     0     0
        c0t9d0  ONLINE       0     0     0
        c0t10d0 ONLINE       0     0     0
      mirror-5  ONLINE       0     0     0
        c0t11d0 ONLINE       0     0     0
        c0t12d0 ONLINE       0     0     0
      mirror-6  ONLINE       0     0     0
        c0t13d0 ONLINE       0     0     0
        c0t14d0 ONLINE       0     0     0
    spares
      c0t15d0   AVAIL
      c0t16d0   AVAIL
Here, we have 7 vdevs. When a read I/O is issued, it can be serviced by one vdev while the next I/O is serviced by another vdev in parallel. TestPool can therefore sustain: 7 vdevs * 150 IOPS = 1050 IOPS. In practice it can do even better, because the two disks in a mirror hold identical copies of the data and can service different reads in parallel.
The key concept to remember is that random read IOPS are determined per vdev, not per spindle (disk).
Mirrored storage profiles allow a much higher vdev count with the same number of spindles.
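Again for illustration only, the same 14 data disks arranged as 7 mirrored vdevs would be created on a generic Solaris host like this:

    # Seven 2-way mirror vdevs, plus the same two spares
    zpool create TestPool \
        mirror c0t1d0 c0t2d0 \
        mirror c0t3d0 c0t4d0 \
        mirror c0t5d0 c0t6d0 \
        mirror c0t7d0 c0t8d0 \
        mirror c0t9d0 c0t10d0 \
        mirror c0t11d0 c0t12d0 \
        mirror c0t13d0 c0t14d0 \
        spare c0t15d0 c0t16d0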
Solution
Use mirrored profiles for pools whenever possible to greatly improve IOPS.
This reduces usable capacity but greatly increases random read performance.
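Where zpool access is available (again, on a generic Solaris host; on the appliance itself, per-disk I/O can be observed through Analytics), the difference can be seen directly:

    # Per-vdev and per-disk I/O statistics, refreshed every 5 seconds;
    # with a mirrored pool, read operations spread across all vdevs.
    zpool iostat -v TestPool 5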
Attachments
This solution has no attachment