Asset ID: 1-71-1392714.1
Update Date: 2012-08-27
Solution Type: Technical Instruction
Solution 1392714.1: Pillar Axiom: Brick Zeroing Rates for Storage Reclamation
Related Items
- Pillar Axiom 300 Storage System
- Pillar Axiom 600 Storage System
- Pillar Axiom 500 Storage System

Related Categories
- PLA-Support>Sun Systems>DISK>Pillar Axiom>SN-DK: Ax600
In this Document
- Goal
- Fix
Applies to:
Pillar Axiom 500 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Pillar Axiom 600 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Pillar Axiom 300 Storage System - Version Not Applicable to Not Applicable [Release N/A]
Information in this document applies to any platform.
Goal
Zeroing occurs independently on each Brick, at a rate that depends on the load on that individual Brick and on the Brick's type and capacity.
Hardware and Software Versions

Hardware                    Software
Pillar Axiom 300/500/600    Pillar AxiomONE 3.x
Fix
The following article is directly related to this topic:
Upgrading from R2.8 to R3.x
After completing an upgrade to R3 from an earlier release, the Pillar Axiom system shows that there is no free space. This is because the raw storage in the Brick storage enclosures is converted from RAID 5 to an array-neutral format to support the RAID 10, wide stripe, and Oracle ASM performance capabilities of R3.
To do this, the allocated space is converted to RAID 5 Minimum Allocation Units (MAUs) with the existing data intact. The unallocated space is then converted to blank MAUs by writing zeroes to all unused disk blocks. This process is called a "Brick Zero." As the Brick Zero proceeds, the space available for allocation increases. The same applies to space freed by the deletion of LUNs or filesystems: that space is not available for re-allocation until it has been zeroed, so that it can be allocated as any of the supported RAID types.
While the zero rate for a Brick varies with load, the approximate rate is 1.5 hours per Terabyte. As filesystems and LUNs are deleted, the unallocated storage is reclaimed by Brick Zeroing at comparable rates.
NOTE: The zeroing times presented in this article are intended to serve only as a guideline and do not indicate guaranteed limits for the Brick zeroing process.
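For example, at roughly 1.5 hours per Terabyte, zeroing 4 TB of unprepared raw capacity would take on the order of 4 × 1.5 = 6 hours, and 10 TB on the order of 15 hours. Treat these as rough estimates only; the measured per-Brick times in the table below are a better guide for a specific Brick type.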
The following table lists measured Brick zero times for a variety of Brick types:
Brick zero time (hours)

Brick type    Min     Max     Avg
FC146         2.2     4.6     3.4
FC300         4.6     9.4     7.0
SATA160       0.8     1.7     1.2
SATA250       1.2     2.6     1.9
SATA400       2.0     4.2     3.1
SATA500       2.5     5.2     3.8
SATA750       3.7     7.8     5.8
SATA1000      5.0    10.4     7.7
TIP: You can determine the amount of storage that is not yet available because a Brick Zero is in progress by examining the UnpreparedSystemCapacity field returned by the pdscli request GetStorageConfigDetails.
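A small calculation can turn that value into a rough estimate of the remaining zeroing time. The following Python sketch is hypothetical: it assumes the UnpreparedSystemCapacity value has already been extracted from the pdscli response and is expressed in bytes (confirm the unit in your own output), and it applies the approximate 1.5 hours per Terabyte guideline from this article.

    # Hypothetical helper: estimate the remaining Brick Zero time from the
    # UnpreparedSystemCapacity value reported by GetStorageConfigDetails.
    # Assumptions (not confirmed by this article): the value is in bytes,
    # and the ~1.5 hours per Terabyte guideline applies. Actual rates vary
    # with Brick load, type, and capacity.

    HOURS_PER_TB = 1.5
    BYTES_PER_TB = 10 ** 12   # decimal Terabytes; adjust if your output uses TiB

    def estimate_remaining_hours(unprepared_capacity_bytes):
        unprepared_tb = unprepared_capacity_bytes / float(BYTES_PER_TB)
        return unprepared_tb * HOURS_PER_TB

    # Example: 3 TB of unprepared capacity -> roughly 4.5 hours.
    print(estimate_remaining_hours(3 * BYTES_PER_TB))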
Note that the total available space displayed changes after an upgrade from R2 to R3. This is because R2 supports only RAID 5, so the available space is reported as actual usable RAID 5 space.
R3, in contrast, supports a variety of RAID types on the Bricks, so the available space is reported as raw space. The actual amount of space that can be allocated as user data storage therefore varies depending on whether that storage is configured as RAID 5, RAID 10, wide stripe, or the Oracle Performance Profile.
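As a rough illustration of why the reported numbers differ, the sketch below uses generic RAID overhead arithmetic; it is not a Pillar-specific allocation formula, and the 6-drive stripe width is only an assumption for the example. A RAID 5 stripe gives up one drive's worth of capacity to parity, while RAID 10 mirrors every block, so the same raw space yields different amounts of user-addressable storage.

    # Generic RAID arithmetic (not the Pillar Axiom allocator): how much of a
    # given amount of zeroed raw space is usable for user data under different
    # RAID types. The stripe width below is an illustrative assumption.

    def usable_raid5(raw_gb, stripe_width=6):
        # RAID 5 keeps one drive's worth of parity per stripe.
        return raw_gb * (stripe_width - 1) / stripe_width

    def usable_raid10(raw_gb):
        # RAID 10 mirrors every block, so half the raw space holds user data.
        return raw_gb / 2.0

    raw = 1000.0                 # 1000 GB of zeroed raw space
    print(usable_raid5(raw))     # ~833 GB usable when configured as RAID 5
    print(usable_raid10(raw))    # 500 GB usable when configured as RAID 10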
Attachments
This solution has no attachment