Asset ID: 1-71-1465038.1
Update Date: 2012-09-27
Keywords:
Solution Type: Technical Instruction
Solution 1465038.1: Calculating Usable Space in Exadata Cell
Related Items
- Exadata Database Machine X2-2 Qtr Rack
- Exadata Database Machine X2-2 Full Rack
- Exadata Database Machine X2-8
- Exadata Database Machine X2-2 Half Rack
- Oracle Exadata Storage Server Software
- Exadata Database Machine X2-2 Hardware
- Exadata Database Machine V2
- Oracle Exadata Hardware
Related Categories
- PLA-Support>Database Technology>Engineered Systems>Oracle Exadata>DB: Exadata_EST
This document explains how to calculate the raw disk space available to ASM from each Exadata storage cell.
Applies to:
Exadata Database Machine X2-2 Half Rack - Version All Versions and later
Exadata Database Machine X2-2 Full Rack - Version All Versions and later
Exadata Database Machine X2-2 Hardware - Version All Versions and later
Exadata Database Machine X2-2 Qtr Rack - Version All Versions and later
Exadata Database Machine X2-8 - Version All Versions and later
Linux x86-64
Goal
To provide information and a method for calculating the usable space in an Exadata storage cell.
Fix
Each Exadata cell contains 12 hard disks. These can be one of two types: SAS or SATA. SAS disks are 600 GB in size, and SATA disks are 2 TB. However, the effective size of a SAS disk is 558.9 GB, and that of a SATA disk is 1862.6 GB. This can be verified with the LIST PHYSICALDISK output from CellCLI:
CellCLI> list physicaldisk
20:0 E1JZ34 normal
20:1 E1JY93 normal
20:2 E1JYKQ normal
20:3 E1JYT1 normal
20:4 E1JZ08 normal
20:5 E1JXPB normal
20:6 E1K030 normal
20:7 E1KEYB normal
20:8 E1JYPK normal
20:9 E1JYX0 normal
20:10 E1KEXS normal
20:11 E1JYWX normal
FLASH_1_0 1047M050DV normal
FLASH_1_1 1047M050HD normal
FLASH_1_2 1047M050LY normal
FLASH_1_3 1047M050M0 normal
FLASH_2_0 1048M0539N normal
FLASH_2_1 1047M04XX7 normal
FLASH_2_2 1048M05381 normal
FLASH_2_3 1048M05376 normal
FLASH_4_0 1047M04XWH normal
FLASH_4_1 1047M04XX0 normal
FLASH_4_2 1047M04XQA normal
FLASH_4_3 1047M04XW9 normal
FLASH_5_0 1047M04XE8 normal
FLASH_5_1 1047M04XEV normal
FLASH_5_2 1047M04XDX normal
FLASH_5_3 1047M04XEW normal
The above output shows 12 physical disks (20:0 through 20:11) and 16 flash disks.
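The interface and size of all hard disks can also be listed in a single command (a sketch; the attribute names match the DETAIL output below, and the ATTRIBUTES/WHERE syntax follows standard CellCLI usage):
$ cellcli -e "list physicaldisk attributes name, physicalInterface, physicalSize where diskType = 'HardDisk'"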
The DETAIL output for a physical disk shows its interface and size:
CellCLI> list physicaldisk 20:0 detail
name: 20:0
deviceId: 19
diskType: HardDisk
enclosureDeviceId: 20
errMediaCount: 0
errOtherCount: 0
foreignState: false
luns: 0_0
makeModel: "SEAGATE ST360057SSUN600G"
physicalFirmware: 0A25
physicalInsertTime: 2012-01-20T17:15:33-05:00
physicalInterface: sas ====>>> interface is SAS
physicalSerial: E1JZ34
physicalSize: 558.9109999993816G ====>>> Size
slotNumber: 0
status: normal
In the output above, the disk interface is SAS and the available size is 558.9 GB. For a SATA disk, the size would be 1862.6 GB.
In an Exadata cell, the first two disks are mirrored using software RAID and hold the operating system. The imageinfo output shows which partitions are in use:
# imageinfo
Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
Cell version: OSS_11.2.3.1.0_LINUX.X64_120304
Cell rpm version: cell-11.2.3.1.0_LINUX.X64_120304-1
Active image version: 11.2.3.1.0.120304
Active image activated: 2012-04-24 08:27:58 -0400
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8
In partition rollback: Impossible
Cell boot usb partition: /dev/sdm1
Cell boot usb version: 11.2.3.1.0.120304
Inactive image version: 11.2.2.4.0.110929
Inactive image activated: 2012-04-24 07:05:18 -0400
Inactive image status: success
Inactive system partition on device: /dev/md5
Inactive software partition on device: /dev/md7
Boot area has rollback archive for the version: 11.2.2.4.0.110929
Rollback to the inactive partitions: Possible
The above output shows that the active partitions are /dev/md6 and /dev/md8, and the inactive partitions are /dev/md5 and /dev/md7.
We can use fdisk to see the size of these partitions:
# fdisk -l /dev/md6
Disk /dev/md6: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
# fdisk -l /dev/md8
Disk /dev/md8: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
# fdisk -l /dev/md5
Disk /dev/md5: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
# fdisk -l /dev/md7
Disk /dev/md7: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
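These four md devices alone add up to about 24 GB; the remaining system partitions on the same two disks (not shown here) account for the rest of the roughly 29 GB reserved area. As a sketch, the partition sizes reported above can be totalled in one line:
# fdisk -l /dev/md5 /dev/md6 /dev/md7 /dev/md8 2>/dev/null | awk '/^Disk \/dev/ {sum += $5} END {printf "%.1f GB\n", sum/2^30}'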
So, about 29 GB on each of the first two disks is used for the operating system, spanning the active and inactive partition sets and the other system partitions.
To present equally sized disks to ASM, the corresponding 29 GB on the remaining disks is either left unused or used for another disk group (DBFS_DG in most common cases). This can be seen in the grid disk output:
$ cellcli -e list griddisk attributes name,asmDiskGroupName,cellDisk,offset,size,status
DATA_CD_00_dmorlx8cel13 DATA CD_00_dmorlx8cel13 32M 423G active
DATA_CD_01_dmorlx8cel13 DATA CD_01_dmorlx8cel13 32M 423G active
DATA_CD_02_dmorlx8cel13 DATA CD_02_dmorlx8cel13 32M 423G active
DATA_CD_03_dmorlx8cel13 DATA CD_03_dmorlx8cel13 32M 423G active
DATA_CD_04_dmorlx8cel13 DATA CD_04_dmorlx8cel13 32M 423G active
DATA_CD_05_dmorlx8cel13 DATA CD_05_dmorlx8cel13 32M 423G active
DATA_CD_06_dmorlx8cel13 DATA CD_06_dmorlx8cel13 32M 423G active
DATA_CD_07_dmorlx8cel13 DATA CD_07_dmorlx8cel13 32M 423G active
DATA_CD_08_dmorlx8cel13 DATA CD_08_dmorlx8cel13 32M 423G active
DATA_CD_09_dmorlx8cel13 DATA CD_09_dmorlx8cel13 32M 423G active
DATA_CD_10_dmorlx8cel13 DATA CD_10_dmorlx8cel13 32M 423G active
DATA_CD_11_dmorlx8cel13 DATA CD_11_dmorlx8cel13 32M 423G active
DBFS_DG_CD_02_dmorlx8cel13 DBFS_DG CD_02_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_03_dmorlx8cel13 DBFS_DG CD_03_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_04_dmorlx8cel13 DBFS_DG CD_04_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_05_dmorlx8cel13 DBFS_DG CD_05_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_06_dmorlx8cel13 DBFS_DG CD_06_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_07_dmorlx8cel13 DBFS_DG CD_07_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_08_dmorlx8cel13 DBFS_DG CD_08_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_09_dmorlx8cel13 DBFS_DG CD_09_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_10_dmorlx8cel13 DBFS_DG CD_10_dmorlx8cel13 528.734375G 29.125G active
DBFS_DG_CD_11_dmorlx8cel13 DBFS_DG CD_11_dmorlx8cel13 528.734375G 29.125G active
RECO_CD_00_dmorlx8cel13 RECO CD_00_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_01_dmorlx8cel13 RECO CD_01_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_02_dmorlx8cel13 RECO CD_02_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_03_dmorlx8cel13 RECO CD_03_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_04_dmorlx8cel13 RECO CD_04_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_05_dmorlx8cel13 RECO CD_05_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_06_dmorlx8cel13 RECO CD_06_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_07_dmorlx8cel13 RECO CD_07_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_08_dmorlx8cel13 RECO CD_08_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_09_dmorlx8cel13 RECO CD_09_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_10_dmorlx8cel13 RECO CD_10_dmorlx8cel13 423.046875G 105.6875G active
RECO_CD_11_dmorlx8cel13 RECO CD_11_dmorlx8cel13 423.046875G 105.6875G active
The above shows there are three sets of grid disks: DATA, RECO, and DBFS_DG.
The DATA* grid disks are configured on all 12 disks with an offset of 32M, and each DATA* grid disk is 423G in size.
The RECO* grid disks are configured on all 12 disks with an offset of 423.046875G, and each RECO* grid disk is 105.6875G in size.
The DBFS_DG* grid disks are configured only on the last 10 disks (the first two hold the operating system) with an offset of 528.734375G, and each DBFS_DG* grid disk is 29.125G in size.
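These per-disk-group totals can also be derived directly from CellCLI (a sketch; it assumes every grid disk belongs to a disk group and that sizes are reported with a G suffix, as above):
$ cellcli -e "list griddisk attributes asmDiskGroupName, size" | awk '{sub("G","",$2); t[$1] += $2} END {for (g in t) printf "%s: %s GB\n", g, t[g]}'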
Hence, the raw space available to ASM from each storage cell is:
DATA : 12 x 423 GB = 5076 GB
RECO : 12 x 105.6875 GB = 1268.25 GB
DBFS_DG : 10 x 29.125 GB = 291.25 GB
In total, 6635.5 GB of raw space is available from each cell.
When this space is presented to ASM, the redundancy level must be taken into account: with normal redundancy each extent is mirrored on one other disk, so usable space is roughly half the raw space; with high redundancy each extent is mirrored on two other disks, so usable space is roughly one third of the raw space.
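For illustration, applying the redundancy divisor to the DATA raw space of a single cell (a sketch; it ignores ASM metadata overhead and any free space reserved for rebalancing after a disk failure):
$ echo "scale=2; 5076/2" | bc    # normal redundancy: 2538.00 GB usable
$ echo "scale=2; 5076/3" | bc    # high redundancy:   1692.00 GB usable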
Attachments
This solution has no attachment