Sun Microsystems, Inc.  Sun System Handbook - ISO 4.1 October 2012 Internal/Partner Edition

Asset ID: 1-79-1315926.1
Update Date:2012-08-20
Keywords:

Solution Type  Predictive Self-Healing Sure

Solution  1315926.1 :   11.2.0.1 to 11.2.0.2 Database Upgrade on Exadata Database Machine  


Related Items
  • Exadata Database Machine X2-8
  • Exadata Database Machine X2-2 Qtr Rack
  • Exadata Database Machine X2-2 Full Rack
  • Exadata Database Machine X2-2 Half Rack
  • Exadata Database Machine X2-2 Hardware
  • Exadata Database Machine V2
  • Oracle Exadata Hardware
  • Oracle Exadata Storage Server Software
Related Categories
  • PLA-Support>Database Technology>Engineered Systems>Oracle Exadata>DB: Exadata_EST




In this Document
Purpose
Details
 Oracle Exadata Database Machine Maintenance
 Overview
 Conventions and Assumptions
 References
 Oracle Documentation
 Oracle Support Documents
 Prepare the Existing Environment
 Review Database 11.2.0.2 Upgrade Prerequisites
 Sun Datacenter InfiniBand Switch 36 is running software version 1.1.3-2 or later.
 Database 11.2.0.1 software has, at a minimum, bundle patches DB_BP6 and GI_BP4 applied in the grid home and all database homes on all database servers.
 Exadata Storage Server software is version 11.2.2.1.1 or later.
 Ensure network 169.254.x.x is not currently used on database servers.
 Do not place the new ORACLE_HOME under /opt/oracle.
 Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:
 Download Required Files
 Run Exadata HealthCheck
 Update OPatch in Grid Home and Database Home on All Database Servers
 Apply Fix 12652740 (for unpublished bug 9329767) to the 11.2.0.1 Grid Home
 Install and Upgrade Grid Infrastructure to 11.2.0.2
 Create the new ORACLE_HOME directory where 11.2.0.2 will be installed
 Prepare installation software
 Enable Automatic Memory Management (AMM) for ASM
 Set cluster_interconnects Explicitly for ASM
 Perform 11.2.0.2 Grid Infrastructure software installation and upgrade using OUI
 Relink oracle Executable in Grid Home with RDS
 Implement Necessary Workarounds
 Execute rootupgrade.sh on Each Database Server
 Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Grid Home
 Install Database 11.2.0.2 Software
 Prepare Installation Software
 Perform 11.2.0.2 Database Software Installation with OUI
 Relink oracle Executable in Database Home with RDS
 Update OPatch in New Grid Home and New Database Home on All Database Servers
 Install Latest 11.2.0.2 Bundle Patch - Do Not Perform Post Installation Steps
 Stage the patch
 Create OCM response file if required
 Patch 11.2.0.2 database home
 Patch 11.2.0.2 grid home
 Skip patch post installation steps
 Data Guard only - Apply Fix for Bug 11664046 on Primary and Standby Database Servers
 Stage the patch
 Patch 11.2.0.2 database home
 Apply 11.2.0.2 Bundle Patch Overlay Patches as Specified in Document 888828.1
 Apply Customer-specific 11.2.0.2 One-Off Patches
 Upgrade Database to 11.2.0.2
 Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
 Run Pre-Upgrade Information Tool
 Handle obsolete and underscore parameters
 Review pre-upgrade information tool output
 Set cluster_interconnects Explicitly for All Primary and Standby Databases
 Data Guard only - Synchronize Standby and Switch to 11.2.0.2
 Flush all redo generated on the primary and shutdown
 Shutdown the primary database.
 Disable fast-start failover and Data Guard broker
 Shutdown the standby database and restart it with 11.2.0.2
 Start all primary instances in restricted mode
 Upgrade the Database with Database Upgrade Assistant (DBUA)
 Review and perform steps in Oracle Upgrade Guide, Chapter 4 'After Upgrading to the New Release'
 Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Database Home
 Add Underscore Initialization Parameters Back
 Run 11.2.0.2 Bundle Patch Post Installation Steps
 Data Guard only - Enable Fast-Start Failover and Data Guard Broker
 DBFS only - Perform DBFS Required Updates
 Obtain latest mount-dbfs.sh script from Document 1054431.1
 Edit mount-dbfs.sh script and Oracle Net files for the new 11.2.0.2 environment
 Modify the dbfs_mount cluster resource
 If using wallet-based authentication, recreate the symbolic link to /sbin/mount.dbfs
 Run Exadata HealthCheck
 Deinstall the 11.2.0.1 Database and Grid Homes
References


Applies to:

Exadata Database Machine V2 - Version Not Applicable and later
Exadata Database Machine X2-2 Full Rack - Version Not Applicable and later
Exadata Database Machine X2-2 Half Rack - Version Not Applicable and later
Exadata Database Machine X2-2 Hardware - Version Not Applicable and later
Exadata Database Machine X2-2 Qtr Rack - Version Not Applicable and later
Information in this document applies to any platform.

Purpose

This document provides step-by-step instructions for upgrading from Oracle Database and Oracle Grid Infrastructure version 11.2.0.1 to 11.2.0.2 on Exadata Database Machine.

Details


Oracle Exadata Database Machine Maintenance

11.2.0.1 to 11.2.0.2 Upgrade

Overview


This document provides step-by-step instructions for upgrading from Oracle Database and Oracle Grid Infrastructure version 11.2.0.1 to 11.2.0.2 on Exadata Database Machine.  

There are four main sections to the upgrade:

  • Prepare the Existing Environment - The software versions and patches installed in the current environment must be at certain minimum levels before the upgrade to 11.2.0.2 can begin.  Depending on the patch installed, updates performed during this section may be performed in a rolling manner or may require database-wide downtime.
  • Install and Upgrade Grid Infrastructure to 11.2.0.2 - Grid Infrastructure upgrade from 11.2.0.1 to 11.2.0.2 is always performed in a RAC rolling manner.
  • Install Database 11.2.0.2 Software - Database 11.2.0.2 software installation is performed into a new ORACLE_HOME directory.  It is performed with no impact to the current environment.
  • Upgrade Database to 11.2.0.2 - Database upgrade from 11.2.0.1 to 11.2.0.2 requires database-wide downtime.  Rolling upgrade with Logical Standby or Oracle GoldenGate may be used to reduce database downtime during this section; this topic is not covered in this document.



Conventions and Assumptions

For all patching example commands below, the following is assumed:
  • The database and grid software owner is oracle.
  • The Oracle inventory group is oinstall.
  • The files ~oracle/dbs_group and ~root/dbs_group exist and contain the names of all database servers (see the example after this list).
  • The database home and grid home are the same on all primary and standby servers.
  • The current database home is /u01/app/oracle/product/11.2.0/dbhome_1.
  • The current grid home is /u01/app/11.2.0/grid.
  • The new database home will be /u01/app/oracle/product/11.2.0.2/dbhome_1.
  • The new grid home will be /u01/app/11.2.0.2/grid.
  • The primary database to be upgraded is named PRIM.
  • The standby database associated with primary database PRIM is named STBY.
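
For reference, the dbs_group file is a plain text list of database server host names, one per line.  A quick sketch (host names are examples only) to review its contents and verify dcli connectivity before patching:
(oracle)$ cat ~/dbs_group
dm01db01
dm01db02

(oracle)$ dcli -l oracle -g ~/dbs_group hostname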


References

Oracle Documentation

Oracle Support Documents

  • <Document 888828.1> - Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions
  • <Document 1270094.1> - Exadata Critical Issues
  • <Document 1314319.1> - Bug Fix List: the 11.2.0.2 Patch Bundles for Oracle Exadata Database Machine
  • <Document 1070954.1> - Oracle Database Machine HealthCheck
  • <Document 1054431.1> - Configuring DBFS on Oracle Database Machine
  • <Document 361468.1> - HugePages on Oracle Linux 64-bit
  • <Document 1284070.1> - Updating key software components on database hosts to match those on the cells
  • <Document 884522.1> - How to Download and Run Oracle's Database Pre-Upgrade Utility
  • <Document 1288640.1> - Managed Recovery (MRP) Fails w/ ORA-328 After Upgrade to 11.2.0.2 and Switchover
  • <Document 969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix
  • <Document 1279525.1> - Oracle Exadata Database Machine Grid Infrastructure Upgrade to 11.2.0.2 fails with ASM ORA-4031
  • <Document 1281913.1> - Root Script Fails if ORACLE_BASE is set to /opt/oracle
  • <Document 1299752.1> - How to Downgrade from 11.2.0.2 to 11.2.0.1 Grid Infrastructure using +ASM

 

Prepare the Existing Environment

Here are the steps performed in this section.

  • Review Database 11.2.0.2 Upgrade Prerequisites
  • Download Required Files
  • Run Exadata HealthCheck
  • Update OPatch in Grid Home and Database Home on All Database Servers
  • Apply <Patch 12652740> (for unpublished bug 9329767) to Grid Home on All Database Servers

 

Review Database 11.2.0.2 Upgrade Prerequisites



The following prerequisites must be in place prior to performing the steps in this document to upgrade Database or Grid Infrastructure to 11.2.0.2.

Sun Datacenter InfiniBand Switch 36 is running software version 1.1.3-2 or later.

  • If you must update InfiniBand switch software to meet this requirement, install the most recent version indicated in <Document 888828.1>.

Database 11.2.0.1 software has, at a minimum, bundle patches DB_BP6 and GI_BP4 applied in the grid home and all database homes on all database servers.

  • If you must update Database 11.2.0.1 or Grid Infrastructure 11.2.0.1 software to meet this requirement, install the most recent version indicated in <Document 888828.1>.
  • Apply all overlay and additional patches required for the installed bundle patch.  The list of required overlay and additional patches is found in <Document 888828.1>.
  • If 11.2.0.1 BP11 or earlier is installed, then the fix 12652740 for unpublished bug 9329767 is required for the Grid home.  The patch to install depends on the 11.2.0.1 bundle patch currently installed in the Grid home:
    • BP6 - <Patch 12855122>
    • BP7 - <Patch 12855457>
    • BP8 - <Patch 12855210>
    • BP9 - <Patch 12855250>
    • BP10 - <Patch 12855287>
    • BP11 - <Patch 12859491>
    • BP12 - no patch required
  • Verify that one-off patches currently installed on top of 11.2.0.1 are fixed in 11.2.0.2.  Review <Document 1314319.1> for the list of fixes provided with 11.2.0.2 bundle patches.  If you are unable to determine if a one-off patch is still required on top of 11.2.0.2, contact Oracle Support.

Exadata Storage Server software is version 11.2.2.1.1 or later.

  • If you must update Exadata Storage Server software to meet this requirement, install the most recent version indicated in <Document 888828.1>.
  • If your database servers currently run Oracle Linux 5.3 (kernel version 2.6.18-128), update them to Oracle Linux 5.5 (kernel version 2.6.18-194) to maintain the recommended practice that the OFED software version is the same on database servers and Exadata Storage Servers.  Follow the steps in <Document 1284070.1> to perform this update.  Note that updating Oracle Linux to 5.5 is recommended, not strictly required.

Ensure network 169.254.x.x is not currently used on database servers.

  • The Oracle Clusterware Redundant Interconnect Usage feature, new in 11.2.0.2, uses network 169.254.0.0/16 for the highly available virtual IP (HAIP).  Although this feature is not currently used on Exadata, Oracle Clusterware 11.2.0.2 will manage a virtual interface for the HAIP with an IP address in the 169.254.0.0/16 network.  To avoid conflicting with the HAIP address, ensure the 169.254.x.x network is not currently in use on the database servers (note that per RFC 3927, network 169.254.x.x should not be used for static addresses).  For each database server, confirm this by checking that the command "/sbin/ifconfig | grep ' 169.254'" returns no rows, as shown below.  If 169.254.x.x is in use, it must be changed prior to the 11.2.0.2 upgrade or the upgrade will fail.  Follow the instructions in the section titled "Changing InfiniBand IP Addresses and Host Names" in chapter 7 of the Oracle Exadata Database Machine Owner's Guide.
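
For example, the check can be run across all database servers in one step with dcli (a sketch; every server should return a count of 0):
(oracle)$ dcli -l oracle -g ~/dbs_group "/sbin/ifconfig | grep ' 169.254' | wc -l"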

Do not place the new ORACLE_HOME under /opt/oracle.

  • If this is done, see <Document 1281913.1> for additional steps required after software is installed.

Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:

    • If the target 11.2.0.2 bundle patch is BP6 or earlier, then the fix for <bug 11664046> is required.  The fix is included in 11.2.0.2 BP7.  If installing 11.2.0.2 BP7 or later, a separate overlay patch is not required.
    • The standby database is running in real-time apply mode as determined by verifying v$archive_dest_status.recovery_mode='MANAGED REAL TIME APPLY' for the local archive destination on the standby database.
    • The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.


Download Required Files



Download and stage the following files in /u01/app/oracle/patchdepot on database servers. 

Data Guard - If there is a standby database, stage the files on standby database servers also.

Files staged on first database server only
  • <Patch 10098816> - Oracle Database 11g, Release 2 (11.2.0.2) Patch Set 1 for Linux x86-64
    • p10098816_112020_Linux-x86-64_1of7.zip - Oracle Database
    • p10098816_112020_Linux-x86-64_2of7.zip - Oracle Database
    • p10098816_112020_Linux-x86-64_3of7.zip - Oracle Grid Infrastructure
    • p10098816_112020_Linux-x86-64_7of7.zip - Deinstall tool
Files staged on all database servers.
  • <Patch 6880880> - OPatch latest update
    • p6880880_112000_Linux-x86-64.zip
  • Fix 12652740 for the unpublished 11.2.0.1 CRS rolling upgrade bug 9329767
    • This fix is not required if you currently have 11.2.0.1 Database BP12 installed in the grid home.  11.2.0.1 Database BP12 includes patch 12652740, which fixes this bug.
    • The patch to download depends on the 11.2.0.1 bundle patch currently installed in the Grid home:
      • BP6 - <Patch 12855122>
      • BP7 - <Patch 12855457>
      • BP8 - <Patch 12855210>
      • BP9 - <Patch 12855250>
      • BP10 - <Patch 12855287>
      • BP11 - <Patch 12859491>
      • BP12 - no patch required
  • Latest Database 11.2.0.2 bundle patch for Exadata
    • Refer to <Document 888828.1> for the latest 11.2.0.2 bundle patch
    • <Patch 11828582> - Bundle Patch 5 is used within this document - p11828582_112020_Linux-x86-64.zip
  • Data Guard only - Bundle patch overlay fix for unpublished bug 11664046.  See <Document 1288640.1> for details.
    • The fix for bug 11664046 is included in 11.2.0.2 BP7 and later.  If installing 11.2.0.2 BP7 or later, a separate overlay patch is not required.
    • If installing 11.2.0.2 BP6 or earlier, the overlay patch required must match the 11.2.0.2 bundle patch installed.  At the time of publication there are two overlay patches available.
      • BP4 overlay <Patch 11868617>
      • BP5 overlay <Patch 12312927>
    • If an overlay patch for the 11.2.0.2 bundle patch you will install is not listed above, either contact Oracle Support to check on availability of an overlay patch for your bundle patch, or utilize the workaround described in <Document 1288640.1>.  This is described in more detail later in this document.
    • <Patch 12312927> - Overlay fix for unpublished bug 11664046 on top of 11.2.0.2 BP5 is used within this document - p12312927_112020_Linux-x86-64.zip
For files that are staged on all database servers, use the following command to distribute the files to all database servers (using p6880880_112000_Linux-x86-64.zip in this example):

(oracle)$ dcli -l oracle -g ~/dbs_group -f p6880880_112000_Linux-x86-64.zip \
          -d /u01/app/oracle/patchdepot

 


 

Run Exadata HealthCheck

Run HealthCheck to validate software, hardware, firmware, and configuration best practices.  Resolve any issues identified by HealthCheck before proceeding with the upgrade.  Review <Document 1070954.1> for details.

Update OPatch in Grid Home and Database Home on All Database Servers


Run both of these commands on one database server.

Data Guard - If there is a standby database, run these commands on one standby database server also.
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/11.2.0/grid \
          /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip

(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/oracle/product/11.2.0/dbhome_1 \
          /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip


Apply Fix 12652740 (for unpublished bug 9329767) to the 11.2.0.1 Grid Home


The patch to install depends on the currently installed 11.2.0.1 bundle patch in the grid home.  See the prerequisites section above for details.

This step is not required if you currently have 11.2.0.1 Database BP12 installed in the grid home.
  • This patch must be installed in the 11.2.0.1 grid home on all database servers.
  • This patch is RAC rolling installable.
  • Follow installation instructions in the README.
Data Guard - If there is a standby database, run these commands on the standby database servers also.



 

Install and Upgrade Grid Infrastructure to 11.2.0.2


The commands in this section will perform the Grid Infrastructure software installation and upgrade to 11.2.0.2.  Grid Infrastructure upgrade from 11.2.0.1 to 11.2.0.2 is performed in a RAC rolling fashion.

Data Guard - If there is a standby database, run these commands on the standby system separately to upgrade the standby system Grid Infrastructure.  The standby Grid Infrastructure upgrade can be performed in parallel with the primary, if desired.


Here are the steps performed in this section.

  • Create the new ORACLE_HOME directory where 11.2.0.2 will be installed
  • Prepare installation software
  • Enable Automatic Memory Management (AMM) for ASM
  • Set cluster_interconnects Explicitly for ASM
  • Perform 11.2.0.2 Grid Infrastructure software installation and upgrade using OUI
  • Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Grid Home

 

Create the new ORACLE_HOME directory where 11.2.0.2 will be installed


In this document the new Grid Infrastructure home /u01/app/11.2.0.2/grid is used in all examples.  It is recommended that the new Grid Infrastructure home NOT be located under /opt/oracle.  If it is, review <Document 1281913.1>.

To create the new Grid Infrastructure home, run these commands from the first database server.  You will need to substitute your Grid Infrastructure owner username and Oracle inventory group name in place of oracle and oinstall, respectively.
(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/11.2.0.2/grid/
(root)# dcli -g ~/dbs_group -l root chown oracle /u01/app/11.2.0.2/grid
(root)# dcli -g ~/dbs_group -l root chgrp -R oinstall /u01/app/11.2.0.2


Prepare installation software


Unzip the 11.2.0.2 grid software.  Run this command on the first database server.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_Linux-x86-64_3of7.zip \
          -d /u01/app/oracle/patchdepot


Enable Automatic Memory Management (AMM) for ASM


Enable automatic memory management (AMM) by setting memory_target=1025M, and disable manual memory management by running the ALTER SYSTEM statements below.  The new parameter settings are made in the SPFILE only.  They will take effect when ASM is restarted later in the upgrade process.

AMM is not compatible with hugepages, hence the ASM instance will not use hugepages.  If you previously configured hugepages on database servers and you currently allocate enough hugepages to accommodate database and ASM, then you should lower the vm.nr_hugepages setting to avoid wasting memory since ASM will no longer use hugepages.  Review <Document 361468.1> for details.
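
As a quick check before adjusting vm.nr_hugepages, current HugePages allocation and usage can be reviewed on all database servers with the commands below (a sketch; the appropriate new value depends on your database SGA sizes and is covered in <Document 361468.1>):
(root)# dcli -l root -g ~/dbs_group "grep -i hugepages /proc/meminfo"
(root)# dcli -l root -g ~/dbs_group "grep vm.nr_hugepages /etc/sysctl.conf"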

Connect to one ASM instance and run the commands below.  The ALTER SYSTEM statement will report ORA-32010 if the parameter being reset is not currently set in the SPFILE.  This error can be ignored.
SYS@+ASM1> alter system set memory_target=1025M scope=spfile;
SYS@+ASM1> alter system reset sga_target scope=spfile;
SYS@+ASM1> alter system reset sga_max_size scope=spfile;
SYS@+ASM1> alter system reset pga_aggregate_target scope=spfile;
SYS@+ASM1> alter system reset memory_max_target scope=spfile;

SYS@+ASM1> select sid, name, value from v$spparameter
     where name in
       ('memory_target','sga_target','sga_max_size',                        
        'pga_aggregate_target','memory_max_target');

SID    NAME                      VALUE
------ ------------------------- -----------------------------------
*      sga_max_size
*      sga_target
*      memory_target             1074790400
*      memory_max_target
*      pga_aggregate_target

If this step is not followed and the Grid Infrastructure upgrade to 11.2.0.2 fails because ASM failed to start with an ORA-4031 error, refer to <Document 1279525.1> for the manual steps required to complete the Grid Infrastructure upgrade.


Set cluster_interconnects Explicitly for ASM


As a result of the new redundant interconnect feature in 11.2.0.2, if initialization parameter cluster_interconnects is not set, 11.2.0.2 will use an address in the automatically configured network 169.254.0.0/16.  Databases running on Exadata upgraded to 11.2.0.2 should continue to use the same cluster interconnect as used in 11.2.0.1.  This is accomplished by explicitly setting the initialization parameter cluster_interconnects.

Perform the steps below for ASM.  The cluster_interconnect parameter for databases will be reset later in this process.

Data Guard - If you are upgrading a Data Guard environment, then also perform the steps below for ASM on the standby system.

The cluster_interconnects parameter is set only in the SPFILE.  It will take effect on the ASM restart that occurs later in the upgrade process.  There is no need to restart ASM before the upgrade.
SYS@+ASM1> select inst_id, name, ip_address from gv$cluster_interconnects;

INST_ID    NAME            IP_ADDRESS
---------- --------------- ----------------
 1         bond0           192.168.10.1
 2         bond0           192.168.10.2

SYS@+ASM1> alter system set cluster_interconnects='192.168.10.1' sid='+ASM1' scope=spfile;
SYS@+ASM1> alter system set cluster_interconnects='192.168.10.2' sid='+ASM2' scope=spfile;

To verify
SYS@+ASM1> select sid, value from v$spparameter where name = 'cluster_interconnects';

SID     VALUE
------- ------------------------------
+ASM1   192.168.10.1
+ASM2   192.168.10.2


Perform 11.2.0.2 Grid Infrastructure software installation and upgrade using OUI


Perform these instructions as the grid user (which is oracle in this document) to install the 11.2.0.2 Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 11.2.0.1 to 11.2.0.2.  The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion.  The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 11.2.0.2 grid home the active grid home.

The OUI installation log is located at /u01/app/oraInventory/logs.

To downgrade Oracle Clusterware back to 11.2.0.1 after a successful upgrade, follow the instructions in <Document 1299752.1>.

If the upgrade fails, refer to <Document 969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.

Set the environment then run the installer.
(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/grid
(oracle)$ ./runInstaller
Starting Oracle Universal Installer...
Perform these actions on the installer screens
  1. On Download Software Updates screen, select Skip software updates, and then click Next.
  2. On Select Installation Options screen, select Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management, and then click Next.
  3. On Select Product Languages screen, select languages, and then click Next.
  4. On Grid Infrastructure Node Selection screen, verify all database nodes are shown and selected, and then click Next.
  5. On Privileged Operating System Groups screen, verify group names and change if desired, and then click Next.
  6. On Specify Installation Location screen, enter /u01/app/11.2.0.2/grid as the Software Location for the Grid Infrastructure home, and then click Next.
    • It is recommended that the Grid Infrastructure home NOT be placed under /opt/oracle.
  7. On Perform Prerequisite Checks screen, the two checks noted below may fail.  If these are the only failed checks, click Ignore All, and then click Next.  Other failed checks must be investigated.
    • Swap Size - swap space is 15.99GB, expected 16GB.  This failure is ignorable.
    • Hardware Clock synchronization at shutdown - filed as unpublished bug 10199076.  This failure is ignorable.
  8. On Summary screen, verify information presented about installation, and then click Install.
  9. On Install Product screen, monitor installation progress.
  10. On Execute Configuration scripts screen, perform the following steps in order:

Relink oracle Executable in Grid Home with RDS

Relink oracle executable with RDS.  Run the following command as the grid user from the first database server.  This command will perform the relink on all database servers.
(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/11.2.0.2/grid \
          make -C /u01/app/11.2.0.2/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle

Implement Necessary Workarounds

Implement workarounds for unpublished bug 10011084 and unpublished bug 10128494.

1.    Make a backup copy of crsconfig_lib.pm on all database servers, then open the file for editing on the first database server.
(oracle)$ dcli -l oracle -g ~/dbs_group \
          cp /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm \
          /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm.orig

(oracle)$ vi /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm
2.    Make the following changes:
From
 @cmdout = grep(/$bugid/, @output);
To
 @cmdout = grep(/(9655006|9413827)/, @output);

From
my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR
To
my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR read_file
3.    Verify the changes.  The output of the diff(1) command should match the following:
(oracle)$ diff crsconfig_lib.pm.orig crsconfig_lib.pm
699c699
< my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR
---
> my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR read_file
13277c13277
< @cmdout = grep(/$bugid/, @output);
---
> @cmdout = grep(/(9655006|9413827)/, @output);
4.    Distribute the changed file to all other database servers
(oracle)$ dcli -l oracle -g ~/dbs_group \
          -f /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm \
          -d /u01/app/11.2.0.2/grid/crs/install

Execute rootupgrade.sh on Each Database Server

Execute rootupgrade.sh on each database server, as indicated in the Execute Configuration scripts screen.

Run the rootupgrade.sh script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.

If the upgrade fails, refer to <Document 969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.


After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for one, which you select as the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node.  Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
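
Run the script as root from the new grid home on each database server, for example:
(root)# /u01/app/11.2.0.2/grid/rootupgrade.sh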

  • Due to <bug 10056593>, rootupgrade.sh will report this error and continue.  This error is ignorable.
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
  • Due to <bug 10241443>, rootupgrade.sh may report the following error when installing the cvuqdisk package.  This error is ignorable.
ls: /usr/sbin/smartctl: No such file or directory
/usr/sbin/smartctl not found.
  • First node rootupgrade.sh will complete with this output
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
  • Last node rootupgrade.sh will complete with this output
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.2.0

ASM upgrade has finished on last node.

Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Click OK.
  11. On Finish screen, click Close.


Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Grid Home


Customized administration and login scripts that reference the grid home should be updated to refer to /u01/app/11.2.0.2/grid.
Ensure the following updates are performed:
  • init.ora files (copy to the new home)
  • oratab entries for instances (database entries are handled by DBUA)
  • password file (copy from the old home to the new home; see the sketch below)
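
A sketch of copying the ASM-related files from the old grid home dbs directory to the new one (file names assume the default +ASM naming; adjust for your environment):
(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp -p /u01/app/11.2.0/grid/dbs/init+ASM*.ora /u01/app/11.2.0.2/grid/dbs'

(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp -p /u01/app/11.2.0/grid/dbs/orapw+ASM* /u01/app/11.2.0.2/grid/dbs'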



 

Install Database 11.2.0.2 Software


The commands in this section will perform the Database software installation of 11.2.0.2 into a new directory.  Beginning with 11.2.0.2 database patch sets are full releases.  It is no longer required to install the base release first before installing a patch set.  Refer to <Document 1189783.1> for additional details.

This section only installs Database 11.2.0.2 software into a new directory.  It does not affect running databases.

Data Guard - If there is a separate system running a standby database, run the steps in this section on the standby system separately to install the Database 11.2.0.2 software.  The steps in this section can be performed in any of the following ways:
  • Install Database 11.2.0.2 software on the primary system first then the standby system.
  • Install Database 11.2.0.2 software on the standby system first then the primary system.
  • Install Database 11.2.0.2 software on both the primary and standby systems simultaneously.

Here are the steps performed in this section.

  • Prepare Installation Software
  • Perform 11.2.0.2 Database Software Installation with OUI
  • Relink oracle Executable in Database Home with RDS
  • Update OPatch in New Grid Home and New Database Home on All Database Servers
  • Install Latest 11.2.0.2 Bundle Patch - Do Not Perform Post Installation Steps
  • Data Guard only - Apply Fix for Bug 11664046 on Primary and Standby Database Servers
  • Apply 11.2.0.2 Bundle Patch Overlay Patches as Specified in Document 888828.1
  • Apply Customer-specific 11.2.0.2 One-Off Patches

Prepare Installation Software


Unzip the 11.2.0.2 database software.  Run these commands on the first database server.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_Linux-x86-64_1of7.zip -d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_Linux-x86-64_2of7.zip -d /u01/app/oracle/patchdepot


Perform 11.2.0.2 Database Software Installation with OUI


The OUI installation log is located at /u01/app/oraInventory/logs.

Set the environment then run the installer.
(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ export DISPLAY=<your_xserver>:0
(oracle)$ cd /u01/app/oracle/patchdepot/database
(oracle)$ ./runInstaller
Perform these actions on the installer screens
  1. On Configure Security Updates screen, fill in required fields, and then click Next.
  2. On Download Software Updates screen, select Skip software updates, and then click Next.
  3. On Select Installation Option screen, select Install database software only, and then click Next.
  4. On Grid Installation Options screen, select Oracle Real Application Clusters database installation, click Select All. Verify all database servers are present in the list and are selected, and then click Next.
  5. On Select Product Languages screen, select languages, and then click Next.
  6. On Select Database Edition, select Enterprise Edition, click Select Options to choose components to install, and then click Next.
  7. On Specify Installation Location, enter /u01/app/oracle/product/11.2.0.2/dbhome_1 as the Software Location for the Database home, and then click Next.
    • It is recommended that the Database home NOT be placed under /opt/oracle.
  8. On Privileged Operating System Groups screen, verify group names, and then click Next.
  9. On Perform Prerequisite Checks screen, the one check noted below may fail.  If this is the only failed check, click Ignore All, and then click Next.  Other failed checks must be investigated.
    • Swap Size - swap space is 15.99GB, expected 16GB.  This failure is ignorable.
  10. On Summary screen, verify information presented about installation, and then click Install.
  11. On Install Product screen, monitor installation progress.
  12. On Execute Configuration scripts screen, execute root.sh on each database server as instructed, and then click OK
  13. On Finish screen, click Close.


Relink oracle Executable in Database Home with RDS


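The relink is analogous to the grid home relink performed earlier, this time against the new 11.2.0.2 database home.  A sketch of the command, run as the oracle user from the first database server (it performs the relink on all database servers):
(oracle)$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1 \
          make -C /u01/app/oracle/product/11.2.0.2/dbhome_1/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle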

Update OPatch in New Grid Home and New Database Home on All Database Servers


Run both of these commands on one database server.
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq \
          -d /u01/app/11.2.0.2/grid /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip

(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq \
          -d /u01/app/oracle/product/11.2.0.2/dbhome_1 \
          /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip


Install Latest 11.2.0.2 Bundle Patch - Do Not Perform Post Installation Steps


The example commands below install BP5 (<Patch 11828582>).  At the time of writing it is the latest 11.2.0.2 bundle patch and is the recommended BP for new 11.2.0.2 installations.  Review <Document 888828.1> for the latest release information.

The commands to install the latest 11.2.0.2 BP are run on each database server individually.  They can be run in parallel across database servers if there is no need to install in a RAC rolling manner.

Stage the patch

Unzip the patch on all database servers.
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oq -d /u01/app/oracle/patchdepot \
          /u01/app/oracle/patchdepot/p11828582_112020_Linux-x86-64.zip

Create OCM response file if required

If you do not have the OCM response file, run this command on each database server:
(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ /u01/app/oracle/product/11.2.0.2/dbhome_1/OPatch/ocm/bin/emocmrsp

Patch 11.2.0.2 database home

Run this command on each database server.  Note there are no databases running out of this home yet.
(root)# export PATH=$PATH:/u01/app/11.2.0.2/grid/OPatch
(root)# opatch auto /u01/app/oracle/patchdepot/11828582/ \
        -oh /u01/app/oracle/product/11.2.0.2/dbhome_1 \
        -ocmrf /u01/app/oracle/patchdepot/ocm.rsp

Patch 11.2.0.2 grid home

Run this command on each database server.  This command will shut down Oracle Clusterware, which will impact availability of the instances running on the current database server.
(root)# opatch auto /u01/app/oracle/patchdepot/11828582/ \
        -oh /u01/app/11.2.0.2/grid \
        -ocmrf /u01/app/oracle/patchdepot/ocm.rsp

Skip patch post installation steps

Do not perform patch post installation.  Patch post installation steps will be run after the database is upgraded.


Data Guard only - Apply Fix for Bug 11664046 on Primary and Standby Database Servers


Skip this step if you applied 11.2.0.2 BP7 or later.

Review <Document 1288640.1> for additional details.  This fix is applied on top of the bundle patch installed in the previous step.  The specific fix you require depends on the bundle patch installed.
    
The required overlay patch depends on the bundle patch installed:
  • BP5 - <Patch 12312927>
  • BP4 - <Patch 11868617>


If you installed a different 11.2.0.2 BP and an overlay fix for unpublished bug 11664046 is not available for that BP, then recreate the standby control file prior to performing a switchover operation.  Review <Document 1288640.1> for additional details.

Stage the patch

Unzip the patch on all database servers.
(oracle)$ dcli -l oracle -g ~/dbs_group unzip -oqu \
          -d /u01/app/oracle/patchdepot \
          /u01/app/oracle/patchdepot/p12312927_112020_Linux-x86-64.zip

Patch 11.2.0.2 database home

Run this command from one database server; the -all_nodes option applies the patch on all nodes.  Note there are no databases running out of this home yet.
(oracle)$ cd /u01/app/oracle/patchdepot/12312927
(oracle)$ opatch apply -oh /u01/app/oracle/product/11.2.0.2/dbhome_1 -all_nodes


Apply 11.2.0.2 Bundle Patch Overlay Patches as Specified in Document 888828.1


Review <Document 888828.1> to identify and apply patches that must be installed on top of the bundle patch just installed.  If there is SQL that must be run against the database as part of the patch application, postpone running the SQL until after the database is upgraded.

BP5, the bundle patch example used in this document, currently requires no overlay patches.


Apply Customer-specific 11.2.0.2 One-Off Patches


If there are one-offs that need to be applied to the environment, apply them now.  If there is SQL that must be run against the database as part of the patch application, postpone running the SQL until after the database is upgraded.



 

Upgrade Database to 11.2.0.2


The commands in this section will perform the database upgrade to 11.2.0.2.

Data Guard - Unless otherwise indicated, run these steps only on the primary database.


Here are the steps performed in this section.

  • Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
  • Set cluster_interconnects Explicitly for All Primary and Standby Databases
  • Data Guard only - Synchronize Standby and Switch to 11.2.0.2
  • Upgrade the Database with Database Upgrade Assistant (DBUA)
  • Review and perform steps in Oracle Upgrade Guide, Chapter 4 'After Upgrading to the New Release'
  • Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Database Home
  • Add Underscore Initialization Parameters Back
  • Run 11.2.0.2 Bundle Patch Post Installation Steps
  • Data Guard only - Enable Fast-Start Failover and Data Guard Broker
  • DBFS only - Perform DBFS Required Updates
  • Run Exadata HealthCheck
  • Deinstall the 11.2.0.1 Database and Grid Homes

Analyze the Database to Upgrade with the Pre-Upgrade Information Tool


The pre-upgrade information tool is provided with the 11.2.0.2 software.  It is also provided standalone as an attachment to <Document 884522.1>.  Run this tool to analyze the 11.2.0.1 database prior to upgrade.

Run Pre-Upgrade Information Tool

At this point the database is still running with 11.2.0.1 software.  Connect to the database with your environment set to 11.2.0.1 and run the pre-upgrade information tool that is located in the 11.2.0.2 database home.
SYS@PRIM1> spool preupgrade_info.log
SYS@PRIM1> @/u01/app/oracle/product/11.2.0.2/dbhome_1/rdbms/admin/utlu112i.sql

Handle obsolete and underscore parameters

Obsolete and underscore parameters will be identified by the pre-upgrade information tool.  During the upgrade, DBUA will remove the obsolete and underscore parameters from the primary database initialization parameter file.  Some underscore parameters that DBUA removes will be added back in later in this document after DBUA completes the upgrade.

To avoid unpublished bug 10017332, manually reset cell_partition_large_extents on the primary database.
SYS@PRIM1> alter system reset cell_partition_large_extents scope=spfile;
Data Guard only - DBUA will not affect parameters set on the standby, hence obsolete parameters and some underscore parameters must be removed manually.
SYS@STBY1> alter system reset cell_partition_large_extents scope=spfile;
SYS@STBY1> alter system reset "_arch_comp_dbg_scan" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufsz" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufcnt" scope=spfile;

Review pre-upgrade information tool output

Review the remaining output of the pre-upgrade information tool.  Take action on areas identified in the output.


Set cluster_interconnects Explicitly for All Primary and Standby Databases


As a result of the new redundant interconnect feature in 11.2.0.2, if initialization parameter cluster_interconnects is not set, 11.2.0.2 will use an address in the automatically configured network 169.254.0.0/16. Databases running on Exadata upgraded to 11.2.0.2 should continue to use the same cluster interconnect as used in 11.2.0.1. This is accomplished by explicitly setting the initialization parameter cluster_interconnects.

Perform the steps below for all primary and standby databases that will be upgraded. It is only necessary, however, to perform this step for one instance for each database that will be upgraded. For example, if two databases (PRIM and DBFS) are upgraded, and PRIM has a standby database STBY, then you will perform the steps below against three instances: PRIM1, STBY1, and DBFS1.

Data Guard - If you are upgrading a Data Guard environment, then also perform the steps below for all standby databases.

The cluster_interconnects parameter is set only in the SPFILE. It will take effect on the database restart that occurs later in the upgrade process. There is no need to restart the database before the upgrade.

SYS@PRIM1> select inst_id, name, ip_address from gv$cluster_interconnects;

INST_ID    NAME            IP_ADDRESS
---------- --------------- ----------------
1          bond0           192.168.10.1
2          bond0           192.168.10.2

SYS@PRIM1> alter system set cluster_interconnects='192.168.10.1' sid='PRIM1' scope=spfile;
SYS@PRIM1> alter system set cluster_interconnects='192.168.10.2' sid='PRIM2' scope=spfile;

To verify
SYS@PRIM1> select sid, value from v$spparameter where name = 'cluster_interconnects';

SID     VALUE
------- ------------------------------
PRIM1   192.168.10.1
PRIM2   192.168.10.2


Data Guard only - Synchronize Standby and Switch to 11.2.0.2


Perform these steps only if there is a physical standby database associated with the database being upgraded.

As indicated in the prerequisites section above, the following must be true:
  • The standby database is running in real-time apply mode.
  • The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.

Flush all redo generated on the primary and shutdown

To ensure all redo generated by the primary database running 11.2.0.1 is applied to the standby database running 11.2.0.1, all redo must be flushed from the primary to the standby.
First, verify the standby database is running recovery in real-time apply.  Run the following query connected to the standby database.  If this query returns no rows, then real-time apply is not running.
SYS@STBY1> select dest_name from v$archive_dest_status
  where recovery_mode = 'MANAGED REAL TIME APPLY';

DEST_NAME
------------------------------
LOG_ARCHIVE_DEST_1
Shutdown the primary database and restart just one instance in mount mode.
(oracle)$ srvctl stop database -d PRIM -o immediate
(oracle)$ srvctl start instance -d PRIM -n dm01db01 -o mount
Verify the primary database has specified db_unique_name of the standby database in the log_archive_dest_n parameter setting.
SYS@PRIM1> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
-------------------------------------------------------------------------------
service="gih_stby" LGWR SYNC AFFIRM delay=0 optional compression=disable max_fa
ilure=0 max_connections=1 reopen=300 db_unique_name="STBY" net_timeout=30 valid
_for=(all_logfiles,primary_role)
Flush all redo to the standby database.  Standby database db_unique_name in this example is 'STBY'.
SYS@PRIM1> alter system flush redo to 'STBY';

Shutdown the primary database.

(oracle)$ srvctl stop database -d PRIM -o immediate

Disable fast-start failover and Data Guard broker

Disable Data Guard broker if it is configured.  If fast-start failover is configured, it must be disabled before broker configuration is disabled.
DGMGRL> disable fast_start failover;
DGMGRL> disable configuration;

Shutdown the standby database and restart it with 11.2.0.2

Perform the following steps on the standby database server:

Make backup copy of initialization parameters.
SYS@STBY1> create pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initstby.backup' from spfile;
Shutdown the standby database
(oracle)$ srvctl stop database -d stby
Copy required files from 11.2.0.1 home to 11.2.0.2 home.
(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwstby* \
          /u01/app/oracle/product/11.2.0.2/dbhome_1/dbs'

(oracle)$ dcli -l oracle -g ~/dbs_group \
          'cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/initstby*.ora \
          /u01/app/oracle/product/11.2.0.2/dbhome_1/dbs'
Edit standby environment files
  • Edit the standby database entry in /etc/oratab to point to 11.2.0.2.
  • On both the primary and standby database servers, ensure the tnsnames.ora entries are available to the database after it has been upgraded.  If using the default location for tnsnames.ora, $ORACLE_HOME/network/admin, copy tnsnames.ora from the old home to the new home.
  • If using Data Guard Broker to manage the configuration, modify the broker required SID_LIST listener.ora entry on all nodes to point to the new ORACLE_HOME.  For example
SID_LIST_LISTENER =
(SID_LIST =
  (SID_DESC =
  (GLOBAL_DBNAME=PRIM_dgmgrl)
  (SID_NAME = PRIM1)
  (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2/dbhome_1)
  )
)

Update the OCR configuration data for the standby database.
(oracle)$ srvctl upgrade database -d stby -o /u01/app/oracle/product/11.2.0.2/dbhome_1
Start the standby.
(oracle)$ srvctl start database -d stby

Start all primary instances in restricted mode

DBUA requires all RAC instances to be running.  To prevent an application from accidentally connecting to the primary database and performing work that would cause the standby to fall behind, start the primary database in restricted mode.
(oracle)$ srvctl start database -d PRIM -o restrict


Upgrade the Database with Database Upgrade Assistant (DBUA)


Run DBUA to upgrade the primary database.  All database instances should be up.  If there is a standby database, the primary database should be left running in restricted mode, as performed in the previous step.
NOTE: Verify you run DBUA from the new 11.2.0.2 ORACLE_HOME.
(oracle)$ /u01/app/oracle/product/11.2.0.2/dbhome_1/bin/dbua
Perform these actions on the DBUA screens
  1. On Welcome screen, click Next.
  2. On Select Database screen, select the DATABASE (not the instance!) to be upgraded, and then click Next.
    • Enter a local instance name if requested.
  3. On Upgrade Options screen, select the desired options, and then click Next.
    • If you have a standby database, do NOT select to turn off archiving during the upgrade.
  4. On Recovery and Diagnostic Locations screen, click Next.
  5. On Management Options screen, select desired Enterprise Manager options, and then click Next.
  6. On Summary screen, verify information presented about the database upgrade, and then click Finish.
  7. On Progress screen, when the upgrade is complete, click OK.
  8. On Upgrade Results screen, review the upgrade result and investigate any failures, and then click Close.

The database upgrade to 11.2.0.2 is complete.  There are additional actions to perform to complete configuration.


Review and perform steps in Oracle Upgrade Guide, Chapter 4 'After Upgrading to the New Release'


The Oracle Upgrade Guide documents required and recommended tasks to perform after upgrading to 11.2.0.2.  Since the database was upgraded from 11.2.0.1, some tasks do not apply.  The following list is the minimum set of tasks that should be reviewed for your environment; a quick check for two of them is sketched after the list.
  • Update Environment Variables
  • Upgrade the Recovery Catalog
  • Upgrade the Time Zone File Version
  • Advance the Oracle ASM and Oracle Database Disk Group Compatibility
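
For two of these tasks, the current state can be checked quickly with the following queries against the upgraded database (a sketch):
SYS@PRIM1> select * from v$timezone_file;
SYS@PRIM1> select name, compatibility, database_compatibility from v$asm_diskgroup;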


Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Database Home


The primary database is upgraded and is now running out of the 11.2.0.2 database home.  Customized administration and login scripts that reference database home ORACLE_HOME should be updated to refer to /u01/app/oracle/product/11.2.0.2/dbhome_1.
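
To confirm which homes are still referenced on each database server, the oratab entries can be listed with dcli (a sketch):
(oracle)$ dcli -l oracle -g ~/dbs_group "grep -v '^#' /etc/oratab"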


Add Underscore Initialization Parameters Back


DBUA removes obsolete and underscore initialization parameters.  Two underscore parameters must be added back in, and one underscore parameter is new.  Run the following to set the proper underscore parameters.
SYS@PRIM1> alter system set "_lm_rcvr_hang_allow_time"=140 scope=both;
SYS@PRIM1> alter system set "_kill_diagnostics_timeout"=140 scope=both;
SYS@PRIM1> alter system set "_file_size_increase_increment"=2143289344 scope=both;
Note - if you did not apply 11.2.0.2 BP2 or later then you must also set _parallel_cluster_cache_policy.  Review <Document 1270094.1> for details.

Data Guard only - DBUA will not affect parameters set on the standby, hence previously set underscore parameters will remain in place.  Only additional underscore parameters need to be added for a standby database.
SYS@STBY1> alter system set "_file_size_increase_increment"=2143289344 scope=both;


Run 11.2.0.2 Bundle Patch Post Installation Steps


The bundle patch installation performed before the database upgrade has a post installation step that requires running SQL against the database.  Perform the Patch Post Installation steps documented in the bundle patch README on one database server only.  Review the bundle patch README for details.  The steps below are those required for BP5.
Run catbundle.sql to load the required bundle patch SQL.
SYS@PRIM1> @?/rdbms/admin/catbundle.sql exa apply

Navigate to the <ORACLE_HOME>/cfgtoollogs/catbundle directory (if ORACLE_BASE is defined, the logs are created under <ORACLE_BASE>/cfgtoollogs/catbundle) and check the following log files for any errors, for example with "grep ^ORA <logfile> | sort -u".  If there are errors, refer to the Known Issues section of the bundle patch README.  The format of <TIMESTAMP> is YYYYMMMDD_HH_MM_SS.
catbundle_EXA_<database SID>_APPLY_<TIMESTAMP>.log
catbundle_EXA_<database SID>_GENERATE_<TIMESTAMP>.log
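
For example, assuming ORACLE_BASE is /u01/app/oracle as in the conventions above, the logs can be checked with (a sketch):
(oracle)$ cd /u01/app/oracle/cfgtoollogs/catbundle
(oracle)$ grep ^ORA catbundle_EXA_*_APPLY_*.log catbundle_EXA_*_GENERATE_*.log | sort -u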


Data Guard only - Enable Fast-Start Failover and Data Guard Broker


If Data Guard broker and fast-start failover were disabled in a previous step, re-enable them.
DGMGRL> enable configuration;
DGMGRL> enable fast_start failover;


DBFS only - Perform DBFS Required Updates


When the DBFS database is upgraded to 11.2.0.2 the following additional actions are required:

Obtain latest mount-dbfs.sh script from Document 1054431.1

Download the latest mount-dbfs.sh script that is attached to <Document 1054431.1> and place it in directory /u01/app/11.2.0.2/grid/crs/script under the new 11.2.0.2 grid home.
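
A sketch of staging the downloaded script on all database servers, run from the directory containing the downloaded mount-dbfs.sh (create the target directory as root first if permissions require it):
(oracle)$ dcli -l oracle -g ~/dbs_group mkdir -p /u01/app/11.2.0.2/grid/crs/script
(oracle)$ dcli -l oracle -g ~/dbs_group -f mount-dbfs.sh -d /u01/app/11.2.0.2/grid/crs/script
(oracle)$ dcli -l oracle -g ~/dbs_group chmod +x /u01/app/11.2.0.2/grid/crs/script/mount-dbfs.sh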

Edit mount-dbfs.sh script and Oracle Net files for the new 11.2.0.2 environment

Using the variable settings from the original mount-dbfs.sh script, edit the variable settings in the new mount-dbfs.sh script to match your environment.  The setting for variable ORACLE_HOME must be changed to match the 11.2.0.2 ORACLE_HOME /u01/app/oracle/product/11.2.0.2/dbhome_1. 

Edit tnsnames.ora used for DBFS to change the directory referenced for the parameters PROGRAM and ORACLE_HOME to the new 11.2.0.2 database home.
fsdb.local =
(DESCRIPTION =
   (ADDRESS =
     (PROTOCOL=BEQ)
     (PROGRAM=/u01/app/oracle/product/11.2.0.2/dbhome_1/bin/oracle)
     (ARGV0=oraclefsdb1)
     (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
     (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1,ORACLE_SID=fsdb1')
   )
   (CONNECT_DATA=(SID=fsdb1))
)

If the location of Oracle Net files changed as a result of the upgrade, then change the setting of TNS_ADMIN in shell scripts and login files.

Modify the dbfs_mount cluster resource

Update the ACTION_SCRIPT attribute of the dbfs_mount cluster resource to refer to the new location of mount-dbfs.sh.
(root)# crsctl modify res dbfs_mount -attr \
"ACTION_SCRIPT=/u01/app/11.2.0.2/grid/crs/script/mount-dbfs.sh"

If using wallet-based authentication, recreate the symbolic link to /sbin/mount.dbfs

If you are using the Oracle Wallet to store the DBFS password, run the following commands:
(root)# dcli -l root -g ~/dbs_group ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/bin/dbfs_client /sbin/mount.dbfs
(root)# dcli -l root -g ~/dbs_group ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
(root)# dcli -l root -g ~/dbs_group ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
(root)# dcli -l root -g ~/dbs_group ldconfig


Run Exadata HealthCheck

Run HealthCheck to validate software, hardware, firmware, and configuration best practices.  Review <Document 1070954.1> for details.

Deinstall the 11.2.0.1 Database and Grid Homes


After the upgrade is complete and the database and application have been validated, the 11.2.0.1 database and grid homes can be removed using the deinstall tool.  Run these commands on the first database server.  The deinstall tool will perform the deinstallation on all database servers.  Refer to Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux for additional details of the deinstall tool.

Before running the deinstall tool to remove the old database and grid homes, run deinstall with the -checkonly option to verify the actions it will perform.  Ensure the following:
  • There are no databases configured to use the home.
  • The home is not a configured Grid Infrastructure home.
  • ASM is not detected in the Oracle Home.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p10098816_112020_Linux-x86-64_7of7.zip -d /u01/app/oracle/patchdepot

(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/oracle/product/11.2.0/dbhome_1/
(oracle)$ ./deinstall -home /u01/app/oracle/product/11.2.0/dbhome_1/


Before deinstalling the old grid home, reset ownership and permissions so that the oracle user can remove files owned by root after the Grid Infrastructure configuration.
(root)# dcli -l root -g ~/dbs_group chmod -R 755 /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown -R oracle /u01/app/11.2.0/grid
(root)# dcli -l root -g ~/dbs_group chown oracle /u01/app/11.2.0

(oracle)$ cd /u01/app/oracle/patchdepot/deinstall
(oracle)$ ./deinstall -checkonly -home /u01/app/11.2.0/grid/
(oracle)$ ./deinstall -home /u01/app/11.2.0/grid/



References

<BUG:10011084> - SOL-X64-11202 STEP3 MODIFY BINARY AFTER INSTALLATION CANNOT EXCUTE SUCCESSFULLY
<BUG:10017332> - CELL_PARTITION_LARGE_EXTENTS NOT IDENTIFIED IN PRE-UPGRADE, CAUSES DBUA FAILURE
<BUG:10056593> - LNX64-112020-UD: FAIL TO ADD OLD_OCR_ID PROPERTY FOR ROOTCRS_OLDHOMEINFO
<BUG:10128494> - LNX64-112-OCT:UNDEFINED SUBROUTINE&MAIN::READ_FILE CALLED AT CRSPATCH.PM LINE 86
<BUG:10199076> - CVU CANNOT VERIFY HARDWARE CLOCK SYNC
<BUG:10241443> - SMARTCTL PACKAGE NOT INSTALLED ON 11.2.2.1.0 IMAGE - REQUIRED BY CVUQDISK
<BUG:11664046> - STBH: WRONG SEQUENCE NUMBER GENERATED AFTER DB SWITCHOVER FROM STBY TO PRIMARY
<BUG:9329767> - ORA-00600 [KJBMMCHKINTEG:FROM] DURING ROLLING UPGRADE OF ASM FROM 11.1.0.7-11.2
<NOTE:1054431.1> - Configuring DBFS on Oracle Database Machine
<NOTE:1070954.1> - Oracle Exadata Database Machine exachk or HealthCheck
<NOTE:1189783.1> - Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2
<NOTE:1270094.1> - Exadata Critical Issues
<NOTE:1279525.1> - Oracle Exadata Database Machine Grid Infrastructure Upgrade to 11.2.0.2 fails with ASM ORA-4031
<NOTE:1281913.1> - root Script (root.sh or rootupgrade.sh) Fails if ORACLE_BASE is set to /opt/oracle
<NOTE:1284070.1> - Updating key software components on database hosts to match those on the cells
<NOTE:1288640.1> - Managed Recovery (MRP) Fails w/ ORA-328 After Upgrade to 11.2.0.2 and Switchover
<NOTE:1299752.1> - How to Downgrade from 11.2.0.2 to 11.2.0.1 Grid Infrastructure using +ASM
<NOTE:1312225.1> - Things to Consider Before Upgrading to 11.2.0.2 Grid Infrastructure/ASM
<NOTE:1314319.1> - Bug Fix List: the 11.2.0.2 Patch Bundles for Oracle Exadata Database Machine
<NOTE:336.1> - Upgrade Advisor: Database (DB) Exadata from 11.2.0.1 to 11.2.0.2
<NOTE:361468.1> - HugePages on Oracle Linux 64-bit
<NOTE:884522.1> - How to Download and Run Oracle's Database Pre-Upgrade Utility
<NOTE:888828.1> - Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions
<NOTE:969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix

Attachments
This solution has no attachment
  Copyright © 2012 Sun Microsystems, Inc.  All rights reserved.