This document provides step-by-step instructions for upgrading from Oracle Database and Oracle Grid Infrastructure version 11.2.0.1 to 11.2.0.2 on Exadata Database Machine.
The software versions and patches installed in the current environment must be at certain minimum levels before the upgrade to 11.2.0.2 can begin. Depending on the patches being installed, the updates in this section may be performed in a rolling manner or may require database-wide downtime.
Conventions and Assumptions
For all patching example commands below the following is assumed:
The database and grid software owner is oracle.
The Oracle inventory group is oinstall.
The files ~oracle/dbs_group and ~root/dbs_group exist and contain the names of all database servers.
The database home and grid home paths are the same on all primary and standby servers.
Current database home is /u01/app/oracle/product/11.2.0/dbhome_1
Current grid home is /u01/app/11.2.0/grid
New database home will be /u01/app/oracle/product/11.2.0.2/dbhome_1
New grid home will be /u01/app/11.2.0.2/grid
The primary database to be upgraded is named PRIM.
The standby database associated with primary database PRIM is named STBY.
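The dbs_group files referenced above are plain text, with one database server host name per line. A minimal sketch (the host names here are illustrative, not taken from this document):

```
# ~oracle/dbs_group and ~root/dbs_group -- one database server name per line
dm01db01
dm01db02
dm01db03
dm01db04
```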
Review Database 11.2.0.2 Upgrade Prerequisites
The following prerequisites must be in place prior to performing the steps in this document to upgrade Database or Grid Infrastructure to 11.2.0.2.
Sun Datacenter InfiniBand Switch 36 is running software version 1.1.3-2 or later.
If you must update InfiniBand switch software to meet this requirement, install the most recent version indicated in <Document 888828.1>.
Database 11.2.0.1 software has applied at a minimum bundle patches DB_BP6 and GI_BP4 in the grid home and all database homes on all database servers.
If you must update Database 11.2.0.1 or Grid Infrastructure 11.2.0.1 software to meet this requirement, install the most recent version indicated in <Document 888828.1>.
Apply all overlay and additional patches required for the installed bundle patch. The list of required overlay and additional patches is found in <Document 888828.1>.
Verify that one-off patches currently installed on top of 11.2.0.1 are fixed in 11.2.0.2. Review <Document 1314319.1> for the list of fixes provided with 11.2.0.2 bundle patches. If you are unable to determine if a one-off patch is still required on top of 11.2.0.2, contact Oracle Support.
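To list the one-off patches currently installed in a home for this comparison, opatch lsinventory can be used (paths per the conventions above):

```
(oracle)$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory \
          -oh /u01/app/oracle/product/11.2.0/dbhome_1
```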
Exadata Storage Server software is version 11.2.2.1.1 or later.
If you must update Exadata Storage Server software to meet this requirement, install the most recent version indicated in <Document 888828.1>.
If your database servers currently run Oracle Linux 5.3 (kernel version 2.6.18-128), in order to maintain the recommended practice that OFED software version on database servers and Exadata Storage Servers is the same, then your database server must be updated to run Oracle Linux 5.5 (kernel version 2.6.18-194). Follow the steps in <Document 1284070.1> to perform this update.
Ensure network 169.254.x.x is not currently used on database servers.
The Oracle Clusterware Redundant Interconnect Usage feature, new in 11.2.0.2, uses network 169.254.0.0/16 for the highly available virtual IP (HAIP). Although this feature is not currently used on Exadata, Oracle Clusterware 11.2.0.2 will manage a virtual interface for the HAIP with an IP address in the 169.254.0.0/16 network. To avoid conflicting with the HAIP address, ensure the 169.254.x.x network is not currently in use on the database servers. (Note - per RFC 3927, network 169.254.x.x should not be used for static addresses.) For each database server, confirm this by verifying that the command "/sbin/ifconfig | grep ' 169.254'" returns no output. If 169.254.x.x is in use, it must be changed prior to the 11.2.0.2 upgrade or the upgrade will fail. Follow the instructions in the section titled "Changing InfiniBand IP Addresses and Host Names" in chapter 7 of the Oracle Exadata Database Machine Owner's Guide.
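The per-server check can be scripted. This is a minimal sketch of the command given above, wrapped so it reports the result clearly (it assumes ifconfig is at /sbin/ifconfig):

```shell
# Report whether any interface address falls in the 169.254.0.0/16 range.
# An empty grep result (no output) means the server is clear.
conflicts=$(/sbin/ifconfig 2>/dev/null | grep ' 169\.254\.' || true)
if [ -n "$conflicts" ]; then
    echo "169.254.x.x is in use - resolve before upgrading:"
    echo "$conflicts"
else
    echo "169.254.x.x is not in use"
fi
```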
Do not place the new ORACLE_HOME under /opt/oracle.
If this is done, see <Document 1281913.1> for additional steps required after software is installed.
Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:
The standby database is running in real-time apply mode as determined by verifying v$archive_dest_status.recovery_mode='MANAGED REAL TIME APPLY' for the local archive destination on the standby database.
The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.
Download Required Files
Download and stage the following files in /u01/app/oracle/patchdepot on database servers.
Data Guard - If there is a standby database, stage the files on standby database servers also.
Files staged on first database server only
<Patch 10098816> - Oracle Database 11g, Release 2 (11.2.0.2) Patch Set 1 for Linux x86-64
Fix 12652740 for 11.2.0.1 CRS rolling upgrade for unpublished bug 9329767
This fix is not required if you currently have 11.2.0.1 Database BP12 installed in the grid home. 11.2.0.1 Database BP12 includes patch 12652740, which fixes this bug.
Note this patch cannot be installed in a rolling manner.
Latest Database 11.2.0.2 bundle patch for Exadata.
Refer to <Document 888828.1> to identify the latest 11.2.0.2 bundle patch.
<Patch 11828582> - Bundle Patch 5 is used within this document - p11828582_112020_Linux-x86-64.zip
Data Guard only - Bundle patch overlay fix for unpublished bug 11664046. See <Document 1288640.1> for details.
The fix for bug 11664046 is included in 11.2.0.2 BP7 and later. If installing 11.2.0.2 BP7 or later, a separate overlay patch is not required.
If installing 11.2.0.2 BP6 or earlier, the overlay patch required must match the 11.2.0.2 bundle patch installed. At the time of publication there are two overlay patches available.
BP4 overlay <Patch 11868617>
BP5 overlay <Patch 12312927>
If an overlay patch for the 11.2.0.2 bundle patch you will install is not listed above, either contact Oracle Support to check on availability of an overlay patch for your bundle patch, or utilize the workaround described in <Document 1288640.1>. This is described in more detail later in this document.
<Patch 12312927> - Overlay fix for unpublished bug 11664046 on top of 11.2.0.2 BP5 is used within this document - p12312927_112020_Linux-x86-64.zip
For files that are staged on all database servers, use the following command to distribute the files to all database servers (using p6880880_112000_Linux-x86-64.zip in this example):
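One way to distribute a staged file is the Exadata dcli utility together with the dbs_group file from the conventions above (a sketch; adjust the user, group file, and paths to your environment):

```
(oracle)$ dcli -l oracle -g ~oracle/dbs_group \
          -f /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip \
          -d /u01/app/oracle/patchdepot
```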
Run HealthCheck to validate software, hardware, and firmware, and configuration best practice validation. Resolve any issues identified by HealthCheck before proceeding with the upgrade.
The exachk or HealthCheck bundles attached to this note contain detailed documentation, instructions, and examples.
Both the Oracle Exadata Database Machine exachk and HealthCheck collect data regarding key software, hardware, firmware, and configurations. The exachk or HealthCheck output helps customers review and cross-reference the collected data against supported version levels and recommended Oracle Exadata best practices.
Both the Oracle Exadata Database Machine exachk and HealthCheck can be executed as desired and should be executed regularly as part of the maintenance program for an Oracle Exadata Database Machine.
exachk and HealthCheck reference an Oracle Exadata Database Machine deployed per Oracle standards.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a database, network, or SQL performance analysis tool.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a continuous monitoring utility, and neither duplicates the work of other monitoring or alerting tools (e.g., Enterprise Manager).
exachk is the current version. HealthCheck is frozen and retained for backward compatibility with HP hardware-based Oracle Exadata Database Machines. An 11.1 "custom" Exadata Database Machine is outside the scope of HealthCheck.
exachk Initial Deployment
Download the attached "exachk_bundle.zip" file to your desktop computer and unzip the file. Follow the documentation, training materials, and readme files to understand how to deploy and execute the exachk utility to an Oracle Exadata Database Machine.
HealthCheck Initial Deployment
Download the attached "HealthCheck_bundle.zip" file to your desktop computer and unzip the file. Follow the documentation, training materials, and readme files to understand how to deploy and execute the HealthCheck utility to an Oracle Exadata Database Machine.
Download Known Issues
Case 1: Extra File Extension
The bundle is downloaded from MOS note 1070954.1 to a PC environment where file extensions are not displayed, and decompression software then errors out.
ISSUE: Download settings and process attached an extra ".zip" file extension to the name of the file: "exachk_bundle.zip.zip".
The solution is to remove the second ".zip" file extension.
Case 2: "exachk_bundle.zip" Wrapped in Another Layer of Compression
Download from MOS note 1070954.1, transfer to Linux server. "unzip" fails with an error like:
# unzip exachk_bundle.zip
Archive:  exachk_bundle.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of exachk_bundle.zip or
        exachk_bundle.zip.zip, and cannot find exachk_bundle.zip.ZIP, period.
ISSUE: Download settings and processes resulted in the "exachk_bundle.zip" file being placed inside another compression container also named "exachk_bundle.zip". The damaged file had a partial file path within the compression utility of "../some_directory/exachk_bundle.zip\" and shows one file "exachk_bundle". A correctly downloaded file has the termination of the file path in the compression utility as ".../some_directory/exachk_bundle.zip\" and shows four files: "exachk.zip", "ExachkBestPracticeChecks.xls", "ExachkUserGuide.pdf", and "Exachk_Tool_How_To.pdf".
The solution varies by customer and requires modification of the environment to avoid encasing the "exachk_bundle.zip" file in another layer of compression.
Update OPatch in Grid Home and Database Home on All Database Servers
Run both of these commands on one database server.
Data Guard - If there is a standby database, run these commands on one standby database server also.
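A hedged sketch of the OPatch update, assuming the OPatch zip (p6880880_112000_Linux-x86-64.zip) is already staged in /u01/app/oracle/patchdepot on all servers: unzip it over the OPatch directory in the database home and the grid home, fanned out with dcli from one server.

```
(oracle)$ dcli -l oracle -g ~oracle/dbs_group \
          "unzip -oq /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip \
           -d /u01/app/oracle/product/11.2.0/dbhome_1"
(oracle)$ dcli -l oracle -g ~oracle/dbs_group \
          "unzip -oq /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip \
           -d /u01/app/11.2.0/grid"
```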
Apply Fix 12652740 (for unpublished bug 9329767) to Grid Home on All Database Servers
This step is not required if you currently have 11.2.0.1 Database BP12 installed in the grid home.
This patch must be installed in the 11.2.0.1 grid home on all database servers. Shut down all the services (database, ASM, listeners, nodeapps, and CRS daemons) running from the Oracle home of the node you want to patch. After you patch the node, restart its services, then repeat the process for each of the other nodes in the Oracle RAC system. OPatch is used on only one node at a time. More details are available in the patch README.
Data Guard - If there is a standby database, run these commands on the standby database servers also.
(oracle)$ unzip -q /u01/app/oracle/patchdepot/p12652740_112012_Linux-x86-64.zip
(oracle)$ cd /u01/app/oracle/patchdepot/12652740
(oracle)$ opatch apply
The commands in this section will perform the Grid Infrastructure software installation and upgrade to 11.2.0.2. Grid Infrastructure upgrade from 11.2.0.1 to 11.2.0.2 is performed in a RAC rolling fashion.
Data Guard - If there is a standby database, run these commands on the standby system separately to upgrade the standby system Grid Infrastructure. The standby Grid Infrastructure upgrade can be performed in parallel with the primary, if desired.
The commands in this section perform the Database software installation of 11.2.0.2 into a new directory. Beginning with 11.2.0.2, database patch sets are full releases; it is no longer required to install the base release before installing a patch set. Refer to <Document 1189783.1> for additional details.
This section only installs Database 11.2.0.2 software into a new directory. It does not affect running databases.
Data Guard - If there is a separate system running a standby database, run these steps (step 12 through step 19) on the standby system separately to install the Database 11.2.0.2 software. The steps in this section can be performed in any of the following ways:
Install Database 11.2.0.2 software on the primary system first then the standby system.
Install Database 11.2.0.2 software on the standby system first then the primary system.
Install Database 11.2.0.2 software on both the primary and standby systems simultaneously.
Create the new ORACLE_HOME directory where 11.2.0.2 will be installed
In this document the new Grid Infrastructure home /u01/app/11.2.0.2/grid is used in all examples. It is recommended that the new Grid Infrastructure home NOT be located under /opt/oracle. If it is, review <Document 1281913.1>.
To create the new Grid Infrastructure home, run these commands from the first database server. You will need to substitute your Grid Infrastructure owner username and Oracle inventory group name in place of oracle and oinstall, respectively.
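A sketch of the directory creation, run as root via dcli from the first database server (substitute your owner and group for oracle and oinstall):

```
(root)# dcli -l root -g ~root/dbs_group \
        "mkdir -p /u01/app/11.2.0.2/grid; \
         chown -R oracle:oinstall /u01/app/11.2.0.2"
```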
Enable automatic memory management (AMM) by setting memory_target=1025M, and disable manual memory management by running the ALTER SYSTEM statements below. The new parameter settings can be made in the SPFILE only. They will take effect when ASM is restarted later in the upgrade process.
AMM is not compatible with hugepages, hence the ASM instance will not use hugepages. If you previously configured hugepages on database servers and you currently allocate enough hugepages to accommodate database and ASM, then you should lower the vm.nr_hugepages setting to avoid wasting memory since ASM will no longer use hugepages. Review <Document 361468.1> for details.
Connect to one ASM instance and run the commands below. The ALTER SYSTEM statement will report ORA-32010 if the parameter being reset is not currently set in the SPFILE. This error can be ignored.
SYS@+ASM1> alter system set memory_target=1025M scope=spfile;
SYS@+ASM1> alter system reset sga_target scope=spfile;
SYS@+ASM1> alter system reset sga_max_size scope=spfile;
SYS@+ASM1> alter system reset pga_aggregate_target scope=spfile;
SYS@+ASM1> alter system reset memory_max_target scope=spfile;
SYS@+ASM1> select sid, name, value from v$spparameter where name in ('memory_target','sga_target','sga_max_size', 'pga_aggregate_target','memory_max_target');
SID     NAME                      VALUE
------  ------------------------  -----------------------------------
*       sga_max_size
*       sga_target
*       memory_target             1074790400
*       memory_max_target
*       pga_aggregate_target
If this step is not followed and the Grid Infrastructure upgrade to 11.2.0.2 fails because ASM failed to start with an ORA-4031 error, refer to <Document 1279525.1> for the manual steps required to complete the Grid Infrastructure upgrade.
Set cluster_interconnects Explicitly for ASM
As a result of the redundant interconnect new feature in 11.2.0.2, if initialization parameter cluster_interconnects is not set, 11.2.0.2 will use an address in the automatically configured network 169.254.0.0/16. Databases running on Exadata upgraded to 11.2.0.2 should continue to use the same cluster interconnect as used in 11.2.0.1. This is accomplished by explicitly setting the initialization parameter cluster_interconnects.
Perform the steps below for ASM. The cluster_interconnect parameter for databases will be reset later in this process.
Data Guard - If you are upgrading a Data Guard environment, then also perform the steps below for ASM on the standby system.
The cluster_interconnects parameter is set only in the SPFILE. It will take effect on the ASM restart that occurs later in the upgrade process. There is no need to restart ASM before the upgrade.
SYS@+ASM1> select inst_id, name, ip_address from gv$cluster_interconnects;
SYS@+ASM1> alter system set cluster_interconnects='192.168.10.1' sid='+ASM1' scope=spfile;
SYS@+ASM1> alter system set cluster_interconnects='192.168.10.2' sid='+ASM2' scope=spfile;
To verify:

SYS@+ASM1> select sid, value from v$spparameter where name = 'cluster_interconnects';

SID     VALUE
------- ------------------------------
+ASM1   192.168.10.1
+ASM2   192.168.10.2
Perform 11.2.0.2 Grid Infrastructure software installation and upgrade using OUI
Perform these instructions as the grid user (which is oracle in this document) to install the 11.2.0.2 Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 11.2.0.1 to 11.2.0.2. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion. The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 11.2.0.2 grid home the active grid home.
The OUI installation log is located at /u01/app/oraInventory/logs.
To downgrade Oracle Clusterware back to 11.2.0.1 after a successful upgrade, follow the instructions in <Document 1299752.1>.
If the upgrade fails, refer to <Document 969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.
On Download Software Updates screen, select Skip software updates, and then click Next.
On Select Installation Options screen, select Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management, and then click Next.
On Select Product Languages screen, select languages, and then click Next.
On Grid Infrastructure Node Selection screen, verify all database nodes are shown and selected, and then click Next.
On Privileged Operating System Groups screen, verify group names and change if desired, and then click Next.
On Specify Installation Location screen, enter /u01/app/11.2.0.2/grid as the Software Location for the Grid Infrastructure home, and then click Next.
It is recommended that the Grid Infrastructure home NOT be placed under /opt/oracle.
On Perform Prerequisite Checks screen, the two checks noted below may fail. If these are the only failed checks, click Ignore All, and then click Next. Other failed checks must be investigated.
Swap Size - swap space is 15.99GB, expected 16GB. This failure is ignorable.
Hardware Clock synchronization at shutdown - filed as unpublished bug 10199076. This failure is ignorable.
On Summary screen, verify information presented about installation, and then click Install.
On Install Product screen, monitor installation progress.
On Execute Configuration scripts screen, perform the following steps in order:
Relink oracle Executable in Grid Home with RDS
Relink oracle executable with RDS. Run the following command as the grid user from the first database server. This command will perform the relink on all database servers.
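A hedged sketch of the RDS relink: the usual form on Exadata is a make against ins_rdbms.mk in the grid home, fanned out with dcli (paths per the conventions in this document):

```
(oracle)$ dcli -l oracle -g ~oracle/dbs_group \
          "cd /u01/app/11.2.0.2/grid/rdbms/lib; \
           ORACLE_HOME=/u01/app/11.2.0.2/grid make -f ins_rdbms.mk ipc_rds ioracle"
```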
Execute rootupgrade.sh on each database server, as indicated in the Execute Configuration scripts screen.
Run the rootupgrade.sh script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
If the upgrade fails, refer to <Document 969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.
After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for one, which you select as the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node. Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
Due to <bug 10056593>, rootupgrade.sh will report this error and continue. This error is ignorable.
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
Due to <bug 10241443>, rootupgrade.sh may report the following error when installing the cvuqdisk package. This error is ignorable.
ls: /usr/sbin/smartctl: No such file or directory /usr/sbin/smartctl not found.
First node rootupgrade.sh will complete with this output:

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Last node rootupgrade.sh will complete with this output:

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.2.0
ASM upgrade has finished on last node.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Click OK.
On Finish screen, click Close.
Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Grid Home
Customized administration and login scripts that reference grid home should be updated to refer to /u01/app/11.2.0.2/grid.
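For example, a login script that sets the Grid Infrastructure environment would change along these lines (the variable choices are illustrative):

```shell
# Point the environment at the new 11.2.0.2 grid home
export ORACLE_HOME=/u01/app/11.2.0.2/grid     # was /u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
```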
Prepare installation software
Unzip the 11.2.0.2 database software. Run these commands on the first database server.
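A sketch, assuming the 11.2.0.2 patch set zips from <Patch 10098816> were staged with the file names below (the first two zips contain the database software; verify the exact names against the patch README):

```
(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ unzip -q p10098816_112020_Linux-x86-64_1of7.zip
(oracle)$ unzip -q p10098816_112020_Linux-x86-64_2of7.zip
```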
On Configure Security Updates screen, fill in required fields, and then click Next.
On Download Software Updates screen, select Skip software updates, and then click Next.
On Select Installation Option screen, select Install database software only, and then click Next.
On Grid Installation Options screen, select Oracle Real Application Clusters database installation, click Select All. Verify all database servers are present in the list and are selected, and then click Next.
On Select Product Languages screen, select languages, and then click Next.
On Select Database Edition, select Enterprise Edition, click Select Options to choose components to install, and then click Next.
On Specify Installation Location, enter /u01/app/oracle/product/11.2.0.2/dbhome_1 as the Software Location for the Database home, and then click Next.
It is recommended that the Database home NOT be placed under /opt/oracle.
On Privileged Operating System Groups screen, verify group names, and then click Next.
On Perform Prerequisite Checks screen, the one check noted below may fail. If this is the only failed check, click Ignore All, and then click Next. Other failed checks must be investigated.
Swap Size - swap space is 15.99GB, expected 16GB. This failure is ignorable.
On Summary screen, verify information presented about installation, and then click Install.
On Install Product screen, monitor installation progress.
On Execute Configuration scripts screen, execute root.sh on each database server as instructed, and then click OK.
Relink oracle Executable in Database Home with RDS
Run the following command as the oracle user from the first database server. This command will perform the relink on all database servers
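A hedged sketch of the database home RDS relink, using dcli and a make against ins_rdbms.mk in the new home (paths per the conventions in this document):

```
(oracle)$ dcli -l oracle -g ~oracle/dbs_group \
          "cd /u01/app/oracle/product/11.2.0.2/dbhome_1/rdbms/lib; \
           ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1 \
           make -f ins_rdbms.mk ipc_rds ioracle"
```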
Install Latest 11.2.0.2 Bundle Patch - Do Not Perform Post Installation Steps
The example commands below install BP5 (<Patch 11828582>). At the time of writing it is the latest 11.2.0.2 bundle patch and is the recommended BP for new 11.2.0.2 installations. Review <Document 888828.1> for the latest release information.
The commands to install the latest 11.2.0.2 BP are run on each database server individually. They can be run in parallel across database servers if there is no need to install in a RAC rolling manner.
Run this command on each database server. This command will shutdown Oracle Clusterware, which will impact availability of the instances running on the current database server.
(root)# opatch auto /u01/app/oracle/patchdepot/11828582/ \
        -oh /u01/app/11.2.0.2/grid \
        -ocmrf /u01/app/oracle/patchdepot/ocm.rsp
Skip patch post installation steps
Do not perform patch post installation. Patch post installation steps will be run after the database is upgraded.
Data Guard only - Apply Fix for Bug 11664046 on Primary and Standby Database Servers
Skip this step if you applied 11.2.0.2 BP7 or later.
Review <Document 1288640.1> for additional details. This fix is applied on top of the bundle patch installed in the previous step. The specific fix you require depends on the bundle patch installed.
Bundle Patch Installed    Patch Required
----------------------    ----------------
BP5                       <Patch 12312927>
BP4                       <Patch 11868617>
If you installed a different 11.2.0.2 BP and an overlay fix for unpublished bug 11664046 is not available for that BP, then recreate the standby control file prior to performing a switchover operation. Review <Document 1288640.1> for additional details.
Run this command on all database servers. Note there are no databases running out of this home yet.
(oracle)$ cd /u01/app/oracle/patchdepot/12312927
(oracle)$ opatch apply -oh /u01/app/oracle/product/11.2.0.2/dbhome_1 -all_nodes
Apply 11.2.0.2 Bundle Patch Overlay Patches as Specified in Document 888828.1
Review <Document 888828.1> to identify and apply patches that must be installed on top of the bundle patch just installed. If there is SQL that must be run against the database as part of the patch application, postpone running the SQL until after the database is upgraded.
BP5, the bundle patch example used in this document, currently requires no overlay patches.
Apply Customer-specific 11.2.0.2 One-Off Patches
If there are one-offs that need to be applied to the environment, apply them now. If there is SQL that must be run against the database as part of the patch application, postpone running the SQL until after the database is upgraded.
The commands in this section will perform the database upgrade to 11.2.0.2.
Data Guard - Unless otherwise indicated, run these steps only on the primary database.
Database upgrade from 11.2.0.1 to 11.2.0.2 requires database-wide downtime.
A rolling upgrade with a logical standby database or Oracle GoldenGate may be used to reduce database downtime during this phase. That procedure is not covered in this document.
Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
The pre-upgrade information tool is provided with the 11.2.0.2 software. It is also provided standalone as an attachment to <Document 884522.1>. Run this tool to analyze the 11.2.0.1 database prior to upgrade.
Run Pre-Upgrade Information Tool
At this point the database is still running with 11.2.0.1 software. Connect to the database with your environment set to 11.2.0.1 and run the pre-upgrade information tool that is located in the 11.2.0.2 database home.
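A sketch of running the tool (utlu112i.sql is the pre-upgrade information script shipped under rdbms/admin in the 11.2.0.2 home; the spool destination is illustrative):

```
SYS@PRIM1> spool /u01/app/oracle/patchdepot/preupgrade_PRIM.log
SYS@PRIM1> @/u01/app/oracle/product/11.2.0.2/dbhome_1/rdbms/admin/utlu112i.sql
SYS@PRIM1> spool off
```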
Obsolete and underscore parameters will be identified by the pre-upgrade information tool. During the upgrade, DBUA will remove the obsolete and underscore parameters from the primary database initialization parameter file. Some underscore parameters that DBUA removes will be added back in later in this document after DBUA completes the upgrade.
To avoid unpublished bug 10017332, manually reset cell_partition_large_extents on the primary database.
SYS@PRIM1> alter system reset cell_partition_large_extents scope=spfile;
Data Guard only - DBUA will not affect parameters set on the standby, hence obsolete parameters and some underscore parameters must be removed manually.
SYS@STBY1> alter system reset cell_partition_large_extents scope=spfile;
SYS@STBY1> alter system reset "_arch_comp_dbg_scan" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufsz" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufcnt" scope=spfile;
Review pre-upgrade information tool output
Review the remaining output of the pre-upgrade information tool. Take action on areas identified in the output.
Set cluster_interconnects Explicitly for All Primary and Standby Databases
As a result of the redundant interconnect new feature in 11.2.0.2, if initialization parameter cluster_interconnects is not set, 11.2.0.2 will use an address in the automatically configured network 169.254.0.0/16. Databases running on Exadata upgraded to 11.2.0.2 should continue to use the same cluster interconnect as used in 11.2.0.1. This is accomplished by explicitly setting the initialization parameter cluster_interconnects.
Perform the steps below for all primary and standby databases that will be upgraded. It is only necessary, however, to perform this step for one instance for each database that will be upgraded. For example, if two databases (PRIM and DBFS) are upgraded, and PRIM has a standby database STBY, then you will perform the steps below against three instances: PRIM1, STBY1, and DBFS1.
Data Guard - If you are upgrading a Data Guard environment, then also perform the steps below for all standby databases.
The cluster_interconnects parameter is set only in the SPFILE. It will take effect on the database restart that occurs later in the upgrade process. There is no need to restart the database before the upgrade.
SYS@PRIM1> select inst_id, name, ip_address from gv$cluster_interconnects;
SYS@PRIM1> alter system set cluster_interconnects='192.168.10.1' sid='PRIM1' scope=spfile; SYS@PRIM1> alter system set cluster_interconnects='192.168.10.2' sid='PRIM2' scope=spfile;
To verify:

SYS@PRIM1> select sid, value from v$spparameter where name = 'cluster_interconnects';

SID     VALUE
------- ------------------------------
PRIM1   192.168.10.1
PRIM2   192.168.10.2
Data Guard only - Synchronize Standby and Switch to 11.2.0.2
Perform these steps only if there is a physical standby database associated with the database being upgraded.
As indicated in the prerequisites section above, the following must be true:
The standby database is running in real-time apply mode.
The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.
Flush all redo generated on the primary and shutdown
To ensure all redo generated by the primary database running 11.2.0.1 is applied to the standby database running 11.2.0.1, all redo must be flushed from the primary to the standby.
First, verify the standby database is running recovery in real-time apply. Run the following query connected to the standby database. If this query returns no rows, then real-time apply is not running.
SYS@STBY1> select dest_name from v$archive_dest_status where recovery_mode = 'MANAGED REAL TIME APPLY';
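One hedged way to push the final redo across and confirm the standby has applied it (the sequence check below is a common sanity query, not taken from this document):

```
SYS@PRIM1> alter system archive log current;

-- On the standby, confirm apply has caught up with the last archived sequence:
SYS@STBY1> select thread#, max(sequence#)
           from v$archived_log
           where applied = 'YES'
           group by thread#;
```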
Edit the standby database entry in /etc/oratab to point to 11.2.0.2.
On both the primary and standby database servers, ensure the tnsnames.ora entries are available to the database after it has been upgraded. If using the default location for tnsnames.ora, $ORACLE_HOME/network/admin, copy tnsnames.ora from the old home to the new home.
If using Data Guard Broker to manage the configuration, modify the broker required SID_LIST listener.ora entry on all nodes to point to the new ORACLE_HOME. For example
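A static registration entry typically looks like the following sketch (the GLOBAL_DBNAME uses the Data Guard Broker db_unique_name_DGMGRL convention; the SID and names below are assumptions based on the conventions in this document):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STBY_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2/dbhome_1)
      (SID_NAME = STBY1)
    )
  )
```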
DBUA requires all RAC instances to be running. To prevent an application from accidentally connecting to the primary database and performing work that causes the standby to fall behind, start the primary database in restricted mode.
(oracle)$ srvctl start database -d PRIM -o restrict
Upgrade the Database with Database Upgrade Assistant (DBUA)
Run DBUA to upgrade the primary database. All database instances should be up. If there is a standby database, the primary database should be left running in restricted mode, as performed in the previous step.
NOTE: Verify you run DBUA from the new 11.2.0.2 ORACLE_HOME.
Review and perform steps in Oracle Upgrade Guide, Chapter 4 : After Upgrading to the New Release
The Oracle Upgrade Guide documents required and recommended tasks to perform after upgrading to 11.2.0.2. Since the database was upgraded from 11.2.0.1, some tasks do not apply. The following list is the minimum set of tasks that should be reviewed for your environment.
Update Environment Variables
Upgrade the Recovery Catalog
Upgrade the Time Zone File Version
Advance the Oracle ASM and Oracle Database Disk Group Compatibility
Change Custom Scripts and Environment Variables to Reference the 11.2.0.2 Database Home
The primary database is upgraded and is now running out of the 11.2.0.2 database home. Customized administration and login scripts that reference database home ORACLE_HOME should be updated to refer to /u01/app/oracle/product/11.2.0.2/dbhome_1.
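As a hedged sketch, a substitution like the following updates such a script; the sample operates on a demonstration file, and the paths are the ones assumed throughout this document:

```shell
# Illustrative only: update ORACLE_HOME in a sample login script.
OLD_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
NEW_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1

# Sample profile fragment created for demonstration.
printf 'export ORACLE_HOME=%s\nexport PATH=$ORACLE_HOME/bin:$PATH\n' "$OLD_HOME" > /tmp/profile.sample

# Rewrite every reference to the old home.
sed -i "s|${OLD_HOME}|${NEW_HOME}|g" /tmp/profile.sample
cat /tmp/profile.sample
```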
Add Underscore Initialization Parameters Back
DBUA removes obsolete and underscore initialization parameters. Two underscore parameters must be added back in, and one underscore parameter is new. Run the following to set the proper underscore parameters.
SYS@PRIM1> alter system set "_lm_rcvr_hang_allow_time"=140 scope=both;
SYS@PRIM1> alter system set "_kill_diagnostics_timeout"=140 scope=both;
SYS@PRIM1> alter system set "_file_size_increase_increment"=2143289344 scope=both;
Note - if you did not apply 11.2.0.2 BP2 or later then you must also set _parallel_cluster_cache_policy. Review <Document 1270094.1> for details.
Data Guard only - DBUA does not modify parameters set on the standby, so the previously set underscore parameters remain in place. Only the new underscore parameter needs to be added on the standby database.
SYS@STBY1> alter system set "_file_size_increase_increment"=2143289344 scope=both;
Run 11.2.0.2 Bundle Patch Post Installation Steps
The bundle patch installation performed before the database upgrade has a post installation step that requires running SQL against the database. Perform the Patch Post Installation steps documented in the bundle patch README on one database server only. Review the bundle patch README for details. The steps below are those required for BP5.
Run catbundle.sql to load the required bundle patch SQL.
SYS@PRIM1> @?/rdbms/admin/catbundle.sql exa apply
Navigate to the <ORACLE_HOME>/cfgtoollogs/catbundle directory (if ORACLE_BASE is defined, the logs are created under <ORACLE_BASE>/cfgtoollogs/catbundle) and check the log files for errors, for example with "grep ^ORA <logfile> | sort -u". If there are errors, refer to Section 3, "Known Issues", of the bundle patch README. The <TIMESTAMP> in the log file names has the format YYYYMMMDD_HH_MM_SS.
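The error check above can be sketched as a small loop. The directory and log content below are fabricated purely for illustration; a real run inspects the actual catbundle logs named in the bundle patch README:

```shell
# Illustrative sketch: scan catbundle apply logs for ORA- errors.
# The directory and log content are fabricated for demonstration;
# on a real system use the cfgtoollogs/catbundle directory.
LOGDIR=/tmp/catbundle_demo
mkdir -p "$LOGDIR"
cat > "$LOGDIR/catbundle_EXA_PRIM_APPLY_20110101_12_00_00.log" <<'EOF'
SQL> @?/rdbms/admin/catbundle.sql
ORA-01403: no data found
SQL> commit;
EOF

# Report unique ORA- errors per log file.
for f in "$LOGDIR"/catbundle_*.log; do
  echo "== $f"
  grep '^ORA' "$f" | sort -u
done
```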
When the DBFS database is upgraded to 11.2.0.2 the following additional actions are required:
Obtain latest mount-dbfs.sh script from Document 1054431.1
Download the latest mount-dbfs.sh script that is attached to <Document 1054431.1> and place it in directory /u01/app/11.2.0.2/grid/crs/script under the new 11.2.0.2 grid home.
Edit mount-dbfs.sh script and Oracle Net files for the new 11.2.0.2 environment
Using the variable settings from the original mount-dbfs.sh script, edit the variable settings in the new mount-dbfs.sh script to match your environment. The setting for variable ORACLE_HOME must be changed to match the 11.2.0.2 ORACLE_HOME /u01/app/oracle/product/11.2.0.2/dbhome_1.
Edit tnsnames.ora used for DBFS to change the directory referenced for the parameters PROGRAM and ORACLE_HOME to the new 11.2.0.2 database home.
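A BEQ-style tnsnames.ora entry of the following shape is typical for DBFS mounts; the alias and SID names here are hypothetical placeholders (the actual names come from your mount-dbfs.sh configuration), and the home path is this document's assumed 11.2.0.2 database home. After the edit, both PROGRAM and ENVS reference the new home:

```
fsdb.local =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = BEQ)
      (PROGRAM = /u01/app/oracle/product/11.2.0.2/dbhome_1/bin/oracle)
      (ARGV0 = oraclefsdb1)
      (ARGS = '(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS = 'ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1,ORACLE_SID=fsdb1')
    )
    (CONNECT_DATA = (SID = fsdb1))
  )
```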
Run post-patching Exadata HealthCheck
Run HealthCheck to validate software, hardware, firmware, and configuration against best practices.
The exachk or HealthCheck bundles attached to this note contain detailed documentation, instructions, and examples.
Both the Oracle Exadata Database Machine exachk and HealthCheck collect data regarding key software, hardware, firmware, and configurations. The exachk or HealthCheck output assists customers in reviewing and cross-referencing the collected data against supported version levels and recommended Oracle Exadata best practices.
Both the Oracle Exadata Database Machine exachk and HealthCheck can be executed as desired and should be executed regularly as part of the maintenance program for an Oracle Exadata Database Machine.
exachk and HealthCheck reference an Oracle Exadata Database Machine deployed per Oracle standards.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a database, network, or SQL performance analysis tool.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a continuous monitoring utility, and neither duplicates the work of other monitoring or alerting tools (e.g., Enterprise Manager).
exachk is the current version. HealthCheck is frozen and retained for backward compatibility with HP hardware-based Oracle Exadata Database Machines. An 11.1 "custom" Exadata Database Machine is outside the scope of HealthCheck.
exachk Initial Deployment
Download the attached "exachk_bundle.zip" file to your desktop computer and unzip the file. Follow the documentation, training materials, and readme files to understand how to deploy and execute the exachk utility to an Oracle Exadata Database Machine.
HealthCheck Initial Deployment
Download the attached "HealthCheck_bundle.zip" file to your desktop computer and unzip the file. Follow the documentation, training materials, and readme files to understand how to deploy and execute the HealthCheck utility to an Oracle Exadata Database Machine.
Download Known Issues
Case 1: Extra File Extension
Download from MOS note 1070954.1 to a PC environment where file extensions are not displayed; decompression software then errors out.
ISSUE: Download settings and process attached an extra ".zip" file extension to the name of the file: "exachk_bundle.zip.zip".
The solution is to remove the second ".zip" file extension.
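The fix can be sketched as a simple rename; the doubly-suffixed sample file below is created only for illustration:

```shell
# Illustrative only: create a sample doubly-suffixed file, then rename it
# to strip the extra ".zip" extension.
cd /tmp
touch exachk_bundle.zip.zip
mv exachk_bundle.zip.zip exachk_bundle.zip
ls -l exachk_bundle.zip
```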
Case 2: "exachk_bundle.zip" Wrapped in Another Layer of Compression
Download from MOS note 1070954.1, transfer to Linux server. "unzip" fails with an error like:
# unzip exachk_bundle.zip
Archive:  exachk_bundle.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of exachk_bundle.zip or
        exachk_bundle.zip.zip, and cannot find exachk_bundle.zip.ZIP, period.
ISSUE: Download settings and processes resulted in the "exachk_bundle.zip" file being placed inside another compression container, also named "exachk_bundle.zip". The damaged file shows a partial file path within the compression utility of "../some_directory/exachk_bundle.zip\" and contains one file, "exachk_bundle". A correctly downloaded file shows the file path terminating at ".../some_directory/exachk_bundle.zip\" and contains four files: "exachk.zip", "ExachkBestPracticeChecks.xls", "ExachkUserGuide.pdf", and "Exachk_Tool_How_To.pdf".
The solution varies by customer and requires modification of the environment to avoid encasing the "exachk_bundle.zip" file in another layer of compression.
Deinstall the 11.2.0.1 Database and Grid Homes
After the upgrade is complete and the database and application have been validated, the 11.2.0.1 database and grid homes can be removed using the deinstall tool. Run these commands on the first database server. The deinstall tool will perform the deinstallation on all database servers. Refer to Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux for additional details of the deinstall tool.
Before running the deinstall tool to remove the old database and grid homes, run deinstall with the -checkonly option to verify the actions it will perform. Ensure the following:
There are no databases configured to use the home.
The home is not a configured Grid Infrastructure home.