This document provides step-by-step instructions for upgrading Oracle Database and Oracle Grid Infrastructure release 11.2.0.1 or 11.2.0.2 to 11.2.0.3 on the Oracle Exadata Database Machine for the Linux and Solaris 11 Express operating systems. This document does not cover Exadata on SPARC SuperCluster (SSC). This procedure has been tested on Linux and on Solaris 11 Express. Solaris 11 Express installations need to upgrade to 11.2.0.3 first and then upgrade the operating system to Solaris 11. See Document 1431284.1 "Upgrading Solaris Exadata Database nodes from Oracle Solaris 11 Express to Solaris 11".
Prerequisites
Existing 11.2.0.1 and 11.2.0.2 installations require the proper patches in order to upgrade successfully to 11.2.0.3.
Depending on your operating system version, database and grid infrastructure version, and current bundle patch level, an additional patch may need to be applied on top of your existing installation.
Review the 11.2.0.1 or 11.2.0.2 patch matrix in this document to obtain the right patch for your installation.
Conventions and Assumptions
Conventions
The steps apply equally to 11.2.0.1-to-11.2.0.3 and 11.2.0.2-to-11.2.0.3 upgrades unless specified otherwise.
New database home will be /u01/app/oracle/product/11.2.0.3/dbhome_1
New grid home will be /u01/app/11.2.0.3/grid
Enterprise Manager Cloud Control 12c OMS and agents were in place during the upgrade procedure. The installation and configuration of these components is based on "Oracle Enterprise Manager Cloud Control 12c Setup Automation kit for Oracle Exadata Database Machine" and is beyond the scope of this document. Having Enterprise Manager configured is not a requirement. Enterprise Manager cannot be used as a tool to perform the upgrade.
An example of how to apply Bundle Patch 1 (Patch 13343057) on 11.2.0.3 is given; however, for the recommended patches for 11.2.0.3, Document 888828.1 must be consulted.
Assumptions
The database and grid software owner is oracle.
The Oracle inventory group is oinstall.
The files ~oracle/dbs_group and ~root/dbs_group exist and contain the names of all database servers (see the example following this list).
The database home and grid home are the same on all primary and standby servers.
The current database home is /u01/app/oracle/product/11.2.0/dbhome_1; this can be either an 11.2.0.1 or an 11.2.0.2 home.
The current grid home is /u01/app/11.2.0/grid or /u01/app/11.2.0.2/grid; this can be either an 11.2.0.1 or an 11.2.0.2 home.
The primary database to be upgraded is named PRIM.
The standby database associated with primary database PRIM is named STBY.
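As a minimal illustration (the hostnames are hypothetical), the dbs_group file simply lists one database server hostname per line and is used by the dcli commands throughout this document:

(oracle)$ cat ~/dbs_group
dbnode1
dbnode2
(oracle)$ dcli -g ~/dbs_group -l oracle hostname
dbnode1: dbnode1
dbnode2: dbnode2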
The software releases and patches installed in the current environment must be at certain minimum levels before the upgrade to 11.2.0.3 can begin. Depending on the patch installed, updates performed during this section may be done in a rolling manner or may require database-wide downtime. This section also recommends capturing baseline execution plans and ensuring that a restore is possible should the upgrade fail.
Planning
In relation to planning the following items are recommended:
Testing on non-production first
Upgrades or patches should always be applied first on test environments. Testing on non-production environments allows people to become familiar with the patching steps and learn how the patching will impact their system and their applications in terms of regression. You need a series of carefully designed tests to validate all stages of the upgrade process. Executed rigorously and completed successfully, these tests ensure that the process of upgrading the production database is well understood, predictable, and successful. Perform as much testing as possible before upgrading the production database. Do not underestimate the importance of a complete and repeatable testing process. The types of tests to perform are the same whether you use Real Application Testing features like Database Replay or SQL Performance Analyzer, or perform testing manually.
There is an estimated 30 minutes of downtime required for the software upgrade. Additional downtime may be required for post-upgrade steps; this depends on factors such as the amount of PL/SQL that requires recompilation.
Resource management plans are expected to be persistent after the upgrade.
SQL Plan Management
SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information. SQL plan management is a preventative mechanism that records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines composed of a set of existing plans known to be efficient. The SQL plan baselines are then used to preserve performance of corresponding SQL statements, regardless of changes occurring in the system. See the Oracle Database Performance Tuning Guide for more information about using SQL Plan Management
Recoverability
The ultimate success of your upgrade depends greatly on the design and execution of an appropriate backup strategy. Even though the Database home and Grid Infrastructure home are upgraded out of place, which makes rollback easier, the database and the filesystem should be backed up before committing the upgrade. See the Oracle Database Backup and Recovery User's Guide for information on database backups.
Account Access
During the upgrade procedure, access to the SYS account is required. Depending on which other components are upgraded, access to ASMSNMP and DBSNMP is also required. Passwords in the password file are expected to be the same for all instances.
Review Database 11.2.0.3 Upgrade Prerequisites
The prerequisites below must be in place before performing the steps in this document, in order to upgrade Database or Grid Infrastructure to 11.2.0.3 without failures.
Sun Datacenter InfiniBand Switch 36 is running software release 1.3.3-2 or later.
If you must update InfiniBand switch software to meet this requirement, then install the most recent release indicated in Document 888828.1.
For 11.2.0.1: bundle patches DB_BP6 and GI_BP4 (at minimum) must be applied in the grid home and all database homes on all database servers.
In addition, 11.2.0.1 installations with the minimum bundle patches DB_BP6 and GI_BP4 require a patch for bug 9329767/12652740 in the Grid home. There are multiple releases of this patch:
The fix for bug 9329767/12652740 is included in BP12, so if you are on BP12 there is nothing you have to do. This is a recommended approach
Customers who are on other Bundle Patch levels can apply the (rolling) release of the patch for bug 12652740 which is available via:
For 11.2.0.2: the following applies to the software in the Grid home and all database homes on all database servers.
A fix for bug 12539000 is required. This fix is included in BP12 onwards. For BP7, BP8, BP9, BP10, and BP11, one-off patches are available to apply on top of your bundle patch.
The patch is included in BP12 for Linux; this does not apply for Solaris.
The patch is included in BP13 and higher.
For Linux: note that BP12 includes the fix for 12539000, but CVU and OUI will report that this fix is not included. This message can be safely ignored.
For Solaris 11 Express: because of bug 9795321, customers are recommended not to patch to BP12 before upgrading to 11.2.0.3. For those installations it is recommended to apply the patch for bug 12539000. Another option is to go to BP13.
Customers on Database 11.2.0.2 bundle patch 15 upgrading Grid Infrastructure to 11.2.0.3 need to apply a minimum of BP4 on top of the 11.2.0.3 Grid Infrastructure home
Generic:
If you must update Database 11.2.0.1/11.2.0.2 or Grid Infrastructure 11.2.0.1/11.2.0.2 software to meet the patching requirements, then install the most recent release indicated in Document 888828.1.
Apply all overlay and additional patches required for the installed bundle patch. The list of required overlay and additional patches can be found in Document 888828.1 and Exadata Critical Issues Document 1270094.1.
Verify that one-off patches currently installed on top of 11.2.0.1 or 11.2.0.2 are fixed in 11.2.0.3. Review Document 1348303.1 for the list of fixes provided with 11.2.0.3. For a list of fixes provided on top of 11.2.0.3, review the README of Patch 13667791 (BP4). If you are unable to determine whether a one-off patch is still required on top of 11.2.0.3, contact Oracle Support. BP1 is only an example: see Document 888828.1 for the recommended bundle patches available today.
Exadata Storage Server software is release 11.2.2.4.0 or later.
If you must update Exadata Storage Server software to meet this requirement then install the most recent release indicated in Document 888828.1. Note that 11.2.0.1 grid infrastructure and database software homes must run BP12 with Patch 13998273 before upgrading to Exadata Storage Server release 11.2.3.1.0. See Document 1429907.1 for details.
If your database servers currently run Oracle Linux 5.3 (kernel release 2.6.18-128), then, to maintain the recommended practice that the OFED software release is the same on database servers and Exadata Storage Servers, your database servers must be updated to run Oracle Linux 5.5 or later (kernel release 2.6.18-194 or later). Follow the steps in Document 1284070.1 to perform this update. Note that updating Oracle Linux to 5.5 is not required but highly recommended.
For upgrades from 11.2.0.1: Ensure network 169.254.x.x is not currently used on database servers.
The Oracle Clusterware Redundant Interconnect Usage feature, new in 11.2.0.2, uses network 169.254.0.0/16 for the highly available virtual IP (HAIP). Although this feature is currently not used on Oracle Exadata Database Machine, Oracle Clusterware 11.2.0.3 will manage a virtual interface for the HAIP with an IP address in the 169.254.0.0/16 network. To avoid conflicting with the HAIP address when upgrading from 11.2.0.1, ensure the 169.254.x.x network is currently not in use on the database servers (note: per RFC 3927, network 169.254.x.x cannot be used for static addresses). For each database server, this can be confirmed by making sure that the command "/sbin/ifconfig | grep ' 169.254'" returns no rows. If 169.254.x.x is in use, then it must be changed prior to the 11.2.0.3 upgrade or the upgrade will fail. Follow the instructions in the section titled "Changing InfiniBand IP Addresses and Host Names" in chapter 7 of the Oracle Exadata Database Machine Owners Guide.
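As a sketch, the check quoted above can be run on all database servers at once with dcli (assuming the dbs_group file described earlier); no output means the 169.254.x.x network is not in use:

(root)# dcli -g ~/dbs_group -l root "/sbin/ifconfig | grep ' 169.254'"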
Do not place the new ORACLE_HOME under /opt/oracle.
If this is done then see Document 1281913.1 for additional steps required after software is installed.
Data Guard only - If there is a physical standby database associated with the databases being upgraded, then the following must be true:
The standby database is running in real-time apply mode as determined by verifying v$archive_dest_status.recovery_mode='MANAGED REAL TIME APPLY' for the local archive destination on the standby database.
The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.
Download Required Files
Download the following software into a staging area you prefer on one of the database servers in your cluster. As an example we use /u01/app/oracle/patchdepot but you can specify your own.
Data Guard - If there is a standby database then stage the files on one of the database servers from standby site also.
"Oracle Database", typically the installation media for the database comes with two zip files.
"Oracle Grid Infrastructure"
"Deinstall tool"
See the README of Patch 10404530 for the exact filenames you need for your platform.
In the examples that follow, the Linux installation files are used; for Solaris installations, replace the mentioned file with its Solaris equivalent.
Fix for 11.2.0.1 (with a minimum of bundle patches DB_BP6 and GI_BP4) - 'ASM rolling upgrade' unpublished bug 9329767/12652740
The fix for bug 9329767/12652740 is included in BP12, so if you are on BP12 there is nothing you have to do. This is a recommended approach
Customers who are on other Bundle Patch levels can apply the (rolling) release of the patch for bug 12652740 which is available via:
* BP12 is not recommended for Solaris; BP13 is recommended instead.
**Installations not on one of the mentioned Bundle Patch releases in the list are recommended to either upgrade to a listed Bundle Patch release or request a fix for their release.
*** Customers on Database 11.2.0.2 bundle patch 15 upgrading Grid Infrastructure to 11.2.0.3 need to apply a minimum of BP4 on top of the 11.2.0.3 Grid Infrastructure home
Use the following command to distribute OPatch to a staging area on all database servers and into all Oracle homes (using p6880880_112000_Linux-x86-64.zip in this example for user 'oracle').
From now on, '/u01/app/oracle/patchdepot' is used as the staging area, but that is just an example:
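A minimal sketch of the distribution step, assuming the staging area above and the dbs_group file (the exact zip name depends on the OPatch release you downloaded):

(oracle)$ dcli -g ~/dbs_group -l oracle mkdir -p /u01/app/oracle/patchdepot
(oracle)$ dcli -g ~/dbs_group -l oracle -f /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip -d /u01/app/oracle/patchdepot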
For V2 or later: obtain the latest release of Exachk and run it to validate software, hardware, firmware, and configuration best practices. Resolve any issues identified by Exachk before proceeding with the upgrade.
Since Exachk is not certified on V1, HealthCheck needs to be used to collect data regarding key software, hardware, and firmware releases.
Review Document 1070954.1 for details. The exachk or HealthCheck bundles attached to this note contain detailed documentation, instructions, and examples.
Both the Oracle Exadata Database Machine exachk and HealthCheck collect data regarding key software, hardware, firmware, and configurations. The exachk or HealthCheck output assists customers to review and cross reference current collected data against supported version levels and recommended Oracle Exadata best practices.
Both the Oracle Exadata Database Machine exachk and HealthCheck can be executed as desired and should be executed regularly as part of the maintenance program for an Oracle Exadata Database Machine.
exachk and HealthCheck reference an Oracle Exadata Database Machine deployed per Oracle standards.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a database, network, or SQL performance analysis tool.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a continuous monitoring utility, and they do not duplicate the work of other monitoring or alerting tools (for example, Enterprise Manager).
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a security configuration analysis or monitoring utility, and they do not duplicate the work of other security tools. Please reference the secure configuration component of the Database Lifecycle Management pack for Oracle Enterprise Manager.
exachk is the current version. HealthCheck is frozen and retained for backward compatibility with HP hardware based Oracle Exadata Database Machines. An 11.1 "custom" Exadata Database Machine is outside the scope of HealthCheck.
exachk Version Notes
As of November 23, 2011, this note will contain the exachk current production version and the current beta version of the next release, if the beta is available. If the beta version of the next release is not yet available, this note will contain the exachk current production version and the prior production version.
Validate Readiness for Oracle Clusterware upgrade
Use the cluster verification utility (CVU) to validate readiness for the Oracle Clusterware upgrade. Review the Oracle Grid Infrastructure Installation Guide, appendix F 'How to Upgrade to Oracle Grid Infrastructure', section 'Using CVU to Validate Readiness for Oracle Clusterware Upgrades'. Unzip p10404530_112030_Linux-x86-64_3of7.zip (Oracle Grid Infrastructure) and also unzip the CVU zip file downloaded earlier to a staging area. Before executing CVU as the owner of the Grid Infrastructure, unset ORACLE_HOME, ORACLE_BASE and ORACLE_SID.
An example of running the pre-upgrade check, as follows:
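A sketch of the pre-upgrade check, assuming the grid software was unzipped into /u01/app/oracle/patchdepot/grid and the current grid home is /u01/app/11.2.0/grid (adjust paths to your environment):

(oracle)$ cd /u01/app/oracle/patchdepot/grid
(oracle)$ ./runcluvfy.sh stage -pre crsinst -upgrade \
    -src_crshome /u01/app/11.2.0/grid \
    -dest_crshome /u01/app/11.2.0.3/grid \
    -dest_version 11.2.0.3.0 -fixup -fixupdir /tmp -verbose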
OS Kernel parameter checks may fail (unpublished bug 13323698).
If the checks fail a possible cause can be the lack of read permissions on /etc/sysctl.conf for others. This can be solved by running 'chmod o+r /etc/sysctl.conf' as root on all compute nodes. Remember to undo this change when the patching is finished.
If the kernel parameter checks keep failing, it is recommended to double-check the CVU recommended minimum settings manually. If the fixup script suggests changes, compare them with the current settings. The commands to obtain the kernel details can be found in the Database Installation Guide for Linux and can be matched against the CVU suggested minimum values. If the current values prove valid, then these failed checks can be ignored. For upgrades from 11.2.0.2 BP12, CVU/OUI is not aware that the fix for bug 12539000 is included, so for installations on 11.2.0.2 BP12 the message about missing patch 12539000 can be ignored.
For Solaris 11 Express:
CVU will incorrectly state that the "Soft limits check" for "maximum open file descriptors" failed on remote nodes. If the check on remote nodes fails, verify whether the value of max-file-descriptor on the remote nodes is equal to the value on the node that passed the check. This information can be found in /etc/project. If all values are equal to the value on the node that passed, then the CVU message can be ignored.
CVU may complain about missing "'slewalways yes' & 'disable pll'". If this is the case then this message can be ignored. Solaris 11 Express has an SMF property for configuring slew NTP settings, see bug 13612271
Data Guard - If there is a standby database, then run the command on one of the nodes of the standby database cluster also.
Update OPatch in Grid Home and Database Home on All Database Servers
If the latest OPatch release is not in place and patches (below) need to be applied, then first update OPatch.
Run both of these commands on one database server.
Data Guard - If there is a standby database, then run these commands on the standby database servers also, as follows:
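A minimal sketch of these two commands, assuming the OPatch zip staged under /u01/app/oracle/patchdepot as described earlier (one unzip for the grid home, one for the database home):

(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip -d /u01/app/11.2.0/grid"
(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip -d /u01/app/oracle/product/11.2.0/dbhome_1"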
For 11.2.0.1: Apply Patch for bug 9329767/12652740 to the Grid Home
This patch must be installed in the 11.2.0.1 Grid home on all database servers.
The fix for bug 9329767/12652740 is included in BP12, so if you are on BP12 there is nothing you have to do. This is a recommended approach
Customers who are on other Bundle Patch levels can apply the (rolling) release of the patch for bug 12652740 which is available via:
Data Guard - If there is a standby database, then run the following commands on the standby database servers also.
Be sure to choose the database names that belong to the site that is being patched. For patching instructions see the README.
For 11.2.0.2: Apply Patch for bug 12539000 to both Grid and Database home
This step is not required if you currently have 11.2.0.2 Database BP12 or later installed in the grid home. One-off patches are available for BP7, BP8, BP9, BP10, and BP11. If this patch is not available for your current release, then either upgrade or request a patch for your release. This patch must be installed in the 11.2.0.2 Grid Infrastructure home and Database home on all database servers. It is installed in a RAC rolling fashion, so no downtime is required.
This fix is not required if you currently have 11.2.0.2 BP12 or later installed in both database and grid home.
Data Guard - If there is a standby database, then run these commands on the standby database servers also.
Follow the patch README for patching instructions.
For Solaris installations change to /tmp as working directory before applying the patch
Analyze the Database to Upgrade with the Pre-Upgrade Information Tool
The pre-upgrade information tool is provided with the 11.2.0.3 software. It is also provided standalone as an attachment to Document 884522.1. Run this tool to analyze the 11.2.0.1 or 11.2.0.2 database prior to upgrade.
Run Pre-Upgrade Information Tool
At this point the database is still running with 11.2.0.1 or 11.2.0.2 software. Connect to the database with your environment set to 11.2.0.1 or 11.2.0.2 and run the pre-upgrade information tool that is located in the 11.2.0.3 database home, as follows:
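A sketch of running the tool against the primary database, assuming instance PRIM1 and the home paths used in this document (utlu112i.sql is shipped under rdbms/admin in the new home):

(oracle)$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
(oracle)$ export ORACLE_SID=PRIM1
(oracle)$ $ORACLE_HOME/bin/sqlplus / as sysdba
SYS@PRIM1> spool /tmp/utlu112i.log
SYS@PRIM1> @/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/admin/utlu112i.sql
SYS@PRIM1> spool off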
During the pre-upgrade steps, the pre-upgrade tool (utlu112i.sql) will warn to set the CLUSTER_DATABASE parameter to FALSE. However when using DBUA this is done automatically so the warning can be ignored.
Handle obsolete and underscore parameters
Obsolete and underscore parameters will be identified by the pre-upgrade information tool. During the upgrade, DBUA will remove the obsolete and underscore parameters from the primary database initialization parameter file. Some underscore parameters that DBUA removes will be added back in later after DBUA completes the upgrade.
Data Guard only - DBUA will not affect parameters set on the standby, hence obsolete parameters and some underscore parameters must be removed manually if set. Typical values that need to be unset before starting the upgrade are as follows:
SYS@STBY1> alter system reset cell_partition_large_extents scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufsz" scope=spfile;
SYS@STBY1> alter system reset "_backup_ksfq_bufcnt" scope=spfile;
SYS@STBY1> alter system reset "_lm_rcvr_hang_allow_time" scope=spfile;
SYS@STBY1> alter system reset "_kill_diagnostics_timeout" scope=spfile;
SYS@STBY1> alter system reset "_arch_comp_dbg_scan" scope=spfile;
Review pre-upgrade information tool output
Review the remaining output of the pre-upgrade information tool. Take action on areas identified in the output.
The commands in this section will perform the Grid Infrastructure software installation and upgrade to 11.2.0.3. Grid Infrastructure upgrade from 11.2.0.1 or 11.2.0.2 to 11.2.0.3 is performed in a RAC rolling fashion, this procedure does not require downtime.
Data Guard - If there is a standby database, then run these commands on the standby system separately to upgrade the standby system Grid Infrastructure. The standby Grid Infrastructure upgrade can be performed in parallel with the primary if desired. However, the Grid Infrastructure home must always be at a release level later than or equal to the Database home. Therefore the Grid Infrastructure home must be upgraded before a database upgrade can be performed.
Create the new GI_HOME directory where 11.2.0.3 will be installed
In this document the new Grid Infrastructure home /u01/app/11.2.0.3/grid is used in all examples. It is recommended that the new Grid Infrastructure home NOT be located under /opt/oracle. If it is, then review Document 1281913.1.
To create the new Grid Infrastructure home, run the following commands from the first database server. You will need to substitute your Grid Infrastructure owner username and Oracle inventory group name in place of oracle and oinstall, respectively.
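A minimal sketch using dcli to create the directory on all database servers (substitute your owner and group if they differ from oracle and oinstall):

(root)# dcli -g ~/dbs_group -l root mkdir -p /u01/app/11.2.0.3/grid
(root)# dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/11.2.0.3/grid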
Unzip all 11.2.0.3 software. Run the following command on the database server where the software is staged. An example for the Grid Infrastructure follows, but the same needs to be done for the database software.
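For example, assuming the staging area used earlier (the exact zip name for your platform is in the Patch 10404530 README):

(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ unzip -q p10404530_112030_Linux-x86-64_3of7.zip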
Permanently Disable Automatic Memory Management (AMM) for ASM
If 'automatic memory management' (AMM) is in use for the ASM instance, it must be changed so that 'automatic shared memory management' (ASMM) is used from now on. Again, this is a permanent change, not only for the purpose of the upgrade. Disabling AMM is done by setting sga_target and pga_aggregate_target to the recommended values that follow, resetting memory_max_target, and setting memory_target to zero.
Note that before memory_max_target can be reset with an 'alter system reset' this value has to be initialized to a value of zero first. Example instructions will follow.
For 11.2.0.2 on Linux only: during the upgrade, the ASM instance will be configured to use hugepages, but ASM will only use them when they are available. This way, ASM will start whether or not it can successfully acquire hugepages. This is accomplished by setting the parameter 'use_large_pages=true' in 11.2.0.2 ASM instances.
For upgrades from 11.2.0.1 the 'use_large_pages' parameter will be set after the upgrade as this setting is not supported for 11.2.0.1.
If the number of system-wide hugepages configured is just enough for the database, be sure to allocate additional hugepages so that both the ASM and database instances can use hugepages. If your database isn't already configured to use hugepages, it is recommended to do so after the upgrade. See Document 361468.1 and Document 401749.1 for details on hugepages.
The new parameter settings can be made in the SPFILE only. They will take effect when ASM is restarted as part of the upgrade process.
However, if there is an option to make the changes and validate them before performing the upgrade, that is recommended.
Before changing any value in the ASM init.ora, a backup of this file will be made.
Connect to one ASM instance and run the commands to change from AMM to ASMM. Be sure to use the exact same values as suggested as follows:
SYS@+ASM1> create pfile='asm_init_backup.ora' from spfile /* create a backup init.ora file in $OH/dbs */;
SYS@+ASM1> alter system set sga_target=1250M sid='*' scope=spfile;
SYS@+ASM1> alter system set pga_aggregate_target=400M sid='*' scope=spfile;
SYS@+ASM1> alter system set memory_target=0 sid='*' scope=spfile;
SYS@+ASM1> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;
SYS@+ASM1> alter system reset memory_max_target sid='*' scope=spfile;
For 11.2.0.2 Linux installations only:
SYS@+ASM1> alter system set use_large_pages=true sid='*' scope=spfile /* 11.2.0.2 Linux only */;
SYS@+ASM1> create pfile='asm_init_new.ora' from spfile /* file created to verify init.ora settings $OH/dbs */;
SYS@+ASM1> col sid format a10
SYS@+ASM1> col name format a30
SYS@+ASM1> col value format a30
SYS@+ASM1> set linesize 200
SYS@+ASM1> select sid, name, value from v$spparameter where name in ('memory_target','sga_target','sga_max_size', 'pga_aggregate_target','memory_max_target',
'use_large_pages');
Be sure to validate the above settings. sga_max_size should be unset.
Before continuing the new ASM init.ora needs to be verified for old '__*pool_size' settings.
Open the file '$ORACLE_HOME/dbs/asm_init_new.ora' created earlier and double check for '__shared_pool_size' and '__large_pool_size' settings.
If these values are still in the init.ora file they are also in the spfile and need to be removed. This needs to be done on a per ASM instance basis in the spfile.
The number of ASM instances depends on the type of system you have. Example for a quarter rack is as follows:
ALTER SYSTEM RESET "__shared_pool_size" SCOPE=SPFILE SID='+ASM1';
ALTER SYSTEM RESET "__large_pool_size" SCOPE=SPFILE SID='+ASM1';
ALTER SYSTEM RESET "__shared_pool_size" SCOPE=SPFILE SID='+ASM2';
ALTER SYSTEM RESET "__large_pool_size" SCOPE=SPFILE SID='+ASM2';
Set cluster_interconnects Explicitly for ASM
As a result of the redundant interconnect new feature introduced in 11.2.0.2, if initialization parameter cluster_interconnects is not set, then 11.2.0.3 will use an address in the automatically configured network 169.254.0.0/16. Databases running on Oracle Exadata Database Machine upgraded to 11.2.0.3 should continue to use the same cluster interconnect as used in 11.2.0.1 or 11.2.0.2. This is accomplished by explicitly setting the initialization parameter cluster_interconnects.
Below is an example; be sure to use the specific values of the bonded InfiniBand network interfaces for your hosts (typically bondib0).
Perform the following for ASM. The cluster_interconnect parameter for databases will be reset later in the upgrade process.
Data Guard - If you are upgrading a Data Guard environment, then also perform the steps below for ASM on the standby system.
The cluster_interconnects parameter is set only in the SPFILE. It will take effect on the ASM restart that occurs later in the upgrade process. There is no need to restart ASM before the upgrade.
Set the cluster_interconnects parameter for each ASM instance to the specific value of the bonded infiniband network interface of your hosts (typically bondib0). Be sure to use the ip-numbers that match your installation, an example follows:
SYS@+ASM1> select inst_id, name, ip_address from gv$cluster_interconnects;
SYS@+ASM1> alter system set cluster_interconnects='192.168.10.1' sid='+ASM1' scope=spfile;
SYS@+ASM1> alter system set cluster_interconnects='192.168.10.2' sid='+ASM2' scope=spfile;
To verify, use the following command:
SYS@+ASM1> select sid, value from v$spparameter where name = 'cluster_interconnects';
SID VALUE
------- ------------------------------
+ASM1 192.168.10.1
+ASM2 192.168.10.2
Besides checking the memory and cluster_interconnects settings, this is an appropriate time to apply the recommended ASM settings coming from Exachk. See also Document 1274318.1 for Oracle Sun Database Machine X2-2 Setup/Configuration Best Practices.
Perform 11.2.0.3 Grid Infrastructure software installation and upgrade using OUI
Perform these instructions as the grid user (which is oracle in this document) to install the 11.2.0.3 Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 11.2.0.1 or 11.2.0.2 to 11.2.0.3. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion. The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 11.2.0.3 Grid Infrastructure Home the active Grid Infrastructure Home. For systems with a standby database in place this step can be performed either before, at the same time or after installation of Grid Infrastructure on the primary system.
The OUI installation log is located at /u01/app/oraInventory/logs.
To downgrade Oracle Clusterware back to the previous release after a successful upgrade, follow the instructions in Document 1364946.1.
If the upgrade fails, then refer to Document 969254.1 - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.
Set the environment then run the installer, as follows:
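A sketch, assuming the grid software was unzipped into /u01/app/oracle/patchdepot/grid; unset the environment variables pointing to the old home before starting OUI:

(oracle)$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
(oracle)$ cd /u01/app/oracle/patchdepot/grid
(oracle)$ ./runInstaller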
Perform the exact actions as described below on the installer screens:
On Download Software Updates screen, select Skip software updates, and then click Next.
On Select Installation Options screen, select Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management, and then click Next.
On Select Product Languages screen, select languages, and then click Next.
On Grid Infrastructure Node Selection screen, verify all database nodes are shown and selected, and then click Next.
On Privileged Operating System Groups screen, verify group names and change if desired, and then click Next.
On Specify Installation Location screen, enter /u01/app/11.2.0.3/grid as the Software Location for the Grid Infrastructure home, and then click Next.
The directory should already exist and it is recommended that the Grid Infrastructure home NOT be placed under /opt/oracle
On Perform Prerequisite Checks screen, similar to the CVU check discussed earlier, OS kernel parameter checks may fail. See the instructions listed during the CVU verification earlier: try the workaround of changing read permissions on sysctl.conf on all compute nodes and, as a last alternative, double-check the minimum values manually. If these are the only failures, click Ignore All, and then click Next. Be aware that failed checks other than OS kernel parameter settings must be investigated.
For Linux:
CVU: OS KERNEL PARAMETER CHECK IN GRID INFRA INSTALLATION INCORRECT - filed as bug 13323698.
For upgrades from 11.2.0.2 BP12 CVU/OUI is not aware the fix for bug 12539000 is included so for installations on 11.2.0.2 BP12 the message about missing patch 12539000 can be ignored
Prerequisite checks reporting failed "multicast checks" can be ignored if these errors are not reproduced by manually executing the latest version of CVU.
For Solaris 11 Express:
OUI will incorrectly state that the "Soft limits check" for "maximum open file descriptors" failed on remote nodes. If the check on remote nodes fails, verify whether the value of max-file-descriptor on the remote nodes is equal to the value on the node that passed the check. This information can be found in /etc/project. If all values are equal to the value on the node that passed, then the OUI message can be ignored.
OUI may complain about missing "'slewalways yes' & 'disable pll'". If this is the case then this message can be ignored. Solaris 11 Express has an SMF property for configuring slew NTP settings, see bug 13612271
On Summary screen, verify information presented about installation, and then click Install.
On Install Product screen, monitor installation progress.
On Execute Configuration scripts screen, perform the following steps in order:
Relink oracle Executable in Grid Home with RDS
Relink oracle executable with RDS. Run the following command as the grid user from the first database server. This command will perform the relink on all database servers.
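A sketch of the relink across all database servers with dcli; the make target ipc_rds relinks the oracle binary with the RDS protocol (adjust the grid home path if yours differs):

(oracle)$ dcli -g ~/dbs_group -l oracle "export ORACLE_HOME=/u01/app/11.2.0.3/grid; cd /u01/app/11.2.0.3/grid/rdbms/lib; make -f ins_rdbms.mk ipc_rds ioracle"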
If the md5sum of the oracle binaries does not match across the database servers, investigate whether the relink failed.
Apply the latest Bundle Patch to the Grid Infrastructure Home using 'napply'
In order to save an extra stop and start of the Grid Infrastructure and databases later in the process, the latest Bundle Patch must be applied to the Grid Infrastructure home at this stage. However, at this phase of the installation the Bundle Patch cannot be applied using the 'opatch auto' functionality. In this phase the patches in the Bundle Patch can only be applied as the GI home owner using the 'opatch napply' command for each patch module. Consult the Bundle Patch README (chapter 'Manual Steps for Apply/Rollback Patch') for which patch modules need to be applied to the Grid Infrastructure home using 'opatch napply'. Note that no actions other than 'opatch napply' need to be done. An example applying BP4 follows:
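As an illustration only (the actual sub-patch directory names come from the BP README and are not reproduced here; the unzip location is hypothetical), the general form of the napply invocation as the grid home owner is:

(oracle)$ cd /u01/app/oracle/patchdepot/13667791
(oracle)$ /u01/app/11.2.0.3/grid/OPatch/opatch napply -oh /u01/app/11.2.0.3/grid -local <sub-patch directory>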
No steps other than 'opatch napply' from the README should be executed. For example, do not execute the commands rootcrs.pl -unlock and rootcrs.pl -patch.
See Document 1410202.1 for more information on how to apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is executed. Note that Document 1410202.1 talks about Patch Set Updates (PSU) while Exadata only has bundle patches (BP)
Apply 11.2.0.3 Bundle Patch Overlay Patches to the Grid Infrastructure Home as Specified in Document 888828.1
Review Document 888828.1 to identify and apply patches that must be installed on top of the Bundle Patch just installed.
Apply Customer-specific 11.2.0.3 One-Off Patches to the Grid Infrastructure Home
If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now.
Checks to do before executing rootupgrade.sh on Each Database Server
Before running rootupgrade.sh verify no active rebalance is running
Query gv$asm_operation to verify no active rebalance is running:
SYS@+ASM1> select count(*) from gv$asm_operation;
COUNT(*)
----------
0
Execute rootupgrade.sh on each database server, as indicated in the Execute Configuration scripts screen. For Solaris installations change to /tmp as working directory before executing the script.
Run the rootupgrade.sh script on the local node first. The script shuts down the earlier release installation, updates configuration details, and starts the new Oracle Clusterware installation.
If the upgrade fails, refer to Document 969254.1 - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure (CRS) on Linux/Unix.
After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for one, which you select as the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node. Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.
On the first node, rootupgrade.sh will complete with output similar to the following:
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
On the last node, rootupgrade.sh will complete with output similar to the following:
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.3.0
ASM upgrade has finished on last node.
PRKO-2116 : OC4J is already enabled
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Click OK.
On Finish screen, click Close.
Perform an extra check on the status of the Grid infrastructure post upgrade by executing the following command from one of the compute nodes:
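For example, from the new grid home (this is the standard cluster-wide check whose output is shown below):

(oracle)$ /u01/app/11.2.0.3/grid/bin/crsctl check cluster -all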
The above command should show an online status for Cluster Ready Services, Cluster Synchronization Services and Event Manager on all nodes in the cluster, example as follows:
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
If the cluster does not show an online status for any of these components on any of the nodes, the issue must be researched before continuing. For troubleshooting see the MOS notes mentioned in the reference chapter.
Change Custom Scripts and Environment Variables to Reference the 11.2.0.3 Grid Home
Customized administration, login scripts, static instance registrations in listener.ora files and CRS resources that reference the Grid Infrastructure Home should be updated to refer to /u01/app/11.2.0.3/grid.
For DBFS configurations it is recommended to review the chapter "Steps to Perform If Grid Home or Database Home Changes" in Document 1054431.1 - "Configuring DBFS on Oracle Database Machine", as the shell script used to mount the DBFS filesystem may be located in the original Grid Infrastructure home and needs to be relocated. The following steps update the location of the CRS resource script that mounts DBFS:
Modify the dbfs_mount cluster resource
Update the ACTION_SCRIPT attribute of the dbfs-mount cluster resource to refer to the new location of mount-dbfs.sh, as follows:
(oracle)$ crsctl modify res dbfs_mount -attr \
"ACTION_SCRIPT=/u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh"
(oracle)$ crsctl modify res dbfs_mount -attr \ "RESTART_ATTEMPTS=10"
The commands in this section will perform the Database software installation of 11.2.0.3 into a new directory. Beginning with 11.2.0.2 database patch sets are full releases. It is no longer required to install the base release first before installing a patch set. Refer to Document 1189783.1 for additional details.
This section only installs Database 11.2.0.3 software into a new directory. It does not affect running databases hence all the steps below can be done without downtime.
Data Guard - If there is a separate system running a standby database and that system already has Grid Infrastructure upgraded to 11.2.0.3, then run these steps on the standby system separately to install the Database 11.2.0.3 software. The steps in this section can be performed in any of the following ways:
Install Database 11.2.0.3 software on the primary system first then the standby system.
Install Database 11.2.0.3 software on the standby system first then the primary system.
Install Database 11.2.0.3 software on both the primary and standby systems simultaneously.
Prepare installation software
Unzip the 11.2.0.3 database software. Run the following command on the database server where the software is staged.
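For example, assuming the staging area used earlier (the database media are the first two zip files of Patch 10404530; check the README for the exact names for your platform), followed by launching the installer from the unzipped database directory:

(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ unzip -q p10404530_112030_Linux-x86-64_1of7.zip
(oracle)$ unzip -q p10404530_112030_Linux-x86-64_2of7.zip
(oracle)$ cd database
(oracle)$ ./runInstaller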
Perform the exact actions as described below on the installer screens:
On Configure Security Updates screen, fill in required fields, and then click Next.
On Download Software Updates screen, select Skip software updates, and then click Next.
On Select Installation Option screen, select Install database software only, and then click Next.
On Grid Installation Options screen, select Oracle Real Application Clusters database installation, click Select All. Verify all database servers are present in the list and are selected, and then click Next.
On Select Product Languages screen, select languages, and then click Next.
On Select Database Edition, select Enterprise Edition, click Select Options to choose components to install, and then click Next.
On Specify Installation Location, enter /u01/app/oracle/product/11.2.0.3/dbhome_1 as the Software Location for the Database home, and then click Next.
It is recommended that the Database home NOT be placed under /opt/oracle.
On Privileged Operating System Groups screen, verify group names, and then click Next.
On Perform Prerequisite Checks screen, similar to the CVU check discussed earlier, OS kernel parameter checks may fail. See the instructions listed during the CVU verification earlier: try the workaround of changing read permissions on sysctl.conf on all compute nodes and, as a last alternative, double-check the minimum values manually. If these are the only failures, click Ignore All, and then click Next. Be aware that failed checks other than OS kernel parameter settings must be investigated.
For Linux:
OS KERNEL PARAMETER CHECK IN GRID INFRA INSTALLATION INCORRECT - filed as unpublished bug 13323698.
For upgrades from 11.2.0.2 BP12 CVU/OUI is not aware the fix for bug 12539000 is included so for installations on 11.2.0.2 BP12 the message about missing patch 12539000 can be ignored
On Summary screen, verify information presented about installation, and then click Install.
On Install Product screen, monitor installation progress.
On Execute Configuration scripts screen, execute root.sh on each database server as instructed, and then click OK
for Solaris installations change to /tmp as working directory before executing the script
On Finish screen, click Close.
Relink oracle Executable in Database Home with RDS
Run the following command as the oracle user from the first database server. This command will perform the relink on all database servers.
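A sketch, analogous to the grid home relink earlier, now against the new database home:

(oracle)$ dcli -g ~/dbs_group -l oracle "export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1; cd /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib; make -f ins_rdbms.mk ipc_rds ioracle"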
If the md5sum of the oracle binaries does not match across the database servers, investigate whether the relink failed.
Update OPatch in New Grid Home and New Database Home on All Database Servers
Run both of these commands on one database server and update OPatch in the Database Home and Grid Infrastructure Home (Database Home example ), as follows:
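For example, re-using the OPatch zip staged earlier, now unzipped into the new 11.2.0.3 homes:

(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip -d /u01/app/oracle/product/11.2.0.3/dbhome_1"
(oracle)$ dcli -g ~/dbs_group -l oracle "unzip -oq /u01/app/oracle/patchdepot/p6880880_112000_Linux-x86-64.zip -d /u01/app/11.2.0.3/grid"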
Install Latest 11.2.0.3 Bundle Patch - Do Not Perform Post Installation Steps
The example commands below install Bundle Patch 4 for Oracle Exadata Database Machine (Patch 13667791). At the time of writing, 11.2.0.3 Database Bundle Patch 4 is the latest available and is used as an example. Review Document 888828.1 for the latest release information. The commands to install the latest 11.2.0.3 Bundle Patch are run on each database server individually, hence the Bundle Patch must be copied to all database servers. They can be run in parallel across database servers if there is no need to install in a RAC rolling manner. Applying the latest bundle patch also requires the latest OPatch to be installed.
Stage the patch
Unzip the patch on all database servers, as follows:
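A sketch, assuming the BP4 zip (Patch 13667791) was downloaded into the staging area; the zip file name is illustrative:

(oracle)$ dcli -g ~/dbs_group -l oracle -f /u01/app/oracle/patchdepot/p13667791_112030_Linux-x86-64.zip -d /u01/app/oracle/patchdepot
(oracle)$ dcli -g ~/dbs_group -l oracle "cd /u01/app/oracle/patchdepot; unzip -q p13667791_112030_Linux-x86-64.zip"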
If you do not have the OCM response file, then run the following command on each database server.
(oracle)$ cd /u01/app/oracle/patchdepot
(oracle)$ /u01/app/oracle/product/11.2.0.3/dbhome_1/OPatch/ocm/bin/emocmrsp
Coming from 11.2.0.1 only
For installations coming from 11.2.0.1 on Linux the parameter 'use_large_pages' needs to be set to TRUE in the ASM init.ora file. The subsequent restart being done as part of the bundle patch apply will activate the setting provided the operating system is configured with enough hugepages. Example as follows:
SYS@+ASM1> alter system set use_large_pages=true sid='*' scope=spfile;
SYS@+ASM1> col sid format a10
SYS@+ASM1> col name format a30
SYS@+ASM1> col value format a30
SYS@+ASM1> set linesize 200
SYS@+ASM1> select sid, name, value from v$spparameter
where name in
('use_large_pages');
SID NAME VALUE
------ ------------------------- -----------------------------------
* use_large_pages TRUE
Patch 11.2.0.3 database home
Run the following command on each database server. Note there are no databases running out of this home yet. For Solaris installations change to /tmp as the working directory before applying the patch. It is recommended to run this command from a new session to make sure no settings from previous steps remain. Example as follows:
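A sketch, assuming the patch was unzipped to /u01/app/oracle/patchdepot/13667791 and an OCM response file ocm.rsp was generated as described above; 'opatch auto' is run as root against the new database home only:

(root)# /u01/app/oracle/product/11.2.0.3/dbhome_1/OPatch/opatch auto /u01/app/oracle/patchdepot/13667791 -oh /u01/app/oracle/product/11.2.0.3/dbhome_1 -ocmrf /u01/app/oracle/patchdepot/ocm.rsp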
Do not perform patch post-installation. Patch post-installation steps will be run after the database is upgraded.
Apply 11.2.0.3 Bundle Patch Overlay Patches as Specified in Document 888828.1
Review Document 888828.1 to identify and apply patches that must be installed on top of the Bundle Patch just installed. If there are SQL commands that must be run against the database as part of the patch application, postpone running them until after the database is upgraded.
Apply Customer-specific 11.2.0.3 One-Off Patches
If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them now. If there are SQL statements that must be run against the database as part of the patch application, postpone running the SQL commands until after the database is upgraded.
The commands in this section will perform the database upgrade to 11.2.0.3.
Data Guard - Unless otherwise indicated, run these steps only on the primary database. Database upgrade from 11.2.0.1 or 11.2.0.2 to 11.2.0.3 requires database-wide downtime.
Rolling upgrade with (Transient) Logical Standby or Golden Gate may be used to reduce database downtime during this section.
For details on a transient logical rolling upgrade process see Document 949322.1 Oracle11g Data Guard: Database Rolling Upgrade Shell Script.
These topics are not covered in this document.
The database will be offline during the upgrade (DBUA) steps. A rough estimate of actual application downtime is 30 minutes, but the required downtime depends on factors such as the amount of PL/SQL that needs recompilation. Note that it is not a requirement that all databases are upgraded to the latest release. It is possible to have multiple releases of the Oracle Database homes running on the same system. The benefit of having multiple Oracle homes is that multiple releases of different databases can run. The disadvantage is that more planned maintenance is required in terms of patching. Older database releases may lapse out of the regular patching lifecycle policy in time. Having multiple Oracle homes on the same node also requires more disk space.
Set cluster_interconnects Explicitly for All Primary and Standby Databases
As a result of the new redundant interconnect feature introduced in 11.2.0.2, if initialization parameter cluster_interconnects is not set, then 11.2.0.3 will use an address in the automatically configured network 169.254.0.0/16. Databases running on Oracle Exadata Database Machine upgraded to 11.2.0.3 should continue to use the same cluster interconnect as used in 11.2.0.1 or 11.2.0.2. This is accomplished by explicitly setting the initialization parameter cluster_interconnects to the 192.* address as shown in the following example.
Perform the steps below for all primary and standby databases that will be upgraded. It is only necessary, however, to perform this step for one instance for each database that will be upgraded. For example, if two databases (PRIM and DBFS) are upgraded, and PRIM has a standby database STBY, then you will perform the steps below against three instances: PRIM1, STBY1, and DBFS1.
Data Guard - If you are upgrading a Data Guard environment, then also perform the following steps for all standby databases.
The cluster_interconnects parameter is set only in the SPFILE. It will take effect on the database restart that occurs later in the upgrade process. There is no need to restart the database before the upgrade. Example as follows (your ip-numbers may differ):
SYS@PRIM1> select inst_id, name, ip_address from gv$cluster_interconnects;
SYS@PRIM1> alter system set cluster_interconnects='192.168.10.1' sid='PRIM1' scope=spfile;
SYS@PRIM1> alter system set cluster_interconnects='192.168.10.2' sid='PRIM2' scope=spfile;
To verify, as follows:
SYS@PRIM1> select sid, value from v$spparameter where name = 'cluster_interconnects';
SID VALUE
------- ------------------------------
PRIM1 192.168.10.1
PRIM2 192.168.10.2
Data Guard only - Synchronize Standby and Change the Standby Database to use the new 11.2.0.3 Database Home
Perform these steps only if there is a physical standby database associated with the database being upgraded.
As indicated in the prerequisites section above, the following must be true:
The standby database is running in real-time apply mode.
The value of the LOG_ARCHIVE_DEST_n database initialization parameter on the primary database that corresponds to the standby database must contain the DB_UNIQUE_NAME attribute, and the value of that attribute must match the DB_UNIQUE_NAME of the standby database.
Flush all redo generated on the primary and disable the broker
To ensure all redo generated by the primary database running 11.2.0.1 or 11.2.0.2 is applied to the standby database running 11.2.0.1 or 11.2.0.2, all redo must be flushed from the primary to the standby.
First, verify the standby database is running recovery in real-time apply. Run the following query connected to the standby database. If this query returns no rows, then real-time apply is not running. Example as follows:
SYS@STBY1> select dest_name from v$archive_dest_status
where recovery_mode = 'MANAGED REAL TIME APPLY';
Data Guard only - Disable Fast-Start Failover and Data Guard Broker
Disable Data Guard broker if it is configured as Data Guard broker is incompatible with running from different releases. If fast-start failover is configured, it must be disabled before broker configuration is disabled, as follows.
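For example, connected with dgmgrl to the primary (standard broker commands, shown here as a sketch):

DGMGRL> disable fast_start failover;
DGMGRL> disable configuration;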
Also, disable the init.ora setting dg_broker_start in both primary and standby as follows:
SYS@PRIM1> alter system set dg_broker_start = false;
SYS@STBY1> alter system set dg_broker_start = false;
Flush all redo to the standby database using the following command. The standby database db_unique_name in this example is 'STBY'. Monitor the alert.log of the standby database for the 'End-of-Redo' message. Example as follows:
SYS@PRIM1> alter system flush redo to 'STBY';
Shutdown the primary database.
Wait until the 'End-of-Redo' on the standby is confirmed, as follows:
End-Of-REDO archived log file has not been recovered
Incomplete recovery SCN:0:1371457 archive SCN:0:1391461
Physical Standby did not apply all the redo from the primary.
Tue Nov 22 13:03:51 2011
Media Recovery Log +RECO/prim/archivelog/2011_11_22/thread_2_seq_39.1090.767883831
Identified End-Of-Redo (move redo) for thread 2 sequence 39 at SCN 0x0.153b65
Resetting standby activation ID 338172592 (0x14281ab0)
Media Recovery Waiting for thread 2 sequence 40
Tue Nov 22 13:03:52 2011
Standby switchover readiness check: Checking whether recoveryapplied all redo..
Physical Standby applied all the redo from the primary.
Standby switchover readiness check: Checking whether recoveryapplied all redo..
Physical Standby applied all the redo from the primary.
Then shutdown the primary database, as follows:
(oracle)$ srvctl stop database -d PRIM -o immediate
Shutdown the standby database and restart it with 11.2.0.3
Perform the following steps on the standby database server:
Shutdown the standby database, as follows:
(oracle)$ srvctl stop database -d stby
Copy required files from 11.2.0.1 or 11.2.0.2 home to 11.2.0.3 home.
The following example shows copying of the password file, but other files, such as init.ora files, may also need to be copied:
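A sketch of copying the password file on the standby database servers (run against the standby site's dbs_group; the orapw<SID> file names depend on your instance names):

(oracle)$ dcli -g ~/dbs_group -l oracle "cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwSTBY* /u01/app/oracle/product/11.2.0.3/dbhome_1/dbs/"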
Edit the standby database entry in /etc/oratab (Linux) or /var/opt/oracle/oratab (Solaris) to point to the new 11.2.0.3 home.
On both the primary and standby database servers, ensure the tnsnames.ora entries are available to the database after it has been upgraded. If using the default location for tnsnames.ora, $ORACLE_HOME/network/admin, then copy tnsnames.ora from the old home to the new home.
If using Data Guard Broker to manage the configuration, then modify the broker required SID_LIST listener.ora entry on all nodes to point to the new ORACLE_HOME. For example
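A sketch of such a static registration entry, assuming standby db_unique_name STBY and instance STBY1 on this node (adjust names to your environment):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STBY_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/dbhome_1)
      (SID_NAME = STBY1)
    )
  )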
Start the standby, as follows (add "-o mount" option for databases running Active Data Guard):
(oracle)$ srvctl start database -d stby
Start all primary instances in restricted mode
DBUA requires all RAC instances to be running from the current database home before starting the upgrade. To prevent an application from accidentally connecting to the primary database and performing work causing the standby to fall behind, startup the primary database in restricted mode, as follows:
(oracle)$ srvctl start database -d PRIM -o restrict
Upgrade the Database with Database Upgrade Assistant (DBUA)
Run DBUA to upgrade the primary database. All database instances of the database you are upgrading must be brought up or DBUA may hang. If there is a standby database, the primary database should be left running in restricted mode, as performed in the previous step.
Oracle recommends removing the value for the init.ora parameter 'listener_networks' before starting DBUA. The value will be restored after running DBUA. Be sure to obtain the original value before removing, as follows:
SYS@PRIM1> show parameter listener_networks
If the value for parameter listener_networks was set, then the value needs to be removed as follows:
SYS@PRIM1> alter system set listener_networks='' sid='*' scope=both;
For Data Guard Broker configurations only: Oracle recommends removing the log_archive_dest_n destination for the standby database. The value will be restored after running DBUA. Be sure to obtain the original value before removing it, as follows:
SYS@PRIM1> show parameter log_archive_dest
If for example log_archive_dest_2 was used then the value for log_archive_dest_2 needs to be removed for Data Guard Broker configurations. Do not use reset because reset only works on spfiles:
SYS@PRIM1> alter system set log_archive_dest_2='' sid='*' scope=both;
Run DBUA from the new 11.2.0.3 ORACLE_HOME as follows:
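For example (the DISPLAY value is a placeholder for your X display):

(oracle)$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
(oracle)$ export PATH=$ORACLE_HOME/bin:$PATH
(oracle)$ export DISPLAY=<your workstation>:0.0
(oracle)$ $ORACLE_HOME/bin/dbua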
Perform these mandatory actions on the DBUA screens:
On Welcome screen, click Next.
On Select Database screen, select the database to be upgraded, and then click Next.
Enter a local instance name if requested.
On Upgrade Options screen, select the desired options, and then click Next.
If you have a standby database, then do NOT select to turn off archiving during the upgrade.
On Recovery and Diagnostic Locations screen, click Next.
On Management Options screen, do not select any Enterprise Manager option as this will be done later using the Database Configuration Assistant (DBCA), then click Next. Note that depending on your configuration you may or may not be prompted for this information.
On Summary screen, verify information presented about the database upgrade, and then click Finish.
On Progress screen, when the upgrade is complete, click OK.
On Upgrade Results screen, review the upgrade result and investigate any failures, and then click Close.
The database upgrade to 11.2.0.3 is complete. There are additional actions to perform to complete configuration.
Review and perform steps in Oracle Upgrade Guide, Chapter 4 : After Upgrading to the New Release
The Oracle Upgrade Guide documents required and recommended tasks to perform after upgrading to 11.2.0.3. Since the database was upgraded from 11.2.0.1 or 11.2.0.2, some tasks do not apply. The following list is the minimum set of tasks that should be reviewed for your environment.
Update Environment Variables
Upgrade the Recovery Catalog
Upgrade the Time Zone File Version
For upgrades performed with DBUA, the tnsnames.ora entries for the upgraded database are updated in the tnsnames.ora file in the new home. However, entries not related to the upgraded database, or entries related to a standby database, are not updated by any DBUA action and must be synchronized manually. IFILE directives used in tnsnames.ora, for example in the grid home, need to be updated to point to the new database home.
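As a sketch, an IFILE directive in the grid home tnsnames.ora that previously pointed to the old database home would be updated along these lines (the exact file location is an example):

IFILE=/u01/app/oracle/product/11.2.0.3/dbhome_1/network/admin/tnsnames.ora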
Change Custom Scripts and Environment Variables to Reference the 11.2.0.3 Database Home
The primary database is upgraded and is now running out of the 11.2.0.3 database home. Customized administration and login scripts that reference database home ORACLE_HOME should be updated to refer to /u01/app/oracle/product/11.2.0.3/dbhome_1.
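For example, a login script such as ~oracle/.bash_profile (an example location; your site may use different scripts) would be updated along these lines:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH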
Add Underscore Initialization Parameters Back
During the upgrade, DBUA removes obsolete and underscore initialization parameters. One new underscore parameter needs to be double-checked and added back if it is not set.
Run 11.2.0.3 Bundle Patch Post Installation Steps
The Bundle Patch for Oracle Exadata Database Machine installation performed before the database upgrade has a post-installation step that requires running SQL against each database. Perform the patch post-installation steps documented in the patch README on one database server only. Review the patch README for the most up-to-date and exact details. The steps below are those required for BP1.
Run catbundle.sql to load the required bundle patch SQL, as follows.
(oracle)$ cd $ORACLE_HOME
(oracle)$ sqlplus / as sysdba
SYS@PRIM1> @?/rdbms/admin/catbundle.sql exa apply
Navigate to the <ORACLE_HOME>/cfgtoollogs/catbundle directory (if ORACLE_BASE is defined, then the logs are created under <ORACLE_BASE>/cfgtoollogs/catbundle) and check the log files for any errors, for example with "grep ^ORA <logfile> | sort -u". If there are errors, then refer to Section 3 of the README, "Known Issues". Here, the format of <TIMESTAMP> is YYYYMMMDD_HH_MM_SS.
Data Guard only - Enable Fast-Start Failover and Data Guard Broker
Update the static listener entry in the listener.ora file on all nodes where a standby instance can run so that it reflects the new ORACLE_HOME used, as follows:
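A minimal sketch of such a static entry for the standby, assuming the standby instance on this node is named STBY1 (an example name); adjust the listener name, SID_NAME, and GLOBAL_DBNAME (including any domain) to match your environment:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STBY_DGMGRL)    # db_unique_name_DGMGRL[.db_domain]
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/dbhome_1)
      (SID_NAME = STBY1)               # example instance name
    )
  )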
For Data Guard Broker Configurations Only - The workaround for unpublished bug 13387526 must now be undone, so log_archive_dest_n for the standby database must be restored to its original value. When the broker is re-enabled, this parameter is reset to its original value automatically.
If Data Guard Broker and fast-start failover were disabled in a previous step, then re-enable them in SQL*Plus and dgmgrl, as follows:
SYS@PRIM1> alter system set dg_broker_start = true sid='*';
SYS@STBY1> alter system set dg_broker_start = true sid='*';
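Once the broker has restarted on both sites, fast-start failover can be re-enabled from dgmgrl. The following is a sketch only; the connect string is an example, and whether the configuration itself also needs to be re-enabled depends on how it was disabled earlier:

(oracle)$ dgmgrl sys@PRIM
DGMGRL> show configuration;
DGMGRL> enable fast_start failover;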
This section includes both required and optional steps to perform following the upgrade, such as updating DBFS, performing a general health check, re-configuring for Cloud Control, and cleaning up the old, unused home areas.
DBFS only - Perform DBFS Required Updates
When the DBFS database is upgraded to 11.2.0.3 the following additional actions are required:
Obtain latest mount-dbfs.sh script from Document 1054431.1
Download the latest mount-dbfs.sh script that is attached to Document 1054431.1 and place it in directory /u01/app/11.2.0.3/grid/crs/script under the new 11.2.0.3 grid home.
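A sketch of distributing the script to all database servers with dcli, assuming the ~/dbs_group file listing all database servers and the target directory used in this document; the permissions shown are an example:

(oracle)$ dcli -g ~/dbs_group -l oracle -f mount-dbfs.sh -d /u01/app/11.2.0.3/grid/crs/script
(oracle)$ dcli -g ~/dbs_group -l oracle chmod 750 /u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh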
Edit mount-dbfs.sh script and Oracle Net files for the new 11.2.0.3 environment
Using the variable settings from the original mount-dbfs.sh script, edit the variable settings in the new mount-dbfs.sh script to match your environment. The setting for variable ORACLE_HOME must be changed to match the 11.2.0.3 ORACLE_HOME /u01/app/oracle/product/11.2.0.3/dbhome_1.
Edit tnsnames.ora used for DBFS to change the directory referenced for the parameters PROGRAM and ORACLE_HOME to the new 11.2.0.3 database home.
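A minimal sketch of a DBFS tnsnames.ora entry after the update, assuming a DBFS database named fsdb with local instance fsdb1 (example names); see Document 1054431.1 for the authoritative template:

fsdb.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = BEQ)
      (PROGRAM = /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle)
      (ARGV0 = oraclefsdb1)
      (ARGS = '(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS = 'ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1,ORACLE_SID=fsdb1')
    )
    (CONNECT_DATA = (SID = fsdb1))
  )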
If the location of Oracle Net files changed as a result of the upgrade, then change the setting of TNS_ADMIN in shell scripts and login files.
If you are using the Oracle Wallet to store the DBFS password (wallet-based authentication), then recreate the symbolic link to /sbin/mount.dbfs by running the following commands:
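A sketch of recreating the link as root, assuming the new database home used in this document; consult Document 1054431.1 for the authoritative wallet setup steps:

(root)# rm -f /sbin/mount.dbfs
(root)# ln -s /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/dbfs_client /sbin/mount.dbfs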
For V2 or later: Run Exachk again after the upgrade to validate software, hardware, firmware, and configuration against best practices.
Since Exachk is not certified on V1 hardware, HealthCheck needs to be used to collect data regarding key software, hardware and firmware releases.
Until release 2.1.4, Exachk is available for Linux only. The exachk or HealthCheck bundles attached to this note contain detailed documentation, instructions, and examples.
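As a sketch only, a typical post-upgrade run on V2 or later hardware looks like the following, assuming the exachk bundle has been downloaded and unzipped into a working directory (the directory shown is an example); always follow the instructions shipped with the bundle:

(oracle)$ cd /opt/oracle.SupportTools/exachk    # example location of the unzipped bundle
(oracle)$ ./exachk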
Both the Oracle Exadata Database Machine exachk and HealthCheck collect data regarding key software, hardware, firmware, and configurations. The exachk or HealthCheck output assists customers to review and cross reference current collected data against supported version levels and recommended Oracle Exadata best practices.
Both the Oracle Exadata Database Machine exachk and HealthCheck can be executed as desired and should be executed regularly as part of the maintenance program for an Oracle Exadata Database Machine.
exachk and HealthCheck reference an Oracle Exadata Database Machine deployed per Oracle standards.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a database, network, or SQL performance analysis tool.
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a continuous monitoring utility, and they do not duplicate the work of other monitoring or alerting tools (e.g. Enterprise Manager).
Neither the Oracle Exadata Database Machine exachk nor HealthCheck is a security configuration analysis or monitoring utility, and they do not duplicate the work of other security tools. Please reference the secure configuration component of the Database Lifecycle Management pack for Oracle Enterprise Manager.
exachk is the current version. HealthCheck is frozen and retained for backward compatibility with HP hardware based Oracle Exadata Database Machines. An 11.1 "custom" Exadata Database Machine is outside the scope of HealthCheck.
exachk Version Notes
As of November 23, 2011, this note will contain the exachk current production version and the current beta version of the next release, if the beta is available. If the beta version of the next release is not yet available, this note will contain the exachk current production version and the prior production version.
Re-configure the database for Enterprise Manager Cloud Control 12c using DBCA.
Perform these actions on the DBCA screens to update the properties of an already registered database in 12c OMS after the database upgrade.
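As a sketch, launch DBCA from the new 11.2.0.3 database home (an X display is assumed) before stepping through the screens below:

(oracle)$ /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/dbca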
1. On the Welcome screen, select the database type you would like to administer; for most deployments that will be 'RAC'.
2. On the 'Select the operation that you want to perform' screen, select 'Configure Database Options', and then click Next.
3. On the 'Select the database that you want to configure options for' screen, select the database you want to re-configure EM for, provide the SYSDBA credentials, and then click Next.
4. On the 'Enterprise Manager' screen, select 'Configure Enterprise Manager', verify the management service to register with for centralized management, and then click Next. When prompted, specify the ASMSNMP password for ASM.
5. On the 'Database Components' screen, do not select the 'Enterprise Manager Repository' option, and then click Next.
6. Specify the administrative passwords, and then click Next.
7. Do not change the mode in which the database operates, and then click Finish. On 'Configure new settings for database', click 'OK'.
8. On the Progress screen, when 'Completing Database Configuration' finishes, you will be prompted to perform another operation; click 'No'.
Performing this step will also cause the properties for the Grid Infrastructure targets to be updated to their new values.
Monitoring properties for standby databases need to be updated manually in the EM console: go to the target and right-click, select the database and then Target Setup, and then select Monitoring Configuration.
Deinstall the 11.2.0.1/11.2.0.2 Database and Grid Homes
After the upgrade is complete and the database and application have been validated, the 11.2.0.1 or 11.2.0.2 database and grid homes can be removed using the deinstall tool. Run these commands on the first database server. The deinstall tool will perform the deinstallation on all database servers. Refer to Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux for additional details of the deinstall tool.
Before running the deinstall tool to remove the old database and grid homes, run deinstall with the -checkonly option to verify the actions it will perform. Ensure the following:
There are no databases configured to use the home.
The home is not a configured Grid Infrastructure home.
ASM is not detected in the Oracle Home.
To deinstall the database and Grid Infrastructure homes, the example steps are as follows:
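The following is a sketch only, assuming the old homes at /u01/app/oracle/product/11.2.0/dbhome_1 and /u01/app/11.2.0/grid (an old 11.2.0.2 grid home would instead be under /u01/app/11.2.0.2/grid); review the prompts and the -checkonly output carefully before the actual deinstall run:

(oracle)$ /u01/app/oracle/product/11.2.0/dbhome_1/deinstall/deinstall -checkonly
(oracle)$ /u01/app/oracle/product/11.2.0/dbhome_1/deinstall/deinstall
(oracle)$ /u01/app/11.2.0/grid/deinstall/deinstall -checkonly
(oracle)$ /u01/app/11.2.0/grid/deinstall/deinstall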
Database installations that use an RMAN Media Management Library (MML) may require re-configuration of the Oracle Database home after the upgrade. Most often, recreating a symbolic link to the vendor-provided MML is sufficient. For specific details, see the MML documentation.
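A sketch of recreating such a link, where /opt/<vendor>/lib/libobk.so is a hypothetical vendor library path; the actual library path and link name come from the MML documentation:

(oracle)$ ln -sf /opt/<vendor>/lib/libobk.so /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libobk.so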
The approach of installing a new software release out of place (in a new home) helps protect against failed installations. Any type of installation problem should not impact availability, and failed installations can easily be rolled back and restarted. The rootupgrade.sh script that needs to run after installing a new Grid Infrastructure is the critical part of the upgrade. When this fails, normal problem solving applies, and the notes mentioned below may be helpful:
Document 1364946.1 - How to Downgrade 11.2.0.3 Grid Infrastructure to lower 11.2 GI or Pre-11.2 CRS
<NOTE:1364946.1> - How to Downgrade 11.2.0.3 Grid Infrastructure Cluster to Lower 11.2 GI or Pre-11.2 CRS
<NOTE:401749.1> - Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration
<NOTE:884522.1> - How to Download and Run Oracle's Database Pre-Upgrade Utility
<NOTE:969254.1> - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix
<NOTE:361468.1> - HugePages on Oracle Linux 64-bit
<NOTE:888828.1> - Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions
@ <BUG:9329767> - ORA-00600 [KJBMMCHKINTEG:FROM] DURING ROLLING UPGRADE OF ASM FROM 11.1.0.7-11.2
@ <BUG:13387526> - 11.2.0.3 DBUA REPORTS ORA-16025 DURING "STARTUP UPGRADE"
<BUG:9795321> - MTU SIZE FOR VIP UNDER 11GR2 GRID INFRASTRUCTURE
<NOTE:1050908.1> - Troubleshoot Grid Infrastructure Startup Issues
<NOTE:1054431.1> - Configuring DBFS on Oracle Database Machine
@ <BUG:12539000> - 11203:ASM UPGRADE FAILED ON FIRST NODE WITH ORA-03113
@ <BUG:12652740> - PLACE HOLDER BUG FOR NEW 11.2.0.1 BLR OF BASE BUG 9329767
<BUG:13323698> - CVU: OS KERNEL PARAMETER CHECK IN GRID INFRA INSTALLATION INCORRECT ON EXADATA
<NOTE:1274318.1> - Oracle Sun Database Machine Setup/Configuration Best Practices
<NOTE:1281913.1> - root Script (root.sh or rootupgrade.sh) Fails if ORACLE_BASE is set to /opt/oracle
<NOTE:1284070.1> - Updating key software components on database hosts to match those on the cells
@ <BUG:13612271> - CVU/OUI UNWARE OF NEW SMF PROPERTY FOR NTP CONFIGURATION
<NOTE:949322.1> - Oracle11g Data Guard: Database Rolling Upgrade Shell Script
<NOTE:1070954.1> - Oracle Exadata Database Machine exachk or HealthCheck
<NOTE:1189783.1> - Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2
<NOTE:1348303.1> - 11.2.0.3 Patch Set - List of Bug Fixes by Problem Type
Attachments