Upgrading the Oracle Database Appliance to Version 2.3.0.0.0

August 5, 2012

As you may have guessed, applying patches on the Oracle Database Appliance can be a little different from a standard Oracle environment. Oracle releases a single software version that covers all aspects of the ODA: firmware, operating system, and the Oracle software stack (Grid Infrastructure, RDBMS). Versions are numbered in a five-field scheme (for example, 2.3.0.0.0); see MOS note #1397680.1 for a breakdown of the fields.

The ODA was initially released with version 2.1.0.0.0, and has seen several releases over the last year:

Patch      Features
---------  --------------------------------------------------------------------------------
2.1.0.3.0  CPU bugfix, 11.2.0.2.5 GI PSU5
2.1.0.3.1  OAK software updates
2.2.0.0.0  11.2.0.3 GI/RDBMS, OEL 5.8, UEK kernel
2.3.0.0.0  July 2012 PSU for 11.2.0.2/11.2.0.3, firmware upgrades, multiple database home support

In this post, we’ll discuss upgrading an ODA running RAC or RAC One Node to version 2.3.0.0.0.  Note that before going to 2.3, you must upgrade to 2.2 first; the 2.3 patch does not include some of the files used for the OEL 5.8 upgrade, among other things.
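Because of that prerequisite, it's worth gating the upgrade on the currently installed version before downloading anything. The sketch below shows one way to compare dotted version strings; the `installed` value is a stand-in, since on a real ODA you would capture it from `oakcli show version` output instead:

```shell
# Sketch: gate the 2.3.0.0.0 patch on the 2.2.0.0.0 prerequisite.
# "installed" is a stand-in value here; on the appliance it would come
# from "oakcli show version".
installed="2.1.0.3.1"
required="2.2.0.0.0"
# GNU sort -V orders dotted version strings numerically; if the lowest
# of the two is the requirement, the installed version meets it.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    msg="OK to apply 2.3.0.0.0"
else
    msg="Upgrade to $required first"
fi
echo "$msg"
```

With the stand-in value above, the check reports that the 2.2 upgrade is still needed.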

As with many functions on the ODA, patching is performed with the oakcli utility.  OAK stands for “Oracle Appliance Kit,” and it is at the heart of the manageability enhancements the ODA provides.  Check MOS note 888888.1 for links to all ODA patches.  Download patch 13982331, place the zip file on each node of the ODA, and unpack it with oakcli on each node:

[root@oda1 ~]# oakcli unpack -package p13982331_23000_Linux-x86-64.zip
Unpacking takes a while,  pls wait....
Successfully unpacked the files to repository.
[root@oda2 ~]# oakcli unpack -package p13982331_23000_Linux-x86-64.zip
Unpacking takes a while,  pls wait....
Successfully unpacked the files to repository.

Next, we will patch the infrastructure. This covers the operating system, firmware, ASR, and the Oracle Appliance Kit. Run this from the first node only. The entire cluster will be shut down because the shared hard disks will have their firmware upgraded. The infrastructure is patched with “oakcli update -patch 2.3.0.0.0 --infra”:

[root@oda1 ~]# oakcli update -patch 2.3.0.0.0 --infra
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: y
INFO: User has confirmed the reboot
INFO: Patch bundle must be unpacked on the second node also before applying this patch
Did you unpack the patch bundle on the second node?: [Y/N]?: y
 
Please enter the 'root' user password: 
Please re-enter the 'root' user password: 
INFO: Setting up the SSH
............done
INFO: Running pre-install scripts
INFO: 2012-07-26 21:56:17: Running pre patch script for 2.3.0.0.0
INFO: 2012-07-26 21:56:17: Completed pre patch script for 2.3.0.0.0
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/pkgrepos/System/2.3.0.0.0/bin/prepatch -v 2.3.0.0.0
 
INFO: 2012-07-26 21:56:20: ------------------Patching HMP-------------------------
SUCCESS: 2012-07-26 21:56:50: Successfully upgraded the HMP
 
INFO: 2012-07-26 21:56:51: ----------------------Patching OAK---------------------
SUCCESS: 2012-07-26 21:57:19: Succesfully upgraded OAK 
 
INFO: 2012-07-26 21:57:20: ----------------------Patching ASR---------------------
INFO: 2012-07-26 21:57:20: ASR is already upgraded or running with latest version 
 
INFO: 2012-07-26 21:57:20: ----------------------Patching IPMI---------------------
INFO: 2012-07-26 21:57:20: IPMI is already upgraded or running with latest version 
 
INFO: 2012-07-26 21:57:28: ----------------Patching the Storage-------------------
INFO: 2012-07-26 21:57:28: ....................Patching SSDs...............
INFO: 2012-07-26 21:57:28: Disk : d20  is already running with : ZeusIOPs G3 E125
INFO: 2012-07-26 21:57:28: Disk : d21  is already running with : ZeusIOPs G3 E125
INFO: 2012-07-26 21:57:28: Disk : d22  is already running with : ZeusIOPs G3 E125
INFO: 2012-07-26 21:57:28: Disk : d23  is already running with : ZeusIOPs G3 E125
INFO: 2012-07-26 21:57:29: ....................Patching shared HDDs...............
INFO: 2012-07-26 21:57:29: Updating the  Disk : d0 with the firmware : ST360057SSUN600G 0B25
INFO: 2012-07-26 21:57:31: Clusterware is running on one or more nodes of the cluster
INFO: 2012-07-26 21:57:31: Attempting to stop clusterware and its resources across the cluster
SUCCESS: 2012-07-26 21:59:38: Successfully stopped the clusterware
 
SUCCESS: 2012-07-26 21:59:59: Successfully updated the firmware on  Disk : d0 to ST360057SSUN600G 0B25
INFO: 2012-07-26 21:59:59: Updating the  Disk : d1 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:00:23: Successfully updated the firmware on  Disk : d1 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:00:23: Updating the  Disk : d2 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:00:44: Successfully updated the firmware on  Disk : d2 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:00:44: Updating the  Disk : d3 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:01:04: Successfully updated the firmware on  Disk : d3 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:01:05: Updating the  Disk : d4 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:01:26: Successfully updated the firmware on  Disk : d4 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:01:26: Updating the  Disk : d5 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:01:46: Successfully updated the firmware on  Disk : d5 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:01:47: Updating the  Disk : d6 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:02:09: Successfully updated the firmware on  Disk : d6 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:02:09: Updating the  Disk : d7 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:02:29: Successfully updated the firmware on  Disk : d7 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:02:29: Updating the  Disk : d8 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:02:48: Successfully updated the firmware on  Disk : d8 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:02:48: Updating the  Disk : d9 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:06:07: Successfully updated the firmware on  Disk : d9 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:06:07: Updating the  Disk : d10 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:06:27: Successfully updated the firmware on  Disk : d10 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:06:27: Updating the  Disk : d11 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:06:46: Successfully updated the firmware on  Disk : d11 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:06:46: Updating the  Disk : d12 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:07:07: Successfully updated the firmware on  Disk : d12 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:07:07: Updating the  Disk : d13 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:07:26: Successfully updated the firmware on  Disk : d13 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:07:26: Updating the  Disk : d14 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:07:46: Successfully updated the firmware on  Disk : d14 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:07:46: Updating the  Disk : d15 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:08:07: Successfully updated the firmware on  Disk : d15 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:08:07: Updating the  Disk : d16 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:08:27: Successfully updated the firmware on  Disk : d16 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:08:27: Updating the  Disk : d17 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:08:47: Successfully updated the firmware on  Disk : d17 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:08:47: Updating the  Disk : d18 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:09:07: Successfully updated the firmware on  Disk : d18 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:09:08: Updating the  Disk : d19 with the firmware : ST360057SSUN600G 0B25
SUCCESS: 2012-07-26 22:09:27: Successfully updated the firmware on  Disk : d19 to ST360057SSUN600G 0B25
INFO: 2012-07-26 22:09:27: ....................Patching local HDDs...............
INFO: 2012-07-26 22:09:27: Disk : c0d0  is already running with : ST95001N SA03
INFO: 2012-07-26 22:09:27: Disk : c0d1  is already running with : ST95001N SA03
INFO: 2012-07-26 22:09:27: ....................Patching Expanders...............
INFO: 2012-07-26 22:09:27: Expander : x0  is already running with : T4 Storage 0342
INFO: 2012-07-26 22:09:27: Expander : x1  is already running with : T4 Storage 0342
INFO: 2012-07-26 22:09:27: ....................Patching Controllers...............
INFO: 2012-07-26 22:09:27: No-update for the Controller: c0 
INFO: 2012-07-26 22:09:27: Controller : c1  is already running with : 0x0072 05.00.29.00
INFO: 2012-07-26 22:09:27: Controller : c2  is already running with : 0x0072 05.00.29.00
INFO: 2012-07-26 22:09:27: ------------Finished the storage Patching------------
 
INFO: 2012-07-26 22:09:29: -----------------Patching Ilom & Bios-----------------
INFO: 2012-07-26 22:09:29: Getting the ILOM Ip address
INFO: 2012-07-26 22:09:29: Updating the Ilom using LAN+ protocol
INFO: 2012-07-26 22:09:30: Updating the ILOM. It takes a while
INFO: 2012-07-26 22:14:25: Verifying the updated Ilom Version, it may take a while if ServiceProcessor is booting
INFO: 2012-07-26 22:14:27: Waiting for the service processor to be up
SUCCESS: 2012-07-26 22:18:16: Successfully updated the ILOM with the firmware 3.0.16.22 r73911
 
INFO: Patching the infrastructure on node: 192.168.16.25 , it may take upto 30 minutes. Please wait
...
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /tmp/prepare_summary
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/pkgrepos/System/2.3.0.0.0/bin/oakpatch -p infrastructure -f 2.3.0.0.0
 
............done
 
INFO: Infrastructure patching summary on node: 192.168.16.24
SUCCESS: 2012-07-26 22:29:19:  Successfully upgraded the HMP
SUCCESS: 2012-07-26 22:29:19:  Succesfully updated the OAK
INFO: 2012-07-26 22:29:19:  ASR is already upgraded
INFO: 2012-07-26 22:29:19:  IPMI is already upgraded
INFO: 2012-07-26 22:29:19:  Storage patching summary
SUCCESS: 2012-07-26 22:29:19:  No failures during storage upgrade
SUCCESS: 2012-07-26 22:29:19:  Successfully updated the ILOM & Bios
 
INFO: Infrastructure patching summary on node: 192.168.16.25
SUCCESS: 2012-07-26 22:29:19:  Successfully upgraded the HMP
SUCCESS: 2012-07-26 22:29:19:  Succesfully updated the OAK
INFO: 2012-07-26 22:29:19:  IPMI is already upgraded
INFO: 2012-07-26 22:29:19:  Storage patching summary
SUCCESS: 2012-07-26 22:29:19:  No failures during storage upgrade
SUCCESS: 2012-07-26 22:29:19:  Successfully updated the ILOM & Bios
 
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/pkgrepos/System/2.3.0.0.0/bin/postpatch -v 2.3.0.0.0
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/pkgrepos/System/2.3.0.0.0/bin/postpatch -v 2.3.0.0.0
INFO: Some of the patched components require node reboot. Rebooting the nodes
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.24 /tmp/pending_actions
INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /tmp/pending_actions
 
Broadcast message from root (Thu Jul 26 22:32:49 2012):
 
The system is going down for system halt NOW!

Patching the infrastructure took a little more than 30 minutes for both nodes. After the infrastructure has been patched, both nodes reboot and the cluster comes back up. Before moving on, check that the serial number is visible on both nodes and that they show the same value:

[root@oda1 ~]# dmidecode -t 1|grep 'Serial Number'
	Serial Number: **********
[root@oda2 ~]# dmidecode -t 1|grep 'Serial Number'
	Serial Number: **********
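The check itself is just text matching, so it can be scripted. A self-contained sketch follows, using sample lines and a made-up serial in place of real `dmidecode -t 1` output; on the appliance you would feed in each node's actual output (for example, over ssh):

```shell
# Sketch: pull the serial out of dmidecode-style output from each node
# and compare. The sample lines and the serial "1234ABCD" are stand-ins
# for real "dmidecode -t 1" output.
node1_out="	Serial Number: 1234ABCD"
node2_out="	Serial Number: 1234ABCD"
sn1=$(printf '%s\n' "$node1_out" | awk -F': ' '/Serial Number/ {print $2}')
sn2=$(printf '%s\n' "$node2_out" | awk -F': ' '/Serial Number/ {print $2}')
if [ -n "$sn1" ] && [ "$sn1" = "$sn2" ]; then
    echo "Serial numbers match: $sn1"
else
    echo "Serial number mismatch or missing"
fi
```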

We are now ready to upgrade the Grid Infrastructure on the cluster. This portion is performed in a rolling fashion: the first node is patched, then the second. The GI home is patched with “oakcli update -patch 2.3.0.0.0 --gi”:

[root@oda1 ~]# oakcli update -patch 2.3.0.0.0 --gi
 
Please enter the 'root' user password: 
Please re-enter the 'root' user password: 
INFO: Setting up the SSH
............done
 
Please enter the 'grid' user password: 
Please re-enter the 'grid' user password: 
...
...
 
..........done
...
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2012-07-26 22:52:09: Setting up the ssh for grid user
..........done
...
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2012-07-26 22:52:29: Patching the GI home on node oda1
INFO: 2012-07-26 22:52:29: Updating the opatch
INFO: 2012-07-26 22:53:10: Performing the conflict checks
SUCCESS: 2012-07-26 22:53:25: Conflict checks passed for all the homes
INFO: 2012-07-26 22:53:25: Checking if the patch is already applied on any of the homes
INFO: 2012-07-26 22:53:31: No home is already up-to-date
SUCCESS: 2012-07-26 22:54:23: Successfully stopped the dbconsoles
INFO: 2012-07-26 22:54:28: Applying patch on the homes: /u01/app/11.2.0.3/grid
INFO: 2012-07-26 22:54:28: It may take upto 15 mins
SUCCESS: 2012-07-26 23:05:56: Successfully applied the patch on home: /u01/app/11.2.0.3/grid
SUCCESS: 2012-07-26 23:06:18: Successfully started the dbconsoles
INFO: 2012-07-26 23:06:18: Patching the GI home on node oda2
 
...
 
............done
 
INFO: GI patching summary on node: 192.168.16.24
SUCCESS: 2012-07-26 23:18:41:  Successfully applied the patch on home /u01/app/11.2.0.3/grid
 
INFO: GI patching summary on node: 192.168.16.25
SUCCESS: 2012-07-26 23:18:41:  Successfully applied the patch on home /u01/app/11.2.0.3/grid

This process took around 25-30 minutes to complete on our system, depending on the number of databases being stopped and started. Finally, we are ready to patch the database homes to the latest patch release. Databases running 11.2.0.3 will be updated to 11.2.0.3.3, and 11.2.0.2 homes will be updated to 11.2.0.2.7. Note that the database patches are applied in place, so your home path will not change. Before patching the database homes, check that your databases are registered with the Oracle Appliance Manager:

[root@oda2 ~]# oakcli show databases
Database Name    Database Type   Database HomeName    Database HomeLocation                               Database Version                   
----------------  -----------     ----------------     ---------------------------------------             ---------------------              
SCRATCH          SINGLE          DEMO                 /u01/app/oracle/product/11.2.0/dbhome_1            11.2.0.2.5(13343424,13343447)      
oakdb            RAC             dbhome11203          /u01/app/oracle/product/11.2.0.3/dbhome_1          11.2.0.3.2(13696216,13696251)      
SLOB             SINGLE          dbhome11203          /u01/app/oracle/product/11.2.0.3/dbhome_1          11.2.0.3.2(13696216,13696251)      
DEMO             RAC             DEMO                 /u01/app/oracle/product/11.2.0/dbhome_1            11.2.0.2.5(13343424,13343447)      
CDH3             RAC             dbhome11203          /u01/app/oracle/product/11.2.0.3/dbhome_1          11.2.0.3.2(13696216,13696251)
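Since several databases can share a home, it can help to see up front which distinct homes the database patch will touch. A sketch that extracts them from a listing like the one above (pasted inline here as sample data; on the appliance you would pipe the live `oakcli show databases` output through the same filter):

```shell
# Sketch: list the distinct database home locations from
# "oakcli show databases" output. The heredoc is sample data copied
# from the listing above; pipe the live command output instead.
homes=$(awk 'NR>2 {print $4}' <<'EOF' | sort -u
Database Name    Database Type   Database HomeName    Database HomeLocation                     Database Version
---------------- -----------     ----------------     ---------------------                     ----------------
SCRATCH          SINGLE          DEMO                 /u01/app/oracle/product/11.2.0/dbhome_1   11.2.0.2.5(13343424,13343447)
oakdb            RAC             dbhome11203          /u01/app/oracle/product/11.2.0.3/dbhome_1 11.2.0.3.2(13696216,13696251)
SLOB             SINGLE          dbhome11203          /u01/app/oracle/product/11.2.0.3/dbhome_1 11.2.0.3.2(13696216,13696251)
DEMO             RAC             DEMO                 /u01/app/oracle/product/11.2.0/dbhome_1   11.2.0.2.5(13343424,13343447)
CDH3             RAC             dbhome11203          /u01/app/oracle/product/11.2.0.3/dbhome_1 11.2.0.3.2(13696216,13696251)
EOF
)
echo "$homes"
```

For this system it reports the two homes shown in the listing: one 11.2.0.2 home and one 11.2.0.3 home.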

You can see that we have both 11.2.0.3 and 11.2.0.2 databases installed on this system. Issue “oakcli update -patch 2.3.0.0.0 --database” to update all of your database homes. Once again, this portion of the patch is applied in a rolling fashion.

[root@oda1 ~]# oakcli update -patch 2.3.0.0.0 --database
 
Please enter the 'root' user password: 
Please re-enter the 'root' user password: 
INFO: Setting up the SSH
............done
 
Please enter the 'oracle' user password: 
Please re-enter the 'oracle' user password: 
...
...
 
..........done
...
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2012-07-26 23:26:42: Getting the possible database homes for patching
...
INFO: 2012-07-26 23:26:53: Patching 11.2.0.2 Database homes on node oda1
 
Found the following 11.2.0.2 homes possible for patching:
 
HOME_NAME                      HOME_LOCATION                                          
---------                      -------------                                          
DEMO                           /u01/app/oracle/product/11.2.0/dbhome_1                
 
[Please note that few of the above database homes may be already up-to-date. They will be automatically ignored]
 
Would you like to patch all the above homes: Y | N ? :y
INFO: 2012-07-26 23:27:06: Setting up ssh for the user oracle
..........done
...
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2012-07-26 23:27:26: Updating the opatch
INFO: 2012-07-26 23:27:55: Performing the conflict checks
SUCCESS: 2012-07-26 23:28:22: Conflict checks passed for all the homes
INFO: 2012-07-26 23:28:22: Checking if the patch is already applied on any of the homes
INFO: 2012-07-26 23:28:28: No home is already up-to-date
SUCCESS: 2012-07-26 23:28:59: Successfully stopped the dbconsoles
INFO: 2012-07-26 23:29:04: Applying patch on the homes: /u01/app/oracle/product/11.2.0/dbhome_1
INFO: 2012-07-26 23:29:04: It may take upto 15 mins
 
SUCCESS: 2012-07-26 23:42:27: Successfully applied the patch on home: /u01/app/oracle/product/11.2.0/dbhome_1
SUCCESS: 2012-07-26 23:42:46: Successfully started the dbconsoles
INFO: 2012-07-26 23:42:49: Patching 11.2.0.2 Database homes on node oda2
INFO: 2012-07-26 23:52:51: Running the catbundle.sql
INFO: 2012-07-26 23:52:54: Running catbundle.sql on the database DEMO
INFO: 2012-07-26 23:54:00: Running catbundle.sql on the database SCRATCH
...
INFO: 2012-07-26 23:54:43: Patching 11.2.0.3 Database homes on node oda1
 
Found the following 11.2.0.3 homes possible for patching:
 
HOME_NAME                      HOME_LOCATION                                          
---------                      -------------                                          
dbhome11203                    /u01/app/oracle/product/11.2.0.3/dbhome_1              
 
[Please note that few of the above database homes may be already up-to-date. They will be automatically ignored]
 
Would you like to patch all the above homes: Y | N ? :y
INFO: 2012-07-26 23:54:50: Updating the opatch
INFO: 2012-07-26 23:55:32: Performing the conflict checks
SUCCESS: 2012-07-26 23:55:47: Conflict checks passed for all the homes
INFO: 2012-07-26 23:55:47: Checking if the patch is already applied on any of the homes
INFO: 2012-07-26 23:55:53: No home is already up-to-date
SUCCESS: 2012-07-26 23:56:02: Successfully stopped the dbconsoles
INFO: 2012-07-26 23:56:07: Applying patch on the homes: /u01/app/oracle/product/11.2.0.3/dbhome_1
INFO: 2012-07-26 23:56:07: It may take upto 15 mins
SUCCESS: 2012-07-27 00:01:00: Successfully applied the patch on home: /u01/app/oracle/product/11.2.0.3/dbhome_1
SUCCESS: 2012-07-27 00:01:00: Successfully started the dbconsoles
INFO: 2012-07-27 00:01:04: Patching 11.2.0.3 Database homes on node oda2
INFO: 2012-07-27 00:05:14: Running the catbundle.sql
INFO: 2012-07-27 00:05:17: Running catbundle.sql on the database CDH3
INFO: 2012-07-27 00:05:48: Running catbundle.sql on the database oakdb
INFO: 2012-07-27 00:06:19: Running catbundle.sql on the database SLOB
 
............done
 
INFO: DB patching summary on node: 192.168.16.24
SUCCESS: 2012-07-27 00:07:08:  Successfully applied the patch on home /u01/app/oracle/product/11.2.0/dbhome_1
SUCCESS: 2012-07-27 00:07:08:  Successfully applied the patch on home /u01/app/oracle/product/11.2.0.3/dbhome_1
 
INFO: DB patching summary on node: 192.168.16.25
SUCCESS: 2012-07-27 00:07:08:  Successfully applied the patch on home /u01/app/oracle/product/11.2.0/dbhome_1
SUCCESS: 2012-07-27 00:07:08:  Successfully applied the patch on home /u01/app/oracle/product/11.2.0.3/dbhome_1

For our five databases, the patch application process took around 40 minutes. Nothing was required of the administrator, not even running the post-installation scripts. Once everything is done, verify the result with “oakcli show version -detail”:

[root@oda1 ~]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version        
--------------  ---------------           ------------------        -----------------        
2.3.0.0.0                                                                                    
                Controller                05.00.29.00               Up-to-date               
                Expander                  0342                      Up-to-date               
                SSD_SHARED                E125                      Up-to-date               
                HDD_LOCAL                 SA03                      Up-to-date               
                HDD_SHARED                0B25                      Up-to-date               
                ILOM                      3.0.16.22 r73911          Up-to-date               
                BIOS                      12010309                  Up-to-date               
                IPMI                      1.8.10.4                  Up-to-date               
                HMP                       2.2.4                     Up-to-date               
                OAK                       2.3.0.0.0                 Up-to-date               
                OEL                       5.8                       Up-to-date               
                GI_HOME                   11.2.0.3.3(13923374,      Up-to-date               
                                          13919095)                                          
                DB_HOME                   11.2.0.3.3(13923374,      Up-to-date               
                                          13919095)
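A quick way to eyeball this output is to flag any component whose last column is not “Up-to-date”. A sketch against a few sample lines (stand-ins for the live output; note that wrapped continuation lines such as the GI_HOME patch list would need to be joined first):

```shell
# Sketch: collect components not reported as "Up-to-date" in
# "oakcli show version -detail" output. The heredoc lines are stand-ins;
# pipe the live command output through the same filter on the appliance.
stale=$(awk 'NF>=3 && $NF!="Up-to-date"' <<'EOF'
                Controller                05.00.29.00               Up-to-date
                ILOM                      3.0.16.22 r73911          Up-to-date
                OAK                       2.3.0.0.0                 Up-to-date
EOF
)
if [ -z "$stale" ]; then
    echo "All listed components up-to-date"
else
    echo "Needs attention:"
    echo "$stale"
fi
```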

End to end, applying the patch took a little over two hours.
