Exadata Upgrade to 12.2.0.1 – The Missing Step

March 29, 2017

I decided this week to be a little brave and upgrade one of the Enkitec Exadata racks to 12.2.0.1.  I installed the 12.2.1.1.0 Exadata image a few weeks ago, and have been waiting for a chance to upgrade clusterware to 12.2.  Thankfully, Oracle provides a very good note for this, but I did hit one large snag that should be documented.

The process for upgrading GI to 12.2 includes the following steps:

  • Create /u01/app/12.2.0.1/grid directory on all nodes in the cluster
  • Unzip Grid Infrastructure software bundle to /u01/app/12.2.0.1/grid directory
  • Run prerequisite checks for ASM memory, CVU, etc.
  • Run gridSetup.sh via VNC or X-Windows
  • *Before* running rootupgrade.sh on each node, install the latest patches
  • Run rootupgrade.sh on local node
  • Run rootupgrade.sh on each remaining node sequentially
  • Click "OK" in the gridSetup.sh window to finish running upgrade process

Things were going along great until I ran rootupgrade.sh on the first node.  The rootupgrade.sh script has come a long way since the first time I ran it; it now includes numbered tasks and much cleaner output than previous versions.  I saw the following output while running rootupgrade.sh, which appeared to be stuck at the last line about starting High Availability Services:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/enkdb03/crsconfig/rootcrs_enkdb03_2017-03-29_09-02-51PM.log
2017/03/29 21:02:56 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/03/29 21:02:57 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:04:00 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:04:00 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/03/29 21:04:10 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/03/29 21:04:10 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/03/29 21:04:11 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/03/29 21:04:18 CLSRSC-515: Starting OCR manual backup.
2017/03/29 21:04:24 CLSRSC-516: OCR manual backup successful.
2017/03/29 21:04:36 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2017/03/29 21:04:36 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2017/03/29 21:04:36 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2017/03/29 21:04:37 CLSRSC-615:
 3. The last node to downgrade cannot be a Leaf node.
2017/03/29 21:04:42 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/03/29 21:04:42 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/03/29 21:04:44 CLSRSC-363: User ignored prerequisites during installation
2017/03/29 21:04:52 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/03/29 21:04:58 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/03/29 21:05:09 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/03/29 21:05:13 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2017/03/29 21:05:13 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 12.2.0.1.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2017/03/29 21:05:19 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0.2/grid -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2017/03/29 21:05:22 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2017/03/29 21:05:29 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/03/29 21:06:08 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/03/29 21:06:11 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/03/29 21:06:11 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/03/29 21:06:36 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/03/29 21:06:36 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/03/29 21:06:41 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/03/29 21:06:46 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/03/29 21:07:01 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2017/03/29 21:07:36 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'enkdb03'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'enkdb03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/29 21:08:18 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/03/29 21:08:30 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'enkdb03'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'enkdb03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources

I decided to look a little closer, and peeked inside the alert log for CRS itself (/u01/app/oracle/diag/crs/enkdb03/crs/trace/alert.log) and saw the following messages:

2017-03-29 21:09:09.637 [OCTSSD(96409)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 96409
2017-03-29 21:09:09.653 [OCSSD(96209)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2017-03-29 21:09:10.624 [OCTSSD(96409)]CRS-2403: The Cluster Time Synchronization Service on host enkdb03 is in observer mode.
2017-03-29 21:09:11.728 [OCTSSD(96409)]CRS-2407: The new Cluster Time Synchronization Service reference node is host enkdb04.
2017-03-29 21:09:11.729 [OCTSSD(96409)]CRS-2401: The Cluster Time Synchronization Service started on host enkdb03.
2017-03-29 21:09:23.484 [ORAAGENT(95753)]CRS-5017: The resource action "ora.asm start" encountered the following error:
2017-03-29 21:09:23.484+ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:o^? failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at:
ORA-27303: additional information: }��^?
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/enkdb03/crs/trace/ohasd_oraagent_oracle.trc".
2017-03-29 21:09:24.543 [ORAROOTAGENT(95633)]CRS-5019: All OCR locations are on ASM disk groups [DBFS_DG], and none of these disk groups are mounted. Details are at "(:CLSN00140:)" in "/u01/app/oracle/diag/crs/enkdb03/crs/trace/ohasd_orarootagent_root.trc".

That was interesting: it looked like ASM had failed to start for some reason.  That led me to the ASM alert log, which showed that the instance did in fact crash:

Cluster Communication is configured to use IPs from: CI
IP: 192.168.12.6         Subnet: 192.168.12.0
KSIPC Loopback IP addresses(OSD):
127.0.0.1
KSIPC Available Transports: RC:RDS:UDP:XRC:TCP
KSIPC: Client: KCL       Transport: NONE
KSIPC: Client: DLM       Transport: NONE
KSIPC CAPABILITIES :SKGXP:GRPCLSS:TOPO:DLL
KSXP: ksxpsg_ipclwtrans: 0 NONE
KSXP: setting socket save mode to 0
cluster interconnect IPC version: Oracle UDP/IP (generic)     <-------------------- UDP????
IPC Vendor 1 proto 2
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0
============================================================
NOTE: PatchLevel of this instance 3825832998
============================================================
Dumping list of patches:
============================================================
25556203
============================================================
KSXP: ksxpsg_ipclwtrans: 0 NONE
KSXP: setting socket save mode to 0
2017-03-29T21:09:23.311351-05:00
USER (ospid: 96518): terminating the instance due to error 27504
2017-03-29T21:09:23.481892-05:00
Instance terminated by USER, pid = 96518

Looking at the output, I noticed something strange.  On Exadata, ASM traffic goes over the InfiniBand network and should use the RDS protocol.  The messages above show that ASM was still using UDP, the standard protocol everywhere except Exadata.  Beginning with Oracle 11.2.0.4, the installer checks whether the RDS modules are loaded in the kernel; if they are, it automatically relinks the binaries for RDS.  That no longer appears to be the case with 12.2.  I was able to confirm this easily with the skgxpinfo command:

[root@enkdb03 ~]# /u01/app/12.2.0.1/grid/bin/skgxpinfo
udp

Sure enough, the home was linked for UDP.  The other node in the cluster was still online (linked with RDS), and Oracle does not support databases or ASM running in a mixed mode of UDP and RDS.  This caused ASM to crash on startup, leaving my Clusterware upgrade in a bad state.  What the documentation should in fact have included is:

  • Create /u01/app/12.2.0.1/grid directory on all nodes in the cluster
  • Unzip Grid Infrastructure software bundle to /u01/app/12.2.0.1/grid directory
  • Run prerequisite checks for ASM memory, CVU, etc.
  • Run gridSetup.sh via VNC or X-Windows
  • *Before* running rootupgrade.sh on each node, install the latest patches
  • *Before* running rootupgrade.sh on each node, relink for RDS
  • Run rootupgrade.sh on local node
  • Run rootupgrade.sh on each remaining node sequentially
  • Click "OK" in the gridSetup.sh window to finish running upgrade process
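
The installer's precondition can also be checked by hand before starting.  Here is a minimal sketch (the rds_loaded helper is mine, not an Oracle tool) that looks through lsmod output for an RDS kernel module:

```shell
# rds_loaded reads `lsmod` output on stdin and succeeds only if an rds
# kernel module is present -- the condition the installer tests before
# deciding to relink for RDS.
rds_loaded() {
  awk '$1 ~ /^rds/ { found = 1 } END { exit !found }'
}

# On a live host:
#   lsmod | rds_loaded && echo "RDS modules loaded"
```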

The relink can be done easily with the following command, run as the oracle user:

$ dcli -l oracle -g ~/dbs_group ORACLE_HOME=/u01/app/12.1.0.2/grid \
make -C /u01/app/12.1.0.2/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle

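After relinking, it's worth confirming that every node reports rds before continuing.  A sketch, assuming dcli's usual "host: output" prefix (verify_rds is a hypothetical helper, not part of any Oracle tooling):

```shell
# verify_rds reads dcli-style "host: protocol" lines on stdin and succeeds
# only when every node reports rds; any offending lines are printed out.
verify_rds() {
  ! grep -v ': rds$'
}

# Usage:
#   dcli -l oracle -g ~/dbs_group /u01/app/12.1.0.2/grid/bin/skgxpinfo | verify_rds
```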
If you relink before running rootupgrade.sh, the rest of the upgrade should go smoothly.  Unfortunately, my first node was already in the middle of an upgrade, and rootupgrade.sh had made changes that kept me from simply relinking the home.  One of the actions performed by rootupgrade.sh is to change ownership of the GI home from oracle/grid to root, and this change in ownership caused the relink on this node to fail:

$ ORACLE_HOME=/u01/app/12.2.0.1/grid make -C /u01/app/12.2.0.1/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
make: Entering directory `/u01/app/12.2.0.1/grid/rdbms/lib'
rm -f /u01/app/12.2.0.1/grid/lib/libskgxp12.so
rm: cannot remove `/u01/app/12.2.0.1/grid/lib/libskgxp12.so': Permission denied
make: [ipc_rds] Error 1 (ignored)
cp /u01/app/12.2.0.1/grid/lib//libskgxpr.so /u01/app/12.2.0.1/grid/lib/libskgxp12.so
cp: cannot create regular file `/u01/app/12.2.0.1/grid/lib/libskgxp12.so': Permission denied
make: *** [ipc_rds] Error 1
make: Leaving directory `/u01/app/12.2.0.1/grid/rdbms/lib'

To get around this, I needed to go back and unlock the home.  This can be done via the rootcrs.pl script: passing it the -unlock parameter resets the permissions so the home can be relinked:

[root@enkdb03 cfgtoollogs]# /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl -unlock
Can't locate parent.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /u01/app/12.2.0.1/grid/crs/install /u01/app/12.2.0.1/grid/crs/install/../../perl/lib) at /u01/app/12.2.0.1/grid/crs/install/oraacfs.pm line 170.
BEGIN failed--compilation aborted at /u01/app/12.2.0.1/grid/crs/install/oraacfs.pm line 170.
Compilation failed in require at /u01/app/12.2.0.1/grid/crs/install/crsutils.pm line 764.
BEGIN failed--compilation aborted at /u01/app/12.2.0.1/grid/crs/install/crsutils.pm line 764.
Compilation failed in require at /u01/app/12.2.0.1/grid/crs/install/crsinstall.pm line 290.
BEGIN failed--compilation aborted at /u01/app/12.2.0.1/grid/crs/install/crsinstall.pm line 290.
Compilation failed in require at /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl line 165.
BEGIN failed--compilation aborted at /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl line 165.

Well, that's a new error.  Apparently, rootcrs.pl doesn't like the default version of perl shipped in the Exadata 12.2.1.1.0 image.  Perhaps the copy installed inside the Oracle home is newer.  Sure enough, /usr/bin/perl was version 5.10.1, while /u01/app/12.2.0.1/grid/perl/bin/perl was 5.22.0.  Calling rootcrs.pl with the newer version of perl did the trick:

[root@enkdb03 ~]# /u01/app/12.2.0.1/grid/perl/bin/perl /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl -unlock
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/enkdb03/crsconfig/crsunlock_enkdb03_2017-03-29_09-21-57PM.log
2017/03/29 21:21:58 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:22:07 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:22:07 CLSRSC-347: Successfully unlock /u01/app/12.2.0.1/grid

Now I could relink the Oracle home and check whether it reported RDS:

$ ORACLE_HOME=/u01/app/12.2.0.1/grid make -C /u01/app/12.2.0.1/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
make: Entering directory `/u01/app/12.2.0.1/grid/rdbms/lib'
rm -f /u01/app/12.2.0.1/grid/lib/libskgxp12.so
cp /u01/app/12.2.0.1/grid/lib//libskgxpr.so /u01/app/12.2.0.1/grid/lib/libskgxp12.so
chmod 755 /u01/app/12.2.0.1/grid/bin

 - Linking Oracle
rm -f /u01/app/12.2.0.1/grid/rdbms/lib/oracle
/u01/app/12.2.0.1/grid/bin/orald  -o /u01/app/12.2.0.1/grid/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/12.2.0.1/grid/rdbms/lib/ -L/u01/app/12.2.0.1/grid/lib/ -L/u01/app/12.2.0.1/grid/lib/stubs/   -Wl,-E /u01/app/12.2.0.1/grid/rdbms/lib/opimai.o /u01/app/12.2.0.1/grid/rdbms/lib/ssoraed.o /u01/app/12.2.0.1/grid/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /u01/app/12.2.0.1/grid/lib/nautab.o /u01/app/12.2.0.1/grid/lib/naeet.o /u01/app/12.2.0.1/grid/lib/naect.o /u01/app/12.2.0.1/grid/lib/naedhs.o /u01/app/12.2.0.1/grid/rdbms/lib/config.o  -ldmext -lserver12 -lodm12 -lofs -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /u01/app/12.2.0.1/grid/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12  -lrt -lplp12 -ldmext -lserver12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 `if [ -f /u01/app/12.2.0.1/grid/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /u01/app/12.2.0.1/grid/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12  -lrt -lplp12 -ljavavm12 -lserver12  -lwwg  `cat /u01/app/12.2.0.1/grid/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnro12 `cat /u01/app/12.2.0.1/grid/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lztkg12 `cat /u01/app/12.2.0.1/grid/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnro12 `cat /u01/app/12.2.0.1/grid/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lngsmshd12 -lnnzst12 -lzt12 -lztkg12   -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 
-lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /u01/app/12.2.0.1/grid/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo12 -lserver12"; fi` -L/u01/app/12.2.0.1/grid/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -lgeneric12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -lgeneric12  -lgeneric12 -lorazip -loraz -llzopro5 -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12  -laio -lons  -lfthread12   `cat /u01/app/12.2.0.1/grid/lib/sysliblist` -Wl,-rpath,/u01/app/12.2.0.1/grid/lib -lm    `cat /u01/app/12.2.0.1/grid/lib/sysliblist` -ldl -lm   -L/u01/app/12.2.0.1/grid/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/12.2.0.1/grid/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs`
test ! -f /u01/app/12.2.0.1/grid/bin/oracle || (\
	   mv -f /u01/app/12.2.0.1/grid/bin/oracle /u01/app/12.2.0.1/grid/bin/oracleO &&\
	   chmod 600 /u01/app/12.2.0.1/grid/bin/oracleO )
mv /u01/app/12.2.0.1/grid/rdbms/lib/oracle /u01/app/12.2.0.1/grid/bin/oracle
chmod 6751 /u01/app/12.2.0.1/grid/bin/oracle
make: Leaving directory `/u01/app/12.2.0.1/grid/rdbms/lib'

$ /u01/app/12.2.0.1/grid/bin/skgxpinfo
rds

Success!  One of the really nice things about newer versions of rootupgrade.sh is that the script can be rerun without having to roll anything back.  I figured I might as well give it a try, and sure enough... it worked!

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/enkdb03/crsconfig/rootcrs_enkdb03_2017-03-29_09-23-06PM.log
2017/03/29 21:23:08 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/03/29 21:23:08 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:23:08 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:23:38 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:23:49 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/03/29 21:23:49 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/03/29 21:23:52 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/03/29 21:23:53 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/03/29 21:23:53 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/03/29 21:24:02 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/03/29 21:24:02 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/03/29 21:24:02 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/03/29 21:24:04 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/03/29 21:24:04 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/03/29 21:24:05 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/03/29 21:24:05 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/03/29 21:24:06 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/03/29 21:24:52 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/03/29 21:24:55 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/03/29 21:24:56 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/03/29 21:25:11 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2017/03/29 21:25:23 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
2017/03/29 21:25:24 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/03/29 21:25:25 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'enkdb03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'enkdb03'
CRS-2677: Stop of 'ora.drivers.acfs' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.evmd' on 'enkdb03'
CRS-2677: Stop of 'ora.evmd' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'enkdb03'
CRS-2677: Stop of 'ora.diskmon' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'enkdb03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'enkdb03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'enkdb03'
CRS-2672: Attempting to start 'ora.evmd' on 'enkdb03'
CRS-2676: Start of 'ora.mdnsd' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.evmd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'enkdb03'
CRS-2676: Start of 'ora.gpnpd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'enkdb03'
CRS-2676: Start of 'ora.gipcd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'enkdb03'
CRS-2676: Start of 'ora.cssdmonitor' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'enkdb03'
CRS-2672: Attempting to start 'ora.diskmon' on 'enkdb03'
CRS-2676: Start of 'ora.diskmon' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.cssd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'enkdb03'
CRS-2672: Attempting to start 'ora.ctssd' on 'enkdb03'
CRS-2676: Start of 'ora.ctssd' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'enkdb03'
CRS-2676: Start of 'ora.asm' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'enkdb03'
CRS-2676: Start of 'ora.storage' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'enkdb03'
CRS-2676: Start of 'ora.crf' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'enkdb03'
CRS-2676: Start of 'ora.crsd' on 'enkdb03' succeeded
CRS-6017: Processing resource auto-start for servers: enkdb03
CRS-6016: Resource auto-start has completed for server enkdb03
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'enkdb03'
CRS-2673: Attempting to stop 'ora.crsd' on 'enkdb03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'enkdb03'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'enkdb03'
CRS-2677: Stop of 'ora.DATA.dg' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.DBFS_DG.dg' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.RECO.dg' on 'enkdb03'
CRS-2677: Stop of 'ora.RECO.dg' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.DBFS_DG.dg' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'enkdb03'
CRS-2677: Stop of 'ora.asm' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'enkdb03'
CRS-2677: Stop of 'ora.net1.network' on 'enkdb03' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'enkdb03' has completed
CRS-2677: Stop of 'ora.crsd' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.crf' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'enkdb03'
CRS-2677: Stop of 'ora.drivers.acfs' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.crf' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.asm' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'enkdb03'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.evmd' on 'enkdb03'
CRS-2677: Stop of 'ora.ctssd' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'enkdb03'
CRS-2677: Stop of 'ora.cssd' on 'enkdb03' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.gipcd' on 'enkdb03'
CRS-2677: Stop of 'ora.gipcd' on 'enkdb03' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'enkdb03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'enkdb03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'enkdb03'
CRS-2672: Attempting to start 'ora.evmd' on 'enkdb03'
CRS-2676: Start of 'ora.mdnsd' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.evmd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'enkdb03'
CRS-2676: Start of 'ora.gpnpd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'enkdb03'
CRS-2676: Start of 'ora.gipcd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'enkdb03'
CRS-2676: Start of 'ora.cssdmonitor' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'enkdb03'
CRS-2672: Attempting to start 'ora.diskmon' on 'enkdb03'
CRS-2676: Start of 'ora.diskmon' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.cssd' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'enkdb03'
CRS-2672: Attempting to start 'ora.ctssd' on 'enkdb03'
CRS-2676: Start of 'ora.ctssd' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'enkdb03'
CRS-2676: Start of 'ora.asm' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'enkdb03'
CRS-2676: Start of 'ora.storage' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'enkdb03'
CRS-2676: Start of 'ora.crf' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'enkdb03'
CRS-2676: Start of 'ora.crsd' on 'enkdb03' succeeded
CRS-6017: Processing resource auto-start for servers: enkdb03
CRS-2672: Attempting to start 'ora.ons' on 'enkdb03'
CRS-2673: Attempting to stop 'ora.enkdb03.vip' on 'enkdb04'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'enkdb04'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'enkdb04' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'enkdb04'
CRS-2677: Stop of 'ora.enkdb03.vip' on 'enkdb04' succeeded
CRS-2672: Attempting to start 'ora.enkdb03.vip' on 'enkdb03'
CRS-2677: Stop of 'ora.scan1.vip' on 'enkdb04' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'enkdb03'
CRS-2676: Start of 'ora.enkdb03.vip' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'enkdb03'
CRS-2676: Start of 'ora.scan1.vip' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'enkdb03'
CRS-2676: Start of 'ora.ons' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.mbach.db' on 'enkdb03'
CRS-2672: Attempting to start 'ora.dbm01.db' on 'enkdb03'
CRS-2672: Attempting to start 'ora.randy.db' on 'enkdb03'
CRS-2676: Start of 'ora.mbach.db' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.mbach.actest_both.svc' on 'enkdb03'
CRS-2672: Attempting to start 'ora.mbach.both_instances.svc' on 'enkdb03'
CRS-2672: Attempting to start 'ora.mbach.fcf_srv.svc' on 'enkdb03'
CRS-2676: Start of 'ora.mbach.fcf_srv.svc' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.mbach.actest_both.svc' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.mbach.both_instances.svc' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.randy.db' on 'enkdb03' succeeded
CRS-2679: Attempting to clean 'ora.bdt.db' on 'enkdb03'
CRS-2672: Attempting to start 'ora.maurrro.db' on 'enkdb03'
CRS-2672: Attempting to start 'ora.dbm01.db' on 'enkdb03'
CRS-2681: Clean of 'ora.bdt.db' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.bdt.db' on 'enkdb03'
CRS-5017: The resource action "ora.maurrro.db start" encountered the following error:
ORA-27106: system pages not available to allocate memory
Additional information: 5846
Additional information: 1
Additional information: 3
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/enkdb03/crs/trace/crsd_oraagent_oracle.trc".
CRS-2674: Start of 'ora.maurrro.db' on 'enkdb03' failed
CRS-2679: Attempting to clean 'ora.maurrro.db' on 'enkdb03'
CRS-2681: Clean of 'ora.maurrro.db' on 'enkdb03' succeeded
CRS-2676: Start of 'ora.dbm01.db' on 'enkdb03' succeeded
CRS-2672: Attempting to start 'ora.dbm01.demosrv.svc' on 'enkdb03'
CRS-2676: Start of 'ora.dbm01.demosrv.svc' on 'enkdb03' succeeded
===== Summary of resource auto-start failures follows =====
CRS-2807: Resource 'ora.maurrro.db' failed to start automatically.
CRS-6016: Resource auto-start has completed for server enkdb03
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/29 21:34:30 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/03/29 21:34:30 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2017/03/29 21:34:34 CLSRSC-474: Initiating upgrade of resource types
2017/03/29 21:35:46 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/03/29 21:35:46 CLSRSC-475: Upgrade of resource types successfully initiated.
2017/03/29 21:37:47 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/03/29 21:37:57 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Overall, I'm not entirely sure why the Oracle binaries aren't linked for RDS on Exadata anymore.  The 12.1.0.2 installer still does this; I ran one of those upgrades a few weeks ago and didn't have to relink.  It's most likely just an oversight on Oracle's part, and it should be an easy fix.  In the meantime, there's no harm in performing the relink manually just in case.  As always, expect new versions of Oracle to have a few bugs to work out when testing!