When upgrading databases, my preferred method of late has been the autoupgrade tool. Autoupgrade gives DBAs the ability to upgrade databases in batches, automatically performing prechecks, postchecks, and custom database tasks during the upgrade process.
Autoupgrade can be downloaded from MOS note #2485457.1, and tests shown here used the most current version at the time of writing, version 19.10.0.
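For context, autoupgrade is driven by a small config file and invoked with a java command in one of several modes (analyze for prechecks only, deploy for the full upgrade). The sketch below is roughly what that looks like; the Oracle home paths and the instance SID are placeholders I've assumed for illustration, so substitute your own:

# acup1211.cfg -- minimal autoupgrade config (homes and SID below are assumed for illustration)
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.dbname=acup1211
upg1.sid=acup12111
upg1.source_home=/u01/app/oracle/product/12.1.0.2/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
upg1.start_time=NOW

# run prechecks first, then the actual upgrade (using the JDK shipped with the target home)
[oracle@enkx4db01 ~]$ $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config acup1211.cfg -mode analyze
[oracle@enkx4db01 ~]$ $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config acup1211.cfg -mode deploy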
I was going through an upgrade of a RAC database from 12.1.0.2 to 19.6.0.0.200114 and hit a strange issue - the upgrade had completed all prechecks, then failed with the following message:
upg> -------------------------------------------------
Errors in database [acup1211]
Stage [DBUPGRADE]
Operation [STOPPED]
Status [ERROR]
Info [
Error: UPG-1401
Opening Database acup121 in upgrade mode failed
Cause: Opening database for upgrade in the target home failed
For further details, see the log file located at /u01/app/oracle/autoupgrade/acup1211/acup1211/101/autoupgrade_20201112_user.log]
-------------------------------------------------
Logs: [/u01/app/oracle/autoupgrade/acup1211/acup1211/101/autoupgrade_20201112_user.log]
-------------------------------------------------
As we can see from the error, the database attempted to start in upgrade mode but failed to open. I checked the database alert log and didn't see any errors of note. From there, I went to the log that the output told me to check (a novel concept, I know), and the error was there, clear as day:
2020-11-12 15:11:31.123 ERROR
DATABASE NAME: acup1211
         CAUSE: ERROR at Line 1 in [Buffer]
         REASON: LRM-00121: '12.1.0.2.1' is not an allowable value for 'optimizer_features_enable'
         ACTION: [MANUAL]
         DETAILS: 121, 0, "'%.*s' is not an allowable value for '%.*s'"
// *Cause: The value is not a legal value for this parameter.
// *Action: Refer to the manual for allowable values.
It seems as though my problem was due to the optimizer_features_enable parameter being set to 12.1.0.2.1. This is a valid value in 12.1.0.2, but apparently isn't accepted in 19c (checking from a different database):
[oracle@enkx4db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 13 09:42:52 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

SQL> alter system set optimizer_features_enable='12.1.0.2.1' sid='*' scope=spfile;
alter system set optimizer_features_enable='12.1.0.2.1' sid='*' scope=spfile
*
ERROR at line 1:
ORA-00096: invalid value 12.1.0.2.1 for parameter optimizer_features_enable,
must be from among 19.1.0.1, 19.1.0, 18.1.0, 12.2.0.1, 12.1.0.2, 12.1.0.1,
11.2.0.4, 11.2.0.3, 11.2.0.2, 11.2.0.1, 11.1.0.7, 11.1.0.6, 10.2.0.5, 10.2.0.4,
10.2.0.3, 10.2.0.2, 10.2.0.1, 10.1.0.5, 10.1.0.4, 10.1.0.3, 10.1.0, 9.2.0.8,
9.2.0, 9.0.1, 9.0.0, 8.1.7, 8.1.6, 8.1.5, 8.1.4, 8.1.3, 8.1.0, 8.0.7, 8.0.6,
8.0.5, 8.0.4, 8.0.3, 8.0.0
I went back to the database that I was upgrading, and tried to start it up and change the setting manually.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 2.5770E+10 bytes
Fixed Size                  6870952 bytes
Variable Size            3690989656 bytes
Database Buffers         2.1877E+10 bytes
Redo Buffers              194453504 bytes
Database mounted.
SQL> show parameter optimizer_features_enable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      12.1.0.2.1

SQL> alter system set optimizer_features_enable='12.1.0.2' sid='*' scope=spfile;

System altered.

SQL> select name, value from v$spparameter where name='optimizer_features_enable';

NAME                      VALUE
------------------------- ---------------
optimizer_features_enable 12.1.0.2

SQL> shutdown immediate;
Once the database was shut down again, I attempted to restart the job in the autoupgrade console. Sure enough, it failed again with the same error:
upg> resume -job 101
Resuming job: [101][acup1211]

upg> -------------------------------------------------
Errors in database [acup1211]
Stage [DBUPGRADE]
Operation [STOPPED]
Status [ERROR]
Info [
Error: UPG-1401
Opening Database acup121 in upgrade mode failed
Cause: Opening database for upgrade in the target home failed
For further details, see the log file located at /u01/app/oracle/autoupgrade/acup1211/acup1211/101/autoupgrade_20201112_user.log]
-------------------------------------------------
Logs: [/u01/app/oracle/autoupgrade/acup1211/acup1211/101/autoupgrade_20201112_user.log]
-------------------------------------------------
I double-checked the log and saw that it was still failing with the same error. If I start the instance in mount mode and check the parameter value, I can see that optimizer_features_enable is now set to 12.1.0.2. So why does the upgrade claim that the instance is still trying to start with a value of 12.1.0.2.1?
It turns out that autoupgrade uses a temporary pfile to start the instance in upgrade mode, rather than the spfile I had just modified. The instance is started from the 19.6.0.0 home with that generated pfile, and because the optimizer_features_enable value in it fails parameter validation, the instance never gets far enough to write anything to the alert log.
There are actually 3 pfiles located in <autoupg_log_dir>/<sid>/<sid>/temp:
[oracle@enkx4db01 ~]$ cd /u01/app/oracle/autoupgrade/acup1211/acup1211/temp
[oracle@enkx4db01 temp]$ ls -al *.ora
total 24
drwx------ 2 oracle oinstall 4096 Nov 12 15:02 .
drwx------ 5 oracle oinstall 4096 Nov 12 15:02 ..
-rwx------ 1 oracle oinstall 2265 Nov 12 15:02 after_upgrade_pfile_acup1211.ora
-rwx------ 1 oracle oinstall 1942 Nov 12 14:29 before_upgrade_pfile_acup1211.ora
-rwx------ 1 oracle oinstall 1946 Nov 12 15:05 during_upgrade_pfile_acup1211.ora
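A quick grep of the generated pfiles shows where the stale value is hiding (the exact lines below are approximated, but this is the gist of it):

[oracle@enkx4db01 temp]$ grep optimizer_features_enable during_upgrade_pfile_acup1211.ora after_upgrade_pfile_acup1211.ora
during_upgrade_pfile_acup1211.ora:*.optimizer_features_enable='12.1.0.2.1'
after_upgrade_pfile_acup1211.ora:*.optimizer_features_enable='12.1.0.2.1'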
If I modify the optimizer_features_enable setting to 12.1.0.2 in during_upgrade_pfile_acup1211.ora, I can restart the upgrade. I also needed to modify after_upgrade_pfile_acup1211.ora - leaving the invalid setting there causes a failure in the postupgrade phase, when the final spfile is created.
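For reference, the edit itself is a one-line substitution in each file (sed is just one way to do it; editing by hand works equally well), followed by resuming the job from the autoupgrade console:

[oracle@enkx4db01 temp]$ sed -i "s/optimizer_features_enable='12.1.0.2.1'/optimizer_features_enable='12.1.0.2'/" \
      during_upgrade_pfile_acup1211.ora after_upgrade_pfile_acup1211.ora

upg> resume -job 101
Resuming job: [101][acup1211]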
While not ideal, autoupgrade still makes the process fairly painless, since it lets DBAs resume the upgrade from the point where the job failed. I ran into this issue on a separate cluster where the bad parameter value was set on only one of five databases; the other four upgraded without being affected by the one having problems.