SCAN Listener Crash After Applying July 2016 PSU

August 15, 2016

I ran into a small issue while applying the July 2016 quarterly patch to a couple of Exadata racks last week. The systems were running GI 12.1.0.2, previously patched with the January 2016 PSU. The patches applied successfully, and we began running the post-patch scripts against the databases in the cluster. This process involves manually starting each database in upgrade mode, and SQL*Plus reported that the database could not start due to an invalid remote_listener parameter. I verified that none of the parameters had changed, then noticed that all of the SCAN listeners on the cluster were down:

[oracle@exadb02 ~]$ ps -ef | grep lsnr
oracle    99204  92012  0 01:46 pts/1    00:00:00 grep lsnr
oracle   352240      1  0 01:12 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracle   352497      1  0 01:12 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle   352503      1  0 01:12 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LISTENER_HS -no_crs_notify -inherit
 
[oracle@exadb02 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node exadb02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node exadb02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node exadb02
 
[oracle@exadb02 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is not running
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is not running

I attempted to start the SCAN listeners, but received errors indicating that the agent was unable to start them on either node:

[oracle@exadb02 ~]$ srvctl start scan_listener
PRCR-1079 : Failed to start resource ora.LISTENER_SCAN1.lsnr
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
CRS-2674: Start of 'ora.LISTENER_SCAN1.lsnr' on 'exadb02' failed
CRS-2632: There are no more servers to try to place resource 'ora.LISTENER_SCAN1.lsnr' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.LISTENER_SCAN2.lsnr
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
CRS-2674: Start of 'ora.LISTENER_SCAN2.lsnr' on 'exadb02' failed
CRS-2632: There are no more servers to try to place resource 'ora.LISTENER_SCAN2.lsnr' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.LISTENER_SCAN3.lsnr
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
CRS-2674: Start of 'ora.LISTENER_SCAN3.lsnr' on 'exadb02' failed
CRS-2632: There are no more servers to try to place resource 'ora.LISTENER_SCAN3.lsnr' on that would satisfy its placement policy

The agent trace file was pretty chatty, so I ran a tail -f on it and tried to start the listeners again. This time, I saw what looked like an error message from the listener trying to start:

2016-08-06 01:45:12.094827 :CLSDYNAM:4156544768: [ora.LISTENER_SCAN3.lsnr]{2:13144:3172} [start] Calling Lsnrctl start {
 
2016-08-06 01:45:13.798299 :CLSDYNAM:4156544768: [ora.LISTENER_SCAN3.lsnr]{2:13144:3172} [start] (:CLSN00010:)Utils:execCmd scls_process_join() uret 1
 
2016-08-06 01:45:13.798525 :CLSDYNAM:4156544768: [ora.LISTENER_SCAN3.lsnr]{2:13144:3172} [start] clsnUtils::error Exception type=2 string=
CRS-5016: Process "/u01/app/12.1.0.2/grid/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/exadb02/crs/trace/crsd_oraagent_oracle.trc"
 
2016-08-06 01:45:13.798724 :CLSDYNAM:4156544768: [ora.LISTENER_SCAN3.lsnr]{2:13144:3172} [start] Lsnrctl::start lsnrname:LISTENER_SCAN3 output:
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 06-AUG-2016 01:45:12
 
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 
Starting /u01/app/12.1.0.2/grid/bin/tnslsnr: please wait...
 
TNS-12537: TNS:connection closed
 TNS-12560: TNS:protocol adapter error
  TNS-00507: Connection closed
   Linux Error: 29: Illegal seek

I hadn’t seen an “Illegal seek” error from the listener before, and a quick search turned up MOS note #2166451.1 – “SCAN Listener or local listener fails to start after applying Patch 23273629 – Oracle Grid Infrastructure Patch Set Update 12.1.0.2.160719 (Jul2016).” There is a known issue with this patch that causes this behavior. What’s interesting is that it only seems to affect environments that were upgraded from 11.2 to 12.1, which is consistent with what I saw: I hit the issue on two systems that had previously been upgraded, and didn’t see it on three racks that started out on 12.1.0.2 GI. Thankfully, there is already a patch available (20734332), or a workaround can be applied by adding the following entries to the end of the listener.ora on each node:

CRS_NOTIFICATION_LISTENER=OFF
CRS_NOTIFICATION_LISTENER_SCAN1=OFF
CRS_NOTIFICATION_LISTENER_SCAN2=OFF
CRS_NOTIFICATION_LISTENER_SCAN3=OFF
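A minimal sketch of scripting the workaround is below. The function name and the grid-home path in the comment are my own assumptions (on this cluster the file would be under the 12.1.0.2 grid home); the check simply avoids appending the entries twice if the script is rerun:

```shell
#!/bin/sh
# Hypothetical helper: append the CRS notification workaround entries to a
# listener.ora file, unless the first entry is already present.
append_workaround() {
  f="$1"
  # Skip the edit if the workaround was already applied to this file.
  grep -q '^CRS_NOTIFICATION_LISTENER=OFF' "$f" 2>/dev/null && return 0
  cat >> "$f" <<'EOF'
CRS_NOTIFICATION_LISTENER=OFF
CRS_NOTIFICATION_LISTENER_SCAN1=OFF
CRS_NOTIFICATION_LISTENER_SCAN2=OFF
CRS_NOTIFICATION_LISTENER_SCAN3=OFF
EOF
}

# On each node, this would be run against the grid home's listener.ora, e.g.:
#   append_workaround /u01/app/12.1.0.2/grid/network/admin/listener.ora
```

Run it on every node in the cluster (the entries go in each node's listener.ora), then start the SCAN listeners with srvctl as shown below.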

In my environment, we had already begun checking out the databases and didn’t want to bring down GI on the cluster again for a patch, so we made the change to the listener.ora file on all nodes and were immediately able to start the listeners:

[oracle@exadb02 ~]$ srvctl start scan_listener
[oracle@exadb02 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node exadb02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node exadb01
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node exadb02


6 thoughts on “SCAN Listener Crash After Applying July 2016 PSU”

  1. Nariman Narimanov

    Hi,

    Why did you apply Patch 23273629?
    You should have applied “Patch 23273686: DATABASE PROACTIVE BUNDLE PATCH 12.1.0.2.160719 (JUL2016)”

    Nariman

  2. acolvin Post author

    Nariman,

    This was on an Exadata rack, so we applied Patch #23273629: GRID INFRASTRUCTURE PATCH SET UPDATE 12.1.0.2.160719 (JUL2016). This PSU includes the database PSU that you mentioned, plus additional fixes for Grid Infrastructure. If you have GI installed, the recommendation is to install the GI PSU.

    Thanks,

    Andy

  3. San

    Thanks Acolvin !!
    We faced similar issue with 12.1.0.2.160816 (August2016). Followed this article and able to start the scan listeners.

    Regards,
    San

  4. noxenco

    Hello Acolvin,
    Great ! Your explanation was very good. We ran in the same problem on exadata. Which the same Patch Set Update 12.1.0.2.160719 (Jul2016). The workaround works. Thank you for publishing.

  5. Rodrigo Paczek

    Thank you for your post my friend.
    It was very helpful after applying the patch JUL2016.

