dbnodeupdate.sh on Exadata Compute Nodes

By Andy Colvin | May 27, 2013

Rene Kundersma at Oracle just published a nifty new utility named dbnodeupdate.sh that will assist with the sometimes-cumbersome process of updating the compute nodes in an Exadata environment.  Starting last year with 11.2.3.1.0, Oracle introduced yum updates for the Exadata compute nodes.  Previously, each Exadata storage server patch came with a “minimal” or “convenience” pack that included a shell script that forced a bunch of new RPMs onto the compute node.  While that worked on vanilla installations, if admins had installed many of their own packages, the update could fail, leaving them stuck in RPM dependency hell.

Introducing yum into the picture helped significantly, but it also brought some challenges.  First, users had to either run a manual set of steps to configure yum or trust Oracle’s bootstrap scripts, which had some issues early on.  After that, you had to decide whether to update directly from Oracle’s Unbreakable Linux Network (not likely), from a local yum repository, or directly from the ISOs linked in MOS note #888828.1 (my preference).  The nice thing about dbnodeupdate.sh is that it can do many of these tasks – and more.
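For reference, here is a rough sketch of what a local repo definition can look like once yum is configured.  The section name and baseurl below are illustrative assumptions patterned after the Exadata-computenode.repo file that dbnodeupdate.sh generates (the script prints the real baseurl in its summary); don’t treat this as the exact file Oracle ships.

```shell
# Illustrative only -- dbnodeupdate.sh generates the real file as
# /etc/yum.repos.d/Exadata-computenode.repo. The section name and baseurl
# here are assumptions modeled on the script's summary output.
repo_file="Exadata-computenode.repo"
cat > "$repo_file" <<'EOF'
[exadata_dbserver_11.2_x86_64_latest]
name=Oracle Exadata database server 11.2 (x86_64) latest
baseurl=file:///var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest/x86_64/
gpgcheck=0
enabled=1
EOF

# Show the channel the node would pull updates from
grep '^baseurl=' "$repo_file"
```

On a real node you would drop this under /etc/yum.repos.d/ and point the baseurl at your staged ISO channel or local repository.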

The support note for dbnodeupdate.sh (#1553103.1) shows several examples of what it can do.  It can run the bootstrap phase to update you directly from version 11.2.2.4.2 to 11.2.3.2.1 (the latest version as of now, covering more than a year of patches) with only 2 reboots and 3 commands.  It takes advantage of free space within the compute node volume groups to take a backup of the root volume, providing an easy rollback method similar to that of the storage servers.  It also disables CRS on reboot and relinks your Oracle homes when you’re done; once the relinking finishes, it re-enables CRS on startup.
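The backup depends on VGExaDb having enough unallocated space to hold a copy of the root LV.  A minimal sketch of that kind of pre-check is below – the helper function and the sample sizes are my own illustration, not dbnodeupdate.sh’s actual logic:

```shell
# Hypothetical pre-check: does the volume group have enough free space to
# hold a backup copy of the root LV? On a live node, the two inputs would
# come from something like:
#   lvs --noheadings -o lv_size --units g /dev/VGExaDb/LVDbSys1
#   vgs --noheadings -o vg_free --units g VGExaDb
can_backup_root() {
  local root_lv_gb=$1 vg_free_gb=$2
  [ "$vg_free_gb" -ge "$root_lv_gb" ]
}

# Sample numbers for illustration: a 30GB root LV, 60GB free in the VG
if can_backup_root 30 60; then
  echo "enough free space in VGExaDb for a root backup"
fi
```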

Anyway, on with the demos!  I first tested this by updating a compute node from the original release of 11.2.3.2.1 (11.2.3.2.1.130109) to the one-off release, which contains patches for the NFS bug and a few other things.  To update the node, I just had to download the latest version of the 11.2.3.2.1 ISO file (patch #16432033) and run the dbnodeupdate.sh script:

[root@enkx3db01 patch_11.2.3.2.1]# ./dbnodeupdate.sh -u -l /u01/stage/patches/patch_11.2.3.2.1/p16432033_112321_Linux-x86-64.zip -s
  (*) 2013-05-27 20:50:03: Collecting system configuration details...
  (*) 2013-05-27 20:50:05: Checking free space in /u01
  (*) 2013-05-27 20:50:05: Unzipping /u01/stage/patches/patch_11.2.3.2.1/p16432033_112321_Linux-x86-64.zip to /u01/app/oracle/stage.824, this may take a while
 
Active Image version   : 11.2.3.2.1.130109
Active Kernel version  : 2.6.32-400.11.1.el5uek
Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : n/a
Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
Current user id        : root
Action                 : upgrade
Upgrading to           : 11.2.3.2.1.130302
Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.824/x86_64/ (iso)
Iso file               : /u01/app/oracle/stage.824/112_latest_repo_130302.iso
Create a backup        : Yes
Shutdown stack         : Yes (Currently stack is down)
Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 270513205002)
Diagfile               : /var/log/cellos/dbnodeupdate.270513205002.diag
Server model           : SUN FIRE X4170 M3
dbnodeupdate.sh rel.   : 1.35 (always check MOS 1553103.1 for the latest release)
Automatic checks incl. : Issue 1.8 - Hotspare not reclaimed
                       : Issue 1.10 - Cell and Database image versions 11.2.2.2.2 or lower require workaround before patching
                       : Database servers with an ofa rpm earlier than 1.5.1-4.0.28 can encounter a file system corruption
                       : Issue 1.14 - Upgrade to 11.2.3.x failed due to Sas Exp. FW not upgrd. first to 5.7.0 on X4800 and X4800 M2
                       : Issue 1.15 - Filesystem checks not disabled on database servers
                       : Issue 1.16 - Verify the vm.min_free_kbytes kernel parameter on database servers to make sure 512MB is reserved
                       : Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Manual checks todo     : Issue 1.11 - Database Server upgrades to 11.2.2.3.0 or higher may hit network routing issues after the upgrade
 
Note                   : After upgrading and rebooting run 'dbnodeupdate.sh -c' to finish post steps
 
 
Continue ? [Y/n]
y
  (*) 2013-05-27 20:50:25: Verifying GI and DB's are shutdown
  (*) 2013-05-27 20:50:27: GI and DB already shutdown
  (*) 2013-05-27 20:50:27: Collecting console history for diag purposes
  (*) 2013-05-27 20:51:03: Performing backup to /dev/mapper/VGExaDb-LVDbSys2
  (*) 2013-05-27 20:54:59: Backup successful
  (*) 2013-05-27 20:55:00: Verifying and updating yum.conf (backup in /etc/yum.conf.270513_205002)
  (*) 2013-05-27 20:55:00: Disabling other repositories, generating Exadata repos
  (*) 2013-05-27 20:55:00: Generating /etc/yum.repos.d/Exadata-computenode.repo
  (*) 2013-05-27 20:55:00: Verifying baseurl
  (*) 2013-05-27 20:55:01: Disabling stack from starting
  (*) 2013-05-27 20:55:02: OSWatcher stopped successful
  (*) 2013-05-27 20:55:02: Emptying the yum cache
  (*) 2013-05-27 20:55:02: Removing rpm libcxgb3-static.x86_64 (if installed)
  (*) 2013-05-27 20:55:02: Removing rpm rpm-build.x86_64 (if installed)
  (*) 2013-05-27 20:55:02: Performing yum update. Node is expected to reboot when finished
  (*) 2013-05-27 20:56:35: All above steps finished.
  (*) 2013-05-27 20:56:35: system will reboot automatically for changes to take effect
  (*) 2013-05-27 20:56:35: After reboot run "./dbnodeupdate.sh -c" to complete the upgrade
[root@enkx3db01 patch_11.2.3.2.1]#
Remote broadcast message (Mon May 27 20:56:43 2013):
 
Exadata post install steps started.
It may take up to 2 minutes.
The db node will be rebooted upon successful completion.
 
Remote broadcast message (Mon May 27 20:56:54 2013):
 
Exadata post install steps completed.
Initiate reboot in 10 seconds to apply the changes.
 
Broadcast message from root (Mon May 27 20:57:04 2013):
 
The system is going down for reboot NOW!

When the node finished rebooting, I ran dbnodeupdate.sh with the -c switch to force it to relink all homes for RDS:

[root@enkx3db01 patch_11.2.3.2.1]# ./dbnodeupdate.sh -c
  (*) 2013-05-27 21:04:55: Collecting system configuration details...
 
Active Image version   : 11.2.3.2.1.130302
Active Kernel version  : 2.6.32-400.21.1.el5uek
Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 11.2.3.2.1.130109
Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
Current user id        : root
Action                 : finish-post (perform post steps, relink enable/disable crs)
Relinking for release  : 11.2.3.2.1.130302
Shutdown stack         : No (Currently stack is down)
Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 270513210454)
Diagfile               : /var/log/cellos/dbnodeupdate.270513210454.diag
Server model           : SUN FIRE X4170 M3
Remote mounts exist    : Yes (dbnodeupdate.sh will try unmounting)
dbnodeupdate.sh rel.   : 1.35 (always check MOS 1553103.1 for the latest release)
Automatic checks incl. : Issue 1.8 - Hotspare not reclaimed
                       : Issue 1.10 - Cell and Database image versions 11.2.2.2.2 or lower require workaround before patching
                       : Database servers with an ofa rpm earlier than 1.5.1-4.0.28 can encounter a file system corruption
                       : Issue 1.14 - Upgrade to 11.2.3.x failed due to Sas Exp. FW not upgrd. first to 5.7.0 on X4800 and X4800 M2
                       : Issue 1.15 - Filesystem checks not disabled on database servers
                       : Issue 1.16 - Verify the vm.min_free_kbytes kernel parameter on database servers to make sure 512MB is reserved
                       : Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Manual checks todo     : Issue 1.11 - Database Server upgrades to 11.2.2.3.0 or higher may hit network routing issues after the upgrade
 
Continue ? [Y/n]
y
  (*) 2013-05-27 21:05:10: Verifying GI and DB's are shutdown
  (*) 2013-05-27 21:05:12: Collecting console history for diag purposes
  (*) 2013-05-27 21:05:53: No rpms to remove
  (*) 2013-05-27 21:05:54: Relinking all homes
  (*) 2013-05-27 21:05:54: Unlocking /u01/app/11.2.0.3/grid
  (*) 2013-05-27 21:06:00: Relinking /u01/app/11.2.0.3/grid as oracle
  (*) 2013-05-27 21:06:12: Relinking /u01/app/oracle/product/11.2.0.3/dbhome_1 as oracle
  (*) 2013-05-27 21:06:27: Executing /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -patch
  (*) 2013-05-27 21:07:27: Stack started
  (*) 2013-05-27 21:07:27: Enabling stack to start at reboot
  (*) 2013-05-27 21:07:28: Filesystem max mount count is not configured according to best practices. Correcting setting now.
  (*) 2013-05-27 21:07:28: Filesystem check interval is not configured according to best practices. Correcting setting now.
  (*) 2013-05-27 21:07:28: Kernel parameter vm.min_free_kbytes is not set to the recommended minimum value. Correcting setting now
  (*) 2013-05-27 21:07:29: All above steps finished.

Next, I tried something a little more difficult – updating from 11.2.2.4.2 to 11.2.3.2.1 in one shot.  Previously, I’d had problems with this, so it was helpful that dbnodeupdate.sh would give me a backup automatically in case anything went wrong.  I simply downloaded the 11.2.3.2.1 ISO file and the dbupdate-helpers scripts (attached to the dbnodeupdate.sh MOS note).  First, run dbnodeupdate.sh with the -u (update), -l (ISO location), -p (bootstrap phase), and -x (helper scripts) options, along with -s (shut down CRS on the node).

[root@dm03db04 yum]# ./dbnodeupdate.sh -u -l /u01/stage/patches/patch_11.2.3.2.1.130109/yum/p16432033_112321_Linux-x86-64.zip -p 1 -x /u01/stage/patches/patch_11.2.3.2.1.130109/yum/dbupdate-helpers.zip -s
  (*) 2013-05-27 10:44:21: Collecting system configuration details...
  (*) 2013-05-27 10:44:22: Unzipping helpers in /u01/stage/patches/patch_11.2.3.2.1.130109/yum/dbupdate-helpers.zip to /opt/oracle.SupportTools, this may take a while
  (*) 2013-05-27 10:44:23: Checking free space in /u01
  (*) 2013-05-27 10:44:23: Unzipping /u01/stage/patches/patch_11.2.3.2.1.130109/yum/p16432033_112321_Linux-x86-64.zip to /u01/app/oracle/stage.13449, this may take a while
 
Active Image version   : 11.2.2.4.2.111221
Active Kernel version  : 2.6.18-238.12.2.0.2.el5
Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : n/a
Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
Current user id        : root
Action                 : onetime  phase: 1
Upgrading to           : 11.2.3.2.1.130302
Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.13449/x86_64/ (iso)
Iso file               : /u01/app/oracle/stage.13449/112_latest_repo_130302.iso
Create a backup        : Yes
Shutdown stack         : Yes (Currently stack is up)
Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 270513104421)
Diagfile               : /var/log/cellos/dbnodeupdate.270513104421.diag
Server model           : SUN FIRE X4170 M2 SERVER
dbnodeupdate.sh rel.   : 1.35 (always check MOS 1553103.1 for the latest release)
 
Note                   : After completing this step continue with phase 2 of the one-time setup by running the following command:
                       : ./dbnodeupdate.sh -u -p 2
 
Continue ? [Y/n]
  (*) 2013-05-27 10:45:04: Verifying GI and DB's are shutdown
  (*) 2013-05-27 10:45:04: Shutting down GI and db
  (*) 2013-05-27 10:47:03: Collecting console history for diag purposes
  (*) 2013-05-27 10:47:24: Performing backup to /dev/mapper/VGExaDb-LVDbSys2
  (*) 2013-05-27 10:55:12: Backup successful
  (*) 2013-05-27 10:55:13: Disabling stack from starting
  (*) 2013-05-27 10:55:13: OSWatcher stopped successful
  (*) 2013-05-27 10:55:29: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.1.0) stopped successfully
  (*) 2013-05-27 10:55:29: Executing bootstrap.sh. Node is expected to reboot when finished
 
Remote broadcast message (Mon May 27 10:57:04 2013):
 
Exadata post install steps started.
It may take up to 2 minutes.
The db node will be rebooted upon successful completion.
 
Remote broadcast message (Mon May 27 10:57:25 2013):
 
Exadata post install steps completed.
Initiate reboot in 10 seconds to apply the changes.
 
Broadcast message from root (Mon May 27 10:57:35 2013):
 
The system is going down for reboot NOW!

Let’s walk through this – first, the script checked for free space in /u01, looked to see whether CRS was up and running, then spit out some information for me to confirm.  Because this was the first time the script had been run, there was no inactive LVM image version.  It also reminded me that the node would reboot, along with the command to run once the node came back up (./dbnodeupdate.sh -u -p 2).  After I confirmed that I wanted to apply the patch, the script shut down CRS, took a backup of the / LVM to /dev/mapper/VGExaDb-LVDbSys2, disabled CRS and OSWatcher, and ran the bootstrap process.  The bootstrap process in this case installs the exadata-sun-computenode RPM, along with the latest kernel release.  Following this, the node reboots.
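That Sys1/Sys2 pairing is what makes the rollback cheap: the backup always lands on whichever system LV is not the active root.  A tiny sketch of the pairing – this is a hypothetical helper of mine, not code from the script:

```shell
# Hypothetical helper mirroring the LVDbSys1/LVDbSys2 pairing shown in the
# dbnodeupdate.sh summary: the root backup goes to the inactive system LV.
inactive_sys_lv() {
  case "$1" in
    /dev/mapper/VGExaDb-LVDbSys1) echo /dev/mapper/VGExaDb-LVDbSys2 ;;
    /dev/mapper/VGExaDb-LVDbSys2) echo /dev/mapper/VGExaDb-LVDbSys1 ;;
    *) echo "unexpected system LV: $1" >&2; return 1 ;;
  esac
}

# Given the active LVM name from the summary, this is the backup target
inactive_sys_lv /dev/mapper/VGExaDb-LVDbSys1
```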

When the node comes back up, it’s time to run the “phase 2” bootstrap:

[root@dm03db04 yum]# ./dbnodeupdate.sh -u -p 2
  (*) 2013-05-27 11:03:46: Collecting system configuration details...
 
Active Image version   : 11.2.3.2.1.130302
Active Kernel version  : 2.6.32-400.21.1.el5uek
Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 11.2.2.4.2.111221
Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
Current user id        : root
Action                 : onetime  phase: 2
Upgrading to           : 11.2.3.2.1.130302
Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.13449/x86_64/ (iso)
Iso file               : /u01/app/oracle/stage.13449/112_latest_repo_130302.iso
Create a backup        : No
Shutdown stack         : No (Currently stack is down)
Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 270513110345)
Diagfile               : /var/log/cellos/dbnodeupdate.270513110345.diag
Server model           : SUN FIRE X4170 M2 SERVER
dbnodeupdate.sh rel.   : 1.35 (always check MOS 1553103.1 for the latest release)
 
Note                   : After completing this step and after the systems reboots run 'dbnodeupdate.sh -c' to finish post steps
 
 
Continue ? [Y/n]
y
  (*) 2013-05-27 11:04:23: Verifying GI and DB's are shutdown
  (*) 2013-05-27 11:04:25: Collecting console history for diag purposes
  (*) 2013-05-27 11:04:26: OSWatcher stopped successful
  (*) 2013-05-27 11:04:30: Executing bootstrap2.sh. Node is expected to reboot when finished
 
Broadcast message from root (pts/0) (Mon May 27 11:07:26 2013):
 
The system is going down for reboot NOW!
  (*) 2013-05-27 11:07:26: All above steps finished.
  (*) 2013-05-27 11:07:26: After reboot run "./dbnodeupdate.sh -c" to complete the onetime

The node will reboot one more time. When it comes up, the final phase must be completed – relinking the homes and enabling CRS.

[root@dm03db04 yum]# ./dbnodeupdate.sh -c
  (*) 2013-05-27 11:39:31: Collecting system configuration details...
 
Active Image version   : 11.2.3.2.1.130302
Active Kernel version  : 2.6.32-400.21.1.el5uek
Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : 11.2.2.4.2.111221
Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
Current user id        : root
Action                 : finish-post (perform post steps, relink enable/disable crs)
Relinking for release  : 11.2.3.2.1.130302
Shutdown stack         : No (Currently stack is down)
Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 270513113930)
Diagfile               : /var/log/cellos/dbnodeupdate.270513113930.diag
Server model           : SUN FIRE X4170 M2 SERVER
dbnodeupdate.sh rel.   : 1.35 (always check MOS 1553103.1 for the latest release)
Automatic checks incl. : Issue 1.8 - Hotspare not reclaimed
                       : Issue 1.10 - Cell and Database image versions 11.2.2.2.2 or lower require workaround before patching
                       : Database servers with an ofa rpm earlier than 1.5.1-4.0.28 can encounter a file system corruption
                       : Issue 1.14 - Upgrade to 11.2.3.x failed due to Sas Exp. FW not upgrd. first to 5.7.0 on X4800 and X4800 M2
                       : Issue 1.15 - Filesystem checks not disabled on database servers
                       : Issue 1.16 - Verify the vm.min_free_kbytes kernel parameter on database servers to make sure 512MB is reserved
                       : Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Manual checks todo     : Issue 1.11 - Database Server upgrades to 11.2.2.3.0 or higher may hit network routing issues after the upgrade
 
Continue ? [Y/n]
y
  (*) 2013-05-27 11:40:15: Verifying GI and DB's are shutdown
  (*) 2013-05-27 11:40:16: Collecting console history for diag purposes
  (*) 2013-05-27 11:40:34: No rpms to remove
  (*) 2013-05-27 11:40:57: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.1.0) stopped successfully
  (*) 2013-05-27 11:40:58: Relinking all homes
  (*) 2013-05-27 11:40:58: Unlocking /u01/app/11.2.0.3/grid
  (*) 2013-05-27 11:41:04: Relinking /u01/app/11.2.0.3/grid as oracle
  (*) 2013-05-27 11:42:36: Relinking /u01/app/oracle/product/11.2.0.2/dbhome_1 as oracle
  (*) 2013-05-27 11:44:09: Relinking /u01/app/oracle/product/11.2.0.3/dbhome_1 as oracle
  (*) 2013-05-27 11:45:48: Executing /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -patch
  (*) 2013-05-27 11:47:48: Sleeping another 60 seconds while stack is starting (1/3)
  (*) 2013-05-27 11:47:48: Stack started
  (*) 2013-05-27 11:47:48: Enabling stack to start at reboot
  (*) 2013-05-27 11:48:33: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.1.0) started successfully
  (*) 2013-05-27 11:48:35: Filesystem max mount count is not configured according to best practices. Correcting setting now.
  (*) 2013-05-27 11:48:35: Filesystem check interval is not configured according to best practices. Correcting setting now.
  (*) 2013-05-27 11:48:35: Kernel parameter vm.min_free_kbytes is not set to the recommended minimum value. Correcting setting now
  (*) 2013-05-27 11:48:43: Cleaned up iso
  (*) 2013-05-27 11:48:43: All above steps finished.
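To recap, the only difference between the two runs was the starting image version.  Here is a sketch of the decision – my own summary of the two demos, not the script’s actual version-detection code:

```shell
# Which dbnodeupdate.sh sequence applies, based on the active image version?
# (My own summary of the two demos in this post, not logic from the script.)
update_path() {
  case "$1" in
    11.2.2.4.2*) echo "one-time: -u -l <zip> -p 1 -x <helpers> -s, reboot, -u -p 2, reboot, -c" ;;
    11.2.3.*)    echo "regular: -u -l <zip> -s, reboot, -c" ;;
    *)           echo "older image: check MOS 1553103.1 and 1284070.1 first" ;;
  esac
}

update_path 11.2.2.4.2.111221   # the second demo's starting image
update_path 11.2.3.2.1.130109   # the first demo's starting image
```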

Hopefully in the next week, I’ll be able to play around with the rollback functionality, and will report back on that.


5 thoughts on “dbnodeupdate.sh on Exadata Compute Nodes”


  2. sunilbhola

    Hi,

    This is a really great article, but I have one query about it.

    As per the documentation:
    ———–x—————x——————-x
    NOTE: ‘One-Time Setup’ is required only for database servers on releases earlier than 11.2.2.2.2 and later than release 11.2.3.1.0.
    Depending on your current Oracle Exadata release the dbnodeupdate.sh utility runs either ‘yum upgrade’ for Oracle Exadata releases later than release 11.2.2.4.2 or ‘OneTime Setup’ for releases earlier than release 11.2.2.2.2 and later than release 11.2.3.1.0.
    ———–x—————x——————-x

    But in the first example (updating 11.2.3.2.1 (11.2.3.2.1.130109) to the one-off release), you did not use the one-time setup, while in the second example (updating from 11.2.2.4.2 to 11.2.3.2.1) you did use the one-time setup (-p 1).

    Can you please explain?

    1. Andy Colvin (post author)

      I believe that’s an error in the note. The one-time setup configures yum on the database servers. Versions after 11.2.3.1.0 have yum configured. Versions earlier than 11.2.2.x will most likely be running OEL 5.3, so you need to look at MOS note 1284070.1, which covers upgrading compute nodes from OEL 5.3 to OEL 5.5. After that, the one-time setup will need to be completed.

      1. sunilbhola

        Hi Andy,

        There is so much confusion in the documents; it took me a lot of time to go through them again and again. As per my understanding, the key to dbnodeupdate.sh is:

        The four use cases for the dbnodeupdate.sh utility typically are:

        •One-Time Setup (updating procedure for Exadata releases 11.2.2.4.2 running Oracle Linux 5.5 or later)
        •Updating database servers running Exadata releases later than 11.2.2.4.2 on Oracle Linux 5.5 or later
        •Rolling back updates
        •Post-Patching (or Post-Rollback steps) (relinking the Oracle homes, enabling Grid Infrastructure to start)

        So if you have any version greater than 11.2.2.4.2, you are good to go with only the “-u” and “-l” switches and the patch file (which can be any repository – yum over http, file://, or even the zip file of the patch).

        And if you are on version 11.2.2.4.2.*, you HAVE TO invoke dbnodeupdate.sh with “-p 1” and “-p 2”; you cannot use just “-u” and “-l”.

        The yum repository can be any of the following:

        Any upgrade, either a one-time or a regular yum upgrade, requires a repository with the Exadata rpms for the release being updated to. This repository can be one of the following:

        1. An http baseurl (http:// address). Example: http://yum-repo.us.oracle.com/yum/unknown/EXADATA/dbserver/11.2_save/latest/x86_64/
        2. A file location (zip file with the Exadata ISO channel in it), such as /u01/16432033.zip
        3. A file baseurl (file:/// address). Example: file:///var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest (repository to be prepared by the operator)

        For options 1) and 2), no manual actions are required.
        For option 3), the ISO image must be mounted as a loop device before starting the upgrade (see My Oracle Support note 1473002.1 for instructions on mounting the ISO manually).

        Please correct me if I have understood it wrongly.

        Regards,
        Sunil Bhola

  3. sunilbhola

    Database server kernel versions by Exadata release:

    Exadata release(s)                   Database server kernel version
    -----------------------------------  --------------------------------------------
    11.2.3.2.1                           Default: 2.6.32-400.21.1.el5uek (superseded
                                         2.6.32-400.11.1.el5uek on 2013-Mar-15 in ULN
                                         channel exadata_dbserver_11.2_x86_64_latest)
                                         Optional: 2.6.18-308.24.1.0.1.el5
    11.2.3.2.0                           Default: 2.6.32-400.1.1.el5uek
                                         Optional: 2.6.18-308.8.2.0.1.el5
    11.2.3.1.1, 11.2.3.1.0               2.6.18-274.18.1.0.1 (OL 5.7)
    11.2.2.4.2, 11.2.2.4.1, 11.2.2.4.0   2.6.18-238.12.2.0.2 (OL 5.5)
    11.2.2.3.5, 11.2.2.3.3, 11.2.2.3.2,
    11.2.2.3.1, 11.2.2.2.2               2.6.18-194.3.1.0.4 (OL 5.5)
    11.2.2.2.1, 11.2.2.2.0, 11.2.2.1.1,
    11.2.2.1.0                           2.6.18-194.3.1.0.3 (OL 5.5)

    Regards,
    Sunil Bhola

