
Speaking at Miracle Open World 2012

I’ve been invited to speak at Miracle Open World 2012 in Billund, Denmark!  My topic will be “Oracle Database Appliance Internals,” which I’ll try to make fun and exciting.  I’m definitely looking forward to the experience, and it’ll be great to spend time with the other speakers and attendees.  My session is currently scheduled for Thursday afternoon at 4PM, before the dinner and beach party.  If you’re coming to the conference, come by my session and say hi!

New Exadata Full Stack Patches

Oracle has announced a new patching strategy for Exadata, starting with databases running 11.2.0.3.  Oracle is moving away from the monthly bundle patch philosophy, which was panned by many administrators as arriving too frequently to keep up with, given the tight maintenance schedules around most Exadata systems.  Instead, Oracle will release a Quarterly Database Patch for Exadata, or QDPE.  The QDPE will most likely be released in conjunction with the standard Critical Patch Updates (CPUs).  Oracle will still release interim bundle patches, but recommends that customers install only the QDPEs unless they have a specific need for a particular bundle patch.  Note that so far, the QDPEs are only being released for 11.2.0.3 – Linux x86_64, SPARC Solaris (SuperCluster), and Solaris x86_64.

In addition to the QDPE release, Oracle has announced a “full stack QDPE” – the Quarterly Full Stack Download Patch, or QFSDP.  This “full stack” patch includes all of the latest software that can be found in MOS note #888828.1.  The January 2012 QFSDP includes:

  • Infrastructure Software
    • Exadata Storage Server version 11.2.2.4.2
    • Exadata Infiniband Switch version 1.3.3-2
    • Exadata PDU firmware version 1.04
  • Database
    • 11.2.0.3 January 2012 QDPE
    • Opatch 11.2.0.1.9
    • OPlan
  • Systems Management
    • Patches for 11g OEM agents
    • Management plugins for 11g OEM
    • Patches for 11g OEM management server

No word on when Oracle will start including patches for the new OEM 12c.  Keep in mind that this is just a collection of patches; each component still needs to be installed as if it had been downloaded separately.  Oracle does not yet have a mechanism in place to apply the QDPE, storage server patches, Infiniband switch patches, and so on in one fell swoop.
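As a rough sketch of what that looks like in practice (the staging paths and home locations below are made up, and the README that ships with each patch is the real authority), the database QDPE is applied to the database homes with opatch, while the storage cells are patched separately with patchmgr:

# database homes: QDPE applied with opatch (typically "opatch auto", run as root)
opatch auto /u01/stage/13513783 -oh /u01/app/oracle/product/11.2.0.3/dbhome_1

# storage cells: cell patch applied with patchmgr from a driving node
# (cell_group is a file listing the storage servers)
./patchmgr -cells cell_group -patch -rolling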

The current QDPE patch is January 2012 (patch #13513783), and the current QFSDP is January 2012 (patch #13551280).
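If you want to check whether a given database home already has the QDPE, opatch can list what is registered in the inventory; grepping for the patch number above is a crude but quick check (depending on how the bundle is packaged, you may need to look for the sub-patch numbers listed in the README instead):

$ORACLE_HOME/OPatch/opatch lsinventory | grep 13513783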

Voting Disk Redundancy in ASM

A recent discussion thread on the OTN Exadata forum made me want to test a feature of 11.2 RAC – voting disk redundancy.  One section of the Clusterware Administration and Deployment Guide (http://goo.gl/eMrQM) in particular made me want to try it out and see how well it works:

If voting disks are stored on Oracle ASM with normal or high redundancy, and the storage hardware in one failure group suffers a failure, then if there is another disk available in a disk group in an unaffected failure group, Oracle ASM recovers the voting disk in the unaffected failure group.

How resilient is it?  How quickly will the cluster recover the voting disk?  Do we have to wait for the ASM rebalance timer to tick down to zero before the voting disk gets recreated?

First, what is a voting disk?  The voting disks are used to determine which nodes are members of the cluster.  In a RAC configuration, there are either 1, 3, or 5 voting disks for the cluster.

Next, when voting disks are placed in an ASM diskgroup, which is the default in 11.2, the number of voting disks created depends on the redundancy level of the diskgroup.  External redundancy creates 1 voting disk.  With ASM redundancy, the requirements go up: normal redundancy creates 3 voting disks, and high redundancy creates 5.  Oracle recommends at least 300MB per voting disk file, and 300MB for each copy of the OCR.  On standard (non-Exadata) RAC builds, I prefer to place the OCR and voting disks into their own diskgroup, named GRID or SYSTEM.
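For what it's worth, setting up a dedicated diskgroup like that might look something like the sketch below. The disk paths and failgroup names are placeholders, and compatible.asm needs to be 11.2 or higher before ASM will accept voting files:

-- sketch only: disk paths and failgroup names are made up
create diskgroup GRID normal redundancy
  failgroup fg1 disk '/dev/mapper/grid01'
  failgroup fg2 disk '/dev/mapper/grid02'
  failgroup fg3 disk '/dev/mapper/grid03'
  attribute 'compatible.asm' = '11.2';

Once the diskgroup is mounted on all nodes, the voting files can be relocated and an OCR location added (ocrconfig needs to be run as root):

> crsctl replace votedisk +GRID
> ocrconfig -add +GRID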

Anyway, back to the fun stuff.  I’m testing this on Exadata, but the results should carry over to any other system running 11.2 RAC.  For starters, we have a diskgroup named DBFS_DG that is used to hold the OCR and voting disks.  We can see that by running the lsdg command in asmcmd. I’ve abbreviated the output for clarity.

[enkdb03:oracle:+ASM1] /u01/app/11.2.0.2/grid
> asmcmd
ASMCMD> lsdg
State    Type    Rebal  Total_MB   Free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N      65261568  49688724        21877927              0             N  DATA/
MOUNTED  NORMAL  N       1192320    978292          434950              0             Y  DBFS_DG/
MOUNTED  HIGH    N      22935248  20570800         5466918              0             N  RECO/
ASMCMD>

The DBFS_DG diskgroup has 4 failgroups – ENKCEL04, ENKCEL05, ENKCEL06, and ENKCEL07:

SYS:+ASM1> select distinct g.name "Diskgroup",
  2    d.failgroup "Failgroup"
  3  from v$asm_diskgroup g,
  4    v$asm_disk d
  5  where g.group_number = d.group_number
  6  and g.NAME = 'DBFS_DG'
  7  /
 
Diskgroup		       Failgroup
------------------------------ ------------------------------
DBFS_DG 		       ENKCEL04
DBFS_DG 		       ENKCEL05
DBFS_DG 		       ENKCEL06
DBFS_DG 		       ENKCEL07
 
SYS:+ASM1>

Finally, our current voting disks reside in failgroups ENKCEL05, ENKCEL06, and ENKCEL07:

[enkdb03:oracle:+ASM1] /home/oracle
> sudo /u01/app/11.2.0.2/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   43164b9cc7234fe1bff4eb968ec4a1dc (o/192.168.12.10/DBFS_DG_CD_02_enkcel06) [DBFS_DG]
 2. ONLINE   2e6db5ba5fd34fc8bfaa0ab8b9d0ddf5 (o/192.168.12.11/DBFS_DG_CD_03_enkcel07) [DBFS_DG]
 3. ONLINE   95eb38e5dfc34f3ebfd72127e7fe9c12 (o/192.168.12.9/DBFS_DG_CD_02_enkcel05) [DBFS_DG]
Located 3 voting disk(s).

Going back to the original questions, what does it take for CRS to notice that a voting disk is gone, and how quickly will it be replaced? Are voting disks handled the same way as normal disks, or in some other way? When an ASM disk goes offline, ASM waits the amount of time specified by the diskgroup's disk_repair_time attribute (3.6 hours by default) before dropping the disk and performing a rebalance. Let's offline one of the failgroups that holds a voting disk and find out.
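As a quick aside before the test: disk_repair_time is exposed in v$asm_attribute, so it's easy to check or change ahead of time. This is just an illustrative query (and an example value), not part of the captured session below:

select g.name diskgroup, a.value disk_repair_time
from v$asm_diskgroup g, v$asm_attribute a
where g.group_number = a.group_number
and a.name = 'disk_repair_time';

alter diskgroup dbfs_dg set attribute 'disk_repair_time' = '8.5h';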

SYS:+ASM1> !date
Fri Jan  6 12:57:18 CST 2012
 
SYS:+ASM1> alter diskgroup dbfs_dg offline disks in failgroup ENKCEL05;
 
Diskgroup altered.
 
SYS:+ASM1> !date
Fri Jan  6 12:57:46 CST 2012
 
SYS:+ASM1> !sudo /u01/app/11.2.0.2/grid/bin/crsctl query css votedisk
[sudo] password for oracle:
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   43164b9cc7234fe1bff4eb968ec4a1dc (o/192.168.12.10/DBFS_DG_CD_02_enkcel06) [DBFS_DG]
 2. ONLINE   2e6db5ba5fd34fc8bfaa0ab8b9d0ddf5 (o/192.168.12.11/DBFS_DG_CD_03_enkcel07) [DBFS_DG]
 3. ONLINE   ab06b75d54764f79bfd4ba032b317460 (o/192.168.12.8/DBFS_DG_CD_02_enkcel04) [DBFS_DG]
Located 3 voting disk(s).
 
SYS:+ASM1> !date
Fri Jan  6 12:58:03 CST 2012

That was quick! So from this exercise, we can see that ASM doesn't waste any time recreating the voting disk. With files this critical to the stability of the cluster, it's not hard to see why. If we dig further, we can see that ocssd noticed that the voting disk on o/192.168.12.9/DBFS_DG_CD_02_enkcel05 was offline, and it proceeded to bring a new voting file online on disk o/192.168.12.8/DBFS_DG_CD_02_enkcel04:

2012-01-06 12:57:37.075
[cssd(10880)]CRS-1605:CSSD voting file is online: o/192.168.12.8/DBFS_DG_CD_02_enkcel04; details in /u01/app/11.2.0.2/grid/log/enkdb03/cssd/ocssd.log.
2012-01-06 12:57:37.075
[cssd(10880)]CRS-1604:CSSD voting file is offline: o/192.168.12.9/DBFS_DG_CD_02_enkcel05; details at (:CSSNM00069:) in /u01/app/11.2.0.2/grid/log/enkdb03/cssd/ocssd.log.

Finally, what happens when the failgroup comes back online? There are no changes made to the voting disks. The voting disk from the failgroup that was previously offline is still there, but will be overwritten if that space is needed:

SYS:+ASM1> !date
Fri Jan  6 13:05:09 CST 2012
 
SYS:+ASM1> alter diskgroup dbfs_dg online disks in failgroup ENKCEL05;
 
Diskgroup altered.
 
SYS:+ASM1> !date
Fri Jan  6 13:06:31 CST 2012
 
SYS:+ASM1> !sudo /u01/app/11.2.0.2/grid/bin/crsctl query css votedisk
[sudo] password for oracle:
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   43164b9cc7234fe1bff4eb968ec4a1dc (o/192.168.12.10/DBFS_DG_CD_02_enkcel06) [DBFS_DG]
 2. ONLINE   2e6db5ba5fd34fc8bfaa0ab8b9d0ddf5 (o/192.168.12.11/DBFS_DG_CD_03_enkcel07) [DBFS_DG]
 3. ONLINE   ab06b75d54764f79bfd4ba032b317460 (o/192.168.12.8/DBFS_DG_CD_02_enkcel04) [DBFS_DG]
Located 3 voting disk(s).

In previous versions, this automation did not exist; if a voting disk went offline, the DBA had to manually create a new one. Keep in mind that this feature only kicks in if there is another failgroup available to hold the relocated voting disk. If your normal redundancy OCR/voting diskgroup has only 3 failgroups (or your high redundancy diskgroup only 5), this automatic recreation will not occur.
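One quick way to sanity-check that, using nothing but the standard ASM views, is to count failgroups per diskgroup; this is just a sketch:

select g.name diskgroup, g.type redundancy,
       count(distinct d.failgroup) failgroups
from v$asm_diskgroup g, v$asm_disk d
where g.group_number = d.group_number
group by g.name, g.type;

For the relocation to have somewhere to go, the diskgroup holding the voting files needs at least one more failgroup than the number of voting disks it carries.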