After seeing this press release, I couldn’t help but think back on the last year and a half that I’ve been working on Exadata, and all of the interesting projects and implementations we’ve worked on. When you think about the number of Exadata systems that are out there (Oracle claims over 1,000), the fact that we at Enkitec have sold 29 of them is pretty impressive, at least to me (75% of all Exadata systems in North America not sold directly by Oracle were sold by Enkitec).
Going back over a few of them, we’ve worked with the following packaged applications:
- eBusiness Suite
- Oracle Warehouse Builder
Not to mention a number of custom applications built around code that was developed in-house. There have been OLTP, data warehouse, and mixed workload environments. We’ve moved 9.2 databases onto Exadata using export/import, 11.2 databases using RMAN, and more than a few live migrations/upgrades using Oracle GoldenGate.
One of the first Exadata systems we worked on was our own, back when information was limited (if you think it’s hard to get info today, imagine what it was like when there weren’t many systems out there). We had no help going through the configuration worksheets. I’ll always remember looking it over and saying “You mean I need HOW many IPs for a quarter rack?!?!” From there, we learned about the system by building ours from the ground up. We chose not to purchase the Oracle installation service, and through a couple of “learning experiences” we picked up quite a few valuable skills on the internals and core of Exadata. Without having our own box to break and fix, we wouldn’t have learned what we did. We ran through the quarter rack to half rack upgrade, and learned the hard way that without labels for the cables, your upgrade isn’t going to get very far.
From there, we started with a few engagements as Exadata took root in the Dallas area. We took on a project with a customer that had 2 half rack systems and wanted one of them split into 2 quarter racks. I even got to do a weekend-long patch-a-thon on a V2 system that was dubbed by Oracle as the “Exadata Basic” configuration: 1 database server, 1 storage cell, and 1 InfiniBand switch. That was a really interesting process and setup. We had another client that was running on a maxed-out T3 SPARC system and needed to get off of it badly. Their database was dying a slow death as the number of active sessions hogged the CPUs until there were no resources left. We quickly moved them over to an M5000 while we worked out a path to move them from 10.2 on SPARC to Exadata with limited downtime. We used GoldenGate to keep the M5000 and Exadata databases in sync, then cut over once things were ready to go.
We took on clients needing to consolidate massive numbers of databases from various architectures and versions all onto one Exadata frame. One client migrated and consolidated 30 databases onto 2 quarter rack systems…all with the help of smart scans, and good resource management. We performed a few more split rack configurations along the way to help customers save on power costs, as buying 2 half racks wasn’t feasible when looking at leasing costs for floor space in the datacenter.
Two of the more interesting implementations were more recent. One was a migration from a Sun Fire E20K to an X2-8. The design included migrating a heavily transactional OLTP system along with a separate data warehouse. In the past, they were unable to get both databases running on the same host, as one would completely overrun the other. We were able to combine the databases (~25TB) into one database and migrate them using GoldenGate, minimizing the cutover window to a couple of hours (mostly for application reconfiguration). Now that they’re live on the X2-8, they’re able to run reports that would never finish before. Processes that took hours now run in a matter of minutes. Full backups that took 48 hours now finish in under 10 hours. It’s really cool to see the power of the system once you get it up and running.
The other interesting implementation was something you don’t see very often: Exadata without RAC. I know, you probably wouldn’t expect it, but it is possible (and supported) to run Exadata without RAC. Used this way, it becomes more of an HA and consolidation platform. I’ll have more on this in a future post, but basically, you create a clustered grid infrastructure (which means one set of ASM diskgroups, if you so desire) and run single instance databases on top of it. That was definitely one of the coolest installs we’ve done, just because it’s so unique.
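To give a rough idea of what that setup looks like, here’s a hedged sketch using the 11.2 srvctl syntax for registering a single-instance database with clusterware. All database names, Oracle Home paths, and node names below are made up for illustration; your environment will differ.

```shell
# On a cluster running Grid Infrastructure, each database is registered as a
# single-instance (non-RAC) resource pinned to one node via the -x flag.
# DB names, paths, and node names are hypothetical examples.

# Register database ORCL1 to run only on node exadb01
srvctl add database -d ORCL1 \
    -o /u01/app/oracle/product/11.2.0/dbhome_1 \
    -x exadb01

# Register a second single-instance database on the other node
srvctl add database -d ORCL2 \
    -o /u01/app/oracle/product/11.2.0/dbhome_1 \
    -x exadb02

# Start and check the databases through the clusterware
srvctl start database -d ORCL1
srvctl status database -d ORCL1
```

The appeal of this approach is that you still get the shared ASM diskgroups, clusterware-managed restarts, and the option to relocate a database to the surviving node, without paying for or managing RAC itself.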
All this to say – we’ve seen quite a bit of Exadata this past year or two, and I can’t wait to see what’s in store for the future. I’m sure that at some point, we’ll see somebody running an Exadata on Solaris, a SPARC supercluster or two, and who knows what else Oracle is going to announce in the near future. Here’s to another 60 implementations and beyond!