Oracle Database 11gR2 is moving from Premier Support to Extended Support in January 2015. What does this mean to you?
Here is the basic definition of the services provided by Premier Support:
* Major product and technology releases
* Technical support
* My Oracle Support
* Updates, fixes, security alerts, data fixes, and critical patch updates
* Tax, legal, and regulatory updates
* Upgrade scripts
* Certification with most new third party products/versions
* Certification with most new Oracle products
Extended Support provides the same benefits for an additional three years, but those benefits come at an additional cost.
So the real question is, what is in your best interest as a customer? Moving to 12c and staying with Premier Support, or taking your time, investing in Extended Support, and moving to 12c later? There are a number of factors that weigh in on either side, and it’s clear that there is no one best answer for everyone. Otherwise, why would we have Extended Support available to us in the first place?
So, let me state clearly that what I’m about to say in this post (and all my posts, for that matter) is my opinion. It’s based on a lot of assumptions, specific objectives, and what I consider to be best practices. You may disagree with me, and if you do I hope you will post a comment and nicely tell me why.
First, let me define my terms: what I’m talking about in this piece is major upgrades. These are upgrades from one major version of Oracle (say 11gR2) to another major version of Oracle (say 12cR1).
I’m not talking about upgrading existing versions with the latest bundle patches, PSU, GIPSU, EIEIO, or whatever. I consider these interim patch sets and will call them such in this post.
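The major-versus-interim distinction above can be made concrete. A minimal sketch, with helper names that are mine and not Oracle's: a dotted Oracle version string like 11.2.0.4 encodes the release in its first two fields, so a move that changes those fields is a major upgrade, while a move that changes only later fields is an interim patch set.

```python
# Illustrative helpers (names are mine, not Oracle's) for classifying a
# move between two dotted Oracle version strings.

def parse_version(v: str) -> tuple:
    """Split a dotted version string like '11.2.0.4' into integer fields."""
    return tuple(int(part) for part in v.split("."))

def is_major_upgrade(current: str, target: str) -> bool:
    """Major if the release (first two fields) changes; interim otherwise."""
    return parse_version(current)[:2] != parse_version(target)[:2]

print(is_major_upgrade("11.2.0.4", "12.1.0.1"))  # True: 11.2 -> 12.1 is a major upgrade
print(is_major_upgrade("11.2.0.3", "11.2.0.4"))  # False: same release, interim patch set
```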
What are the considerations when planning a major upgrade from, say, Oracle Database 11gR2 to 12cR1? Let me present you with a non-exhaustive list of things I’ve come up with, along with some comments about these factors:
* Cost
A major upgrade has quite a bit more cost associated with it than other upgrades. This is for a lot of reasons, many of which I list below.
* Lore
There is this lore out there in Oracle land, centered around the notion that we don’t use the first release or two of a major version change. Lore is just what it is: based on our own experiences and things we have heard. In my mind, this argument is treated as a postulate rather than a theorem, and it certainly lacks any real proof. I would further argue that technology has removed many of the reasons argued for this approach; we just have not implemented that technology.
* Resource Availability
Resources are a major issue when it comes to major upgrades. Resources are always a problem, but with major upgrades there is a learning curve that adds to the resource complexity. Finding someone experienced with the newer version of the database will be difficult, and those experienced with older versions of the database will need some ramp-up time.
* Dependencies
A major upgrade always has dependencies that need to be considered. For example, before you can upgrade your 11gR2 RAC databases to 12c RAC, you will need to upgrade your Grid Infrastructure to support 12c RAC. There may be other dependencies to consider as well.
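The Grid Infrastructure dependency above amounts to a precondition: GI must already be at, or above, the database version you are upgrading to. A minimal sketch of that check; the function names are mine, and in practice the version values would come from tools such as `crsctl query crs activeversion` and `v$instance`, which are assumed inputs here.

```python
# Hypothetical precondition check for the Grid Infrastructure dependency:
# GI can host databases at its own version or lower, so the GI version
# must compare >= the target database version, field by field.

def version_fields(v: str) -> tuple:
    """Split a dotted version string into integer fields for comparison."""
    return tuple(int(p) for p in v.split("."))

def gi_supports_db(gi_version: str, target_db_version: str) -> bool:
    """True if Grid Infrastructure is at or above the target DB version."""
    return version_fields(gi_version) >= version_fields(target_db_version)

print(gi_supports_db("12.1.0.2", "12.1.0.1"))  # True: GI is already on 12c
print(gi_supports_db("11.2.0.4", "12.1.0.1"))  # False: upgrade GI first
```

Python's tuple comparison does the field-by-field work, which is why the versions are parsed into integer tuples rather than compared as strings (string comparison would rank "9" above "12").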
* Stakeholders and Testing
Stakeholders and management get understandably nervous about any kind of change. When we are talking about a major software version change, they really get nervous. This is for a lot of reasons, including the fact that they would like their application to continue working after the upgrade! Many organizations have not streamlined their testing processes, so just getting testing started is a major event and finishing it is a major accomplishment. This old way of testing, if you will, also turns out to be very costly and is often not budgeted for.
* Risk
In some ways, this is what it all boils down to: risk. How much risk is involved in this upgrade? How likely is it that we have caught everything in our testing? Is the likelihood of an outage increased by this change in software versions? The possible risks are many, of course, but are they really that much greater than the risk of doing nothing? Also, what is the root cause of your risk-assessment fears? Perhaps it’s not the risk that is the problem, but the process you are using.
* Benefits
In the past, I’ve had many discussions about major upgrades, and one of the questions that comes up is: what is the benefit of doing this now? The benefits are inextricably connected with the risks. But do we really measure this ratio correctly? We certainly can’t unless we know something about the product and the features that it brings to the table. We can’t properly measure its benefit and stability unless we put it to the test.
* Certification
This might well be one of the biggest hindrances to migration. It’s easy to blame a vendor for failing to certify, but is that where the blame properly belongs?
* Education
As mentioned above with respect to resources, education is a big issue. Time and time again I run into DBAs and developers who are still doing things the way they would have back in the Oracle 7 days. Education is a real issue in so many ways, because it ties into the here and now as well as the future.
* Fear
Many of us fear that which we are not comfortable with. There are some who embrace the newness of something, there are those who are cautious, and there are those who go running for the hills. We have to address this fear. One of my favorite movie quotes is from Dune: "Fear is the mind-killer." Our fear is our undoing. Make no mistake, this fear isn’t just about the migration itself; it may run much deeper than that. Some employment cultures almost embrace fear, thinking it’s some Darwinian approach to success.
* Lack of agility
In my mind, this is one of the greatest roadblocks to the enterprise. Being unable to be agile makes things take longer and cost more, adds risk, and causes a whole host of other problems that stagnate the enterprise.
* Scale
The scale of an upgrade project can have a serious impact on the decisions made around it. It's far easier to manage the upgrade of two databases in a non-RAC infrastructure than some 400 databases spread across many RAC clusters. Scale demands that you act sooner, not later - and yet, I often find that this is not the case.
* Assumptions
I've seen a myriad of assumptions surface at various milestone moments, including major patching. I think these assumptions are the source of many stumbling blocks and even failures.
I'm sure there are many other factors that will come to mind after I post this - there is always something I wish I'd written.
So - why put this under the title of putting the cart before the horse? Mostly because of the assumption factor. In assuming that X, Y, and Z are postulates rather than claims that really require proof, we put the cart before the horse. So often, those postulates are not real, or are not as dire as one might make them out to be - for so many reasons. Thus, in this case, the cart is blocking the horse and hindering it from pulling, and the cart might well injure the horse - the horse being your IT infrastructure and your users' applications.
And - just to add to the visualization of the cart and the horse and give it a bit more excitement, let me add one more thing. The cart and the horse are sitting on a pair of railroad tracks, immobile. Out in the distance is the whistle of a train heading your way. On one side of the train is plastered the big bold words - REALITY COMETH. On the other side of the train, the side you can't see, in equally bold words we find - AND SO AM I, AND I AM AFTER YOUR DATA!!
The reality is that the bad guys move on the fast tracks. We need to learn how to do the same, lest the horrific word BREACH land in your mailbox some Monday afternoon.