• February 18, 2015

Responses to You can't please all of the people... deprecation of the non-CDB architecture

Guest Author
I have received a few comments related to my posts on the deprecation of the non-CDB architecture. I must admit, I find some reactions to be a bit confusing.
First, let me be clear - I am NOT an official voice for Oracle. This is MY blog, MY opinions and I am NOT directed by Oracle in any way with respect to what I do (or do not) write here.
So, let's quash the notion that I'm acting as some kind of Oracle shill when I suggest that there is a major overreaction to this issue.
Let's start with a few snips from comments made in the previous posts on this subject:
"People who are not on the bleeding edge do not appreciate being forced into a buggy new code path. This is not FUD, this is experience."
"We do NOT want working code and system procedures to be replaced with something that might work in the future maybe kinda sorta if you get around to it."
"I think once Oracle is seen to be "eating it's own dogfood" more people will lose their fear of the unknown...."
Based on these quotes, you would think that Oracle had announced that it was ripping out the non-CDB code now, or in 12.2. That's simply not the case.
The non-CDB code isn't going to be ripped out in 12.2 either. Beyond that, I don't know, but I can't see it being ripped out for quite some time.
Why are people knee-jerking about this stuff? Why are assumptions being made, completely without any foundation? I am also often confused by the fact that people look at their enterprise and its unique issues - and assume that everyone else faces the same issues.
I am confused by the arguments like this one:
"We don't want a moving target, we don't want make-work, we want our data to be secure and reachable in a timely manner."
Yes, and THOSE (security and data itself) are moving targets in and of themselves and they NECESSITATE a moving, growing and changing database product.
Security risks are constantly changing and increasing - hacking attempts are more frequent and more complex, and the costs of intrusions are growing dramatically. Are you suggesting that responses to such risks - such as Database Vault, or encryption at rest - should not be added to the database product so it can remain static and, hopefully, bug free? Is the urgency to avoid bugs so critical that we weigh it higher than the development of responses to these risks?
With respect to data being reachable in a timely manner - this too is a moving target. Ten years ago, the sizes of the databases we deal with now were nothing more than speculation. The types of data we need to deal with have multiplied, as have the timelines to process that data.
If Oracle had decided to remain even semi-static ten years ago - do you suppose that CERN would be able to process the vast amounts of data that it does with Oracle? Do you suppose that reports that went from running in an hour to time periods of days - because of the incredible increases in data volume - would be something that customers would accept as long as the code base remained stable? It's the constant modification of the optimizer that provides the advanced abilities of the Oracle database.
The biggest moving targets are not in the database itself - they are in the business work that the database must accomplish. Just because one enterprise does not need those solutions, or cannot see their benefit, does not mean that there is not a significant set of customers that DO see the benefit in them.
Then there is this statement (sorry Joel - I don't mean to seem that I'm picking on you!)
"It's by no means new - the issue that immediately comes to mind is the 7.2 to 7.3 upgrade on VMS. People screamed, Oracle said 'tough.'"
Change is always difficult for people. I agree that change can present serious challenges to the enterprise - and we can focus on those challenges and see the cup as half empty. However - change can be seen as quite the positive too. We make the choice which way we look at it.
This is an opportunity to refine how you do things in the Enterprise. It's an opportunity to do things better, more efficiently and build a smarter and more automated enterprise.
Or, you can moan and complain along the whole path, change things begrudgingly, and ignore the fact that opportunity is staring you in the face. I would argue that if you think you are too busy to deal with this change over the next several years - then perhaps you are not working as efficiently as you could be.
I'd also offer that if your enterprise is so complex, and so fragile, that you can't make the changes needed in the next five years or so - then your problem is not changing Oracle Database software code. It is the complexity that you have allowed to be baked into your enterprise. So - we can look at this in a negative light or we can see it as a call to do better across the board. To work smarter and to simplify complexity.
"When will Oracle's own packaged applications be compatible with the PDB architecture. For example E-business suite which still arguably is Oracles best selling ERP suite is still not certified to run on a single instance PDB , let alone multitenant."
Here lies the proof that Oracle is giving you a LOT of notice about this change to the Oracle architecture. The CDB architecture is a new architecture, and it's true that pretty much all of the Oracle software that actually uses the database does not yet support the CDB/PDB architecture. So I argue that losing our cool over the fact that non-CDB will be going away is clearly a knee-jerk reaction to something that's coming, for sure, but not tomorrow or anytime soon.
This one statement alone should suggest that this isn't going to happen anytime soon. So, why are people acting like it is happening next year?
"I agree with many of the points, but I kind-of disagree with the scripting aspect somewhat."
So, first let me say that I sympathize with this. However, maybe changing scripts to use a service name rather than OS authentication is an overall improvement in how we manage our enterprises.
This is probably one of the biggest pain points of a migration to the CDB architecture - I'm not denying that. I am saying that using services rather than OS authentication is likely the better solution, and that we should have been doing it in the first place anyway.
Most applications should be using services by now anyway. So there should not be a significant amount of pain there.
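To make the scripting point concrete, here is a minimal sketch of the two connection styles. Every name in it (host, port, service, user) is a hypothetical placeholder, and the wallet approach assumes a Secure External Password Store has already been configured for the client:

```
# OS authentication: no listener needed, but the script is tied to
# the local host and a privileged OS user.
sqlplus / as sysdba

# Service-based connection (EZConnect syntax); "dbhost01", "1521" and
# "salespdb" are placeholder host, port and service names.
sqlplus admin@//dbhost01:1521/salespdb

# With a Secure External Password Store (wallet), a script can use a
# service without embedding a password at all:
sqlplus /@salespdb
```

The wallet form is what makes service-based admin scripts practical: the credential lives in the wallet, not in the script, so moving a PDB to another host only requires updating the net service definition.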
Perhaps, in an effort to look at the positive, we might say that in being forced to modify our existing way of doing things, we are also forced to look at our existing security infrastructure. Are we simply allowing applications to connect via OS authentication? Is that really a best practice? I'm not sure it is. So, there is an opportunity here - if we choose to look at it that way.
"Your voice carries weight. Your opinions do matter."
I think you overvalue my voice and its impact. :) Be that as it may, I see multitenant as the natural evolution of the database product. There will be a significant amount of time for these changes to mature, and for people to execute migration paths to this new architecture, before we see the plug pulled.
This isn't me speaking as some Oracle shill. I would feel this way should I work for Oracle or anyone else. Remember - I'm the guy that writes the new features books! :)
I think the direction Oracle is going is right on target. It addresses a number of issues that are now being addressed haphazardly with solutions like virtual machines. It addresses the performance of multiple databases on one machine sharing resources most efficiently.
If you should wish to study some of the performance benefits of the Multitenant architecture you can find them listed here:
The fact is that, all things being equal (and acknowledging that there will always be outliers), there are significant performance gains when you use PDBs instead of standalone databases.
I know that it's easy to take something personally. I know it's frustrating to be pulled, kicking and screaming, into something we don't think we want. I also know that we can sometimes close our minds about something when we have a negative first reaction.
I've been working with Multitenant quite a bit of late. It's solid, but not full-featured yet. Is it bugless? Of course not. Is the Oracle Database bugless without Multitenant? Nope. Is any large application without bugs? Nope. I don't think you stop progress because of fear. You don't stop sending manned missions into space because of the risk of death. You don't stop developing your product because of the risk of bugs. If you do the latter, you become a memory.
None of us want our critical data running on databases that are just memories - do we?
We might THINK that we don't want change. We might complain bitterly about change because it inconveniences us (and how DARE they inconvenience us!!). We might think our life would be better if things remained the same. The reality - historically - is that static products cease to be meaningful in the marketplace. Otherwise, the Model-T would be selling strong, we would still be using MS-DOS, and there would be no complex machines like the 747.
Agility - Let my voice carry the message of being agile
If my voice carries any weight - then let agility be my message.
I see many DBA's that treat their database environments as if they were living in the 1990's. These environments lack agility - and they use more excuses than I can count to avoid being agile.
For example, I would argue that the choice between relying on OS authentication and using services is all about engineering for agility. Yes, it might be legacy code - but if it is, then the question is: are we thinking in terms of agility and using maintenance cycles to modify our code to BE agile?
Probably not - and for many reasons I'm sure.
I argue that one root cause behind these complaints (I did NOT say the only root cause) about the demise of the non-CDB model boils down to one thing - the ability to be agile.
Now, before you roast me for saying that, please take a moment to think about that argument and ask yourself if it isn't just a little bit possible... if I might just be a little bit right. If I am, then what we are complaining about isn't Oracle - it's how we choose to do business.
That is its own blog post or two ... or three.... And it's what I'll be talking about at the UTOUG very soon!
Note: Edited a bit for clarification... :)


Comments ( 6 )
  • Robert Wednesday, February 18, 2015

    Funny... after writing this post, I read this quote from Darwin. It seems to express my feelings well:

    "It is not the strongest of the species that survive, nor the most intelligent, but the one more responsive to change."

    So it seems that Agility is the key.

  • robin chatterjee Wednesday, February 18, 2015

    you used my comment :)

    "When will Oracle's own packaged applications be compatible with the PDB architecture. For example E-business suite which still arguably is Oracles best selling ERP suite is still not certified to run on a single instance PDB , let alone multitenant."

    As I mentioned, that will be the point when PDB/CDB is enterprise ready, in my opinion :) After all, that's what the E in ERP stands for... Till then, it certainly makes sense to learn how the new stuff works. One reason I can think of why services are not generally used in administrative scripts is that they need the listener to be available, whereas with critical scripts you want them to depend on as few moving parts as possible. So if the TCP/IP stack has failed, or the listener is crashing due to some sort of filesystem corruption or port conflict, I still want all my admin scripts to work. I remember reading a blog on how to shut down a PDB if your listener is not working, and you basically need to set the context to the PDB in question? Perhaps admin scripts should be doing that to increase robustness?

  • Robert Wednesday, February 18, 2015

    The point about a service being tied to the listener is a valid one. In a non-RAC environment that would be a known limitation. But if you're not invested in RAC, then you accept that there will be some lesser degree of availability. That is the give and take of the decision to cluster.

    However, the same point could be made for many of the components on a single server that might fail. For example, perhaps the SQL*Plus binary becomes corrupt. Perhaps the database instance on that server has a configuration problem. There are many single points of failure that can occur.

    I think that cognitively we tend to go through this list of things that can fail and we grab at what seems the most obvious - looking for confirmation that our fears are reality. In fact, how often do you really have issues with a listener on a single instance, non-RAC database? What are the real statistics and facts with that single point of failure?

    Beyond that, the idea of a service is that it generally is not considered to have affinity to a specific node. Granted, it can be configured as such, but in my mind the whole idea of high availability kind of flies in the face of assigning a service (that would support running local scripts) to one node.

    I think I'd much rather have that service be supported by many nodes, and have scripts be managed by OEM globally - rather than, say, by a local CRON job. That removes the dependency on a single listener you are talking about.

    This, no doubt, will result in people throwing the "but you have to pay for RAC" rocks at me. I don't package or price this stuff folks. I only build architectures that utilize these features. I'm not the right person to complain about packaging of features at. :)

  • robin chatterjee Thursday, February 19, 2015

    The problem is that OEM is a BIG thing with lots of moving parts itself. For my critical scripts, I want even less to be dependent on it...

    Many is the time when OEM shows me that my agent is unavailable, whereas the agent itself shows as connected from the console...

    I haven't studied the PDB architecture that well. If I have a RAC cluster and one of the nodes' listeners is down, will I still be able to manage the PDB from the other nodes? Won't the CDB instance on the failed-listener node also have the PDB datafiles open?

  • Robert Freeman Thursday, February 19, 2015

    Hi Robin - Thanks for the comment.

    I appreciate that you have had issues with OEM, but it has gotten much better. Cloud Control has modified the agents significantly and they seem quite a bit more stable. The only thing I can ask is that if you are having problems with OEM or the agents, then please open an SR.

    I know that working with support can be a pain (all of you who think I'm an Oracle shill please take note of the shot at Oracle support here). Still, if everyone would work with support when they have issues rather than working around them or deciding to give up, we would have a more solid product.

    Please know that I'm not saying you are doing this at all. I've just had lots of discussions with folks that complain about something not working and when I ask them if they opened an SR, it turns out they did not. If you are having a problem with something, then it's likely someone else is having that same problem. So, opening an SR is helping other folks. :)

    With respect to your RAC question: the instance for a RAC CDB runs on each node of the RAC cluster, just like non-CDBs. So if you lose a node, the CDB and its PDBs are still available on the surviving nodes. Likewise, you can use services to load balance, etc...
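    As an aside on the listener concern: a PDB can also be administered over a local, OS-authenticated (bequeath) connection with no listener involved at all. A minimal sketch, where the PDB name "pdb1" is a placeholder and a local "sqlplus / as sysdba" session on the host is assumed:

    ```
    -- From a local "sqlplus / as sysdba" session (bequeath connection,
    -- so no listener is required).
    ALTER SESSION SET CONTAINER = pdb1;  -- switch the session into the PDB
    SHUTDOWN IMMEDIATE;                  -- closes only this PDB

    -- Equivalent, issued from the CDB root:
    -- ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
    ```

    That is the "set the context" technique you mentioned, and it is exactly what a robustness-minded admin script could fall back to when the listener is unavailable.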

    Thanks for your comment!!

  • Jay Weinshenker Sunday, February 22, 2015

    With as many books as you've written (I counted four on the credenza behind me when I replied to your post), trust me, your voice definitely carries weight.

    I think Multitenant databases are a good thing for Oracle to develop and I applaud Oracle for it. Having said that, in the land of (my VMware) virtual machines, I'm not convinced it makes sense to implement (even if it was supported today by my packaged Apps from Oracle such as EBS, Agile, Hyperion).

    You typically "lock" the RAM being used by a Tier 1 VM to ensure the SGA doesn't ever get paged out to disk. This is done at the VM level. So if I'm running multiple CDBs of my production Oracle EBS (I know, not yet), I've now got those DEV CDBs getting the same IO, CPU and memory reservations that are done at the VM level.

    It's not Oracle's fault, and like I said, I'm glad Oracle has the Multitenant option. But at the price I'd pay for Multitenant, combined with the challenges it imposes on an Oracle DB running under VMware, it doesn't seem like it'd be a smart choice. I do realize not all (by far!) Oracle DBs run under VMware - but a very sizable amount of them do.
