
Recent Posts

It's out! Learn all about Oracle Database 12c Multitenant features. The book is called: Keeping Up With Oracle Database 12c Multitenant - Book One: CDBs, PDBs and the Multitenant World

My newest book effort is out! It's called Keeping Up With Oracle Database 12c Multitenant - Book One: CDBs, PDBs and the Multitenant World, and you can find it here on Amazon.com. If you have not looked at Oracle Multitenant - now is the time! This is part one of what will be 3 or 4 books on Oracle Multitenant. You might look at this book and feel like it's not complete. Well, you are correct. This is part of a new series of books that I'm trying to get off the ground called the "Keeping Up with Oracle" series. The idea is to create a set of books (in this case, eventually 3 or 4) on a given topic (Multitenant). Each book will be about 25% the size of a normally published book on the topic (for example, the full Multitenant book will easily reach 500 pages by the time I'm done).

Big books take a long time to write. Something like the full-sized Multitenant book might take a year. Often, by the time the book is released, some of the information in it is old and stale. I wanted to figure out a way to address this problem. By dividing the book into parts, I can write the first book and publish current information that is not stale. As I write the remaining books, the information in those books will be fresh too. So, I've released Book One now. It has five chapters and really gets you started on how to deal with Multitenant. I need to finish up the last bit of my 12c RMAN book and then I'll be starting on the second Multitenant book - picking up where the first left off. The nice thing is that while I'm writing the second book, I may come across new things that I want to put in the first book - that will add value. That is almost impossible to do in the traditional publishing paradigm. In my paradigm it's easy: I modify the work and re-publish it. It only takes about 24 hours to go through the re-publishing process. This also means that any errata related to the books will be much smaller - if it exists at all.

Please note that I've priced the books in such a way that even the total price of the full set (3 or 4 books) is LESS than what you would pay for a big doorstop of a book. Something else to note - I don't use a small font for the words in the book. While the book format is 6x9, the font size is such that I'm packing a great deal of information into that format. I've thought about dropping the font size even more, but we will see how it goes with this first book.

So, I hope you enjoy the book. I've already found the first two stupid errors in it. One of the chapter names in the table of contents got messed up, and the table of contents is in upper case - I'm not sure how or why that last one happened. I'm sure I'll find other silly things. It's funny. You can look at your book over and over, you can look at the proofs of your book, you can have an editor look through the book for errors (which I did), and then ... when you publish it, it never fails that I find some kind of silly error within the first 30 pages. I hope that you will check out the book and let me know what you think about its content and the new publishing model I'm going to try to use. Any comments on the book are welcome here. I'll also post an errata page here at some point in time.


Personal

The Life of an INTP Part One - Ah, the good old days

This is somewhat of a personal post, but it's also related to Oracle. I've been involved in some discussions about 12c Multitenant and the fact that Oracle will be doing away with the 12c non-Multitenant architecture someday. These discussions have made me a bit introspective, as have some related things. So, if you will allow me a moment to write a few personal thoughts, I'll then get back to Oracle stuff. I also wonder if you might relate to some of these thoughts. First, I want to be clear - I harbor no ill will towards anyone. I think it's possible that some have misunderstood me or my motivation, and that is life. I'm also not looking for sympathy as much as mutual understanding. I fully understand that some won't care, and they can just move along. However, it might be that you experience some of the feelings that I do, and I hope that this post, and the few that will come after it, will be helpful. I also understand that there will be those who sigh and say, "he's writing a long thing again...." If you are one of those, you really might want to read what I have to say.

------------------------------
My Life - The "Good Old Days"
------------------------------

There are days when I look back on my life as a DBA some 20 years ago and, for a second, I miss it. Back then, all I did was sit in my cube and dedicate myself to learning all about how Oracle worked. I poked, prodded, read and asked questions of mentors. The internet was still an infant, but I still managed to learn a lot online, it seems. As I recall, CompuServe and AOL had forums that I learned a lot in. I also think I participated in a few newsgroups and so on. The point is that I got to be me, to a degree, without the vagaries of social interaction - which I am not great at, and which tends to cause me a lot of discomfort. Over time, for a number of reasons, I ignored my social discomfort and put myself "out there". I wrote books, started presenting, participated in discussions and offered opinions. I enjoy that - I enjoy writing and presenting. I enjoy discussing things and learning from others. I love teaching and sharing thoughts on a number of topics. I have made a number of really good friends and acquaintances over those years. It has been good for me professionally. I enjoy all of this. However, there has been a personal toll, and I feel it a bit today. Before I get into that, let me backtrack a little to last night.

-----------
Last Night
-----------

We had a friend visiting us last night. He and my wife have known each other for a long time, and they both are in the medical field. Last night they were taking an online version of the Myers-Briggs personality test, and they had me take it. My results were no real surprise. I am an INTP type. I've taken these types of tests before, so I knew the general results. INTP equates to: Introverted, iNtuitive, Thinking, Perceiving. The assessment says about 3% of people are INTP types. Anybody that knows me probably is not shocked by the fact that I'm an INTP. After some interactions that I had today, I was feeling a lot of different ways. Then I started to mull on the fact that I'm an INTP type. I found myself getting really frustrated and wondering if, in the end, the cost of my public life was really worth what it does to me emotionally. While the typical INTP type tends to prefer thinking over feeling, the fact is that emotions actually run very deep in us. How did I feel today?
After a lot of thought, I feel a lot of things - but for the first time I think I feel misunderstood and discriminated against. Now, that might sound odd, but I'll circle around to that later in another post and explain myself.

-----------
An Example
-----------

First - this might seem like a disjointed roundabout, but stay with me, it leads to a point. I mentioned the interactions today. They have been part of a debate over Oracle's decision to eventually do away with the non-CDB model. This debate started with my reaction to a blog posting done by someone who chooses to blog anonymously. Now, I understand wanting to be anonymous. My problem with being anonymous is that it impacts accountability. Also, by being anonymous it's easier to hide your true motivations. It seems that being anonymous is the big thing to be today. It's a wonderful shield. On the other hand, there is nothing anonymous about this blog. You know who is writing it. You probably know that I work for Oracle - or assume I do based on the URL associated with this blog. If you know me, then you probably know various things about me, which add additional context to what I say in my blog. Knowing that I work for Oracle, it's easy (though inaccurate) to ascribe certain attributes to me. Maybe, by writing under the banner of blogs.oracle.com, you think I lack the ability to think independently. Maybe you think that I am here to sell Oracle licenses or other things. Maybe you think the content of this blog is regulated in some way, or that I hold back on my opinions. None of these would be true. These are all faulty assumptions, and frankly, they are disrespectful ones at that.

------------
About INTP's
------------

INTP's want to be precise in our meanings and descriptions. This can prove very annoying to those who tend to be less precise. I am an INTP with ADD to boot. INTP's try to be concise, but I sometimes tend to wander a bit getting there. This is very much a part of what an INTP is - we look for different evidences and ways of looking at things. Sometimes, to those outside of us, it might look like meandering or straying off course. In reality, we are often just exploring the fringes, because sometimes the fringes offer great information and detail. If you have ever been a co-worker or manager of mine, you are probably smiling and nodding your head. My ability to write multi-page emails is legendary. That some don't understand why is often also apparent. I may be INTP on steroids - I don't know. I've been lucky to work with people who appreciated my INTP side, and with those who have not really gotten it. I've tried, of late, to be more aware of that trait and dial things down. I find it a painful and time-consuming exercise. It is a conscious exercise, not unlike trying to regulate your breathing when stressed, or keeping calm when you're flying and you find yourself in a 1000 FPM downdraft, while IFR, with about 3000 feet between you and the mountains below.

----------------------------
Where am I going with this?
----------------------------

Enough for this post... I want to dive further into being an INTP in the next post. As you read it and the one or two to follow, I'd like you to ask yourself: how do you respond to people like me, and is it possible that your response to INTP's is a form of discrimination? Finally, are you possibly missing something by discounting how INTP's think and work? And be proud of me... this entry could have been a lot longer!


Oracle Multitenant - Common Users

In a Multitenant database, you can create user accounts within the PDB's just like you normally would. For example, this command:

SQL> show con_name

CON_NAME
------------------------------
TESTPDB

SQL> create user dbargf identified by robert;

will create a user account called dbargf within the container named TESTPDB. The user will not be created in any other container. This is known as a local user account within the CDB architecture. It is local to a unique PDB. This isolation is in alignment with the notion that each PDB is isolated from the parent CDB. If you had a PDB called PDBTWO, then you could create a different dbargf account in that PDB. That account would be completely separate from the TESTPDB local user account created earlier. The upshot of all of this is that, in general, the namespace for a user account is at the level of the PDB. However, there is an exception. In the root container of a CDB you cannot create normal user accounts, as seen in this example:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create user dbargf identified by robert;
create user dbargf identified by robert
*
ERROR at line 1:
ORA-65096: invalid common user or role name

This is a problem because we will probably need to create separate accounts to administer the CDB at some level (for backups, for overall management) or even across PDB's but with restricted privileges. For example, let's say I wanted to have a DBA account called dbargf that would be able to create tablespaces in any PDB. I would create a new kind of user account called a common account. The common account naming format is similar to a normal account name - except that it starts with a special set of characters, C## by default. To create a common user account called dbargf, we would log into the root container and use the create user command as seen here:

SQL> create user c##dbargf identified by robert;

Likewise, you use the drop user command to remove a common user account. When a common user account is created, the account is created in all of the open PDB's of the CDB. At the same time, the account is not granted any privileges. If a PDB was not open when the common user account was created, the account will be created when that PDB is opened. When a PDB is plugged in, the common user account will be added to that PDB. As I mentioned before, in a non-CDB environment and in PDB's, when a user account is created it does not have any privileges, and the same is true of a common user account. For example, if we try to log into the new c##dbargf account we get a familiar error:

ERROR:
ORA-01045: user C##DBARGF lacks CREATE SESSION privilege; logon denied

The beauty of a common user account or role is that its privileges can span across PDB's. For example, a common user account can have DBA privileges in two PDB's in a CDB, but it might not have DBA privileges in the remaining PDB's. You grant privileges to common users as you would any other user - through the grant command, as seen here:

SQL> connect / as sysdba
Connected.
SQL> grant create session to c##dbargf;

Grant succeeded.

SQL> connect c##dbargf/robert
Connected.

When the grant is issued from the ROOT container, the default scope of that grant is just the ROOT container. As a result of this grant, then, we can connect to the root container.

C:\app\Robert\product\12.1.0.2\dbhome_2\NETWORK\ADMIN>sqlplus c##dbargf/robert

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 18 14:15:24 2015

Copyright (c) 1982, 2014, Oracle.
All rights reserved.

Last Successful login time: Wed Feb 18 2015 14:15:20 -08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> exit

However, if we try to connect to a PDB, we will get an error:

SQL> alter session set container=newpdb;
ERROR:
ORA-01031: insufficient privileges

It is in this way that the isolation of PDB's is maintained (by default). The default scope of all grants is limited to the container (or PDB) in which they are granted. So, if you grant a privilege in the NEWPDB PDB, then that grant only has effect in the NEWPDB PDB. As an example of this isolation, let's see what happens when we grant the create user privilege to the c##dbargf user and then try to create a new common user with c##dbargf afterwards. First, we grant the create user privilege - this is pretty much what we do today:

SQL> grant create user to c##dbargf;

However, when c##dbargf tries to create a user, we still get an error:

create user c##dbanew identified by dbanew
*
ERROR at line 1:
ORA-01031: insufficient privileges

This serves to underscore that grants to a common user (any user, for that matter) by default only apply to the container in which the grant occurs. So, the create user grant in this case only applies to the ROOT container. The problem here is that when you create a common user, Oracle tries to create that user in all PDB's. Since c##dbargf does not have the create user privilege in all of the PDB's, the command fails when Oracle recurses through the PDB's and tries to create the c##dbanew common user. So, how do we deal with this? How do we grant the create user privilege to the c##dbargf account so that it's able to create other common users? What we do is use the new containers clause, which is part of the grant command. In this example, we are using the container=all parameter to indicate that the grant should apply across all containers. Here is an example:

SQL> connect / as sysdba
Connected.
SQL> grant create user to c##dbargf container=all;

Grant succeeded.

Now, let's try that create user command again:

SQL> create user c##dbanew identified by dbanew;

User created.

Note that we had to log in as SYSDBA to issue the grant. This is because, at this time, the SYSDBA privileged account was the only account that had the ability to grant the create user privilege across all PDB's. We could give the c##dbargf account the ability to grant the create user privilege to other accounts if we had included the "with admin option" clause in the grant. So, it's clear then that just because you create a common user, it's like any other user. It essentially has no rights to begin with, anywhere. Next time I'll address common users in PDBs, and I'll also talk a bit more about grants in the PDB world.
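To illustrate the point above, that a common user's privileges can differ from one PDB to the next, here is a minimal sketch. The PDB names TESTPDB and NEWPDB come from the examples above, but the specific grants are just assumptions for illustration:

-- From the root container, switch into each PDB and grant locally
connect / as sysdba

alter session set container = TESTPDB;
grant dba to c##dbargf;                 -- full DBA rights, but only within TESTPDB

alter session set container = NEWPDB;
grant create session to c##dbargf;      -- in NEWPDB the same account can merely log on

With grants like these, the one common account can administer TESTPDB while doing little more than connecting to NEWPDB - exactly the kind of restricted, cross-PDB administration account described above.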


Responses to You can't please all of the people... deprecation of the non-CDB architecture

I have received a few comments related to my posts on the deprecation of the non-CDB architecture. I must admit, I find some reactions to be a bit confusing. First, let me be clear - I am NOT an official voice for Oracle. This is MY blog, MY opinions, and I am NOT directed by Oracle in any way with respect to what I do (or do not) write here. So, let's quash the notion that I'm acting as some kind of Oracle shill when I suggest that there is a major overreaction to this issue. Let's start with a few snips from comments made in the previous posts on this subject:

-------------------------------------------------------------------------------
"People who are not on the bleeding edge do not appreciate being forced into a buggy new code path. This is not FUD, this is experience."

"We do NOT want working code and system procedures to be replaced with something that might work in the future maybe kinda sorta if you get around to it."

"I think once Oracle is seen to be "eating it's own dogfood" more people will lose their fear of the unknown...."
-------------------------------------------------------------------------------

Based on these quotes, I would think that somehow Oracle had announced that it was ripping out the non-CDB code now, or in 12.1.0.3. That's simply not the case. The non-CDB code isn't going to be ripped out in 12.2 either. Beyond that, I don't know, but I can't see it being ripped out for quite some time.

Why are people knee-jerking about this stuff? Why are assumptions being made, completely without any foundation? I am also often confused by the fact that people look at their enterprise and its unique issues - and assume that everyone else faces the same issues. I am confused by arguments like this one:

-------------------------------------------------------------------------------
"We don't want a moving target, we don't want make-work, we want our data to be secure and reachable in a timely manner."
-------------------------------------------------------------------------------

Yes, and THOSE (security and the data itself) are moving targets in and of themselves, and they NECESSITATE a moving, growing and changing database product. Security risks are constantly changing and increasing - hacking attempts are more frequent and more complex. The costs of intrusions are growing dramatically. Are you suggesting that responses to such risks - such as Data Vault, or encryption at rest - should not be added to the database product, so that it can remain static and, hopefully, bug free? Is the urgency to avoid bugs so critical that we weigh it higher than the development of responses to these risks?

With respect to data being reachable in a timely manner - this too is a moving target. Ten years ago, the sizes of the databases we deal with now were nothing more than speculation. The unique types of data we need to deal with have increased, as have the timelines to process this data. If Oracle had decided to remain even semi-static ten years ago, do you suppose that CERN would be able to process the vast amounts of data that it does with Oracle?
Do you suppose that reports that went from running in an hour to time periods of days - because of the incredible increases in data volume - would be something that customers would accept as long as the code base remained stable? It's the constant modification of the optimizer that provides the advanced abilities of the Oracle database. The biggest of the moving targets are not the database, but rather the business that the database must serve. Just because one enterprise does not have a need for those solutions, or cannot see the benefit of those solutions, does not mean that there is not a significant set of customers that DO see the benefit in those solutions.

Then there is this statement (sorry Joel - I don't mean to seem that I'm picking on you!):

-------------------------------------------------------------------------------
It's by no means new - the issue that immediately comes to mind is the 7.2 to 7.3 upgrade on VMS. People screamed, Oracle said "tough."
-------------------------------------------------------------------------------

Change is always difficult for people. I agree that change can present serious challenges to the enterprise - and we can focus on those challenges and see the cup as half empty. However, change can be seen as quite the positive too. We make the choice which way we look at it. This is an opportunity to refine how you do things in the enterprise. It's an opportunity to do things better, more efficiently, and to build a smarter and more automated enterprise. Or, you can moan and complain along the whole path, change things begrudgingly, and ignore the fact that opportunity is staring you in the face. I would argue that if you think you are too busy to deal with this change over the next several years, then perhaps you are not working as efficiently as you could be. I'd also offer that if your enterprise is so complex, and so fragile, that you can't make the changes needed in the next five years or so, then your problem is not changing Oracle Database software code. It is the complexity that you have allowed to be baked into your enterprise. So - we can look at this in a negative light, or we can see it as a call to do better across the board - to work smarter and to simplify complexity.

-------------------------------------------------------------------------------
When will Oracle's own packaged applications be compatible with the PDB architecture? For example, E-Business Suite, which arguably is still Oracle's best-selling ERP suite, is still not certified to run on a single instance PDB, let alone multitenant.
-------------------------------------------------------------------------------

Here lies the proof that Oracle is giving you a LOT of notice about this change to the Oracle architecture. The CDB architecture is a new architecture, and it's true that pretty much all of the Oracle software that actually uses the database does not yet support the CDB/PDB architecture. So, I argue that losing our cool about the fact that non-CDB will be going away is clearly a knee-jerk reaction to something that's coming, for sure, but not tomorrow or anytime soon. This one statement alone should suggest that this isn't going to happen anytime soon.
So, why are people acting like this is happening next year?

-------------------------------------------------------------------------------
I agree with many of the points, but I kind-of disagree with the scripting aspect somewhat.
-------------------------------------------------------------------------------

So, first let me say that I sympathize with this. However, maybe changing scripts so that they use a service name rather than OS authentication is an overall improvement in how we manage our enterprises. I'm not saying that this is not probably one of the biggest pain points of a migration to the CDB architecture - it is. I am saying that maybe the idea of using services rather than OS authentication is a better solution, and that we should have been doing that in the first place anyway. Most applications should be using services by now anyway, so there should not be a significant amount of pain there. Perhaps, in an effort to look at the positive, we might say that in being forced to modify our existing way of doing things, we are also forced to look at our existing security infrastructure. Are we simply allowing applications to connect via OS authentication? Is this really a best practice? I'm not sure it is. So, there is an opportunity here - if we choose to look at it that way.

-------------------------------------------------------------------------------
Your voice carries weight. Your opinions do matter.
-------------------------------------------------------------------------------

I think you overvalue my voice and its impact. :) Be that as it may, I see Multitenant as the natural evolution of the database product. There will be a significant amount of time for these changes to mature, and for people to execute migration paths to this new architecture, before we see the plug pulled. This isn't me speaking as some Oracle shill. I would feel this way whether I worked for Oracle or anyone else. Remember - I'm the guy that writes the new features books! :) I think the direction Oracle is going is right on target. It addresses a number of issues that are now being addressed haphazardly with solutions like virtual machines. It addresses the performance of multiple databases on one machine sharing resources most efficiently. If you wish to study some of the performance benefits of the Multitenant architecture, you can find them listed here:

http://www.oracle.com/technetwork/database/multitenant/learn-more/oraclemultitenantt5-8-final-2185108.pdf

The fact is that, all things being equal (and acknowledging that there will always be outliers), there are significant performance gains when you use PDB's instead of stand-alone databases. I know that it's easy to take something personally. I know it's frustrating to be pulled, kicking and screaming, into something we don't think we want. I also know that we can sometimes close our minds about something when we have a negative first reaction. I've been working with Multitenant quite a bit of late. In 12.1.0.2 it's solid, but not full featured yet. Is it bugless? Of course not. Is the Oracle Database bugless without Multitenant? Nope. Is any large application without bugs? Nope.
I don't think you stop progress because of fear. You don't stop sending manned missions into space because of the risk of death. You don't stop developing your product because of the risk of bugs. If you do the latter, you become a memory. None of us want our critical data running on databases that are just memories - do we? We might THINK that we don't want change. We might complain bitterly about change because it inconveniences us (and how DARE they inconvenience us!!). We might think our life would be better if things remained the same. The reality - historically - is that static products cease to be meaningful in the marketplace. Otherwise, the Model-T would be selling strong, we would still be using MS-DOS, and there would be no complex machines like the 747.

-------------------------------------------------------------------------------
Agility - Let my voice carry the message of being agile
-------------------------------------------------------------------------------

If my voice carries any weight, then let agility be my message. I see many DBA's that treat their database environments as if they were living in the 1990's. These environments lack agility - and they use more excuses than I can count to avoid being agile. For example, I would argue that the choice between using services and relying on OS authentication is really a question of engineering for agility. Yes, it might be legacy code - but if it is, then the question is: are we thinking in terms of agility and using maintenance cycles to modify our code to BE agile? Probably not - and for many reasons, I'm sure. I argue that one root cause behind these complaints (I did NOT say the only root cause) against the demise of the non-CDB model boils down to one thing - the ability to be agile. Now, before you roast me for saying that, please take a moment and think about that argument and ask yourself if it's not just a little bit possible... if I might just be a little bit right. If I am, then what we are complaining about isn't Oracle - it's how we choose to do business. That is its own blog post or two ... or three.... And it's what I'll be talking about at the UTOUG very soon!

Note: Edited a bit for clarification... :)
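On the earlier point about scripts that rely on OS authentication rather than services, here is a minimal sketch of the kind of change involved; the host, port, service name and credentials shown are hypothetical:

-- Old style: an OS-authenticated connection made from a script on the database host
connect / as sysdba

-- Service-based style: connect through a service that resolves to a specific PDB
-- (EZConnect syntax; batch_user, dbhost01 and sales_pdb_svc are made-up names)
connect batch_user/secret@//dbhost01:1521/sales_pdb_svc

The second form works the same way from any host and maps naturally onto a PDB, which is why service-based connections travel better into the CDB world than OS authentication does.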


You can't please all the people anytime - or - How people use anything to throw up FUD - or - Yes, the non-CDB architecture is deprecated in Oracle Database 12c - Part Four

Welcome to part four of my response to this blog entry on the deprecation notice of the Oracle non-CDB architecture. The previous blog entries are here for part one. Until now I've tackled these comments:

What I want to talk about is Oracle's attitude to its customers and what seems to me to be breathtaking arrogance. Personally I can think of three very good reasons why I might not want to use the single PDB within a CDB configuration which does not require a Multitenant license

and

Multitenant requires additional configuration and the use of new administrative commands, which means re-writing admin procedures and re-training operations staff

and:

Multitenant is an entirely new feature, with new code paths - which means it carries a risk of bugs (the list of bug fixes for the 12.1.0.2 patchset contains a section on Pluggable/Container Databases which lists no fewer than 105 items)

Now, I want to address the final comment, which is:

With the Multitenant option installed it is possible to trigger the requirement for an expensive set of licenses due to human error… without the option installed this is not possible

This bit of FUD is silly. First of all, this risk already exists with various features of the Oracle database. For example, many of the OEM packs can be inadvertently used without a license, as can several of the views in the database itself. Partitioning is another example that comes to mind. Often it's installed in a database, but its use requires a license. So, how is this any different? Well, it's not. Simply put, this is an argument for enterprise compliance auditing and management. In the end, this blog entry is a great example of knee-jerk responses to a situation that is, frankly, still developing. A lot of assumptions are made in the points the blog presents in its attempt to throw stones at Oracle. I only hope that in the future people will not start yelling about the end of the world before the trajectory of the asteroid is actually tracked with some degree of accuracy.


You can't please all the people anytime - or - How people use anything to throw up FUD - or - Yes, the non-CDB architecture is deprecated in Oracle Database 12c - Part Three

Welcome to part three of my response to this blog entry on the deprecation notice of the Oracle non-CDB architecture. The previous blog entries are here for part one. Until now I've tackled these comments:

What I want to talk about is Oracle's attitude to its customers and what seems to me to be breathtaking arrogance. Personally I can think of three very good reasons why I might not want to use the single PDB within a CDB configuration which does not require a Multitenant license

and

Multitenant requires additional configuration and the use of new administrative commands, which means re-writing admin procedures and re-training operations staff

Now, I want to address this comment:

Multitenant is an entirely new feature, with new code paths - which means it carries a risk of bugs (the list of bug fixes for the 12.1.0.2 patchset contains a section on Pluggable/Container Databases which lists no fewer than 105 items)

If the first two comments didn't strike you as FUD, then this one should. First of all, I know of no software that does not have bugs. This is especially true of the new features in that software, and Oracle Database is no exception. The expectation that Oracle is just going to pull the non-CDB architecture and leave a bug-filled database is ridiculous. First, there is no history to justify such a sweeping implication. Second, I agree that the new Multitenant feature is certain to have touched a great deal of the Oracle code. One might argue that removing the non-CDB related functionality from the code base would simplify the code. In simplifying the code base, you reduce the likelihood of new bugs. I suspect (I have no facts to prove this) that there are many places now where a given fix must be replicated in more than one piece of code - one part for the CDB case and one for the non-CDB database. Removing the non-CDB specific pieces of code would remove the requirement to maintain two different code paths. This should reduce the number of bugs in the future. So, I would argue that eventually moving to a CDB-only architecture is going to be a positive for the Oracle code base. Finally, Multitenant is hardly unique in this regard - any new functionality is going to have bugs that come with it.

In the end, my point is this: you can't make tactical business and technical decisions based only on where you are now. You make them based on where you think you will be, based on the timelines of the product and its releases. The Product Managers at Oracle have their eye on many things. I'm sure this includes the demise of the non-CDB model. They know, as anyone else should know, that it's not going to happen today. Thus, there is time to work through bugs and time to get the entire Oracle Database feature set working in the CDB architecture, as well as giving those features time to bake. The FUD here is the implication that Oracle is somehow running at ludicrous speed, hell-bent on killing the non-CDB architecture. There is absolutely no evidence to support this Chicken Little approach. In my final post I'll address the last comment that was made, which is:

With the Multitenant option installed it is possible to trigger the requirement for an expensive set of licenses due to human error… without the option installed this is not possible

I'm sure you can probably guess my response to this one...


You can't please all the people anytime - or - How people use anything to throw up FUD - or - Yes, the non-CDB architecture is deprecated in Oracle Database 12c - Part Two

This is part two of my response to the deprecation of the non-CDB model at some point in the future, and to some of the responses that have been made in relation to this announcement. In Part One I addressed the initial comments indicating that Oracle was being "arrogant" for daring to announce this deprecation of the 30+ year old architecture. In this and two more blog entries, I will address additional comments made in the blog as it tries to argue against the movement to de-support the non-CDB architecture. In this post, I'd like to address this assertion:

Multitenant requires additional configuration and the use of new administrative commands, which means re-writing admin procedures and re-training operations staff

This seems to boil down to a few different arguments. First, the implication is that the changes introduced with the CDB architecture are complex. I disagree. Here is what I think this statement is trying to say, and my thoughts on the matter.

Argument: It's going to take time to train DBA's and operations staff on Multitenant

Response #1: Hogwash. While there are additional administrative commands (i.e. alter pluggable database, create pluggable database), none of these are all that complex. I firmly believe that the average DBA can spend about 6 hours and become competent to manage a CDB environment. Oh, they might need to go to the SQL reference guide once in a while - but they will understand the architecture and the new commands needed to manage it. With respect to backup and recovery involving PDBs, there is no reason that the average DBA can't spend another 6 hours or so and be up to speed on how RMAN does PDB backup and recovery, including PDB point-in-time recovery. So, total time to upgrade your skills to manage a CDB database environment: about 12 hours. Let's just round that up to 2 whole days of training. Frankly, this isn't much of an investment when you look at the returns that a CDB database can provide.

Response #2: You have uncovered a nasty secret that needs to be addressed. Training is required for every major or minor release. Literally everywhere I go, I see people doing things the "old" way. For example, I *still* see people using the analyze command to gather statistics. I see people who still set method_opt and the sample size in dbms_stats.gather_*_stats calls to something other than the auto defaults. If you want to learn about the reasons you might want to start using auto, take a look at Maria Colgan's blog here. She does a masterful job of explaining the changes that were made to the optimizer in Oracle Database 11g. She explains why using auto will, in most cases, provide you with the most accurate statistics with a minimized collection time. When I ask some DBA's why they are not using auto, usually they either tell me that it didn't work well in 10g (it didn't) or that they have never heard of it. Sometimes they just insist that a 100 percent sample size is better (it isn't). What is the problem then? It is the assumption that when moving from 8i to 9i or 10g, or from 11.1 to 11.2, or from 11.2 to 12.1, there is no requirement for training of staff. Oracle makes migration simple, because often it introduces new features that supersede old ones, but it kindly leaves the old way of doing things available for backwards compatibility. The upshot is that your stuff still works, but over time it will become less and less efficient.
Maybe you think a 100% sample size is just fine; if you look at Maria's blog you will find that it can be quite inefficient. We have gotten lazy about training. We have gotten lazy about keeping up. I see the evidence of this almost every time I look at a database. You can just look at the contents of the database SPFILE and see the evidence that we are not training ourselves for these new versions. When you see events that were meaningful back in the Oracle 8i days still set in the SPFILE, you have to ask yourself: what did these folks do to prepare to use this new version of Oracle? The bottom line is that Oracle really makes it easy to migrate/upgrade to a new version of the database. The result, in my thinking, is that we have become complacent and lazy about understanding how the new version works, and what features and functionality have changed. As a result, the default assumption when planning upgrades is that no training will be required. This dangerous thinking is betrayed in the first point in the blog. With each major upgrade, training is required, period. If you are a DBA, then you are doing yourself a huge disservice by not insisting on training. This impacts your skill set and your economic value.

Response #3: You have to modify your monitoring scripts

I'll be honest. If this is your justification for not moving to Multitenant, you should be fired - I'm sorry, that's probably not politically correct - let me say re-educated. Why? Because if you are busying yourself with scripts to monitor this and that, then you are not earning the wage you are being paid. I know, that might be harsh, but it's true. If your entire database monitoring infrastructure (or worse, your entire backup infrastructure) is based on manually written scripts, then you have a huge problem. This problem is merely surfaced by the advent of Multitenant, but if you are honest, the problem exists almost any time you upgrade some part of your infrastructure. The problem is one of scale. Creating and maintaining scripts takes time and effort - sometimes lots of it. As the enterprise grows, the cost of these kinds of customizations grows. The point is that these kinds of scripts are not scalable. We need to start doing things differently. We need to make more efficient use of our time. Writing scripts isn't it. This is a job for OEM. When you change database versions, you simply have to make sure that the version of OEM you are running is compatible with that new database version. If it is - voila - your monitoring continues to work. If it isn't, you update OEM, which you would need to do anyway. If your knee-jerk response (as I once had) is to avoid OEM because of past experiences, then may I prevail on you to try again. OEM has improved a great deal and it's worth another try. Also, you need to be realistic in your expectations. You can't expect to turn on every monitoring bell and whistle and have it work perfectly out of the gate. A little time spent on configuration now will save you lots of time down the road. If you run into a bug, don't just shut it all down and give up on it. Open an SR and work with Oracle to fix the problem so we can provide you, and all of our other customers, a better product. Finally, with respect to OEM, the fact is that there is some front-end time that is going to be spent configuring it. However, the time saved on the back end, and the fact that it will simplify the administration of the enterprise's databases, is significant. I know it can be frustrating, but stay with it.
I have more thoughts on standing up OEM, but I'll save them for another time. In my next blog post, I'll address the next bullet point that was listed, which is:

Multitenant is an entirely new feature, with new code paths - which means it carries a risk of bugs (the list of bug fixes for the 12.1.0.2 patch set contains a section on pluggable/container databases which lists no fewer than 105 items).

Then, in my final post on this topic, I'll address this assertion:

With the Multitenant option installed it is possible to trigger the requirement for an expensive set of licenses due to human error… without the option installed this is not possible
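As a footnote to the statistics discussion in Response #2 above, here is a minimal sketch of the difference being described; the SALES.ORDERS table is hypothetical, used only for illustration:

BEGIN
  -- The "old" habit: a fixed 100 percent sample size and fixed histogram buckets
  dbms_stats.gather_table_stats(ownname => 'SALES', tabname => 'ORDERS',
      estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE 254');

  -- Letting the defaults work: AUTO_SAMPLE_SIZE plus automatic histogram decisions
  dbms_stats.gather_table_stats(ownname => 'SALES', tabname => 'ORDERS',
      estimate_percent => dbms_stats.auto_sample_size, method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/

The second call is the shape Maria's blog argues for: the auto settings generally give statistics that are at least as accurate as a full sample, in far less time.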


You can't please all the people anytime - or - How people use anything to throw up FUD - or - Yes, the non-CDB architecture is deprecated in Oracle Database 12c - Part One

Oracle has announced the deprecation of the non-CDB architecture, as seen in this blog entry here. Of course, the knee-jerk, negative, FUD-ridden comments are starting to come out. One of these kinds of responses can be seen in this blog.

Edit: I wanted to point out that Mike Dietrich has some great information on the deprecation of the non-CDB architecture here. In it, Mike says clearly that the removal of the non-CDB architecture will not occur until after 12.2.

This blog entry starts with a quick review of Multitenant. However, it then diverts into its FUD with this comment:

"What I want to talk about is Oracle's attitude to its customers and what seems to me to be breathtaking arrogance. Personally I can think of three very good reasons why I might not want to use the single PDB within a CDB configuration which does not require a Multitenant license:"

I'd like to address the reasons the writer (anonymous, of course) gives for objecting to this decision in this post, as well as some others. First, I need to address the accusation of "breathtaking arrogance". I can tell you, as part of the Oracle "team", that all of us are concerned about our customers. It's true that we are a large organization and I am sure we have our share of arrogant people (in fact, I know a few of them), but I can tell you that by and large the people within Oracle care about our customers. Second, I can tell you that the world is moving, fast. Technology is moving, fast. You can keep up with it or get run over by it. You can continue to be viable or you can become a has-been. The has-beens die, often in some very spectacular ways. You don't want your database vendor, of all things, to become the has-been. In the over 25 years I've worked with Oracle, I think it's amazing that the basic underlying foundation of the administration of the product has not really changed. Yet it has remained competitive, offered foundational and forward-thinking features and, most of all, its relational engine is second to none in performance. New features introduce new administrative responsibilities, to be sure, but the basics remain largely unchanged. However, to derive the benefits that Multitenant offers, there simply had to be some changes. These changes are noticeable changes, to be sure. For Oracle to say that they will be ending the life of the old model, after 30+ years, should be something we marvel at instead of complain about. Consider also that they are giving us plenty of warning.

So, what does our esteemed writer argue is wrong with all of this? His blog entry summarizes concerns in three bullets, listed here:

Multitenant requires additional configuration and the use of new administrative commands, which means re-writing admin procedures and re-training operations staff

Multitenant is an entirely new feature, with new code paths - which means it carries a risk of bugs (the list of bug fixes for the 12.1.0.2 patch set contains a section on Pluggable/Container Databases which lists no fewer than 105 items)

With the Multitenant option installed it is possible to trigger the requirement for an expensive set of licenses due to human error… without the option installed this is not possible

Let me address these, one at a time. Because each of these deserves its own response, I'm doing one response per blog entry. I'll post the next entry soon.


Descriptive vs. Non-descriptive Naming

I was working on something today and started to muse on names. By names, I mean things like host names, service names, database names, PDB names, listener names and so on. I started to mull over the question of descriptive vs. non-descriptive naming and how it relates to the security and vulnerability of enterprise systems.

A lot of environments that I've been in name their database instances with fairly descriptive names, for example prod, prd, prdactg and so on. The point is that the name makes it fairly clear that this is a production database. Similarly, you see derivatives of test, dev (development) and so on. Likewise, I've seen server hostnames that often give away their purpose. Again, they contain naming that indicates the server houses production databases or test databases and so on. I see a lot of RAC servers that are named this way - maybe racprod01, racprod02 and so on. Even if the name does not directly indicate the purpose of the server or database, it may be that the pattern you use can help identify a set of production databases. For example, it might be that your naming standard is that all production databases start with DB8 followed by some two-digit number that distinguishes each server - i.e. DB801, DB802 and so on. A good hacker will pick up on the pattern of your sequence ordering and figure out which sequences represent which kinds of databases. I think some name services the way they do so that the service name will be easy to remember. I mean, it's hard to forget the host name prodlinux01 for a server or prod01 for a database, yet it provides a set of clues for the hacker as to where the good stuff is.

In my mind, all naming of hosts and databases should be non-descriptive. It should not follow any pattern that might indicate the use of that service. Even using easy-to-remember alias names which are resolved by DNS or by the SCAN listener can give information away. If, for example, I provide a DNS address called prod01, which resolves to some nondescript system called host5a4, I could easily ping that name and see what IP it resolves to. Thus, even an alias that does not directly connect to a service can still present a risk.

So, you might ask, what kind of standard should we use? I'll throw out a suggestion and we can see how it sits with you. I suggest that a randomized set of characters be used to produce a unique host name for each host. I suggest the same for each database. I would make these names at least 8 characters in length (Oracle recommends that you make the SID of a database no longer than 8 characters). While such a standard might make identifying a SID a bit more difficult, I suspect it might make a database more secure. Perhaps, in some environments, the security concerns are not such that random naming is important. Given the number of breaches that we have seen in the past years, I suggest that some enterprises underestimate the importance of security. Even seemingly simple things can lead to your downfall.
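For what it's worth, here is a minimal sketch of how such a randomized, 8-character candidate name could be generated from SQL; this is just one possible approach, not something prescribed above:

-- dbms_random.string with the 'l' option returns random lowercase letters;
-- eight of them gives a candidate name that is also valid as a SID (letters only, 8 characters)
select dbms_random.string('l', 8) as candidate_name from dual;

Run it a few times, discard anything that accidentally looks meaningful, and record the mapping of random names to real purposes somewhere that is itself well protected.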


Oracle Multitenant - Should you move to a single CDB/PDB architecture?

Note: Something happened to the original copy of this post. I'm not sure what it was, but somehow it was like a draft copy ended up being posted and it lost almost half its content. I have re-created the post. So, for those of you who saw the earlier edition - I have no idea what happened. It actually happened to me twice yesterday - and then this time - very frustrating.

You are probably aware that there is a new feature in Oracle Database 12c called Multitenant. Multitenant provides the ability for a single instance to manage multiple databases. At the highest level, the Multitenant architecture includes:

The Multitenant Instance - This is really my own name for what Oracle calls the Database Instance in the documentation. I'm calling it the Multitenant Instance simply to distinguish between an instance supporting a Multitenant database and one that is not.

A Container Database (CDB) - This is the type of database that is created when that database supports Oracle's Multitenant option. The container database is also called the ROOT container and is called CDB$ROOT within the data dictionary views of the CDB.

The Root Container Database - This container (database) is created automatically when you create a Multitenant database. The Root container contains the data dictionary for the CDB.

The Pluggable Database - These are the databases that are stored within the CDB.

When I talk to people about Multitenant, I often get very excited responses. However, when I mention that this is an option that requires a license, they are often very disappointed. Usually their ability to purchase such options is limited by budgets, budget cycles and other constraints. There is a loophole in all of this that provides a way for you to at least start moving towards this architecture: you can actually create a CDB and populate it with a single PDB - all without needing an additional license. Even though you can't populate the CDB with any more than one PDB, this still offers some unique benefits:

You can start taking advantage of the cloning features of Multitenant between two different CDB's. For example, if you have one CDB with a "Gold" copy of a database, you can very quickly clone that database, using a database link, to another CDB. This can make refreshes and new database creations much easier than using something like Data Pump.

You can start taking advantage of the much easier upgrade process that the Multitenant architecture provides. The steps to upgrade a Multitenant database are far easier (I think) than for a regular database. In summary, you simply unplug the database from the old CDB, plug it into the new CDB (which you have created running the new version of the software you want to upgrade to), then run a single script to upgrade, and boom! Database upgraded. In fact, if you look at the instructions on how to upgrade from 12.1.0.1 to 12.1.0.2 in place and compare them to the instructions on doing an upgrade using Multitenant, you will find the Multitenant upgrade instructions much easier.

Perhaps the most compelling reason is simply self-interest. Multitenant is the direction of the future. If you are an older DBA, you might recall the days when RAC (or OPS if you are really long in the tooth like me) was just a pipe dream and nobody you knew was using it... unless they were a big shop with lots of money and hardware. Flash forward to today - how many job postings do you see that do not include some kind of requirement or preference for RAC experience?
RAC, over time, has developed a very real presence in the Oracle landscape, and it is going to continue to do so. Multitenant is going to go the same way. If you get ahead of the curve now, you will develop skills that will be very much in demand in the near future. Additionally, you will be positioning your infrastructure for the future. You will be preparing for the day when your organization has decided that it's time to take advantage of all of the features of Multitenant - and there are many.

So - what is my recommendation? Personally, I think that when you start upgrading to 12c from 11g or 10g - that is the time to move the databases into a CDB. Some might want to take a two-phase approach - upgrade to 12c and wait a while, and then later move into a Multitenant infrastructure. This seems, to me, to be a waste of time and money. It will require two separate projects, two separate iterations of testing, two separate outages - though these can be minimized with something like Oracle GoldenGate. It's often hard enough to get one large enterprise spooled up, coordinated and moving to do even a single large upgrade project. Splitting it into two large projects just makes things more complex from an execution point of view. Of course, every environment is different and has different needs and requirements, so moving to Multitenant right now might not be the right move for you. The Oracle database still has a few features that have not yet been migrated into the Multitenant architecture, but those are fewer now in 12.1.0.2 - and they will decrease over time - quickly, I suspect. I'd love to hear your thoughts about moving to Oracle Database 12c and Multitenant!
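To give a rough picture of the unplug/plug upgrade path mentioned above, here is a minimal sketch. The PDB name and file paths are hypothetical, and a real upgrade also involves running the catalog upgrade utility that ships with the new release:

-- In the old (source) CDB: close the PDB and unplug it, producing an XML manifest
alter pluggable database salespdb close immediate;
alter pluggable database salespdb unplug into '/u01/app/oracle/manifests/salespdb.xml';

-- In the new (target) CDB, running the newer software: plug the PDB back in,
-- reusing the existing datafiles in place
create pluggable database salespdb using '/u01/app/oracle/manifests/salespdb.xml' nocopy;

-- Open it in upgrade mode so the upgrade scripts for the new release can be run against it
alter pluggable database salespdb open upgrade;

Compared with an in-place upgrade of a non-CDB, most of the work collapses into these few statements plus the upgrade script itself, which is the point being made above.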


PDB Recovery - Your PDB won't open because a datafile is missing!

So, you start your CDB...

SQL> startup
ORACLE instance started.
Total System Global Area 2566914048 bytes
Fixed Size                  3048920 bytes
Variable Size             671091240 bytes
Database Buffers         1879048192 bytes
Redo Buffers               13725696 bytes
Database mounted.
Database opened.

So far, so good. Now, you try to open one of the PDBs, called TPLUG, with the alter pluggable database command and get the following error:

SQL> alter pluggable database tplug open;
alter pluggable database tplug open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 12 - see DBWR trace file
ORA-01110: data file 12:
'C:\APP\ROBERT\ORADATA\ROBERTCDB\DATAFILE\O1_MF_TESTING_BB1BOOGK_.DBF'

Now, normally you would just use the alter database command and take the datafile offline. Once that was done you would open the database, and then you could restore the datafile with RMAN (or manually) in the background. However, when you try this on a datafile assigned to a PDB, we have a problem:

SQL> alter database datafile 12 offline;
alter database datafile 12 offline
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "12"

Boy, that file 12 is causing us problems. It seems that the root of the CDB does not know about file 12, and we can confirm this by querying DBA_DATA_FILES:

SQL> select file_id from dba_data_files;

   FILE_ID
----------
         1
         3
         5
         6

Yet, the file DOES appear in v$datafile:

SQL> select file# from v$datafile where file#=12;

     FILE#
----------
        12

So, what's up doc? Well, this is one of the upshots of the separation of PDBs within a CDB. Kind of like the separation of church and state here in the US, if not more global in nature. What is a person to do? I can't open the PDB, I can't offline the datafile - do I really have to wait until the datafile is restored to be able to get the database open? Thankfully, the answer is no. What you need to do is scurry over to the PDB in question and offline that datafile. You might rush to try to connect to the PDB via SQL*Plus using its service name, like this:

sqlplus robert/robert@//myhost:1522/tplug

but this gets us nowhere:

ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0

No, what we need to do is log into the root container, and then use the alter session set container command to move us properly into the tplug container. Here is an example (I added a quick query against v$pdbs just to show you that the PDB was indeed only mounted):

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
ROBERTPDB                      MOUNTED
TPLUG                          MOUNTED

SQL> alter session set container=tplug;

Session altered.

Now, we are connected to the tplug PDB - even though it's only mounted. We can now take the datafile offline and open the PDB:

SQL> alter database datafile 12 offline;

Now, I can open the database:

RMAN> alter pluggable database tplug open;

Statement processed

This is better now! The database is open! Your users can now access all of the data in it except for that in the missing (or corrupted) datafile.
It seems likely that the other datafiles are OK and online, but we can certainly check that:

SQL> select file#, online_status, error from v$recover_file;

     FILE# ONLINE_ ERROR
---------- ------- -------------------------
        12 OFFLINE FILE NOT FOUND

Now, all I need to do is fire up RMAN, connect it to the TPLUG PDB, and restore and recover the datafile:

rman target robert/robert@myhost:1522/tplug

restore datafile 12;
recover datafile 12;
alter database datafile 12 online;

and the datafile is now back online and we can query against any objects in it! So, just remember when dealing with CDBs and PDBs that some of the ways you do things have changed slightly, but in general, they still stay the same.
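One last tip: if you want to confirm from the root container which PDB a problem datafile actually belongs to, the CON_ID column that 12c adds to v$datafile can be joined to v$pdbs. A quick sketch, reusing file 12 from the example above:

-- run from CDB$ROOT; shows which PDB owns the offending file
select d.file#, p.name as pdb_name, d.name as file_name
from v$datafile d, v$pdbs p
where d.con_id = p.con_id
and d.file# = 12;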


Personal

DBA ADD.... Am I the only one....?

So, I realized that I am doing it again. I've been doing some testing with advanced compression. In doing so, I built some scripts to run my tests and produce the results of those tests and so on... Those scripts led to me writing other scripts to build a test harness of sorts for this specific test, and on and on... Then, I realized - I was doing it again - I was letting my ADD get the best of me. Instead of focusing on the end game, which consisted of collecting compression ratios given different criteria, I got way too focused on building the code and so on. I kept wanting to add this gizmo and that... and found that I was all of a sudden focusing on that rather than what I really set out to do. Then, I'd realize that I needed to re-focus on the primary task at hand.

Yes - I suffer from ADD - I call it DBA ADD. As I think about the DBAs I know and have dealt with in the past, I believe I've run into a large number of them that I suspect suffer from ADD. I think that this has its good aspects and its bad aspects. The problem is that sometimes the actions that are expressed by virtue of our ADD are misinterpreted and misunderstood by those around us. If you work with someone with ADD, you might well think this person is moody, arrogant or scatterbrained and you would be right - but you would be right for the wrong reasons. I am honestly diagnosed with ADD. It's been about 5 years now since I was diagnosed. As I look back, knowing what I know now, I can see all the signs. I sometimes wonder if there are not a lot of DBAs who suffer from ADD. My poor family suffers through it... Understanding is the first key to overcoming some of the effects. In the interest of understanding I offer a great article here. If you have ADD - you will probably see yourself in this article - and it might well be eye opening - it was for me. If you love, live with, manage or work with someone with ADD, this article might help you to understand them a little better. It might help you help them cope with some of the real effects of ADD. And these effects are real - they have positive and negative consequences of course, but trust me, nobody with ADD who feels incredible irritation when they are interrupted mid-task wants to feel that way. Nobody wants to feel the blast of lightning that hits us when we are working and someone wants to come make conversation - and the poor person that came to make that conversation - they don't want to run away crying at our reaction. So, in an effort to help you recognize your ADD, or the ADD in someone you love, here are some of the highlights of 20 things to remember about people with ADD:

1. People with ADD have an active mind.

2. People with ADD listen but do not absorb what is said. My father used to think that this was just because I tuned him out - I really didn't. This one is so true for me, and it's so frustrating.

3. People with ADD have difficulty staying on task. The fact that I stopped to write this blog entry on ADD is evidence that this is true.

4. They become anxious easily. This was an eye-opener for me. I live with terrible anxiety at times and I never understood it. I used to just power through it and not understand why. There are some wonderful medications out there if you find yourself almost debilitated at times with anxiety.

5. They can't concentrate when they are emotional. Wow - another head-on winner there.

6. They concentrate too intensely. We tend to dive in, deep. Coming up for air is difficult.
7. They have difficulty stopping a task if they are in the zone. I found this one to be so true. I never really understood my irritability when I was deep in the middle of something and I had to stop for even the most seemingly little thing. It seems that when we are doing our thing, we tend to forget that we need to do little things like eat or go check the mail. When my phone rings, interrupting what I'm doing - it literally sends shock waves of pain up and down my spine. As a result, when I am doing something that I feel is important and needs focus - I turn my phone off. It makes a world of difference to my mood.

8. They are unable to regulate their emotions. We tend to have problems processing feelings... %(*%$#@W*!!

9. They have verbal outbursts. Some ADD people have a very hard time controlling their mouth. Their mind is working so fast that they tend to say things they later regret. We have a hard time editing what we are going to say before we say it.

10. They have social anxiety - Oh my, yes. You might say, "but Robert - I've seen you present in rooms of hundreds of people (my largest, to date, was something over 1000 people). Surely you don't have social anxiety." Yes - I do. You will notice that I show up in the room I'm speaking in early. It's not so I have time to prepare - but more so I can adjust to the room - and to being around that many people. It's much easier for me to speak in a room that I entered first, and that then fills up with people - than to enter a room with 100 people sitting and waiting for the next speaker.

11. They are deeply intuitive - I have always said that my intuition was one of the reasons I am good at my job. When something's not working right, I often have a pretty good feel for at least what the nature of the problem is.

12. They think out of the box - This is, I think, why ADD people make great employees and great DBAs. We can think out of the box when troubleshooting and when crafting designs. Being able to see things in an abstract way and find solutions is a powerful gift.

13. They are impatient and fidgety. Isn't this list over with yet?!

14. They are physically sensitive - This one also answered a lot of questions I had.

15. They are disorganized - One look at my office will confirm this is true.

16. They need space to pace - I might change this to read they need space to think. We tend to be introverts very often - but that's because we need a seriously regulated environment so we can get some clarity in our thoughts.

17. They avoid tasks - Now, I tend to be the opposite on this one - I tend to take on way too much. However, the reasons they avoid tasks are not what you might think. It is not being lazy - rather, it's because we obsess and dwell on things and our minds just flow with the possibilities. As a result, we tend to self-censor what we accept - understanding (perhaps intuitively) our limits.

18. They can't remember simple tasks - Look at your floor right now. Is it littered with your dirty clothes? Does your wife/husband/SO constantly ask you why you can't do a simple thing like put the clothes in the laundry basket? Well, this is why and it's legitimate. It won't keep them from being frustrated at you, but it's NOT that we are ignoring them - and that's the key.

19. They have many tasks going on at the same time - right now I just stopped, and in this hour I have about 5 different tasks that are going on or that I'm about to embark on, all at the same time. We love to multi-task but we are terrible at follow-through.
Personally, I hate to multi-task because I know that I will be challenged with completion. And yet - if it's challenging I will pile it on.

20. They are passionate about everything they do - That is me to a tee... When I'm doing something that I really like and find exciting, challenging or something like that - I'm uber passionate about it. I think that is partly why I pick up new things quickly. However, if you catch me in the mode where I'm doing something passionately, it's best not to bother me... :) That gets back to number 13 - being impatient. We want to get the thing we are focused on done - and it's almost painful to pull off of it when we are so focused.

OK... so now, which of the five remaining tasks to go tackle...?


About book writing

I get asked from time to time what it's like to write a book. Often, I've given those who have asked me that question an opportunity to contribute to one of my books in one way or another. I tend to look at the reviews that get posted about my books with a bit of trepidation. I know some authors who don't look at reviews at all. I like to look at them so I can get a taste and feel for what I've done write (pun intended) in the book and how I might improve next time. I got involved in a short discussion on one thread about writing. This particular person was very negative about what I'd written. I considered his complaints and, after weighing them in my mind, I decided he was a bit off his rocker. You get those, from time to time, people off their rockers. For example, this person started the thread off this way:

"I started to read through this book - and the first chapter starts by using VM templates on Windows XP - come on - no one uses XP anymore - also using the templates is a cop-out from doing a real world install - In the real world we are not using templates."

Clearly, I thought, he was missing the point. We had a couple of back and forth messages, after which it was clear to me that this person was just not getting it. So, I finished the thread off with the post I will paste in at the bottom of this blog entry. In it, I talk about the writing process. There are a number of factors that eventually influence the content and depth of a given book. I don't mind constructive commentary at all, but when the commentary becomes grossly misrepresentitive (no - it's not a word but I'm using it anyway!) of the book, or its content - then I mind. This isn't just about me; it's about a ton of people who put a lot of effort into these works. The authors, editors (copy, technical and others), the contributors - all of those involved in the publishing process work hard. So, to get criticism that actually mischaracterizes (again, the dictionary says this is not the strict form or use of the word but I'm going with it) your book, rather than constructive criticism, is annoying to say the least. So, here is the sum of my response to this person. I don't know if he ever "got it", but he never replied afterwards:

<name withheld>,

In fact, in the real world virtual machines are everywhere. I did not use a "template" in this book - but I did use a virtual machine. Each of the environments I used for the book were created from scratch by me. I suggested the use of the template for the reader so that they could spin up an environment and quickly start learning. With GoldenGate, what is foundational is not the OS; rather, what is foundational is the databases you are running on that OS. As such, and unlike any other GoldenGate book, we do cover the real world in that we show not only how to replicate between Oracle databases, but also how to use GoldenGate with MySQL and SQL Server. No other book on GG that I've seen gives anywhere near the coverage to the heterogeneous features of GoldenGate that mine does. I wanted to do other platforms but we ran out of time and space. If you want to do a chapter on some other platform in the future, please write me privately and I'd be happy to include you in the next revision of this book. In my mind, people need to learn the basics first, and then they build on those basics. The book isn't about installing Linux, so why waste the reader's time diving into something that is extra-topical? Good grief, the thing would be > 1000 pages if I did that.
<name withheld>, I've looked and don't see that you have ever had anything published. I'm sure you wrote something for your PhD and your MS, but you have never published a BOOK, which is quite a different thing. The fact is that when you write a book you have to balance a number of factors:

1. Time - You have a limited amount of time to produce the thing. You also have the pressure of deadlines. I also had to coordinate the work of others. Oh yeah, and I had to work my regular job.

2. Pages - Your publisher gives you a limited number of pages to write.

3. Recompense - You are paid a finite amount of money. It's NOT enough to live on, so see the comment about employment on #1.

4. Family - You have to make sure that those who love you remember who you are.

5. Resources - A serious consideration. Resources cost time and money. I don't have a convenient place, or the time, to reproduce an entire data center in my house. My garage is flat out of space since that is where my wife likes to park her car. I did ask her if I could install an Exadata X3-2 full rack in there and she gave me the "what the heck are you talking about" look. She said, "We don't have the cooling for such a machine". I was surprised; I thought she was going to be upset about not getting to park her car in there.

6. The "real world" is a construct in your mind. There is no "real" world. I've been doing this for 25 years and every environment - EVERY ONE - has been different. There is nothing "real" about it. Even principles change over time (i.e. development methodologies - agile 25 years ago would have been laughed at). There are environments that are strictly virtual, there are those that are totally physical and there are those with a mix. You have cloud services, etc... etc... So any scenario that I would craft would apply to some percentage of the total audience.

Finally, my approach is severalfold. I try to teach the reader HOW to fish, rather than fish for them. That is why the book is laid out as it is. You see, not everyone has that PhD there, <name withheld>, and not everyone has this laser-like understanding of the complexities of the real world. That's why chapters 1 through 5 very carefully lay out the fundamentals of GoldenGate and even include workshops that run the reader through how to do these things. For example, I find that on many occasions there is this discussion on multi-master replication going on, and somehow they forgot about the problem of collisions with respect to data. For the real world, do you not think that these kinds of architectural concerns should be considered? Far too often I find that books give readers the "quick start" cheat sheet - getting them off to the races fast and furious. The problem with that is they build their architectures based on that quick start chapter and later realize that they missed some fundamental principles in developing their environment. They are miles down the road and now they have to U-turn and rebuild stuff. My approach is to teach foundational principles, WHY things are the way they are - then teach how to do these things. That said, once you have the fundamentals down, doing something like an upgrade or a database migration is a snap. I have considered a quick start chapter for when we do a revision of this book. It is funny though, we do mention zero downtime upgrades and migrations in chapter 14, with examples. We don't have a workshop in there though. Maybe I'll add one next time. Bottom line, the book is 401 pages of packed content.
It starts from the ground up, and teaches you not only how to use GoldenGate but how to use it properly. The real world is not contained in a tidy box. The real world is, in fact, usually very ugly. Thus, any attempt to replicate the "real world" will only apply to a small portion of reality. Finally, with respect to "grammar" errors, I'm not sure I know how to respond to that. If you wish to reply to me offline about this issue, I'll be happy to send your comments to Oracle Press so that their editors, who usually have an MS in English, can be further educated. I don't claim to be the greatest writer in the world, but I try. I'm a DBA, not an English major, damn it! :) At the end of the day, <name withheld>, I don't know what to say. I respectfully say I disagree with you, and that if you can build a better book, then do so. If you want to write a book, contact me offline and I'll see what we can do to get you published. All you have to do is go through the process once and you will see it in a WHOLE different way. I challenge you to write a better book, given the constraints I've listed. I'd love to see it. I'm sure it would be great.

Cheers... Robert
robertgfreeman@yahoo.com

Edit: Correction - Chapter 14 does not have an example of a migration in it, so that is something to consider next time. Chapter 14 assumes that you have learned the basic principles of the previous chapters. When we do a re-write I will include a workshop on zero-downtime migration/upgrade.


The DBA post-it notes... what's weighing you down?

I once heard a story of a manager who had just taken over a department with a large number of employees and many managers. This department had been struggling, trying to meet goals and objectives. In a meeting with his managers, to put things in context, the new manager stood up a large figure of a man. Then he took post-it notes and wrote on them the things that this person, a typical employee, was responsible for. When the manager was done, the figure was covered with post-it notes - so much so that you could not see most of the figure behind the notes. The point was, our employees are already overburdened - we can't add to that, we must remove from it. I was thinking of that the other day and I created my stick figure for the typical DBA. This is what it looked like:

What does this stick figure tell us? It tells us that we need help. That we need to learn to prioritize things, automate as much as possible and rationalize our responsibilities. The key to me is automation. This includes everything from automating as many daily administrative tasks and as much reporting as possible, to enabling self-service as much as possible. The post-it notes are not falling off, nor is the work associated with them getting any easier... We have to work smarter, harder and, most of all, we need to learn how to make the work we do scalable. We are IT people, after all - these kinds of things are what we do every day. So, let's take some time and give ourselves the benefit of our own experience. Stand up OEM and configure it to work, and work properly. Stand up automated and centralized backups and backup reporting. Get rid of monitoring scripts running from cron and run them from OEM. Get rid of "positive" reporting (i.e. the backups worked on these databases) and move to "exception" reporting. Centralize as much as you can, while at the same time providing an HA framework for that centralized system. There are important things to do - when we are burdened by post-it notes, we often find that we have either failed to do them, or we have not done them well. It's time to shed post-it notes, folks!


Questions that make you go Hmmmm..... #3 Why 3NF?

I was reading my various Oracle forums the other day. This question was posed:

"I have a table columns like id,password,age,gender and interests. now I need to insert multiple interests to single user.please help me..thank you in advance."

I was sickened that the people who first answered the question were suggesting things like this:

"My first thought would be to store the interests in comma-delimited format."

There were several suggestions along this line (including an interesting suggestion on using XML and Oracle's object oriented features). Honestly, I was just taken aback at the total lack of understanding of 3NF and why you develop a database using 3NF. Why not use object oriented features? There are a lot of reasons, some of which I describe here:

=====================================================================
Assume this set of inserts:

INSERT INTO nested_table VALUES (1, my_tab_t('A'));
INSERT INTO nested_table VALUES (2, my_tab_t('B', 'C'));
INSERT INTO nested_table VALUES (3, my_tab_t('c', 'a', 'F'));
COMMIT;

Tell me which of a or A or c or C is valid and represents the source of truth. If C changes to Z, does c also change to z, or to Z, or does it stay c? How do you know if all Cs represent all other Cs, and if they do, how in tarnation are you going to manage the sweeping change of C and c to Z? Application logic? Are you going to ensure that everyone else who manipulates this data in the future will follow these rules, and how much code and time will it take? How are you going to validate these values? Application code? What happens when a Z code is added? What is the cost and effort required to add that new code to the application? This is called a rat hole. As you scale to millions of records and as new code comes into play that breaks these already tenuous relationships, you end up with CRAP data - Completely Ridiculous Absolutely Poor data.
======================================================================

This isn't the only reason to avoid such designs of course, but that is not the purpose of this post. In this blog post, I want to provide what I think is the correct answer using 3NF... First of all, we have the requirements stated in the OP and some I've added using best practices:

1. We need to record the following information about a person:
a. A unique identifier (primary key)
b. Last_name (I added this requirement)
c. First_name (again, I added it)
d. Age - I'm changing this to date of birth as it's a more accurate data point.
e. Gender - Seems self evident. Unless someone has a sex change.
f. Interest - one person may have many interests and a given interest may be had by more than one person.

Note the change from age to date of birth. This is one of the HUGE benefits of normalization. You discover business rules during the process. The discipline of modeling will cause you to ask questions as you develop the model that would not appear if you are just flying by the seat of your pants. 3NF, and logical modeling in particular, is more than a way to get to a physical model. It's a way to ferret out business rules and discover relationships, rules and data needs that you might not have thought about. This is one of the reasons that logical modeling is so important. If it's done right, logical modeling really defines the nature of the data, and it also surfaces many things (like relationships) that no one thought about. Logical modeling is rarely done but it's really important. So, we need to define entities.
These are logical constructs, NOT the physical manifestation of the requirements (yet). After consideration, our entities are:

person
gender
interest

Note that these are named using a singular noun phrase. This is the accepted way of naming an entity. The name should be meaningful; it might include acronyms and can be shortened, but it needs to make sense. If the name is too long (over 30 characters in most cases) then shorten it first by removing vowels. For example, people would be shortened to ppl (but obviously you would not do this since people is < 30 characters). We would also express the relationships of these entities. For example:

person has a 1:many relationship with interest.
interest has a many:1 relationship with person.
person has a 1:1 relationship with gender.
gender has a 1:many relationship with person.

In the resulting physical model we will define and enforce these relationships with foreign keys (usually - some complex relationships might require a bit more work). Not creating foreign keys is not acceptable in any model. Tom Kyte has talked about this quite often - for example here: http://www.oracle.com/technetwork/issue-archive/2009/09-may/o39asktom-096149.html. Enforcing these relationships in application code is a common but really bad practice. This isn't the post to discuss that issue though. Let's fast forward now to our physical database design. We have decided that we need these tables:

drop table people cascade constraints;
drop table gender cascade constraints;
drop table interest cascade constraints;
drop table assoc_person_interest;

create table people (people_id number primary key, last_name varchar2(30), first_name varchar2(30), gender_code number);
create table gender (gender_id number primary key, gender_name varchar2(30));
create table interest (interest_id number primary key, interest_name varchar2(30));

Notice that we have created primary keys in both people and gender. Notice also that we have a many-to-many relationship between people and interest. This is because a given person might have none, one or many interests. How we deal with this issue is really the crux of the question that the poster had. The suggestion to create a single column and put in the data as a comma-delimited list of values comes with many problems. How do you ensure that the data is consistent? For example, we might enter flying and FLYING as the interest for two different people. What if we want to change the definition of what an interest is - say from PILOTAGE to FLYING? Querying comma-delimited data is also problematic, will not perform and will not scale. So, what do we do? We create another table. People have different names for these tables. Some call them intersection tables, or join tables, or associative tables. Whatever you want to call them, their purpose is to associate two tables in a many-to-many relationship. In our case, our associative table will look like this:

create table assoc_person_interest (people_id number, interest_id number);

alter table assoc_person_interest
  add constraint fk_people
  foreign key (people_id) references people (people_id);

alter table assoc_person_interest
  add constraint fk_interest
  foreign key (interest_id) references interest (interest_id);

(I deliberately left out things like cascading deletes at this point, just in case you wondered.) Now, a given person can have more than one interest, but this list of interests is not recorded in the person table.
Using a join table gives us a great amount of flexibility, allowing us to define any number of interests that a person might have. The foreign key relationships keep the associations valid (and a unique constraint across the two columns would additionally prevent us from assigning the same interest to a given person more than once, which is a pretty good idea since it saves us space and duplication of data - which is a bad thing). So, now we have our model built. Will it work? If so, how do we use it? We will need to follow the foreign key constraints. First we need to load our code (lookup) tables:

delete interest;
insert into interest values (1,'HOCKEY');
insert into interest values (2,'FLYING');
insert into interest values (3,'ART');
insert into interest values (4,'DRAWING');
commit;

=======================================
SQL> delete interest;

0 rows deleted.

SQL> insert into interest values (1,'HOCKEY');

1 row created.

SQL> insert into interest values (2,'FLYING');

1 row created.

SQL> insert into interest values (3,'ART');

1 row created.

SQL> insert into interest values (4,'DRAWING');

1 row created.

SQL> commit;

Commit complete.

SQL>
===================================

Now on to gender:

delete gender;
insert into gender values (1, 'MALE');
insert into gender values (2, 'FEMALE');
insert into gender values (3, 'UNSURE');
commit;

====================================
SQL> insert into gender values (1, 'MALE');

1 row created.

SQL> insert into gender values (2, 'FEMALE');

1 row created.

SQL> insert into gender values (3, 'UNSURE');

1 row created.

SQL> commit;

Commit complete.
=============================================

Now we are ready to begin adding person records. Note that the business rules do not demand that a person must have an interest, so we have modeled this as an optional relationship. If there were a mandatory rule then we would have to add some additional code to manage that. Let's enter a person record along with their interests...

-- enter information on a person.
insert into people values (1, 'FREEMAN', 'ROBERT', 1);
-- enter my interests
insert into assoc_person_interest values (1, 2);
commit;

===================
SQL> insert into people values (1, 'FREEMAN', 'ROBERT', 1);

1 row created.

SQL> commit;

SQL> insert into assoc_person_interest values (1, 2);

1 row created.

SQL> commit;

Commit complete.
======================================

If we try to insert a record that is invalid:

-- enter information on a person.
insert into people values (2, 'FREEMAN', 'CARRIE', 1);
-- enter my interests
insert into assoc_person_interest values (2, 6);
commit;
-- insert a valid interest for an invalid person
insert into assoc_person_interest values (9, 2);
commit;
-- Note that the first insert will be successful but the second one will fail,
-- unless we rollback the transaction.

===========================================
SQL> insert into people values (2, 'FREEMAN', 'CARRIE', 1);

1 row created.

SQL> insert into assoc_person_interest values (2, 6);
insert into assoc_person_interest values (2, 6)
*
ERROR at line 1:
ORA-02291: integrity constraint (ROBERT.FK_INTEREST) violated - parent key not
found

SQL> commit;

Commit complete.
=============================================

So, we can't assign an invalid interest. Also, we cannot add an invalid person. Notice though how the transaction is not rolled back by the second failure.
The commit saves the first insert. To query these tables and display the related records is easy, as seen here:

SQL> l
  1  select a.last_name, a.first_name, c.interest_name
  2  from people a, assoc_person_interest b, interest c
  3  where a.people_id=b.people_id
  4* and b.interest_id=c.interest_id
SQL> /

LAST_NAME  FIRST_NAME INTEREST_NAME
---------- ---------- ------------------------------
FREEMAN    ROBERT     FLYING
FREEMAN    CARRIE     DRAWING
FREEMAN    ROBERT     DRAWING

Of course there might be other issues to deal with, like people who do not have any interests, in which case we will need to use outer joins. Let's add another person without any interests:

SQL> insert into people values (3, 'HANKS','DULE''',1);
SQL> commit;

Now - what is the result of our query?

SQL> l
  1  select a.last_name, a.first_name, c.interest_name
  2  from people a, assoc_person_interest b, interest c
  3  where a.people_id=b.people_id
  4* and b.interest_id=c.interest_id
SQL> /

LAST_NAME  FIRST_NAME INTEREST_NAME
---------- ---------- ------------------------------
FREEMAN    ROBERT     FLYING
FREEMAN    CARRIE     DRAWING
FREEMAN    ROBERT     DRAWING

Whoops, where is Mr. Hanks? We need to change our query just a bit to surface him:

  1  select a.last_name, a.first_name, c.interest_name
  2  from people a, assoc_person_interest b, interest c
  3  where a.people_id=b.people_id (+)
  4* and b.interest_id=c.interest_id (+)
SQL> /

LAST_NAME  FIRST_NAME INTEREST_NAME
---------- ---------- ------------------------------
FREEMAN    ROBERT     FLYING
FREEMAN    ROBERT     DRAWING
FREEMAN    CARRIE     DRAWING
HANKS      DULE'

Here we have used an outer join. There are different ways to do an outer join... I've used the old Oracle (+) syntax. So - there is a quick and dirty answer to the question: how do I store multiple data values for a given person?
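For those who prefer ANSI join syntax over the old (+) notation, here is a sketch of the same outer join against the people, assoc_person_interest and interest tables created above:

-- ANSI-style outer join equivalent of the (+) query
select a.last_name, a.first_name, c.interest_name
from people a
left outer join assoc_person_interest b on a.people_id = b.people_id
left outer join interest c on b.interest_id = c.interest_id;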


Urban "Best Practice" Myths....

I had a situation today where someone was insisting that "x" was a "best practice". I asked them to show me where Oracle had defined "x" to be a best practice, and after some searching there was no best practice "x" defined by Oracle. It's really interesting how we have actual best practices, what I call cultural best practices, and then the urban best practices. In every case, the only true best practice is one that is defined by Oracle as a best practice. OFA is a best practice. MAA is a collection of best practices. Cultural best practices are those that get their start in our culture. These cultural best practices tend to grow because they offer a solution to problems we think we are facing or, more often, that we might face in the future. The solutions sound reasonable. The problem is that the expected results of these cultural best practices are often never really validated. For example, rebuilding indexes on a regular basis is, in my mind, a cultural best practice. It is NOT a best practice in reality. While using the "best practice" might demonstrate some positive results, as time goes on the same problem that caused the index to have performance and growth issues comes back. So, this is a best practice that really isn't one, because it does not really solve the problem. It does not address the root cause of the problem, which is probably something like sparse deletion on indexes that use some kind of surrogate key... The real best practice for this problem lies in the domain of data modeling. Because some people can't data model even the simplest things, they end up stuck with this problem inherent in B*Tree indexes.

Then there are the urban myth best practices. This is one I caught today. The DBA said "I was told that X is a best practice!". This database is weeks away from going to production. Implementing this X best practice would involve a lot of risk, and the customer is in a panic because he thinks he "had" to implement this best practice. The DBA simply didn't understand two very important things about best practices. First of all, just because someone else said it does not make it so. There are many urban myths in the DBA world designed to save your database. People will tell you this, suggest that, and insist that x, y and z are best practices - because it's what they do every day. This is simply a best practice myth - and so often when I dive into these urban best practices, they have unintended consequences that the DBA never considered. So, in the end, the same lack of understanding that caused the problem the DBA had is the lack of understanding with which they developed their "best practice" - never seeing their folly. Then, thinking they have done something amazing, they spread their newly found best practice, finding an audience. The other best practice myth, and perhaps the biggest one, is that if you see it on the internet it must be true. How many times have you copied and pasted some long bit of SQL or PL/SQL code from a web page because the resulting output sounds very helpful? Then the web page tries to guide you in interpreting the output. So often, I've seen these pages analyzing data completely wrong, and thus leading to totally incorrect solutions. Another similar situation is when someone posts a problem in user groups.
All of a sudden, there is this rush to provide answers - change this, alter that, configure this - all rushing out from the internet from well-meaning (usually) people who want to help you solve your problem. The problem here is that usually the problem statement is woefully lacking in needed detail. People tend to fill in the blanks in one way or another and spit out solutions... sometimes the solutions are bad, sometimes really bad and sometimes downright dangerous. Running around changing 13 things that 13 different people tell you to do is not proper problem diagnosis and resolution. The bottom line is that there is a lot of misinformation out there. Before you change anything on your database, make sure you understand the implications of that change. For example, when adding an index, remember that the index might be used by the query you are now tuning - but its presence might well change other execution plans - for the better in some cases and for the worse in others. If you want to apply best practices, look to Oracle as the source for those best practices if your own experience does not provide that source of information. If you wonder how to approach a problem, and what the best practice is to deal with that problem, open an SR with Support and ask them if Oracle has any guidance. Also, look to experienced Oracle sources for the information you need - the likes of Tom Kyte - and you can also ask your sales rep, who might well be able to find you someone inside of Oracle who can answer your questions.


Build Your Own Exadata? Questions that make you go Hmmmm..... #2 (Part 3)

If you have read my previous posts on the topic of building your own Exadata, then you have seen my arguments for the substantial costs of building your own, both when developing the system and after it's gone operational. In this section, I discuss miscellaneous other costs - most of them recurring - that are associated with building your own... Notice that one thing I've done, for the most part, is avoid the discussion about specific features in Exadata. I think I mention them in a few places with respect to flash and compression, but in general I've tried to focus on costs specific to just standing up such a system and maintaining it... Here is the last part of my thoughts:

1. Other Cost Considerations

a. Lack of scale - This method of procuring systems does not scale well at all. The cost of additional resources will increase as the scale of the infrastructure increases. As the system scales further, the cost to support the infrastructure, juxtaposed to Exadata, will quickly become several orders of magnitude different, with Exadata clearly being the better choice.

b. Refresh - Each refresh requires a re-run of the steps listed in 4.

c. Additional Cluster - Each cluster created will have to go through step 4 again. This is because the same hardware/software components you put into cluster #1 have become obsolete. So, now we have a growing infrastructure that is wildly component-divergent - increasing costs yet again.

d. Local body of knowledge encapsulated in one or a few people on-site - The working body of knowledge with BYOH (bring your own hardware) is local and usually limited to a few people on-site. This is a significant risk and increases costs, sometimes significantly. If the maverick who put this monster together ever left, who will support it, and will they be able to support it with a comparable body of knowledge?

e. Disciplined approach - Engineered Systems design, development and production have a specific discipline to them that includes things like peer review, deep experience in hardware, and knowledge of past customer issues and problems. This leads to a disciplined approach to engineered systems that is very hard to replicate in most IT departments. One maverick designing an "Exadata Killer" is not likely to benefit from the input of others with deep technical experience. Thus, while the maverick may have specific expertise in particular areas, he lacks the overall vision of what is required and has few if any peers capable of matching him on the same technical level he lives at. Thus, what you get is a solution that is designed from the point of view of the expertise of the designer, rather than one that is holistic in its approach - as is Exadata.

So, in selling Exadata it's not just about the special sauce or whatever - it's bigger and broader than that. It's about TCO - which includes all of the items I've listed above and more.

When I was first getting into IT I was a C programmer. They brought in Oracle on AIX and I fought a huge fight to get the internal DBA job they were offering - which I got. I saw it as cool and an incredible opportunity. I was going to get to build warehouses and OLTP databases and be the DBA... power!!! Moooohahahahahaha oh the power! In the end I was really no different than the maverick that wants to build the Exadata killer - he's probably drooling at the idea - conjuring every excuse and reason to do it that he can dream of (honestly - I can see me doing this!). The Achilles heel of these types is that they always talk in vague terms.
They speak using generic arguments and never provide a single fact to back up what they say. The response is to expose the true costs of the effort they propose. Shed light on the fatal flaws of their arguments (which isn't that hard). Present these intangible costs to management and ask THEM to assign a cost to these items. Accumulate these costs and all of a sudden this great Exadata killer won't look so grand. Yeah... All in all it looks like it's much cheaper to BYOE (Build Your Own Exadata) - NOT.


Build Your Own Exadata? Questions that make you go Hmmmm..... #2 (Part 2)

This is part two in my post about designing your own "Exadata Killer" vs. buying Exadata. In this post I postulate several operational costs that are not considered when building out your own Exadata-like infrastructure. This post is a copy of a discussion I had on this topic a few days ago and I thought it might be worth posting:

Then there are the operational costs that are not being considered:

a. Cost to patch and upgrade the entire infrastructure on a regular basis (quarterly). This includes:
i. Resource cost of being constantly in a patching mode.
ii. Cost/risk if opting not to patch on a regular basis.
iii. Time required to completely check all vendors to make sure you have the most current patches to apply. This includes any applicable bug fixes.
iv. Deciding what patches need to be applied.
v. Significant time testing the patches before applying them to ensure compatibility and certify.
vi. Costs of troubleshooting and support with various vendors. Tracking down bugs. Waiting for fixes. Etc.
vii. Deciding what other bug patches need to be applied to the patches you are applying.
viii. Assessing the impact of all patches on all components.
ix. Dealing with incompatible patches that you were told were compatible.
x. Not being able to easily roll back patches.
xi. Not having readily available scripts to perform many patching activities.

b. Cost of not having a single tool (Exachk) to ensure your entire infrastructure is healthy and that it is meeting specific best practices. Also the cost of not being able to determine if your database needs to have critical patches installed on it that impact HA.

c. Lack of specific integration between Cloud Control and your system. You have to manually configure the various monitors and alerts.

d. Costs of not having access to tools like cellcli that provide performance information to Cloud Control and via the command line.

e. Costs of not having a readily available, time-reliable (relatively speaking) upgrade path. If you want to expand the system, then just repeat all costs from item number 4, times a multiple of about 1.5 to 2.0, to reflect that you are now dealing with a production system and testing and stability are now really key.

f. Costs of performance-tuning queries that still don't perform well, even on the souped-up hardware.

g. Differential costs of power, cooling and space.

---

In part three I'll conclude my thoughts on Exadata "Killers" and why you just can't build your own. Later, I'll address my thoughts on why these vendor-supplied Exadata "Killers" just can't hit the mark.

Note: Edited for content and formatting.


Build Your Own Exadata? Questions that make you go Hmmmm..... #2 (Part 1)

So, I've been told on occasion, "Hey, this Exadata stuff is expensive! I've just costed out my own Exadata Killer and it's like a quarter of the price. I'm just going to build my own!" Yeah - right. I had the ability to participate in a discussion about this and I postulated a lot of things that I don't think people who throw out the "build my own" argument ever consider. Let me share my thoughts with you about this. I'm going to chop this up into three parts over the next week or so. Here are the thoughts I expressed:

-------------------------------------------------------------------

I'm sure at one time or another we have heard someone speculate about their ability to build their own. Often their off-the-top-of-their-head cost model is based on some quick pricing of various components. This is typical of technowizard types who think that something like an engineered system is just some marketing hype. They have not taken the time to really understand what it is, because they are so wrapped up in their own technical point of view, which (they think) can't possibly be wrong. Because they are often technical types, they often don't consider the non-technical costs. More often than not they are not database people in the first place; they are pure hardware people who probably lust at the idea of creating an Exadata beater... budget? Meh... who cares about overall costs, or about the fact that they will fail in the end. I rather think that you could produce a detailed cost estimate comparing the two and find that Exadata is by far the cheaper solution. Things I'd consider in my overall cost analysis:

1. Cost of *comparable* hardware, as already pointed out in this thread.

2. Costs for differential memory requirements due to lack of cell offload. They will require more memory for the SGA than an equivalent Exadata database will.

3. Costs for more flash than would be in an Exadata box. This is because their flash will be flushed during backups and will not intelligently store blocks like Smart Flash Cache will. Thus, their flash performance will be less impressive.

4. Time-related and other costs required to initially stand up and bring online the architecture they propose. I am assuming that they had not considered these significant costs.

a. Time spent in initial architectural decision making (i.e. how much disk, speed of CPUs, etc.)
b. Time spent ensuring that the architecture will actually be supported by the different vendors.
c. Time spent determining how to connect everything together correctly with fully redundant pathing (as mentioned already in this thread).
d. Time spent determining if the components selected are certified to work with each other and are compatible with Oracle RAC.
e. Time spent ordering components. Time spent reordering incorrectly ordered components.
f. Time spent installing the components.
g. Time spent configuring the components.
h. Time spent installing Oracle Clusterware, ASM and RAC. Configuring disk groups, etc.
i. Time spent talking to Oracle Support to solve problems while trying to perform item h.
j. Time spent troubleshooting configuration issues (can be HUGE).
k. Time spent trying to figure out why your RAC nodes fence all the time (because something is misconfigured or incompatible that you didn't find until now).
l. Time spent reconfiguring things so it will work.
m. Money required for additional components you didn't order or architect that you later realize you needed to provide HA.
n. Time spent in meetings explaining why you are behind schedule and trying to justify why your project has already cost upwards of 70% of the cost of an Exadata machine, fully installed and operational.
o. Time spent because the old system cannot be moved in the time frames required, which causes certain unexpected expenses such as extension of support contracts (I've seen this happen more than once).
p. Costs related to stakeholder discomfort with the stability of the system you are standing up.
q. Costs related to standing up a backup infrastructure that will support the large amounts of data that you will need to back up (remember - incremental backups are offloaded to the cells - so they won't get this benefit).
r. Costs related to multi-vendor calls for support as you attempt to stand up the system.
s. Costs for additional equipment needed to provide for quick failed-part replacements.
t. Costs for R&R of failed equipment. Cost of lost productivity due to resources being required for R&R of equipment.
u. Costs to customize database parameters for this hardware configuration.
v. Cost of not being able to use Exadata-related features such as HCC, DBRM and IO Resource Manager as available on Exadata.
w. Costs to integrate the database server you are creating with the application servers.
i. Cost of providing InfiniBand connectivity.
ii. Cost of slower applications running on non-Exalogic hardware.
iii. Cost of slower application responses since database performance will likely be slower than with Exadata.
x. Costs to properly decide where on the disks to best physically locate disk groups and how to allocate those disk groups properly.
y. Costs of the lack of a community that has equipment similar to yours. Little or no peer feedback on questions or issues. A smaller information base on your configuration, which makes problem research times longer - perhaps significantly longer.
z. Costs of having normal Oracle support rather than that offered to Exadata customers. Cost of paying for Oracle support and paying for all other vendor support contracts.
aa. Cost of labor scale, as you will need to hire more SMEs to manage this environment - especially if it scales out.

The next post will be on the costs of operations and other ongoing cost considerations. Thanks!

Note: Edited due to some formatting issues and a *gasp* spelling error.


To Oracle Database 12c or Not to Oracle Database 12c - Carts and Horses (A recurring series - Number two)

Oracle Database 11gR2 is moving from Premier Support to Extended Support in January 2015. What does this mean to you? Here is the basic definition of the services provided by Premier Support:

* Major product and technology releases
* Technical support
* My Oracle Support
* Updates, fixes, security alerts, data fixes, and critical patch updates
* Tax, legal, and regulatory updates
* Upgrade scripts
* Certification with most new third-party products/versions
* Certification with most new Oracle products

Extended Support provides the same benefits for an additional three years, but these benefits come at an additional cost. So the real question is, what is in your best interest as a customer? Moving to 12c and staying with Premier Support, or taking your time, investing in Extended Support and waiting to move to 12c later? There are a number of factors that weigh in on either side, and it's clear that there is no one best answer for everyone - otherwise, why would we have Extended Support available to us in the first place?

So, let me state clearly that what I'm about to say in this post (and all my posts, for that matter) is my opinion. It's based on a lot of assumptions, specific objectives and what I consider to be best practices. You may disagree with me, and if you do I hope you will post a comment and nicely tell me why. First, let me define that what I'm talking about in this piece is major upgrades. These are upgrades from one major version of Oracle (say 11gR2) to another major version of Oracle (say 12cR1). I'm not talking about upgrading existing versions with the latest bundle patches, PSU, GIPSU, EIEIO, or whatever. I consider these interim patch sets and will call them such in this post. What are the considerations when planning a major upgrade from, say, Oracle Database 11gR2 to 12cR1? Let me present you with a non-exhaustive list of things I've come up with, along with some comments about these factors:

* Cost - A major upgrade has quite a bit more cost associated with it than other upgrades. This is for a lot of reasons, many of which I list below.

* Stability - There is this lore out there in Oracle land. That lore is centered around the notion that we don't use the first release or two of a major version change. Lore is just what it is, based on our own experiences and things we have heard. In my mind, this argument seems to be treated as a postulate rather than a theorem, and it certainly lacks any real proof. I would further argue that technology has changed many of the reasons argued for this approach; we just have not implemented that technology.

* Resource Availability - Resources are a major issue when it comes to major upgrades. Resources are always a problem, but with major upgrades there is a learning curve that adds to the resource complexity. Finding someone experienced on the newer version of the database will be difficult, and those experienced with older versions of the database will need some ramp-up time.

* Dependencies - A major upgrade always has dependencies that need to be considered. For example, before you can upgrade your 11gR2 RAC databases to 12c RAC, you will need to upgrade your Grid Infrastructure to support 12c RAC. There may be other dependencies to consider as well.

* Testing - Stakeholders and management get understandably nervous about any kind of change. When we are talking about a major software version change, they really seem to get nervous.
This is for a lot of reasons, including the fact that they would like their application to continue working after the upgrade! Many organizations have not streamlined their testing processes, and thus just getting testing started is a major event and finishing it is a major accomplishment. This old way of testing, if you will, also turns out to be very costly and is often not budgeted for.
* Risk - In some ways this is what it all boils down to: risk. How much risk is involved in this upgrade? How likely is it that we have caught everything in our testing? Is the likelihood of an outage increased by this change in software versions? The possible risks are many, of course, but are they really that much greater than the risk of doing nothing? Also, what is the root cause of your risk assessment fears? Perhaps it's not the risk that is the problem, but the process you are using.
* Benefit - In the past, I've had many discussions about major upgrades, and one of the questions that comes up is, "What is the benefit of doing this now?" The benefits are inexorably connected with the risks. But do we really measure this ratio correctly? We certainly can't unless we know something about the product and the features that it brings to the table. We can't properly measure its benefit and stability unless we put it to the test.
* Certification - This might well be one of the biggest hindrances to migration. It's easy to blame a vendor for failing to certify, but is that where the blame properly belongs?
* Education - As mentioned above with respect to resources, education is a big issue. Time and time again I run into DBAs and developers who are still doing things the way you would back in the Oracle 7 days. Education is a real issue in so many ways because it ties into the here and now, but it ties into the future as well.
* Fear - Many fear that which we are not comfortable with. There are some who embrace the newness of something, there are those who are cautious, and there are those who go running for the hills. We have to address this fear. One of my favorite movie quotes is from Dune: "Fear is the mind killer." Our fear is our undoing. Make no mistake, this fear isn't just about fearing the migration itself; it might run much deeper than that. Some employment cultures almost embrace fear, thinking that it's some Darwinian approach to success.
* Lack of agility - In my mind, this is one of the greatest of roadblocks to the enterprise. Being unable to be agile is what makes things take longer, cost more, adds risk and causes a whole host of other problems that stagnate the enterprise.
* Scale - The scale of an upgrade project can have a serious impact on decisions made around such projects. It's far easier to manage the upgrade of two databases in a non-RAC infrastructure than 400-some databases spread across many RAC clusters. Scale demands that you act sooner, not later - and yet, I often find that this is not the case.
* Assumptions - I've seen that a myriad of assumptions seem to occur at various milestone moments, including major patching. I think these assumptions are the source of many stumbling blocks and even failures.

I'm sure there are many other factors that will come to mind after I post this - there is always something I wish I'd written. So - why put this under the title of putting the cart before the horse? Mostly because of the assumption factor. In assuming that X, Y and Z are postulates rather than something that really requires a proof, we kind of put the cart before the horse.
The cart represents the conclusions that flow from the faulty assumption that X, Y and Z are postulates. So often, those postulates are not real, or are not as dire as one might make them out to be - for so many reasons. Thus, in this case, the cart is blocking the horse, hindering him from pulling the cart - and it might well be that the cart injures the horse, the horse being your IT infrastructure and your users' applications.

And - just to add to the visualization of the cart and the horse and give it a bit more excitement - let me add one more thing. The cart and the horse are sitting on a pair of railroad tracks, immobile. Out in the distance is the whistle of a train heading your way. On one side of the train is plastered the big bold words - REALITY COMETH. On the other side of the train, the side you can't see, in equally bold words we find - AND SO AM I, AND I AM AFTER YOUR DATA!!

The reality is that the bad guys move on the fast tracks. We need to learn how to do the same, lest the horrific word BREACH meet your mailbox on some Monday afternoon.


Exadata support for ACFS (and thus, 10gR2) now available!

Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Exadata support for ACFS is also mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database.

Don't Over React and Just Throw Everything on ACFS!

First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or non-production performance sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following:

* Running any Oracle 10gR2 databases on Exadata.
* Running Oracle 11gR2 development or test databases that require rapid cloning and that do not require the performance benefits of the Exadata storage cells.

If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle Multitenant and the features present in that option (remember, though, that Multitenant is a licensed option).

The Fine Print

There are some requirements that you will need to meet if you are going to run ACFS on Exadata. These are:

* You have to use Oracle Linux.
* You must use GI 12.1.0.2 or later.
* If you wish to use HCC, then you must apply the fix for bug 19136936 to your system. This bug, and its associated patch, do not appear on MOS (as of the time that I wrote this), so you will need to open an SR and get support to provide the patch for you.

The Best Use Case for ACFS

Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly the improved processing capabilities of Exadata (its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features) justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!
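If you want a quick look at what you already have in place before planning ACFS storage, here is a minimal sketch run from the Grid Infrastructure (ASM) instance. This is an illustration only; it assumes you can connect to the ASM instance with SYSASM or SYSDBA privileges, and the V$ASM_VOLUME column list should be verified against the reference for your release.

-- Confirm the Grid Infrastructure software version (needs to be 12.1.0.2 or later for ACFS on Exadata):
SELECT instance_name, version FROM v$instance;

-- List any ADVM volumes already defined (these are the devices that back ACFS file systems):
SELECT volume_name, volume_device, mountpath, state FROM v$asm_volume;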


The Dangers of “Recipes”

The Story of the Would Be Baker

There was once a man who loved to bake. He lived on the shoreline. He would love to watch the ocean from his windows as he would bake his wares, steadfastly following his recipes. The recipes were ones that had been handed down from his mother and his mother's mother, honed and crafted over generations of trial and error. The result of these recipes was the creation of amazing masterpieces of baking. Pies, cakes, rolls and other breads that he cooked were simply amazing. His mother taught him how to bake, and how to follow the recipes, and he had eventually opened his bakery. He was known for miles around as an amazing baker, and people came from miles around to buy and eat his baked goods.

One day, there was a woman who had heard of this man and his wonderful baking. She drove to the bakery to sample his wares. She ordered several items, and as she ate them she fell in love with his baking. In her life, she had cooked a great deal and always wanted to try baking. Her few attempts had not been the greatest, but she always felt that was because the recipes were not the best. Here, there was no mistaking that these recipes were the best recipes, and they were tested and tried over time. While she had little experience with baking, she felt confident that with these recipes in hand she would soon be an amazing baker too. She had wonderful memories as a child of her mother taking her to the bakery, and as a result, opening a bakery was a dream of hers. She knew with these recipes that she could not fail. Excited, she went to the man and offered to buy his recipes so she could open her own bakery. Staring at a check with several zeros in the number, the man agreed and handed a copy of his time-honored family baking recipes to the woman. The woman was elated and got in her car and drove home, some three hundred miles away, to a beautiful little city nestled high in the mountains. The city was a famous ski resort, and she was excited to open her bakery and sell wonderful wares to the tourists who would soon be entering her town. Quickly she located a shop front that already had all the baking equipment in it that she needed.

Several days later, having acquired the best of ingredients, and having followed the mixing instructions to the letter, she started to bake the recipes. She started baking cookies and pies and all manner of bread. The smell filled the room, and she found herself lost in the delightful aromas as they proceeded to bake. In those delightful smells, she was content that all was well. She peeked in the ovens from time to time, and all seemed to be going well. The first timer went off. It was the timer for the cookies. She remembered these cookies from the bakery - Oatmeal Raisin cookies. The woman, excited to experience the work of her own hands, opened the oven where the trays of cookies were and pulled each tray out excitedly - with a smile on her face. Yet, when she looked at the cookies as she pulled them out, there was something that didn't seem quite right about them. She could not put her finger on what it was, and she decided it was just that they needed to cool down. She put the trays down and waited as the cookies cooled. Finally, she looked at the cookies, which had cooled down - still, something didn't seem right. She picked up a cookie and put it in her mouth. When she bit down on the cookie, it was dry and crumbly and nothing like the chewy and delicious cookie she had eaten earlier.
As each timer went off and she removed the item she was baking, she kept finding that something was off in each one. Some of the breads didn't set correctly, cakes were not moist - nothing came out right. What was the problem?

The Problem with Recipes

Did you notice in the story that the man's bakery was at sea level and that the woman's new bakery was up in the mountains, at altitude? This was the problem with the recipes. Because the environment had changed, the recipe was no longer able to produce the wonderful baked goods that the man had made. My wife tells me that baking is a science, and that cooking is an art. Baking requires the right mixture of all of the elements - flour, water, sugar, heat, time and so on. A recipe is a method of putting all these elements together given some very specific base variables, altitude being one of those. If you change any of these variables, then you need to understand the science behind the recipe to adjust the recipe. For example, when baking at higher altitude you will often cook at a higher temperature for a shorter period of time. This can lead to evaporative issues in the item being cooked, so you will often need to add some additional water to keep the resulting baked goods from being too dry. There are many variables, and dependencies (such as the higher heat requiring more water to avoid dryness), that will cause a failure of the recipe if you are not aware of them and do not adjust accordingly. Miss one adjustment and the whole affair becomes a sad memory of what could have been. In cooking, we have the opportunity to try and try again, adjusting the recipe until it's just right again. With production databases, we don't usually get too many redos when it comes to recovering from data loss. Lose data and you had better get that word processor ready and update your resume.

Why Recipes are Dangerous in IT

There are a number of books published that offer "recipes" to do this and that in an Oracle database. Often in user groups I find people asking lots of "How do I" kinds of questions (which is understandable) and I see far fewer "How does this work" kinds of questions. To me, this demonstrates a fundamental flaw in the thinking of IT professionals these days. In days past, you had to know how something worked in order to use it properly. Often the only source of help that you had was poorly written documentation, so we were often forced to learn from the ground up - learning how something worked and then applying that knowledge to making that something work.

In those days, we would work from a statement like "I need to do X" - for example, "I need to be able to recover my data." Based on that statement (which might also be called a requirement), we designed a solution, understanding the technology that we were working with. That is to say, we didn't often have tools that would do the work for us, so we had to build those tools. In having to build those tools, we had to understand the underlying principles of how something worked. This was not a very scalable solution, of course - everyone writing their own tools. Now, we have wonderful tools like RMAN that make our lives much easier and provide for us the ability to focus on other work. This is a wonderful thing. However, I have noticed what I think is an unintended consequence of the advent of these new tools. This consequence is that we are no longer forced to understand the technology that the tool is abstracting us from.
In an effort to quickly get up to speed and implement the tool, we run to recipes, or tutorials, to help us quickly set up and use the tool. Indeed, we clamor for these recipes because of the growing complexity of these tools. We want to shortcut the learning curve because it is so very steep. We then dismiss the notion of learning about the internal functioning of the tool, and depend on someone else's knowledge and experience to lead the way, following their recipes. However, just as the man in our story didn't realize that altitude would impact his recipes (for he never really professionally learned baking, only to read and follow his family recipes), we often don't understand how to adjust the recipes when our environment changes. Perhaps the recipe for backing up does not work just right, and we struggle to figure out how to make it work. Worse, maybe the recipe for recovery does not work right, and we only learn about this problem when it's crunch time and we have to restore and recover our database.

The point is that just asking for and following a recipe is not enough for a professional DBA. You are expected to understand how things work, not just how to make things work in a specific set of pre-defined situations. Just understanding recipes is not enough when it comes time to architect your solution, because the permutations of possible architectures and methodologies are huge. A number of things - licensing, software versions, clustering, DR solutions, business rules and many more variables - can take a recipe and make it unusable. The problem is, if you don't understand the technology, you really don't know what you're getting when you just follow a recipe. You don't know that you're not meeting a business need until it's too late.

So, What to do Instead?

The solution to the problem, I think, is as follows:

* Deeper education - You need to take the time to understand how things work. In the case of backup and recovery, you need to know how the database works, and what it does as transactions are processed. You need to know how these mechanisms are used during the recovery process. Do you find that you have to return to the recipes time and time again because you don't know how to do things off the top of your head? Are you using all of the RMAN features to your best advantage? If you're just following recipes, you can't possibly know these things.
* Understand architecture - Sure, maybe you know RMAN, but what don't you know? Do you know how to implement a tiered backup infrastructure with maintained retention? Do you know how to test restores both with and without actually restoring the database? Is the recipe you are using really the one best suited to your needs? Do you understand all of the different backup and restore methodologies and which one will best meet your needs?
* Define SLAs - The term SLA must be one of the dirtiest words in IT, because every time I use it people's heads bow, I hear gasps of air and fingers start pointing everywhere. How can you define anything without an understanding and agreement between all of the stakeholders as to what you are going to do? Recipes are not SLAs.
* Define and understand your strategy - Knowing where you need to go is important. A recipe cannot understand what the business needs and requirements are; therefore a recipe cannot be a blueprint for your organization's strategy. A recipe is only a small part of the whole of what needs to happen.
* Define best practices and standards - Neither of these two terms is synonymous with recipes.
Instead, they define the best way of doing things, and they provide a common, standardized and scalable way of doing things. Best practices and standards are borne from education, understanding and the application of SLAs. They may result in more than one standardized way of doing things - for example, your standard backup and recovery strategy for OLTP databases might be different than that for a data warehouse. What best practices and standards do is put together a scalable, repeatable and tested solution for your enterprise while also acknowledging that it needs to be flexible.
* Mentoring, monitoring, control and communication - These are critically important items, and I've rarely if ever seen them called out in a recipe.

My belief that understanding how something works is paramount to understanding how to implement solutions that meet specific requirements is why I spend the first several chapters of my RMAN book explaining how Oracle works with respect to backup and recovery. Some get perturbed by that, suggesting that this method takes too long to get started with RMAN. I acknowledge that understanding the basics first does take time (and in fact in my new edition I have added a quick start guide), but at the end of the day the difference between a rank amateur and a professional is the degree of understanding of that which you profess expertise about. (And if you ever hope to pass a technical interview with me, you had best be able to describe in general terms what happens to the database during a restore and recovery, down to how UNDO and redo are applied, what happens to uncommitted transactions, and even the question - will blocks that contain uncommitted data ever be flushed to the database datafiles by DBWR and, if so, why?)

At the end of the day, the needs and requirements of an enterprise are way beyond the scope of simple recipes. Recipes are not best practices, and often they are not even complete examples of how to do something. They are simply a means of trying to educate, and in my mind it's a poor approach to education. Why? Because far too many people believe that recipes are the way that something should be done, instead of understanding that, at best, it's just one way of doing what should be done.

Way too often, I've had people point at a recipe, or an example, and claim that's the way it has to be done. Way too often I've looked at backup scripts and asked people "Where did you source this piece of junk from?" (I'm usually nicer than that) only to find that it's been cut and pasted from this book, that book or some website. Way too often, as we dive into backup and recovery discussions, do I find that their methodology does not meet their requirements and, once in a while, the scariest moments are when I realize that their backups are wholly unrecoverable - and that the DBAs never realized that. Way too often I've asked questions of DBAs about restore situations that throw them way off their restore "scripts" or recipes. Way too often, when things go off script (as happens with restores very often), the DBA gets lost. I don't expect you to understand how to force a database open - indeed, that information is not documented officially. I do expect you to tell me that when a datafile or two is lost, restoring the datafiles online is a far better situation than shutting down the database and restoring the whole thing. (True story, by the way.) When the time comes for an actual restore and recovery, those are scary moments indeed.
The fight-or-flight response will be in full swing, and you need to be prepared - and you need to understand - how the database works and how the restore and recovery process works, so you can get the database back up and running with a minimum of outage. In the end, if we are managing databases and are charged to protect the data within them, then recipes are not the way to ensure that this occurs. Only good old education and understanding will suffice. That takes time, commitment, prioritization, determination and a refusal to be lazy. Otherwise, you might as well keep that resume up to date - for your time is coming.

And you don't want to sit in a job interview, trying to answer the question, "Why did you leave your last job?" when the correct answer is, "Because I cratered the database by issuing an rm -r * from root, and in trying to restore the database realized that I had killed all of the online redo logs and that the database could only be restored and recovered to a point in time 2 hours before I trashed the file systems."

That is a bad day, my friends. How do you feel about recipes?

Note: Edited in a couple of places for grammar and content since the original post.
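To make the "restore the lost datafile online" point concrete, here is a minimal SQL*Plus sketch. It is an illustration, not a drop-in script: the file name is hypothetical, the database stays open the whole time, and the actual restore of the file copy happens outside this snippet (with RMAN or whatever backup tool you use).

-- Only the lost file is taken offline; the rest of the database stays available to users.
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' OFFLINE;

-- Restore the lost file from backup here (for example, RMAN's RESTORE DATAFILE),
-- then apply redo to it and bring it back online:
RECOVER DATAFILE '/u01/oradata/ORCL/users01.dbf';
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' ONLINE;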


Thoughts and Musings on Oracle Exacheck

If you deal with Oracle Exadata, then you are probably aware of Exacheck. Exacheck is a tool that is used to determine the health of the entire Exadata infrastructure. (Note that Exacheck is also available for Exalogic, but for the purposes of this discussion we will be dealing with Exacheck and Exadata.) The infrastructure checks on Exadata include checking the database, the cell servers, the network and the other components of the Exadata machine. When I do classes and talk about Exacheck, I ask the attendees: if they wanted to find out the health of their entire infrastructure, do they have a single tool that can quickly give them an idea of the health of their entire database or application infrastructure in just a few moments? Universally the answer is no. Exacheck provides this tool for Exadata.

Beyond health, the Exacheck tool provides a review of the system configuration compared to Oracle best practices. This review includes a number of checks, some of which may or may not apply in your specific environment. However, the checks do cover each Oracle best practice, and as such, you might find specific gaps in your architecture by reviewing the Exacheck report. I'm not going to re-invent the wheel by showing you the different parts of the Exacheck report. I have previously written on Exacheck on my old blog site. Exacheck is updated and improved all the time. For example, since the post that I referenced above, the Exacheck output now includes a detailed list of each component and its current software level, compares the current version with the recommended version, and indicates if your version is still supported.

What I'd like to focus on in this post are some of the brand new features that have been made available in the latest version of Exacheck - 2.2.5. The first thing I'd like to point out is that Exacheck is constantly being improved. The Exacheck of six months ago has been much improved upon. To keep up with the new functionality in Exacheck, the best place to go is the Exacheck Feature and Issue Fix History document. This document is contained in the download package that includes Exacheck. In the latest version of Exacheck as of this writing (2.2.4_2014-228), this document contains a history of the new features, and also lists features that have been fixed and removed. In Exacheck version 2.2.4_2014-228 we have some great new features. Let's look at a few of these.

The Exacheck Daemon Can Be Configured to Auto-Start Upon Reboot

Exacheck 2.2.2 introduced a daemon process that provides for automated execution of Exacheck operations. You can run Exacheck on a recurring basis (hours or days). Exacheck also includes the ability to automatically email the results to a recipient. Here is an example of enabling the Exacheck daemon:

./exachk -set "AUTORUN_INTERVAL=1d;AUTORUN_FLAGS=-ov;NOTIFICATION_EMAIL=firstname.lastname@company.com;PASSWORD_CHECK_INTERVAL=1"

In this case the Exacheck AUTORUN_INTERVAL parameter is set so that Exacheck runs on a daily basis. Each run will email the report to the address in the NOTIFICATION_EMAIL parameter. Having adjusted the settings, we would start the Exacheck daemon. This is required any time we use the -set command to modify a parameter, as seen here:

./exachk -d start

Further, Exacheck 2.2.3 provides the ability to define a schedule on which Exacheck is executed. For example:

./exachk -set "AUTORUN_SCHEDULE=15,16,17 * * 2"

will set up Exacheck to run during hours 15-17 on every day of the month that is a Tuesday.
After we issue the -set command, we would need to start the daemon again by issuing the exachk command with the -d start parameter. You can query the Exacheck daemon to see when the next run is by using the -nextautorun parameter, and you can use the -status parameter to look at the status of the Exacheck daemon.

Increased Daemon Error Logging Ability

Exacheck has a lot of logging that is handy to look at if you are concerned that there are problems with the output of Exacheck, or if the daemon fails. Many people are familiar with the regular Exacheck output, but they are not familiar with the logging. You will find these logs contained in the Exacheck report output zip file in a directory called logs. Also, the log files are contained on disk, in a directory called logs, which is associated with the individual Exacheck run. The main log file is called Exacheck_error.log. This contains any errors that occurred during the Exacheck report output. The daemon process itself will record information on errors and problems that it has raised in the error log.

Collection Manager for ORAchk, RACcheck and Exadata (MOS Note 1602329.1)

Collection Manager provides a central repository for Exacheck information, and other collection information. This makes collection, management, reporting and life cycle maintenance of enterprise-level Exacheck data easier to deal with. Collection Manager now provides a GUI front end. This new front end makes it easier to manage the repository of this collected information. MOS note 1602329.1 provides a great deal of information on the Collection Manager.

There are other new Exacheck features, and there continue to be new Exacheck features introduced all the time. So, keep up with these changes and make sure that you are always using the most current version of Exacheck! More coming soon!


The Latest New Features Skinny – Part One

Oracle Database 12.1.0.2 has just been released here for Linux X86-64 and Solaris! I'm sure other OS releases are not far behind. The documentation is available here. Back in the old days, a dot release of the Oracle Database product (i.e. 12.1.0.1 to 12.1.0.2) didn't have that many new features in and of itself. It was the big dot releases that contained the cool new features, and then the number change releases (11 to 12, for example) that really introduced major changes. However, Oracle has pretty much broken this mold in 12.1.0.2, which is not unexpected. Many of the new features in 12.1.0.2 revolve around adding normal database functionality that was not initially present in Multitenant. In the first of my new features blog posts, let's look at a few of the new features present in Oracle Database 12.1.0.2.

1. PDB CONTAINERS Clause

The CONTAINERS clause provides a way to address data in two or more PDBs in a single query. Thus, if you have two EMP tables, and they are in PDBs called AHR and RHR, you can now issue a select that accesses each table in the different PDBs. For example, let's say that we wanted to see the average cost of an employee's benefit to the employer. In our case this information resides in two different systems. The active employee data is in the AHR PDB, and the retiree data is sitting in the RHR database. If the employer benefit cost for an employee is in a table called EMPLOYEE and the data is in a column called BENEFIT_COST, then we could query both tables using one SQL statement to get the average BENEFIT_COST. Here is an example:

SELECT avg(benefit_cost) FROM CONTAINERS(HR.employee) WHERE CON_ID IN (45, 49);

Note in this query that we have to list the CON_ID of the containers we want to include in the query. We could slightly modify this query so we don't need to know the CON_ID of the containers, like this:

SELECT avg(benefit_cost) FROM CONTAINERS(HR.employee)
WHERE CON_ID IN (SELECT con_id FROM dba_containers WHERE con_name IN ('AHR','RHR'));

This is clearly a simpler solution.

2. CREATE_FILE_DEST Parameter

If you have dealt with CDBs and PDBs, you might get a little frustrated with the lack of ease with respect to the creation of PDB related datafiles. Originally in 12c Release 1, the database would put the files in the default file system directory, and there was not an easy way to shortcut this default value. This new functionality is facilitated through the use of the CREATE_FILE_DEST parameter. This parameter can be set uniquely for each PDB. Once defined, you will need to make sure that the mount point that is defined has been created and that the appropriate permissions are granted to Oracle so that it may read and write files to the file system. Note that this parameter can be used to make plugging PDBs in and out easier, since the files will be isolated away from other PDB and CDB related files.

3. PDB Metadata Clone

Oracle's ability to quickly clone a PDB is one of the big selling features of Multitenant. One of the nice features of 12.1.0.2 is the ability to clone a PDB with all of the logical structures, but with none of the data cloned over. This is a nice feature if you need to fast-provision a development database from a production source, but you don't wish to put production data in that cloned database.
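Here is a minimal sketch that puts items 2 and 3 together - a metadata-only clone that keeps its datafiles under its own directory. The PDB names (AHR, AHR_DEV) and the path are hypothetical, and the exact clause syntax should be checked against the 12.1.0.2 SQL Language Reference; note also that in 12.1 the source PDB needs to be open read-only while it is being cloned.

-- Run from the root container; AHR is the source PDB, AHR_DEV will be the clone.
ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER PLUGGABLE DATABASE ahr CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE ahr OPEN READ ONLY;

-- NO DATA copies the logical structures without the rows; CREATE_FILE_DEST
-- puts the clone's files under their own (pre-created, oracle-writable) mount point.
CREATE PLUGGABLE DATABASE ahr_dev FROM ahr NO DATA
  CREATE_FILE_DEST = '/u02/oradata/ahr_dev';

ALTER PLUGGABLE DATABASE ahr_dev OPEN;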
You might be asking yourself - why do I care about CDBs and PDBs anyway? We are not going to be using Oracle Multitenant anyway. My response is simple - yes, you will. Someday the Oracle infrastructure will be purely based on the concept of a Container Database and at least one Pluggable Database. So, you might as well get ready for that revolution now - because it is coming.

4. Force Full Database Caching Mode

Force Full Database Caching Mode is new in Oracle 12.1.0.2. You might have heard about the Oracle Database In-Memory features being released with 12.1.0.2 (the in-memory column store), and you might have accidentally confused Force Full Database Caching Mode with those in-memory features. These are two different features. I'll address Oracle's in-memory database features at a later date, or you can read about them now on this page. You can enable Force Full Database Caching Mode by mounting the database and then using the alter database force full database caching command, and then opening the database with the alter database open command. You only need to issue this command once, and the database will continue to open in Force Full Database Caching Mode on subsequent database opens. To turn off Force Full Database Caching Mode, shut down the database, mount it, and issue the command alter database no force full database caching. You can then open the database with the alter database open command. (A short sketch of this sequence follows at the end of this post.)

More new feature stuff to come!

** Post edited 7/28/2014 because the author can't type ... it's X86-64!
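To round out item 4 above, here is a minimal SQL*Plus sketch of toggling Force Full Database Caching Mode, following the sequence described in the post. Treat it as an illustration to run as a privileged user on a test system, not a drop-in script.

-- Enable: the database must be mounted (not open) when the command is issued.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE FORCE FULL DATABASE CACHING;
ALTER DATABASE OPEN;

-- Disable later, using the same mounted-state pattern.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NO FORCE FULL DATABASE CACHING;
ALTER DATABASE OPEN;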


Carts and Horses (A reoccurring series)

Putting the Cart Before the Horse

In my years of experience I've had the opportunity to engage so many different customers that I've lost count. In my professional life I have spent maybe 9 or 10 years total as a contractor. If you assume that I have engaged one new customer every 4 weeks as a contractor (which seems somewhat accurate in my head), then that would be about 12 customers a year. That adds up to some 120 customers in the 10 years - and to be honest, that number seems pretty low. I'm going to guess that number is well over 150 - maybe even 200. The point is, I've seen an awful lot out there. This gives me what I think is a unique perspective on things.

I am engaged in a discussion on LinkedIn. In this discussion, the original poster (OP) asked a very basic question: "I have two tables in a single schema, EMP1 and EMP2, with the same structure. How can I synchronize data bi-directionally in both of the tables?" I engaged in the discussion, suggesting solutions and shooting down other solutions. During the same time period, I was at a customer meeting where it was clear that they had put the cart before the horse with respect to their infrastructure, application and, in particular, security. As I was working these two situations together, it occurred to me how they really define what I think is a very common problem in IT - putting the cart before the horse. In fact, I'd say that in many cases we put the cart several MILES before the horse.

First, let's look at the question that was asked on LinkedIn. Do you see any problems with what is being asked, or are you just chomping at the bit to provide a solution? If you're a member of the latter faction, you are not alone. It's easy to go there. We live in a day and age when the technology solutions, hardware and software, are simply amazing. We have solutions to problems that are exciting, and I think that many of us love the position of the designer, the solution provider and the creator. Indeed, we love to create. However, in both cases I think that solutions and deployment were premature. Important questions had not been asked and basic infrastructure requirements had not been addressed. Thus, the cart was put before the horse. So it is, with these two events, that the idea of a series on "The Cart Before the Horse" came to be. In this light, the first thing I thought I'd address is security. What is the cost of putting the cart (the application) before the horse (security architecture)?

In the zeal to create, we sometimes forget that everything we create must have a solid foundation. I take you back to your childhood and the three little pigs. The pigs that built the houses of straw and wood, I believe, might well represent a lot of IT organizations. Two of the pigs wanted to stand their houses up fast, for little money, and they did not properly perceive the threat in the form of the Big Bad Wolf. It leaves you wondering: why did they ignore the Big Bad Wolf's clear and present danger? The last pig - well, he planned well, probably spent a bit more money for the right foundation and architecture, and survived the test of the Big Bad Wolf. Sometimes I think it's just that there was not enough experience to understand that there was a big bad wolf, and he's out to get you. Sometimes, you understand that the big bad wolf is out there, but you don't know what you don't know. So you plan in ignorance. A little bit of research might have taught the pig who built his house of straw that straw does not withstand the normal forces of big bad wolf huffs and puffs.
Perhaps that pig might have hired a security consultant and found out what he didn't know. Maybe the second pig saw what the first pig did, and thought to himself: this does not look very strong. Thus, by looking at the perceived mistakes of the first pig, the second pig improved on the infrastructure of the first pig. However, it seems again that our second pig was working by the seat of his pants, not really knowing how to put together a structure that would survive big bad wolves. The third pig? I'm sure that he studied some architecture, learned about the physics of building and the loads that various materials would take. He probably talked to experts in the field. Most importantly, he took the Big Bad Wolf very seriously, and prepared for his arrival. He knew, at the end of the day, that it would be very hard to mitigate the failure of his little house as he was being turned over the spit. He had one opportunity to get it right, and he made the best of it.

How is it, then, that so many in IT, fully aware of the Big Bad Wolf and all the risks associated with him, plan as though being roasted on the spit once he huffs and puffs our house down is not a big deal? Here are my thoughts:

We feel safety in numbers - The Big Bad Wolf is out there, but he will get someone else. We feel that just because he's never attacked us before, we are somehow immune from his attack now. Perhaps our little pigs figured that a 3:1 pig-to-wolf ratio would save them. At the end of the day, that little bit of bad planning almost cost them their lives, and certainly cost them two homes.

We feel safety in our walled-up environments - Usually we have built this castle, with what we think is an awesome moat, around our systems in the form of firewalls and other network-related measures. The fact that we feel our systems are isolated from the outside world tends to reduce our concerns about exposure to risk vectors. Perhaps the three little pigs thought that with so many other pigs around, they would probably get overlooked. Perhaps they felt that the wolf would avoid them just because they had erected any kind of structure, looking instead for the easiest pig to eat. Either way, their ignorance about the nature of the wolf was very costly.

We feel pressured to deliver - The little pigs, no doubt, were in a hurry to get a shelter over their heads, and we are often in just as much of a rush to get our application into production. Sometimes we get target fixation on our delivery goals, and forget that there is a serious cost to delivering something that works, but that isn't secured. When something works well, when it works fast and all of the data is in good shape, that just means that the information a hacker can usurp is all that much better. In his mind, the hacker will thank you for all that you did to improve performance without impeding his work.

In the days when data breaches regularly occur, in quite public ways, I'm still amazed at the resistance I get when I come into a customer and point out all of the holes that exist in their security schema. Risk vector here, risk vector there, and so on - while the risks mount, the desire to acknowledge and mitigate these risks seems lacking too. Instead, there is tunnel vision on delivery and budgets and a myriad of other technical hurdles that face them. Somehow the notion of security seems to be an afterthought, or something to be dealt with once things are in place and people can use the system. This befuddles me.
Releasing an application without giving the security risks prudent analysis is very risky. The number of environments in which I'm aware of this practice just scares the willies out of me. The costs of these breaches have been the subject of a number of studies. A 2013 study done by the Symantec Corporation and the Ponemon Institute offers some guidelines as to the cost. These companies do a yearly report on the impacts of data breaches. You can find the report here. Note that the report studies the average breach - not the huge breaches that the press likes to paint pictures of. Any breach of over 100,000 records is removed from the report, because these outliers skew the report and are not representative of normal breaches. In the US, which had some of the most costly breaches, the cost of a breach was $188.00 per record. The average organizational cost of a data breach in the US was over five million dollars. If you complain that your budget isn't big enough, and your delivery timelines are not gracious enough to factor in additional security work, does your company have an extra 5 million dollars hanging around to pay for the breach that will occur later down the road? $188.00 per record is quite expensive if you have lost 100,000 records ($18,800,000). That's enough to break a smaller organization. Note that if you are in certain high-risk industries, like health care or financials, your average cost per record is quite a bit higher than the industry average ($233 and $215 per record on average).

If you are a larger organization, and you are dealing with millions of records with PCI data, then the costs are going to be much higher. Add to this that if you consolidate all of the causes of the breaches, the highest reason for a breach in the US was criminal attack. Fully 41% of attacks in the US are criminal in nature (as opposed to human error or system error). Add to this that the cost of a malicious breach is actually higher (around 25% higher) than the cost of other types of breaches. This means that someone is out there and they are actively looking for your data, exploiting the holes that you have left in place. Looking for your cart without a horse (or for the horse, since it might be of more value).

What is more interesting is that the report identified the factors that reduced the costs of breaches. In the US, the report indicates that the biggest single factor in reducing the cost of breaches was "…by having a strong security posture [and] incident response plan and CISO appointment," and "..from the engagement of consultants to support data breach remediation."

I find the first statement the most compelling. It's all about being pro-active. It's about acknowledging the risk, mitigating it from the beginning and having an organization that deals with security issues. The second statement is about accepting as quickly as possible that the breach has occurred and dealing with it. The benefits of these factors - a strong security posture, a response plan, a CISO appointment and engaging experts - all combine to reduce the average cost of a breach significantly. Yet, the cost of architecting a system before it's breached is going to be much less than the cost of all of those consultants who will be getting calls after the breach. Trust me on that one.

There is a lot more that the report discusses, and it's worth reading. In a world where the number of data records, especially those with private data in them, is growing considerably, the costs of data breaches are going to grow as well.
It's my guess that these costs will not scale in a linear way either, for a number of reasons. So, as I stand in a room and make the case for security, and as I think about the numberless breaches that have occurred in the public sector alone, I can't help but wonder what anyone is thinking when the response I get is equivalent to: "We will get to that after we get into production."

Indeed - the cart is before the horse. In fact, I'm not sure there really is a horse anywhere to begin with… I think the horse is a mythical being, just like the mythical environment is in many places and will be until it's too late. Then the old adage that you can pay me now - or you can PAY me later - really applies.

So…. What are you going to do?


A proper introduction!

I thought I'd present a more formal introduction for my first real entry. Then, we are going to get into the meat of things. For those of you that don't know me, my name is Robert Freeman. I live in Nevada and I work for Oracle in the Public Sector Engineered Systems Specialist group. Our group is at the forefront of the benefits, use and application of Engineered Systems in all public sector spaces in the United States and Canada. From federal, state and provincial governments to county and city government, we provide the expertise that is needed to analyze the database needs of IT organizations and provide solutions that work both now and into the future. If you are in the PS space and need help with regards to Engineered Systems, we are the place to go. My team is full of amazing people, each with highly tuned skill sets. Want to know about Exadata? We have experts there. Need some help with an Exalogic sale or presentation? We can help there. Exalytics... Exa* - whatever - we have the SMEs that support all of the engineered systems. What we don't know, we are capable of finding out in short order. Frankly, I'm in awe of the people I work with here at Oracle.

As for me, I've been working with Oracle databases for well over a quarter of a century. Man, that makes me feel old! I have done many things in my life, from managing database teams, infrastructure teams and application project management to working as an architect, modeler, DBA, Unix system admin and C programmer. Yes, I've worn a lot of hats in my life. Right now, I specialize in Engineered Systems and in particular the Oracle Database. That being said, you won't find a great deal here on the other Engineered Systems like Exalogic, but once in a while I might well write something. Also, I am the author of a number of Oracle Press books on various Oracle database subjects such as RMAN backup and recovery, Oracle database new features, GoldenGate and Oracle OCP certification. You can find all my books for sale on Amazon by following this link: http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=robert%20freeman%20oracle&sprefix=robert+freeman+o%2Caps&rh=i%3Aaps%2Ck%3Arobert%20freeman%20oracle

When I'm not working or writing, I'm spending time with my family. I have a wonderful wife, Carrie, and six great kids! As this blog lives on, keep an eye out for more on me and my family. My five oldest kids are 28 to 21, and then my youngest is just a bit over two. She is making me feel young again, though I swear that the pains I have now are different than the ones I had with the other kids! I also love to fly. I'm a certified instrument-rated private pilot. Next year my goal is to get my commercial rating and then start working on my instructor's rating. Ultimately, I'd love to be able to teach my wife and kids to fly. I'm also a second degree black belt and certified instructor, though my knees have somewhat abandoned me and I find it difficult to do the more complex kicks these days (though, given enough ibuprofen, I still can!). I can still throw a pretty good 360 degree butterfly kick. Still, I don't seem to have enough time for flying or butterfly kicks these days.

I'm getting ready to post part one of a several part series that I'm calling the Lullaby of Technology. I also have a few other ideas about things I want to post about. Also, you will find me posting regularly about Oracle databases and Exadata. I welcome your comments and thoughts. Please keep them clean and keep them kind to all involved.
I am very much a free speech advocate, so if you leave a heated comment that makes you look like an idiot, it's going to stay on here forever. I will remove anything that smacks of advertisement, hijacking or anything that might violate the TOS that you agreed to. :) So, greetings all. Let's talk about Oracle - sounds fun, doesn't it?

