It seems that most of the collateral we read about cloud blithely asserts that the first step in creating a cloud environment is to virtualize. Often we're not told the specifics until we read the details, where we discover that the advice is to shovel everything into virtual machines. Other times, the author simply leads with virtual machines as the entry point to cloud. In both cases, the proposition that a cloud must be based on virtual machines is simply taken for granted. And many people seem to have no qualms about this: they start their evolution to the cloud by shuffling their physical server silos into VM silos. Is that always the right thing to do?
Let's consider the idea that "more is better." A friend of mine is looking for a home to buy and debating different down payment vs. loan options. I'm reminded of when I was in the market and someone gave me this advice: since you can deduct home mortgage interest from your federal taxes, you should make the smallest possible down payment. This will maximize your interest payment, and therefore your tax deduction.
So my question was - if a bigger deduction is better, why not look for a loan with a high interest rate? Then I can pay more interest and get a bigger deduction!
The same fallacy is plaguing many discussions about virtualization in the move to cloud. Virtualization has many benefits, and comes in many forms. Assuming that you should virtualize as much as possible - i.e., deploy everything in VMs - leads you down a path that will simply replace your physical silos with virtual silos. If you want to simplify your environment and make better use of pooled resources, consider the virtualization available in the applications you are deploying. With a product such as the Oracle Database, you'll discover that features and options such as Database Resource Manager, Instance Caging, and Oracle Multitenant handle the vast majority of use cases you thought you needed VMs for - without the added layers to deploy and manage.
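As a sketch of what that in-database virtualization looks like, Instance Caging is enabled by capping an instance's CPU count and activating a Resource Manager plan (the CPU value below is illustrative, not a recommendation):

```sql
-- Cap this instance's CPU usage (value is illustrative)
ALTER SYSTEM SET cpu_count = 4;

-- Instance Caging takes effect only while a Resource Manager
-- plan is active; DEFAULT_PLAN ships with the database
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';
```

With caps like these, multiple instances sharing one physical server can be isolated from one another's CPU demands without a hypervisor in between.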