By mrbill on Mar 14, 2011
Well, it appears that the old blahgs are to become new blogs. I'll be scribbling again soon enough.
A great trip, amazing weather, decent food, and fun projects. Nothing beats sitting on a beach at 10pm with a nice 20C breeze coming in off of the Mediterranean, eating hummus and a kabob by the light of a candle in a paper bag. In January.
The primary purpose of the trip was to look at some modernization opportunities, mostly older mainframes. It has been many years since I have heard acronyms like ADS/O, IDMS, CICS, and BMS used exclusively for hours of conversation. Fun stuff. The Clerity guys have some amazing tools and software for rehosting, and BluePhoenix makes the data and application conversions relatively pain-free. These guys definitely make my job easier, as I can concentrate on the IT services side of the rehosted system, running natively on Oracle under Solaris, with proper job and resource control, fully integrated into the "Open Systems" side of the customer's IT operations.
On the downside, I had to give a lesson in the three steps of customer service to a flight crew after the AC's air exchanger dripped on me for the first hour of my 12+ hour flight home. Step one, fix my pain (hand me paper towels to dry off my stuff and my shoulder). Step two, make sure it doesn't continue to impact the customer (move me to a new seat). Step three, make sure it doesn't happen again (put the issue on the maintenance log for the aircraft to make sure it gets fixed before the next customer gets rained on). I was amazed that the crew didn't understand these three basic steps to happy customers. Of course, that would probably explain why United flights hopping through Frankfurt are full, while Continental direct flights have plenty of empty seats. Customer service and customer experience have a direct impact on customer loyalty. (sigh.) And yes, I am 6'5" (192cm) tall, and am not willing to move back to the "no legroom section" of the plane, from my aisle to a middle seat, against a bulkhead which will not recline. Moving a customer because you have inconvenienced them should be a move to an equal or better situation, not a punishment or additional inconvenience. (double sigh.)
But mostly, I'm just waiting for the exciting days ahead. No matter what happens, life will surely be "interesting" to say the least. Buckle up, it's going to be a wild ride.
To back up a couple steps and frame these entries a bit... There are three basic categories of efficiency in the datacenter: Supply, Demand, and Business (or Process). All three, and all of the subcategories under them, should be tracked.
Obviously, supply components have cost. Reducing the consumption and the waste of the supply components is often the focus of efficiency efforts. We see tons of marketing and geek-journal information about new, super-efficient power distribution systems, transformer-less datacenter designs, and there is much effort in the industry to make these pieces less wasteful. For those who haven't really dabbled in the field, you would be amazed at how much power is wasted between "the pole and the plug". UPS systems, PDU's, power conversions, etc. all mean loss. Loss means power that you are paying for that never makes it to a useful function of processing.
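To put a rough number on "the pole and the plug", here is a back-of-the-envelope sketch. The per-stage efficiency figures are illustrative assumptions, not measurements from any particular datacenter; plug in your own equipment's numbers.

```python
# Rough model of power loss between "the pole and the plug".
# These per-stage efficiencies are assumed, illustrative values only.
STAGE_EFFICIENCY = {
    "utility_transformer": 0.98,
    "ups_double_conversion": 0.92,
    "pdu": 0.97,
    "server_power_supply": 0.90,
}

def delivered_fraction(stages):
    """Fraction of purchased power that actually reaches the components."""
    frac = 1.0
    for eff in stages.values():
        frac *= eff
    return frac

frac = delivered_fraction(STAGE_EFFICIENCY)
print(f"delivered: {frac:.1%}, lost before any processing: {1 - frac:.1%}")
```

Even with each stage looking respectable on its own, the losses compound: in this made-up chain, roughly a fifth of the power you pay for is gone before a single instruction executes.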
The downside: the supply categories are difficult and usually expensive to change (including labor and asset categories). The real efficiency gains that are often overlooked or given lower priority are in demand and business. Demand includes the workloads and software running in your IT environment. Can you imagine the cost savings if you were able to "compress" your consumption by 30%? Maybe 50%? Capacity management, virtualization, and less obvious things (to most people) like fixing bad and under-performing code can be huge wins.
In my experience, customers (and "efficiency consultants") rarely look at the business processes, goals, and overall flow that drives the processing in the first place. Often the flow of demands, and the scoping of demands from the business can have huge impact on consumption. Do you really need 50,000 batch jobs that represent every report that has ever been run on the system? Do you really need to save those results? Do you really need to run and distribute all those reports, and to how many people? How many manual processes are involved in your IT management where the answer is always "yes", or the process is mostly repeatable and could be automated?
Examining supply, demand and business in a structured fashion, and asking "why are we doing that anyway?" can have huge returns with minimal investment. There is always "low hanging fruit" in efficiency. It is just plain dumb to keep throwing money away for the sake of tradition, habit, and legacy operations.
Slight tangent, but still on the efficiency theme. Last year, I was doing some work with a large Internet retailer, and came across a few epiphanies that apply to these monologues.
We were addressing storage, with a view toward a "private storage cloud" as the intended goal. This customer was very forward-thinking in much of their environment, with excellent processes in place for most of their IT environment. Some pieces seemed to be stuck in the 90's though, and were based on "traditional operating practices" rather than the realities of their business.
Simple example: What happens when the table spaces of your database are running out of room? Traditionally (and in this customer's environment), a monitoring agent on the server sent an alarm to the service desk, and opened a trouble ticket. The ticket was then assigned (by hand) to one of the DBA's, who could go into the database management interface and "add tablespace" using some available disk space. What happens when you run out of available disk space? A trouble ticket is opened for the storage group to allocate another chunk of storage to the system. What happens when the storage array starts running short of available chunks? A trouble ticket is opened for the manager of the storage group to either add capacity through a "Capacity on Demand" contract, order an upgrade, or purchase an additional storage array.
What's wrong with this picture? Nothing according to the classic flow of IT management. In reality, there are way too many people doing manual processes in this flow. Simple business question: If the system taking orders from your customers is running out of tablespace to store the orders, are you really ever going to say no to adding more disk space? No? Then why do we have a series of binary decisions and checkpoints on the way to satisfying the demand?
Automation and self-service are key components of the cloud "ilities", but can also stand alone as an efficiency play in almost every IT environment. We often execute traditional processes and practices rather than mapping the real business needs and constraints against the technology capabilities. In this example, a few scripts to automate adding tablespaces, creating new tablespaces, adding filesystems, adding and assigning storage chunks, and pre-provisioning those storage chunks in a standardized fashion saved countless hours of human intervention. Each of those activities is simple and repeatable, and the scripting reduces the opportunities for error. Throw in informational alerts to the interested parties, instead of trouble tickets requiring action, and the efficiency of this little piece of the puzzle is greatly improved.
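The core of that automated flow fits in a few lines. This is a minimal sketch: the threshold, the chunk size, and the notify() hook are all assumptions for illustration, and a real version would call your database and storage-array APIs rather than shuffling numbers around.

```python
# Sketch of the automated escalation chain described above.
# EXTEND_THRESHOLD, CHUNK_GB, and notify() are illustrative assumptions.
EXTEND_THRESHOLD = 0.90   # auto-extend when a tablespace is 90% full
CHUNK_GB = 50             # size of each pre-provisioned storage chunk

def check_tablespace(used_gb, allocated_gb, pool_free_gb, notify=print):
    """Extend the tablespace from the free pool instead of opening a ticket.

    Returns (new_allocated_gb, new_pool_free_gb).
    """
    if used_gb / allocated_gb < EXTEND_THRESHOLD:
        return allocated_gb, pool_free_gb            # nothing to do
    if pool_free_gb >= CHUNK_GB:
        # Informational alert to interested parties, not a trouble ticket.
        notify(f"INFO: auto-extended tablespace by {CHUNK_GB} GB")
        return allocated_gb + CHUNK_GB, pool_free_gb - CHUNK_GB
    # Only here -- the pool itself is exhausted -- does a human need to act.
    notify("ACTION: storage pool exhausted, order more capacity")
    return allocated_gb, pool_free_gb
```

Notice where the humans are: nowhere, until the one decision (buying more storage) that actually requires one. Every step before that was a binary decision whose answer was always "yes".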
Some parts do remain mostly "traditional", such as the procurement of new storage assets, but even those business functions are streamlined as part of this evolution. Once IT realizes that the business needs follow a predictable pattern of simple actions and reactions, automation becomes a simple task. After all, what Internet retailer wants a potential customer to click on the "check out now" button and get some weird error message because of a storage shortage, when they just spent an hour filling the cart with all of those items that they need? It isn't just a lost transaction, it could be a lost customer.
Well, back to my day job for now.
Continuing with the ramblings of my last entry, since I am up late with children and their dozens of excuses as to why they are not asleep...
Now that we have defined the "ilities" that we want from our private cloud efforts, we can examine each of them and look for obvious opportunities with high returns. People cost, IT CAPEX, IT OPEX, energy costs, reducing operational complexities, improving service levels, reducing risk, and any other opportunities that we can target and quantify. One major rule here is that when we pick a target and an approach, we must also have a "SMART" set of goals in place.
For the .21% of the readers who have never heard the SMART acronym before, it stands for Specific, Measurable, Attainable, Realistic, and Timely. In other words, for every action that we plan to take, or every improvement that we want to deploy, we must have a measurable set of criteria for success. It amazes me how many IT managers do not know the "average utilization of server systems in the data layer during peak shift". Yeah, that is pretty darn specific, but ask yourself, do you know what your company's utilization is during the prime workday cycles? Bingo. We need a baseline for whatever metrics we choose to measure success for each project and change to our IT operations.
Sidenote: If the answer to the previous question was "Yes", and the utilization is anywhere above 30% during workday peak shift hours, I am impressed.
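Getting that baseline doesn't require fancy tooling to start. A sketch of the measurement, assuming hourly samples and a 09:00-17:00 peak shift (both assumptions; feed it whatever your monitoring tool actually exports):

```python
# Baseline metric: average CPU utilization during peak-shift hours.
# The shift window and sample data below are illustrative assumptions.
PEAK_SHIFT_HOURS = range(9, 17)  # 09:00 through 16:59

def peak_shift_utilization(samples):
    """samples: iterable of (hour_of_day, cpu_utilization_percent)."""
    peak = [util for hour, util in samples if hour in PEAK_SHIFT_HOURS]
    return sum(peak) / len(peak)

# 24 hourly samples from a hypothetical, mostly idle server
day = [(h, 25 if 9 <= h < 17 else 5) for h in range(24)]
print(f"peak-shift average utilization: {peak_shift_utilization(day):.0f}%")
```

Run that across the server fleet for a few weeks and you have a defensible baseline, which is the "Measurable" in SMART.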
So where are the obvious targets? I have already hit on one of them, system utilization and idle processing cycles. Systems consume electricity and generate heat (Those servers are actually very efficient space heaters), resulting in cooling requirements and air circulation requirements, and odds are that a majority of the processing potential is not being used for processing.
Consolidation? Maybe. Capacity Planning? Definitely. Capacity Management? Absolutely! Consolidation is a valid target project, but is usually approached as a one-time, isolated event. Consolidation does not necessarily change the behavior that caused over-sizing to begin with, or help when workloads are seasonal or sporadic. These variable workloads most often result in systems that are sized for "peak load", with lots of idle cycles during off hours and off-days (and sometimes off-months).
The first step to a consolidation is capacity planning, including the key step of generating a baseline of capacity and consumption. If, instead of treating this as a one time event, we start monitoring, reporting, and trending on capacity and consumption, we have now stepped into the realm of Capacity Management. We can watch business cycles, transactional trends, traffic patterns, and system loads and project the processing needs in advance of growth and demands. What a concept.
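Projecting needs in advance of demand can start as simply as a trend line. A sketch, with made-up numbers, that fits a least-squares trend to monthly consumption and projects when it crosses installed capacity (real capacity management tools do far more, but this is the core idea):

```python
# Capacity management in miniature: linear trend on consumption history,
# projected forward to capacity exhaustion. Sample numbers are made up.
def months_until_full(history, capacity):
    """history: consumption per month, oldest first. Least-squares trend."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking -- no exhaustion in sight
    return (capacity - history[-1]) / slope

# Growing roughly 10 TB/month with 40 TB of headroom left
print(months_until_full([100, 110, 120, 130], capacity=170))
```

The answer ("about four months") is exactly the kind of knowledge that turns a panicked purchase order into a planned one.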
Now imagine a world where we could dynamically allocate CPU resources on-demand, juggle workloads between systems with little or no downtime, and use systems of differing capacity to service workloads with differing demands. Wow. That sounds like one of those "ilities" that we were promised with that "Cloud" concept. Dynamic resource allocation and resource sharing, possibly with multi-tenancy to maximize utilization of compute resources. Yep. Sure is. Ignoring the "Cloud" word, let's look at how we can implement this "Cloud-like capability" into our existing IT environment without bringing in a forklift to replace all of our systems and networks, and spending billions.
Breaking down those technology pieces necessary to execute against that plan, we need Capacity Management (TeamQuest, BMC, Tivoli, pick your tool that does capacity and service level management). The tool doesn't matter. The process, the knowledge generated, and the proactive view of the business matter. Caveat: Define your needs and goals *before* buying tools that you will never fully implement or utilize!
So now we know what our hour-by-hour, day-by-day needs are, and can recognize and trend consumption. We can even start to predict consumption and run some "what if" scenarios. The next step is dynamic capacity, which, in this context, includes "Resource Sharing, Dynamic Allocation, Physical Abstraction (maybe), Automation (hopefully, to some degree), and Multi-Tenancy" from our right-hand "Business Drivers" column from my last blahg entry. Sure, we can juggle and migrate these workloads and systems by hand, but the complexity and risk of moving those applications around is ridiculous. We need a layer of physical abstraction in order to move workloads around, and stop thinking of "systems" as a box running an application.
There are many ways to do this, so pick the solution and products that best fit your IT world. You can create "application containers", or standard operating environments for your applications, and juggle the "personalities" running in the physical machines. Not easy. Most apps will likely not move in easily. Still a good goal to reduce variance and complexity in your environment. In this case, not a quick hit, as you will end up touching and changing most of your applications.
The obvious answer (to me and 99.6% of the geeks reading this) is to employ virtualization to de-couple the application from the operating environment, and the operating environment from the physical hardware (and network, and storage). Solaris Containers, LDOMs, VMware, Xen, xVM software in OpenSolaris, Citrix, fast deployment and management tools, the options and combinations are all over the map. The deciding factors will be cost, capabilities, management tools (monitoring, reporting, and intelligence), and support of your operational and application needs. The right answer is very often a combination of several technology pieces, with a unifying strategy to accomplish the technical and business goals within the constraints of your business. There are many of us geeky types that can help to define the technology pieces to accomplish business goals. Defining those business goals, drivers, and constraints is the hard part, and must be done in IT, "the business", and across the corporate organization that will be impacted and serviced.
There, we have some significant pieces of the "private cloud" puzzle in place. If the server systems were severely under-utilized, and we were able to move a significant number of them into our new "managed, dynamic capacity" environment, we should be able to realize power, cooling, and perhaps even license cost savings to balance the cost of implementation. One interesting note here: if I have "too many servers with too many idle cycles" in my datacenter, why should a vendor come in leading with a rack full of new servers? Just wondering. Personally, I would prefer to invest in a strategy, develop a plan, identify my needs and the metrics that I would like improved, and then, maybe, invest in technology towards those goals.
Just the late night ramblings of an old IT guy.
Next entry will likely talk more about the metrics of "how much are we saving", and get back to those SMART goals.
Numerous conversations and customer projects over the past few weeks have motivated me to exit from the travelogue and world adventures entries here, and get back to geeky writing for a bit.
Yes, cloud. We all "get it". We all see Amazon and the others in the public cloud space doing really cool things. Somewhere along the way, some of the message got scrambled a bit though. With any luck, I'll clear up some of the confusion, or at least plant some seeds of thought and maybe even some debate with a couple monologues.
Non-controversial: Let's talk about three kinds of clouds. Most folks in the industry agree that there are Public clouds like Amazon's AWS, Joyent's Public Cloud, and GoGrid. That one is easy. In theory, there are "private clouds", where the cloud exists within the IT organization of a customer (note that I did not say "within the four walls of the datacenter"), and "hybrid clouds" that allow a private compute infrastructure to "spill over" to a public cloud, as expandable capacity, disaster recovery, or dynamic infrastructure.
No hate mail so far? Good, I'm on a roll.
So do private clouds exist? Maybe. If we hop into the WayBack Machine, John Gage said it best: "The Network is the Computer." Let's dive a little deeper into what makes a computing infrastructure a "cloud":
Unfortunately, most people start on the left side, with the technical details. This rarely results in a productive discussion unless the center and right columns are agreed on first. We put a man on the moon; I think we can solve content structure and locking/concurrency in distributed and flexible applications. We don't want to start a cloud discussion with APIs, protocols, and data formats. We want to start with business drivers, and justifiable benefits to the business in costs, value, and security.
The center column describes, at a very high level, the goals of implementing a "private cloud", while the right column lists the control points where a private cloud architecture would create efficiencies and other benefits. If we can agree that the center column is full of good things that we would all like to see improved, we can apply the business drivers to our IT environment and business environment to start looking for change. All changes will be cost/benefit, and many will be business process versus technical implementation conflicts. For example, is your business ready to give all IT assets to "the cloud", and start doing charge-backs? In many corporations, the business unit or application owner acquires and maintains computing and storage assets. In order for a shared environment to work, everyone must "share" the compute resources, and generally pay for the usage of them on a consumption basis. This is just one example of where the business could conflict with the nirvana of a private cloud. You can imagine trying to tell a business application owner who just spent $5M on IT assets that those assets now belong to the IT department, and that they will be charged for usage of those assets from now on.
So is private cloud impossible? No. Is private cloud achievable? Probably. Does private cloud fit your current business processes and needs? Probably not. Do the benefits of trying to get there outweigh the hassles and heartaches of trying to fit this square peg into the octagon shaped hole without using a large hammer? Most definitely.
Some attributes and motivators for cloud computing have huge benefits, especially on the financial side of the equation. Virtualization definitely has some great benefits, lowering migration downtime requirements, offering live migration capabilities, enabling new and innovative disaster recovery capabilities, and allowing workloads to "balance" more dynamically than ever before. Capacity planning has always been a black art in the datacenter, and every model only survives as long as the workloads are predictable and stable. How many applications have you seen in your datacenter that don't bloat and grow? How many businesses actually want their number of customers and transactions to remain stable? Not many, at least not many that survive very long.
So, to wrap up this piece (leaving much detail and drama for future blahg entries)... Private clouds probably don't come in a box, or have a list of part numbers. Business processes and profiles need to adjust to accommodate the introduction of cloud enabling technologies and processes. Everyone, from the CIO, to IT, to legal, to business application owners, has to buy into the vision and work together to "get more cloud-like". And finally, the business discussion, business drivers, and evolutionary plan must be reasonably solid before any pile of hardware, software, and magic cloud devices are ordered. The datacenter will go through an evolution to become more "private cloud", while a revolution will likely mean huge up front costs, indeterminate complexity and implementation drama, and questionable real results.
Following China, and a quick stop at home for laundry and family time, I ended up back in India. This time it was Bangalore (or Bengaluru to the natives) for some training sessions.
Imagine my surprise when strolling down the sidewalk, jetlagged out of my head, when I glance to my left and see amongst the bushes:
Wake up call... This was actually one of the "outside" displays at the science and technology museum. A very cool place.
While I was in Bangalore, I got to spend time with one of our uber-geeks in India, Neeladri. Neel was recently announced as one of the 30 or so Sun customer facing engineers to join the ranks of "Principal Field Technologist", along with Mitesh from our India GSE contingent. Congrats Neel and Mitesh!
Neel rocks. Work, play, or dancing with the anglos and teaching them "how to dance Hindi style"... Neel rocks.
My compatriot, Dr. Blumenthal, and I took a stroll of sorts. We coerced a local taxi driver to take us to the Mutianyu section of the Great Wall. The Badaling section of the wall is more popular with tourists, and closer to central Beijing, but I had heard great things about Mutianyu, so away we went. The taxi, for about 6 hours and 180 km, ran us about $100 or so. Not a bad deal.
This was the "before" view, looking over toward the non-restored "no trespassing" section to the west. The top of the next hill is the beginning of "no man's land". You can climb the mountain to get to this point, but we decided (and rightly so) to take the gondola to the outlook platform. If you look closely, you can see evidence of the gondola line creeping up the hillside off in the distance.
This is the mid-point of our hike, give or take a kilometer. These outpost buildings are scattered every kilometer or so, with neat features like ramps worn down to steps by 1000 years of footsteps, and little downward sloped holes in the wall to shoot Mongolian hordes through. There is even an "upstairs" section with places to attack the "downstairs" in case the outpost gets overrun. Yes, that is a HUGE uphill section ahead.
Now facing east, and way off in the distance is the ski lift that we were planning on taking down. Yes, there are a lot of steps and hills between here and there. At this point, your legs are already burning, and you really need a break. This is a good time to start re-thinking your decision to walk from gondola to ski lift, but either direction involves A LOT of uphill climbing. Notice the cannon in the lower left-hand side of the picture. Remember that the Chinese invented gunpowder, much to the surprise of the Mongol hordes.
Once we made it to the ski lift (and noticed the very cool luge sled section to get down as well), we discovered that our return ticket was for the gondola only. There is a separate ticket for the ski lift, available at the bottom of the mountain. Yeah, so walk back to the gondola? No way. It turns out that you can purchase a one-way ticket for the lift or the luge at the upper station. Thank goodness. I was not above bribing my way down the hill!! My legs were so tired that I talked Dr. Brad into taking the last trip up the station steps to buy the tickets alone. I just couldn't do another uphill climb!
My wordpress mirror... Still working out the kinks and formatting.
I ended up Q4 in China, on a whirlwind three week tour that included many fun adventures. My wife joined me for the first piece, and Dr. Blumenthal joined me for the conclusion. We visited several cool customers, and talked about cloud computing, realities in deployment, and helped bring in those last couple deals in the fiscal year where we could. My wife learned to eat with chopsticks, how to say "Ni Hao" and "Boo Yao", and shopped in the wild typhoon of bargaining known as the Beijing Pearl Market.
We started off in Beijing, where I had a few meetings to take care of and a couple customers to follow up on. We got to see Tiananmen Square at night, and wander around aimlessly to get over the jetlag.
We then grabbed a flight down to Shenzhen, a nice central location for the next piece of my journey. From Shenzhen / Shekou, you can take the ferry to Hong Kong or Macau in about an hour. There is also a high speed train from Shenzhen to Guangzhou, where I needed to see a few customers. The ferry costs about $25 or so, and gets you to Hong Kong Central in about 45 minutes. The ferry terminal is quite nice as well.
We didn't starve. While poking around Shenzhen, I switched my wife from the more up-scale, foofy local restaurants to the more down-to-earth and authentic local restaurants. Most of them don't have English menus or forks. We were quite the celebrities in some of these places, as westerners are still a rarity outside of the big cities and the more "western" establishments.
Hong Kong, and the visit to the Hong Kong Sun office, was quite fun. I got to meet some of our Greater China teams that work out of the HK office in person. I hadn't been through HK in many years, and the changes were quite significant. It appears that the PRC government has done an amazing job of moving some of the shopping and tourism into the mainland, leaving the "western business" pieces in HK. Interesting, given the state of the current western economic outlook! I left about $30 behind in HK for lunch and taxis, and found absolutely no shopping worth spending my hard-earned American dollars on. Definitely a change for me in the past 10 years. Also interesting was this sign on a little park in the Central district.
After Shenzhen and Guangzhou, it was back north to Beijing to close out the trip. I got another weekend in Beijing, and made it back to Tiananmen Square in the daytime. Very cool place, full of history and emotion. There were hundreds of foreign tourists there, and thousands of Chinese tourists wandering, taking pictures, and soaking in the museums and libraries.
We even wandered through the Forbidden City, and stumbled across this little gem. Since there were no Jaguar dealerships in China in the 1970's, you can presume that this little piece of work was in the service of a government official for most of its life. It is surprisingly well preserved, and in much better shape than most American (or British) counterparts!
More later, I still have to write a bit about how 3-4 miles on the Great Wall is definitely not the same as 3-4 miles anywhere else on earth. I think there is a rift in the fabric of space-time on that thing!
The Q4 and end of FY whirlwind tours are completed, and I'm taking some time with the family at home. I thought it would also be a good time to catch up on the pile of pictures that are stacking up in my mobile phone, and write some blahg fodder. The first few entries will be home and travelogue, but I hope to get some more techie/geeky stuff up as well and get that off of my plate.
Mr. Cole has decided on two careers. Professional bowler (he is a natural lefty), and race car driver. He insists on wearing a helmet, and "eye protection" when driving his new race car around the driveway. Unfortunately, he drives in circles, so it looks like NASCAR might be the career choice. I'll keep working on the "road courses" and open wheel configurations though:
Cole explains the important aerodynamic features of the rear spoiler.
Cole also found an old telephone in the garage, and was fascinated with the "old school" rotary dial. He spent about an hour "talking to Nanny".
Shelby and I spent some time out shopping while I was home. We got all the goodies for soccer camp, and took a little break on the silly little rides at the mall. While she enjoyed the "NASCAR Experience", it was the simple horse ride that really lit her up. This one cost me a whole dollar, 25 cents at a time.
It was about five days at home, and then back on the road. I have been through China, India, and Mexico since my last blahg entry. Pictures of those excursions coming soon.
I've been in India for about 10 days so far, meeting with customers and partners. Our India crew is strong, with business doing well and some really good projects in flight. Kudos to Neel, Satish, Bhaskar, Suresh, and Bobby for making me feel welcome and keeping me busy with interesting challenges.
I thought that China was busy with construction and dust, but India puts it to shame on "improving infrastructure". There is a ton of building going on, but the really impressive pieces are in supporting infrastructure. There are new metro lines going in, flyovers (overpasses to us Americans), and real progress being made to support the new economy of more cars and trucks.
Outside my hotel in Delhi (in the Diplomatic Enclave), I met a few new friends:
And if my new friends and I get into any trouble, I'm pretty sure that we can outrun the local police... on foot... without really breaking a sweat:
Delhi does have some righteous coffee for keeping the jetlag at bay with caffeinated infusion.
I did find a new girlfriend in Chennai. She is kind of quiet, but a very solid woman of good standing:
My favorite way to get around over here is "auto-rik", or auto-rickshaw. Imagine if someone took an old Honda motorcycle, and had a 100 kph accident with a Malay rickshaw. If Rube Goldberg took the pile of leftover bent parts and put it back together into a single vehicle, this is what you would get. Especially fun after a few beers, late at night, racing back to the hotel. Offer the rik driver 100 rupees if he beats your co-workers back to the hotel, and you are sure to have an exciting ride that eclipses the best western roller coasters!
Kingfisher is not only my favorite airline here (truly top notch for domestic travel within India), one of my favorite Formula One sponsors (go Force India!), a strong cricket supporter, but they also make a quite nice refreshing alcoholic beverage. Vijay Mallya has quite a set of quality brands going.
I'll dump more pics and write more when I return home next week. This has been a great trip!
I have discovered something interesting (to me anyway). The biggest industry in Shanghai appears to be building Shanghai. I have to dust my shoes every day after work. In the 3km of sidewalk between my hotel and the Shanghai Sun office, there are 8 places where the sidewalk is blocked off because someone is digging, laying pipe, or pouring concrete. Everywhere you look, there is construction. Outside the Sun office, they have been doing pipe repairs in one direction this week, while they are laying new cables and conduit in the other direction. This makes for one giant ditch and many interesting smells in the morning.
In this picture, you can see the backhoe digging (again) more stuff out of the ditch in front of the office, while in the background, you can see construction cranes putting up new buildings. I don't know what the average lifespan of a building here is, but they are definitely not in a "slow down" due to any economic crisis! Also note the "interesting" methods of cabling and wiring for electricity and cable TV. If you have some spare length leftover, just coil it up and hang it from the pole. I have been bumped on the head several times this trip with low hanging wires while walking down the sidewalk. I am a bit tall for China at 192+ cm, but it does scare the snot out of you on a dark street at night!
They do have some amazing buildings here though, especially in the new northern section of Pudong, across from the old Bund river side district. I didn't get pictures of the new Grand Hyatt or Park Hyatt, or the building that looks like a giant neon sail. Perhaps tonight. They are much more interesting with the lights on. For some reason, Shanghai folks love to put fancy crowns and funky accessories lit up with bright lights and neon on their buildings. Definitely makes it easier to navigate. "Go down Hengshan until you get to the big crown, make a left and continue until you get to the 4 teddy bears hanging from a blue neon beach ball, then go right towards the big neon domino tile with the sparkly lights on top. If you get to the KTV with all the skimpily dressed girls bugging you to come inside for lady-bar or good messadjeh, you went too far."
Tom Waits and Alphaville might be "Big in Japan", but my friends are artwork. Well, maybe they are just spackle to gloss over damage to walls by careless movers, but I'll give them the benefit of the doubt. First on the Shanghai Sun office wall of fame, we have Principal Field Technologist, and all around super-geek, Jim Fiori. Notice that the poster used to cover the wall damage has also been damaged by careless movers.
Next up, we have Nigel Hawkes, world renowned uber-geek of Consolidation, Migration, and all of those other things that save customers money, and reduce idle assets in their datacenters. Also note the damage on the edge of Nigel's poster, more evidence that some movers just don't learn from their mistakes.
Well that's all for today, just a quick update between calls and meetings. Can someone please send the Shanghai Sun office some new posters to cover the old torn posters that are covering the damaged walls? I think I have some old Sun SPARCServer 690 and HyperSPARC posters somewhere in my garage...