Sunday Sep 27, 2009

Another new U-verse feature

Word of mouth is the best marketing. Literally in this case. My neighbor told me another new feature was delivered to AT&T set-top boxes that lets you play media content from your home PCs on your TV. Already of course I could sync my iPod and carry it down to the docking station in the living room. Or I could write music and photos to a USB stick and plug it into the TV to see and hear them. Or I could upload content to Yahoo and view it through U-verse.

This is different. The set-top boxes simply connect to Windows Media Player on PCs through the home router to access content without sending anything outside the home. Well, this won't work for me, I thought, because I don't run Windows natively. I run Windows XP under the VirtualBox virtualization software.

I only had to install Media Player 11 (version 9 is too old) and select "bridged adapter" networking for my Windows virtual machine. (I doubt that NAT would work, since the virtual machine wouldn't be visible on the network, but I didn't try it.) Now all my TVs can browse music and photos on two PCs. Select background music and start a slide show of today's snapshots. Really nice.
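For reference, the same bridged-adapter setting can be applied from the command line with VBoxManage; the VM name "WinXP" and host interface "eth0" below are placeholders for your own machine's names:

```shell
# Attach the VM's first NIC to the host's physical interface so the guest
# appears as its own device on the home LAN, visible to the set-top boxes.
# (VM and interface names here are examples, not my actual configuration.)
VBoxManage modifyvm "WinXP" --nic1 bridged --bridgeadapter1 eth0
```

With NAT, by contrast, the guest hides behind the host's address, which is why the media servers would likely be undiscoverable.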

Wednesday May 27, 2009

Paula Smith is SPECtacular

Another SPECtacular award from the SPEC annual meeting: Paula Smith (VMware) was honored for her tireless, competent, and patient work managing the SPEC office and the people there. Paula consistently exhibits what makes SPEC a unique place. The attention and enthusiasm she brings to her volunteer work make her a pleasure to interact with. She goes above and beyond in everything she does, and is often able to turn emergencies into opportunities. Most impressive is how she maintains this over time and in every interaction, despite many competing pressures for her attention. Beyond this management work, she also manages to handle the organizational and technical work of chairing the Virtualization committee, and of course her day job at VMware.

Thursday May 14, 2009

SPEC awards, virtualization

More 2009 SPECtacular awards. SPEC's forthcoming virtualization benchmark will provide meaningful metrics of hardware and software performance in data center consolidation. As complex as this benchmark is, running several different benchmarks together in virtual machines on a host system under test, the code is only half the story. As with all benchmarks, the workload is vital: it must represent realistic usage scenarios so that performance improvements made on the benchmark also benefit real-world users. And the run rules are vital, needing to accommodate technology improvements over the lifetime of the benchmark while precluding unrepresentative optimizations that exploit rule loopholes (or what the layman might call “cheating”). There is spirited debate from companies representing rather diverse user communities, all with an interest in seeing that their customers' needs are addressed by the benchmark. In the end, when this group of top engineers reaches a consensus, you know they've come up with a benchmark that is as rock solid as it is possible to make. From among this great team of partners and competitors, three were singled out for SPECtacular awards:

Andrew Bond of HP always steps forward when someone is needed to test new code, features, or parameter tunings. He performed many experiments whose results showed the committee the sensitivity of the benchmark to various parameters, sizes, and configuration options, so that the right choices could be made for fair benchmark comparisons. He also created scripts to set up and configure new guest VMs for each workload.

Chris Floyd of IBM improved and tailored the mail server and application server workloads for the new benchmark. He's revamped these workloads several times to improve the I/O profiles and add burstiness to the application server transaction injection. He helps the other developers at regular on-line coding sessions, explaining new features, and resolving problems. He even helps out when on vacation.

Greg Kopczynski of VMware developed a (necessarily) complex and feature-rich harness for the benchmark. He responds to countless pleas for help, assistance, debugging, etc., in true SPEC fashion, without asking whether the help is for a partner or a competitor. He added burstiness to the web server workload. And he integrates new code and changes from all the developers for each development kit revision.

Thanks for your great efforts!

Tuesday Feb 19, 2008

SPEC awards, virtualization

Virtualization is such a hot technology that Dilbert is poking fun at it: 2/12, 2/13 and 2/14. No wonder, since IT centers must use both hardware and energy more efficiently. At SPEC's 2008 annual meeting in San Francisco SPECtacular awards were given to members of the Virtualization committee. As always, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

SPEC is working on a benchmark to model server consolidation of commonly virtualized systems such as mail servers, database servers, application servers, web servers, and file servers. Requiring a very different technical approach than SPEC's traditional benchmarks, virtualization has brought unique challenges. SPEC recognizes these engineers for outstanding contributions in meeting those challenges:
  • Andrew Bond, HP
  • Cathy Reddy, Unisys
  • Chris Floyd, IBM
  • Fred Abounador, AMD
  • Greg Kopczynski and another engineer, VMware
  • Nitin Ramannavar, Sun
  • Stephen Pratt, Communigate
  • and an engineer from a company so modest that they don't even want to accept public thanks.

Thursday Jan 10, 2008

Balance of Power

Electrical, not political. A DOE study found that - duh - if you give consumers information about the time-varying cost of electricity, they will save money by shifting some power usage from peak to off-peak times. Consumers in the study lowered their electric bills by 10% and lowered their peak demand by 15%. This is a big deal because although the operating cost component of electricity (fuel) depends on the total energy consumed, the capital cost component (generating plants) depends on the peak power generation.
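As a toy illustration of why time-of-use pricing changes behavior (the rates and usage figures below are invented, not from the DOE study), the bill drops even when total consumption stays exactly the same:

```python
# Toy two-rate tariff: shifting some peak load off-peak cuts the bill
# without reducing total energy used (all numbers invented for illustration).
PEAK_RATE, OFFPEAK_RATE = 0.30, 0.10   # $/kWh

def bill(peak_kwh, offpeak_kwh):
    return peak_kwh * PEAK_RATE + offpeak_kwh * OFFPEAK_RATE

before = bill(300, 600)    # $90 + $60 = $150
after = bill(255, 645)     # shift 15% of peak usage off-peak: $141
print(before, after)       # total kWh unchanged, bill 6% lower
```

The utility wins too: that shifted 45 kWh came off the peak, which is what sizes the generating plants.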

Solar power is particularly valuable to a utility because its peak production occurs in the middle of the day, when summer demand from air conditioning is highest. But there's another peak around 5-6 pm, when people come home from work and turn on appliances, and by then solar power production has fallen off. Thus adding photovoltaic power alone may not drastically reduce peak requirements for fossil fuel power plants.

Wind power along the California coast has an almost complementary generation curve to that of solar power, because of the onshore and offshore breezes in the mornings and evenings. Adding wind power alone may not drastically reduce peak fossil demand because the wind often dies down mid-day when the air conditioning load is highest.

But adding solar and wind power together could greatly reduce peak fossil demand, though perhaps not economically eliminate it entirely. Then if you added time of day metering to allow consumers to voluntarily shift their load, that would level even more peaks. Ditto various energy storage systems like the plan to use night time wind power to pump water back up a hydroelectric dam for use the next day, super capacitors, and plug-in hybrid cars. The key to effective and economical use of renewable energy is a balance of power supply with demand.
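A back-of-the-envelope sketch of this balancing (all curves below are invented for illustration, not real generation data) shows how complementary solar and wind peaks shrink the fossil capacity a utility must keep on line:

```python
# Six samples across a day: demand peaks mid-day (air conditioning) and
# again in the evening; solar peaks mid-day; coastal wind peaks morning
# and evening. All numbers are made up to illustrate the shape argument.
demand = [60, 70, 100, 95, 90, 65]
solar  = [0, 10, 35, 30, 10, 0]
wind   = [25, 15, 5, 5, 20, 25]

# Fossil plants must cover the worst-case residual demand.
fossil_solar_only = max(d - s for d, s in zip(demand, solar))            # 80
fossil_both = max(d - s - w for d, s, w in zip(demand, solar, wind))     # 60
print(fossil_solar_only, fossil_both)
```

Solar alone still leaves a tall evening residual; adding the complementary wind curve lowers the peak that fossil plants must be built to cover.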

The computer industry tries to do the same thing with servers. Demand for computing services typically follows daily, weekly, and monthly cycles. When the data center is provisioned for the highest possible demand, there is a lot of wasteful excess capacity. Even with the most efficient hardware and the best power management software, running servers at low utilization is extremely wasteful compared to moderate utilization. So we try to balance computing supply with demand by virtualization and workload consolidation, especially if we can find workloads that are complementary (like wind and solar) in their resource requirements and/or their load versus time of day.
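The same arithmetic drives consolidation: a sketch with two invented, complementary workload curves shows why a shared host needs capacity only for the peak of the sum, not the sum of the peaks:

```python
# Two workloads with complementary daily load curves (toy numbers, like
# wind and solar): a daytime interactive service and a nightly batch job.
day_shift   = [20, 30, 80, 90, 85, 40]
night_batch = [75, 60, 15, 10, 20, 70]

# Dedicated servers must each be sized for their own workload's peak.
separate = max(day_shift) + max(night_batch)                    # 90 + 75 = 165
# Consolidated servers need only cover the peak of the combined load.
combined = max(a + b for a, b in zip(day_shift, night_batch))   # 110
print(separate, combined)   # about one-third less capacity to provision
```

The better the complementarity, the flatter the combined curve, and the closer the consolidated servers run to that efficient moderate utilization.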

As network capacities increase and software becomes more sophisticated, you can imagine systems configuring computing resources worldwide to maximize computing power to the customer at minimum electric cost. Think of a customer connected from California in the middle of a hot day with time-of-day electric meters set to the highest price. Of course he might be routed to servers in Europe or India where the computing demand is off peak. He might also be routed to servers in Colorado where the computing demand might still be high, but the electricity demand and price might be lower. Or to Oregon where a heavy rainfall and cold wave might mean cheap renewable hydro-power, even at peak electric demand; and lower than usual data center cooling costs thanks to mixing filtered outside air.
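A minimal sketch of such price-aware routing (the region names echo the scenario above, but the prices and availability flags are invented) might look like:

```python
# Hypothetical scheduler: send the request to the cheapest region that
# still has spare computing capacity. Prices in $/MWh are made up.
regions = {
    "California": {"price": 120, "has_capacity": True},   # hot day, peak rates
    "Colorado":   {"price": 60,  "has_capacity": True},
    "Oregon":     {"price": 35,  "has_capacity": True},   # cheap hydro today
    "India":      {"price": 50,  "has_capacity": False},  # at its compute peak
}

def route(regions):
    candidates = {name: info["price"]
                  for name, info in regions.items() if info["has_capacity"]}
    return min(candidates, key=candidates.get)

print(route(regions))   # picks Oregon in this made-up snapshot
```

A real scheduler would also weigh latency, data locality, and cooling costs, but the core decision is this same minimization.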

Wednesday Oct 31, 2007

Blame storage

"Is storage becoming IT's Hummer?" asks The Register. Reporting from SNW Europe they report that as data centers reduce the power cost of computing, storage is poised to become the biggest energy consumer. Well that's just the outcome SNIA hopes to avoid with their Green Storage Initiative. The efficiency race between computing and storage is one where we can cheer for both sides. Besides, as Jonathan points out, the distinction between computers and storage is blurring fast.

The Reg says that virtualization will be primarily responsible for reducing computing power usage through consolidation. Certainly the most effective way to save energy is to follow your mother's command: "Turn that thing off if you're not using it!" But I think the Reg is a bit premature in giving the industry credit for solving the problems of computing power usage. Yes, there's a lot of innovation in this area, making data centers more efficient in many different ways. But a lot of hard work remains for vendors and users alike.



I am a software engineer in San Diego, president of the Standard Performance Evaluation Corporation (SPEC), and formerly a mathematician and a violist.

