Monday Jun 08, 2009

Street Smart hybrid electric bicycle

Last weekend I went to Street Smart San Diego where, among many interesting booths, they offered test rides of various hybrid electric bicycles. I really liked the Eneloop from Sanyo, which is coming to the U.S. this fall. It's not just an electric-assisted bicycle (in the spirit of "mild hybrid" automobiles) but a hybrid integrated drive (in the spirit of Toyota's Hybrid Synergy Drive). You don't have to think about controlling the electric motor. The way you ask for power is to pedal, and the bike matches your effort 2-to-1 at low speeds and 1-to-1 at high speeds. Coast on a slight downhill and it reclaims some energy to recharge the battery. Brake and it reclaims more.
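
To picture that control policy, here is a toy sketch of the idea (my own illustration, not Sanyo's actual control law): assist is a multiple of the rider's pedaling effort, tapering from 2:1 at low speed to 1:1 at high speed, with regeneration when coasting or braking. The speed thresholds and charging torque are assumptions I made up for the example.

    def assist_torque(pedal_torque_nm, speed_kmh, braking=False):
        """Toy model of a pedal-matching hybrid drive (not Sanyo's firmware).
        Assumed behavior: 2:1 assist below 10 km/h, tapering linearly to 1:1
        by 24 km/h, and regenerative charging when coasting or braking."""
        if braking or pedal_torque_nm <= 0:
            return -5.0                       # regenerate: small charging torque
        if speed_kmh <= 10:
            ratio = 2.0                       # low speed: motor matches rider 2-to-1
        elif speed_kmh >= 24:
            ratio = 1.0                       # high speed: 1-to-1
        else:
            ratio = 2.0 - (speed_kmh - 10) / (24 - 10)   # linear taper in between
        return ratio * pedal_torque_nm

    print(assist_torque(20, 8))     # 40.0 -> strong help pulling away
    print(assist_torque(20, 30))    # 20.0 -> matching effort at cruising speed
    print(assist_torque(0, 25))     # -5.0 -> coasting, the battery recharges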

Friday Jun 05, 2009

SPECtacular awards & new web performance/energy benchmark

The last of the 2009 SPECtacular awards. SPECweb2005 is the industry standard performance metric for web servers, and today it is joined by SPECweb2009, the industry standard performance and energy metric for web servers. The benchmark includes a banking workload (all SSL), a support workload (no SSL), and an ecommerce workload (mixed). This is the first application of the SPECpower methodology to potentially large system-under-test configurations. In the initial benchmark results you can see one system with and one without external storage, and the test report lets you see the power consumption of just the server, of the storage, and of the entire configuration at various utilization levels. The entire committee did a fantastic job with this benchmark. As always, I won't list anyone's name without permission. (But give me the okay and I'll update this posting!) SPEC recognizes:

Gary Frost (AMD) who stepped in to fill a key developer role in an emergency with the release clock ticking. He took over the control code after a sudden reassignment, and frankly we handed him quite an undocumented mess. Gary was up to the challenge and produced the finished code.

Another engineer from AMD had primary responsibility for the reporting page generator. You often can't know exactly what information ought to go into a full disclosure report (FDR) until you see it. Nor how you want it organized and arranged. Nor what data-integrity cross-checks need to be present to avoid errors. So the committee changed requirements often during development. But no matter how many requirements were placed on him, he turned the needed code around within a week!

An engineer from Fujitsu Technology Solutions became the de facto quality assurance officer because of his thorough and methodical testing practices. If there are a hundred ways software in general can go wrong, then there are a thousand ways benchmark software can go wrong, as by its nature it runs on systems stressed to the limit. When SPEC benchmark software just works, that is largely due to people like this engineer, who foresee, test, and diagnose every possible failure unanticipated by the authors.

And, if you'd like to see all of the SPECtacular awards, then follow the tags!

Friday May 22, 2009

SPEC awards, power performance

More 2009 SPECtacular awards. The SPECpower committee has been busy. They released version 1.10 of the SPECpower_ssj2008 benchmark as a no-cost upgrade for existing licensees. It adds support for measurement of multi-node (blade) servers, improves usability, and adds a graphical display of power data during benchmark execution. Review and publication of benchmark results continues apace, with a spirited competition for first place, with ever more power analyzers accepted for testing, and with more test labs qualified for independent publication. They have also been assisting several other benchmark committees inside SPEC, and other industry standard benchmark organizations, in implementing energy measurement for their benchmarks. SPECpower is more than just a benchmark; it is a methodology, and the methodology is modified and expanded over time to accommodate energy measurements for the different workloads that are relevant to the real world in each market segment. In alphabetical order SPEC recognizes:

  • Chris Boire (Sun Microsystems) – As release manager he coordinated and integrated development activities to keep the deliverables on schedule.

  • David Schmidt (HP) – He created stand-alone and network-integrated tools for automated results checking to help ensure that results submissions are correct and complete.

  • Greg Darnell (Dell) – Author of the PTDaemon, he helped many other groups get started measuring power for their benchmarks. He helps out with whatever needs to be done, technical or organizational.

  • Hansfried Block (Fujitsu Technology Solutions) – He automated the process of determining power analyzer precision, handled the acceptance of several new power analyzers, and was instrumental in getting multi-channel analyzers accepted.

  • Harry Li (Intel) – He was the primary developer of the Visual Activity Monitor, giving a unique view of the system's activity.

  • Jeremy Arnold (IBM) – If I tried to recount all the accomplishments Jeremy was cited for I'd probably run into some internal blog size limit. Suffice it to say he is a primary developer on many parts of the code, who never turns down a plea for help, and who is never satisfied until the entire benchmark package is right.

  • Karl Huppler (IBM) – As primary author/editor of the Power and Performance Methodology, he organized the document to capture deep technical consensus in the committee, and made it readable and understandable for people new to the field.

  • Matthew Galloway (HP) – He designed the control software to drive multiple JVMs, enabling multi-node (blade) testing.

  • An engineer (AMD) – He created and maintained much of the web content explaining the benchmark and methodology to the public.

Monday May 05, 2008

ROI for Sun

Not NASDAQ - solar power. The ASES annual solar energy conference is in San Diego this week. The top question I get about my solar panels is how long the payback period is. I did calculate it before we installed them. At current electricity prices, and accounting for the time value of money, they will just break even over their useful life. (And we live near the ocean, where fog obscures the panels on many summer mornings.) Still, if we had another price shock equivalent to the 1973 oil embargo they would pay back about twice the initial investment, in current dollars. And if we had another price shock equivalent to Kenny-Boy Lay's market manipulation, they would pay back about five times the initial investment. Economically, call the panels zero-cost insurance.

Now what's the ROI on an SUV? Our solar panels cost about a quarter to a half the price of a big SUV. Will that Escalade have a productive life of 20 years? And over that time how many dollars will it return to your pocket? Or will it perhaps take more money out of your pocket? For the price of the SUV you could instead buy solar panels, zero out your electric bill, buy a Chevy Malibu (which sits in clogged traffic just as well as the SUV does), and have enough money left over to pay for over 200,000 miles worth of gas for it, at $4/gallon.
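
The back-of-envelope arithmetic behind that claim looks something like the sketch below. All of the prices are round numbers I picked for illustration (roughly 2008-era figures), not quotes.

    # Illustrative figures only - my guesses, not quotes.
    suv_price     = 65_000     # a loaded Escalade-class SUV
    solar_install = 16_000     # our panels: roughly a quarter of the SUV's price
    malibu_price  = 20_000     # a Chevy Malibu
    gas_price     = 4.00       # dollars per gallon
    malibu_mpg    = 30         # assumed fuel economy

    left_for_gas = suv_price - solar_install - malibu_price     # $29,000
    gas_miles    = left_for_gas / gas_price * malibu_mpg        # ~217,500 miles

    print(f"left over for gas: ${left_for_gas:,}")
    print(f"miles of gas that buys: {gas_miles:,.0f}")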

So why aren't there more solar panels in sunny Southern California? Why is Germany, in the cloudy wintery north, so far ahead of the U.S.? Two reasons: (1) money, and (2) money.

(1) Lots of people don't have the luxury of deciding whether to spend discretionary money on a new SUV or on solar panels; they're deciding whether to pay the mortgage, pay the electric bill, or fill up the gas tank. Ditto businesses hard pressed to show a profitable bottom line. Increasingly, solar energy entrepreneurs are in effect buying energy "drilling rights" on rooftops. Edison, the electric utility for much of the LA area, is building the equivalent of a new generating plant by putting panels on the roofs of commercial and industrial buildings. The building owners pay nothing, and get a good long-term, locked-in electricity rate. Here in San Diego, Hewlett-Packard is converting its campus to solar power. HP stockholders will pay nothing for it, and HP will get substantial energy cost savings in the future. While they're at it, HP is matching the rebate for employees who want to put solar panels on their homes.

(2) The recent earth-shaking discovery that people are more willing to give goods and services in exchange for money than to give them with nothing in return. (See capitalism.) The biggest barrier to local development of solar energy in San Diego has been a convoluted rate structure that in many cases actually made businesses that installed solar generators pay more money to use less electricity than before they installed them. Small wonder that northern California is far ahead of sunnier southern California in solar power installations. Now that crazy rate structure is changing, which could bring a boom in locally generated solar power.

For homeowners in San Diego no change is forthcoming. Germany has all those solar installations because of a rate structure that pays for solar electricity at much higher than market rates. In San Diego you see many solar panel installations like ours covering a small portion of the roof. The rate structure here is fair up to the point that you replace your total annual electricity usage with solar power. Produce more than you use, however, and all the excess is just "donated" to the utility without compensation. So you're okay if your solar installation is a bit smaller than you need, but it's economic madness to make it any larger than you need. If not for this rate structure, our solar panel installation could have produced enough electricity for one or two of our neighbors in addition to our own needs.

 

Tuesday Apr 15, 2008

Will it run on multi-chip CMT?

Cooltst v3.0 is out, updated to assess workload suitability for single- and multi-chip CMT. When the UltraSPARC T1 was released, the Cool Threads Selection Tool (cooltst) was developed to help gauge how well a given workload might run on the new chip, which traded single-thread speed for throughput, allowing cooler, lower power, lower cost computing for many applications. But which applications? A single-threaded application would tap just a tiny fraction of the 8 cores and 32 hardware threads of the UltraSPARC T1 processor.

Much has changed since then. There is much empirical data showing various applications running well on CMT. The UltraSPARC T2 processor was released, increasing CMT power to 64 hardware threads. This processor also added dedicated floating-point units per core so that, far from being relegated to a niche web server market, it claimed (and still holds) a high performance computing record.

Now UltraSPARC T2 Plus systems have been released, further extending CMT power to 2 chips, 8 cores per chip, 8 hardware threads per core - 128 virtual CPUs in a 1RU box. Cooltst helps you assess how well your workload may tap that throughput potential. You can read about it and download it starting at sunsource.net.

There's nothing magical about cooltst's heuristics. You can make much the same assessment yourself using ordinary tools like ps (to look at the software threads) and cpustat (to look at instruction characteristics). All the source code is included so you can see what it's doing. On Linux systems a loadable kernel module is included to measure instruction characteristics in place of Solaris' built-in cpustat command. The output of cooltst is tabular data, a narrative description of your workload characteristics, and a bottom-line recommendation.
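
As an example of that do-it-yourself assessment, here is a minimal sketch (not cooltst itself, just the spirit of its software-thread heuristic) that parses ps output to count busy threads per process. The flags shown are the Linux ones; on Solaris you would get the same information from prstat or ps -eLf. The 5% CPU threshold is an arbitrary assumption.

    import subprocess
    from collections import Counter

    def busy_threads_per_process(min_cpu=5.0):
        """Count threads per process using at least min_cpu percent CPU
        (a rough stand-in for cooltst's software-thread heuristic)."""
        lines = subprocess.run(
            ["ps", "-eLo", "pid,comm,%cpu"],      # Linux: one line per thread (LWP)
            capture_output=True, text=True, check=True
        ).stdout.splitlines()[1:]                  # skip the header line

        busy = Counter()
        for line in lines:
            parts = line.split()
            pid, cpu = parts[0], parts[-1]
            comm = " ".join(parts[1:-1])           # command name may contain spaces
            if float(cpu) >= min_cpu:
                busy[(pid, comm)] += 1
        return busy

    for (pid, comm), nthreads in busy_threads_per_process().most_common(5):
        print(f"{comm} (pid {pid}): {nthreads} busy threads")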


Disclosure Statement:

SPEC and SPEComp are registered trademarks of Standard Performance Evaluation Corporation. Results are current as of 11/11/2007. Complete results may be found at the URL referenced above or at http://www.spec.org/omp/results/ompm2001.html


My photo:

Iguazu Falls, on the border of Argentina and Brazil. It's over twice as big as Niagara Falls in terms of water flow, because it covers such a wide area.

Saturday Apr 05, 2008

Forward to the 32-bit past

After wrestling with incompatibilities of 64-bit Linux for a while, I finally downgraded my home PC to 32-bit Ubuntu 7.10 (Gutsy). I found some nice and less nice workarounds, like running the Windows version of Firefox under Wine in order to get Flash to work. I hadn't found a workaround for the Java browser plugin, or for Skype, and was considering a 32-bit chroot environment.

Finally one of those automatic updates decided it for me. You know the ones: messages offering later versions of software, critical security updates, and recommended updates. Being able to just click OK to automatically be upgraded to the latest software is part of what makes Ubuntu so friendly. But this time it wasn't so friendly. Something left my PC unable to boot to multi-user, unable to start networking, and unable to start graphics. I don't know what, because I didn't keep the disk image around for a post-mortem. It was much, much faster simply to blow away my root partition with a complete new OS installation. So while I was at it, I dropped down to 32-bit.

Lots of things started working, but some got worse. I had always had a problem on Gutsy that after suspend/resume the Ethernet driver would get a reversed MAC address, complain that it was invalid, and switch to a new eth instance with a random MAC. Of course this played havoc with my router trying to keep track of where my PC was in order to provide DNS. This problem occurs in the forcedeth driver, reverse-engineered for the nForce chipset. Some people worked around the problem, with limited success, by adding commands to the suspend/resume scripts to stop and restart networking.

But now on 32-bit Gutsy it got worse. Upon resuming, the screen stayed black, and since the network seemed to be down I couldn't remotely log in to find out what was wrong. I found lots of reports on the web about suspend/resume problems with the same error message I saw in my .xsession-errors:

Gtk-WARNING **: This process is currently running setuid or setgid.

 

This seems related to my NVIDIA GeForce 6150 LE graphics. Like many others who posted their experiences, the problem occurred for me both with the generic open source driver and with the Nvidia proprietary accelerated driver. One person mentioned a workaround by logging out, logging in to a failsafe X-terminal, and suspending manually from there.

Irony: The main reason I'm running Ubuntu instead of Solaris is that Solaris doesn't yet have power management, and for a home PC, suspend and resume are essential. I've been eagerly watching the power management project Tesla at opensolaris.org, wondering why it's taking so long. I guess, like most things, it's easier to do than it is to do right. By comparison, my PCs running Windows 98SE often fail to wake up at all, and those running Windows XP tend to wake up by themselves, unbidden. The only systems where suspend/resume always worked were Linspire and, of course, MacOS.

Neither workaround by itself would work for me, but putting them both together I end up with a clumsy workaround that lets me suspend/resume, and may possibly point the way towards a less cumbersome workaround.

  1. Disable networking via gnome panel
  2. Logout
  3. Select failsafe X-terminal session
  4. Login
  5. sudo /etc/acpi/sleep.sh
  6. (system sleeps)
  7. (normal wakeup by pressing ENTER)
  8. Logout
  9. Select normal gnome session
  10. Login
  11. Enable networking via gnome panel

 



Wednesday Mar 26, 2008

SPEC award recipients, Power Performance

More SPECtacular awards given at SPEC's 2008 annual meeting in San Francisco, to members of the power committee who produced the SPECpower_ssj2008 benchmark. This wasn't an easy benchmark to do, taking us into areas of engineering not so familiar to performance analysts. Along the way we picked up some new contributors, and some of us picked up some new knowledge and skills. Energy efficiency is increasingly important, and eventually I expect to see power measurements as part of every performance benchmark. But for now, SPECpower_ssj2008 is a great start that establishes a fair and practical methodology for consistent measurement.

As before I won't cite names without permission, but will add them later if given the okay. SPEC thanks:
  • Paul Muehr from AMD
  • Greg Darnell, and another engineer from Dell
  • Karin Wulf, and another engineer from Fujitsu-Siemens
  • Klaus Lange, and another engineer from HP
  • Jeremy Arnold, Alan Adamson, and another engineer from IBM
  • Anil Kumar, and two other engineers from Intel
  • an engineer from Sun
  • Michael Armbrust from UC Berkeley RAD Lab

Tuesday Feb 19, 2008

SPEC awards, virtualization

Virtualization is such a hot technology that Dilbert is poking fun at it: 2/12, 2/13 and 2/14. No wonder, since IT centers must use both hardware and energy more efficiently. At SPEC's 2008 annual meeting in San Francisco SPECtacular awards were given to members of the Virtualization committee. As always, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

SPEC is working on a benchmark to model server consolidation of commonly virtualized systems such as mail servers, database servers, application servers, web servers, and file servers. Requiring a very different technical approach than SPEC's traditional benchmarks, virtualization has brought unique challenges. SPEC recognizes these engineers for outstanding contributions in meeting those challenges:
  • Andrew Bond, HP
  • Cathy Reddy, Unisys
  • Chris Floyd, IBM
  • Fred Abounador, AMD
  • Greg Kopczynski and another engineer, VMware
  • Nitin Ramannavar, Sun
  • Stephen Pratt, Communigate
  • and an engineer from a company so modest that they don't even want to accept public thanks.

Wednesday Jan 16, 2008

worse news about eWaste

If 30% recycling is the good news, then what's the bad news? I coincidentally browsed two magazines the other day. Time had a short article on eWaste - old computers, monitors, and other electronic gear - and how it is recycled. Besides hazardous materials, eWaste also contains many valuable reusable materials. The problem is that only 30% of eWaste is recycled; the other 70% piles up in landfills.

Then I read a long article in National Geographic that cast a dark shadow on the 30% of eWaste that is recycled. It showed Monitex, a recycler in Grand Prairie, Texas, that breaks down and safely recycles all the components. But more often recycling is outsourced: a foreign broker bids on the eWaste. It showed a lagoon in Ghana choked with monitors - the end point for so-called recycling. A boy carries cables to the fire fields where, in thick clouds of dioxin and heavy-metal-laden fumes, the insulation is burned off so the copper can be sold. A man in India melts printed circuit boards to recover the lead - in the same pots where later the family meal will be cooked.

Perhaps it's too self-righteous for us in the west to say that rather than have a job we consider bad, poor third world people ought to have no job at all. I don't know how to judge what work is acceptable and unacceptable, and suspect I ought not to be the one to judge. But the point at which child laborers work in toxic conditions that drastically shorten their lifetime is definitely too much for me.

So I wouldn't say I'm glad that 70% of eWaste is dumped instead of recycled just because at least it isn't killing children in Ghana. But I might favor laws in the west against outsourcing recycling to countries that lack basic environmental and labor safeguards. And more than anything, I'm proud of Sun's efforts to keep the hazardous materials out of the computer gear in the first place. Cheers for the EU's "green design" directive! I hope all manufacturers apply those standards worldwide. That's the only truly humane long-term solution to the problem.

Thursday Jan 10, 2008

Balance of Power

Electrical, not political. A DOE study found that - duh - if you give consumers information about the time-varying cost of electricity they will save money by shifting some power usage from peak to off-peak times. Consumers in the study lowered their electric bills by 10% and lowered their peak demand by 15%. This is a big deal because although the operating cost component of electricity (fuel) depends on the total energy consumed, the capital cost component (generating plants) depends on the peak power generation.
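
A toy calculation shows the mechanism. The rates and loads below are numbers I made up; the point is that the consumer's bill drops a little while the peak that sizes the generating plant drops by the full amount shifted.

    # Made-up rates and loads, purely to illustrate the mechanism.
    peak_rate, offpeak_rate = 0.30, 0.10    # dollars per kWh
    peak_kwh, offpeak_kwh   = 20.0, 10.0    # one day's usage before shifting

    def daily_bill(peak, offpeak):
        return peak * peak_rate + offpeak * offpeak_rate

    before = daily_bill(peak_kwh, offpeak_kwh)

    shift = 3.0                              # run the dishwasher and laundry at night
    after = daily_bill(peak_kwh - shift, offpeak_kwh + shift)

    print(f"bill: ${before:.2f} -> ${after:.2f} ({(before - after) / before:.0%} saved)")
    print(f"peak demand drops by {shift / peak_kwh:.0%}")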


Solar power is particularly valuable to a utility because its peak production occurs in the middle of the day, when summer demand from air conditioning is highest. But there's another peak around 5-6 p.m. when people come home from work and turn on appliances, and by then solar power production has fallen off. Thus adding photovoltaic power alone may not drastically reduce peak requirements for fossil fuel power plants.


Wind power along the California coast has an almost complementary generation curve to that of solar power, because of the onshore and offshore breezes in the mornings and evenings. Adding wind power alone may not drastically reduce peak fossil demand because the wind often dies down mid-day when the air conditioning load is highest.


But adding solar and wind power together could greatly reduce peak fossil demand, though perhaps not economically eliminate it entirely. Then if you added time-of-day metering to allow consumers to voluntarily shift their load, that would level even more peaks. Ditto various energy storage systems, like the plan to use nighttime wind power to pump water back up a hydroelectric dam for use the next day, supercapacitors, and plug-in hybrid cars. The key to effective and economical use of renewable energy is a balance of power supply with demand.
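
Here is a sketch of that complementarity with invented hourly curves (none of these numbers are real utility data): demand is evening-heavy, solar peaks at midday, coastal wind blows morning and evening, and the fossil plants only have to cover the gap between demand and whatever the renewables supply.

    # Invented hourly profiles in MW for one summer day, just to show how
    # complementary sources flatten the residual fossil peak.
    hours  = range(24)
    demand = [600 + 400 * max(0, 1 - abs(h - 16) / 8) for h in hours]   # evening-heavy
    solar  = [300 * max(0, 1 - abs(h - 13) / 6) for h in hours]         # midday peak
    wind   = [200 * max(0, 1 - abs(h - 7) / 5) +                        # morning breeze
              200 * max(0, 1 - abs(h - 18) / 6) for h in hours]         # evening breeze

    def fossil_peak(*renewables):
        supply = [sum(s) for s in zip(*renewables)]
        return max(max(0, d - r) for d, r in zip(demand, supply))

    print("no renewables:", round(fossil_peak([0] * 24)))    # 1000 MW
    print("solar only   :", round(fossil_peak(solar)))       # ~850 MW
    print("wind only    :", round(fossil_peak(wind)))        # ~870 MW
    print("solar + wind :", round(fossil_peak(solar, wind))) # ~720 MW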


The computer industry tries to do the same thing with servers. Demand for computing services typically follows daily, weekly, and monthly cycles. When the data center is provisioned for the highest possible demand, there is a lot of wasteful excess capacity. Even with the most efficient hardware and the best power management software, running servers at low utilization is extremely wasteful compared to moderate utilization. So we try to balance computing supply with demand by virtualization and workload consolidation, especially if we can find workloads that are complementary (like wind and solar) in their resource requirements and/or their load versus time of day.


As network capacities increase and software becomes more sophisticated, you can imagine systems configuring computing resources worldwide to maximize computing power to the customer at minimum electric cost. Think of a customer connected from California in the middle of a hot day with time-of-day electric meters set to the highest price. Of course he might be routed to servers in Europe or India where the computing demand is off peak. He might also be routed to servers in Colorado where the computing demand might still be high, but the electricity demand and price might be lower. Or to Oregon where a heavy rainfall and cold wave might mean cheap renewable hydro-power, even at peak electric demand; and lower than usual data center cooling costs thanks to mixing filtered outside air.
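
Stripped to its essentials, that routing decision is just a cost-aware choice among data centers whose latency still fits the interactivity budget. The sites, prices, and latencies below are invented for illustration.

    # Invented sites, electricity prices, and latencies - illustration only.
    sites = {
        # name          ($/kWh now, round-trip ms from the customer)
        "California": (0.32,  20),
        "Colorado":   (0.11,  45),
        "Oregon":     (0.06,  40),
        "India":      (0.09, 250),
    }

    MAX_LATENCY_MS = 120   # assumed interactivity budget

    def cheapest_site(sites):
        """Pick the lowest-priced site whose latency fits the budget."""
        ok = {name: price for name, (price, ms) in sites.items()
              if ms <= MAX_LATENCY_MS}
        return min(ok, key=ok.get)

    print(cheapest_site(sites))   # -> Oregon: cheap hydro and free cooling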

Wednesday Jan 09, 2008

meeting in Second Life

I had my first business meeting in Second Life today, held jointly with a real world conference room. We remote participants travelled to the meeting by network rather than by airplane. It was a lot like ordinary meetings with remote participants. Which is to say we had a lot of trouble seeing and hearing the Bay Area participants, and vice versa. But instead of complaining about inadequate conference phones, mikes, and dedicated videoconference boxes, we got to complain about inadequate streaming bandwidth, poorly timed server maintenance, and incompatible video formats. At least I had the company of a few dozen colleagues from around the world while we discussed problems with the tele-link to the real-world conference room. I think virtual worlds have a lot of potential to overcome some of the social barriers to effective distance collaboration - barriers that put too many of us on airplanes too often. But I don't think the technology is quite up to the task yet. Sigh, that reminds me, I've got to go make a reservation. :-(

If you do visit Second Life, you must go see Hope & Happiness from Sun Microsystems. It is a delightful and magical walk through the wintery woods, following the glimmer of lanterns and faint music from distant warm cottages. You can make wonderful discoveries along the way, or you can get lost. And watch out for the bears and wolves; I doubt they are truly dangerous, but it's fun to pretend. Especially if you missed out on a White Christmas this year, it's a great experience.

Monday Jan 07, 2008

wiki for Energy Camp

This Thursday is the OpenEco.org energy camp in San Francisco. In the spirit of the participant directed "unconference" they have set up a wiki for collaboration.

Thursday Dec 20, 2007

Bali summit for ordinary people

Jan 10, 2008 - OpenEco Energy Camp free event in San Francisco - the unconference. Sun is hosting this event to bring together environmental leaders, business leaders, open source developers, and people who want to make a difference. The attendees will set the agenda and steer the discussion. Dave Douglas, Sun's VP of Eco Responsibility will kick off the meeting. For more information and to register go to openeco.org.

Friday Dec 14, 2007

article on SPECpower_ssj2008

George Ou wrote the most detailed article I've seen on SPEC's new energy benchmark for Java server workload. You could reasonably think I like the article because he mentions my name. But actually I've been a fan of Ou's column for some time, like when he wrote about the Green Grid consortium, Wifi security, and how to build a 50 watt home PC out of commodity parts.

Tuesday Dec 11, 2007

SPEC announces SPECpower_ssj2008

Today SPEC announced SPECpower_ssj2008, the first industry standard power-performance benchmark. It measures electric power used at various load levels, from active idle to 100% of possible throughput. The workload tested is a server-side Java workload. The methodology is applicable to many workloads, and I hope in the future we will see more standard benchmarks, and application of these methods to measuring power consumption of customers' own workloads. This benchmark is the result of long hard work by dedicated engineers from many companies, universities, and Lawrence Berkeley National Laboratory. Congratulations!
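
The headline number is an efficiency ratio taken across all of those load levels. Roughly - and this is my paraphrase, not the official run rules - it is the throughput summed over the target loads divided by the average power summed over all measured intervals, active idle included. The measurements below are invented, not a published result.

    def overall_ops_per_watt(levels):
        """levels: (ssj_ops, average_watts) pairs for 100%, 90%, ... 10% load
        and active idle. Rough paraphrase of the benchmark's overall metric."""
        total_ops   = sum(ops for ops, _ in levels)       # idle contributes 0 ops
        total_watts = sum(watts for _, watts in levels)
        return total_ops / total_watts

    # Invented measurements for a small server (not a published result):
    example = [(300_000, 250), (270_000, 230), (240_000, 215), (210_000, 200),
               (180_000, 185), (150_000, 170), (120_000, 155), (90_000, 140),
               (60_000, 125), (30_000, 110), (0, 90)]      # last pair: active idle

    print(f"{overall_ops_per_watt(example):,.0f} overall ssj_ops/watt")   # ~882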

 


About

I am a software engineer in San Diego, president of the Standard Performance Evaluation Corporation (spec.org), formerly a mathematician and a violist.
