Friday, Sep 02, 2005

Katrina's impact on our customers

My job at Sun is to assist our customers with the architecture, design and implementation of complex systems. Naturally I am reluctant to mention customer names lest there be legal or public relations issues. However, I spent a large part of last year preparing a customer in New Orleans for Business Continuity and Disaster Recovery, and now those plans have been executed. I will be oblique about the precise details, but two large systems integrators are assisting one of our government agencies with a large Enterprise Resource Planning application that involves a web front end, application servers in the middle tier and a very large database on the back end. The task was so large that I worked extensively with one of our partners, Mr. Chip Elmblad of Sub2 Technology Consulting. Feeding this system are many computers and users from around the world. There are also many systems associated with the core of the ERP system that perform functions like reporting and ad hoc queries. Hopefully this diagram will help you get the picture.



The servers are located in a building overlooking Lake Pontchartrain very near this location from Google Maps. This view is Google's very cool hybrid satellite photo overlaid with street names. If you have seen some of the news coverage, you will notice that one of the levee breaches was on the left side of this photo and the computers are housed in several buildings on the right side of the photo. Here is a closer view of the buildings themselves (note the address is not accurate, it's just to zoom in on the buildings). While on site, various people told me that sea level was at the 3rd floor of these buildings, and one of my friends reported that as of Tuesday there was 10 feet of water in the buildings. I would call this a disaster.

As an aside, when working for a government agency, we don't want to do 'Disaster Recovery,' we do 'Continuous Operations.' Perhaps we are shying away from the negative connotations of the word 'disaster.' Last year we tested the system when Hurricane Ivan grazed New Orleans but did not do much damage. We did invoke the process of failing over to the remote computers and all of the core systems worked well. I think some people in New Orleans looked back to last year and thought, 'We were OK last year during Hurricane Ivan so I'm not going to bother to evacuate this year.'

To describe some of the technical details of continuous operations, our partners set up similar hardware at a location near Memphis, TN to match the web servers, application servers, database servers and associated servers from New Orleans. The customer uses EMC as their storage vendor, so we used EMC's SRDF (Symmetrix Remote Data Facility) to replicate the data from New Orleans to Memphis. (Note that this could also be accomplished with Hitachi storage and TrueCopy.) A consistent image of the updated database pages and the associated files on other servers is shipped at specific times each day. Our SLA (Service Level Agreement) was to be at most 12 hours behind in data replication. Perhaps this diagram can help illustrate some of the complexity of the data replication process.
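To make the daily cycle a little more concrete, here is a minimal sketch of the sort of script that could drive one refresh with EMC's symcli tools. The device group name erp_dg and the log path are invented for the example, the exact flags should be checked against the SRDF documentation, and the real procedures were considerably more involved.

  #!/bin/sh
  # Hypothetical sketch of one SRDF refresh cycle; group name and paths are made up.
  DG=erp_dg
  LOG=/var/log/srdf_refresh.log

  # Resume replication so the R2 devices at the remote site catch up.
  symrdf -g $DG -noprompt establish >> $LOG 2>&1

  # Wait until the mirrors report a synchronized state, checking every 60 seconds.
  symrdf -g $DG verify -synchronized -i 60 >> $LOG 2>&1

  # Split the mirrors so the remote site holds a consistent point-in-time image.
  symrdf -g $DG -noprompt split >> $LOG 2>&1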

Naturally the network at the remote site cannot use the same names and IP addresses as the primary site, due to name space requirements. It simply takes careful planning and adjustments to control and configuration files to make certain that the remote site applications behave like the primary site. It must be tested, preferably under realistic scenarios, to verify that all aspects of the system and network function properly. The core of this ERP system worked very well both last year and this week, but certain associated systems had trouble establishing connectivity with the remote site.
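As a toy illustration of the kind of adjustment involved (not the actual procedure we used), a failover script might rewrite primary-site hostnames into their remote-site equivalents in the application's configuration files. The hostnames and path below are invented:

  #!/bin/sh
  # Hypothetical example: swap primary-site hostnames for remote-site names
  # in application config files before bringing the application up at the DR site.
  CONF_DIR=/opt/erp/conf    # made-up path
  for f in $CONF_DIR/*.conf; do
      sed -e 's/db-nola/db-memphis/g' \
          -e 's/app-nola/app-memphis/g' \
          "$f" > "$f.tmp" && mv "$f.tmp" "$f"
  done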

To my friends Rene, John, Dana, Jarmaine, Linda, Brian, Jeanne, Trey, Marc, Jim, Eddie, Matt and all others, our prayers are with you. I hope your homes are OK.

Thursday, May 12, 2005

The Utility Computing Tsunami

Yesterday's email brought a link to Nicholas Carr's provocative article in the MIT Sloan Management Review entitled "The End of Corporate Computing." The summary of the article begins with this quote: "Information technology is undergoing an inexorable shift from being an asset that companies own — in the form of computers, software and myriad related components — to being a service that they purchase from utility providers. Three technological advances are enabling this change: virtualization, grid computing and Web services." It concludes with this paradigm-shattering, future-as-a-tsunami-coming-at-you assessment: "IT's shift from an in-house capital asset to a centralized utility service will overturn strategic and operating assumptions, alter industrial economics, upset markets and pose daunting challenges to every user and vendor. The history of the commercial application of IT has been characterized by astounding leaps, but nothing that has come before — not even the introduction of the personal computer or the opening of the Internet — will match the upheaval that lies just over the horizon."
In my assessment of our Customer Engineering Conference I stated that there is a distinct possibility Sun will survive and that I am excited to be part of the plan. However, in any fundamental economic shift, lots of companies face metaphorical extinction as new companies adapt to the changes faster. Sun's thought leaders (and here) are already preparing to catch this wave, and I am thinking that $1 per CPU per hour is a great price for the computational commodity. "May I help you assess your computational requirements, Sir? How many petabytes of storage do we need to go with those MegaSpecInts?" Somebody hand me my sunglasses...the future is looking pretty bright around here. Where did I put that surfboard wax?

Tuesday, Mar 01, 2005

Customer Engineering Conference 2005



Top Ten Great Things about CEC
10. Installfest for Solaris 10 on laptops...Bob Netherton, Joe Cicardo, Alan Duboff and others gave of their time to help others over the hurdle of repartitioning their disks and putting Solaris 10 on their laptops. Many took advantage of the opportunity, but I did see one crash and burn like mine. Avoid using Windows tools like Partition Commander or Partition Magic and stick to Unix/Linux tools like qtparted.

9. My roommate, Mike Belch, from the UK. If he will ever get it together and get started on his external blog, I will add him to my blogroll. Excellent guy who is mad about motorcycles.

8. Seeing old friends and meeting new ones...uber geeks with outrageous IQs who do very creative work. Since Sun is worldwide, many of us only rarely get together face to face. An email or a phone call is just not the same thing. And some of you are too busy to call or write...you know who you are.

7. New ideas and new ways to express old ideas. Sun is made up of really bright guys who are very creative. I'll be stealing your presentation slides to present to my customers so your ideas will be reused.

6. The encouraging belief that Sun has a definite chance to survive and thrive. As you might have noticed, I am a movie buff and that is an oblique reference to "Ghostbusters," a truly great comedy about a bunch of geeks. The threat to Sun's existence is real. If history is a guide, very few companies whose stock drops below $10 recover. Also, the history of the computer industry is littered with companies that had great runs and caught one or two technical waves but were unable to make the transition to the next wave. It's incredibly difficult and only a few companies have pulled it off. This one might be able to see and catch the next wave, and I hope to be a part of it thriving.

5. Robert Youngjohns and his enthusiasm for utility computing and the storage grid he is building. One of the themes of the conference was that computing should be a utility like electricity, gasoline, or phones. Today most companies do not generate their own electricity. Can we be the company to begin the transformation to computing as a utility?

4. Andy Bechtolsheim and the new servers his team has developed. If I said much about the new servers I wouldn't get to stay with Sun, but Andy proves that this is a company where geeks rule. Stay tuned to see some very rockin' servers. Really, technology can be very exciting. Maybe I do fit in here.

3. Scott McNealy admitting some of the previous big bets were mistakes...Can you say 'Cobalt?' The lesson from history seems to be that executive hubris contributed to the downfall of the computer companies that failed. A dose of humility bodes well for our chances of bucking the odds.

2. Jonathan Schwartz in a Dallas Mavericks jersey (NB, I am from the Dallas area) ad-libbing about Dallas owner Mark Cuban (NB, Mark made his billions from Broadcast.com). Another Sun executive who does have a clue and who may make us the exception to historical trends, and further proof that geeks make good at Sun.

1. AC/DShe--the girls who rocked Club CEC.

Ten Worst Things about CEC

10. Wet sandwiches for lunch.

9. Only 2 drink tickets for each night...this was deftly circumvented by friends who are teetotalers...you know who you are :)

8. Missing the Oscars and the red carpet for this. AC/DShe does partially make up for it.

7. Giving up my weekend for a technical conference...I travel every week but this was important.

6. The large number of homeless and hopeless on the streets of San Francisco...it breaks my heart.

5. Having my idea for a presentation rejected...my feelings are hurt :)

4. Having a roommate (but see #9 above.)

3. Geeks who present vital information dully and who mix the important with the trivial.

2. Too little time to work out or spend in contemplation. I mean Sunday school is important too.

1. Executives who are completely clueless and who can't or won't admit their mistakes.

Thanks to Sun management (Hal and Jim especially) for putting it on. It does help those of us in the field catch up to what is happening back in the lab. The interpersonal contacts are crucial and improve productivity. Even if it changes form somewhat, these types of events are vital.

Monday, Jan 10, 2005

Solaris 10 x86 Notes



So I had some fun this weekend playing around with Solaris 10 x86 on my Toshiba 9100 laptop. Here are a few notes and observations. First, an embarrassing confession. My network got screwed up a while back. When I booted up, everything came up. An ifconfig -a and a netstat -rn showed reasonable values, but Mozilla would barf on most sites after loading part of the data. I tried a couple of nslookup www.cnn.com and dig google.com commands to make sure name resolution was working. The name lookups worked but still Mozilla wouldn't cooperate. Friday afternoon I finally checked /etc/nsswitch.conf and it was hosed. It looked like part of my xorg.conf had overlaid it. I copied /etc/nsswitch.dns to /etc/nsswitch.conf, rebooted and voila, back in business. It annoys me that nslookup and dig don't complain about the bogus nsswitch file, but it's all better now.
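For anyone chasing the same symptom, the whole check and fix boils down to a couple of commands. The grep is just a quick sanity check on the hosts entry, not a complete diagnosis:

  # A healthy entry looks like "hosts: files dns"; a clobbered file will not.
  grep '^hosts' /etc/nsswitch.conf

  # Restore the DNS template that ships with Solaris, then reboot (or restart
  # the affected services) so everything rereads the file.
  cp /etc/nsswitch.dns /etc/nsswitch.conf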

So now I am ready to try out several new things. I have been hearing good things about Firefox as a browser and Thunderbird as an email client. Supposedly leaner and meaner than Mozilla. Each package mentioned above is available for Solaris 8 x86 through Solaris 10 x86 at one of my favorite sites, SunFreeware.com, at this special link. I downloaded Firefox and Thunderbird and installed them according to the directions. The package install placed the basic executable in /usr/local/bin. I then used Google image search for the Firefox and Thunderbird logos, downloading them into /usr/share/pixmaps. I am using the Java Desktop, so I right-clicked on the background and did a Create Launcher. I named it Firefox and browsed for the command, finding it in /usr/local/bin/firefox. I then clicked the No Icon button, found my downloaded icon and selected it.
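The install itself is the usual SunFreeware routine. A rough sketch, with placeholder file names since the exact versions change from release to release:

  # Placeholder package file names -- use whatever versions SunFreeware offers.
  gunzip firefox-1.0-sol10-x86-local.gz
  pkgadd -d ./firefox-1.0-sol10-x86-local      # installs under /usr/local

  gunzip thunderbird-1.0-sol10-x86-local.gz
  pkgadd -d ./thunderbird-1.0-sol10-x86-local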

I tried out Mozilla, Firefox and Thunderbird for a comparison. I hate to say it, but I like Mozilla because of the fonts. The fonts on the Mozilla panels render the letters more clearly than the Firefox panels. Also, on several web pages the fonts in the Mozilla browser were thicker and easier to read than in the Firefox browser. So then I checked the fonts in the preferences, and the proportional fonts are the same. The rest are not filled in for Mozilla, and I can't find the same fonts from Firefox in the Mozilla list. I also cannot clear the different fonts in Firefox. However, that is simply my own aesthetic opinion, which may not be shared by others. What about the technical aspects of the situation?

I fired up each browser and ran pmap on each of them to see if Firefox is leaner and meaner than Mozilla. Mozilla had a total memory size of 66316K and a heap size of 9956K while Firefox had a total memory size of 38096K and a heap of only 7364K. So Firefox is only 57% of the size of Mozilla. But when my laptop has 512MB do I really need to worry about 28MB? The prstat and vmstat of the startup of Mozilla and Firefox seem very similar. I guess I am going with my own aesthetic opinion. Perhaps a developer or a browser guy can comment on my font issues.
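If you want to reproduce the comparison, here is roughly the procedure; the binaries are typically named mozilla-bin and firefox-bin, though yours may differ, and pmap -x prints the totals on its last line:

  # Find each browser's pid, then compare the address-space totals from pmap.
  MOZ_PID=`pgrep mozilla-bin`
  FF_PID=`pgrep firefox-bin`

  pmap -x $MOZ_PID | tail -1    # "total Kb" summary line for Mozilla
  pmap -x $FF_PID  | tail -1    # "total Kb" summary line for Firefox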

Wednesday, Jan 05, 2005

Solaris Internals



An email from a friend today asked the question, "What do you suggest to help a programmer understand Solaris memory internals?" I thought about it and suggested Richard McDougall and Jim Mauro's book Solaris Internals. However, that book is a perfect illustration of my theory of the "half life of information." The book was released in the year 2000 and covered Solaris 7. Messrs. McDougall, Leventhal, Cantrill, Bonwick, Price, Schrock et al. have been extremely busy and have much improved Solaris since the days of priority paging in Solaris 7. In Solaris 8 and beyond the old page-scanning approach gave way to the cyclic page cache, so the book is outdated in some respects. The term 'half life' is drawn from radioactivity and refers to "the length of time in which half the nuclei of a species of radioactive substance would decay." The idea behind 'information half life' is how much of the material in the book from 2000 is still accurate. My belief is that much of the material in the book is still relevant, since the early architecture of Solaris has carried through to Solaris 10 (download and play with your copy from here). The information in the book has been updated for later versions of Solaris (8 to 10) in a set of 367 slides, dated November 2004, in an Adobe Acrobat file available here. Those of you on dialup do not want to download that file, and you are already mad at me because of the number of images on my page.
And in a late-breaking update, Richard just asked me to review the new chapters for Solaris 10. I hope the publisher can get the revised version out soon so the information half life will be longer.

Tuesday, Nov 30, 2004

Grids Redux



One of my first posts was a plea to sign up for the United Devices Grid to participate in cancer research in cooperation with the University of Oxford and the National Foundation for Cancer Research.

Now the Human Proteome Folding Project has begun. I have noticed my computers participating in this project recently. CNN reported on this project here. You can download the software here and join the SunOne team to show your support. While you are typing or out for coffee or away from your computer, it will be doing its part to cure cancer or fold proteins, participating with more than 1 million other computers in these projects.

Remember, Sun's illustrious leader is positively mental over grids also.

Monday, Nov 22, 2004

Immersion Week




Here's part of the gang that got together at the Q Center outside of Chicago last week for Sun's Immersion Week. This fine group is part of the Central US Data Center Practice, which got together for a 'Birds of a Feather' meeting Thursday night. Standing on the left is Bill Pilarski, our fearless Practice Manager, and standing on the right is Brian Ahearn, our Director. Squatting second from the right is Phil Morris, our CTO. We got together to learn about Sun's new technology and strategy for the next year. As usual for this type of gathering, the classes contained important material, but some presenters could have delivered it with more polish. The Solaris 10 DTrace and Zones sessions were good, but I was in too few of them. Famous Sun bloggers I know who were there include John Clingan, Glenn Brunette, John Beck, and Bart Smaalders. If I missed any other famous bloggers who attended, I apologize.

Friday, Nov 12, 2004

FireEngine aka Solaris 10 Network Stack



How did they get this past the lawyers??? They are actually saying that the new network stack is up to 45% faster. For a performance guy, this announcement is truly amazing. This article also discusses the coming 10 Gigabit networking. You can download the latest version of Solaris 10 x86 from here and take this screaming network stack out for a spin. Run your own speed comparisons against Linux, Windows, or whatever. (Disclaimer: your results may vary. Please do not use ftp as a networking benchmark, it sucks. Use the ttcp utility.)
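If you have not used ttcp before, the basic pattern is a receiver on one machine and a transmitter on the other. A sketch from memory (check the man page for the options your build supports, and substitute your own hostname):

  # On the receiving machine: listen for the test and report throughput.
  ttcp -r -s

  # On the sending machine: stream an internally generated pattern at the receiver.
  ttcp -t -s receiver-hostname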

Wednesday, Nov 10, 2004

Good News - Niagara in the public eye

Yesterday's news was depressing, but The Inquirer has this article, Sun's Niagara Falls Neatly into Multithreaded Place, discussing our eight-core, massively multithreaded processor code-named Niagara. The diagram below attempts to illustrate the text of the article, which says in part, "On a macro level, it will have eight cores, each core capable of running 4 threads in parallel, for 32 concurrently running threads." Naturally the illustration is chopped off at 4 cores, but it's for illustrative purposes only. The C's in the diagram are compute time for the thread and the M's are the memory latency of the thread. By switching between threads on a core, we hope to minimize the time spent waiting for memory to catch up.
As a performance guy, this is exciting news. I can't wait to run cpustat and busstat on one of these processors.
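The Niagara counter names are not public yet, but to give a flavor of what cpustat looks like today, here is the classic cycles-versus-instructions sample on current UltraSPARC hardware. Event names vary by chip, so treat the pic names as examples and let cpustat -h tell you what your processor supports:

  # List the performance-counter events this CPU supports.
  cpustat -h

  # Sample cycle and instruction counts once a second, five times
  # (event names here are UltraSPARC ones; Niagara's will differ).
  cpustat -c pic0=Cycle_cnt,pic1=Instr_cnt 1 5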


Tuesday, Nov 09, 2004

Ouch...Bad News if it's True

The front page of the Wall Street Journal was depressing today with a lead article entitled Drag on High-Tech Recovery: Companies Do More with Less (Free this week only.) A few relevant quotes (read 'em and weep with me):

"Corporate spending on technology gear grew roughly 15% in the first half of the year...but...the recovery already is losing steam. The growth in corporate technology spending slowed to 9% in the third quarter.
The shift has big implications for the broader economy. It's terrific news for corporate buyers but is holding back a major driver of overall economic growth...Productivity, driven in part by technology, has risen for 14 straight quarters, the longest stretch in 60 years, though it too is showing signs of slowing."
"Forrester Research, which focuses on technology, says buyers are in a long period of 'digestion' that will extend until 2008. Corporate technology spending will grow only about 6% a year between now and then, Forrester says, as businesses figure out how to extract value from their purchases of the 1990s."

And this really hurts:
"Instead, tech buyers are most excited about new offerings that help them cut costs. Computer servers from Dell replace far more expensive models from vendors such as Sun Microsystems Inc. The free Linux operating system reduces software costs. Other emerging open-source programs -- which generally are available free of charge and are easily modified by users -- threaten traditional business-software packages from suppliers such as Oracle Corp. Internet telephone systems slash communications costs. Indian programmers reduce the cost of writing custom programs."

I see all sorts of problems with these assertions. If the period of 'digestion' runs until 2008, companies would be running on servers that are 10 years old, or at least 3 generations out of date. At a certain point, the failure rate of 10-year-old electronic components rises to the level at which the servers must be replaced on economic, availability and supportability grounds. Not to mention that the increasing sophistication of applications tends to require later generations of hardware just to keep up with the complexity.

Monday, Oct 18, 2004

Capacity versus Performance

When I started this blog, I said that I did performance and capacity planning for Sun's customers. I want to offer up a technical study or two to help others with performance issues. I entitled this Capacity versus Performance in order to highlight the difference. Often a capacity limitation manifests itself as a performance issue. To differentiate between the two, performance might be defined as 'how fast it is going' while capacity is 'the maximum performance of the system or an individual component.' Imagine capacity as a dump truck carrying a load and performance as a sports car racing. Even a sports car has to slow down for corners. Not to be too simple, but we need to look at the performance of each component of the system: CPU, memory, network, disk and tape.

One specific example was a customer who has a directory on the Internet. Their customers submit searches from multiple sites, and the Service Level Agreement (SLA) was that no more than 5% of requests would have response times over 3 seconds. Currently 15% of requests take more than 3 seconds, which puts our customer in a penalty situation. The system is a 6800 with 12x900MHz CPUs. Unfortunately, someone attempted to fix the problem by 'throwing more iron' at it, adding CPUs and memory without knowing why there was a problem. Let's look at a few numbers. From vmstat:

 procs     memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m1 m1   in   sy   cs us sy id
 0 2 0 8948920 5015176 374 642 10 12 13 0 2 1  2  1  2  132 2694 1315 14  3 83
 0 19 0 4089432 188224 466 474 50 276 278 0 55 5 5 4 3 7033 6191 2198 19  4 77
 0 19 0 4089232 188304 430 529 91 211 211 0 34 8 6 5 4 6956 9611 2377 16  5 79
 0 18 0 4085680 188168 556 758 96 218 217 0 40 12 4 6 4 6979 7659 2354 18 6 77
 0 18 0 4077656 188128 520 501 75 217 216 0 46 9 3 5 2 7044 8044 2188 17  5 78

There is something odd about these numbers. In vmstat, we look at the right 3 columns, us=user, sy=system and id=idle, so there is over 50% idle CPU available to throw at the problem. One way to detect a memory problem is to look at the sr, scan rate, column of vmstat (near the middle of the display). If the page scanner ever starts running, that is, if sr gets over 0, then we need to dig deeper into the memory system. The very odd part of this display is that the blocked queue on the left has 18 or 19 processes in it while there are no processes in the run queue. That means we are blocking somewhere in Solaris without using all the CPUs available to us.
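A quick way to watch for this pattern over time is to let vmstat run at an interval and pull out just the columns of interest. A small sketch; the scan-rate column position matches the four-disk layout shown above, and idle is always the last field:

  # Print run queue, blocked queue, scan rate and idle CPU every 5 seconds.
  vmstat 5 | awk 'NR > 2 && $1 ~ /^[0-9]+$/ { printf "runq=%s blocked=%s sr=%s idle=%s\n", $1, $2, $12, $NF }'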

So now, we need to turn to the I/O subsystem. With Solaris 8, the iostat command has a new switch, -C which will aggregate I/Os at the controller level. My favorite iostat command is iostat -xnMCz -T d (interval in seconds) (count of iostat outputs):

                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  396.4   10.7    6.6    0.1  0.0 20.3    0.0   49.9   0 199 c1
  400.2    8.8    6.7    0.0  0.0 20.2    0.0   49.4   0 199 c3
  199.3    6.0    3.3    0.0  0.0 10.1    0.0   49.4   0  99 c1t0d0
  197.1    4.7    3.3    0.0  0.0 10.2    0.0   50.4   0 100 c1t1d0
  198.2    3.7    3.4    0.0  0.0  9.4    0.0   46.3   0  99 c3t0d0
  202.0    5.1    3.3    0.0  0.0 10.8    0.0   52.4   0 100 c3t1d0

Whoa! On controller 1 we are doing 396 reads per second and on controller 3 we are doing 400 reads per second. On the right side of the data we see that iostat thinks each controller is almost 200% busy (an iostat quirk...I never checked to see if a bug has been filed). So the individual disks are doing almost 200 reads per second, and iostat figures that's 100% busy on the disks. That leads us to a rule of thumb, or heuristic, that individual disks perform at approximately 150 I/Os per second (a small sketch at the end of this post shows one way to apply this rule). This does not apply to LUNs or LDEVs from the big disk arrays. So our examination of the numbers lets us suggest adding 2 disks to each controller and re-laying out the data. Unfortunately, due to the disk array configurations, we could only add 1 disk to each controller. That did improve the situation, as seen by the next iostat:

                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  410.6    5.4    4.8    0.0  0.0  5.7    0.0   13.7   0 218 c1
  386.0    9.0    4.6    0.0  0.0  5.3    0.0   13.4   0 211 c3
  129.4    2.2    1.5    0.0  0.0  1.9    0.0   14.7   0  73 c1t0d0
  139.4    1.8    1.6    0.0  0.0  2.3    0.0   16.0   0  79 c1t1d0
  141.8    1.4    1.7    0.0  0.0  1.5    0.0   10.4   0  66 c1t2d0
  133.0    1.0    1.6    0.0  0.0  2.1    0.0   15.6   0  76 c3t0d0
  125.4    2.2    1.5    0.0  0.0  1.9    0.0   14.6   0  72 c3t1d0
  127.6    5.8    1.5    0.0  0.0  1.4    0.0   10.2   0  63 c3t2d0

We are still close to the top end of the performance of an individual disk, but we dropped from 15% of transactions outside the SLA down to 6 or 7%. And the CPUs look good:

 procs     memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m1 m1   in   sy   cs us sy id
 0 2 0 9283064 5482928 787 1293 36 0 0 0 0  0 23  0 13 5145 14763 1394 27 6 67
 0 1 0 6547512 2483056 869 984 110 0 0 0 0  0 14  0  8 5377 8114 1372 23  6 71
 0 1 0 6525816 2461496 1190 1230 0 0 0 0 0  0  0  0  0 6414 17808 1402 33 9 58
 0 1 0 6516240 2451976 1316 481 0 0 0 0  0  0  0  0  0 5432 8226 1509 30  7 63
 0 1 0 6506616 2442768 684 660 0 0 0  0  0  0  0  0  0 5188 16922 1259 26 7 67

Now we still have plenty of idle CPU time and only 1 or 2 processes in the blocked queue. It would have been nice to be able to add 2 disks to each controller, but even 1 disk on each alleviated the problem. After this, the customer studied some of the internal design of their directory search algorithms. As the proverb says, fixing one performance or capacity limitation only exposes the next issue.

The point of this exercise is to look at all the numbers and attempt to locate the precise nature of the problem. Do not assume adding CPUs and memory will fix all performance problems. In this case the search programs were exceeding the capacity of the disk drives, which manifested itself as a performance problem of transactions with extreme response times. All those CPUs were waiting on the disk drives. One other thing to note in this example is that there were no 'magic' /etc/system parameters to tweak. There are fewer and fewer knobs (or parameters) in Solaris to adjust to improve performance.
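For anyone who wants to apply the 150-I/Os-per-second rule of thumb from the iostat output above, here is a small sketch that flags individual disks running near that limit. Remember the threshold is only a heuristic for plain disks, not array LUNs:

  # Flag any disk doing more than ~150 combined reads and writes per second.
  # The device-name pattern skips the header lines and controller aggregates.
  iostat -xnMz 30 | awk '$NF ~ /t[0-9]+d[0-9]+/ && ($1 + $2) > 150 { printf "%s is at %.0f IOPS\n", $NF, $1 + $2 }'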

Wednesday, Oct 13, 2004

Solaris 10 x86 Confession

Team,

I tend to participate in email aliases and share problems I have run into. Tonight I have to confess that I croaked my Windows 2000 partition when I attempted to dual-boot my laptop last Saturday. First, let me reassure you that I know this can be done; I completed the process Thursday night. However, I will confess my failure in order to save you heartache and grief.

I am excited about trying out the new features of Solaris 10 x86 like DTrace and Containers. Download your copy of Solaris 10 x86 here. I first cleaned up my hard drive by deleting outdated files and taking out the trash. Then I defragmented the drive using the Windows 2000 Disk Defragmenter. This is important for resizing the Windows partition. Then I did follow RULE #1 and made a backup of my important files. I used the backup facility of the Nero tool (burning CDs, not Rome, get it). I made 2 mistakes here. Mistake #1 was that I did not follow RULE #2 (verify your backup), so several files would not restore later because of media problems. Mistake #2 was that I backed up my data starting at the level of 'My Documents,' not at the level of my user ID (one level up), which would have included my Application Data folder, that is, my bookmarks file and my Outlook PST file. I do have older backups of those files, but I lost 2 weeks of data when I thought I had fresh backups.

My problem was not understanding some features of the tools I attempted to use. I picked the tool Partition Commander to resize the 40GB hard drive into a 25GB partition for Windows 2000 and a 14+GB partition for Solaris 10 x86. Unfortunately for me, Partition Commander installed a utility, checkmbr (Check Master Boot Record), which automatically attempts to reinstall a base Master Boot Record. When you install another OS like Linux or Solaris x86, the new OS must update the master boot record and offer you the choice of which OS partition to boot. The repartition worked and the Solaris x86 install worked fine. I rebooted Solaris x86 several times and was fine. The problem occurred when I rebooted the Windows 2000 partition and the automatic checkmbr utility found the Solaris boot partition chooser in the master boot record. It attempted to restore the record to its original state, and then neither partition would boot.
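One cheap insurance policy, which I did not take at the time: save a copy of the master boot record before letting any partitioning tool near the disk. From a Knoppix shell, assuming the disk shows up as /dev/hda and a USB stick is mounted at /mnt/usb, something like this:

  # Save the 512-byte MBR (boot code plus partition table) before repartitioning.
  dd if=/dev/hda of=/mnt/usb/mbr-backup.bin bs=512 count=1

  # If the boot code gets clobbered later, restore only the first 446 bytes so a
  # partition table that has legitimately changed since the backup is left alone:
  # dd if=/mnt/usb/mbr-backup.bin of=/dev/hda bs=446 count=1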

I believe you can and should do this. There are challenging issues in doing it, but documents like this one can help you. I happen to have a Toshiba Tecra 9100 laptop, which needs a few BIOS settings changed:

Disable USB Legacy FDD support

Disable USB Legacy support for keyboard and mouse if it is a separate setting

Disable Parallel port

On Thursday night, I got out my Knoppix CD, which everyone should have in their CD case as a rescue CD. It has a utility, qtparted, which I used to partition my hard drive. Other versions of Linux also have this utility. I then rebooted Windows 2000 and let chkdsk run to get used to the new partition size. Then I took my stack of 4 Solaris 10 x86 CDs and ran the install. Success! I am ironing out a few display issues but looking forward to writing my first DTrace program tomorrow.

Friday, Oct 08, 2004

Get on the Grid!

Everybody is positively mental over Grid computing (aka distributed computing) today. You may be curious about the Grid phenomenon, but you are worried that you have not yet installed Solaris x86 at home. Not to worry. I know you are going to make the switch Real Soon Now, but even before you do, you can participate in the grid with your home computer, even if it runs an operating system from another company. :)

Grids have been around for a long time. In the '90s I participated in SETI@Home, aka the Search for Extraterrestrial Intelligence, aka the Search for Little Green Men. It's an impressive project that has 5.2 million users who have donated 2 million YEARS of computer time to the project. Today they are working at 66.5 teraflops (trillion floating-point operations per second). That's some serious computational rock and roll. For my money, though, Little Green Men is just so last millennium. I mean, if Agent Mulder has given up the search, why am I still working on it? :)

Another of my favorite grid projects is Folding@Home. This Grid project is modeling "protein folding, misfolding, aggregation and related diseases" (like Alzheimer's). Currently they have 171,628 CPUs running Windows, Mac OS X and Linux, which means that they have 196 teraflops on the problems. The image preceding this paragraph is a beta amyloid peptide, "thought to be responsible for nerve cell death in Alzheimer's Disease." It is a part of Projects 722-724, so your computer could be a part of helping with Alzheimer's research. Truly a worthy cause, and you can join up here.

I was folding proteins for Professor Pande, but 2 years ago I had a colon polyp removed that was becoming cancerous. So these days my computers are working on cancer research headed up by the University of Oxford. There are over a million members with almost 3 million computers working on the problem, and yesterday 270,000 result sets were submitted. If you are concerned about cancer, download the software from here. Then join my team, SunONE. I am not actually the captain, but I liked the name of the team.

I urge you to pick one of these projects and let your computer(s) crunch numbers while you are not actively using them. Each of these processes runs in the background at a lower priority than all other work your computer is doing. Most can be paused, but you will not notice them in the background because your OS only schedules them when there are idle cycles available. If you are surfing the web, they back off. Even while you are typing they work, and trust me, a modern CPU does a lot of waiting around even if you type 300 words per minute. If "a mind is a terrible thing to waste," wasting good computer cycles is just criminal. Thanks for your support.

Wednesday, Oct 06, 2004

Work

I've been with Sun for 7 years in Professional Services. I primarily do large systems, HA clusters, Oracle 9i RAC clusters, and performance and capacity planning on these systems for our customers. I will talk more about my work later.

 
