Tuesday Sep 11, 2007

Starbucks, Your Brand isn't Worth $9.99

I decided I would get out of the house today to do some more work on the XAM stuff I'm doing. I have a nice setup at home and I get to avoid the long commute, save a little gas, yadda yadda yadda but some days you just have to leave for a few minutes to regroup.

So, I decided I would wander over to Starbucks and bring up my NetBeans IDE to do some coding. I grab a coffee, grab a chair that 5,000 people before me sat on so there is no telling what is in the crevices, and I pop open the laptop. Gack...they are still partnered with TMobile. Guess what? My account was "cancelled" by them and they want me to call customer service (probably otherwise known as "the Sales Department"). I surf around a bit and they still want $9.99 for a day pass.

Listen, Starbucks, it is easier for me to get up, move my butt down the street and get a faster, more reliable Wi-Fi connection for free. Further, the Starbucks brand no longer commands a premium for "coolness" since the market is saturated (600 Starbucks in the San Francisco area...you have GOT to be kidding me), so it's not even COOL for me to be sitting in Starbucks working; it is far cooler at the Tattered Cover or even Panera.

So, for me to get a few minutes of work in (for which I DO buy coffee, yogurts and other stuff since I come in before Breakfast), you want me to


  • Purchase a few things at your store
  • Drink McDonald's-like Coffee that tastes the same every day, day in and day out, with no variation in flavor and no personality
  • Pay $9.99 for a day pass that I will use for maybe an hour while I eat Breakfast
  • Actually "talk" to the TMobile Sales...err...Service people that deactivated my account
  • Not support my local businesses that have free Wi-Fi (I know Panera isn't local but the Tattered Cover is)

Listen, it doesn't add up! It is just not cool to carry around a Starbucks cup; it is just coffee, coffee I can buy at approximately 200 other places in the Denver area. And paying over $17 (TMobile + Food) for the privilege of Starbucks just isn't worth it anymore when I can take $10 off by going to any of the myriad other places that have Wi-Fi. Heck, I could go to the hospital again and get free Wi-Fi (would insurance help me out on that one????).

And if you want a clue why your U.S. expansion is stalling, it doesn't take a brain surgeon to figure it out. And, frankly, the iPhone partnership isn't going to help you. Get out of the TMobile partnership, get some food that isn't stale and unappealing, get some hipness...you're a dinosaur.

Wi-Fi is ubiquitous. What is a brand worth to a consumer when that brand has hit the saturation point? I'm not a genius, but it's not $17.

Tuesday Sep 04, 2007

CPU and Storage all the way down

This weekend I had the privilege of finding the last campsite in all of Colorado. I went to Stillwater Campsite in the Arapahoe National Forest area, just down the road from Rocky Mountain National Park. Everything was spectacular about the entire trip, and I had my Canon Elph along for the ride (I bought a Canon Rebel right before we left but...forgot the battery!!!!!!).

The panoramas in the Rocky Mountain National Park area are fantastic.

If you haven't dug into your latest digital camera + editing software, you may not have noticed a little feature that is called something like a "photo stitcher". It basically allows you to take a series of photos and stitch them together into one large picture, like the one here:

The original source is much larger than what I show above (9448x1823 to be exact) and is stitched together from a whopping 5 pictures taken in a line across the horizon. If you like the above picture, it is a scene off the roadside in Rocky Mountain National Park. Longs Peak (a 14er) is on the right side of the shot. Is it a good picture? I don't know, you can decide. Fortunately for you, in Rocky Mountain National Park you actually have to try to capture a bad picture.

What is particularly interesting to me is the number of processors and the amount of intelligence that is inserted between the original captured content and the final printed result. What we are witnessing is the increasing encroachment of system and application intelligence in the path of the data content. This is occurring everywhere, from the smallest of devices up to the largest of storage systems. Further, companies that can insert applications along the datapath can do a great service to the consumers of the content along the way. My stitched together photo is a perfect example.

Start with me standing on a ridge in Rocky Mountain National Park. I would LOVE to have the entire panorama I'm seeing in a single picture, but my lens simply won't allow it. Enter the "Stitch Assistant". Instead of my camera being a light capture and store device, it is a light capture, process, store and assist device. Using the stitch assistant (that my friend found for me), I was able to take the first picture, and my camera would show the edge of that picture so I could line up the next picture with some intelligence. Not only that, it would keep the photos organized so that they are easily found and accessed on the SD card.

Next, I upload the pictures to my laptop, where there is more storage and CPU processing. The Canon software takes the images that were grouped on the camera and stitches them back together into one large panorama, crops it, and stores it as new content. Now comes a FATAL flaw in the Canon software: there is no built-in uploader to online content repositories.

So, I bring up Kodak Easyshare or the SmugMug uploader and grab my panorama (SmugMug is actually proving to be easier). SmugMug then processes the image AGAIN on the way into their content repository and transforms it into thumbnails and metadata. Finally, I choose to print my panoramic view. Here all of the storage services fall down: they offer the traditional print sizes (4x6, 5x7, etc...) and don't tailor their services to us budding panoramites (I would love to choose a picture and have them create a custom size that best fits it).

Think about this: every storage operation was preceded by application intelligence that made meaningful transformations of my content. With better programmers and more time, the storage operations could become far more intelligent as well. For example, there is absolutely NO reason that the pictures I used "Stitch Assistant" on shouldn't have been woven together into a single panorama when they were transferred from my camera to my laptop. The storage operation itself should be able to apply meaningful application logic to the process.

Where these transformations take place will depend on the particular application. For example, the Canon camera uploader is a natural place for the stitch-assisted photos to be re-stitched because of the lack of CPU on my camera and the lack of standards for metadata in images. Had my camera more CPU, the camera itself could do the stitching. If images had standard metadata about which picture in the series was which, you could push the stitching into the storage device itself (you would store 5 pictures and get 6 back in your folder!). Even the Operating System could add the functionality.
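For fun, here is a minimal sketch of what that kind of "intelligent transfer" could look like in plain Java, with no camera metadata at all: it groups photos into panorama candidates purely by how close their file timestamps are. The class name, the 10-second window, and the idea of keying off timestamps (rather than real stitch-assist metadata) are my own assumptions for illustration, not anything Canon actually does.

import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Toy "intelligent import": group shots whose timestamps are close together
// into panorama candidates so a stitcher could be invoked on each group.
// The 10-second window and the console output are illustrative assumptions.
public class PanoramaGrouper {
    private static final long WINDOW_MILLIS = 10 * 1000;

    public static void main(String[] args) {
        File cardDir = new File(args[0]);              // e.g. the mounted SD card
        File[] shots = cardDir.listFiles();
        if (shots == null) {
            return;                                    // not a directory
        }
        Arrays.sort(shots, new Comparator<File>() {
            public int compare(File a, File b) {
                return Long.valueOf(a.lastModified()).compareTo(Long.valueOf(b.lastModified()));
            }
        });

        List<File> group = new ArrayList<File>();
        long lastTime = 0;
        for (File shot : shots) {
            if (!group.isEmpty() && shot.lastModified() - lastTime > WINDOW_MILLIS) {
                report(group);                         // hand the finished group to a stitcher
                group = new ArrayList<File>();
            }
            group.add(shot);
            lastTime = shot.lastModified();
        }
        report(group);
    }

    private static void report(List<File> group) {
        if (group.size() >= 2) {
            System.out.println("Panorama candidate (" + group.size() + " shots): " + group);
        }
    }
}

Run it against a folder of vacation shots and the bursts you took for a panorama tend to fall out on their own; real stitch-assist metadata would obviously make the grouping far more reliable.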

Each step of the way has distinct advantages and disadvantages. More and more we are going to see application logic being placed in between the capture and raw bits of our digital life. It just makes sense that our storage devices should be able to do some work on our behalf. Further, these abilities are going to creep into our lives almost unnoticed...except by those companies that are enabling these leaps in customer functionality, like Sun Microsystems :-)

Friday Aug 17, 2007

Happy Birthday to the CD...not many left!

A shout out to the birthday...thing this weekend, the Compact Disc (CD), congratulations on your 25th birthday! Ahhh, I have plenty of reminiscing about vinyl on my blog as I rip my vinyl to my hard drive and then, yes, burn the MP3s onto CD to play on my car stereo (when I don't have my iPod with me).

My first CD? Well, I believe it was 10,000 Maniacs In My Tribe purchased while I was in college at Winona State University, circa 1990...it's so hard to remember. By the time I was done with my undergrad I had all of the essentials (Dire Straits Brothers in Arms, Cowboy Junkies Trinity Sessions, a Pink Floyd or two, etc...).

My last vinyl album (to date) was Son Volt's Trace. I still have it unopened; the CD was good enough at the time.

The CD is one of those ubiquitous technologies. Consider how far its usage has crept, from ABBA and Dire Straits recordings to a backup technology. The CD squashed the cassette tape. CDs don't get warped and all funky when you set them on your dashboard or toss them into the back window of your car (at least...not very often). The record industry certainly didn't like cassettes, but the lifetime of a cassette was about a year in my hands, and the quality of a cassette recording compared to vinyl was lacking at best.

Once CDs became home recordable, look out record industry!

But think about that transition. When the recording industry went from analog to digital, it made the transition from (basically) an infinite number of data points recreating the music to a finite number of bits. With the CD, you could actually treat music as little digital pieces, what a revolution. Once home media moved to digital, the flood gates opened (though it did take a while) to migrating that content to other media and leveraging growing CPU power and bandwidth to move it around. Time and time again the record industry has been caught off guard by, and fearful of, the natural evolution of media and transmission that CDs ushered in.

As the price of recordable CDs dropped, diskettes were challenged to deliver more capacity for computing, but the drop-off in diskette sales was inevitable. CDs became too cheap, too fast, and ushered in massive recording capabilities for data (not just music).

Today, I rarely use the original CD in my life. I buy them, rip them to a portable format, then burn them back onto recordable CDs that I can use in my car and put them on my iPod. I use CDs to back up data and go through them like crazy when I'm building boot media for my Solaris Nevada builds.

This next part is for the ears of the CD only:

Unfortunately, I must also declare the coming death of you, the Compact Disc. You have certainly aged well, I won't deny that. You'll be around for a few more years but you are starting to remind me a lot of that 3 1/2" diskette format that held on for dear life trying to cram a few more K onto it. Frankly, my USB keys and the network are replacing you faster than you can say the word DVD...heck, the DVD format wars have barely finished and they are already being replaced.

Heck, it's hard to even find you at conventions anymore; USB keys have all but replaced you. Maybe you didn't notice, but AOL hasn't even delivered me a CD in YEARS (my last mailing was a DVD if I remember right and even that was about 2 years ago).

Now, if one of these storage utilities would host some boot images and tell me how to boot my laptop from it (after a small stub loads my wireless driver), I would be DONE with media...buh bye.

Still, I will keep a spindle of you around at all times, at least for the next two years until I replace my car stereo with one that has a USB jack (HELLO CAR STEREO MAKERS...WHAT THE HECK?) or get an iPod with Bluetooth and a compatible car stereo Bluetooth interface.

Wednesday Aug 15, 2007

Wikis Solve World Hunger (and make coffee)

As you know, Sun launched wikis.sun.com earlier this month. I even have a public space I'm working on for Storage System Patterns and some private spaces I'm participating on.

Personally, I've been in and out of the Wiki world for over 5 years. A co-worker at J.D. Edwards used to expound endlessly on how Wikis were going to change the world. Oddly enough, Satish was right in many ways...some wikis have changed the world! Other wikis just stink. You know, to misquote a famous politician: "It's the content, stupid".

Before you get to content though, you also have to be careful to choose the right tool for the job at hand. Here's an example. Let's say that I have 100 Business students (most of whom have only used iTunes in their life) at a college that need to learn how to run business applications in a Solaris environment. As a school administrator, I'm left with two options:


  • Sign the students up and let them come into the classroom and communicate together in an attempt to learn the new CRM system
  • Sign the students up, assign a teacher well versed in Solaris and CRM and let them teach the class...facilitating comments and questions from the students as they go

This is a no-brainer: if you want to promote pirating of movies, choose the first one; if you want to teach the students something, choose the latter. Why? Your community (Wiki) will get hijacked to serve the desires of the community that you assembled, and that desire is clearly NOT Solaris and CRM...these are business students. The latter (a Blog) is NOT a community that can run amok; it is a soapbox discussion directed at the assembled readers, with tiny spaces during the day to facilitate comments. The comments rarely rise above the importance of any lecture and, in fact, relate only to that lecture (a blog post).

There are a couple of other avenues for content these days that put things onto the web:


  • The venerable static web page content / update (no participation facilitated)
  • Group blogs (very, very useful for teams of people that all have something to say but don't want to collaborate), everyone gets to have their own soapbox.

There are more, I'm sure, but I wanted to keep this short and sweet. Here is another way to think about these things:


  • Wiki - Group barn raising. You probably have a "moderator" cleaning up loose ends and directing people, but you live and die by the workers building the structure, not by what you produce as an individual. Some BIG downsides of Wikis are the "group" mentality for page formatting and design, and individuals trying to hijack a barn for themselves.
  • Blog - A single person's soapbox, like this one. Blogs aren't about "participation" so much, they are about easy publishing, quick ways to get information out, and directed comments back about the particular topic.
  • Email Lists - Ahhh, email lists are HUGE...they are immediate and targeted, with even fewer formatting issues. Long live the email list. This is a great "forum" avenue.
  • Static Web Content - Publish

This is clearly, clearly a simplification of life. BUT, I often have these conversations when I start educating folks on the differences in various avenues for publishing content. The barn raising, soapbox, publish metaphors seem to hit the spot pretty well.

Monday Aug 13, 2007

Google Storage Offering / Microsoft SkyDrive

I don't think anyone would argue that Google is one of the masters of Web 2.0. One thing that Google does particularly "Web 2.0" is the Perpetual Beta. They release code early, they release code often, and more often than not, the application is a glimmer of what "could be" rather than the all-conquering application that is.

Google's Storage Offering is one of those things that could be. With the wealth of APIs available (including the Google Data APIs, the Google Web Toolkit and the Picasa Web Albums Data API), the new storage offering has more than enough potential for conquering the world.

But, lo and behold, true to the Web 2.0 roots, the first outing for Google's Storage Offering only "integrates" the Gmail application and the Picasa Web Albums storage (so that both applications can access the same storage). Obviously, Google is moving to a consolidated storage / application model rather than separate stovepipes...though I have to question making people pay for this feature (free storage in the two applications is still stovepiped). In addition to "integrated" storage, you also get more storage.

So, as of right now, the model for Google Storage pricing is different from Amazon.com's pricing. In a way, this is an apples to oranges comparison as the accessibility of the storage is different (Google is accessed THROUGH an application and Amazon is accessed BY an application). I can ONLY assume that there will be parity in writing to a data API at some time (though I believe the capabilities of the data API will be slightly different).

Recall that Amazon.com's pricing is $0.15 per GB per month of storage, with a bandwidth price of $0.10/GB for data transfer in and tiered pricing for data transfer out (starting at $0.18/GB).

The cost of Google storage is $20/year for 6GB, $75/year for 25GB, $250/year for 100GB and $500/year for 250GB. Because there is no "application" API at this time, it makes sense that there is no bandwidth charge.

It's very, very difficult to make a comparison here, but at the 6GB-per-year level (assume 12GB of inbound data and maybe 18GB of outbound data), you end up with $20 for Google and $15.24 for Amazon.com. But, again, what you can do with that storage is radically different, so it's really not even a fair comparison. Today, you would choose your provider based on your needs (if you use Google apps, buy storage from Google; if you want to write an application that requires storage, use Amazon.com).
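If you want to check that $15.24 figure (or plug in your own numbers), here is the back-of-the-envelope arithmetic as a tiny sketch. It assumes the 6GB sits in S3 for the full year and that all outbound traffic is billed at the first-tier $0.18/GB rate quoted above.

// Back-of-the-envelope comparison for the 6GB / one-year scenario above.
public class StorageCostCompare {
    public static void main(String[] args) {
        double google = 20.00;                        // $20/year flat for 6GB

        double storage = 6 * 0.15 * 12;               // 6GB stored for 12 months
        double inbound = 12 * 0.10;                   // 12GB transferred in
        double outbound = 18 * 0.18;                  // 18GB transferred out (first tier)
        double amazon = storage + inbound + outbound; // = $15.24

        System.out.printf("Google: $%.2f%n", google);
        System.out.printf("Amazon: $%.2f%n", amazon);
    }
}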

Will Google give access to the storage from the Google Data API? I have to assume so...some day.

Architecturally, if you make the assumption you can switch from Amazon.com as your application storage provider to Google storage at some point in the future, you will need to build an adapter/model layer in your application so you can plug in the Google API as a target (assuming there is a level of parity in the API capabilities).

Finally, I have to briefly mention Microsoft's SkyDrive Beta. SkyDrive is a web-based application that you can sign up for with a Microsoft Live Account. It provides 500 MB of storage with a web-based interface for dragging and dropping files. There doesn't appear to be a CIFS API yet for you to mount it on a machine and use it seamlessly, but you can almost guess this is in the works. Keep in mind, though, that SkyDrive is just part of a suite of sharing applications that Microsoft Live puts out (it also includes a Photo Gallery, Blogs, EMail, Maps, Search, etc...). It's pretty obvious when you put the storage offerings from Google and Microsoft in perspective: they are simply ways to enhance their web application/desktop efforts! Microsoft Live + XBox 360 can extend Microsoft into your living room; is Google looking for this extension? (Remember, Amazon is already in your living room with its movie service via Tivo.)

Perhaps a year from now, we will be looking at two different "APIs" for accessing your online storage utility:
- CIFS/NFS for use with normal File I/O libraries
- Application-aware interfaces (such as Amazon.com and our own StorageTek 5800 Storage System)

Where will Google land (if it lands at all)? Both? A data API and CIFS/NFS?

The next few years will prove interesting as the apps and storage come into your house via so many different access points (consoles, phones, PCs, DVRs, etc...).

Wow, somehow an apples to apples comparison of storage utilities (of course, by definition, the only "storage utility" here is Amazon's) turned into an online application blog post?????? What's with that?

Friday Aug 10, 2007

5 Years at Sun...

...and this swanky new watch to prove it!

Well...it's swanky to me at least. My last watch was about $18.

Also, keep this in mind, this is huge. Five years is longer than I was married! My previous longest term commitment was to IBM, a total of 7 years (besides my family...).

I could reminisce about all of the great times at Sun but, really, 5 years isn't that long! I've had some great managers (not a single dud in the bunch), I've had wonderful projects, I've had some great teams. There have DEFINITELY been some rocky times but that's to be expected.

Thanks Sun! I realize you sent me the watch as a thank you gift, but it really is a privilege to work for a good company. Hopefully you'll have me around for another 5 :-)

Paul

Friday Aug 03, 2007

Storage System Patterns Wiki Site

It looks like wikis.sun.com went live, public and "beta" this morning! As a result, I can mention a little experiment I've been working on for deployment and implementation patterns for storage systems :-)

Over the past few months I've been brainstorming about how to capture the substantial community knowledge around implementing storage solutions that are centered around "off the shelf" systems coupled with "off the shelf" software. I will admit the obvious bias towards Sun Systems and using OpenSolaris as the Storage Operating System, but when a pattern is posted there is no reason you can't fork it and add other systems and software implementations.

The space is pretty sparse right now, but you can log in, go to the System Patterns Space, and start editing it, linking things, helping to organize, adding and altering the pattern I put up already (more are in the works), etc...

My plan is to "moderate" the space and hopefully guide it into usefulness, keep the space full of "positive" kharma, and add patterns as my time permits me to experiment in my architecture role (this isn't my full time job).

The first pattern I posted is a reformat of the two part blog posting on ZFS Snapshot and Amazon S3 (Part 1 and Part 2) into a single editable File System Backup to Storage Utility Storage System Pattern.

I will post my partial pattern on zones and MediaWiki over the weekend. I haven't had a lot of time to do the software install in my zones, so any help would be appreciated.

I do have goals for the space, but since it's my intent to make this a community site, I'm not particularly attached to my goals if the space morphs into something more useful. Here is my original set of goals:


  • Create a set of architectural blueprints, recipes and implementations for interesting and non-intuitive storage solutions that start with servers as the primary building block
  • Anyone can plug in different implementations or morph the patterns to be more useful
  • Let individuals post requests for solutions to a particular problem that others may have seen or are interested in solving with the community
  • Try to maintain the space as "problem solution" centric rather than technology-centric
  • Make sure links are provided to patterns on the Internet
  • Ensure folks using the patterns are encouraged to put notes into the pattern or keep content fresh if things get out of date

Again, I'll be blunt, I love the idea of this being Sun-centric but I have to be honest, I know many of you have solutions that don't include Sun gear, so it would be naive of me to believe a wiki-space would be useful to you if there was some "Sun-gear Only" clause ;-) But the power of the Wiki is that when you post a pattern and recipe that uses non-Sun gear, the community can sit and ponder the solution you present and fork a section with a bill-of-goods for Sun Gear :-) heh heh (if it exists, of course).

So, chip in if you have the time or interest, send me ideas if you don't want to edit the space directly, enjoy the openness! You can also ignore it and it will probably quietly go away some day into the lava field of interesting ideas that got stale and regurgitated as something else ;-)

Wednesday Jul 25, 2007

Amazon Unbox - Off the Hook - Good Stuff

Ok, after I ranted away on how badly Amazon Unbox let me down while I was in the hospital, I am giving it a cautious thumbs up now. I was checked into the hospital again for the recurring pneumothorax and had another surgery. Prior to that, I had contacted the gentleman at Amazon who wanted to look at the problem I had with getting to movies while I was laid up. After sending him the logs, he saw there was definitely an issue with the network sending SOAP requests back and forth through the wireless proxies at the hospital...my guess is it's because I hadn't yet connected to the hospital wireless when I brought the client software up.

Basically, the hospital network uses Cisco software. When you connect to the wireless, you automatically get routed to an "agreement" page where you accept the terms and conditions, then you "appear" to have open access to the network. Well, the requests between the Amazon Unbox client and the server software appear to have gotten snagged somewhere in the middle. This type of setup is the same as Starbucks' TMobile, the airport, yadda yadda.

Armed with some new knowledge I grabbed the updated 2.0 Unbox client software. I was SURE to be logged into the hospital network prior to the client starting, wondering if that was one of the numerous problems I may have encountered.

The 2.0 software is getting better, I must say! The user interface had Unbox sale videos right in the window. This was nice: I found a $0.99 rental (Man of the Year) and clicked on it. This brought up my browser and took me to Amazon.com; hopefully the software folks can finish the integration so I never have to leave the client itself...I hate browsers popping up, it feels very disconnected.

I hit the rental button in the browser and waited to see if the download would start in the client software. It started within a few minutes! First barrier overcome. Now, the download did take a LONG time, unfortunately. But with the 2.0 software, there was a nice progress bar right under the video along with the bit rate that the video was being transferred at. Much nicer; easy feedback is what we need.

The gentleman that contacted me ALSO let me know the screen saver issue I ran into would be fixed in this version of the software as well.

Now, listen up movie studios, moguls, and executives. You have to work with Amazon to fix this one-day-to-watch problem. There is a massive revenue stream sitting here for you to take advantage of. Allow a Netflix / Blockbuster / Hollywood Video style service where I can rent two movies from Amazon, but cannot get another until I've returned the movies I've watched. Give me time to watch the movies (not 1 day). Give me a "return the movie" button as well so I can return a movie unwatched (Amazon, even consider refunding part of my rental if I don't watch it). Amazon has a great service here and it is improving. There will ALWAYS be hackers and people will always find a way around copyrights. BUT, if you make the service as seamless as Amazon is getting (especially with the Tivo integration), and you have lots of $0.99 sales, there is no incentive to try to get around it. This is a great and promising service; work with Amazon to be successful rather than letting the streaming networks win.

If you work with Amazon, and the Amazon client software gets better than the BitTorrent client software (and it is already there as far as professionalism and polish), Amazon will help you win against the pirates. Most pirates aren't pirates to save $3; they are pirates because it is EASIER to use the pirate software and it gives them something they can't otherwise have...like easy video downloads.

Here's my advice to Amazon Unbox Users (Tivo or PC):


  • Watch the sales, good deals!
  • DO NOT expect to watch the video immediately, rent in the morning, watch in the evening. It is faster than NetFlix, slower than going to Blockbuster or Hollywood Video up the street.
  • If your download doesn't start quickly, contact Amazon, the folks seem friendly and nice so help them to succeed. Of course, contacting them is a little mysterious, it looks like you can go to the Unbox Support link at the top of the page, and then there is a "Contact Us" link in the right column.

I did see the other day that Blockbuster was eating into the Netflix profit and Netflix was losing customers for the first time ever. Personally, I have a Hollywood Video monthly agreement where I can just walk up (2 blocks away) and get 2 videos. It is quick and convenient. I believe NetFlix is about to be in a fight for its life. Amazon is a more "traditional" rental service, no long term agreement/monthly agreement necessary. I like it and may have to toss my Hollywood Video agreement soon...if we can get the downloads a bit quicker and more consistent.

For now, I have every intention of continuing to leverage Amazon Unbox for when I travel, get stuck back in my hospital bed, or have the urge to drop a movie to my Tivo when I'm up at work.

This is good stuff. Amazon definitely has a good lead with video delivery. My advice to Amazon:


  • Focus on appliances and get agreements with the DVR gurus, you need to conquer the appliance space...what about Microsoft XBox and Sony Playstation 3, seriously...get a channel with them (I don't think my Wii is reasonable, not enough memory).
  • The software for PCs is OK, but you probably know that it is not where you are going to make a killing. The PC software will be used for convenience...things like my hospital use case, travel with the kids in a hotel room (this is FAR cheaper than a rental in a hotel), business trips. So the PC software must be completely laptop friendly.
  • Work with these movie studios, they are making your service less competitive than traditional media, and that doesn't seem right. It is in their interest for your client software to be better than BitTorrent.

Thanks for the help Sam!

Monday Jul 16, 2007

Amazon Unbox - Utter and Complete Irritation

I gave Amazon Unbox the old 2.0 college try over the past week out of desperation. The promise remains, but I am completely and utterly disappointed in Amazon Unbox rentals. The summary:


  • While the software installed smoothly, my rental at amazon.com didn't even register with my player for over 24 hours!
  • It would have been faster to get a Netflix movie through the mail than it was to receive the Unbox download for my rental once it started
  • The 24 hours to watch your rental once you start is utterly foolish

Here's the whole story. I was stuck in my hospital room with only the Season 1 House DVD to keep me company and a wireless connection. Since I had a fever and a chest tube I wasn't really going to spend any time working (sorry...). My friend offered me her amazon.com account to rent a movie from Amazon Unbox.

I downloaded the software and installed it, no problem...oh, I had to reboot before it ran but I've been there before with Amazon (see a prior post).

I launched amazon.com and found "The Bourne Identity" and hit the rental button. The terms were interesting: I have 30 days to watch it once it's downloaded, but I have to watch the whole thing within 24 hours once I start. No problem, I'm in a hospital bed.

The download didn't even REGISTER with the software for over a day, then once it started downloading, it took over 36 hours to download! UNBELIEVABLE. By the time it downloaded I was SO frustrated I didn't even watch it at the hospital.

So, I started watching it last night...I'm still in recovery so I only got about 30 minutes in and had to tune out to catch some rest. I ran up to my room to beat the 24 hour deadline tonight, and I seemed to have made it. At the 1:20 mark (a 1:40 long movie), my screen saver kicked in. Ouch, I flicked the mouse, no picture...sound, but no picture. I played around to no avail so I restarted Amazon Unbox. My 24 hours had expired!!!!!!! My video was gone with 20 minutes left to watch.

UNBELIEVABLE.

The worst part? When I was getting the details on the rental times for this blog post I "1-clicked" and re-rented the @)(#$&*#&$ movie for another $3.99. GAGGHHHHHHHHHHHHH.

Amazon Unbox, I have tried you twice. This time you completely and utterly failed me when I needed you. To be honest, I don't really give three strikes to things. I registered my Tivo with your service but I am going to have to pass until I find better rental terms and a guaranteed delivery time. If you can't beat a Netflix Rental by snail mail, you really don't deserve to be in this game and, further, if you are going to force me to watch a movie in my hectic life within 24 hours of when I first press play, you have SERIOUSLY miscalculated my lifestyle.

The worst part of this experience? DUDE, I WAS DEPENDING ON YOU TO ENTERTAIN ME WHEN I WAS KNOCKED OUT IN A HOSPITAL ROOM, it was your PERFECT chance to shine...and you struck out in so many ways.

Control Panel / Add or Remove Programs / Amazon Unbox Video / Change/Remove

To reiterate, I won't try you on my Tivo and your software is gone. Here is what you have to change before I install you again:


  • Guaranteed delivery time (sorry, I know it's hard, but you have ways to calculate overall connection speeds...do it)
  • A "panic" button if your download doesn't start
  • 30 days to download and watch the program, no 1-day limit once it's started...end of story

Thursday Jun 28, 2007

Spontaneous Pneumothoraxes, X-Rays, Hospital Tech, Tracking Devices, Unstructured Content

What a week it's been! I started out on Monday giving a presentation to Victor Walker's staff on collaboration and I ended Monday lying in a hospital room with a (get this) Spontaneous Pneumothorax (air in my chest cavity that results in a collapsed lung). Guess what, these things "just happen" to tall, skinny guys. I'm not kidding.

So, here's what I have to say about Spontaneous Pneumothoraxes and their treatment (in case you happened here trying to understand what they are), then I want to tell you about the cool hospital stay I had!

Yes, if you say Spontaneous Pneumothorax enough, you will think you are in a live Dr. Seuss book. But once they jam the chest tube into you, suppress these thoughts, you seriously do not want to be laughing. I went to the Dr. originally because of a weird shortness of breath...I mean, really short. I'm not usually that way. Another symptom was that I kept trying to decide to go for a run, but whenever I would step down hard, it felt like my upper chest was collapsing (hint...it was). Check this out:

See the shadow of my lung going halfway across my chest cavity? It's supposed to go all the way across my chest cavity. My first x-ray was sort of an out-patient thing. They said they'd call me on Wednesday with the results. So I settle in at home to finish my day and I get a phone call from the radiologist: "Were you in an accident?" (no), "Have you been hit by anything lately?" (no), "Well, don't be too concerned, but drop what you are doing and get a ride to the hospital. I am checking you into the ER now. What hospital are you going to? Do not drive yourself, whatever you do."

Ok, to cut the story short, Dr. Kim at Littleton Adventist Hospital had a chest tube in me within 45 minutes of that phone call. Ouch. I'll forgo the details, but it basically sucks...but getting it out sucks even more since you aren't on the same pain medications :-)

Now here is what is interesting to us techy folks. I was in the new wing of the hospital that has wireless. While I didn't do my normal work, I was relatively effective for a guy with a tube in his side and without a shower. Very cool. I actually started this entry while I was lying around.

Next, I had X-Rays every 8-12 hours, so I was joking with the guy that I'd like a copy. Ben says, "I'll bring you a CD". Next thing I know I have all of my X-Rays on a CD that I can browse through and try to be a pain to Dr. Kim. Needless to say, I didn't have much luck diagnosing myself; it's difficult to see the lung in the pictures...I know they are trained, but a computer guy just doesn't do Biology.

Interestingly, the X-Rays are a perfect example of unstructured data. Unstructured data is essentially data that is not easily read by a machine. You would want to associate a lot of metadata with my X-Rays: my name, date of birth, the date the X-Ray was taken, the problem indicated by the X-Ray, etc... Then you would want an application that could submit queries to the repository of images, search the metadata, and retrieve specific images based on the criteria. For example, if someone wanted to see a Spontaneous Pneumothorax, they could look it up.

Using the Honeycomb API, the application could query the Content Addressable Storage (CAS) that a Honeycomb System offers like this:

[fn LCASE(patient.diagnosis)] LIKE 'spontaneous pneumothorax' AND [fn LCASE(artifact.type)] LIKE 'xray'

Submitting this query via one of the language APIs should return a set of object IDs that match the particular diagnosis and artifact that I'm looking for (assuming I named the metadata attributes like that).

The Honeycomb system is a hardware and software stack, but the API and a simulator can be downloaded from sun.com if you want to play with it. The simulator runs in Java so you can run it just about anywhere.

No, I don't think Littleton Adventist was using Honeycomb, but medical is definitely one place that could make heavy use of this type of unstructured data interface.

There was another very cool system in place at the hospital though. All of the staff had tiny radio devices attached to themselves; they were about the size of a tag that goes on clothing at a store, or the size of one of our own Sun SPOTs. The tag emits a radio signal that allows personnel to be found wherever they are in the hospital.

I remember hypothesizing about this type of technology and how invasive it would be, but within the hospital setting it makes perfect sense. You are dealing with real-time events where you have to coordinate resources within seconds in the case of a trauma. Why not know who is closest to respond and be able to call them by name? In fact, the RNs were often called from within my room, since there are speakers and intercoms in the room. This way, the entire hospital does not have to be bothered with constant communications; instead, the communication can be localized point to point without requiring pagers or walkie-talkies on the nurses and doctors.

At any rate, I'm back in the office today and hanging around trying to do things before my follow-up appointment tomorrow.

I have to say hospitals are a hotbed of technology but, in retrospect, I would prefer to schedule a customer visit rather than have one scheduled for me :-)

Cheers!

Monday Jun 18, 2007

Emulex Joins OpenSolaris

I haven't seen an "official" looking announcement yet, but Emulex has Open Sourced the Emulex Fibre Channel Device Driver and put the .tar.bz2 of the source in the downloads section of the project. The source was actually posted June 15th!

What does this mean to OpenSolaris? Well, it continues to gain momentum as a solid, open source, storage operating system. What does it mean to one of the many, many OpenSolaris or Solaris users? Well, hopefully it means more transparency into the storage stack that you are running, if you need it. For open source developers, there is some first-rate driver code to look at and participate with.

The Emulex Driver uses the Leadville stack within OpenSolaris to plug in its functionality. I have to tell you though, I'm not a Solaris Driver guru, so let's hope Dan or Scott get down to business and do a blog for us on how the Emulex code interacts with the Leadville stack!

Nice job Emulex! Keep up the good work.

Friday Jun 08, 2007

ZFS Snapshot and Amazon S3 (Part 2 of 2)

Alright, I made you read my blather about costs and such, now it's time for some code! Well, you may be disappointed to learn there is really very little code involved in creating a ZFS snapshot, sending it to Amazon S3, and restoring the file system back to its original state after retrieving the snapshot BACK from Amazon S3.

This version of the implementation is going to take the "simple is elegant" route. I use the built-in capabilities of ZFS to take snapshots, save them, and restore them, and I couple this with some Java code to move data to and from Amazon S3 (I'm much quicker with Java than with the many other languages you can use for Amazon S3: Perl, PHP, etc...).

Basic Setup
The following was set up on my system:
  • One storage pool: zpool create media c0t1d0
  • A file system within the media storage pool: zfs create media/mypictures
  • Mountpoint change to /export: zfs set mountpoint=/export/media/mypictures media/mypictures
  • Shared over NFS: zfs set sharenfs=on media/mypictures
  • Compression turned on: zfs set compression=on media/mypictures

I copied some pictures into the /export/media/mypictures directory (enough to create an acceptable snapshot...193 Meg).

Snapshot and Store
The steps for creating the snapshot and sending it to Amazon S3 are:
  • Create a snapshot of the file system
  • Compress the snapshot and redirect it to a file
  • uuencode the compressed snapshot (encodes the binary data as printable text)
  • Send it to Amazon S3 with appropriate metadata

To create a snapshot, use the "zfs snapshot" command. I will snapshot the entire /export/media/mypictures directory and name the snapshot "20070607" with the command:

zfs snapshot media/mypictures@20070607

The snapshot should initially take up no additional space in my filesystem. As files change, the snapshot space grows, since the original blocks must be retained along with the new blocks. Still, saving the snapshot will require the full amount of space, since I am creating a "file" full of the snapshot of the data (which happens to be all of the original data). It is relatively easy to turn the snapshot itself into a stream of data: simply "send" the snapshot to a file. En route to the file, pass it through gzip for any extra compression (remember, we'll be paying for this space):

zfs send media/mypictures@20070607 | gzip > ~/backups/20070607.gz

Mileage varies with compression on snapshots.

Next, I uuencode the file to prepare it to be sent over the Internet. The uuencode process expands the file size by about 35%, so it's highly likely that any gains we made through compression are taken back out by uuencoding. Here is the uuencoding process from the command line:

uuencode 20070607.gz 20070607.gz > 20070607.gz.uu

That's it on the Solaris-side, now I can send the snapshot to Amazon S3.

I will assume that a "bucket" is already created and that we are merely sending the final, uuencoded snapshot to the Amazon S3 bucket. To be honest, I tried using Curl, Perl, and a variety of other things and I couldn't quickly get the right libraries to create the signatures and I just hate scrounging around the Internet for the right this or that and changing compilation flags and recompiling and... So, I went with the Java - REST approach.

Use the Amazon S3 Library for REST in Java. This has classes for doing all of your favorite Amazon S3 operations and was quite easy to use. You can see the whole program with the keys removed, or you can just take a look at the essential pieces of code for uploading here:


// AWSAuthConnection and S3Object come from the Amazon S3 Library for REST in Java;
// imports of java.io.* and java.util.* are assumed, as in the full program.
AWSAuthConnection conn =
    new AWSAuthConnection(awsAccessKeyId, awsSecretAccessKey);
S3Object object = null;

File snapshotFile = new File(snapshotPath);  // path to the uuencoded, gzipped snapshot (defined elsewhere, like the keys)
long length = snapshotFile.length();

try {
    // read the snapshot into a byte array (fine for modest snapshots;
    // see the heap-limit note in the issues section below)
    byte[] bytes = new byte[(int) length];
    DataInputStream in = new DataInputStream(new FileInputStream(snapshotFile));
    in.readFully(bytes);
    in.close();
    object = new S3Object(bytes, null);      // null = no S3 metadata attached
} catch (IOException ioe) {
    ioe.printStackTrace();
}

Map headers = new TreeMap();
headers.put("Content-Type", Arrays.asList(new String[] { "text/plain" }));
System.out.println(
    conn.put(bucketName, keyName, object, headers).connection.getResponseMessage()
);



Run the code and you are GOOD TO GO!

Retrieve and Restore
The process of retrieving and restoring the snapshot when you lose your data, or when you want to return to a previous point in your history, is relatively simple as well: just reverse the process above. Here is the essential Java code (using the Amazon S3 Java REST library again) for retrieving the blob from Amazon S3:


AWSAuthConnection conn =
    new AWSAuthConnection(awsAccessKeyId, awsSecretAccessKey);

System.out.println("----- getting object -----");
byte[] bytes = conn.get(bucketName, args[0], null).object.data;

// write the data to a new snapshot file; this assumes the S3 key (args[0])
// is also the local file name, e.g. 20070607.gz.uu
FileOutputStream out = new FileOutputStream(args[0]);
out.write(bytes);
out.close();

Once you have your uuencoded, gzipped snapshot back from Amazon S3, decode it and decompress the snapshot like this (the reverse of what you did before):


# uudecode 20070607.gz.uu
# gunzip 20070607.gz

Now you have to decide what to do with your snapshot. I renamed the existing mypictures file system and restored my old snapshot into its place, giving me a complete time-travel back to my snapshot. Here are the commands:


# zfs rename media/mypictures media/mypictures.old
# zfs receive media/mypictures < 20070607

That's it! Going to /export/media/mypictures will bring me to the pictures I snapshotted on June 7, 2007!

Issues with implementation
Yes, there are plenty of issues with the implementation above...I will simply call it a "reference implementation", right?
  • Amazon S3 size limitations - In the "Terms of Use", Amazon S3 specifies the following: "You may not, however, store "objects" (as described in the user documentation) that contain more than 5Gb of data, or own more than 100 "buckets" (as described in the user documentation) at any one time." As a result, one would want to slice the snapshot up appropriately so as to conform to the Amazon S3 limitations, or possibly work with Amazon on the limitation. The limitation is reasonable, though, given the practicalities of pushing very large objects over a single HTTPS request; anything bigger and you will have to "chunk" it up for storage...this will take many more than 3 lines of code, but not too much (a rough sketch follows this list).
  • Encryption - Data should be encrypted appropriately before being sent to a third-party storage site
  • Cron Job - Timely, scheduled snapshots
  • Non-Java - It would be nice to do the whole process from scripts, but I got hung up on signing the header and streaming it over Curl, so I hopped back to my native tongue (Java).
  • Heap Limits in Java - It is insane to read a 5GB file into a Java byte array, but I was being simple and quick; I think Curl would be a better solution if I could get the header signing right.
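Since the 5GB object limit and the heap issue both come up above, here is a rough sketch of the chunking idea using nothing but standard java.io streams. The 4GB chunk size, the ".partN" naming, and the notion of uploading each chunk as its own S3 object (recording the order in the key) are my assumptions, not part of the recipe above.

import java.io.*;

// Split a large (uuencoded, gzipped) snapshot into chunks that stay safely
// below the 5GB S3 object limit. Chunk size and naming are illustrative.
public class SnapshotChunker {
    private static final long CHUNK_SIZE = 4L * 1024 * 1024 * 1024;  // 4GB

    public static void main(String[] args) throws IOException {
        File snapshot = new File(args[0]);                  // e.g. 20070607.gz.uu
        InputStream in = new BufferedInputStream(new FileInputStream(snapshot));
        byte[] buffer = new byte[1024 * 1024];              // copy 1MB at a time

        int chunkIndex = 0;
        int read = in.read(buffer);
        while (read != -1) {
            File chunkFile = new File(snapshot.getName() + ".part" + chunkIndex);
            OutputStream out = new BufferedOutputStream(new FileOutputStream(chunkFile));
            long written = 0;
            // fill this chunk up to roughly CHUNK_SIZE, then move on to the next one
            while (read != -1 && written < CHUNK_SIZE) {
                out.write(buffer, 0, read);
                written += read;
                read = in.read(buffer);
            }
            out.close();
            // each chunkFile would then be uploaded as its own S3 object,
            // e.g. with a key like "20070607.gz.uu.part0"
            chunkIndex++;
        }
        in.close();
    }
}

Restoring is the reverse: download the parts in order, concatenate them, then uudecode and gunzip as shown earlier.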

Total Cost
I incurred a total cost of $0.07 to my Amazon.com Web Services account for the raw storage and bandwidth for the blog post, to be charged July 1, 2007. As I want to save a little money for my company, I will swallow the cost and call it a day (or two).

Thursday Jun 07, 2007

ZFS Snapshot and Amazon S3 (Part 1 of 2)

There are a few things on my mind these days: ZFS, Amazon S3, Project Blackbox, Why the "C" on my keyboard is sticking so much, OpenSolaris, Network Attached Storage, Storage System Blueprints, Communities, and on and on. I finally decided to sit and think about the economics of utility storage vs. rolling your own as well as whether I could create a very simple BluePrint for backing up and restoring ZFS to a storage utility, like Amazon S3. Here's a hint...ZFS to Amazon S3 is insanely simple.

Needless to say I incurred my own credit card bills to bring this to you since I figured the cost of backing up some of my pictures was going to be less than filling out an expense report :-) SO, since I burned the money, you are forced to sit through a 2 part blog post that first looks at VERY HIGH LEVEL economics of storage utilities and is followed by the "recipe" for implementing a simple ZFS snapshot to Amazon S3 backup and restore process.

Before I got too involved with the whole implementation, I needed to see what the whole process would cost me, cost is king in a start-up. This is a nowhere-near completely scientific breakdown, but it is based on how I've seen some friends do this type of thing, so it is decent. I would really suggest you do your own up front planning and roll your own costs into the mix and do comparisons to colocation services along with your own server or even some of the ISPs around that give you liberal storage quotas.

Here are the simple requirements:


  • Off-site backup location
  • 40 GB/Month Backup/Archive, maxing out and staying even at 400 GB after 10 months (continuing to upload 40 GB/Month but deleting snapshots prior to that)
  • 2 times a year download of 40GB for restore

Notice I have not discussed access rates or access latency. My assumption is that "Internet time" for access rates and latency is sufficient. The choice of 40GB a month is "relatively" arbitrary, so please, please do your own pricing (using Sun Servers of course if you are doing a comparison).


  • Amazon S3 - Pricing is located on the site and is based on bandwidth plus storage.
  • The price for each month's new storage plus upload bandwidth under my requirements is $10 (40GB at $0.15 plus 40GB at $0.10)
  • The raw storage cost increases each month until I hit 400GB ($6, $12, ... , $60 and remaining at $60/month afterwards).
  • I planned two downloads per year of 40GB which will cost me $7.20 each, $14.40 per year
  • The TOTAL cost for my 2 years of 400GB of storage should be about $1294.80.

At this point, I will give you a very important disclaimer. The cost seems relatively reasonable for a worry-free 400GB, but on some other trajectories you can exceed the cost of your own server and hosting. The cross-over point in my overly simple world seems to be around a year when you are dealing in terabyte economics. Colocation of a Sun Fire X4500 can be extremely competitive or better if you are looking at backups hitting the 24TB range. For the SmugMug architecture and storage requirements (well over the TB range), Don MacAskill does see great cost savings with Amazon S3.

Consider a simple modification: assume that the storage is capped at 400GB, but that the full 400GB is replaced each month, increasing the bandwidth consumption. This brings the entire cost for the 24-month span up to $2428.80; add another 10x for 4TB per month and we start talking real money (OK...to me, real money started at anything over $30). But raw storage and bandwidth are not the only concerns; the system administration that goes into rolling your own server and maintaining it costs money and time! The arithmetic for both scenarios is sketched below.
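Here is the arithmetic for both scenarios as a small sketch you can rerun with your own trajectory. The rates are the ones used above ($0.15/GB-month storage, $0.10/GB in, $0.18/GB out), and the 10-month ramp to 400GB matches the requirements list; treat it as a model, not a quote.

// Rough 24-month cost model for the two scenarios described above.
public class S3CostSketch {
    static final double STORAGE = 0.15, IN = 0.10, OUT = 0.18;

    public static void main(String[] args) {
        double rampTotal = 0.0;   // scenario 1: 40GB added per month, capped at 400GB
        double churnTotal = 0.0;  // scenario 2: 400GB stored and fully replaced every month

        for (int month = 1; month <= 24; month++) {
            double storedGB = Math.min(40 * month, 400);
            rampTotal  += storedGB * STORAGE + 40 * IN;   // growing storage + 40GB upload
            churnTotal += 400 * STORAGE + 400 * IN;       // flat 400GB storage + 400GB upload
        }
        double restores = 4 * 40 * OUT;                   // two 40GB restores per year, two years
        rampTotal += restores;
        churnTotal += restores;

        System.out.printf("Scenario 1 (ramp to 400GB):  $%.2f%n", rampTotal);   // ~$1294.80
        System.out.printf("Scenario 2 (400GB churned):  $%.2f%n", churnTotal);  // ~$2428.80
    }
}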

You really need to ponder these types of decisions long and hard. I am a HUGE advocate of storage utilities for the short-handed, but I am also a huge advocate of rolling your own to get your own economies of scale (did I mention that Sun provides some great solutions?). Every company is unique and has its own requirements.

There are many, many factors to consider for your own business. There are CLEAR advantages to storage utilities like Amazon S3, here are a few:


  • Pay for what you use
  • No system administration on your part
  • Single throat to choke when something is down
  • You reap the benefits of their efficiencies of scale

There are disadvantages too:


  • You are somewhat locked into your storage utility
  • Misjudging your requirements can result in high cost to you
  • You cannot create better efficiencies of scale as you reach high amounts of data capacity (lack of hierarchical pricing in Amazon S3 based on aging is a bummer)

Still, Amazon S3 is clearly useful to startups and even your own personal backups. SmugMug uses it liberally as well as a variety of other startups. We shouldn't eliminate the other storage utilities around either, there are quite a few to choose from these days.

I have about 5GB of pictures that I need to keep backed up. That backup would cost me mere dollars a month, 10s of dollars for a year...that is pretty reasonable when you consider they are not replaceable and my house may be hit by a power surge at a moment's notice.

Part 2 will be posted soon. I used the ZFS snapshot capability and a simple Java program to move data back and forth to the Storage Utility! With ZFS, combining with Amazon S3 is a piece of cake.

Thursday May 31, 2007

What is a Blueprint?

Besides being a drawing that was traditionally "blue" in color, the notion of a Blueprint in software has a decent Wikipedia definition:

Software blueprinting processes advocate containing inspirational activity (problem solving) as much as possible to the early stages of a project in the same way that the construction blueprint captures the inspirational activity of the construction architect. Following the blueprinting phase only procedural activity (following prescribed steps) is required.

Of course, Thomas Edison might argue with this, saying that "Genius is one percent inspiration and ninety-nine percent perspiration", which raises the question of why blueprint at all?

My interpretation of a "Blueprint" after much reading and many years in the "(software) architecture" field is simply: A blueprint documents the set of requirements and constraints that need to be fulfilled for a particular solution along with the materials and instructions that implementors would need to solve for the requirements consistently and repeatably. A "good" blueprint will go further to enumerate requirements that the blueprint does not address as well as the specification for materials so that an implementor could safely choose other materials and instructions than were originally specified in the blueprint.

Where blueprints often come into heated debate is in "What form does a blueprint take?" as well as "How deep should the requirements be?". There really is not a simple answer here. In physical construction, the blueprint is a time-honored carbon-like thing with strict rules around elevations and material specifications...I remember this from my 11th grade architecture class and lots of head scratching when I have tried to build out a basement. For software and systems, the answer becomes more complex and is often determined by the tier or layer of the application for which the architect is building the blueprint, as well as the intended audience for the blueprint. A GUI application may have wireframes and look a lot like a physical construction diagram, with border widths and behaviors of the areas of the form. A model or data access tier will likely have table specifications and/or UML modeling involved. A "system" or a "server" (single box) will have a detailed schematic, much like a building architect's blueprint.

Things become more complex with blueprints for solutions that incorporate multiple audiences and types of components (software, systems, cables, cards, etc...). A blueprint for a SAN or a Storage Server would have deployment diagrams, hardware component specifications and software specifications involved.

The blueprint documentation itself must spend considerable time on the requirements and specifications for which the blueprint was constructed and will, hopefully, create a linkage between the requirements and specifications and the materials and instructions used to construct the solution. This documentation helps a user determine if the blueprint meets their needs and where they should concentrate on modifying the blueprint if it is a close match.

For example, I may build a system blueprint for a storage grid built with off-the-shelf servers. In the requirements and specifications, I concentrate on a few simple requirements:


  • I must be able to add capacity without downtime, from 100MB to 1 zettabyte
  • All storage must be available in a single namespace
  • All storage is represented as files
  • A storage node must be able to fail in place

I now give you a blueprint for this solution that gives you information about the hardware to use, the software to use, the cabling, the configuration, and more. It looks fantastic.

You easily reproduce the solution from the blueprint. BUT, the result is that you get 1MB/sec throughput with a 5 second latency on a file lookup. This would be the equivalent of giving you instructions to build a house, but not telling you the house is 5" tall with a total of two square feet of space. Ouch. 1% inspiration, 99% perspiration.

Check out some of Sun's Blueprints; they exist for software, hardware, and complete solutions. Maybe they'll solve some of your needs. Further, with a little collaboration, maybe we can make the blueprints better by being broader or narrower...or simply more complete, with better specifications in areas that aid you in selecting different components or meeting different requirements.

Here are some locations at sun.com where you can find a few interesting blueprints:


  • Sun BluePrints Program - Self-described as "Technical best practices, derived from the real-world experience of Sun experts, for addressing a well-defined problem with Sun Solutions."
  • Ajax BluePrints

Finally, a Java Blueprint Community is available on java.net

Friday May 25, 2007

Storage Remote Monitoring...got that...

One of my many projects is to tackle the product-side architecture for Remote Monitoring of our storage systems. Remote Monitoring is a fascinating problem to solve for many, many reasons:


  • There are different ways to break the problem up, each being pursued with almost religious fanaticism, but each having its place depending on the customer's needs
  • It is a cross-organizational solution (at least within Sun)
  • It has a classic separation of responsibilities in its architecture
  • It solves real problems for customers and for our own company
  • It is conceptually simple, yet extremely difficult to get right

The problem at hand was to create a built-in remote monitoring solution for our midrange storage systems. Our NAS Product Family and anything being managed by our Common Array Manager were a good start. Our CAM software alone covers our Sun StorageTek 6130, 6140, 6540, 2530, and 2540 Arrays. Our high-end storage already has a level of remote monitoring, and we already have a solution for remote monitoring of "groups" of systems via a service appliance, so our solution was targeted directly at monitoring individual systems with a built-in approach.

This remote monitoring solution is focused on providing you with a valuable service: "Auto Service Request" (ASR). The Remote Monitoring Web Site has a great definition of ASR: "Uses fault telemetry to automatically initiate a service request and begin the problem resolution process as soon as a problem occurs." This focus lets us trim the information being sent to Sun down to faults, and it also gives you a particular value...it tightens up the service pipeline to get you what you need in a timely manner.

For example, if a serious fault occurs in your system (one that would typically involve Sun Services), we will have a case generated for you within a few minutes...typically less than 15.

The information flow with the "built-in" Remote Monitoring is only towards Sun Microsystems (we heard you on security!). If you, the customer, want to work with us remotely to resolve the problem, a second solution known as Shared Shell is in place. With Shared Shell, you and Sun collaborate directly to resolve problems.

Remember though, I'm an engineer, so let's get back to the problem...building Remote Monitoring.

The solution is a classic separation of concerns. Here are the major architectural components:


  • REST-XML API
  • HTTPS protocol for connectivity
  • Security (user-based and non-repudiation) via Authentication and Public / Private Key Pairs
  • Information Producer (the product installed at the customer site)
  • Information Consumer (the service information processor that turns events into cases)
  • Routing Infrastructure

The REST-XML API gives us a common information model that abstracts away implementation details yet gives all of the organizations involved in information production and consumption a common language. The relatively tight XML Schema also gives the product an easily testable output without having to actually deliver telemetry in the early stages of implementation. Further, the backend can easily mock up messages to test its implementation without a product being involved. Early in the implementation we cranked out a set of messages that were common to some of the arrays and sent them to the programmers on the back end; the teams then worked independently on their implementations. When we brought the teams back together, things went off without much of a hiccup, though we did find places where the XML Schema was too tight or too loose for one of the parties, so you do still have to talk. The format also helps us bring teams on board quickly...give them an XSD and tell them to come back later.

Here is an example of a message (real data removed...). Keep in mind there are multiple layers of security to protect this information from prying eyes. We've kept the data to a minimum, just the data we need to help us determine if a case needs to be created and what parts we probably need to ship out:


<?xml version="1.0" encoding="UTF-8"?>
<message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="message.xsd">
  <site-id>paul</site-id>
  <message-uuid>uuid:xxxxx</message-uuid>
  <message-time timezone="America/Denver">2005-11-22T12:10:11</message-time>
  <system-id>SERIAL</system-id>
  <asset-id>UNIQUENUMBER</asset-id>
  <product-id>uniqueproductnumber</product-id>
  <product-name>Sun StorageTek 6130</product-name>
  <event>
    <primary-event-information>
      <message-id>STK-8000-5H</message-id>
      <event-uuid>uuid:00001111</event-uuid>
      <event-time timezone="America/Denver">2005-11-22T12:10:11</event-time>
      <severity>Critical</severity>
      <component>
        <hardware-component>
          <name>RAID</name>
        </hardware-component>
      </component>
      <summary>Critical: Controller 0 write-cache is disabled</summary>
      <description>Ctlr 0 Battery Pack Low Power</description>
    </primary-event-information>
  </event>
</message>

Use of XML gives us the ability to be very tight with the use of tags and to enforce particular values, like severity, across the product lines.
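To make that enforcement concrete, here is a minimal sketch (not our actual tooling) of how a product team could check a mocked-up message against the shared schema with the standard javax.xml.validation API. The file names "message.xsd" and "sample-message.xml" are placeholders that simply mirror the example above.

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class MessageValidator {
    public static void main(String[] args) throws Exception {
        // Load the shared contract; "message.xsd" matches the
        // noNamespaceSchemaLocation in the sample message above
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("message.xsd"));

        // Validate a mocked-up message; a SAXException here means the
        // product's output (or the mock) violates the agreed-upon schema,
        // for example a severity value outside the allowed set
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("sample-message.xml")));
        System.out.println("Message conforms to the shared schema");
    }
}

Either side of the pipeline can run a check like this against canned messages, which is part of what made the independent, parallel implementations described earlier practical.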

The XML message format above is heavily influenced by our Fault Management Architecture, though an FMA implementation is not required.

What we've found is that good diagnostics on a device (and FMA helps with this) yields a quick assembly of the information we need and fewer events that are not directly translated into cases. FMA and "self-healing" provide an exceptional foundation for remote monitoring with a heavy reduction in "noise".

The rest of the architecture (the services that produce, consume, secure, and transport the information) is handed off to the implementors! The product figures out how to do diagnostics and output the XML via HTTPS to services at Sun Microsystems. Another team deploys services in the data center for security and registration (there are additional XML formats, authentication capabilities, and POST headers for this part of the workflow). Another team deploys a service to receive the telemetry, check the signature on the telemetry for non-repudiation purposes, process it, filter it, and create a case.
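For the producer side, the mechanics are about as plain as they sound: open an HTTPS connection and POST the XML document. The sketch below is only illustrative; the endpoint URL, file name, and header are placeholders rather than the real service interface, and the actual product also handles the registration, authentication, and signing steps mentioned above before anything is sent.

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class TelemetrySender {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; the real receiving service is not shown here
        URL url = new URL("https://telemetry.example.com/asr/messages");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");

        // Stream the (already validated) message document as the request body
        InputStream in = new FileInputStream("sample-message.xml");
        OutputStream out = conn.getOutputStream();
        byte[] buf = new byte[4096];
        int len;
        while ((len = in.read(buf)) != -1) {
            out.write(buf, 0, len);
        }
        out.close();
        in.close();

        // A 2xx response means the consumer accepted the telemetry for processing
        System.out.println("Service responded: " + conn.getResponseCode());
    }
}

On the other end, the receiving service verifies the signature, maps the message-id, and decides whether a case should be opened.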

There are additional steps that each product needs to go through, such as communicating across organizations the actual message-ids that a device can send and what should happen if that message-id is received.

In the end, the centerpiece of the architecture is the information and the language that all teams communicate with. Isn't this the case with any good architecture? Choose the interfaces and the implementations will follow.

Keep in mind, this remote monitoring solution is secure end to end. Further, remote monitoring is only one piece of the broader services portfolio...I'm just particularly excited about this since I was privileged to have worked with a great, cross-organizational team to get it done! The team included Mike Monahan (who KICKS BUTT), Wayne Seltzer, Bill Masters, Todd Sherman, Mark Vetter, Jim Kremer, Pat Ryan and many others (I hope I didn't forget anyone). There are also lots of folks who were pivotal in getting this done that we lost along the way (Kathy MacDougall, I hope you are doing well, as well as Mike Harding!).

This post has been a long time in coming! Enjoy!
