Sunday Jul 19, 2009

Social Networks Connect, they don't Replace

July 17th, I woke up and watched my friend's Facebook page turn into a memorial to the great and loving husband that he was. I watched my other friend's page turn into an outpouring of friendship, stunned sadness, and finally...offerings to join her IRL. The effectiveness of a social network for connecting is obvious; the depth of those connections is a bit more mysterious. In the background of all that connecting, a more human network is coalescing ... one made of phone conversations, emails to coordinate activities, and finally, traveling to build a village around my friend.

The whole episode made me think again about where social networks ... the online variety ... fit in life.

What good are 100 'friends' on a social network if even one of those friends inhibits you from being yourself and really connecting to the people that know you and want to be there for you?

Intertwined social networks become emotional nightmares as people reach out to friends in coded messages while trying to maintain professional coverings. Wrecked relationships, depression, friending and unfriending, bad days ... all of this conducted on tiptoes because peers, bosses, and coworkers are watching over your moves, your playlists, your pictures.

And now I go back to my friend who remains my social network friend but is no longer on our little blue planet.

157 friends on Facebook, and the only reason this day is anything but an empty mess is because I have the luck of surfing in Hawaii with my boy ... who doesn't even have an fb account ... and an old (well, she's young ...) friend from Minnesota who delivered her phone number as soon as she connected the dots on the subtly coded status message.

So, here's the thing ... Social networks connect ... But they don't replace ... They don't even give insight into where a person's soul is unless that person is willing to risk their social standing at work or was smart enough to keep their work and life networks separate...after all, does your boss really need to know that you're depressed, let alone the reason ...

So...the time has come for me...time to tighten up the social network so my friends can be my friends and let it all hang out ...time to have a beer with a friend IRL ...and time to re-grow that LinkedIn network for the professional side of life.

And ... For my friend ... I'm sorry we didn't have more time here ... Perhaps we will have a chance somewhere else but if we don't ... Well ...

:-(

Friday Jul 03, 2009

The thing about trees ... and my Kindle DX

I can summarize my thoughts on trees pretty succinctly: I love a good tree. And with that in mind, I finally made the leap and bought my Amazon Kindle DX.

I can safely say that I am holding onto a little bit of the future; it is incredible. The things that drove me over the top to get the Kindle are:


  • I am VERY schizophrenic in my reading enjoyment, sometimes having as many as 4 books open for reading, 2 or 3 technical documents, and several large PDFs full of APIs
  • I received The Denver Post Thursdays through Sundays and didn't read it most of those days ... but never knew when I wanted it
  • I go through spurts of travel and I ALWAYS have the wrong book with me ... my moods swing notoriously when I'm traveling and what I want to read or learn about swings with it
  • I get tired of lugging power cords everywhere

And so my great Kindle DX experiment began, just about Father's Day (my gift to myself for being a single Dad with little or no time for book enjoyment).

First off, it is very, very easy to spend money with it ... although there is a discount on most books that you purchase for the Kindle. Oddly, this doesn't hold true for most computer books; they seem to be full price or near full price across the board. I've already purchased and read several books, including the amazing Darwin's Radio and Darwin's Children from Greg Bear (conveniently in one "book" for the Kindle) and am now working on The Road from Cormac McCarthy and Anathem from Neal Stephenson. All bought, literally, from the comfort of my bed or while camping. I have also downloaded quite a few work PDFs and presentations to it and receive The Denver Post electronically each day, automatically delivered.

Here's a hint on Anathem: I bet my back doesn't hurt from lugging it around and yours does :-) ... and guess what, I have a built-in dictionary, so I can just move the cursor to a word and the definition appears at the bottom of the screen.

The downside of newspapers on the Kindle is that they are difficult to read and don't include the two best parts ... the comics and the ads. As such, I still receive a Sunday Denver Post ... but my recycling bin has definitely gone down in size.

When folks ask me about it, I put my Kindle experience rather succinctly:


  • Book Reading (fiction / non-fiction): A+ (as easy on the eyes as paper, easy to navigate, etc...)
  • Reference Literature that you Skip Around in: C (VERY difficult to skip around a book and "leaf through" the pages)
  • Newspapers: B-
  • Ease of Use: B+
  • Ease of Shopping: B (I usually shop in a web browser rather than the built-in store, but if you know what you are buying, the built-in store on the Kindle is easy enough)
  • Cost: C- (this has GOT to come down and I completely object to paying for blog subscriptions ... sorry Amazon and Slashdot, ain't gonna pad your profit margins)
  • Ease on the Eyes: A+ (as good as paper and my eyes don't get all wobbly like on a computer...I was literally reading under the covers so as to not annoy my son at Scout Camp...and it's not backlit, so I had my flashlight with me, it was just like when I was a kid!)
  • Portability: A+ (I have cut down the size of my backpack by half ... I still have to carry my Mac)

The power lasts FOREVER on the thing. Even with 3G on I plug it in at most once a week.

Now here is the BIGGEST complaint I have. Flying to San Francisco, the crew SPECIFICALLY points out that your Kindle must be turned off. People, it barely uses power. Might I suggest you just ask folks to turn off the 3G on the Kindle instead?

I have even put all of our docs for the Sun Storage 7000 Appliance onto it for quick access at customer black or gray sites. Very handy to have along.

Now here is a tip for those of us who carry these around: you will have people tell you straight out that you are a heretic for abandoning "the feel of paper" and contributing to the "demise of the book" (Auntie, if you are reading this...this is for you :-). It is SIMPLY not true. In fact, there are a few things about e-books that actually seem to work in the author's and reader's favor:


  • There is no after-market for a Kindle book; this is frustrating to me, but as long as most books are discounted I am willing to deal with it for now. Consider, though: I can't resell a book that is on my e-reader, and I understand that this is a slippery slope to create.
  • Personally, books, PDFs, and stuff like that contribute to about 30% of the clutter in my house and frankly, I'm not one to enjoy having bookshelves of the things I've read contributing to the dust in my environment...they are pretty much g-o-n-e
  • I've already saved multiple trees through e-news delivery and Anathem

In the end, it's not for everyone. But the demise of the paper novel is nigh, my friends. It may not happen in my generation, but I guarantee that my son will read at least 50% e-books in his high school days, 75% by college, and his children will not buy paper books, end of story.

I notice this post is exceeding the expected length, so drop me a line if there is anything else you want to know about!

Sunday Feb 08, 2009

Heroes Included at the OpenSolaris Storage Summit!!!!!!

I was just browsing the agenda for the OpenSolaris Storage Summit that is being held on February 23rd 2009 in San Francisco (city with that crooked street ya know).

It turns out one of the big talks is going to be from Don MacAskill of SmugMug ... and I kid you not, and I'm not doing any schmoozing here, but it is my favorite favorite photo sharing site, so I've already signed up. Also on the list are Mike Shapiro and Randy Bias from GoGrid. What do storage and OpenSolaris have to do with the cloud????

It turns out, everything in the cloud has to end up SOMEWHERE, and if it's not using the cloud itself as a storage utility (via the likes of Amazon S3, GoGrid and others), the cloud applications themselves use a boatload of storage. The Sun Storage 7000 family and the Sun Open Storage J4000 family coupled with OpenSolaris are definitely the way to go for infrastructure.

Doh, I'm totally geeked and off to the travel site to figure out how to book my trip and get my kiddo taken care of for the day (though as you can see...he loves San Francisco).

Thursday May 29, 2008

Preservation and Archive Special Interest Group (PASIG)

I am in San Francisco at Sun's Preservation and Archive Special Interest Group (PASIG) Face to Face meeting. Preservation and Archiving is one of the most fascinating and complex problems to face our digital world, in my humble opinion. I've chatted about this before in the blog so I won't belabor the point that taking a chisel to the walls of this beautiful location and etching in the contents of your e-book would have a longer life span than most of our media types in use today.

Still, chiseling has some pretty serious downsides:


  • It doesn't scale well (oh, and you have to scale to get up the wall)
  • You have to travel to this site to view the contents (while that might be pleasant, it can be quite inefficient)
  • It's difficult to create a geographically remote backup site
  • It seriously blemishes a beautiful location

It turns out, there is a community of people who think about this problem as their day job, and PASIG is a gathering point for that community. The community is made up of the "new" digital librarians, the software providers whose products form the foundation for digital preservation and archives, storage vendors that can provide "temporal" storage locations for the digital archive, and system integrators that can customize the entire stack and tailor it for a particular domain (astronomy, human sciences, etc...).

The community is global, but there is an obvious disparity in how this type of work is funded across countries and boundaries. Here in the U.S., the leaders of preservation and archiving efforts appear to be the education community, with a variety of colleges (Penn State, University of Pittsburgh (Pennsylvania), etc...) and individual government agencies (Library of Congress, National Archives, etc...). Abroad, it seems the repositories are often nationally funded.

While various technologies were interesting, it was also interesting to hear the oft-recurring theme of "sustainability"...how can we "sustain" the rate of ingest and the cost over time of these types of solutions? Consider the cost of the initial storage of 1 GB of data today (just the storage medium): it's between $0.25 and $1.00 depending on the medium. Now add in the cost of the various transitions that data makes over 20,000 years (the life of a cave painting): the system administration, the building of new equipment, the power involved, etc... At this stage of human evolution, it is difficult to even predict what type of media we will store data on in 20 or 30 years. How can one predict the cost of sustaining these archives when we won't know what the archive will look like, let alone the environment around it, in 20 years, 100 years, or even 1,000 years?

Which brings up another interesting thread...whereas a cave painting is required to stay on the rock it originated on, preserved (and archived) digital data must be assumed to migrate across the data generations. Consider each generation of media to be about 4 years (3-5 is common); tape can technically survive multiple generations, and one can build systems (like the Sun Storage 5800 (Honeycomb) platform) that can theoretically store data for 100,000s of years. But the reality of technology and media lifetimes is that you migrate approximately every 3-5 years.

This is a conceptually different model than many people think of when they think of an "archive" or a "storage vault". Rather than think of a monolithic system that weathers the years, it's important to think of data as migratory and evolutionary (much like humans themselves). To survive 30,000 years, our digital data originates...often in the physical world...but increasingly in the digital world itself...and migrates to a Tier 1 storage device. Regardless of whether it remains there, over the next 20 years that data will move across 4 or more devices (not counting replicas or backups) if it stays "online".

So what are the three most important attributes of an archive and/or a preservation system (IMHO)?


  • The information is safe, stable, and can be verified to be "original" while it resides temporarily within the system (see the fixity sketch after this list)
  • The information can be moved off the system when the time is right and can still be certified as "original" data rather than a derivation of the data
  • Information can be "moved into" a new preservation / archive system.
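
To make that "verified to be original" idea a little more concrete, here is a minimal fixity-check sketch of the sort a repository might run: compute a digest of each object at ingest, store it as metadata, and recompute it after every migration to certify the bits are unchanged. This is purely illustrative Java (the class name and file arguments are mine, not from any particular preservation product):

import java.io.FileInputStream;
import java.security.MessageDigest;

public class FixityCheck {
    // Compute a SHA-256 digest of a file. Store this at ingest time and
    // recompute it after each migration; matching digests certify the
    // data is still the "original" rather than a derivation.
    static String sha256(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (FileInputStream in = new FileInputStream(path)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // args[0] = the copy as ingested, args[1] = the copy after migration
        String before = sha256(args[0]);
        String after = sha256(args[1]);
        System.out.println(before.equals(after) ? "fixity OK" : "FIXITY MISMATCH");
    }
}

Real repositories layer much more on top of this (multiple digest algorithms, audit logs, scheduled re-checks), but the core idea is this comparison carried across every device the data ever lives on.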

In the end, long-term data is migratory...and our systems and software that support this field must start thinking of data that way. This type of thinking can also be applied backwards into more traditional storage paradigms: how useful would it be for a photo repository to be able to migrate to bigger and bigger systems with higher capacity and lower power consumption? At some point, there is a trade-off that will trigger that company to move, and facilitating that move is in the best interest of all companies.

What else came up at the conference? Tons of things. Have we thought enough about:


  • Access Rights (and access across time)
  • Roles and Responsibilities in a digital archiving and library maintenance system (the new librarians)
  • Geo-political boundaries as repositories move into the "cloud"

At this very minute, Sam Gustman is discussing The Shoah Foundation. The mission of The Shoah Foundation is "to overcome prejudice, intolerance and bigotry - and the suffering they cause - through the educational use of the Institute's visual history testimonies." The documentation of The Holocaust contains over 120,000 hours of online video testimony. The foundation indexes this testimony minute by minute and makes it accessible for people to teach and understand the impacts of The Holocaust. The foundation moves beyond The Holocaust into testimonies of other events in human history that must be recorded and understood. As George Santayana said: "Those who do not learn from history are doomed to repeat it."

Sam's presentation isn't online; I'm trying to get Art to post it. This work alone shows that digital archiving and preservation is not just an interesting segment of compute and storage...it's a moral imperative.

I found this link to a video that discusses Fedora Commons, one of the open source repository software initiatives for creating and managing digital libraries and archives. The video discusses some of the solutions that are being built with the Fedora Commons software.

What a great opportunity the conference has been to sit and think about the big picture (and archiving and preservation truly are the big picture).

Monday Apr 14, 2008

Power Consumption of Hanging Clothes out to Dry

I had a section to do for Kai to become a Wolf in Cub Scouts...luckily, the pack has decided to go out and do some neighborhood cleanup. Still, I was interested in completing my fact finding mission anyway. My goal was to create a set of information on how "little things" add up to "big things" (remember "think globally, act locally").

There is a lot to choose from in my life; I figured I would take things one by one and see if a little "group think" can help me fix any logic problems. After the Jack Johnson Curious George soundtrack song, I figured I would categorize into:


  • Reduce
  • Reuse
  • Recycle

And I would take a few simple tasks a child could complete with their parents:


  • Encourage your parents to hang out 2 loads of clothes a week
  • Stop using 5 plastic bags a week
  • Use refillable water bottles instead of purchasing bottles of water
  • Walk to school

I haven't dug up all of the facts yet, but I'm committing to it here on the blog!

First task: Encourage your parents to hang out 2 loads of clothes a week.

(Yes, that's one of the loads I hung out today)

I have an LG Tromm Dryer (courtesy of a good quarter or two here at Sun Microsystems). It runs at 240 Volts / 30 Amps, ouch. That's about 7,200 watts, and a load of laundry takes about an hour to dry, so call it 7,200 watt-hours per load. I've committed to hang out 2 loads of laundry per week: 14,400 watt-hours per week * 52 weeks = 748,800 watt-hours saved per year.

I was browsing around the web trying to find the conversion and I found the Ask a Scientist web site, which said 1 gram of coal could power a 60-watt light bulb for 550 seconds (let's do some rounding and just say 9 minutes). Now we have to do some normalization.

A 60-watt light bulb at 1 gram per 9 minutes works out to about 6.67 grams of coal per hour.

My two loads of laundry actually use 14,400 watt-hours per week (that's two hours of dryer time, or 240 hours of that 60-watt bulb), for about 1,600 grams of coal per week * 52 weeks = 83,200 grams, which, for my fellow U.S. citizens, is roughly 182 pounds.

Another site estimated roughly 1 kilowatt-hour generated for each pound of coal, which would bring us to about 748.8 pounds instead of 182. The first calculation assumes an ideal world; the latter apparently does not.

Doesn't seem like very much, does it? I'll use the 182 figure to be conservative. Each of those pounds of coal produces about 3.7 pounds of CO2, so on the conservative chart I am stopping about 673 pounds of CO2 from going into the atmosphere.

That's quite a bit, but is it worth the trouble?

My little suburb has about 80,000 folks in it; let's say there are 20,000 homes and each one of those families could save 673 pounds of CO2 from going into the atmosphere. That would be about 13,460,000 pounds of CO2 that does not enter our atmosphere each year.

Is that interesting yet? Probably not in the context of all of the CO2 created in a year, but I think that is starting to get interesting.

For a child, what is 13,460,000 pounds?

I saw on Yahoo Answers that a Pontiac Vibe is approximately (and conveniently) 2,700 pounds.

(Image borrowed from Edmunds)

My little suburb of Highlands Ranch, if each family would hang 2 loads of wash out per week rather than dry them, would keep the weight of about 4,985 Pontiac Vibes in CO2 from entering our atmosphere and hanging over us each year. Imagine what a country like the U.S. could do by hanging its clothes out to dry.
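
For anyone who wants to check the arithmetic (or plug in their own dryer), here is the whole chain in a few lines of Java. The constants are just the assumptions stated above: a 7,200-watt dryer, one hour per load, two loads per week, 1 gram of coal per 550 seconds of a 60-watt bulb, 3.7 pounds of CO2 per pound of coal, 20,000 homes, and a 2,700-pound Pontiac Vibe. Small differences from the numbers in the post are rounding.

public class ClotheslineMath {
    public static void main(String[] args) {
        double dryerWatts = 240 * 30;                         // 7,200 W dryer
        double hoursPerYear = 1.0 * 2 * 52;                   // 1 hr/load, 2 loads/week, 52 weeks
        double wattHoursPerYear = dryerWatts * hoursPerYear;  // ~748,800 Wh

        double gramsCoalPerBulbHour = 3600.0 / 550.0;         // ~6.5 g of coal per 60 W bulb-hour
        double bulbHoursPerYear = wattHoursPerYear / 60.0;
        double poundsCoalPerHome = bulbHoursPerYear * gramsCoalPerBulbHour / 453.6;

        double poundsCO2PerHome = poundsCoalPerHome * 3.7;
        double neighborhoodCO2 = poundsCO2PerHome * 20000;    // 20,000 homes
        double vibes = neighborhoodCO2 / 2700;                // 2,700 lb per Pontiac Vibe

        System.out.printf("Per home: %.0f lb coal, %.0f lb CO2 per year%n",
                poundsCoalPerHome, poundsCO2PerHome);
        System.out.printf("Neighborhood: %.0f lb CO2, about %.0f Pontiac Vibes%n",
                neighborhoodCO2, vibes);
    }
}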

If you see any MORE logic errors, please let me know and I'll keep track and fix them.

Note: I was off by a power of 10 (sloppy night time work) and my dryer was 240 volts. Michael Lyle sent the correction and also noted that I should use an ammeter to get an actual measurement. Stay tuned!

Tuesday Oct 16, 2007

Rockies in the Series - Whoo hoo!

It was a LONG night last night at Game 4 of the National League Championship Series (NLCS). BUT, it was worth it: the Rockies swept the D-Backs to go to their first World Series ever! We compiled a TON of pictures, here are a few:

First pitch!

Last out!

The end of game scrum!

The Rockies pulled this off with a payroll in the bottom 1/3 of the league (along with the Arizona Diamondbacks and Cleveland Indians). Only the Boston Red Sox, a game behind in their own series, are at the top of the league in terms of spending.

Pretty amazing to think the Rockies were one strike from wrapping up the season a few weeks ago.

I guess the lessons are:


  • Money does not equal success
  • It takes a team to succeed
  • The game is not over until the last out
  • The season is not over until you are at home watching the World Series

Yes, the photos are a bit blurry...we were above the purple seat line (indicating the 1-mile elevation mark), so yeah...could have been better seats, but it was worth it!

Go Rockies!

Tuesday Sep 25, 2007

Taking a few mins...

Sometimes you have to take a few minutes and chalk up one of those things you have always wanted to do. The interesting thing about digital cameras (combined with Google) is that they give you the ability to "compress" the learning cycle. Something that would have taken a person months and a ton of money to learn now takes a week and a good memory card (oh, and lots of hard drive space). Of course, it is probably not as gratifying, but it gives our society the ability to catch up with our predecessors and continue to build...standing on the shoulders of giants.

With that, here is a shot of tonight's full moon (I missed the picture of the werewolf in my backyard...sorry).

The shot was done with a Canon Rebel XTi with the following:


  • F5.6 Aperture
  • 1/250sec Shutter Speed
  • ISO 100
  • Focal Length 300mm

Enjoy :-)

Wednesday Sep 12, 2007

Pneumothorax Recovery - 2 Month Recovery Diary

Just to recap: 4 spontaneous pneumothoraxes in under a month (frequent enough that I called them "not so spontaneous pneumothoraxes"), 2 thoracoscopies (surgeries that go in and staple your lung then permanently "glue" it to your chest cavity), 6 chest tubes, 4 hospital stays, and 2 sessions of "aspirating the air and liquid" under a CAT scan machine. It's been about 2 months since my last surgery and, try as I might, I haven't found ANY decent descriptions of recovery periods on the web.

So, I'm going to deviate from my normal "blog tenor" to document some of my own recovery. To start with, though, a few notes:


  • Everyone is different, listen to your Doctor and your body
  • My case appears to be much worse than the normal pneumothorax, so what I say may be worse than what you are going through
  • I simply cannot understand why someone would choose to wreck their lungs with smoking or any other lung-destroying activity...

So, two months along, I just came from a checkup with Dr. Hofer. I still have 200cc's of fluid (approximately) in the bottom part of my chest cavity. The fluid comes from the irritation and scarring of the lung to the wall of the chest cavity when they put in the sterile talc...gross huh? I've been very, very careful over the last couple of months...I have 8 obvious scars along my side from the procedures and innumerable other holes from needles from the aspiration sessions. My first 3 weeks after the second surgery were completely miserable. Lots of fevers, very uncomfortable lying down, and a favorite chair to lie in. As of right now, the fevers are long gone, and I got myself off pain medication as quickly as I could and onto Advil. After about 3 weeks I was off the Advil completely, and by now I've been Advil-free for quite some time. Discomfort still reigns; I've switched from sleeping on my back prior to the surgeries to sleeping face down...the weeks after the surgery I slept on my back but propped up on pillows. If I do one of those "half runs" where you are late to school to pick up the kids so you sprint out, I have to stop in about a block because it feels like someone is squeezing the bottom part of my lung, yech.

Around the one-month mark I went out for a bike ride, 7 miles total. I'm not a biker at all, so it was a change of pace. I went on a road bike but switched to my mountain bike for the next time out. I am usually a jogger/runner, but the biking was lower impact and the mountain bike even lower. My first few bike sessions I came home with a "stitch" in my side and I had to do some serious sleeping. In all, I've done about four 7-mile sessions, a 20-mile session, and a couple of 12-mile sessions. The bike is where it's at for athletic recovery in the first few months...I'll tell you that.

I have seriously thought about giving up running altogether, but for me it is a nice release. Further, to do an apples-to-apples comparison on my recovery, I figured I had better do some running. Dr. Hofer gave me the thumbs up for running today, so I was off to the races. To give you some idea of where I'm at: prior to the first thoracoscopy, my "inspired volume" for my lung capacity was around 4,250 mL. Now I'm around 3,000, peaking at 3,250. I am also seriously out of shape. My weight also dropped precipitously to below 135; I was 142 before the surgeries and now have a goal to push my weight to a more healthy 155 or so. For those of you who do not know this, skinny does not equal healthy and skinny does not equal in shape. My goals for my first run were pretty simple:
- Jog over a block
- Keep my heart rate down in my normal running range (approximately 155) and go as slow as I have to for maintaining it
- Stop if I hurt at all
- Don't push it

To be honest, the recovery is going to be as much mental as physical. I've turned into a complete lung hypochondriac...

It turns out I did FAR better than I had hoped:
- I jogged about 5k (a little more). I promised I wouldn't time myself but to give you a ballpark, my Bolder Boulder time (my personal best) was 10k in 42 minutes...I ran my 5k today in about 35 or so minutes...so not fast, but steady
- My heart rate spiked towards the end to 172...ouch, I pulled up at that point and walked. The recovery period was very long too. A high heart rate probably reflects both my lower lung capacity and my being so out of shape
- I didn't hurt. After the first block or so, that eerie feeling of someone being inside your chest squeezing part of your lung was basically gone...for the first time in months! It did feel creepy on my right side still, but the glued-up lung is supposed to not be felt at all. Dr. Hofer mentioned the nerves are pretty messed up on my right side, so these feelings are to be expected. Given my rather long hospitalization, he is saying about a year to be completely back to normal, "nerve"-wise.
- I didn't push it, 5k was good and not too much

And that, my Spontaneous Pneumothorax buddies, is the 2 month recovery diary. There are more people than you think who have been through this, so don't be shy about talking to people. As far as I can tell, everyone has recovered fully, even with the glued lung!!!!! Feel free to email me if you want to know some more about what I'm going through to help yourself, or if you want to sponsor my recovery (I'M JOKING).

Friday Aug 17, 2007

Happy Birthday to the CD...not many left!

A shout out to the birthday...thing this weekend, the Compact Disc (CD), congratulations on your 25th birthday! Ahhh, I have plenty of reminiscing about vinyl on my blog as I rip my vinyl to my hard drive and then, yes, burn the MP3s onto CD to play on my car stereo (when I don't have my iPod with me).

My first CD? Well, I believe it was 10,000 Maniacs In My Tribe, purchased while I was in college at Winona State University, circa 1990 I believe...it's so hard to remember. By the time I was done with my undergrad I had all of the essentials (Dire Straits Brothers in Arms, Cowboy Junkies The Trinity Session, a Pink Floyd or two, etc...).

My last vinyl album (to date) was Son Volt - Trace, I still have it unopened, the CD was good enough at the time.

The CD is one of those ubiquitous technologies. Consider how far its usage has crept: from ABBA and Dire Straits recordings to a backup technology. The CD squashed the use of cassette tapes. CDs don't get warped and all funky when you set them on your dashboard or toss them into the back window of your car (at least...not very often). The record industry certainly didn't like cassettes, but the lifetime of a cassette was about 1 year in my hands, and the quality of a cassette recording made from vinyl was lacking at best.

Once CDs became home recordable, look out record industry!

But think about that transition. When the recording industry went from analog to digital, it made the transition from (basically) an infinite number of data points to recreate the music to a finite number of bits. With the CD, you could actually treat music as little digital pieces, what a revolution. Once home media moved to digital, the flood gates opened (though it did take a while) to migrating that content to other media and leveraging growing CPU power and bandwidth to move it around. Time and time again the record industry has been caught off guard and fearful of the natural evolution of media and transmission that CDs ushered in.

As the price of recordable CDs dropped, diskettes were challenged to deliver more capacity for computing, but the drop-off in diskette sales was inevitable. CDs became too cheap, too fast, and ushered in massive recording capabilities for data (not just music).

Today, I rarely use the original CD in my life. I buy them, rip them to a portable format, then burn them back onto recordable CDs that I can use in my car, and put them on my iPod. I use CDs to back data up and go through them like crazy when I'm building boot media for my Solaris Nevada builds.

This next part is for the ears of the CD only:

Unfortunately, I must also declare the coming death of you, the Compact Disc. You have certainly aged well, I won't deny that. You'll be around for a few more years but you are starting to remind me a lot of that 3 1/2" diskette format that held on for dear life trying to cram a few more K onto it. Frankly, my USB keys and the network are replacing you faster than you can say the word DVD...heck, the DVD format wars have barely finished and they are already being replaced.

Heck, it's hard to even find you at conventions anymore; USB keys have all but replaced you. Maybe you didn't notice, but AOL hasn't even delivered me a CD in YEARS (my last mailing was a DVD if I remember right, and even that was about 2 years ago).

Now, if one of these storage utilities would host some boot images and tell me how to boot my laptop from it (after a small stub loads my wireless driver), I would be DONE with media...buh bye.

Still, I will keep a spindle of you around at all times, at least for the next two years until I replace my car stereo with one that has a USB jack (HELLO CAR STEREO MAKERS...WHAT THE HECK?) or an iPod with Bluetooth and a compatible car stereo Bluetooth interface.

Wednesday Aug 15, 2007

Wikis Solve World Hunger (and make coffee)

As you know, Sun launched wikis.sun.com earlier this month. I even have a public space I'm working on for Storage System Patterns and some private spaces I'm participating on.

Personally, I've been in and out of the wiki world for over 5 years. A co-worker at J.D. Edwards used to expound endlessly on how wikis were going to change the world. Oddly enough, Satish was right in many ways...some wikis have changed the world! Other wikis just stink. You know, to misquote a famous politician: "It's the content, stupid".

Before you get to content though, you also have to be careful to choose the right tool for the job at hand. Here's an example. Let's say that I have 100 Business students (most of whom have only used iTunes in their life) at a college that need to learn how to run business applications in a Solaris environment. As a school administrator, I'm left with two options:


  • Sign the students up and let them come into the classroom and communicate together in an attempt to learn the new CRM system
  • Sign the students up, assign a teacher well versed in Solaris and CRM and let them teach the class...facilitating comments and questions from the students as they go

This is a no-brainer...if you want to promote pirating of movies, choose the first one; if you want to teach the students something, choose the latter. Why? Your community (wiki) will get hijacked to serve the desires of the community that you assembled, and that desire is clearly NOT Solaris and CRM...these are business students. The latter (a blog, essentially) is NOT a community that can run amok; it is a soapbox discussion directed towards the assembled readers, with tiny spaces during the day to facilitate comments. The comments rarely rise above the importance of any lecture and, in fact, relate only to that lecture (a blog post).

There are a couple of other avenues for content these days that put things onto the web:


  • The venerable static web page content / update (no participation facilitated)
  • Group blogs (very, very useful for teams of people that all have something to say but don't want to collaborate), everyone gets to have their own soapbox.

There are more, I'm sure, but I wanted to keep this short and sweet. Here is another way to think about these things:


  • Wiki - Group barn raising, you probably have a "moderator" cleaning up loose ends and directing people, but you live and die by the workers building the structure, not by what you produce as an individual. Some BIG downsides of Wikis are the "group" mentality for page formatting and design and when individuals try to hijack a barn for themselves
  • Blog - A single person's soapbox, like this one. Blogs aren't about "participation" so much, they are about easy publishing, quick ways to get information out, and directed comments back about the particular topic.
  • Email Lists - Ahhh, email lists are HUGE...they are immediate and targeted, with even fewer formatting issues. Long live the email list. This is a great "forum" avenue.
  • Static Web Content - Publish

This is clearly, clearly a simplification of life. BUT, I often have these conversations when I start educating folks on the differences in various avenues for publishing content. The barn raising, soapbox, publish metaphors seem to hit the spot pretty well.

Friday Aug 10, 2007

5 Years at Sun...

...and this swanky new watch to prove it!

Well...it's swanky to me at least. My last watch was about $18.

Also, keep this in mind, this is huge. Five years is longer than I was married! My previous longest term commitment was to IBM, a total of 7 years (besides my family...).

I could reminisce about all of the great times at Sun but, really, 5 years isn't that long! I've had some great managers (not a single dud in the bunch), I've had wonderful projects, I've had some great teams. There have DEFINITELY been some rocky times but that's to be expected.

Thanks Sun! I realize you sent me the watch as a thank you gift, but it really is a privilege to work for a good company. Hopefully you'll have me around for another 5 :-)

Paul

Wednesday Jul 25, 2007

Amazon Unbox - Off the Hook - Good Stuff

Ok, after I ranted away on how badly Amazon Unbox let me down while I was in the hospital, I am giving it a cautious thumbs up now. I was checked into the hospital again for the recurring pneumothorax and had another surgery. Prior to that, I had contacted the gentleman at Amazon who wanted to look at the problem I had with getting to movies while I was laid up. After sending him the logs, he saw there was definitely an issue with the network sending SOAP requests back and forth through the wireless proxies at the hospital...my guess is it's because I hadn't yet connected to the hospital wireless when I brought the client software up.

Basically, the hospital network uses Cisco software. When you connect to the wireless, you automatically get routed to an "agreement" page where you accept the terms and conditions, then you "appear" to have open access to the network. Well, the requests between the Amazon Unbox client and the server software appear to have gotten snagged somewhere in the middle. This type of setup is the same as Starbucks' T-Mobile, the airport, yadda yadda.

Armed with some new knowledge I grabbed the updated 2.0 Unbox client software. I was SURE to be logged into the hospital network prior to the client starting, wondering if that was one of the numerous problems I may have encountered.

The 2.0 software is getting better I must say! The user interface had Unbox sale videos right in the window. This was nice, I found a $0.99 rental (Man of the Year) and clicked on it. This brought up my browser and took me to Amazon.com, hopefully the software folks can finish the integration so I never have to leave the client itself...I hate browsers popping up, it is very disconnected.

I hit the rental button in the browser and waited to see if the download would start in the client software. It started within a few minutes! First barrier overcome. Now, the download did take a LONG time, unfortunately. But with the 2.0 software, there was a nice progress bar right under the video along with the bit rate that the video was being transferred at. Much nicer; easy feedback is what we need.

The gentleman that contacted me ALSO let me know the screen saver issue I ran into would be fixed in this version of the software as well.

Now, listen up movie studios, moguls, and executives. You have to work with Amazon to fix this one-day-to-watch problem. There is a massive revenue stream sitting here for you to take advantage of. Allow a Netflix / Blockbuster / Hollywood Video style service where I can rent two movies from Amazon, but cannot get another until I've returned the movies I've watched. Give me time to watch the movies (not 1 day). Give me a "return the movie" button as well so I can return a movie unwatched (Amazon, even consider refunding part of my rental if I don't watch it). Amazon has a great service here and it is improving. There will ALWAYS be hackers, and people will always find a way around copyrights. BUT, if you make the service as seamless as Amazon is getting (especially with the Tivo integration), and you have lots of $0.99 sales, there is no incentive to try to get around it. This is a great and promising service; work with Amazon to be successful rather than allow the streaming networks to win.

If you work with Amazon, and the Amazon client software gets better than the BitTorrent client software (and it is already there as far as professionalism and polish), Amazon will help you win against the pirates. Most pirates aren't pirates to save $3; they are pirates because it is EASIER to use the pirate software and it gives them something they can't have...like easy video downloads.

Here's my advice to Amazon Unbox Users (Tivo or PC):


  • Watch the sales, good deals!
  • DO NOT expect to watch the video immediately, rent in the morning, watch in the evening. It is faster than NetFlix, slower than going to Blockbuster or Hollywood Video up the street.
  • If your download doesn't start quickly, contact Amazon, the folks seem friendly and nice so help them to succeed. Of course, contacting them is a little mysterious, it looks like you can go to the Unbox Support link at the top of the page, and then there is a "Contact Us" link in the right column.

I did see the other day that Blockbuster was eating into the Netflix profit and Netflix was losing customers for the first time ever. Personally, I have a Hollywood Video monthly agreement where I can just walk up (2 blocks away) and get 2 videos. It is quick and convenient. I believe NetFlix is about to be in a fight for its life. Amazon is a more "traditional" rental service, no long term agreement/monthly agreement necessary. I like it and may have to toss my Hollywood Video agreement soon...if we can get the downloads a bit quicker and more consistent.

For now, I have every intention of continuing to leverage Amazon Unbox for when I travel, get stuck back in my hospital bed, or have the urge to drop a movie to my Tivo when I'm up at work.

This is good stuff. Amazon definitely has a good lead with video delivery. My advice to Amazon:


  • Focus on appliances and get agreements with the DVR gurus, you need to conquer the appliance space...what about Microsoft XBox and Sony Playstation 3, seriously...get a channel with them (I don't think my Wii is reasonable, not enough memory).
  • The software for PCs is OK, but you probably know that it is not where you are going to make a killing. The PC software will be used for convenience...things like my hospital use case, travel with the kids in a hotel room (this is FAR cheaper than a rental in a hotel), business trips. So the PC software must be completely laptop friendly.
  • Work with these movie studios, they are making your service less competitive than traditional media, that doesn't seem right. It works for them to have your client software better than BitTorrent.

Thanks for the help Sam!

Thursday Jun 28, 2007

Spontaneous Pneumothoraxes, X-Rays, Hospital Tech, Tracking Devices, Unstructured Content

What a week it's been! I started out on Monday giving a presentation to Victor Walker's staff on collaboration, and I ended Monday lying in a hospital room with a (get this) spontaneous pneumothorax (air in my chest cavity that results in a collapsed lung). Guess what, these things "just happen" to tall, skinny guys. I'm not kidding.

So, here's what I have to say about spontaneous pneumothoraxes and treatment (in case you happened here trying to understand what they are), then I want to tell you about the cool hospital stay I had!

Yes, if you say Spontaneous Pneumothorax enough, you will think you are in a live Dr. Seuss book. But once they jam the chest tube into you, suppress these thoughts, you seriously do not want to be laughing. I went to the Dr. originally because of a weird shortness of breath...I mean, really short. I'm not usually that way. Another symptom was that I kept trying to decide to go for a run, but whenever I would step down hard, it felt like my upper chest was collapsing (hint...it was). Check this out:

See the shadow of my lung going halfway across my chest cavity? It's supposed to go all the way across my chest cavity. My first x-ray was sort of an out-patient thing. They said they'd call me on Wednesday with the results. So I settle in at home to finish my day and I get a phone call from the radiologist: "Were you in an accident?" (no), "Have you been hit by anything lately?" (no), "Well, don't be too concerned, but drop what you are doing and get a ride to the hospital. I am checking you into the ER now. What hospital are you going to? Do not drive yourself, whatever you do."

Ok, to cut the story short, Dr. Kim at Littleton Adventist Hospital had a chest tube in me within 45 minutes of that phone call. Ouch. I'll forgo the details, but it basically sucks...but getting it out sucks even more since you aren't on the same pain medications :-)

Now here is what is interesting to us techy folks. I was in the new wing of the hospital that has wireless. While I didn't do my normal work, I was relatively effective for a guy with a tube in his side and without a shower. Very cool. I actually started this entry while I was lying around.

Next, I had X-Rays every 8-12 hours, so I was joking with the guy that I'd like a copy. Ben says, "I'll bring you a CD". Next thing I know I have all of my X-Rays on a CD that I can browse through and use to try to be a pain to Dr. Kim. Needless to say, I didn't have much luck diagnosing myself; it's difficult to see the lung in the pictures...I know they are trained, but a computer guy just doesn't do Biology.

Interestingly, the X-Rays are a perfect example of unstructured data. Unstructured data is essentially data that is not easily read or queried by a machine. You would want to associate a lot of metadata with my X-Rays: obviously my name, date of birth, the date the X-Ray was taken, the problem indicated by the X-Ray, etc... Then you would want an application that could submit queries to the repository of images, search the metadata, and retrieve specific images based on the criteria. For example, if someone wanted to see a spontaneous pneumothorax, they could look it up.

Using the Honeycomb API, the application could query the Content Addressable Storage (CAS) that a Honeycomb System offers like this:

[fn LCASE(patient.diagnosis)] LIKE 'spontaneous pneumothorax' AND [fn LCASE(artifact.type)] LIKE 'xray'

Submitting this query via one of the language APIs should return a set of object IDs that match the particular diagnosis and artifact that I'm looking for (assuming I named the metadata attributes like that).
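
For the curious, here is roughly what that looks like from application code. The interface and class names below are hypothetical stand-ins of my own invention (I am deliberately not quoting the real Honeycomb SDK signatures from memory); the point is just the shape of the interaction: hand the metadata query to the archive, get back object IDs, then fetch whichever objects you want.

import java.util.Collections;
import java.util.List;

// Hypothetical client interface for a content-addressable store; a real
// CAS SDK exposes an equivalent query-by-metadata call.
interface CasClient {
    List<String> query(String expression) throws Exception;   // returns matching object IDs
    byte[] retrieve(String objectId) throws Exception;        // fetches one object's bytes
}

public class FindPneumothoraxXrays {
    public static void main(String[] args) throws Exception {
        CasClient cas = connect("archive.example.hospital");  // hypothetical endpoint
        String q = "[fn LCASE(patient.diagnosis)] LIKE 'spontaneous pneumothorax' "
                 + "AND [fn LCASE(artifact.type)] LIKE 'xray'";
        for (String oid : cas.query(q)) {
            System.out.println("matching x-ray object: " + oid);
            // cas.retrieve(oid) would hand back the image itself
        }
    }

    // Stub so the sketch compiles and runs standalone; a real client would
    // open a connection to the archive here.
    static CasClient connect(String host) {
        return new CasClient() {
            public List<String> query(String expression) { return Collections.emptyList(); }
            public byte[] retrieve(String objectId) { return new byte[0]; }
        };
    }
}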

The Honeycomb system is a hardware and software stack, but the API and a simulator can be downloaded from sun.com if you want to play with it. The simulator runs in Java so you can run it just about anywhere.

No, I don't think Littleton Adventist was using Honeycomb, but medical is definitely one place that could make heavy use of this type of unstructured data interface.

There was another very cool system in place at the hospital though. All of the staff had tiny radio devices attached to themselves; they were about the size of a tag that goes on clothing at a store, or the size of one of our own Sun SPOTs. The tag emits a radio signal that allows the personnel to be found wherever they are in the hospital.

I remember hypothesizing about this type of technology and how invasive it would be, but within the hospital setting it makes perfect sense. You are dealing with real-time events where you have to coordinate resources within seconds in the case of a trauma. Why not know who is closest to respond and be able to call them by name? In fact, the RNs were often called right from my room, since there are speakers and intercoms within the room. This way, the entire hospital does not have to be bothered with constant communications and, instead, the communication can be localized point to point without requiring pagers or walkie-talkies on the nurses and doctors.

At any rate, I'm back in the office today and hanging around trying to do things before my follow-up appointment tomorrow.

I have to say hospitals are a hotbed of technology but, in retrospect, I would prefer to schedule a customer visit rather than have one scheduled for me :-)

Cheers!

Friday Jun 08, 2007

ZFS Snapshot and Amazon S3 (Part 2 of 2)

Alright, I made you read my blather about costs and such; now it's time for some code! Well, you may be disappointed to know there is really very little code involved in creating a ZFS snapshot, sending it to Amazon S3, and restoring the file system back to its original state after retrieving the snapshot BACK from Amazon S3.

This version of the implementation is going to take the "simple is elegant" route. I use the built-in capabilities of ZFS to take snapshots, save them, and restore them, and I couple this with some Java code to move the snapshots to and from Amazon S3 (I'm much quicker with Java than with the many other languages you can use for Amazon S3: Perl, PHP, etc...).

Basic Setup
The following was set up on my system:
  • One storage pool: zpool create media c0t1d0
  • A file system within the media storage pool: zfs create media/mypictures
  • Mountpoint change to /export: zfs set mountpoint=/export/media/mypictures media/mypictures
  • Shared over NFS: zfs set sharenfs=on media/mypictures
  • Compression turned on: zfs set compression=on media/mypictures

I copied some pictures into the /export/media/mypictures directory (enough to create an acceptable snapshot...193 MB).

Snapshot and Store
The steps for creating the snapshot and sending it to Amazon S3 are:
  • Create a snapshot of the file system
  • Compress the snapshot and redirect it to a file
  • uuencode the snapshot (removes control characters)
  • Send it to Amazon S3 with appropriate metadata

To create a snapshot, use the "zfs snapshot" command. I will snapshot the entire /export/media/mypictures directory and name the snapshot "20070607" with the command:

zfs snapshot media/mypictures@20070607

The snapshot should initially take up no additional space in my filesystem. As files change, the snapshot space grows, since the original blocks must be retained along with the new blocks. Still, saving the snapshot will require the full amount of space, since I am creating a "file" full of the snapshot of the data (which happens to be all of the original data). It is relatively easy to turn the snapshot itself into a stream of data: simply "send" the snapshot to a file. En route to the file, pass it through gzip for any extra compression (remember, we'll be paying for this space):

zfs send media/mypictures@20070607 | gzip > ~/backups/20070607.gz

Mileage varies with compression on snapshots.

Next, I uuencode the file to prepare it to be sent over the Internet. The uuencode process expands the file size by about 35%, so it's highly likely that any gains we made through compression are taken back out by uuencoding. Here is the uuencoding process from the command line:

uuencode 20070607.gz 20070607.gz > 20070607.gz.uu

That's it on the Solaris-side, now I can send the snapshot to Amazon S3.

I will assume that a "bucket" is already created and that we are merely sending the final, uuencoded snapshot to the Amazon S3 bucket. To be honest, I tried using Curl, Perl, and a variety of other things, and I couldn't quickly get the right libraries to create the signatures, and I just hate scrounging around the Internet for the right this or that and changing compilation flags and recompiling and... So, I went with the Java/REST approach.

Use the Amazon S3 Library for REST in Java library. This has classes for doing all of your favorite Amazon S3 operations and was quite easy to use. You can see the whole program with the keys removed, or you can just take a look at the essential pieces of code for uploading here:


// AWSAuthConnection and S3Object come from the Amazon S3 REST sample library.
AWSAuthConnection conn =
    new AWSAuthConnection(awsAccessKeyId, awsSecretAccessKey);
S3Object object = null;

try {
    // Read the uuencoded snapshot file (snapshotPath) into a byte array;
    // this is plain java.io, any equivalent read will do.
    File snapshot = new File(snapshotPath);
    byte[] bytes = new byte[(int) snapshot.length()];
    DataInputStream in = new DataInputStream(new FileInputStream(snapshot));
    in.readFully(bytes);
    in.close();
    object = new S3Object(bytes, null);
} catch (IOException ioe) {
    ioe.printStackTrace();
}

Map headers = new TreeMap();
headers.put("Content-Type", Arrays.asList(new String[] { "text/plain" }));
System.out.println(
    conn.put(bucketName, keyName, object, headers).connection.getResponseMessage()
);



Run the code and you are GOOD TO GO!

Retrieve and Restore
The process of retrieving and restoring the snapshot when you lose your data or want to return to a previous time in your history is relatively simple as well, simply have a program to reverse the process above. Here is the essential Java code (using the Amazon S3 Java REST libraries again) for retrieving the blob from Amazon S3:


AWSAuthConnection conn =
    new AWSAuthConnection(awsAccessKeyId, awsSecretAccessKey);

System.out.println("----- getting object -----");
byte[] bytes = conn.get(bucketName, args[0], null).object.data;

// Write the data back out to a snapshot file so it can be uudecoded,
// gunzipped, and handed to "zfs receive" (plain java.io again).
FileOutputStream out = new FileOutputStream(args[0]);
out.write(bytes);
out.close();

Once you have your uuencoded, gzipped snapshot back from Amazon S3, decode it and decompress the snapshot like this (the reverse of what you did before):


# uudecode 20070607.gz.uu
# gunzip 20070607.gz

Now you have to decide what to do with your snapshot. I moved the existing mypictures pool and restored my old snapshot into its place to give me a complete time travel back to my snapshot. Here are the commands:


# zfs rename media/mypictures media/mypictures.old
# zfs receive media/mypictures < 20070607

That's it! Going to /export/media/mypictures will bring me to the pictures I snapshotted on June 7, 2007!

Issues with implementation
Yes, there are plenty of issues with the implementation above...I will simply call it a "reference implementation", right?
  • Amazon S3 size limitations - In the "Terms of Use", Amazon S3 specifies the following: "You may not, however, store "objects" (as described in the user documentation) that contain more than 5Gb of data, or own more than 100 "buckets" (as described in the user documentation) at any one time." As a result, one would want to slice the snapshot up appropriately so as to conform to the Amazon S3 limitations, or possibly work with Amazon S3 (see the chunking sketch after this list). The limitation IS completely reasonable, though, due to lengths and limitations in the HTTPS protocol itself; anything bigger and you will have to "chunk" it up for storage...this will take many more than 3 lines of code, but not too much.
  • Encryption - Data should be encrypted appropriately before being sent to a third-party storage site
  • Cron Job - Timely snapshots
  • Non-Java - It would be nice to do the whole process from scripts, but I got hung up on signing the header and streaming it over Curl, so I hopped to my native tongue (Java).
  • Heap Limits in Java - It is insane to read a 5GB file into Java, but I was being simple and quick; I think Curl would be a better solution if I could get the header signing right.
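
Since the 5 GB object limit came up in the first bullet, here is a rough sketch of how the upload could be chunked to stay under it (and to keep the Java heap happy by never holding the whole snapshot in memory). It reuses the AWSAuthConnection and S3Object classes from the Amazon S3 REST sample library used above; the 100 MB chunk size and the ".partN" key naming are my own choices for illustration, not anything the library dictates.

import java.io.FileInputStream;
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;
// AWSAuthConnection and S3Object come from the Amazon S3 REST sample library
// referenced earlier; add its import (or drop this class into that package).

public class ChunkedSnapshotUpload {
    // 100 MB chunks: comfortable in memory and far under the 5 GB object limit.
    static final int CHUNK_SIZE = 100 * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        String awsAccessKeyId = args[0], awsSecretAccessKey = args[1];
        String bucketName = args[2], snapshotFile = args[3];   // e.g. the uuencoded snapshot

        AWSAuthConnection conn = new AWSAuthConnection(awsAccessKeyId, awsSecretAccessKey);
        Map headers = new TreeMap();
        headers.put("Content-Type", Arrays.asList(new String[] { "text/plain" }));

        byte[] buf = new byte[CHUNK_SIZE];
        int part = 0;
        try (FileInputStream in = new FileInputStream(snapshotFile)) {
            int n;
            // read() may return fewer bytes than CHUNK_SIZE; each read simply
            // becomes its own object, e.g. 20070607.gz.uu.part0, .part1, ...
            while ((n = in.read(buf)) > 0) {
                byte[] chunk = Arrays.copyOf(buf, n);
                String keyName = snapshotFile + ".part" + (part++);
                S3Object object = new S3Object(chunk, null);
                System.out.println(keyName + ": " +
                    conn.put(bucketName, keyName, object, headers).connection.getResponseMessage());
            }
        }
    }
}

Restoring is the mirror image: get each ".partN" object in order and append the bytes to a single file before uudecoding.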

Total Cost
I incurred a total cost of $0.07 to my Amazon.com Web Services account for the raw storage and bandwidth for the blog post, to be charged July 1, 2007. As I want to save a little money for my company, I will swallow the cost and call it a day (or two).

Thursday May 10, 2007

SmugMug + Project Blackbox Blog + Google Maps

It should be no secret by now that I'm the mysterious Project Blackbox blogger. I get lots of help from our marketing folks as it tours, as well as Dave and Paul the drivers. The Web 2.0 mashups are left to me. I'll be honest, I'm more of a middle-tier programmer and large application architect, so the mashups come with painful lessons these days. One of the reasons I took the job as Project Blackbox blogger was to "close the gap" between my architecture role and understanding the complexities of Web 2.0 programming. At the architecture tier, Web 2.0 is more or less obvious. The realities of what can and cannot be done and how long it takes to do it are extremely important to understand.

I just finished an update of the blog template to include two maps last night, and I am still tweaking the look to try to pack as much content as I can onto the page. In the spirit of open source and transparency, I figured I would post the current solution here so I could possibly help others as they do similar tasks.

Technologies
With virtually all Web 2.0 mashups, you are going to use a whole pile of different technologies. Everyone has their own APIs and the Web 2.0 programming technologies are a dime a dozen, each with their own benefits and drawbacks.

My technology choice was also constrained by my vendor choice. I had determined that I was going to use SmugMug as my gallery provider and Google as my map provider. It turns out that Flickr had some of the advanced APIs I needed, but they don't have a first-class geography tagging facility (I would have to use keywords). I am very happy with the personal level of support that the SmugMug crew gives, so I am a big fan of theirs. I felt, in the end, that getting responses and help from SmugMug for the API additions of callbacks and geo information was far more likely than getting geo information inserted into the Flickr site (I've had some bad customer support experiences with Yahoo...though I do get my own personal domains from Yahoo!).

The technology list that I ended up with is:
- Google Maps as the map provider
- SmugMug as the picture host, using the gallery RSS feed
- Rome RSS Parser for parsing the RSS
- Java with the NetBeans IDE in a standalone program to parse the SmugMug feeds and convert them to JavaScript.
- Curl for uploading the JavaScript to an intermediate site
- JSON for the JavaScript Object formatting
- JavaScript for the embedding into the blog

SmugMug APIs and Feeds
My original goal was to tie the blog directly to SmugMug through their APIs. Unfortunately, the APIs were geared toward either having applications built on the SmugMug infrastructure (you can build your own templates at SmugMug) or having control of the hosting infrastructure so you could use a proxy solution to resolve cross-domain security issues. The SmugMug crew is adding a callback mechanism to their JSON API to help me, but this wasn't ready in time for the blog.

Even with the callback mechanism, the geographical information that you can associate with a picture in your gallery is not available from the API, only from the RSS stream. This will also be resolved by the SmugMug crew in time, as you can see from this thread in the support forum.

In the end, each gallery has a robust RSS feed with the geographical information. I have two galleries with the feeds for the U.S. tour and the Europe tour coded into the RSS standalone parsing program. You will need the Rome RSS Parser and JDOM (a dependency of ROME) to compile it.

I upload the resulting JavaScripts from the conversion (that use JSON for the object format) to an intermediate web site using CURL. You may ask why I don't upload the JavaScript to the Sun blogs site. This is because there does not appear to be an FTP port open to insert it into my blog resources...and so I use an intermediate location.

Google Map APIs
The Google Map API is well studied. After importing the JavaScript that I converted and uploaded, I go through the array of pictures (one for the U.S. and one for Europe) and create "Markers" for each one. The RSS feed has the first item as the latest posted, so I open an HTML window for it.

JavaScript in the Blog Template
The scripts for the mashup all occur in the <head> portion of the template. In the HTML body, right above the for loop for the weblog entries, I inserted the two div markers where the maps will be placed.

<div id="usmap" style="height: 250px;margin-left: 20px;margin-right: 20px;border-style: double;"></div>
<div id="eumap" style="height: 250px;margin-left: 20px;margin-right: 20px;border-style: double;"></div>

These were VERY difficult to get right. I started with two separate side by side maps, but the rendering was slow and inconsistent across browsers. I ended up with div's to place the maps above and below each other. This worked well.

The scripts took some time to get right. I actually edited and played with the maps on a sandbox web site from my hosting provider before I edited the templates. This gave me better formatting and quicker turnaround time. Here are the final JavaScripts for the Map and JavaScript import.


<script src="http://intermediatesite/eupictures.js"></script>
<script src="http://intermediatesite/uspictures.js"></script>

<script src="http://maps.google.com/maps?file=api&v=2&key=mykeyfromgoogle"
type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
function load() {
if (GBrowserIsCompatible()) {
var usmap = new GMap2(document.getElementById("usmap"));
usmap.addControl(new GSmallMapControl());
usmap.addControl(new GMapTypeControl());
usmap.setCenter(new GLatLng(39.096, -96.59), 3);

var eumap = new GMap2(document.getElementById("eumap"));
eumap.addControl(new GSmallMapControl());
eumap.addControl(new GMapTypeControl());
eumap.setCenter(new GLatLng(48.46, 8.88), 3);

function createMarker(point, text) {
var marker = new GMarker(point);
GEvent.addListener(marker, "click", function() {
marker.openInfoWindowHtml(text);
});
return marker;
}

for (var i = 0; i < uspictures.length; i++) {
var point = new GLatLng(uspictures[i].latitude, uspictures[i].longitude);
var text = "<a href=\"" + uspictures[i].link + "\">";
text = text + "<img src=\"";
text = text + uspictures[i].thumbnail + "\" width=\"150\" height=\"101\" />";
text = text + "</a>";
text = text + "<br/><small>" + uspictures[i].title + "</small>";
var marker = createMarker(point, text);
usmap.addOverlay(marker);
if (i == 0) { marker.openInfoWindowHtml(text); }
}

for (var j = 0; j < eupictures.length; j++) {
var point = new GLatLng(eupictures[j].latitude, eupictures[j].longitude);
var text = "<a href=\"" + eupictures[j].link + "\">";
text = text + "<img src=\"";
text = text + eupictures[j].thumbnail + "\" width=\"150\" height=\"101\" />";
text = text + "</a>";
text = text + "<br/><small>" + eupictures[j].title + "</small>";
var marker = createMarker(point, text);
eumap.addOverlay(marker);
if (j == 0) { marker.openInfoWindowHtml(text); }
}

}
}
//]]>
</script>

And that is about it!

Some Extra Commentary
So, this took WAY longer than it should have, thus my desire to do a blog entry in case other folks try to do the same thing. The reasons that it took a while are:
- The cross-domain security issues and workarounds are a pain and assume you own infrastructure for the most part (except the new JSON callback solution which I'm very excited about). The amount of material (some links above) I had to digest and learn about was astounding, I had to be duplicating effort of someone else but there doesn't seem to be a single location for this information, it is literally cutting edge. In the end, these solutions didn't work. Pure client solutions for AJAX / Web 2.0 Mashups are very hard to come by. Even the new JSON Callback mechanisms raise my eyebrows around security. The community around mashups is also relatively naive around pure client applications. Hats off to Google Maps for getting it and to SmugMug for being responsive.
- What language to use? COME ON PEOPLE, these Web Tier languages are a dime a dozen now. I understand each has benefits and drawbacks but the long term support of this tier is going to be terrible. There WILL BE consolidation in this tier to reel in the learning curve and focus developers (in tools, in companies, etc...). What will be the standing survivor? I have to look into the JavaOne announcements in this tier...sorry, I'm a Java survivor and I love the language. It is powerful with tons of libraries and I don't have to download and compile source code whenever I want an update to the language.
- Browser compatibility is close, but getting the divs to work right was a complete pain.
- Coding tools are primitive in this tier. I spent a lot of time with the Error console in Firefox to get this right, and lots of uploading and downloading of .html files. Also, lots of pure head scratching.

Enjoy!
