Thursday Feb 05, 2009

It's the Little Things

In the storage industry (and the computer industry in general) we have this term known as a "dissatisfier". A dissatisfier is basically a feature or product that people have to use but that, no matter how well you do it, will almost never earn you a "win" (or even an acknowledgment of how well you implemented it), since the customer isn't really buying the product for that feature. On the other hand, if you implement the feature poorly, you can lose deals and cause a great deal of consternation on customer sites.

Solaris 10 Update 6 (as well as the latest OpenSolaris release) fixes a huge dissatisfier for me. In previous versions of Solaris, after choosing what devices to install the system on, you were met with a prompt asking if you wanted to partition the storage by default or customize the directory allocations. At this point you would go in and choose how much storage goes into the root (/), how much goes into home directories (/export/home), how much is swap, and you could add your own partitions. I know I'm bad, but since I'm an engineer I never use /export/home on my system, so I always have to jigger the directories and fix how much storage is allocated.

Along comes ZFS Boot. Now, there are a LOT of reasons to love ZFS boot (we use it on the Sun Storage 7000 and on most of our developer systems). On previous projects, we noticed how quick and easy it was to do things like LiveUpgrade.

Now, remember that ZFS is natively thin provisioned...meaning, you can create file systems that all sit over the same pool of storage (multiple drives, devices, enclosures) and each file system vies for the storage (you can add quotas and reserve space, don't worry).
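If you haven't played with this, here is roughly what it looks like from the command line once a system is up (a minimal sketch; the pool name tank, the file system names, and the sizes are all made-up examples):

# zfs create tank/home
# zfs create tank/home/paul
# zfs set quota=10G tank/home/paul
# zfs set reservation=2G tank/home/paul

Every file system draws from the same pool of storage, but the quota caps what tank/home/paul can consume and the reservation guarantees it a minimum.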

Today I installed Solaris 10 U6 and blew through the installation process without EVER having to finagle a screen saying how much storage should be allocated to each directory. YAAAAYYYYYYYYYYYYY :-)

Basically, you choose to install with ZFS, select two disks (so the install can be mirrored and one device can fail) and off the install goes happily ever after.
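Once the install finishes, you can sanity-check the mirror (rpool is the default name for the ZFS root pool, so this should work on a stock install):

# zpool status rpool

You should see both of the disks you selected sitting under a mirror vdev.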

Ahhhhh, it's the little things ....

Thursday Jul 17, 2008

BitTorrent on Solaris (Nevada, Solaris 10 Update 5)

After years of trying to squirrel away a few minutes to install my own BitTorrent swarm in our lab, I've finally done it! I thought I'd put the instructions out here on the web since explicit Solaris-based instructions seem to be relatively sparse (they are sparse because the version I'm using is relatively easy to use and not tied to a single platform...thus the lack of Solaris specifics).

I started with a clean build of Solaris 10 Update 5 on one machine, and Build 94 of Nevada on another set of 4 machines. These are the Sun Fire x4150s I've discussed in other posts.

Retrieving all of the packages I needed was straightforward with Blastwave.org (an open source software repository for Solaris). In fact, with just a few commands I had a complete BitTorrent stack installed that runs with Python (don't have Python on the system...Blastwave takes care of that too).

Install BitTorrent
First, read the HOWTO on using Blastwave. I only made it through a few steps of the HOWTO before I was up and running, so don't sweat it if it seems long and you are attention-challenged (like me without a special Nespresso Level 8 or above espresso). Of course, I'm fumbling around more than I should, so be sure to read the whole thing when you do have time; it contains information on directories and where stuff gets put (at least I think it does...). Installing what you need to use Blastwave takes only a few minutes. Once you have Blastwave ready and running, retrieve BitTorrent and all its dependencies (primarily Python):

/opt/csw/bin/pkg-get -i bittorrent

That's it! BitTorrent is now installed in /opt/csw/share/BitTorrent.
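All of the commands in the rest of this post assume you are working from that directory (or have it on your PATH):

# cd /opt/csw/share/BitTorrent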

Remember the BitTorrent Components
Here is the two-minute review of the important BitTorrent network components and terminology, following the general deployment diagram here (this is the swarm I've installed in my lab, so I'll refer to it throughout):


  • Torrent - A "file" that is prepared to be distributed by BitTorrent; torrent may also refer to the metadata file prepared from the contents of the file you will be distributing
  • Tracker - A "tracker" helps refer downloaders (peers) to each other, sort of an organizer of nodes
  • Swarm - A group of peers that are delivering content to a requester (the downloader itself often becomes a part of the swarm for another requester very early in the download process)
  • Seed - A full copy of a file that clients can obtain. This seems like a heavy load on a single server, but very early on clients will lessen their load on the Seed systems and depend on other peer clients for chunks of the torrent.

In the scenario below, the Tracker and the Seed for my torrent (OpenSolaris 08.05 ISO) are one and the same. The Seed should typically be split from the Tracker, though...the Tracker is a bottleneck and you don't want it to be pounded on by Peers.

Start a Tracker
We will start the tracker first. A tracker can be started as a background task; it will sit on a port, listen for requests, and write output to a file.

./bttrack.py --port 6969 --dfile dstate --logfile tracker.out

Your tracker is now up and running and ready to organize clients.
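Since the tracker has to stay up for the swarm to function, I background it and then check that it is actually listening (the nohup and netstat invocations here are my own habit, not anything from the BitTorrent docs):

# nohup ./bttrack.py --port 6969 --dfile dstate --logfile tracker.out &
# netstat -an | grep 6969

The netstat output should show port 6969 in the LISTEN state.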

Create a Torrent
A "torrent" metadata file contains all of the information about a particular file that can be retrieved through BitTorrent. The torrent is associated with a tracker and when complete will contain a variety of information to ensure the integrity of the final torrent when a client receives it.

The README.txt with the BitTorrent download didn't have proper instructions for building a torrent (it seemed a little out of date), so be careful and don't get frustrated. Use the btmaketorrent.py command to build the torrent file. I've chosen to create a torrent from the OpenSolaris 08.05 distribution (my favorite distribution of all, of course...):

./btmaketorrent.py --comment "OpenSolaris 08.05 from Pauls Swarm" http://x4xxx-01.sun.com:6969/announce os200805.iso

Notice the use of the tracker that I set up. You can show the meta information that was generated by using the btshowmetainfo.py command:

# ./btshowmetainfo.py os200805.iso.torrent
btshowmetainfo 4.0.4 - decode BitTorrent metainfo files

metainfo file.: os200805.iso.torrent
info hash.....: fdf239d2524e44432892d01ab354e20a8b77b7e6
file name.....: os200805.iso
file size.....: 719087616 (2743 * 262144 + 26624)
announce url..: http://x4xxx-01.sun.com:6969/announce

Setting up a Seed
You need a web server available on the system that is going to be a "Seed". Nevada has Apache 2.2 installed by default (at least in my build); you simply have to turn it on (mileage may vary). You can do this by typing:

# svcadm enable apache22
# svcs | grep -i apache2
online 20:41:36 svc:/network/http:apache22

A better instruction set can be found on BigAdmin.

With a little nosing around the system, I found my httpd.conf file at /etc/apache2/2.2. Look in the file for the DocumentRoot, something like this works:

# cat httpd.conf | grep DocumentRoot
# DocumentRoot: The directory out of which you will serve your
DocumentRoot "/var/apache2/2.2/htdocs"

We will place our .torrent file (created earlier) in the DocumentRoot.
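In my case, that boils down to a single copy (adjust the path if your DocumentRoot differs):

# cp os200805.iso.torrent /var/apache2/2.2/htdocs/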

On most web servers, you also have to associate the MIME type "application/x-bittorrent" with the file extension "torrent"; this was already done for the pre-installed Apache 2.2 on Nevada.
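If your web server doesn't already have the mapping, the standard Apache directive looks like this (add it to httpd.conf and restart the service; you shouldn't need this on Nevada):

AddType application/x-bittorrent .torrent

# svcadm restart apache22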

Finally, put the complete file on a server (the same one works, as I am doing here, but a separate server is recommended). Run the BitTorrent download, saving the file into the location where the file already exists. BitTorrent is smart enough to see that all of the chunks are there, but it spends time verifying they are correct. When complete, the computer you ran the download on is a Seed. This command works for the OpenSolaris 08.05 torrent I created earlier:

# ./btdownloadheadless.py --url http://x4xxx-01.sun.com/os200805.iso.torrent --save_as os200805.iso

This takes a while as it runs through hash checks on the existing file.

The torrent is now ready as a Seed for peer to peer access. You can add as many Seeds as you want depending on how popular you think your file is going to be.

Use another Client to Download a File
I can now go to x4xxx-02 and start the headless download, just as the previous one ran.

# ./btdownloadheadless.py --url http://x4xxx-01.sun.com/os200805.iso.torrent --save_as os200805.iso

Remember, the torrent itself points back to the tracker. There should be some brief activity on the tracker during the download as it checks whether other peers in the swarm can help.

During the download, you will become a part of the swarm from which other clients can download chunks of OpenSolaris (you have to give it, if you want to get it - The Suburbs).

As the file starts to download, you will see the transfer rate start to go up. With a single Seed, the download slowly ramped up. BitTorrent is careful to balance requests. Remember, I now have a Seed and a Peer with chunks of the file. With a single download in progress, I reached about 20 KB/sec (the upload from the Seed and the download at the client)...there is some obvious throttling going on somewhere.

What if I start a download of the torrent on x4xxx-03? The Seed remained uploading at around 20 KB/sec. After an initial hit on x4xxx-02 down to about 14 KB/sec, it quickly moved back to over 20 KB/sec, while x4xxx-03 was peaking near 30 KB/sec. As more chunks moved onto x4xxx-03, x4xxx-02 also sped up, since it could grab chunks from two peers. The Seed remained constant at a 20 KB/sec upload, but x4xxx-02 was now also uploading at 20 KB/sec and x4xxx-03 was able to take advantage of the aggregate bandwidth.

But wait, I had two more clients sitting idle, so I started up x4xxx-04 and x4xxx-05 with the download. Again, an initial hit on x4xxx-02 occurred, as it was heavily relying on the original Seed while other clients were relying on it. Within about a minute, the original Seed was still uploading at 20 KB/sec, but -02, -03, and -04 were also uploading at 20 KB/sec. All clients were now downloading at 20 KB/sec with the original Seed still uploading at a constant 20 KB/sec...peer-to-peer amortization of upload bandwidth at its finest.

As the download moved along, more chunks flooded onto the remaining peers and the sharing became much more efficient. Each of the clients regularly found their way over 25 KB/sec and often into the 30 KB/sec range.

I decided that the headless text output wasn't very fun, so I added my home system that I VPN in with to the party using Azureus. With Azureus, I'm able to get graphical displays of what's going on. Here is a picture of my swarm:

Note that the center circle is my client, the fully blue circle is the seed, and you can see the other peers don't yet have a lot of chunks of the file.

Here is another interesting view from Azureus:

Of particular note is the Swarm Speed: we have hit about 100 KB/sec with our 5 peers, with obvious upload throttling on each client at around 20 KB/sec. Well, I'll obviously be looking into that...but I think I've run out of words for the evening and I've more than achieved my purpose tonight. Enjoy BitTorrent on Solaris!

Now, I just have to figure out what other kinds of fun I can have with my swarm :-)

Monday Mar 03, 2008

Computational Photography and Storage

There is a great article on CNet's News.com about computational photography, "Photo industry braces for another revolution". It is basically about Photography 2.0. The first wave of digital photography sought to reproduce film-based photography as well as it could. Photography 2.0 advances the hardware while using the camera's growing processing power to exploit that new hardware, replace hardware functionality with software, or bring image detection and manipulation capabilities that aren't possible in hardware at all.

There are a few developments worthy of note, and all of them involve bringing more CPU capabilities into the camera:


  • Panoramic photography - I enjoy these types of scenes (one shown below), though I don't think they are the future of photography at all
  • Depth of field and 3-D photography - There is an excellent example of this in the CNet article. Personally, I find depth of field arguably one of the most difficult techniques to master, since it is purely four-dimensional with our current lenses (decreasing the aperture size increases the time of exposure and brings more depth into focus, etc...)

There are many other ideas in the article...detecting smiles (an extension of this is closed or open eyes), better light detection, self-correcting for stabilization (this is done with high priced hardware today in Image Stabilized lenses), etc... Clearly a Photography 2.0 revolution is in the works.

Photography 2.0 is really the same trend we see in the storage business...Storage 2.0. There are simple changes in the industry, like the incredible increase in CPU power driving software RAID into storage stacks again. A huge benefit of software RAID is the decrease in hardware costs that it drives. This is very similar to the Photography 2.0 concept of moving image stabilization out of the hardware (the lenses) and into the software.

Storage 2.0 also brings us projects like this one: Project Royal Jelly. Project Royal Jelly encompasses two important pieces: one is the implementation of a standard access model for fixed-content information; the second is the insertion of execution code between the storage API and the spinning rust. The ability to "extend" a storage appliance (or device) via a standard API will allow us to leverage the proliferation of these inexpensive and high-powered CPUs. A common use case for an execution environment embedded in a storage device would be an image repository or a video repository. Every image submitted goes through a series of conversions: different image formats, different image sizes (thumbnail, Small, Medium, Large), and often a series of color adjustments. Documents go through similar transformations: a PDF may have different formats created (HTML primarily), the document will be indexed, larger chunks will be extracted into a variety of metadata databases for quick views, etc...
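To make that concrete, here is a toy "transform on ingest" hook of my own (just a sketch, not the Royal Jelly API; it assumes ImageMagick's convert command is installed and the hypothetical script is called ingest.sh) doing the kind of work an execution environment embedded in the storage device could do the moment an image lands:

#!/bin/sh
# Generate the standard renditions for a newly stored image.
src="$1"
base=`basename "$src" .jpg`
convert "$src" -resize 128x128   "${base}-thumbnail.jpg"
convert "$src" -resize 640x640   "${base}-small.jpg"
convert "$src" -resize 1024x1024 "${base}-medium.jpg"
convert "$src" -normalize        "${base}-adjusted.jpg"

Run it as ingest.sh photo.jpg and the thumbnail, Small, Medium, and color-adjusted renditions appear next to the original; the point is that nothing in this logic requires the application's involvement.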

These transformations can arguably be the responsibility of the storage operation, not the application operations, especially when the operations can be considered part of an archiving operation. While indexing and manipulation could be considered a higher tier, storage tiering and taking advantage of storage utilities could also benefit from a standard storage execution platform. Vendors could easily insert logic onto storage platforms to "move" data and evolve a storage platform in place rather than authoring applications that have to operate outside of the storage platform.

Just some Monday morning musings...have a great week.

Monday Feb 25, 2008

DVD Player = Trash Bin

My nearby Hollywood Video shuttered its doors this weekend. My MVP membership was automatically canceled by the company. As far as the personalities go, I enjoyed all of the folks there; it was nice to get real recommendations from real people. As I stood in line with my son to purchase a few stray DVDs, I told him that in 3-5 years he would never step into a video store again. I got a few stray looks in line. The more I told him about never being in a video store or a music store again, the more I realized what a foolish purchase those plastic cases and metallic disks in my hands were.

I convinced my son we didn't need the DVDs and I promised he could login to Amazon Unbox on the Tivo and rent a movie when we got home. We set the DVDs down, left the line, and I walked out of a video store with my son for the last time ever.

We went home, rented "Mr Bean's Holiday" (no, it wasn't my choice), and now I'm boxing up the DVD player and putting away whatever movies I do have.

And here is a message for Blu-Ray. I haven't looked at a Blu-Ray player, I'm not looking at a Blu-Ray player, and I swear I will never have one in my house, period (unless I purchase a video game system with one, but then I won't get movies for it anyway). Revel in your victory, oh Blu-Ray backers, but it will be brief; physical media for content is short-lived. I actually think the Microsoft Xbox is fine if they can get video downloads working ASAP on their platform. These are pro-gamers; renting online will not be a problem, and I doubt pro-gamers show off their DVD collections.

When I don't have my Tivo with me, I can download to my Media Center PC and plug it into the TV through the S-Video port. Works great; portable media, very nice.

One less black box in my house is a good thing. Thanks for your time of service DVD player, I'll show you to the door.

Tuesday Sep 11, 2007

Starbucks, Your Brand isn't Worth $9.99

I decided I would get out of the house today to do some more work on the XAM stuff I'm doing. I have a nice setup at home and I get to avoid the long commute, save a little gas, yadda yadda yadda, but some days you just have to leave for a few minutes to regroup.

So, I decided I would wander over to Starbucks and bring up my NetBeans IDE to do some coding. I grab a coffee, grab a chair that 5,000 people before me have sat on (so there is no telling what is in the crevices), and I pop open the laptop. Gack...they are still partnered with TMobile. Guess what? My account was "cancelled" by them and they want me to call customer service (probably otherwise known as "the Sales Department"). I surf around a bit and they still want $9.99 for a day pass.

Listen, Starbucks, it is easier for me to get up, move my butt down the street, and get a faster and more reliable Wi-Fi connection for free. Further, the Starbucks brand no longer demands a premium for "coolness" since the market is saturated (600 Starbucks in the San Francisco area...you have GOT to be kidding me), so it's not even COOL for me to be sitting in Starbucks working; it is far cooler at the Tattered Cover or even Panera.

So, for me to get a few minutes of work in (for which I DO buy coffee, yogurts, and other stuff since I come in before breakfast), you want me to:


  • Purchase a few things at your store
  • Drink McDonald's-like coffee that tastes the same every day, day in and day out, with no distinct flavor and no personality
  • Pay $9.99 for a day pass that I will use for maybe an hour while I eat Breakfast
  • Actually "talk" to the TMobile Sales...err...Service people that deactivated my account
  • Not support my local businesses that have free Wi-Fi (I know Panera isn't local but the Tattered Cover is)

Listen, it doesn't add up! It is just not cool to carry around a Starbucks cup; it is just coffee, coffee I can buy at approximately 200 other places in the Denver area. And paying over $17 (TMobile + Food) for the privilege of Starbucks just doesn't add up anymore when I can take $10 off by going to one of the myriad other places that have Wi-Fi. Heck, I could go to the hospital again and get free Wi-Fi (would insurance help me out on that one????).

And if you want a clue why your U.S. expansion is stalling, it doesn't take a brain surgeon to figure it out. And, frankly, the iPhone partnership isn't going to help you. Get out of the TMobile partnership, get some food that isn't stale and unappealing, get some hipness...you're a dinosaur.

Wi-Fi is ubiquitous. What is a brand worth to a consumer when that brand has hit the saturation point? I'm not a genius, but it's not $17.

Tuesday Sep 04, 2007

CPU and Storage all the way down

This weekend I had the privilege of finding the last campsite in all of Colorado. I went to Stillwater Campsite in the Arapahoe National Forest area, just down the road from Rocky Mountain National Park. Everything was spectacular about the entire trip, and I had my Canon Elph along for the ride (I bought a Canon Rebel right before we left but...forgot the battery!!!!!!).

The panoramas in the Rocky Mountain National Park area are fantastic.

If you haven't dug into your latest digital camera + editing software, you may not have noticed a little feature that is called something like a "photo stitcher". It basically allows you to take a series of photos and stitch them together into one large picture, like the one here:

The original source is much larger than what I show above (9448x1823 to be exact) and is stitched together from a whopping 5 pictures taken in a line across the horizon. If you like the above picture, it is a scene off the roadside in Rocky Mountain National Park. Long's Peak (a 14er) is on the right side of the shot. Is it a good picture? I don't know, you can decide. Fortunately for you, if you are in Rocky Mountain National Park, you actually have to try if you want to capture a bad picture.

What is particularly interesting to me is the number of processors and the amount of intelligence that is inserted between the original captured content and the final printed result. What we are witnessing is the increasing encroachment of system and application intelligence in the path of the data content. This is occurring everywhere, from the smallest of devices up to the largest of storage systems. Further, companies that can insert applications along the datapath can do a great service to the consumers of the content along the way. My stitched together photo is a perfect example.

Start with me standing on a ridge in Rocky Mountain National Park. I would LOVE to have the entire panorama I'm seeing in a single picture, but my lens simply won't allow it. Enter the "Stitch Assistant". Instead of my camera being a light capture-and-store device, it is a light capture, process, store, and assist device. Using the stitch assistant (that my friend found for me), I was able to take the first picture, and my camera would show the edge of that picture so I could line up the next picture with some intelligence. Not only that, it would keep the photos organized so that they are easily found and accessed on the SD card.

Next, I upload the pictures to my laptop, where there is more storage and CPU processing. The Canon software takes the images that were grouped on the camera, stitches them back together into one large panorama, crops it, and stores it as new content. Now comes a FATAL flaw in the Canon software: there is no built-in uploader to online content repositories.

So, I bring up Kodak EasyShare or the SmugMug uploader and grab my panorama (SmugMug is actually proving to be easier). SmugMug then processes the image AGAIN on the way into their content repository and transforms it into thumbnails and metadata. Finally, I choose to print my panoramic view (here all of the storage services fall down; they offer the traditional print sizes (4x6, 5x7, etc...) and don't tailor their services to us budding panoramites; I would love to choose a picture and have them create a custom size for me that best fits it).

Think about this: every storage operation was preceded by application intelligence that made meaningful transformations of my content. With better programmers and more time, the storage operations could become far more intelligent as well. For example, there is absolutely NO reason that the pictures I used "Stitch Assistant" on shouldn't have been woven together into a single panorama when they were transferred from my camera to my laptop. The storage operation itself should be able to apply meaningful application logic to the process.

Where these transformations take place will depend on the particular application. For example, the Canon camera uploader is a natural place for the stitch-assisted photos to be re-stitched because of the lack of CPU on my camera and the lack of standards for metadata in images. Had my camera more CPU, the camera itself could do the stitching. If images had standard metadata about which picture in the series was which, you could push the stitching into the storage device itself (you would store 5 pictures and get 6 back in your folder!). Even the operating system could add the functionality.

Each step of the way has distinct advantages and disadvantages. More and more we are going to see application logic being placed in between the capture and raw bits of our digital life. It just makes sense that our storage devices should be able to do some work on our behalf. Further, these abilities are going to creep into our lives almost unnoticed...except by those companies that are enabling these leaps in customer functionality, like Sun Microsystems :-)

Monday Aug 13, 2007

Google Storage Offering / Microsoft SkyDrive

I don't think anyone would argue that Google is one of the masters of Web 2.0. One thing that Google does particularly "Web 2.0" is the Perpetual Beta. They release code early, they release code often, and more often than not, the application is a glimmer of what "could be" rather than the all-conquering application that is.

Google's Storage Offering is one of those things that could be. With the wealth of APIs available (including the Google Data APIs, the Google Web Toolkit, and a Picasa Web Albums Data API), the new storage offering has more than enough potential for conquering the world.

But, lo and behold, true to the Web 2.0 roots, the first outing for Google's Storage Offering only "integrates" the Gmail application and the Picasa Web Albums storage (so that both applications can access the same storage). Obviously, Google is moving to a consolidated storage / application model rather than separate stovepipes...though I have to question making people pay for this feature (free storage in the two applications is still stovepiped). In addition to "integrated" storage, you also get more storage.

So, as of right now, the model for Google Storage pricing is different from Amazon.com's pricing. In a way, this is an apples to oranges comparison as the accessibility of the storage is different (Google is accessed THROUGH an application and Amazon is accessed BY an application). I can ONLY assume that there will be parity in writing to a data API at some time (though I believe the capabilities of the data API will be slightly different).

Recall that Amazon.com's pricing is $0.15 per GB per month, with a bandwidth price of $0.10/GB for data transfer in and tiered pricing for data out ($0.18 for < 10GB, etc...).

The cost of Google storage is $20/year for 6GB, $75/year for 25GB, $250/year for 100GB, and $500/year for 250GB. Because there is no "application" API at this time, it makes sense that there is no bandwidth charge.

It's very, very difficult to make a comparison here, but at the 6GB cost per year (assume 12GB of inbound data and maybe 18GB of outbound data), you end up with $20 for Google and $15.24 for Amazon.com. But, again, what you can do with that storage is radically different, so it's really not even a fair comparison. Today, you would choose your provider based on your needs (if you use Google apps, buy storage from Google; if you want to write an application that requires storage, use Amazon.com).
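For the curious, here is the back-of-the-envelope arithmetic behind that Amazon.com number, using the prices above:

Storage:   6 GB x $0.15/GB/month x 12 months = $10.80
Data in:  12 GB x $0.10/GB                   =  $1.20
Data out: 18 GB x $0.18/GB                   =  $3.24
Total                                        = $15.24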

Will Google give access to the storage from the Google Data API? I have to assume so...some day.

Architecturally, if you assume you may switch from Amazon.com as your application storage provider to Google storage at some point in the future, you will need to build an adapter/model layer into your application so you can plug in the Google API as a target (assuming there is a level of parity in the API capabilities).

Finally, I have to briefly mention Microsoft's SkyDrive Beta. SkyDrive is a web-based application that you can sign up for with a Microsoft Live Account. It provides 500 MB of storage with a web-based interface for dragging and dropping files. There doesn't appear to be a CIFS API yet for you to mount it on a machine and use it seamlessly, but you can almost guess this is in the works. But keep in mind that SkyDrive is just part of a suite of sharing applications that Microsoft Live puts out (it also includes a Photo Gallery, Blogs, EMail, Maps, Search, etc...). It's pretty obvious when you put the storage offerings from Google and Microsoft in perspective: they are simply ways to enhance their web application/desktop efforts! Microsoft Live + Xbox 360 can extend Microsoft into your living room; is Google looking for this extension? (Remember, Amazon is already in your living room with its movie service via Tivo.)

Perhaps a year from now, we will be looking at two different "APIs" for accessing your online storage utility:
- CIFS/NFS for use with normal File I/O libraries
- Application-aware interfaces (such as Amazon.com and our own StorageTek 5800 Storage System)

Where will Google land (if it lands)? Both? Data API and CIFS/NFS?

The next few years will prove interesting as the apps and storage come into your house via so many different access points (consoles, phones, PCs, DVRs, etc...).

Wow, somehow an apples-to-apples comparison of storage utilities (of course, by definition, the only "storage utility" here is Amazon's) turned into an online application blog post?????? What's with that?

Monday Jun 18, 2007

Emulex Joins OpenSolaris

I haven't seen an "official"-looking announcement yet, but Emulex has open sourced the Emulex Fibre Channel Device Driver and put a .tar.bz2 of the source in the downloads section of the project. The source was actually posted June 15th!

What does this mean for OpenSolaris? Well, it continues to gain momentum as a solid, open source storage operating system. What does it mean for one of the many, many OpenSolaris or Solaris users? Well, hopefully it means more transparency into the storage stack that you are running, if you need it. Open source developers get some first-rate driver code to look at and participate with.

The Emulex driver uses the Leadville stack within OpenSolaris to plug in its functionality. I have to tell you, though, I'm not a Solaris driver guru, so let's hope Dan or Scott get down to business and write a blog entry for us on how the Emulex code interacts with the Leadville stack!

Nice job Emulex! Keep up the good work.

Tuesday Jan 02, 2007

Solaris 10 U3 on Gateway MX6453

I bit the bullet this weekend and picked up a Gateway MX6453 laptop. I also immediately downloaded the Solaris 10 11/06 DVD and attempted an installation. It was (and still is) a bit rough going. I thought I would share a few things that are peculiar to this laptop that may help with other Gateways.

One thing to be aware of with this system: you will NOT have network functionality out of the box, and as of yet I do not have wireless working, so plan on being near a wired port for the time being.

Basic system information:
- CPU: AMD Turion 64 - worked without a hitch
- Wired Ethernet: Marvell Yukon 88E8038 PCI-E; get the driver from here, and be SURE to grab the 64-bit driver. The SysKonnect driver supports the Marvell Yukon (note that the Marvell site does NOT have this driver available on it).
- Wireless Ethernet: Gemtek WMIB-184GW (driven by a Broadcom driver in Windows), NOT WORKING
- Video: Don't know, worked without a hitch
- Audio: Not sure yet, not working either :-)
- Memory: 2GB (one of the reasons I bought it)
- Hard Drive: 160 GB (another reason I bought it)

Hard Drive Partitions
The Gateway comes with two primary partitions, one for system restores and one for the Windows Media Center edition. Do yourself a favor immediately: get Partition Magic before you go any further. Yes, I was able to install Solaris 10 without destroying my Windows partition, but no, I don't think I could have done it without Partition Magic.

Shrink the Windows partition down (you probably only need about 10GB to play; I gave Solaris 55GB since I want NetBeans and StarOffice 8, and I'll be doing stuff for work on it).

Here are the approximate sizes and relevant information for the partitions in about the order you would see them in partition magic:
- Pre-existing partition: ~8GB, Primary Partition, System Restore already configured from Gateway, leave unchanged
- Pre-existing partition: ~150GB, Primary Partition, Windows XP, shrink to about 85GB, this will remain Windows XP
- New Primary Partition: ~55GB, no type (I think this is "unassigned"), this will be used for our Solaris installation
- New Logical Partition: ~5GB, FAT32, this will be used to transfer files between Solaris and Windows (theoretically)
- New Logical Partition: ~5GB, no type (I think this is "unassigned")

Installation
The installation is very straightforward. The only problem I had was with the initial file system sizes. The defaults were 5GB for / and the remainder for /export/home. This was not good and caused all sorts of problems for me as I tried to install larger programs (like StarOffice 8). I caught it on my second installation. Tweak the file system sizes to be more balanced, remembering that if you don't explicitly set /opt and /tmp sizes, these directories will be put in the / file system.

To let you know how serious this is: I could not even grab StarOffice 8, decompress it, and add it. The Solaris Companion CD gave me all sorts of headaches as well. So, if you leave the default directories on the install, at least tweak the sizes so you have the balance in / and not /export/home.

Wired Network
The wired network worked well, but not without a lot of driver searching. For the Marvell chipset, get the driver from the SysKonnect web site. Now you have some work to do...I think I've captured it here, but it is a bit tricky (the commands are consolidated into a short sketch after the list):
- Uninstall the pre-existing SK98sol package (the two conflict and this one is old)
- Remove all entries referring to sk98sol from /etc/driver_aliases
- pkgadd the SysKonnect driver (pkgadd -d . SKGEsolx) from the directory where you decompressed and untarred the distribution
- During the pkgadd, DO NOT have it configure the interfaces; it won't find them anyway
- Once installed, edit the /etc/driver_aliases file and look toward the bottom; there should be a bunch of lines with skge in them. For the Marvell card in this Gateway, add the following line to the end of the file: skge "pciex11ab,4352"
-- NOTE: How I came up with this is kind of weird. I found Ryan Hornbeck's blog that showed the basics of determining the device manufacturer number from Windows, and I then used the command 'prtconf -Dv | grep 4352' to determine the rest of the string I needed.
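Scripted out, the whole dance looks roughly like this (my own consolidation of the steps above; run pkgadd from the directory you untarred the driver into):

# pkgrm SK98sol
# vi /etc/driver_aliases          (delete any lines mentioning sk98sol)
# pkgadd -d . SKGEsolx            (answer no when it offers to configure interfaces)
# echo 'skge "pciex11ab,4352"' >> /etc/driver_aliases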

There are many different things you could do from here to get the network up and running. I wanted to use my DHCP and have the same hostname all the time rather than have my router assign it. When the driver gets loaded, the ethernet card device will be skge0, so:
- create a file named /etc/hostname.skge0 with a single line in it: inet yourhostname (where yourhostname is the name you want for your host all the time)
- create a file named /etc/dhcp.skge0 (you can use "touch /etc/dhcp.skge0")
- now, you can do the rest of the work with ifconfig and a few other commands...I just rebooted (remember, I came from Windows); the exact commands are sketched below.
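Concretely, for a host you want to call mylaptop (a made-up name; substitute your own):

# echo "inet mylaptop" > /etc/hostname.skge0
# touch /etc/dhcp.skge0
# reboot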

You SHOULD be attached to the network. To check, run "ifconfig -a"; you should see skge0 with a valid address assigned by your DHCP server.

COOOL. If I blew some of the instructions above, drop me a note and I'll try to correct them...but those are the basics.

Wireless Network
Forget about it. The Broadcom driver (which appears to drive an on-board Gemtek 802.11b/g Model WMIB-184GW card) does not appear to be supported; I couldn't find a Solaris driver anywhere. Further, the laptop community on OpenSolaris and the Solaris HCL don't give me much hope for this card natively at the moment.

The ndis community is working to get Windows drivers working with Solaris to help expedite support for all of the various wireless chipsets out there but, to be honest, you will have to do a bit of work to get your native chipset working and the ROI isn't worth it to me.

My current plan is to go back to Circuit City and purchase a PCMCIA wireless card that is on the support list; I think this list will provide me with enough information to get started.

StarOffice
Obviously you want StarOffice 8 and not 7. First remove StarOffice 7:
- I forgot the package names; simply do "pkginfo | grep Office", which should return 2 packages. Remove the gnome integration one first, then the main office package.
- You should do this BEFORE you access StarOffice for the first time.

Then grab StarOffice 8 from Sun Microsystems, remember to get the x86 packages and follow the readme to install it.

Audio
Not working yet, not a priority for me either, this is why Jobs made iPods.

Grub Menu
When you boot, you will see the following entries on the Grub menu:
- Solaris 10
- Solaris 10 Safe Mode (or something)
- Windows
- Windows

Be VEWY VEWY careful: the first Windows option is the one you want. The second one takes you to a system restore...if you go there, just shut it off ASAP. You can rename these entries when you are in Solaris by editing the file /boot/grub/menu.lst.
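The entries in menu.lst are plain text; a hypothetical rename of the good Windows entry might look like this (the device numbers come from my partition layout above and may differ on your machine):

title Windows XP (the real one)
rootnoverify (hd0,1)
chainloader +1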

Companion CD
My first attempt at this failed miserably because of a lack of disk space (recall that I didn't set the file system sizes properly). At the bottom of the README on the DVD there are instructions for installing all of the packages. If you don't want to be finicky and you want to explore at your leisure, go for it. Noticeable things on the companion DVD include all sorts of GNU utilities, PHP, Ruby, and more.

Things left to do
- Audio isn't working
- Try out my share between DOS and Solaris partition
- Try out ZFS :-)

Total Time Spent: More than you want to know ;-) (probably about 16-20 hours thus far, which is the reason I thought I'd post as much of this as I could...there are plenty of other how-to's out there but heck, if you have a Gateway you may as well have some more info).

Finally, thanks to my friend for helping me through some of the sticky networking stuff. Hopefully she can catch any bugs in the above info :-)

Tuesday Dec 26, 2006

Ho Ho Wi-Fi-enabled Protocol-out-of-control Christmas

Christmas is over and all of the new tech is hooked up, and now for a blog full of constructive criticisms. We were lucky this year: Santa brought us a Wii and a Roku Soundbridge and, of course, a variety of iPods.

The insane part of all of this: none of it works together. My friend and I have spent several hours downloading hacks, configuring, jiggering, connecting, reconnecting, and reconfiguring to get things into the proper state. Now I'm living in an odd house of cards (shareware, open source, beta-level applications that profess to be 1.0, and more).

Let's start with the Roku Soundbridge. It's a Wi-Fi-enabled cylinder about 9 inches long and 3 inches in diameter. It looks for a server on the network, connects, and gets playlists from the server. It professes to work with the Apple iTunes server as well as Windows Media Connect. Getting it onto my password-protected Wi-Fi was not a problem; I had it there in about 10 minutes. Getting the playlists off of the servers in my house is a completely different odyssey. It doesn't work with shared CIFS drives from my Windows XP box (which is how I have all of my machines hooked up). The instructions are now dated, of course, so you are forced to hunt around. Interestingly, Windows Media Connect only exists bundled with Windows Media Player 11 (an obvious sign of trying to simplify things for Xbox users that are sharing between Windows and Xbox...something Roku should start to worry about...more on that later). So, I updated to WMP11...and it still didn't work. Once you read the forums, you find out it doesn't work for lots of people and they backed off to Windows Media Player 10. Oh, and by the way, it doesn't work with Apple iTunes 7 either (which is where I am with all of the iPods lying around). I ended up downloading the third-party open source Firefly server and, oh, don't forget to download the other thing from Apple or you can't install Firefly.

Music is now flowing to the Roku Soundbridge, unfortunately, I have to add an 802.11g network extender to my post-Santa-Christmas-List because it is rebuffering 3 times per song...very annoying when you are rocking out to AC/DC. FOR THOSE ABOUT TO RO.....(rebuffering).......(30%)........(32%)........CK WE SAL...(rebuffering).....(13%)...... (you get the idea).

Lessons for Paul:
- Wi-fi 802.11b/g is ubiquitous, excellent stuff (incidentally, the Wii connected right up to my network as well).
- These third party suppliers of network equipment are completely at the mercy of the music servers...I understand you probably don't have enough memory/OS functions for a CIFS client but you need to work faster to support the latest...and your online help stinks

Onto the Video iPod. Very cool. But, of course, I want to move my Tivo recordings to my iPod. I already have the Tivo Desktop on the PCs in my house, and I had to purchase some sort of $25 portable device support from Tivo (I would give you the link but I have no clue...my friend took care of this for me and just asked me for my credit card...let's hope she didn't buy that new system she wanted that I didn't buy her for Christmas). She spent about an hour and a half configuring it (Tivo...the concept of the Tivo Desktop is there...but, to be frank, it stinks...it locks up constantly on one of my computers and its usability is terrible). We transferred a bunch of shows over to it. These then get converted into MPEG-4 that I can download to the Video iPod. Now, let's get something straight here: it is over 32 hours later and 4 hours of video are ready to download to the iPod. The machine is constantly working and there is a backlog of stuff to convert, but just so you understand, for over 36 hours of waiting and "transferring" and "converting", I have 4 hours of video. You've got to be kidding me. And the kicker: I have NO CLUE what it is doing...I just get a weird % converted if I hover over the Tivo icon in the bottom right corner of my screen. WHAT is 42% converted? How much time is left? Why are you trying to convert all of the shows at once? (I'm guessing this is what you are doing; I have no clue since there is no apparent log or way to find out what you are doing...let's pray my computer doesn't crash....)

Lessons for Paul:
- The functionality is there, perhaps I need to buy time on the Sun Grid Compute Utility to get my Tivo content converted faster?
- The Tivo Desktop continues to be a mystery to me, how can such a wonderful device maker build such a terrible application?
- I don't have much nice to say about iTunes either. Hello, Apple...Windows Media Player is kicking your butt on the media player front, and Real also beats you.

Oh yeah, The Fray CD is EXCELLENT if you haven't heard it.

And, finally, the Wii. A lot has been said about it, but that's by people that haven't spent days on end with it. The magic does start to go away after a few days. Once again, it connected nicely to my Wi-Fi network, but it takes HOURS to connect to the stores online, and one of the channels is still not available after over a month. Just so you realize that I wanted this thing, here is an image of my Toys 'R Us number; I was 21st in line...

I did NOT stay there all morning (or night); I just showed up 2 hours before they opened and made it in line.

I have four controllers for it, and 4 people playing tennis gets me VERY VERY nervous...I don't have a mansion, and all of those arms swinging around makes me think we are just waiting for a black eye to happen.

But here's what surprised me, the controller is actually FRUSTRATING in several games, especially for my 6 year old. Let me give you an example. In Spongebob Squarepants, you have to rotate the controller vertically in a circle to get the winch (yes, I said winch) to go up.
A) My son can read great, but he doesn't know what vertical is, so I have to be helping him
B) He doesn't really have the dexterity yet to do many of the more complicated controller waves and rotations, especially this often used winch raising move

In another game, Rayman Raving Rabbids, every mini-game has different controller moves, so he has to have the patience to read on the way into every segment. In one mini-game, you have to move the Wiimote and the Nunchuk up and down to run; in another, you have to point the Wiimote at the outhouse door and shake the Nunchuk to slam them on the bunnies that are...well...doing their business (this mini-game cracks me up). The two-handed moves are often particularly odd, sort of like chewing gum and rubbing your head at the same time for him (and for me at the start).

There is NO DOUBT that Nintendo has revolutionized game playing. For older folks it is a riot and, over time, your gestures tone down as you adapt to the controller. My younger son loves Wii Sports, especially baseball and boxing me (I let him win, of course), but the bigger games are so far frustrating and we limit our time. The one exception so far has been Monkeyball. After a bit of frustration, he has come to be much better at tilting the Wiimote to make the Monkeyball roll through the courses. Many of the mini "Party Games" are frustrating to him, but I am hoping that with a few more weeks of effort we can get him happy.

Now, here's a major beef. The photo channel on the Wii is yet another photo repository. Listen, I don't need another photo repository; I have 8 of them already. Team up with someone. Oh yeah, and take a cue from Tivo...work with Yahoo on network identity and just pull in the weather, road conditions, and other things from Yahoo. This is so great on Tivo that I don't have to maintain yet another network identity and personal settings...but last night I found myself configuring what weather I wanted to watch. Very frustrating.

So, on Wii:
- Love the remote, but settle down for the younger kids...technology for technology's sake is one thing, but your target market can't handle some of the moves and are getting frustrated
- The channels expose a growing problem...proliferation of identities and content repositories. Seriously, I have to maintain several identities just to make my HOUSEHOLD function at this point. Photo upload sites for the PC, photo upload sites for the Nintendo, Tivo (you use Yahoo...thank you), Apple iTunes...aghhh, make it stop!

The proliferation of folks wanting to own my living room is getting to the annoying point and if there isn't some consolidation, next Christmas is going to be a bear. I don't know if you recall, but many years ago there was a law over Remote Controls basically forcing them into the same wavelengths and some standards on operation. If these companies don't get it together, there is going to be a living room revolt (if you can get people off their couches with all of the new video game systems). The worst part is that a Monopoly may be able to step in if these smaller companies don't get it together and "simplify" people's lives through homogeneity...of course, the monopoly will be at the expense of true innovation but at least it will all work! Apple, Roku, Nintendo, Tivo, Sony (if you can get off your high-horse and stop thinking the world revolves around your proprietary technologies), get it together or Microsoft is going to have a very merry Christmas either next year or the one after...and you won't be invited...unless you want to pay for a Windows Living Room Network Certified label and testing.

Gotta run!

Wednesday Aug 23, 2006

Web 2.0 Hive Mind Not So Smart?

I originally wanted to title this: I'm more meta than Lanier. If you haven't read "Digital Maoism: The Hazards of the New Online Collectivism", you need to.


Tuesday Aug 15, 2006

BitTorrent: Welcome to the Storage Grid, Love it/Hate it

I've been studying and toying with BitTorrent for well over a year now. I finally took the jump, installed it, and downloaded some .isos for Linux and some OpenSolaris files. In short, BitTorrent is a glimpse of the Storage Grid.


Thursday Jul 13, 2006

Storage on the Web...Generosity or Savvy Business Play?

To home users, it seems like storage is getting cheaper and cheaper. In fact, I just purchased a 160GB drive for $80; that's amazing...a buck for 2GB. I'm running with 400GB of storage on my computers. I have most of my personal CD collection ripped, the videos that I take with my Sony DVD Camcorder, my digital pictures, and more. If you're like me, you start thinking, "Hey, maybe I should share some of my storage on a huge storage grid where people could get temporary storage to put stuff and they could pay me".

Well, that's a whole different tangent. With how cheap storage is to an end user, you can easily be deluded into thinking that all of the photo sites, blog sites, portals, backup sites, and more have GBs and TBs to burn. Here's a little secret: maintaining a high-availability site with storage that serves thousands of users is not trivial. Not only is there primary storage to worry about, but there is backup storage, and there is managing the storage to ensure your paying customers have the best quality of service and all of the storage they need for their quotas. Sharing storage with 1000s, 10s of 1000s, and 100s of 1000s of users is quite the expensive undertaking.

So, if storage is an expensive undertaking (and it is...), why are companies giving it away? I must have 1-2GB of storage at Kodak alone.

Capture! How many of you are loath to switch ISPs because you'll lose your email address and have to send out the evil "I've switched ISPs, blah blah blah" notes? I know, one solution is to get an online account, like Gmail or Yahoo, but...really...you are a bit attached to your ISP, aren't you?

Once I have all of my personal training logs at Motion Based, am I really going to uproot them and move to a different service, or will I just purchase a Motion Based subscription?
And what about original content? How do I get all of my content off a site in one stroke of a button and move it to another site? It really is a pain, even for the computer savvy.

And who owns all of that information I put on company servers anyway? Let's say, hypothetically, that Flickr gets bought by Yahoo! (well...maybe it did already ;-) I trusted Flickr with my pictures, but did I trust Yahoo!? It's rhetorical; I didn't have much of a choice. But NOW who owns my pictures? What about this post? If it's super funny (not likely) and someone wants to pay for the rights to use it (less likely), who do they pay, Sun or me...or both? Have you REALLY read those copyright notifications and ownership notifications for all of your online services?

So, I have original content posted all over the web that is not even on my own personal computer anymore. It is difficult to move that original content, sometimes impossible.

Here's a tip for all of you neighborhood computer gurus that make a few bucks on the side...start getting to know the online services. When Microsoft rolls out its built-in virus detection and eventually gets it right, you should have a smooth transition to freeing people's original content from one site and backing it up for them or moving it to another site. It won't be long before Web 2.0 turns around and bites a user or two, and it's best to be prepared, 'cause guess who's getting the call from your Mom or Dad...you are, oh guru of the keyboard, master of the Internet, freer of the content.

Oh, and to answer the original question...generosity or savvy business play? Obviously, the answer is savvy business play; it's all about capture.

And one more thing...Web 2.0, I'll say it again...it's the storage that counts. Community = Storage. Share = Storage. Participate = Storage. Storage begets more storage (online vs. nearline vs. archive vs. backup) and more sharing begets more storage.

Thursday May 11, 2006

We're not trolling your personal life....really...it's just metadata

With so much focus on protecting information about yourself, have you considered what people would know about you if they only had access to information about your information (aka metadata)? Today's news may have provided us with, at least, a start to this extended privacy debate.

Friday Oct 21, 2005

Zoom in, see the pink flamingo? That's my house...

I was using Google Maps the other day and was struck by how profoundly different even the simple things in life are today from when I was a kid (and I'm really not THAT old). Obviously, sending directions to people with MapQuest and now Google Maps is the norm for a lot of us in technology. But yesterday was different than most. I sent the map and directions using the email link, but I sent a follow-up comment to my friend saying "Switch to the Satellite View, zoom in, and you'll see my house; it's the second one from the end on the left side of the street with the pink flamingo in the front". Well, maybe the pink flamingo is an embellishment...but then again, maybe it's not. 10 years ago I was drawing a house with a pink flamingo; now my friends can see it. You have to figure I confused a LOT of my friends with the drawings too...I'm a software engineer, not an artist. Oh yeah, the downside of all of this technology comes when my home association uses automated Google-satellite scanning to detect my pink flamingo and fine me without even having to drive by in their vans with the funny magnetic signs.