Monday Mar 03, 2008

Computational Photography and Storage

There is a great article on CNet's news.com about computational photography, "Photo industry braces for another revolution". It is basically about Photography 2.0. The first wave of digital photography seeks to reproduce film-based photography as faithfully as it can. Photography 2.0 advances the hardware while using the camera's increased processing power to exploit that new hardware, replace hardware functionality with software, or bring image detection and manipulation capabilities that are not possible in hardware alone.

There are a few developments worthy of note, and all of them involve bringing more CPU capabilities into the camera:


  • Panoramic photography - I enjoy these types of scenes (one shown below), though I don't think they are the future of photography at all
  • Depth of field and 3-D photography - There is an excellent example of this in the CNet article. Personally, I find depth of field one of the most difficult techniques to master, since with current lenses it is a trade-off across several variables at once (decreasing the aperture size brings more depth into focus but lengthens the exposure, and so on); the sketch after this list puts some numbers on it
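
To make that trade-off concrete, here is a back-of-the-envelope depth of field calculation using the standard hyperfocal-distance formulas. The specific numbers (a 50mm lens at f/2.8, a 0.03mm circle of confusion, a subject at 3 meters) are illustrative assumptions of mine, not anything from the CNet article:

    // Depth of field from the usual hyperfocal-distance approximation.
    // All of the input values below are illustrative assumptions.
    public class DepthOfField {
        public static void main(String[] args) {
            double focalLength = 50.0;       // mm
            double aperture = 2.8;           // f-number N
            double circleOfConfusion = 0.03; // mm, a common full-frame value
            double subjectDistance = 3000.0; // mm (3 meters)

            // Hyperfocal distance: H = f^2 / (N * c) + f
            double h = (focalLength * focalLength) / (aperture * circleOfConfusion) + focalLength;
            // Near and far limits of acceptable sharpness around the subject
            double near = h * subjectDistance / (h + (subjectDistance - focalLength));
            double far = h * subjectDistance / (h - (subjectDistance - focalLength));

            System.out.printf("In focus from %.2f m to %.2f m (%.2f m total)%n",
                    near / 1000, far / 1000, (far - near) / 1000);
            // Stopping down to f/8 widens this range considerably but forces a
            // longer exposure, which is exactly the juggling act described above.
        }
    }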

There are many other ideas in the article...detecting smiles (and, by extension, closed or open eyes), better light detection, self-correcting image stabilization (done today with high-priced hardware in Image Stabilized lenses), etc... Clearly a Photography 2.0 revolution is in the works.

Photography 2.0 is really the same trend we see in the storage business...Storage 2.0. There are simple changes in the industry, like the incredible increase in CPU power driving software RAID back into storage stacks. A huge benefit of software RAID is the decrease in hardware costs it drives. This is very similar to the Photography 2.0 concept of moving image stabilization out of the hardware (the lenses) and into the software.

Storage 2.0 also brings us projects like this one: Project Royal Jelly. Project Royal Jelly encompasses two important pieces: the implementation of a standard access model for fixed-content information, and the insertion of execution code between the storage API and the spinning rust. The ability to "extend" a storage appliance (or device) via a standard API will let us leverage the proliferation of these inexpensive and high-powered CPUs. A common use case for an execution environment embedded in a storage device would be an image or video repository. Every image submitted goes through a series of conversions: different image formats, different image sizes (thumbnail, small, medium, large), and often a series of color adjustments. Documents go through similar transformations: a PDF may have other formats created from it (HTML primarily), the document will be indexed, larger chunks will be extracted into a variety of metadata databases for quick views, etc...
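
The code below is only a rough sketch of what such an embedded execution environment might look like from a developer's point of view: a hook the storage runtime calls when a new object lands on disk, which writes derived renditions (here, JPEG renditions at a few widths) back next to the original. The onStore hook and the ObjectStore interface are invented for illustration; they are not the actual Project Royal Jelly API:

    // Hypothetical sketch of storage-side transformation code, assuming a
    // Royal-Jelly-style runtime that invokes registered hooks on ingest.
    import java.awt.Image;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ThumbnailHook {

        // Renditions the repository keeps alongside the original.
        private static final int[] WIDTHS = { 128, 512, 1024 }; // thumbnail, small, medium

        // Called by the (hypothetical) storage runtime when an object is stored.
        public void onStore(String objectId, byte[] original, ObjectStore store) throws IOException {
            BufferedImage src = ImageIO.read(new ByteArrayInputStream(original));
            if (src == null) {
                return; // not an image we understand; leave the object alone
            }
            for (int width : WIDTHS) {
                int height = src.getHeight() * width / src.getWidth();
                BufferedImage scaled = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                scaled.getGraphics().drawImage(
                        src.getScaledInstance(width, height, Image.SCALE_SMOOTH), 0, 0, null);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ImageIO.write(scaled, "jpg", out);
                // Store the derived rendition next to the original, tagged by size.
                store.put(objectId + "-" + width + "w.jpg", out.toByteArray());
            }
        }

        // Minimal stand-in for whatever object API the appliance would expose.
        public interface ObjectStore {
            void put(String objectId, byte[] data) throws IOException;
        }
    }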

These transformations can arguably be the responsibility of the storage platform rather than the application, especially when they can be considered part of an archiving workflow. While indexing and manipulation could be considered a higher tier of function, storage tiering and storage utilities could also benefit from a standard storage execution platform. Vendors could easily insert logic onto storage platforms to "move" data and evolve a storage platform in place, rather than authoring applications that have to operate outside of the platform; a small sketch of that idea follows.
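
Here is the same idea applied to tiering, again only as a hypothetical sketch: an age-based policy the storage platform itself could evaluate periodically to decide when an object should "move" to a cheaper tier. The tier names and thresholds are made up for illustration:

    // Sketch of tiering logic living on the storage platform rather than in an
    // external application. Tiers and thresholds are invented for illustration.
    import java.time.Duration;
    import java.time.Instant;

    public class AgeBasedTieringPolicy {

        public enum Tier { FAST_DISK, CAPACITY_DISK, ARCHIVE }

        private static final Duration DEMOTE_AFTER = Duration.ofDays(90);
        private static final Duration ARCHIVE_AFTER = Duration.ofDays(365);

        // The storage runtime would call this periodically for each stored object.
        public Tier placementFor(Instant lastAccess, Instant now) {
            Duration idle = Duration.between(lastAccess, now);
            if (idle.compareTo(ARCHIVE_AFTER) >= 0) {
                return Tier.ARCHIVE;
            } else if (idle.compareTo(DEMOTE_AFTER) >= 0) {
                return Tier.CAPACITY_DISK;
            }
            return Tier.FAST_DISK;
        }
    }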

Just some Monday morning musings...have a great week.
