CHI 2004 trip report

Yes, I know it's been nearly two months, but I've finally been prompted by a meeting next week to do a short write-up of the sessions I attended at CHI 2004 in April.

It all started with, amongst other things, the presentation of a Lifetime Service award to our very own Robin Jeffries. Followed by...

Opening Plenary

Jun Rekimoto unfortunately isn't the most fluent of English speakers (and why would he be?), so this session was harder work than I was hoping for first thing in the morning :) Amongst other things, he talked about his work on Pick-and-Drop, augmented surfaces, and other collaborative environments-- including a classroom with a shared display at the front on which students could rate the quality of the lecture as it progressed. Sadly, some of the more interesting questions from the panel and audience were somewhat lost in translation... which is a good trick if you can get away with it :)

Games: What's my Method?

Chose this one in the hope that it would ease me in gently. Cornily staged as a game show in its own right, this session looked at whether conventional usability techniques (ethnography, contextual enquiry, biometric measurement etc.) could apply equally to games. The conclusion, following some entertaining video clips from gaming usability studies, was an unsurprising "mostly yes".

Video Visions of the Future: a Critical Review

A panel chaired by our very own Eric Bergman, looking at 'vision videos' from the past (such as Sun's Starfire and Apple's Knowledge Navigator), and asking why many of the features predicted for the present day never quite happened the way we thought they would. I don't recall many concrete answers (caveat: this was two months ago and I wasn't taking notes), but it was certainly fun seeing all those videos again...

Mark my Memories

A short paper session, the most memorable of which was probably Stu Card et al.'s 3Book: A Scalable 3D Virtual Book. Essentially a 3D representation of any scanned-in book, one of its more interesting features is its ability to create degree-of-interest indices based on its contents. I left unconvinced that the 3D aspect (accurately rendered right down to the animation of turning pages) was anything more than a gimmick, though.

Also presented was Photo Annotation on a Camera Phone. Making use of a phone's camera and internet connection to do something useful seemed to be a common theme this year, but speaking as somebody who wishes that the only thing you could do with cellphones is dial 911 or a breakdown service, I'm afraid it didn't really grab me.

Designing the Humane Interface

Amidst glossy tales of Disney.com redesigns and Carlson Marketing corporate reward scheme templates, the paper that stood out for me here was from the Universities of Dundee and St. Andrews: a carer-driven system for "reminiscence therapy" with dementia patients, which lets carers stimulate conversation by calling up multimedia clips, typically of sights or sounds from the patients' childhood (rather than of friends and family, which apparently tend to have a negative effect). This seemingly simple approach-- reported here by the BBC-- appears to be remarkably successful.

Can You See me Now?

Three papers presented here. The first two-- one on mouse and touchscreen selection in the upper and lower visual fields, and one on how varying icon spacing changes users' visual search strategy-- left me mostly none the wiser afterwards.

The paper I was most interested in was a comparison of the effects of quantisation vs. frame rate for streamed video (of soccer match highlights, in this case). Its unexpected conclusion was that, contrary to most service providers' Quality of Service policies, users actually prefer high image quality to high frame rate (for fast-moving sports, at least). And also that, perhaps because of the comparative novelty, people seem willing to pay up to $10 a month for surprisingly poor-quality video.

Finding your Way

From these papers, I was most interested in IBM's reMail talk, given our involvement with Evolution and Glow. Was disappointed, though... pretty much everything of any use is already available in Evolution. And slightly bafflingly, one of the features most popular with its users was apparently its Thread Arcs visualisation, which allows you to see a selected email in the context of its response hierarchy. Er... tree view, anyone?

Another camera-phone paper here too: the idea of this one is that you could retrieve information about a particular building or other tourist attraction by taking a picture of it. The picture is sent back to a server that compares it with other textually-annotated pictures that people have taken from the same location. If the photo is determined to be of something that somebody else has already photographed, a web search is carried out on the annotation, thus returning to your phone within a few seconds all sorts of information about whatever you've just photographed.
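
The pipeline is straightforward enough to be worth sketching. This is just how I picture the server side working-- the data structures and the toy similarity measure below are my own, purely for illustration, and not taken from the paper:

    # A rough sketch of the server-side lookup as I picture it; the structures
    # and the toy similarity measure are illustrative, not from the paper.

    from dataclasses import dataclass

    @dataclass
    class AnnotatedPhoto:
        location: str           # e.g. the cell ID or GPS fix sent with the query
        features: frozenset     # stand-in for a real image fingerprint
        annotation: str         # text attached to a previously-taken photo

    def lookup(query_features, query_location, archive, threshold=0.5):
        """Find the best match among photos taken from the same location and
        return its annotation (which the server would then web-search)."""
        best, best_score = None, 0.0
        for photo in archive:
            if photo.location != query_location:
                continue
            # Toy similarity: overlap between feature sets (Jaccard index).
            union = len(query_features | photo.features)
            score = len(query_features & photo.features) / union if union else 0.0
            if score > best_score:
                best, best_score = photo, score
        return best.annotation if best and best_score >= threshold else None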

Games

My personal highlight of the conference: a presentation by the guys who invented the ESP Game. Inspired by the huge fascination with sites like Hot or Not, the team from Carnegie Mellon hit upon the ingenious idea of labelling every image on the web-- a massive, non-automatable problem-- by having you and me do it for them, in the form of a game.

It's easier just to visit the site and play it (it doesn't seem to work behind Sun's firewall, though), rather than read an explanation. But basically it's a kind of web-based, Pictionary-style affair in which two randomly-selected human partners, unknown to each other and unable to communicate, have to agree on a single word that describes a randomly-selected image. Words that have been agreed on for that image by previous players are also "taboo". Consequently, the agreed-upon words are almost guaranteed to be descriptive of the image (and there are some built-in safeguards to filter out those that aren't), as there's no way to influence what your partner will type.
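
Out of curiosity, here's a minimal sketch of how the agreement logic might work-- the function and its details are my own guesses for illustration, not anything from the CMU implementation:

    # A toy sketch of the agreement logic as I understand it -- names and
    # details are my own guesses, not taken from the CMU implementation.

    def play_round(guesses_a, guesses_b, taboo_words):
        """Return the first word both players type that isn't already taboo
        for this image, or None if they never agree."""
        seen_a, seen_b = set(), set()
        # Interleave the two players' guesses in the order they arrive.
        for word_a, word_b in zip(guesses_a, guesses_b):
            for word, mine, theirs in ((word_a, seen_a, seen_b),
                                       (word_b, seen_b, seen_a)):
                if word in taboo_words:
                    continue                 # taboo words can't score
                mine.add(word)
                if word in theirs:           # both players have now typed it
                    return word
        return None

    # "dog" is already taboo for this image, so the players have to agree
    # on something new -- here, "puppy".
    print(play_round(["dog", "puppy", "brown"],
                     ["dog", "animal", "puppy"],
                     taboo_words={"dog"}))   # -> puppy

The real game layers scoring, time limits and the anti-abuse safeguards mentioned above on top of this, naturally.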

At the time of the presentation, about 6-10 descriptive words had been recorded for around 4 million images from google.com (although the labels are not yet, as far as I know, used by google or any other search engine). Right now there's only an English version of the ESP Game, but the concept is of course pretty much applicable to any language. (And apparently, even the English version is very popular with Japanese students trying to learn English...)

HCI Overviews

A turnout of about 80 people for our talk on open source usability, which wasn't bad for first thing in the morning after the CHI Reception. And no tricky questions either :)

Sightseeing

A few photos from Vienna here...

Closing Plenary

By Tim Brown, CEO of IDEO, on designing "technology experiences" and "experiences enabled by technology". Examples included a design study for Prada, a huge interactive display for Vodafone headquarters that can be controlled via your mobile phone, and an art exhibit featuring chairs that projected an image of your clothing onto the back of the chair when you sat in them-- the electronic equivalent of hanging your jacket on the back. Tim also discussed the differences between "top down" design, which typically results in a scripted, controlled experience (e.g. Disneyworld), and the rarer "bottom up" design, which gives a more organic, continually-evolving experience (his examples included eBay and NTT DoCoMo).

Everything wrapped up with a cringeworthy musical number to publicise next year's conference, and a video about Portland, Oregon that I'm guessing still hasn't quite finished yet.
