By Calum on Nov 11, 2004
On the vagaries of installing a new Windows-based touchscreen interface in San Jose police cars...
An interesting article over at The Register highlighting how tabbed browsers can increase the risk of phishing. There are a couple of concrete recommendations that I guess Epiphany should take on board (if they haven't already-- Firefox and Konqueror are already on the case):
Of course, if everyone followed the HIG and didn't use tabbed MDI interfaces, we wouldn't have the problem, right? :o) (Kidding!)
It's not entirely fair to say that there's no UI design or testing on mobile phones... the company I worked for ten years ago did a lot of work with Motorola in that area; I was round the Nokia Usability labs about five years ago (they even wrote a book about their methods, "Mobile Usability: How Nokia Changed the Face of the Phone"); and here at Sun, we publish UI guidelines for mobile devices running Java (v1.0, v2.0). So people have been worrying about this sort of thing since before mobile phones became a mass-market consumer product.
One problem, other than the never-ending feature vs. miniaturisation competition between manufacturers, is that it's actually quite difficult to usability test a mobile phone in the conventional sense... you can't strap a camera to somebody's head and follow them around all day, and expect them to behave naturally. So often you end up having to test in some sort of lab-- which is artificial enough for a desktop product, but at least you would normally use a desktop product sitting at a computer in a room somewhere.
Personally I think the more difficult they are to use the better anyway-- if I weren't so mild-mannered and likeable, most people who used one within fifty yards of me would be good candidates for a slap :) (And apparently I'm not the only one, according to Andrew Monk's study.)
mozilla-accessibility list has exploded.
We posted a proposed keyboard navigation spec for Mozilla there towards the end of last week, and there have been well over 200 replies so far. Mostly from visually impaired users and assistive technology developers (i.e. people who know what they're talking about), and with disappointingly-few "me toos" among them, so we're actually having to read them all :)
Tragically the archives don't seem to be working, so you'll just have to take my word for it...
D'oh... the missing email finally turned up-- Evolution had somehow managed to sneak it into my Local Folders Inbox, which I never look at.
Apologies to Steven Garrity, who probably thinks me very rude for not replying. By way of some recompense, here's the article he wrote that he was trying to tell me about: The Rise of Interface Elegance in Open Source Software.
Spent an hour or so chatting to Dirk Ruiz yesterday, who's recently been put in charge of usability for Sun's desktop projects. Sounds like we're putting together a really good full-time team at last to give JDS and Looking Glass the works, rather than giving the work to whoever's available that week (which was usually only me, for the GNOME stuff at least).
Yes, I know it's been nearly two months, but I've finally been prompted by a meeting next week to do a short write-up of the sessions I attended at CHI 2004 in April.
It all started with, amongst other things, the presentation of a Lifetime Service award to our very own Robin Jeffries. Followed by...
Jun Rekimoto unfortunately isn't the most fluent of English speakers (and why would he be), so this session was harder work than I was hoping for first thing in the morning :) Amongst other things, he talked about his work on pick and drop, augmented surfaces, and other collaborative environments-- including a classroom with a shared display at the front of the class on which students were allowed to rate the quality of the lecture as it progressed. Unfortunately some of the more interesting questions from the panel and audience were somewhat lost in the translation... which is a good trick if you can get away with it :)
Chose this one in the hope that it would ease me in gently. Cornily-staged as a game show in its own right, this session looked at whether conventional usability techniques (ethnography, contextual enquiry, biometric measurement etc.) could apply equally to games. The conclusion, following some entertaining video clips from gaming usability studies, was an unsurprising "mostly yes".
A panel chaired by our very own Eric Bergman, looking at 'vision videos' from the past (such as Sun's Starfire and Apple's Knowledge Navigator), and asking why many of the features predicted for the present day never quite happened the way we thought they would. I don't recall many concrete answers (caveat: this was two months ago and I wasn't taking notes), but it was certainly fun seeing all those videos again...
A short paper session, the most memorable of which was probably Stu Card et al.'s 3Book: A Scalable 3D Virtual Book. Essentially a 3D representation of any scanned-in book, one of its more interesting features is its ability to create degree-of-interest indices based on its contents. I left unconvinced that the 3D aspect (accurately rendered down to the animation of turning pages) was anything more than a gimmick, though.
Also presented was Photo Annotation on a Camera Phone. Making use of a phone's camera and internet connection to do something useful seemed to be a common theme this year, but speaking as somebody who wishes that the only thing you could do with cellphones is dial 911 or a breakdown service, I'm afraid it didn't really grab me.
Amidst glossy tales of Disney.com redesigns and Carlson Marketing corporate reward scheme templates, the paper that stood out for me here was from the Universities of Dundee and St. Andrews: a carer-driven system for "reminiscence therapy" with dementia patients, with which they can stimulate conversation by accessing multimedia clips, typically of sights or sounds from the patients' childhood (rather than of friends and family, which apparently tend to have a negative effect). This seemingly simple approach-- reported here by the BBC-- seems to be remarkably successful.
Three papers presented here, two of which-- one on mouse and touchscreen selection in the upper and lower visual fields, and one on how varying icon spacing changes users' visual search strategy-- left me mostly none the wiser afterwards.
The paper I was most interested in was a comparison of the effects of quantisation vs. frame rate for streamed video (of soccer match highlights, in this case). Its unexpected conclusion was that contrary to most service providers' Quality of Service policies, users actually prefer high quality to high frame rate (for fast moving sports, at least). And also that, perhaps because of the comparative novelty, people seem willing to pay up to $10 a month for surprisingly-poor quality video.
From these papers, I was most interested in IBM's reMail talk, given our involvement with Evolution and Glow. Was disappointed, though... pretty much everything of any use is already available in Evolution. And slightly bafflingly, one of the features most popular with its users was apparently its Thread Arcs visualisation, which allows you to see a selected email in the context of its response hierarchy. Er... tree view, anyone?
Another camera-phone paper here too: the idea of this one is that you could retrieve information about a particular building or other tourist attraction by taking a picture of it. The picture is sent back to a server that compares it with other textually-annotated pictures that people have taken from the same location. If the photo is determined to be of something that somebody else has already photographed, a web search is carried out on the annotation, thus returning to your phone within a few seconds all sorts of information about whatever you've just photographed.
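The lookup flow described above can be sketched roughly as follows. This is purely my illustration of the idea, not the paper's implementation-- all the names are hypothetical, and real image matching on the server would of course be far more sophisticated than the trivial comparison here:

```python
def images_similar(a, b):
    """Stand-in for real image matching; here a trivial equality check."""
    return a == b

def lookup_photo(photo, location, database, web_search):
    """Compare a photo against annotated photos taken at the same location;
    if one matches, run a web search on its annotation."""
    for entry in database:
        if entry["location"] == location and images_similar(photo, entry["image"]):
            return web_search(entry["annotation"])
    return None  # nobody has photographed and annotated this yet

# Example: someone has already photographed and annotated this landmark.
db = [{"location": "vienna-01", "image": b"stephansdom.jpg",
       "annotation": "Stephansdom Vienna"}]
result = lookup_photo(b"stephansdom.jpg", "vienna-01", db,
                      lambda query: f"search results for '{query}'")
print(result)  # search results for 'Stephansdom Vienna'
```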
My personal highlight of the conference: a presentation by the guys who invented the ESP Game. Inspired by the huge fascination with sites like Hot or Not, the team from Carnegie Mellon hit upon the ingenious idea of labelling every image on the web-- a massive, non-automatable problem-- by having you and me do it for them, in the form of a game.
It's easier just to visit the site and play it (it doesn't seem to work behind Sun's firewall, though), rather than read an explanation. But basically it's a kind of web-based Pictionary-style affair in which two randomly-selected human opponents, unknown to each other and unable to communicate, have to agree on a single word that describes a randomly-selected image. Words that have been agreed on for that image by previous players are also "taboo". Consequently, the agreed-upon words are almost guaranteed to be descriptive of the image (and there are some built-in safeguards to filter out those that aren't), as there's no way to influence what your partner will type.
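The agreement mechanic boils down to something like this. To be clear, this is my own reconstruction of the idea rather than the Carnegie Mellon implementation: the first word both players type that isn't already taboo for that image becomes a new label:

```python
def match_label(guesses_a, guesses_b, taboo):
    """Return the first word both players typed that isn't taboo, or None.

    Because the players can't communicate, any word they agree on
    is very likely to actually describe the image.
    """
    seen_b = {word.lower() for word in guesses_b}
    for word in guesses_a:
        w = word.lower()
        if w in seen_b and w not in taboo:
            return w
    return None

# Example round: "dog" is already taboo for this image, so the players
# are forced to agree on something more specific.
taboo = {"dog"}
player_a = ["dog", "puppy", "brown"]
player_b = ["animal", "dog", "puppy"]
print(match_label(player_a, player_b, taboo))  # puppy
```

Each agreed-upon word then joins the taboo list for future rounds, so the labels for a popular image get progressively more descriptive over time.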
At the time of the presentation, about 6-10 descriptive words had been recorded for around 4 million images from google.com (although the labels are not yet, as far as I know, used by google or any other search engine). Right now there's only an English version of the ESP Game, but the concept is of course pretty much applicable to any language. (And apparently, even the English version is very popular with Japanese students trying to learn English...)
A turnout of about 80 people for our talk on open source usability, which wasn't bad for first thing in the morning after the CHI Reception. And no tricky questions either :)
A few photos from Vienna here...
By Tim Brown, CEO of IDEO, on designing "technology experiences" and "experiences enabled by technology". Examples included a design study for Prada, a huge interactive display for Vodafone headquarters that can be controlled via your mobile phone, and an art exhibit featuring chairs that projected an image of your clothing onto the back of the chair when you sat in them-- the electronic equivalent of hanging your jacket on the back. Tim also discussed the differences between "top down" design, which typically results in a scripted, controlled experience (e.g. Disneyworld), and the rarer "bottom up" design, which gives a more organic, continually-evolving experience (his examples included eBay and NTT DoCoMo).
Everything wrapped up with a cringeworthy musical number to publicise next year's conference, and a video about Portland, Oregon that I'm guessing still hasn't quite finished yet.
I am an Interaction Designer in the Systems Experience Design team, arriving at Oracle via Sun where I've worked since 2000. I currently work on sysadmin user experience projects for Solaris. Formerly I worked on open source Solaris desktop projects such as GNOME, NWAM and IPS.