Part One: Gathering Feedback from Recent Usability Research and Studies

Back to Summary

The design team for My Oracle Support reaches out to customers with design issues and new designs on a regular basis. Of course, we only see a limited number of users during any testing or survey work, despite the many tens of thousands of folks out there. To help us understand the full range of customer requirements, we try to solicit input from a mix of customers: large customers (those with dozens of Oracle administrators), small customers (even down to a shop with one or two Oracle administrators); educational, national, and international businesses; customers who use the configuration manager, and those who do not. So far we have not focused on "new" My Oracle Support customers, although I think that would also yield interesting results. I won't try to define "administrators" here, but I suspect you are one if you use My Oracle Support.
We interact with customers in a few different ways, and through these processes we generate new designs, issue lists, or requirements, which in turn are given to a UI designer (usually me) to work through. We prioritize the features or changes and give them to development to implement. Not all changes go through designers; some go directly to development. Sometimes that is fine, but for design issues we generally try to follow one process.
We gather feedback in three basic ways: through the feedback link on the site, through surveys, and through interviews. All of these methods tend to generate additional e-mail, and sometimes we send customers prototype screenshots to check that we are doing the right thing. I am primarily going to cover the customer interviews in this post, but let me quickly address the other two feedback methods first.

Feedback
Your feedback via the UI (the link in the top right corner) is reviewed, categorized, and makes its way into enhancement requests and bug fixes. The team that reviews them does a great job of reading and organizing all of your feedback. Here is a recent item:

 

"Hi, after leaving My Oracle Support open over night (which of course I shouldn't have done in the first place), I got the error message "A server connection error occurred. IO Error Error #2032 Please try again later" I would have expected something like "You have been logged out due to inactivity" or the like. Can you please research/comment? Thanks"

 

Well, that particular comment is a good one, and one that is still relevant today--I am NOT a fan of Error #2032. But try to figure out, by reading the whole post, why this is not a usability priority (and then post your thoughts on it!). We receive about 150 of these feedback messages each week. Let me break down the recent feedback: User Administration issues (18.5%), General Usability issues (15%), Managing Service Requests (7%), Support Identifier issues (6%), Creating a Service Request (6%), "You stink" issues (6%), and other smaller areas of feedback. This covers features like adding new administrators, adding and removing permissions, search results quality, registration issues, general bugs and yes... performance. Performance is near and dear to a lot of people's hearts, and I think it was Dan Rosenberg who coined, "there is no such thing as a slow usable interface". So let me just say that performance is on everyone's mind, including the design team's. We will cover performance in Part Three, and hopefully address customer complaints like this one:

 

"I have never used something so slow and it is extremely cumbersome to navigate. It takes forever for the main screen to load. I don't understand what could be going on in the background. As a DBA of 20 years, I can say without a doubt this will be the last time I look to your website for anything except the expected worse than normal, painful experience of creating a TAR in the new system."

 

Ouch, but sometimes the truth hurts. We do hear you, and we are working to improve performance and the experience of creating a Service Request (a TAR, for you old timers).

Surveys
A variety of surveys are conducted, including the usability survey that was linked from the Getting Started region (now available only from the sign-in page). I have also sent out other surveys to as many as 8,000 participants covering issues like the patch process, creation of patch plans and "carts" of patches, key feature needs and requirements, and general survey questions like how many folks work on your team. This information helps us set priorities and define who our users are (you!) and what their typical roles and responsibilities are within their organizations. It is also a good place to find folks for our one-on-one interviews. These results typically get processed into high-level design goals and directions (e.g., improve the SR flow, improve Help). A survey typically uncovers very few specific issues.

Interviews
When we interview customers, it typically follows a basic model. We spend half of the time listening to the customer about their needs. Typically this is specific to a single issue like patching, or providing proactive health checks. Then, depending on when we are speaking to the customer (early during requirements gathering, during design, or during early development), we show flow diagrams, mockups, or even some working code. With mockups or working code, the customer drives the UI and interacts with it. We tend to give little or no instruction or hints. We watch where and how the UI functions, and where it breaks or helps the customer. We typically do a final round of sessions with about six customers working with close-to-final code. While the early sessions are for formulating what we should build, in the later sessions we focus on design issues in what we have built. If we find major missing functionality, that typically goes into a bucket for future releases.

Issues are found in a few ways:


  1. The customer explicitly tells us (e.g., "I don't know what 'Suppression' means")

  2. The customer implicitly tells us (we watch their cursor move around and around, never landing on the button we think they should use; i.e., the button is not obvious, not in the right location, or not labeled correctly)

  3. The designer, a Product Manager, or developer makes a note of something they notice

  4. The product does something "silly", and we note it and work around it during the session.


These issues can range from the highest level (are the concepts correct?) down to the nitty gritty (is the icon too small, is the grammar poor, is the layout bad?). We document every issue and prioritize them with the Oracle support product managers.

 

Let me tell you how I prioritize this feedback. I follow a model similar to something I learned from the great designer Phil Haine. Although we have not applied this model formally, I use it to do my own prioritization of bugs. Try this: rank each usability issue by the following questions and multiply the results together to get the UI score.

How many users does it impact? (3 - All, 2 - Some, 1 - A few or a limited user role)
How bad is the problem? (4 - Severe, 3 - Critical, 2 - Important, 1 - Not important)
How often does it occur? (3 - All of the time, 2 - Some of the time, 1 - Infrequently)

I think you might find this interesting because it helps show how we have to determine what is best for everyone. For example, if you file a bug against the customer user administrator feature, you might consider it the most important, but there might also be a bug in knowledge searching, and that could take priority under this model (because more users are impacted more often).
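To make the arithmetic concrete, here is a minimal sketch of that scoring model. The issues, field names, and ratings below are illustrative examples I made up for this post, not entries pulled from our actual bug tracker.

```typescript
// A minimal sketch of the UI-score model: multiply the three ratings together
// and sort issues from highest to lowest score. Illustrative data only.
type Effort = "VL" | "L" | "M" | "H";

interface UsabilityIssue {
  title: string;
  users: 1 | 2 | 3;        // How many users does it impact?
  severity: 1 | 2 | 3 | 4; // How bad is the problem?
  frequency: 1 | 2 | 3;    // How often does it occur?
  effort: Effort;          // Rough cost to design and develop the fix
}

// The UI score is simply the product of the three ratings.
const score = (i: UsabilityIssue): number => i.users * i.severity * i.frequency;

const issues: UsabilityIssue[] = [
  { title: 'Browser Back button does not take you "Back"', users: 3, severity: 4, frequency: 3, effort: "H" },
  { title: '"Run" should be called "Search" in toolbar', users: 3, severity: 2, frequency: 3, effort: "VL" },
  { title: "Vertical scroll bar missing from Patch Recommendations", users: 2, severity: 3, frequency: 3, effort: "M" },
];

// Sort from most to least important, as in the table that follows.
for (const issue of [...issues].sort((a, b) => score(b) - score(a))) {
  console.log(`${score(issue)}\t${issue.effort}\t${issue.title}`);
}
```

Sorting by the product puts broad, severe, frequent issues at the top, which is exactly the ordering used in the table below; development effort is tracked alongside the score rather than folded into it.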

Let's take a look at some real bugs from customers and how this model helps us prioritize them (and makes for a good conversation on our blog). I have already sorted them from most to least important, and have associated a development effort with each. The development effort covers both the design and the development of the fix. "High" is generally something that requires many days to fix, may require multiple people, and impacts more than just the user interface. Medium could require a day or more, while Low is typically a few hours' worth of work. Very Low (VL) is trivial to fix. None of this includes QA time or changes to help or marketing materials. These are real issues found in the last release.

 

Sample (Real) Issue | # Users | Bad | Often | Total | Dev
1. The Browser Back button does not take you "Back" | 3 | 4 | 3 | 36 | H
2. Text: "Run" should be called "Search" in toolbar | 3 | 2 | 3 | 18 | VL
3. Can't submit a search by pressing return in a field | 3 | 2 | 3 | 18 | L
4. Vertical scroll bar is missing from Patch Recommendations | 2 | 3 | 3 | 18 | M
5. "Enterprise Patch Recommendations" is a confusing term | 2 | 2 | 3 | 12 | VL
6. Deploy column (in Patch Plans) is not sorting correctly | 2 | 2 | 3 | 12 | M
7. No way to select all language packs you need for an EBS patch | 2 | 2 | 3 | 12 | M
8. Download text and number should align correctly | 3 | 1 | 3 | 9 | VL
9. Download Trend for Patch Downloads has no labels | 2 | 2 | 2 | 8 | L
10. Task Region cannot be dragged onto the screen when empty | 1 | 2 | 3 | 6 | M
 

This list is just to give us a discussion point. In Part Two of this article, I will provide a richer set of feedback from customers and discuss how we resolve these issues.
Let me explain some of these issues for those who don't want to wait for Part Two, because they make the point of how we prioritize usability issues. Everyone uses the browser's Back button. So even as a Flash "application", we saw ourselves, and eventually customers, blowing away the Flash app and returning to their previous HTML page. From the beginning we created markers that the browser would detect, so that Back would take the user back to the previous screen in our application. That is, going from the Dashboard to the Knowledge page and then pressing 'Back' would return the user to the Dashboard. Fixing these issues has always been important. I think there are still places (like the SR wizard, where going 'Back' does not go back one step in the wizard, but to the page before the wizard) that we need to fix so that 'Back' always means back one page.
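If you are curious what such "markers" look like in general, here is a hedged sketch of the usual hash-fragment technique a single-page (or Flash-hosted) application can use to keep the browser history in sync with in-app screens. The screen names and the showScreen function are hypothetical illustrations, not My Oracle Support code.

```typescript
// Illustrative only: write a "marker" (a URL hash fragment) on every in-app
// navigation so the browser history tracks screens, and Back moves one screen
// back instead of unloading the whole application.
// showScreen() is a hypothetical render function, not a My Oracle Support API.
function showScreen(name: string): void {
  console.log(`Rendering screen: ${name}`);
}

// Navigating within the app pushes a marker into the browser history,
// e.g. #dashboard -> #knowledge.
function navigateTo(name: string): void {
  window.location.hash = name;
}

// When the user presses Back, the browser fires hashchange and the app
// re-renders the previous screen rather than leaving the application.
window.addEventListener("hashchange", () => {
  showScreen(window.location.hash.replace("#", "") || "dashboard");
});

// Initial load: honor any deep link already present in the URL.
showScreen(window.location.hash.replace("#", "") || "dashboard");

// Example: navigateTo("knowledge") renders Knowledge, and Back then returns
// the user to the Dashboard.
```

The key point is that every in-app navigation adds a history entry, so Back pops one screen at a time; the hard part, as the next paragraph shows, is deciding what "one screen" means in a master/detail view.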

We didn't provide a good answer to the back button question with pages like Knowledge, where the search results morph into a master/detail view (with the list of articles on the left and the article details on the right). This was done so you can quickly scan a large chunk of articles without excessive navigation. But then what does "Back" mean? One can click Article A, then B, then C without any other navigation. But what happens with "Back"? Should "Back" go back to Article B and then A? Or should it go back to the full view of articles? Let us know what you think (I have thoughts on this. ;->) So "Back" is at the top of our list and continues to be an issue as we develop new features. We haven't found a single solution that fixes this problem everywhere.

One more example: the vertical scroll bar missing from a table. If you have a table with a lot of data, you darn well need a scroll bar, right? So how can this not be a top-priority 36? Because not all customers use the configuration manager, and not all targets have recommendations yet. So if you are an EBS user, or an organization that doesn't use the configuration manager, you would not be impacted by this issue. So even though it is a real problem (one that is now fixed), it was not ranked at the top.

You can also see that development effort is a factor. The closer we are to a release, the tougher it is to make big changes. Even as a designer, I understand this. Making late design changes can cause unintended consequences--maybe we forgot about some error condition, or a set of permissions that could impact users. So we try to make sure that the bigger, more troubling items get worked on sooner in the development cycle. Toward the end of the cycle we can typically knock out the Very Low (VL) items with little impact. So the sooner we get customer feedback, the more opportunity we have to fix issues before shipping.
So now you have some idea of how we go about the process of listening and cataloging issues. You probably want to know more about what we have heard from customers like you. We will cover that in Part Two. Stay tuned!

P.S. Recall the Error #2032 message earlier in the post? Did you score it? I scored it at 2x2x1 because it happens when you leave your session open overnight, and it doesn't really do anything surprising (the session times out). But, duh, the message should be better and should probably just return you to the sign-in page to sign in again. And yes, that same error is a catch-all for some other conditions; I am just talking about this one. I filed a bug to fix it. Thanks to the customer who reported it.


Next: Part Two: Actual User Feedback
Part Three: Special Areas of Feedback: Content, Quality, and Performance

 

Comments:

This is good stuff; thanks Richard. I appreciate that you show the good and the bad, that you are not overly trying to sell us kook-aid (*grin*), but trying to disseminate useful information. Eagerly looking forward to Part 2. =) Just curious - you have features that are rank-ordered (given your Phil Haine role model). Can customers vote on enhancements and priorities? Just a thought. Just thinking about continuous, bi-directional feedback. I have always been frustrated that Oracle does not expose its core development efforts as you have done with MOS (why are they introducing *that* feature?!?), and I am pleased with the direction you are going. Keep up the good work.

Posted by Charles Schultz on October 22, 2009 at 09:19 PM PDT #

When I originally commented I clicked the -Notify me when new comments are added- checkbox and now each time a comment is added I get four emails with the same comment. Is there any way you can remove me from that service? Thanks!

Posted by Rudy Nill on April 29, 2011 at 07:46 AM PDT #
