Monday Oct 26, 2009

Part Three: Special Areas of Feedback: Content, Quality, and Performance

Back to Summary
Part One: Gathering Feedback from Recent Usability Research and Studies
Part Two: Actual User Feedback

In the previous post, we talked about general feedback on the product and showed examples of the feedback customers provided and how we worked to resolve those issues. In this part, I want to focus on a few key areas which get a lot of talk: Content, Quality, and Performance.

Content Issues
Content is the information inside of My Oracle Support. It is the "text" of your word-processing document or the image in your photo gallery; for My Oracle Support, the knowledge base, the questions asked in a Service Request, the health or patch recommendations, and so on are content. It is not the user interface per se. My Oracle Support, the software, relies on a variety of back-end services to provide this information. Let's talk about these content issues and what can be done to improve them.
Content issues and bugs are super important, but they tend not to be something that the design or front-end development team can fix. They require many teams across Oracle, and tend to take a long time to fix compared to the fixes we can typically make in the UI or in the database. Content issues customers reported include:

  • Knowledge returning poor results

  • SR templates asking the same questions over (and over, and over) again

  • Patch or Health Check recommendations not being correct

  • The list of products is different when searching Knowledge, filing an SR, searching for a configuration, or finding a patch. Ouch!


This is not an excuse, but the team I work with needs to work with many other teams to resolve these issues. Darn right these issues need to be fixed!

For example, tuning the knowledge engine behind the knowledge articles is a challenging task. And long-time Oracle customers probably got used to the old search results. The new engine does things differently, and it should be better for most searches. The knowledge team is dedicated to continuously improving the search and browse experience of our knowledge base, and they continually mine the search logs to identify ways to improve search.


Tip: If you have trouble finding an article, go to the Knowledge page and use the product search in the top left. Once you have filtered down to the specific product, THEN type a search in the search field in the middle of the screen to further refine your results. And try using the filters provided on the right side of the screen to filter down to a manageable list. Now see if you can find your article. This product filter is also available from the Knowledge advanced search link to the right of the search field.





There are some usability issues for first-time users of the knowledge search, especially with the use of facets (the drill-down options on the right of the search). Some design improvements in that area are being worked on right now for an upcoming release. Also look at the short training video on how to get the most out of the knowledge search, available in the Video Training blog entry.

I know the knowledge team is constantly working to provide more precise results, faster queries, and improvements to the user interface. So continue to provide feedback. Tell them what you were trying to find and tell them about your search via the Feedback mechanism. This can help them improve the results from knowledge search. And when you can, try providing feedback on the articles you read. Recently viewed articles on the Knowledge page have a link to provide a review.

I can attest that some issues in the Service Request templates are being fixed, but I believe the templates coming from Classic MetaLink are the same as in the new user interface, and they would need to be modified to take the new UI into account. This has not happened yet, which is partially why you see the same question being asked over and over again. I do know your complaints about the process being too long for "simple" problems are being addressed.

Patch and Health Check recommendations went through a big overhaul in early 2009, and that should have resulted in customers seeing much better results with fewer "silly" recommendations. Sometimes a recommendation is tagged as applying to, say, 30 or 40 releases of a database, but gets "over-tagged". Then we wind up with a recommendation on one release which just doesn't make sense. Typically those should be fixed quickly, and I have noticed a large reduction in issues in that area in the last few months. If you find that a recommendation just doesn't make any sense on the system you are looking at, let us know! Improvements and fixes to the recommendation engine are made frequently.

And finally, my personal pet peeve is the naming of Oracle products. In one place you look for "Oracle Server - Enterprise Edition", in another "RDBMS Server", and in a third "Oracle Database". So each part of My Oracle Support is asking you for the same product, but using different terminology. I am so sorry! In addition, when Oracle changes a product name you get stuck, because even though you are on an "old" release which uses the old name, Oracle wants to refer to it by the new name, no matter what release. We saw this in early user testing and created an alias list, so that you might type "Database" and our UI would find the product. We added aliases to E-Business Suite products (so you could type "GL" for "General Ledger" and find that product). This shipped with the 3.0 release of My Oracle Support. And we are working, for a future release, to have this alias list used everywhere and base it on a single common table of product names. This feature is not available yet in the MetaLink3 site and will not be available when the Oracle customer migration occurs in Nov '09. But we know this is an issue and are working to get all of the development and support teams to work from a single list of products. By the way, Oracle has more than 5000 products, and with each new company we buy this list grows. We are also working on ways to hide the 95% of this list which you don't want or typically need to see. If you have some thoughts on this, do post to the blog! We would love to talk about it.

Quality Issues
What is quality to you? For me quality has many dimensions. Does the product crash? Run slow? Provide me accurate results? Can I do what I expect when I expect it?

All of these probably matter to you, and they certainly matter to our team. When doing usability research, or testing with early code, we tend not to deal with most quality issues, because we expect to fix them prior to production. And typically we are not running on high-performance hardware, so it is hard to gauge actual performance in the field, like how fast a query returns. Even the accuracy of results is something we don't necessarily catch in usability sessions; we might have just test or sample data, so again this type of information is not captured.

So how do we capture this information? A special environment is set up to mimic real-world settings, but even then we don't always have the right data, or real data, in the environment. It falls to our quality engineers to find these issues. But even today we don't have all of the tests in place to verify every possible situation you would experience. Are you seeing a slowdown when you PowerView, do a Group By, and then filter by name? Hmm... sometimes we can only catch this on a case-by-case basis, so file a bug!

When it comes to user experience quality, we can typically find the issues in our tests. The real question is whether we can fix them before you see them. Not everything you see goes through the same quality filters. Health Checks might be reviewed by one team, while the knowledge article about the issue and the service request questions for that product come from two other teams. And sure enough, all of these are for the same issue! So we do have some work to do to provide you a consistent, accurate, quality product. Inconsistency like this is a quality issue, a usability issue, and a problem worth solving.

Performance Issues
I would guess we saved the most contentious issue for last. It is true there are places in My Oracle Support where performance is slow. And we have heard this loud and clear. Not all of the performance issues come from the use of Flash, but we are working on all of them, when possible. We have heard the following:

  • Takes too long to load - get rid of the loading screen

  • The dashboard comes up but then takes a long time to load the content

  • SR details load slowly

  • Memory footprint of my browser grows and then slows the whole experience

  • Delays when loading pages, PowerView, or other features

The development team has looked for any and all solutions. Some of the solutions implemented or slated for releases include the following:

  • Reducing the size of the initial download (deferring the loading of some data)

  • Placing the application on edge servers to allow faster downloading across the Internet from non-US locations

  • Doing more "just-in-time" and server-side queries

  • Allowing collapsed regions to wait until opened to get data (deferred loading)

  • Tuning queries to return data faster

  • Compressing data across the data connection
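The deferred-loading ideas above can be illustrated with a small sketch. This is a generic illustration in Python with made-up region and data names; the actual application is Flash, so this is not the real code, just the pattern: a collapsed region holds only a fetch function, and the expensive query runs the first time the region is opened.

```python
# Sketch of deferred (lazy) loading: a collapsed region stores only a fetch
# function, and the slow back-end query runs the first time it is expanded.

class Region:
    def __init__(self, name, fetch):
        self.name = name
        self._fetch = fetch   # callable that performs the (slow) back-end query
        self._data = None
        self.loaded = False

    def expand(self):
        # Run the query only on first expansion; reuse the cached data after.
        if not self.loaded:
            self._data = self._fetch()
            self.loaded = True
        return self._data

# Hypothetical usage: the dashboard renders immediately, and the SR query
# waits until the user actually opens the region.
sr_region = Region("Service Requests", fetch=lambda: ["SR 3-0001", "SR 3-0002"])
print(sr_region.loaded)    # nothing fetched yet
print(sr_region.expand())  # query runs here, on demand
```

The payoff is that regions you never open never cost you a query, which is exactly why collapsing unused regions helps.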


This is a true client application running in a browser. So once the application is loaded it should be very, very fast. But when it has to wait on the back end to return data, and sometimes we return a lot of data, you wait. It is a tough tradeoff. For example, if I took the Inventory report and made it non-interactive, it would be a lot faster. But then you would have to create a new report for the equivalent of every single click or drill-down. When you have 1000 or so collected systems (when using the collector), an interactive report like the one provided can answer tons of questions with only a click or two. But the cost is loading all of that data ahead of time. Maybe this is an OK tradeoff? But a Service Request needs to load quickly... every time. Right? I can honestly say a lot of folks are working very hard to continue to improve performance. And even though the application is probably now at least 50% larger in features and size than at Oracle World last year, I think you will still see performance improvements in the upcoming releases. If you don't use a region, collapsing it will improve performance...

You should notice some performance fixes right now if you are a MetaLink3 customer. For other customers, these improvements should appear when your migration occurs in November.

And one final thought. You might ask, well, just do it in straight AJAX, that will solve the issues! I know, that was one of our thoughts too. But the testing matrix for AJAX is huge, the JavaScript code can also be quite large to load, and in the end some of our biggest performance hits have nothing to do with the front-end technology per se; they have to do with how we access the content and how much context exists. A list of 5000 products is the same no matter what the technology. How and when we access it still needs to be addressed.


Performance Test

Go to the Dashboard and click Reload in the browser. How long did this take? Now collapse all of the regions on the screen so they are just one row tall, and reload again. Did performance improve? How long does it take to load the application for the first time from a browser where the application is not in your cache? Click on an SR on the Dashboard. How long did it take to load? Click the "next" icon in the top right. How long for the second and third SR? Do tell: post to the blog and let us know where you are from and how long it takes you. Inquiring minds do want to know!


So, in conclusion: we hear you. Performance needs work, content should be improved, the site should work with all browsers, and we shouldn't make customers "beta testers" with a buggy site. I don't like the idea of customers filing Service Requests because the Service Request system is not working! And everyone on the My Oracle Support team is working to fix these issues. But don't be shy. Do provide that feedback. And if you can, when asked, do participate in feedback sessions or usability studies. I know I am listening, and so are many folks on our team.

I hope this series explained how we gather customer feedback, how we work to resolve the issues, what some of the key issues are, and what we are doing about them. Thanks for getting this far in the blog!

Friday Oct 23, 2009

Part Two: Actual User Feedback

Back to Summary
Part One: Gathering Feedback from Recent Usability Research and Studies

In Part One I covered how we gathered feedback from customers for the new My Oracle Support. In this part, we will discuss the detailed results by looking at two studies we have done in the last year. The first, done last summer, was a large study of twelve customers covering basic navigation and usability. The second was a detailed study of the design of the Health Check features, found in production for former MetaLink3 customers and coming soon for Oracle customers (November 2009).

In the first study, participants spent about an hour doing basic tasks, mostly focused on SR creation in My Oracle Support. The problems and issues were documented by the usability tester and then reviewed and triaged by the team. Twelve customers participated. Each session was video recorded with their permission. The videos are used both to clarify issues and, when filing a bug, to give the designer or developer a link so they can see the issue exactly.

12 Customers
88 Actionable Issues, by category:

Create SR (42)
General Issues (5)
Delete, Saving, Breadcrumbs, Templates (4)
Wizard: General Information Step (11)
Wizard: Knowledge Step (5)
Wizard : Upload (4)
Wizard: Problem Details (7)
Wizard: Product and Problem (2)
Wizard: Review (4)



View SR (15)
Draft SRs (5)
Knowledge (8)
General Usability, Dashboard, Filtering, PowerView, Customize, Timezones, etc. (15)
Bug Region (3)


For each of these issues, we cataloged how many of the customers experienced the problem during the session, and then rated the issues to help prioritize them. This was done in a spreadsheet and includes details of when the problem occurred, for whom, and screenshots. Later, bugs could be filed pointing the developer to a link to the video so they could see the issue for themselves.

Let's look at some examples of the issues found that are fixed for the current or upcoming release.




Issue: MUST not allow you to leave an SR without saving...
Why: Customers could lose work by navigating away without clicking Save.
Solution: Adding a confirmation when the user attempts to navigate away without saving. (Working towards more "Auto-Save" functionality, so you never can lose work and don't need to remember to "Save", but this is still a work in progress.)

Issue: No way to delete draft SRs.
Why: Allow the user more control to remove old drafts without going into the wizard.
Solution: Adding the Remove icon to the table.

Issue: "By System" text in the Create SR flow is confusing.
Why: Many customers do not know what we define as a System. They might confuse our Configuration Manager collected "System" with just the name of their "system".
Solution: Redesigned the AutoFill region to reference "System/Configuration"; disabled when they do not have configurations.

Issue: Remember the last 10 choices in the Product Selector.
Why: It is difficult to find "your" product among the thousands Oracle sells. Keeping track of the ones I have used in the past helps me find my content next time.
Solution: Added Recently Used items to many drop lists and selectors for the 3.2.1 release for MetaLink3 customers (coming soon for Oracle customers). The Product List, List of Platforms, Languages, etc. all have a recently used list.

Issue: Opening an SR is WAY TOO SLOW!!!
Why: Large SRs were having trouble rendering in the History region.
Solution: Multiple fixes were attempted, and some are still in the works to bring back the "bubble" view, to make it quicker to scan for what you say and what Oracle says. For now even large SRs should load fast in the History region.

Issue: Double scroll bars are annoying.
Why: Double scroll bars make it very difficult to view content. You have to play with each bar back and forth. Really annoying.
Solution: Make sure that there are not two sets of scroll bars for regions which grow. Turn off the scroll policy of one of the items to allow it to grow to its natural size, up to the full size of the window, then let the scroll bar appear as needed. Fixed in the SR region and Attach SR.

Issue: Can't find the "Send" button.
Why: Submitting an SR requires the customer to Send it to Oracle.
Solution: The button should be moved to the bottom right position of the wizard and highlighted.


So from this study, about 30% of the bugs are currently fixed in production releases. I highlight this study because it has a low fix rate; this is something we should work to improve in upcoming releases. In the next study, done for the Health Check area, 70 of the 72 issues were resolved prior to the first release of the Health Check user interface for MetaLink3 users in August of 2009. Those results were interesting for a few reasons.

The Health Check region for My Oracle Support was effectively designed starting with customer feedback. While the SR form has been around forever, Health Checks are reasonably new and don't have a lot of baggage associated with their design. So we went to about 6 customers without any concept of what we should build and asked them how they wanted health checks to work and what features would be useful to them. This eventually created a roadmap of features. We then took the key features, drafted a series of design ideas, and floated those by customers. Here are some examples of the evolution of those screens over time.

First, this is the screen where we just mapped content up into three regions and called it a day. If you want to see details for a check, you have to click on the finding and go to another page. In this example, you can click on the bars and it will change the health checks shown below. This was a hack we did to make moving between Critical, Warning, and Informational easier. Prior to that you had to return to the home page to change between them. Ouch!




We heard a few key things which made it into the design shown below. This is the most recent release of Health Checks, but still not the end state. Notice the ability to Suppress a check. This was one of the key requests. Customers wanted to suppress based on a variety of options. We currently allow for the three shown (the issue, an entire target, and an entire check). Some customers also want other ways of suppressing, but more options yield more complexity. So we started with the "big three".

Notice also the ability to see the details of a check without navigating, which improves performance. You can also group by findings, select multiple rows, and then suppress them. Also notice it is clearer how to navigate to the different classes of checks, and there is a shortcut to view any suppressed items. We also knew we had issues with what to call the "Suppression" function (Hide? Archive? Put in Trash?), and did our best, but also notice how we extended the Suppress menu with a Help link to "Learn about Suppression". So if you don't get it right away, you might explore and learn from the Help system.




But we knew we were not done. In other studies concerning Patch Recommendations, we were asked to bring the advances from the Patch Recommendation UI into Health Recommendations, and to further extend the features of Health Checks to support a broader array of checks, systems to check, and features. Thus the future should see us handle larger numbers of checks for our biggest customers, view a check in more detail, provide feedback to Oracle and the Oracle community about the use of a check (or issues with it), and other similar features we have now released for Patch Recommendations (sorry, can't give you a timeline for this).

Overall, we took in 72 customer issues. Each resulted in a design and was assigned to developers, and we fixed 70 of the issues prior to shipping the first release for MetaLink3 customers in Spring of 2009 (November 2009 for Oracle customers). Try it out for yourself and see what you think.

And in the meantime, take a look at an "idea" I am working on based on recent customer feedback. As with anything this is just a working concept. No guarantee it would ever be released... and I will let the picture do the talking... Want to get involved? Let us know via the Blog.




So, this is a look into some of the feedback we have received on usability issues. We get more than we can discuss here. It hurts to hear complaints, but it is worse not to know about the issues. So do provide us feedback; it makes it easier to focus attention on fixing the issues which you really want addressed. And one tip... Don't just tell us, "My Oracle Support is slow!" Tell us what is slow and provide details: "When I load My Oracle Support via VPN from my office in China it takes 45 seconds before my dashboard comes up, and that is too slow!" Speaking of "slow", performance has been on our mind for a while. In Part Three I will cover some specific areas of special interest to customers, including performance.

Next: Part Three: Special Areas of Feedback: Content, Quality, and Performance

Thursday Oct 22, 2009

Part One: Gathering Feedback from Recent Usability Research and Studies

Back to Summary

The design team for My Oracle Support reaches out to customers with design issues and new designs on a regular basis. Of course, we only see a limited number of users during any testing or survey work, despite the many tens of thousands of folks out there. To help us understand a full range of customer requirements, we try to solicit input from a mix of customers: large customers (those with dozens of Oracle administrators), small customers (even down to a shop with one or two Oracle administrators); educational, national, and international businesses; customers who use the configuration manager, and those who do not. Until now, we have not focused on "new" My Oracle Support customers, although I think this would also yield interesting results. I won't try to define "administrators" here, but I suspect you are one if you use My Oracle Support.
We interact with customers in a few different ways, and via these processes generate new designs, issue lists, or requirements, which in turn are given to a UI designer (usually me) to design. We prioritize the features or changes and give them to development to implement. Not all changes go through designers; some go directly to development. Sometimes that can be OK, but generally with design issues we try to follow one process.
We gather the feedback in three basic ways: your feedback from the site, surveys, and interviews. Generally all of these methods tend to result in additional e-mail, and sometimes we send customers prototype screenshots to check that we are doing the right thing. I am primarily going to cover the customer interviews in this post, but let me quickly address the other two feedback methods.

Your feedback via the UI (the link in the top right corner) is reviewed, categorized, and makes its way into enhancement requests and bug fixes. The team that reviews them does a great job of reading and organizing all of your feedback. Here is a recent item:


"Hi, after leaving My Oracle Support open over night (which of course I shouldn't have done in the first place), I got the error message "A server connection error occurred. IO Error Error #2032 Please try again later" I would have expected something like "You have been logged out due to inactivity" or the like. Can you please research/comment? Thanks"


Well, that particular comment is a good one, and one that is still relevant today--I am NOT a fan of Error #2032. But try to figure out, by reading the whole post, why this is not a usability priority (and then post your thoughts on it!). About 150 of these feedback messages are received each week. Let me break down the recent feedback: User Administration issues (18.5%), General Usability issues (15%), Managing Service Requests (7%), Support Identifier issues (6%), Creating a Service Request (6%), "You stink" issues (6%), and other smaller areas of feedback. This covers features like adding new administrators, adding/removing permissions, search result quality, registration issues, general bugs, and yes... performance. Performance is near and dear to a lot of people's hearts, and I think it was Dan Rosenberg who coined, "there is no such thing as a slow usable interface". So let me just say that performance is on everyone's mind, including the design team's. We will cover performance issues in Part Three, hopefully addressing customer complaints like this:


"I have never used something so slow and it is extremely cumbersome to navigate. It takes forever for the main screen to load. I don't understand what could be going on in the background. As a DBA of 20 years, I can say without a doubt this will be the last time I look to your website for anything except the expected worse than normal, painful experience of creating a TAR in the new system."


Ouch, but sometimes the truth hurts. But we do hear you and are working to improve performance and improve the ability to create a Service Request (a TAR, for you old timers).

A variety of surveys are conducted, including the usability survey which was linked from the Getting Started region (now available only from the sign-in page). I have also sent out other surveys, to as many as 8000 participants, covering issues like the patch process, creation of patch plans and "carts" of patches, key feature needs and requirements, and general survey stuff like how many folks work on your team, etc. This information is used to help set priorities, define who our users are (you!), and understand their typical roles and responsibilities within their organizations. It is also a good place to find folks for our one-on-one interviews. These results typically get processed into high-level design goals and directions (e.g., Improve SR Flow, Improve Help). Typically a survey uncovers very few specific issues.

When we interview customers, it typically follows a basic model. We spend half of the time listening to the customer about their needs. Typically this is specific to a single issue like patching, or providing proactive health checks. Then, depending on when we are speaking to the customer (early during requirements gathering, during design, or during early development), we show flow diagrams, mockups, or even some working code. With mockups or working code the customer drives the UI and interacts with it. We tend to give little or no instruction or hints. We watch where and how the UI functions, and where it breaks or helps the customer. We typically do a final round of sessions with about 6 customers working with close-to-final code. While the early sessions are for formulating what we should build, in the later sessions we are focused on design issues in what we have built. If we find major missing functionality, that typically goes into a bucket for future releases.

Issues are found in a few ways:

  1. The customer explicitly tells us (i.e., "I don't know what 'Suppression' means")

  2. The customer implicitly tells us (we watch their cursor move around and around never landing on the button we think they should use, i.e., it is not obvious, in the right location or labeled correctly)

  3. The designer, a Product Manager, or developer makes a note of something they notice

  4. The product does something "silly", and we note it and work around it during the session.

These issues can be at the highest level (are the concepts correct?) down to the nitty-gritty (is the icon too small, is the grammar poor, is the layout bad?). We document every issue and prioritize them with the Oracle support product managers.


Let me tell you how I prioritize this feedback. I follow a model similar to something I learned from the great designer Phil Haine. Although we have not applied this model directly, I use it to do my own prioritization of bugs. Try this: rank each usability issue by the following questions and multiply the results together to get the UI score.

How Many Users Does it Impact (3 - All, 2 - Some, 1 - A few or a Limited User Role)
How Bad is the Problem (4 - Severe, 3 - Critical, 2 - Important, 1 - Not Important)
How Often Does it Occur (3 - All of the Time, 2 - Some of the Time, 1 - Infrequently)
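The scales above can be sketched in a few lines of Python. The ratings below are illustrative (the Error #2032 score comes from the P.S. of this post; the Back button ratings are my hypothetical example, not the team's actual triage data):

```python
# The UI score is the product of the three ratings above: higher scores mean
# the issue hurts more users, more badly, more often.

def ui_score(users, severity, frequency):
    """users: 1-3, severity: 1-4, frequency: 1-3 (see the scales above)."""
    return users * severity * frequency

# Hypothetical ratings for two issues mentioned in this post
issues = [
    ("Browser Back button does not take you back", (3, 3, 3)),
    ("Error #2032 after leaving a session open overnight", (2, 2, 1)),
]

# Sort by score, highest first, to get the fix order
for name, ratings in sorted(issues, key=lambda i: ui_score(*i[1]), reverse=True):
    print(f"{ui_score(*ratings):>2}  {name}")
```

The maximum possible score is 3 x 4 x 3 = 36, and an issue everyone hits all the time still ranks low if its severity is 1.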

I think you might find this interesting because it might help you realize how we have to determine what is best for everyone. For example, if you file a bug against the customer user administrator feature, you might consider it the most important, but there might be a bug in knowledge searching, and that could take priority based on this model (because more users are impacted more often).

Let's take a look at some real bugs from customers and how this model helps us prioritize (and makes for a good conversation in our blog). I have already sorted them from most to least important, and have associated a development effort with each. The development effort covers both the design and the development of the fix. "High" generally is something which requires many days to fix, maybe requiring multiple people, and impacts more than just the user interface. Medium could require a day or more, while Low is typically a few hours' worth of work. Very Low (VL) is trivial to fix. This does not include QA time, or changes to help or marketing materials. These are real issues found in the last release.


Sample (Real) Issues

  1. The Browser Back button does not take you "Back"

  2. Text: "Run" should be called "Search" in toolbar

  3. Can't submit a search by pressing return in a field

  4. Vertical scroll bar is missing from Patch Recommendations

  5. "Enterprise Patch Recommendations" is a confusing term

  6. Deploy column (in Patch Plans) is not sorting correctly

  7. No way to select all language packs you need for an EBS patch

  8. Download text and number should align correctly

  9. Download Trend for Patch Downloads has no labels

  10. Task Region cannot be dragged onto the screen when empty

This list is just to give us a discussion point. In Part Two of this article, I will provide a richer set of feedback from customers and discuss how we resolve these issues.
Let me explain some of these issues for those who don't want to wait, since it makes the point of how we prioritize usability issues. Everyone uses the browser's Back button. So even as a Flash "application", we saw ourselves, and eventually customers, blowing away the Flash app and returning to their previous HTML page. So from the beginning we created markers that the browser would detect, so that Back would take the user back to the previous screen in our application. That is, going from the Dashboard to the Knowledge page and then clicking 'Back' would return the user to the Dashboard. So fixing these issues has always been important. I think there are still places (like the SR wizard, where going 'back' does not go back one step in the wizard, but goes to the page before the wizard) where we need to fix that, so that 'Back' always means back one page.

We didn't provide a good answer to the Back button question on pages like Knowledge, where the search results morph into a master/detail view (with the list of articles on the left and the article details on the right). This was done so you can quickly scan a large chunk of articles without excessive navigation. But then what does "Back" mean? One can click Article A, then B, then C without any other navigation. But what happens with "Back"? Should "Back" go back to Article B, then A? Or should it go back to the full view of articles? Let us know what you think (I have thoughts on this. ;->) So "Back" is at the top of our list and continues to be an issue as we develop new features. We haven't found a single solution which can fix this problem.

One more example: the vertical scroll bar missing from a table. If you have a table with a lot of data, you darn well need a scroll bar, right? So how could this not be a 4x3x3, a top priority? Because not all customers use the configuration manager, and not all targets have recommendations yet. If you are an EBS user, or an organization that doesn't use the configuration manager, you would never hit this issue. So even though it was a real problem (one that is now fixed), it was not ranked at the top.
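Scores like "4x3x3" and "2x2x1" read as a product of three factors. The sketch below assumes a severity x frequency x reach scheme where the product ranks the issue; the factor names and the multiplication rule are my guesses from how the scores are written, not the team's documented formula. It shows why the scroll bar issue, severe as it is, lands below the Back button issue.

```typescript
// Assumed scoring model: three small integers multiplied together.
interface UsabilityIssue {
  title: string;
  severity: number;   // how bad it is when it happens (1-4, assumed)
  frequency: number;  // how often an affected user hits it (1-3, assumed)
  reach: number;      // how many customers are affected (1-3, assumed)
}

const score = (i: UsabilityIssue): number =>
  i.severity * i.frequency * i.reach;

const issues: UsabilityIssue[] = [
  // Low reach: only configuration manager users with recommendations.
  { title: "Missing table scroll bar",        severity: 4, frequency: 3, reach: 1 },
  // Everyone uses the Back button, so reach is maximal.
  { title: "Back button leaves the app",      severity: 4, frequency: 3, reach: 3 },
  // The P.S. example: annoying but unsurprising, and narrow.
  { title: "Error #2032 on session timeout",  severity: 2, frequency: 2, reach: 1 },
];

// Rank highest product first: 36 (Back) > 12 (scroll bar) > 4 (#2032).
issues.sort((a, b) => score(b) - score(a));
for (const i of issues) {
  console.log(`${score(i)}  ${i.title}`);
}
```

The point of the product is that any single low factor drags the whole score down, which matches the reasoning above: a 4-severity bug with reach 1 still ranks below a 4x3x3.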

There is also the factor of development effort. The closer we are to a release, the tougher it is to make big changes. Even as a designer I understand this. Late-binding design changes can have unintended consequences: maybe we forgot about some error condition, or a set of permissions that could impact users. So we try to make sure the bigger, more troubling items get worked early in the development cycle. Toward the end of the cycle we can typically knock out the very low (VL) items with little risk. The sooner we get customer feedback, the more opportunity we have to fix issues before shipping.
So now you have some idea of how we go about the process of listening and cataloging issues. You probably want to know more about what we have heard from customers like you. We will cover that in Part Two. Stay tuned!

P.S. Recall the Error #2032 error earlier in the post? Did you score it? I scored it 2x2x1 because it happens when you leave your session open overnight, and it doesn't really do anything surprising (the session times out). But, duh, the message should be better, and it should probably just return you to the sign-in page to sign in again. And yes, that same error is a catch-all for some other conditions; I am just talking about this one. I filed a bug to fix it. Thanks to the customer who reported it.

Next: Part Two: Actual User Feedback
Part Three: Special Areas of Feedback: Content, Quality, and Performance


Wednesday Oct 21, 2009

Results from Recent Usability Research and Studies

Customers have asked how we deal with customer feedback. Customers who have participated in feedback sessions are especially interested in understanding how their input was incorporated into My Oracle Support. This series should answer that question for you. Thanks, Charles, for bugging me to put this out!

The My Oracle Support design team reaches out to customers in a variety of ways: one-on-one interviews, group meetings, user group sessions, reading feedback submitted via the site, watching Oracle and other blogs and forums, and larger-scale surveys. From these we generate feature requests, bugs, and enhancements, which in turn are given to a designer. Most feedback comes in through places like the Feedback form, but typically these are "bugs" (items which need to be fixed but do not require design consideration). The focus of this series is on design. What features do customers want? How do they want the features to work? Who will use the features, and how often? And will a feature make sense to a new user while staying efficient for an experienced user?

Design is the process of understanding customer needs and providing a user experience that achieves those needs and goals while staying focused on ease of use. Design is not development. If one writes code to solve a problem, it may or may not be easy to use. Design generally puts more complexity into the code and the development process so that the user doesn't have to deal with that complexity. We are focused on understanding customer needs (we answer to the customer, not to development) and how customers interact with Oracle Support. So while designing something well takes more time up front, it requires less effort from development in the long run to get it right and give customers what they need and want. Of course, we don't always get it right, so we iterate and refine until we do.

We conducted a series of user research studies, using one-on-one web conference sessions, covering dozens of My Oracle Support customers. These sessions ranged from basic customer needs (how do you do your business?) and reviews of designs to cover those needs (typically screen shots), up through customers using a demo while colleagues and I watched and asked questions. All of these were done prior to the release of features. To this point we have not conducted any in-field testing of shipping products, other than the Service Request process, but more on that hairy issue later!

The results of these tests are written up into design briefs that guide the design and development of the product. Part One will cover how we gather this feedback. Part Two will go into the details of the feedback received. We recognize that a variety of key issues still remain in My Oracle Support; these will be discussed in Part Three. And hopefully all of this will serve as a mechanism to get your input into the design process. I, for one, listen to customer feedback and work very hard to make it show up in the product as improved usability and important new features.

Part One: Gathering Feedback from Recent Usability Research and Studies
Part Two: Actual User Feedback
Part Three: Special Areas of Feedback: Content, Quality, and Performance


