

New ways of input still on the verge of the enterprise


The OAUX team is exploring the emerging technologies of voice and gesture as input

By Sarah Smart, Oracle Applications User Experience

When the Oracle Applications User Experience (OAUX) team is researching emerging technologies, we want more than just the cool factor. Enterprise use cases can be hard to come by for some technologies, such as voice and gesture as input, but we need to keep up with the latest developments in those fields anyway so that we’re ready when an enterprise use case comes up.

“As [a technology] gets better, we want to be on that edge with them so we can have that solution immediately once the tech is broad enough,” said Thao Nguyen, a director with the AppsLab, the OAUX Emerging Technologies team. Let’s take a look at what that entails.


Voice as input
Ever throw your iPhone across the room because Siri couldn’t understand what you said? The OAUX team is researching voice as input. “It’s come in leaps and bounds, of course, and we see it being used more and more,” OAUX Group Vice President Jeremy Ashley said. 

Speaking of Siri, the iOS virtual assistant was the first voice-as-input technology that captured the interest of Jake Kuramoto, Senior Director with the AppsLab, in 2011. “Siri showed a ton of promise, but it turned out to be a show-and-tell feature, not a sticky one,” he said recently. Then Google introduced passive listening for the “OK Google” assistant, but that feature evolved too slowly. “Plus, I’ve never felt that talking to a phone was all that natural,” Kuramoto said.


The tipping point for him was the Amazon Echo, even in its early stages. “The biggest difference was the natural interaction of just talking into the air vs. to a device,” he said. The Echo is always listening for its wake word, so although it wasn’t the first device of its kind, it’s very easy to start using. “The fact that I don’t have to hold the device, be near it, or push a button, makes this cylinder kind of magical,” said Noel Portugal, Senior UX Developer Manager, in an AppsLab post about the Echo.


Noel Portugal demonstrates an Echo integration with email in a video from the AppsLab.

With Release 9 of the Oracle Sales Cloud came Oracle Voice — more or less the Siri of Sales Cloud. It was designed to be a fast, friendly, fun way for sales reps to interact with the application. Sales reps already spend a lot of time talking and using their smartphones, so entering information through Oracle Voice, three times faster than typing on a small phone keyboard allows, was a key user experience improvement. Oracle Voice required no training, and in a recent study, 80% of Oracle field sales reps testing it said the product exceeded their expectations for productivity and ease of use.

The AppsLab team has been investigating voice as input with the Echo, building integrations into the Oracle Applications Cloud: “start my day” or “open a lead,” for example.

“Commands like this allow a user to navigate and use Cloud Applications simply by voice,” Kuramoto said. “Our research continues, and as we learn more about people’s expectations, and as our users use voice commercially through their own Amazon Echos and Apple Watches, we’ll be able to build better and more desirable voice integrations.”
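To make that concrete, here is a minimal sketch of how an Echo command like “open a lead” could be handled, assuming an AWS Lambda function speaking the Alexa Skills Kit’s JSON interface; the intent name, endpoint URL, and Sales Cloud lookup are hypothetical placeholders, not the AppsLab’s actual integration.

```python
import json
import urllib2  # Python 2.7, the common Lambda runtime of the era


def lambda_handler(event, context):
    """Entry point Alexa invokes for each spoken request."""
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "OpenLeadIntent"):  # hypothetical intent
        lead = open_lead()
        speech = "Opening lead %s." % lead.get("Name", "unknown")
    else:
        speech = "Sorry, I didn't catch that."
    # Standard Alexa Skills Kit response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }


def open_lead():
    """Hypothetical REST call that fetches the user's newest sales lead."""
    resp = urllib2.urlopen("https://example.com/sales/api/leads?limit=1")
    items = json.load(resp).get("items", [])
    return items[0] if items else {}
```

From there, a skill like this would only need to be registered in the Alexa developer console with a matching intent and sample utterances such as “open a lead.”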

Yet potential users set the bar very high for voice. To bring the technology of voice as input to the enterprise, Oracle needs to solve not only context problems but also human problems.

“When we talk to each other, we go back and forth and ask questions,” GVP Ashley said. Even though speaking to a device works most of the time, users remember only the times it doesn’t work. It can only be used in certain settings — for example, in a quiet environment and definitely not during a meeting. Correcting incorrect input can be difficult, too, so a lot of drop-off happens. Unless a voice system is specific to a profession or area, such as in a hospital, it won’t have context for what the user says, so the information is not very useful.

On the bright side, the introduction of a software development kit (SDK) has allowed the AppsLab to build and demonstrate some voice interactions with Cloud Applications. Siri and Google Now have opened up more to developers lately, too. “As their capabilities expand to include devices like smartwatches,” Kuramoto said, “people will find interesting ways to use voice.”


Gesture as input

Gesture as input is a bit more promising for the enterprise right now. The technology seen in the movie “Minority Report” is not quite within reach, but the OAUX team is hard at work bringing more usable gestures to the Oracle Applications Cloud.

One of the first examples of this kind of technology can be found in video game consoles like the Nintendo Wii and the Xbox Kinect. It took further shape with the iPod Touch and iPhone, but “the tablet took it to another level with more gesture — the pinch and zoom,” Nguyen said. With some handheld devices, though, certain apps took it a little too far, and gestures for gestures’ sake aren’t always appropriate. “It’s more important to stay more natural with what your gestures are,” Nguyen said. That will make it easier for people to participate.

Ergonomics is another issue: “Minority Report”-style gestures take too much energy. “People aren’t made to stand up all day and wave their arms around,” Ashley said. The burden should not be on the user to learn an elaborate gesture language; gestures should be minimal and natural.

Anthony Lai, Senior User Experience Architect, began experimenting with the Leap Motion, a motion-control device that came out a few years ago. He and his colleagues built a robot arm with infrared detectors and a camera inside it to detect and mirror hand gestures, and they integrated it with the Leap Motion.
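For the curious, here is a rough sketch of what polling the Leap Motion for hand data can look like, using the SDK v2 Python bindings (which targeted Python 2.7); the servo mapping and the send_to_arm command channel are hypothetical stand-ins, not the AppsLab’s actual robot arm hardware.

```python
import Leap  # Leap Motion SDK v2 Python bindings


def clamp(value, lo=0, hi=180):
    """Squeeze a raw reading into a safe (hypothetical) servo range."""
    return max(lo, min(hi, int(value)))


def send_to_arm(base, lift, tilt):
    """Placeholder for the robot arm's real command channel (e.g., serial)."""
    print("arm <- base=%d lift=%d tilt=%d" % (base, lift, tilt))


class MirrorListener(Leap.Listener):
    """Reads the hand pose each frame and mirrors it on the robot arm."""

    def on_frame(self, controller):
        frame = controller.frame()
        if frame.hands.is_empty:
            return
        hand = frame.hands[0]
        # Palm position is reported in millimeters relative to the sensor.
        pos = hand.palm_position
        pitch = hand.direction.pitch * Leap.RAD_TO_DEG
        # Hypothetical mapping from palm pose to three servo angles.
        send_to_arm(base=clamp(pos.x + 90),
                    lift=clamp(pos.y - 50),
                    tilt=clamp(pitch + 90))


if __name__ == "__main__":
    controller = Leap.Controller()
    listener = MirrorListener()
    controller.add_listener(listener)
    raw_input("Mirroring hand movements; press Enter to quit...\n")
```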

That’s cool and all, but where’s the enterprise use case? How can an employee at any company actually use this? Lai has a few predictions: operating machinery in the manufacturing and medical industries, or working in areas where radiation is a factor.

“For enterprise use cases, we’re not playing with a toy,” he said. “We’re trying to do something sort of serious here, and we want to make sure everything works how a user would expect.”

But don’t forget about the smart office we discussed in our emerging tech article on wearables. You have the keyboard for your normal computer, but how do you interact with the ambient screen? The Leap Motion controller saves the day: it lets users swipe back and forth on the ambient screen and execute certain commands there, such as “grabbing” a location on the map and “throwing” it to the working machine to open up specific details. You can see this in action in the AppsLab’s demo.
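As a rough illustration of that interaction, the Leap SDK’s built-in gesture recognizer can report swipes like this (wired to a Controller the same way as the sketch above); the “next” and “previous” commands dispatched here are hypothetical placeholders for whatever actually drives the ambient screen.

```python
import Leap


def send_command(name):
    """Placeholder for the channel that drives the ambient screen."""
    print("ambient screen <- %s" % name)


class AmbientScreenListener(Leap.Listener):
    """Turns completed Leap Motion swipes into ambient-screen commands."""

    def on_connect(self, controller):
        # Gesture recognition is off by default in SDK v2; opt in to swipes.
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)

    def on_frame(self, controller):
        for gesture in controller.frame().gestures():
            if gesture.type != Leap.Gesture.TYPE_SWIPE:
                continue
            if gesture.state != Leap.Gesture.STATE_STOP:
                continue  # act once, when the swipe finishes
            swipe = Leap.SwipeGesture(gesture)
            # direction is a unit vector; x > 0 means a left-to-right swipe.
            send_command("next" if swipe.direction.x > 0 else "previous")
```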

Nguyen said, “We always explore capabilities of tech as it is today, but the challenge is always finding enterprise applicability of it. We want to find those durable core problems or tasks that a user has and how we can better enhance it or give them an alternate way to do the task.”

Interested in learning more?
Check out the Emerging Technology page on the Usable Apps website to learn more about Oracle and our research on all types of emerging technology, from wearables and IoT to gestures and voice as input and everything in between. Check back, too, because the field is only going to grow!


 
