Beyond the touchscreen: interfaces of the future
12th Mar 2011 | 10:00
Multi-touch, voice control, augmented reality and more
We're living in interesting times. Tech firms want to take over our TVs, our phones are more powerful than some recent PCs, and we can control games consoles through the medium of dance.
New interfaces are all around us, from touch screens to augmented reality, and the way we interact with technology is being transformed. But which interfaces are genuine leaps forward and which are digital dead ends? What makes a good user interface anyway?
Videos of very young children playing with iPads have become an internet cliché, but they demonstrate how intuitive technology is becoming: nobody was filming two-year-olds using IBM's original PC.
The IBM PC's command line interface was streets ahead of 1970s computers' switches, of course, but it wasn't until the arrival of the graphical user interfaces of the 1980s and beyond that personal computing became simple enough for the mainstream. And now the landscape is changing again.
After more than two decades of dominance, WIMP interfaces - windows, icons, menus and pointing devices - face new challengers in the form of multi-touch devices, voice control, gesture input and augmented reality.
This year's Consumer Electronics Show in Las Vegas showed where we're heading. Microsoft demonstrated its Kinect sensor doing a great job of voice recognition - no mean feat in a crowded venue - and gesture control, with users controlling video apps with waves of their hands, while everyone else seemed to be showing off some form of touch-controlled tablet.
Gestures certainly seem more futuristic than Google's vision of a QWERTY keyboard in every remote control, but are these ideas a mere fad? Is multi-touch a credible form of PC input?
"It's credible," says Gord Kurtenbach, Autodesk's Director of Research. "We're just at the start of determining how multi-touch can be used in the context of desktop PC systems. Obviously adding multi-touch to the display monitor makes a cool demo, but when using it over a period of time your arms get tired… Some of the most promising work I've seen uses multi-touch in concert with standard desktop configurations of mouse, keyboard and monitor. After all, the keyboard is a multi-touch device and that has been pretty successful, so I can imagine multi-touch tablets and displays being used horizontally as controller devices."
Everything's an interface
Multi-touch certainly won't be the only way we interact with our gadgets. As James McCarthy, senior Windows consumer product manager with Microsoft, points out, the era of natural user interfaces (NUIs) has already begun: Windows' Photo Gallery uses image processing to recognise faces, eight million Xbox 360 owners control their console using Kinect and some two million motorists are talking to Ford Sync-equipped vehicles.
Forget the ropey speech recognition systems of the 1990s: voice control is back, and this time it works. That's partly because modern systems specialise, partly because hardware is much more powerful, and partly because cloud computing allows remote processing and real-time results.
"The way we interact with technology is becoming more natural, allowing our devices to work on our behalf instead of just at our command," McCarthy says. "You can already envision the world we're imagining through Microsoft research projects like a prototype of an automated receptionist; Microsoft LightSpace, a technology prototype showing the potential of using depth-sensing cameras to naturally interact with any surface; and Project Gustav, a realistic painting prototype that lets artists become immersed in the digital painting experience, just as Kinect helps people become the controller in the gaming experience."
The magic touch
Touch control has been around for a long time. What's different about today's touch technology is that touch has become multi-touch: instead of prodding with a stylus we're pinching and pulling with one, two or ten fingers. That means tablets can be typewriters, pianos, canvases, or anything else we fancy playing with.
Done well, multi-touch removes abstraction - instead of moving a device like a mouse to point at something, you just point at it; instead of clicking on piano keys or trying to remember which keyboard key corresponds to each note, you just play the note.
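The mechanics behind one of those gestures are simple enough to sketch. Here's a minimal, illustrative Python snippet - not code from any real toolkit - showing how a pinch gesture can be turned into a zoom factor: the scale is just the ratio of the current finger separation to the separation when the gesture began.

```python
import math

def pinch_scale(start_a, start_b, now_a, now_b):
    """Zoom factor for a two-finger pinch: the ratio of the current
    finger separation to the separation when the gesture began."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(now_a, now_b) / dist(start_a, start_b)

# Fingers start 100px apart and spread to 200px: zoom in by a factor of 2.
zoom_in = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))

# Fingers move closer together: a factor below 1 means zoom out.
zoom_out = pinch_scale((0, 0), (0, 100), (0, 0), (0, 50))
```

Real toolkits layer smoothing, momentum and touch-tracking on top, but the core mapping from fingers to zoom really is this direct - which is exactly why the gesture feels so natural.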
Multi-touch is a good illustration of Fitts's Law, which was proposed by Paul Fitts in 1954. The law states that the time needed to move to a target area is a function of the distance to and the size of the target. In effect, that means big icons are easier to hit than little ones, top-of-screen menus are easier to click on than top-of-window ones and pop-up menus are faster than pull-downs.
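Fitts's Law is usually written in its Shannon formulation, MT = a + b x log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are constants fitted from experiments. A quick Python sketch - with made-up constants, purely for illustration - shows why the big, nearby icon wins:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds under the Shannon
    formulation of Fitts's Law: MT = a + b * log2(D/W + 1).
    The constants a and b here are illustrative, not measured."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

big_close_icon = fitts_time(distance=100, width=64)   # roughly 0.30s
small_far_icon = fitts_time(distance=500, width=16)   # roughly 0.85s
```

The same arithmetic explains the menu examples: a top-of-screen menu effectively has infinite height, because the pointer stops at the screen edge, so W becomes huge and the predicted time drops.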
There's more to effective UIs than big icons, of course. Good UIs remove complexity and obstacles, so for example, a mouse-and-windows OS is more intuitive than a command line interface and a multi-touch OS can be more intuitive than a mouse-based one.
Designers can do several things to streamline interfaces. They can use metaphors to make things more obvious - for example, we all know what desktops are, what control panels do and what recycle bins are for - and they can use icons and nested menus to reduce visual clutter. They can use context-sensitive menus and hinting, where the interface offers visual clues such as tooltips, and they can add indicators to icons.
However, if you keep adding features to the underlying system, eventually you'll run out of tweaks. As Microsoft program manager Jensen Harris recalls, by Office 2003 the UI was beginning to feel bloated, "like a suitcase stuffed to the gills with vacation clothes".
No wonder - it was essentially the same UI that Office had in 1989. As Harris explains, "There's a point beyond which menus and toolbars cease to scale well. A flat menu with eight well-organised commands on it works just great; a three-level hierarchical menu containing 35 loosely-related commands can be a bit of a disaster."
Microsoft redesigned the Office UI and the result was the Ribbon, which makes Office less intimidating and features easier to find. It annoyed power users though, demonstrating that when it comes to UIs, you can't please everybody.
An extra dimension
Three dimensional games, TVs and monitors are being hyped this year. How about 3D software?
Microsoft and Nvidia have teamed up for what they call the 3D PC, but that's about consuming games and movies. A 3D OS would be far trickier to put together. Just ask Autodesk, which has been in the 3D business for decades.
"Many people believe that because in the real world we navigate 3D physical spaces with ease, 3D navigation should make navigation of 3D virtual spaces easy as well," Kurtenbach says.
"However, everything changes when the space is behind the display monitor. It becomes hard, like learning to fly an aeroplane using controls. So the old idea that your desktop will be easier to use if it's a 3D virtual space that you can navigate around hasn't really made things easier. Certainly, we've made big advances in 3D navigation with such product features as SteeringWheels, but these interfaces are for navigating spaces that truly need to be in 3D, such as CAD models."
That doesn't mean it isn't possible. Toshiba recently showed an auto-stereoscopic display that uses a six-axis accelerometer, effectively letting users look around 3D objects, and Autodesk has the wonderfully named Boom Chameleon.
"The Boom Chameleon is a very different approach to 3D navigation where the tablet PC acts like a window into a virtual space," Kurtenbach says. "When you move the tablet, you change your window into the virtual space. With current tablet systems that have position sensors or built-in cameras, this type of thing can already be accomplished and I've seen examples of the Chameleon on tablets and mobile devices built by student researchers. As devices become more powerful, we'll see this becoming more practical and useful."
One vision of the future is to replace the desktop and monitor with multi-touch screens. That's the thinking behind the BendDesk. BendDesk is a multi-touch display in two sections: one where you'd normally have a screen and one where you'd normally have a keyboard and mouse.
As its creators, members of Aachen University's Media Computing Group, explain: "BendDesk is a multi-touch desk that combines a vertical and a horizontal surface with a curve into one large interactive workspace. This workspace can be used to display digital content like documents, photos or videos. Multi-touch technology allows the user to interact with the entire surface."
BendDesk is a concept, but advances in e-paper technology will make it possible within a few years. One thing it doesn't do is offer physical feedback of the kind we expect from keyboards, mice and other devices. But there may be an answer to that too: haptic feedback.
Haptic feedback is physical feedback: the click of the mouse button, the pressure you feel when your fingers press a key and so on. Apple and Microsoft have both filed patents for haptic systems: the former's idea places small vibrators around a screen, with multiple vibrators creating effects at specific locations on a phone or tablet's surface, while Microsoft intends to make screens that can shape-shift.
Microsoft's application describes a screen made from hundreds of tiny, light-activated tiles: hit them with the right frequency and they change shape to make a D-pad controller, a keyboard or text in Braille.
The move to mobile
Smartphones enable all kinds of new interfaces. Apps can use the camera as an input device, or be controlled by voice. Accelerometers, gyroscopes and GPS mean phones know where they are, what they're pointing at and if they're moving, which makes augmented reality possible.
With augmented reality, the world around you is the interface. Done well, as in apps such as Nearest Tube, augmented reality interfaces are intuitive and instantly understood - although our experiences to date suggest the technology is still in the 'coming soon' category rather than a useful, everyday one.
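Under the hood, an app like Nearest Tube combines those sensor readings: GPS gives the phone's position, the compass gives its heading, and a little spherical trigonometry gives the bearing to each point of interest; the overlay is drawn when heading and bearing roughly line up. Here's a hedged Python sketch of the bearing calculation - the standard great-circle formula, not code from any actual app:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees (0 = north, 90 = east) from
    (lat1, lon1) to (lat2, lon2), using the great-circle formula.
    An AR app draws its overlay when the phone's compass heading
    roughly matches this value."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

# A point due east of you sits at a bearing of 90 degrees.
east = bearing_to(0, 0, 0, 1)
```

The app then only needs to check whether that bearing falls within the camera's field of view around the current compass heading before drawing the label on screen.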
Even without augmented reality, apps represent a new frontier in interface design: with an OS that's happy to give the entire screen over to an app, designers have been free to experiment. It's reminiscent of the early days of the web, and interface conventions will emerge.
As technology evolves, networks improve and we cram ever more processing power into ever-smaller form factors, we'll use a mix of different interfaces: voice, augmented reality and maybe even a keyboard, real or virtual.
As Gord Kurtenbach puts it: "We'll continue to add different setups to support different ways and times of working. Back in the '80s, I had to go to the university lab to use a computer. Later, I got one at home and I could do different types of work from both. Today I have a desktop at home and at work, a couple of laptops, a tablet, a mobile phone and so on. I use all of them, but in different ways for different things in different places."
First published in PC Plus Issue 305