Time Traveling To The Future Of User Interfaces

Written By John Sonmez

I really dislike using a keyboard and a mouse to interact with a computer.

Using a mouse is a fairly universal skill; once you learn to use a mouse, you can use any mouse.  But, keyboards are often very different, and it can be frustrating to try to use an unfamiliar one.

When I switch between my laptop and my desktop keyboard, it is a jarring experience.  I feel like I am learning to type all over again.  (Of course I never really learned to type, but that is beside the point; my three-finger typing style seems to work for me.)

When I switch to a laptop, I also have to contend with using a touchpad instead of a mouse most of the time.  Sure, you can plug in a mouse, but it isn't very convenient, and you can't do it everywhere.

I also find that no matter how awesome I get at keyboard shortcuts, I still have to pick up that mouse or use the touchpad.  Switching between the two interfaces makes it seem like computers were designed for three-armed beings, not humans.

Even when I look at a laptop, it is clear that half of the entire design is dedicated to the keyboard and touchpad—that is a large amount of wasted space.

I’m not going to say touch is the answer

You may think I am headed in the direction of suggesting that tablets solve all our problems by giving us a touch interface, but that is not the case.

Touch is pretty awesome.  I use my iPad much more than I ever thought I would.  Not having the burden of the keyboard and mouse or touchpad is great.

But, when I go to do some text entry on my tablet or my phone, things break down quite a bit.

On-screen keyboards are pretty decent, but they end up taking up half of the screen and the lack of tactile feedback makes it difficult to type without looking directly at the keyboard itself.  Some people are able to rely on autocorrect and just let their fingers fly, but somehow that seems dirty and wrong to me, as if I am training bad habits into my fingers.

Touch itself is not a great interface for interacting with computers.  Computer visual surfaces are flat and lack texture, so there is no advantage to using our sense of touch on them.  Our fingers are also big compared to modern screen resolutions, so precision is thrown out the window when we relegate ourselves to touch interfaces.

It is completely silly that touch technology actually blocks us from viewing the part of the screen we want to touch.  If we had greaseless pointy transparent digits, perhaps touch would make the most sense.

Why did everything move to touch then?  What is the big thing that touch does for us?

It is pretty simple: the only real value of touch is to eliminate the use of a mouse or touchpad and a keyboard.

Not convinced?

I wasn’t either, till I thought about it a bit more.

But, consider this: if you were given the option of either keeping the touch interface on your tablet, or going back to a mouse-style cursor that you could control with your mind, which would you prefer?

And that is exactly why touch is not the future: it is a solution to a specific problem, the mouse.

The real future

The good news is there are many entrepreneurs and inventors who agree with me, and they are currently building new and better ways for us to interact with computers.

Eye control

This technology has some great potential.  As the camera technology and processing power in hardware devices improve, the possibility of tracking eye movement to essentially replace a mouse is becoming more and more real.
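To make that concrete: the core software problem in a gaze-driven cursor is that raw gaze samples jitter, so they have to be smoothed before they can drive a pointer.  Here is a minimal sketch in Python, assuming a hypothetical get_gaze_sample() that returns screen coordinates from a tracker and a move_cursor() that positions the pointer; both names are placeholders, not any vendor's actual API.

```python
import time

ALPHA = 0.2  # smoothing factor: lower = steadier cursor, higher = more responsive

def gaze_cursor(get_gaze_sample, move_cursor):
    """Drive a cursor from noisy gaze samples using an exponential moving average."""
    sx, sy = get_gaze_sample()             # seed the filter with the first sample
    while True:
        x, y = get_gaze_sample()           # raw gaze point, in screen pixels
        sx = ALPHA * x + (1 - ALPHA) * sx  # blend the new sample into the estimate
        sy = ALPHA * y + (1 - ALPHA) * sy
        move_cursor(int(sx), int(sy))
        time.sleep(1 / 60)                 # poll at roughly display refresh rate
```

The ALPHA trade-off is the whole game here: too low and the cursor lags your eyes, too high and it shakes.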

There are two companies that I know are pioneering this technology and they have some pretty impressive demos.

TheEyeTribe has an “EyeDock” that allows for controlling a tablet with just your eyes.

They have a pretty impressive Windows 8 tablet demo which shows some precise cursor control using just your eyes.

Tobii is another company that is developing some pretty cool eye tracking technology.  They seem to be more focused on the disability market right now, but you can actually buy one of their devices on Amazon.

The video demo for PCEye freaks me the hell out though.  I don’t recommend watching it before bed.

But Tobii also has a consumer device that appears to be coming out pretty soon, the Tobii REX.

Subvocal recognition (SVR)

This technology is based on detecting the internal speech that you are generating in your mind right now as you are reading these words.

The basic idea is that when you subvocalize, you actually send electrical signals that can be picked up and interpreted.  Using speech recognition, this would allow a person to control a computer just by thinking the words.  This would be a great way to do text entry to replace a keyboard, on screen or off, when this technology improves.
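I have no idea what the internals of these products actually look like, but conceptually the pipeline is simple: window the electrode signal, classify each window into a word, and hand the words to the rest of the system.  A purely illustrative sketch; read_emg_window() and classify_word() are assumptions, not any real device's API:

```python
def subvocal_to_text(read_emg_window, classify_word, emit):
    """Toy subvocal-recognition loop: window the nerve-signal stream,
    classify each window into a word, and emit the recognized text."""
    while True:
        window = read_emg_window()                 # e.g. half a second of electrode samples
        word, confidence = classify_word(window)   # trained model: signals -> (word, score)
        if word is not None and confidence > 0.8:  # drop low-confidence windows (noise, silence)
            emit(word)
```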

NASA has been working on technology related to this idea.

And a company called Ambient has a product called Audeo that is already in production.  (The demo is a bit rough though.) You can actually buy the basic kit for $2000.

Gesture control

You’ve probably already heard of the Kinect, unless you are living under a rock.  And while that technology is pretty amazing, it isn’t exactly the best tool for controlling a PC.

But, there are several other new technologies based on gesture control that seem promising.

There are two basic ways of doing gesture control.  One is using cameras to figure out exactly where a person is and track their movements.  The other is to use accelerometers to detect when a user is moving a device (the Wii Remote for Nintendo's Wii, for example).
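The accelerometer approach is easy to picture in code.  Here is a toy "flick" detector; read_accel() is a placeholder for whatever a device's SDK would provide, and the threshold values are guesses you would tune:

```python
import math
import time

FLICK_THRESHOLD = 2.5   # extra g-force treated as a deliberate flick (a guess; tune it)
COOLDOWN = 0.3          # seconds to ignore samples after firing, so one jerk = one event

def detect_flicks(read_accel, on_flick):
    """Fire on_flick() whenever total acceleration spikes well above resting gravity."""
    last_fired = 0.0
    while True:
        ax, ay, az = read_accel()                  # one sample, in units of g
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude - 1.0 > FLICK_THRESHOLD and time.time() - last_fired > COOLDOWN:
            on_flick()
            last_fired = time.time()
```

Camera-based tracking is a much harder computer vision problem, which is part of why devices like the Kinect need dedicated depth-sensing hardware.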

A company called Leap is very close to releasing a consumer-targeted product called the Leap Motion, which they are pricing at only $79.  They already have plans to sell it in Best Buy stores, and it looks very promising.

Another awesome technology that I already pre-ordered, because I always wanted an excuse to wear bracers, is the MYO, a gesture-controlled armband that works through a combination of accelerometers and sensors that pick up the electrical impulses in your arm.

What is cool about the MYO is that you don't have to be right in front of the PC, and it can detect gestures like a finger snap.  Plus, like I said, it is a pretty sweet-looking armband: Conan meets Blade Runner!

Obviously, video-based gesture controls won't work well for mobile devices, but wearable devices like the MYO that use accelerometers and electrical impulses could be used anywhere.  You could control your phone while it is in your pocket.

Augmented reality and heads up displays

One burden of modern computing that I haven’t mentioned so far is the need to carry around a physical display.

A user interface is a two-way street: the computer communicates to the user, and the user communicates to the computer.

Steve Mann developed a technology called EyeTap all the way back in 1981.  The EyeTap was basically a wearable computer that projected a computer-generated image on top of what you were viewing, directly onto your eye.

Lately, Google Glass has been getting all the attention in this area, as Google is pretty close to releasing their augmented-reality eyewear, which will let a user record video, see augmented reality overlays, and access the internet using voice commands.

Another company you may not have heard of is Vuzix, and they have a product that is pretty close to release as well, the Smart Glasses M100.

Brain-computer Interface (BCI)

Why not skip everything else and go directly to the brain?

There are a few companies that are putting together technology to do just that.

I actually bought a device called the MindWave from NeuroSky, and while it is pretty impressive, it is still more of a toy than a serious way to control a computer.  It is basically able to detect different brainwave patterns, such as concentration or relaxation.  As you can imagine, this doesn't give you a huge amount of control, but it is still pretty fascinating.
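To give you an idea of how coarse that control is, here is roughly what software on top of such a headset ends up looking like.  NeuroSky's SDK does report attention and meditation scores on a 0-100 scale, but read_report() and the threshold below are my own placeholders:

```python
ATTENTION_TRIGGER = 70   # assumed threshold on the 0-100 scale; tune per user

def mind_switch(read_report, on_concentrate, on_relax):
    """Map coarse brainwave scores to two 'buttons': concentrate or relax."""
    while True:
        report = read_report()        # e.g. {"attention": 0-100, "meditation": 0-100}
        if report["attention"] > ATTENTION_TRIGGER:
            on_concentrate()          # e.g. select whatever is highlighted
        elif report["meditation"] > ATTENTION_TRIGGER:
            on_relax()                # e.g. dismiss it / go back
```

Two fuzzy switches is a long way from a keyboard and mouse, which is why I call it a toy for now.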

I haven’t tried the EPOC neuroheadset yet, but it has even more promise.  It has 14 sensors, which is a bit more intrusive, but it supposedly can detect your thoughts regarding 12 different movement directions, emotions, facial expressions, and head rotation.

So where are we headed?

It is hard to say exactly what technology will win out in the end.

I think we are likely to see aspects of all these technologies eventually combined, to the point where they are so ubiquitous in our interaction with computers that we forget they even exist.

I can easily imagine a future where we don’t need screens, because we have glasses or implants that directly project images on our retinas or directly interface with the imaging system in our brains.

I can easily see us controlling computers by speech, thought, eye movement, and gesture, seamlessly transitioning between different devices and environments.

There is no reason why eye-tracking technology couldn't detect where our focus is and let us interact with the object of our focus by thinking, saying a command, or making a gesture.
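In software terms, that combination is just a dispatch loop pairing "what you are looking at" with "what you asked for", whichever channel the command arrives on.  A toy sketch, with every input-source name made up for illustration:

```python
def multimodal_loop(gaze_target, next_command):
    """Pair the current gaze target with a command from any input channel."""
    actions = {
        "open":   lambda obj: obj.open(),
        "close":  lambda obj: obj.close(),
        "delete": lambda obj: obj.delete(),
    }
    while True:
        command = next_command()   # blocks until voice, gesture, or thought arrives
        target = gaze_target()     # whatever the eye tracker says has focus
        if target is not None and command in actions:
            actions[command](target)
```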

What I am sure of, though, is that the tablet and phone technology of today, with its touch interfaces, is not the future.  It is a great transition step to get us away from the millstone around our necks that is the keyboard and mouse, but it is far from the optimal solution.  Exciting times are ahead indeed.