The human interface


I apologize for the lack of posts lately. I haven't had access to a computer or the time to think of anything interesting to write, as I've been doing my civil service. Anyway, let's post something now!

As a long-time computer user, I've used many different user interfaces, both software and hardware.

The typical hardware devices are the keyboard and the mouse. Many people have probably also used touchpads on laptops or trackballs. There are also touch screens, tablets, joysticks, gamepads, various 3D controllers and so on.

The interfaces software provides are limited by the hardware. This is why we have icons, buttons, text fields and other such things – they are easy to use with a keyboard and a mouse. And back in the day, before mice, you had command prompts and shells.

The current ways of interacting with computers are limited in many ways, and they don't feel very natural either, but I think they are the best methods we have – for now…

Limitations of keyboards and mice

Keyboards and mice aren't "natural" – you have to learn to use them. If you have never used a mouse, you won't be able to use one right away. On the other hand, it's relatively easy to learn the principle, and a mouse can be used to control the cursor very accurately. Another limitation of the mouse is that it can only move in two dimensions, but you could argue that 3D user interfaces might not be very useful anyway.

We can throw trackballs into the same category as mice. While using a trackball differs from using a mouse, the principle is the same.

Keyboards are quite obvious right from the start, but learning to type fast is a completely different matter. You can learn to use a mouse reasonably well in a short time, but learning to type fast takes much longer.

Both kinds of devices also have ergonomic issues. You may have seen the warning labels on new keyboards, telling you how harmful they can be.

Keyboards do have one merit, though: they are the fastest way to type text, short of something that recognizes your thoughts and types them out, which is obviously science fiction.

Other types of devices

Another common way to interact with the cursor is the touchpad found in most laptops. It’s like a miniature tablet, except you don’t use a pen with it.

Tablets are a personal favorite of mine. If you have a big tablet (A5, A4), you can have it represent the whole screen surface: tapping the top corner of the tablet always clicks the top corner of the screen, and so on. This is very nice, since in this absolute mode you can point very precisely.
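To make that absolute mode concrete, here is a minimal sketch of how a pen position could be mapped straight to a screen pixel. The tablet dimensions and screen resolution are made-up example values, not taken from any particular device:

```python
# A minimal sketch of the "absolute mode" mapping described above.
# Tablet size and screen resolution are hypothetical example values.

TABLET_W_MM, TABLET_H_MM = 210.0, 148.0   # roughly an A5 active area
SCREEN_W_PX, SCREEN_H_PX = 1280, 1024     # example screen resolution

def tablet_to_screen(pen_x_mm: float, pen_y_mm: float) -> tuple[int, int]:
    """Map a pen position on the tablet directly to a screen pixel.

    Unlike a mouse, the mapping is absolute: the same spot on the
    tablet always corresponds to the same spot on the screen.
    """
    x = int(pen_x_mm / TABLET_W_MM * (SCREEN_W_PX - 1))
    y = int(pen_y_mm / TABLET_H_MM * (SCREEN_H_PX - 1))
    return x, y

# The top-left corner of the tablet always hits pixel (0, 0),
# and the bottom-right corner always hits (1279, 1023).
print(tablet_to_screen(0.0, 0.0))
print(tablet_to_screen(210.0, 148.0))
```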

Another reason tablets are nice is that tapping one has a nice feel to it. You do it with a pen, and you get the feedback of actually tapping something with the pen and seeing the computer react. It's very close to actually tapping the screen.

However nice they are, tablets can be difficult to learn. They require just as much hand-eye coordination as a mouse does, since you don't actually see the screen on the tablet. You look at the screen while using your hand on the tablet, and that can be difficult at first, just like using a mouse.

I've also seen a 3D pen thingy, which lets you move in three dimensions. I don't really know where you would use something like that, other than 3D modeling or the like.

Touch screens

Touch screens deserve their own section.

Touch screens are possibly the most intuitive method of using a computer. You can use your finger or a pen, depending on the type of screen, to click things, drag things and so on.

Depending on the model, some can even detect pressure and pen orientation. This obviously makes for a great tool for drawing or digital painting.

While using a touch screen is easy, you would still have to learn to use Windows (I'll just use Windows here; you can think of KDE, OS X or whatever). The touch screen would be limited by the underlying OS: if you wanted to "push" a window by touching the screen and then sliding towards the window, it wouldn't work, because Windows doesn't support anything like that.
Now, I don't know about OSes built specifically for touch screens – they might very well have a feature like the one above.

Also, most touch screens don't support multiple fingers or pens. You can't resize a window with two fingers by holding it down with one and dragging its edge with the other. Some touch screens do support more than one contact point, but they aren't very common, nor are they used to their full potential.
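As a rough illustration of what such a two-contact gesture could look like in software, here is a small sketch. The Window structure and the touch points are hypothetical and not any real toolkit's API; it just shows the idea of one contact anchoring the window while the other drags its edge:

```python
# A rough sketch of the two-finger resize gesture described above.
# The Window class and touch points are hypothetical, not a real toolkit API.

from dataclasses import dataclass

@dataclass
class Window:
    x: int  # left edge
    y: int  # top edge
    w: int  # width
    h: int  # height

def resize_with_two_touches(win: Window, hold: tuple[int, int], drag: tuple[int, int]) -> None:
    """One contact 'holds' the window in place, the other drags its right edge."""
    hold_x, hold_y = hold
    drag_x, _ = drag
    # Only react if the holding finger is actually inside the window.
    if not (win.x <= hold_x <= win.x + win.w and win.y <= hold_y <= win.y + win.h):
        return
    # The dragged contact defines the new position of the right edge.
    win.w = max(100, drag_x - win.x)  # enforce a minimum width

win = Window(x=200, y=150, w=400, h=300)
resize_with_two_touches(win, hold=(250, 200), drag=(900, 200))
print(win)  # Window(x=200, y=150, w=700, h=300)
```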

Near future

Microsoft is going to release the Microsoft Surface in the coming months. It will possibly be a good indication of what the future holds.

The Surface is basically a big touch screen, but it uses cameras and a projector, and it is designed to detect 52 simultaneous touches. Its capabilities have been demonstrated before, and I have to say it looks very promising. The way you interact with objects on the screen seems very easy and natural.

How usable would a technology like the Surface be for a "real world" task, such as browsing the files on your hard disk? In theory it could work very well, as long as they don't go too far in making it feel like sorting through papers on your desk.

I think we will see increasing numbers of touch screens in the near future. Hopefully we will be able to get a big multi-touch screen for a reasonable price within a few years. It is the technology that will be the easiest to adapt to current conventions.

There are also devices which can track head movement, such as NaturalPoint's TrackIR and SmartNAV. I haven't personally used either device, but I've heard that TrackIR does its job rather well. Perhaps this concept could be expanded to tracking eye movement, so you could simply look at things on the screen. There are, however, some difficulties with that, such as some methods forcing the user to wear special contact lenses.

Nintendo's Wiimote is a good example of what we might also see. Perhaps gloves you could wear to turn your hands into "Wiimotes" – the Power Glove of 2008, perhaps. This could be a very interesting way to interact with computers or games.

Far future (or sci-fi)

It's always fun to hypothesize about what kind of stuff we will have in the future. Let's see…

Starting from the most feasible ideas, it would be possible to track hand movement in three dimensions. Combined with a large projected 3D holographic image, or even a 2D one, this could open up interesting possibilities. The basic principle would be similar to touch screens, but without pressure detection and with an added third dimension. One could imagine "floating" windows that you could move by "touching" them in the air.

Hand tracking is something that could be done with the technology we have now. It would need motion-tracking hardware and custom software, and it would be rather expensive. Thus it would not be something a common consumer could own – not yet.

I'm a big fan of Masamune Shirow's Ghost in the Shell, a cyberpunk-ish manga, also adapted into three anime movies and two anime series. It has various interesting concepts, such as the cyberbrain. In some other works there are also "synapse links", which basically connect the human nervous system to a computer.

Of course, both kinds of technology are far from reality. A simpler idea, such as connecting a robotic arm to the nerves of a human arm, might be feasible in the nearer future, though. This could allow a different way of doing the hand tracking mentioned above, as you could plug your hands directly into the device.

Going back to the idea of linking your nervous system directly to a computer… it would definitely allow very interesting things. It would become possible to accurately track the condition of the whole body, and thus to track the motions of the whole body. With an even more precise link, such as implants in the brain, it could become possible to override the image feed from the eyes and the sound feed from the ears, inserting the computer's image and sound right into your brain.

Ghost in the Shell features the cyberbrain, which is essentially a computer that replaces your brain. It has wireless access to the net and so on, and prominently visible in many scenes is the communication between characters "inside" their heads: they get an overlay on their vision showing who's calling and so forth. It also features cyberspace, which could provide endless possibilities for interacting with applications.

The wireless connectivity of the cyberbrain also introduces various interesting security risks… but since this isn't a post about the safety of the people of the year 2000, let's forget about that. However cool, something like this won't happen for a long, long time, and even when it could, there will probably be a ton of issues, such as finding human test subjects.

Conclusion

So what is the human interface? Right now, the best user experience is probably a touch-screen-based system. It is easily learned and gives you something "concrete": using a mouse is somewhat abstract, as there is no immediately obvious connection between moving the mouse and the cursor moving. A pointing device similar to the Wiimote might give a very "hands on" feel as well.

The mouse is holding its ground very well in one area, namely first-person shooter games. It is still the most precise tool available for aiming, at least the way aiming works in current games. Of course, if aiming worked "realistically", in a similar fashion to light-gun games such as Virtua Cop, it would be possible to beat the mouse with a direct pointing device (i.e. a gun-like controller) or a touch screen.

I suspect we will see more touch- and pointing-based (à la Wiimote) devices in the near future. What remains to be seen is whether they will be able to overcome the good old mouse-and-keyboard pairing, and whether someone will invent a better typing tool than the keyboard.

Well, this was another of those "put your thoughts on paper" kinds of posts… but so what – I think it would be very interesting to hear other people's opinions on things like this.