Hank Schlesinger gives us his view on how technology is shaping the future of gaming interfaces and what this brings to the overall experience

In 1952 engineers at the firm then known as Ferranti Canada plopped a smallish ball from the game of five-pin bowling into a serious-looking metal frame with exposed components. The ball, which floated gracefully on air bearings, was a new kind of computer interface engineered into existence specifically for a system called DATAR (Digital Automated Tracking and Resolving), designed to track ships in real time. DATAR worked, but the unwieldy system, which contained thousands of vacuum tubes, eventually fizzled out.

However, the bowling ball interface - eventually called the “trackball” - went on to have a gloriously long run in all manner of computer systems, including video games. Indeed, the trackball, along with the joystick, developed in Germany around 1944 to guide glide bombs to their targets, and 19th-century pushbuttons, essentially spring-loaded switches, were the sum total of computer game controls for decades.

Even as computing power increased exponentially, powering ever better graphics and game play, not much changed in the way of basic controls. Incremental enhancements aside, the basic user interfaces for typical arcade video games are pretty much half-century-old technology, developed when vacuum tubes glowed softly at the vanguard of the cutting edge. Even as the industry moved from transistors to chips that process increasingly complex graphics and screens that offer ever better resolution, user interfaces have lagged far behind.

Interface innovation has been largely a one-sided affair. We see computers far better and in much greater detail than the machines see us. If the standard trackball, pushbutton and joystick seem intuitive to players, it is only through familiarity and hard-earned skill acquired from repeated play. These mechanical controllers, a half-century old and more, have simply failed to keep pace with computing power or, even worse, with game play potential.

That state of affairs is now rapidly changing. In the home market, the first and most dramatic indications were the Nintendo Wii and Microsoft’s Kinect for the Xbox, both of which redefined controllers. The coin-op segment, although slowly and belatedly, is also evolving. Touchscreens, for instance - a technology that dates back to the mid-1960s and a paper published by E.A. Johnson of the Royal Radar Establishment, Malvern, UK - seem to be taking on new life as an interface.

Adrenaline Amusements, based in Quebec, Canada, took the touchscreen concept to the next level for arcades with its TouchFX system that boasts a 46ins multi-touch HD LCD screen. Interestingly, the system is best described as a rather large version of the iPhone, though its touchscreen employs LED sensors rather than the capacitive surface used on the Apple product and other smartphones. Not coincidentally, Adrenaline is currently licensing games, such as Fruit Ninja, that first made their appearance as downloads for the iPod Touch, smartphones and other portable devices. 

Tweaked for the large screen, the TouchFX offers familiar and intuitive play in a large format. For Fruit Ninja players still swipe to slice fruit, but they swipe big. “What we’re doing is bringing hardware from one place, the software from another and merging them,” said Adrenaline’s vice president of business development and marketing, Marc-Antoine Pinard. “So we’re leveraging the best of both worlds, and therein lies the brilliant idea.”
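
The basic mechanics of a big-screen swipe are straightforward to sketch. The snippet below is a minimal illustration, not Adrenaline’s actual code: it assumes the touchscreen reports a swipe as a series of sampled (x, y) points, and checks whether any segment of that path passes through a circular fruit.

```python
import math

def swipe_hits_fruit(touch_path, fruit_center, fruit_radius):
    """Return True if any segment of the swipe passes through the fruit.

    touch_path: list of (x, y) points sampled from the touchscreen.
    All names here are illustrative, not any vendor's real API.
    """
    cx, cy = fruit_center
    for (x1, y1), (x2, y2) in zip(touch_path, touch_path[1:]):
        # Distance from the fruit centre to the segment (x1,y1)-(x2,y2)
        dx, dy = x2 - x1, y2 - y1
        length_sq = dx * dx + dy * dy
        if length_sq == 0:
            dist = math.hypot(cx - x1, cy - y1)
        else:
            t = max(0.0, min(1.0, ((cx - x1) * dx + (cy - y1) * dy) / length_sq))
            dist = math.hypot(cx - (x1 + t * dx), cy - (y1 + t * dy))
        if dist <= fruit_radius:
            return True
    return False

# A diagonal swipe across a fruit centred at (50, 50) with radius 10:
sliced = swipe_hits_fruit([(0, 0), (30, 30), (60, 60)], (50, 50), 10)
```

The same circle-versus-segment test works at phone scale or on a 46ins panel; only the coordinates and radii grow.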

As Pinard is quick to point out, it wasn’t just the software that required tweaking. The hardware also needed a ruggedising make-over for the arcade environment. To this end the TouchFX screen - originally designed for more genteel environments - employs a proprietary multi-layer covering that includes tempered glass, which can stand up to a fair amount of abuse and is easily cleaned.

If Adrenaline is the first to adopt newish technology for coin-op, it is in all likelihood not the last. According to Jonathan Brawn, a principal at Brawn Consulting, based in California, US, this is the way the industry is heading, at least in the near future. “We’re adapting our technology to work the way humans instinctively operate. I was recently floored by a child of two or three interacting with a piece of software - successfully playing the game,” he explained. “Buttons and trackballs only persisted because they worked.”

As Brawn sees it, the popularising of touchscreens is a relatively recent event, with credit going to Apple, but the technology will continue to take an ever greater share of the interactive market. As prices come down for capacitive screens, such as those used on Apple products, touch will become the norm rather than the exotic stand-out.

What’s next may be more intuitive than even touch. Vincent John Vincent, president and co-founder of GestureTek, whose system was licensed for Kinect, has been working with gesture interfaces for more than a quarter century, rolling out the first prototype in 1986. “We invented the gesture control in 1986, but the issue was computing power and the cameras were US$20,000,” he explained. “Over time that price came down and the information being done on the computers, the analysis, went from multiple boards to a single board and then to a chip. And with what is now happening with the cameras we are able to track the various points, so we can get a full avatar of the user and do it in real time. Those are the things that are speeding up.”

What exactly does this speeding up mean for game play? As Vincent explained, it means the ability to track a player’s movement in greater detail for one thing, giving them more control over the game environment. Higher resolution cameras will pick up finger motions associated with fine motor skills. At present, Vincent noted, the data from a player’s digits is not great. However, as cameras improve along with processing power, players will gain greater dexterity in the game environment, along with a better analysis of the entire body. That means moving from simply intuitive gestures to completely natural ones. Players will be able to reach for an object and close their fingers around it, and the avatar on the screen will do the same.
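
A crude version of that grab gesture can be sketched in a few lines. The code below is purely illustrative and assumes a hypothetical skeletal tracker that reports a palm position and five fingertip positions per frame; it simply treats the hand as closed when the fingertips converge on the palm.

```python
import math

def is_grabbing(palm, fingertips, closed_ratio=0.5, open_span=1.0):
    """Heuristic 'grab' detector for a hypothetical hand tracker.

    The hand counts as closed when the average fingertip-to-palm
    distance falls below closed_ratio * open_span, where open_span
    is the span of the player's open hand (calibrated per player).
    """
    avg = sum(math.dist(palm, tip) for tip in fingertips) / len(fingertips)
    return avg < closed_ratio * open_span

# Fingertips pulled in close to the palm read as a grab;
# an outstretched hand does not.
closed_hand = [(0.1, 0), (0, 0.1), (-0.1, 0), (0, -0.1), (0.1, 0.1)]
grabbed = is_grabbing((0, 0), closed_hand)
```

In a real system the same per-frame test would drive the on-screen avatar, closing its fingers around an object when the player does.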

“When we started looking at 3D in 2000, we would have had to use 10 Pentium machines to do the analysis for a simple 3D depth map of a user,” Vincent said. “Now, all that information is in the camera itself, in a single chip. It’s come down from multiple boards to a single chip in the camera.”

OFF THE DRAWING BOARDS, INTO ARCADES

When Russia’s largest commercial bank, Sberbank, announced an ATM with a “built-in lie detector” to process loan applications, the news raised a few eyebrows. First there was the fact that the company developing the technology, The Speech Technology Centre in Moscow, Russia, counts the Federal Security Service - an organisational descendant of the Soviet KGB - among its clients. Then there was the whole question of whether the technology really worked.

Can a machine actually quantify nervousness and distress? A few wags even quipped that a computer couldn’t hope to simulate the cold and calculating resolve of a typical banker.

Regardless of whether the Sberbank system actually works or has any real practical application, such biometric functions are an area of intense study at many of the world’s most prestigious computer labs. And, like the trackball or joystick, it’s only a matter of time before these more exotic interfaces make their way into video games. This is no small thing.

Computers are notoriously “mind blind” when it comes to users. Capturing a player’s mood, age or sex in a digital form the machine can “understand” opens up a whole new dimension for game designers. In many ways, it is very much the ultimate intuitive interface.

“Since humans started interacting with computer systems, they have been investigating ways to improve that interaction. Research into new methods continues today,” said Neil Dodgson, professor of graphics and imaging at the University of Cambridge Computer Laboratory and member of the graphics and interaction research group (Rainbow Research Group). “Our research group is investigating how to recognise mental state (“affect”) from facial expression, tone of voice, and body posture.”

This kind of research, Dodgson explained, will lead to systems that hold some internal model of the human’s mental state, allowing a computer system to tailor its output to the human player. At present, Dodgson and others are looking at applications in transport, medicine and entertainment.

Applied to video games, it means a game will no longer be “mind blind” to the player and can adjust play accordingly. A game, for instance, that fine-tunes play to age or sex as the player approaches, then makes further adjustments on the fly during play, could enhance the play experience and expand the demographic. The experience of playing the same machine could be as rewarding for, say, a 10-year-old as for a 20-year-old.
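
What such an adjustment loop might look like can be sketched simply. The snippet below is a hypothetical example, not any shipping system: it assumes an affect-recognition module that reports frustration and boredom scores between 0 and 1, and nudges a difficulty level accordingly.

```python
def adjust_difficulty(current_level, frustration, boredom,
                      step=0.1, lo=0.5, hi=2.0):
    """Hypothetical sketch of affect-driven tuning.

    frustration and boredom are assumed scores in [0, 1] from an
    affect-recognition module (facial expression, voice, posture).
    Ease off when the player looks frustrated, push harder when
    they look bored, and clamp to a sane range.
    """
    if frustration > 0.7:
        current_level -= step
    elif boredom > 0.7:
        current_level += step
    return max(lo, min(hi, current_level))

# Player looks frustrated: ease the game off a notch.
level = adjust_difficulty(1.0, frustration=0.9, boredom=0.0)
```

Run once per few seconds of play, even a crude loop like this would let the same cabinet feel fair to a 10-year-old and challenging to a 20-year-old.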

There are already hints that these kinds of features are fairly close at hand. Companies such as NEC, for instance, have begun rolling out so-called “smart billboards” in select cities. Cameras set into the billboard capture images of passers-by and run them through biometric software that pulls up adverts appropriate to the viewer. A small child may see an advert for a toy while a young woman sees a designer dress.

Then there’s Namco’s new driving game, Dead Heat. When networked, players log on to the game using a PIN and the system saves their driving profile, along with a digital photo, when they race against their friends. Then, when they return to drive solo, they can still race against their friends’ saved profiles. “The people you raced before, you get to race again, even if they are not there,” said Namco America’s product coordinator Sam Ven. “The software analyses how you drive and records that. So you’re racing ghost cars with personality, not just a computer-generated car. It really blurs the line between artificial opponent and real opponent. I’m not sure it is the future, but it’s a great addition and it does seem to be where it’s headed.”
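
The underlying record-and-replay idea is an old one in racing games, and easy to illustrate. The sketch below is not Namco’s implementation; it simply records a player’s steering and throttle inputs frame by frame, then plays them back later as a ghost opponent.

```python
class GhostRecorder:
    """Minimal sketch of the ghost-car technique (illustrative only).

    Each frame of a race, the live player's control inputs are
    recorded; replaying the same inputs later recreates that run
    as a ghost opponent with the original player's 'personality'.
    """

    def __init__(self):
        self.frames = []

    def record(self, steering, throttle):
        # Append one frame of control inputs.
        self.frames.append((steering, throttle))

    def replay(self):
        # Yield the saved inputs frame by frame for the ghost car.
        yield from self.frames

# Record a short run, then play it back as a ghost.
rec = GhostRecorder()
for inputs in [(0.0, 1.0), (0.2, 1.0), (-0.1, 0.8)]:
    rec.record(*inputs)
ghost_run = list(rec.replay())
```

A production system would persist the frames against the player’s PIN, alongside the photo and driving profile the article describes.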

So, where will the next generation of game interfaces come from? Follow the money, say the experts. Look for the big R&D budgets. Medicine is a good bet. So is the military. But the odds are excellent there is a DATAR trackball out there somewhere, just waiting for a video game.