Closing the Gap on the Uncanny Valley

In 1983, the cutting edge in personal computers was the Apple IIe. My best friend Mark and I logged countless hours playing games like Conan, Elite and Castle Wolfenstein. The graphics were positively primitive by today’s standards — just 80 columns on a 12-inch green screen — but back then they were absolutely immersive. We used to muse about how amazing it would be to see what the characters onscreen were seeing, to actually look down the corridors rather than merely watching the blocky-profiled soldiers in Wolfenstein, who looked more like Lego than Nazis, half-sliding, half-jittering across the screen, barking orders like “Achtung!” and “Schweinhund!” as we filled them full of 1-bit lead.

Castle Wolfenstein - Apple IIe

When Doom came out in 1993, our heads nearly exploded. Not only had id created one of the most brutally violent games to date, they had created an entirely new genre of video game: the first-person shooter (though many would argue that the first FPS was Wolfenstein 3D) — and to top it all off, they gave away the first few levels for free, like a drug dealer hooking you up with “a taste.” While it wasn’t 3D in the same way that modern games are, the team at id, led by founders John Carmack and John Romero, employed some clever visual and programming trickery to make it feel like a very immersive 3D world. We were hooked, and just like Wolfenstein before it, Doom fed on any and all free time we happened to have. Playing as Doom’s nameless space marine, we made our way through bases and fortresses, killing all manner of minions and demons with pistols, shotguns, rocket launchers and of course, the BFG — and yes, it stands for exactly what you think it does. Doom II followed and after that came Quake, which not only raised the bar visually but added a crucial gameplay element that has since become synonymous with gaming: online multiplayer. Now, instead of blasting away at NPCs for hours at a stretch, you could, as the saying went, “make new friends, and kill them.”

Doom

While there were a number of excellent games that came out during the 1990s, it wasn’t until 2004 that the bar was truly raised again, with the release of Half-Life 2. The sequel to 1998’s Half-Life, HL2 improved on its predecessor in every way possible. Using Valve’s new Source Engine, the game featured advanced physics and much improved AI, but it was the incredible narrative and the unprecedented graphics that made this game so immersive. For arguably the first time, this was a video game that felt remarkably visceral and real. It wasn’t just a mindless shooter, but rather a fully realized world with the player cast as physicist-turned-savior Gordon Freeman, making his way through a dystopian future that borrowed from science fiction and some of the darkest times in our collective history to feel hauntingly familiar.

Half-Life 2

With each generation comes a leap in quality, either evolutionary or in some cases revolutionary. With the latest generation of consoles, led by the Xbox One and the Playstation 4, the so-called Uncanny Valley is shrinking. For those of you unfamiliar with the term, the Uncanny Valley refers to the way our brains process rendered human features and movements, and our tendency to reject even the slightest (read: less “real”) imperfections outright. In film, for example, while there have been CG characters for decades, most of them have been monsters or creatures of some sort (Jurassic Park), not humans. With Gollum from The Lord of the Rings, however – masterfully created by Peter Jackson’s team at Weta Digital – the character’s look, feel and overall movement (courtesy of Andy Serkis) were so lifelike that our brains could more easily believe that he was an actual cast member rather than a digital puppet. Contrast Gollum with the characters in Robert Zemeckis’ The Polar Express and you can see the Uncanny Valley at work. While Zemeckis’ team tried to create a photorealistic CG film, the characters were off, lingering just out of reach of believability – so much so that even Tom Hanks couldn’t make people go see what is a wonderful story by Chris Van Allsburg (skip the film and instead listen to this version featuring readings by Michael York, Morgan Freeman and Cary Elwes).

Gollum

While in film, human-like or partial CG human characters (Dobby from the Harry Potter series or Davy Jones from Pirates of the Caribbean) seem to fare better than full CG characters (Jeff Bridges’ character in Tron Legacy), the next generation of video games looks absolutely incredible. Over the last few years, there have been a number of games that have dramatically elevated the realism of both environments and characters – Far Cry 3, Assassin’s Creed IV and Grand Theft Auto V, to name a few. But it was last year’s phenomenal PS3 title The Last of Us that did it for me. Much like Half-Life 2 a decade ago, Neil Druckmann and the team at Naughty Dog crafted one of the most brilliantly realized and emotionally powerful games I have ever played. For 15 or so hours, I was immersed in the story arc of the main characters and actually cared about both of them and the world they lived in. Their successes and failures, victories and losses were also mine to enjoy or lament. This is a game whose narrative is as engrossing as any book, firmly anchored in the lead characters of Joel and Ellie. While not photorealistic, they are eerily close, which serves to pull players deeper into their story.

The Last of Us

The next generation of gaming shows no signs of slowing, either in narrative complexity or in presentation, as we close the gap on the Uncanny Valley. In the latest trailer for the upcoming Tom Clancy game The Division, we see another leap forward toward realism, told via time-lapse, HDR, lens effects and stunningly detailed environments and character models. Add to that the story itself, which is based on scenarios that are entirely plausible and, in fact, rehearsed. The name itself, The Division, was taken from an actual organization tasked with doing whatever it takes to “save what remains” in the event of a widespread biological attack on America.

Although there are a few games like Titanfall, where any sort of proper narrative is virtually nonexistent (and many would argue irrelevant), immersion has become the new benchmark for engagement. Gone is the “if it moves, kill it” mentality of earlier games, in favor of complex, character-driven story arcs where actions are likely to have consequences, either positive or negative, as the game progresses. Gaming is well on its way to becoming interactive cinema and, judging by projected revenues for the game industry, that’s exactly what customers want. When Grand Theft Auto V launched in September of last year, worldwide sales topped US$1 billion in the first 72 hours. Where previous versions of GTA were chastised for their gratuitous use of violence and sex, the latest incarnation has been praised not only for its visuals but for its story, which has been compared to a Tarantino film and called a witty satire of American life. Granted, the sex and violence are still there; they just look much, much better.

I’ve been a gamer for more than 30 years, and while I’ve only scratched the surface here of the games I’ve played and enjoyed, I have never been more excited about the medium’s future. There will always be a place for casual games (just look on my iPad), but as life grows more and more complex, so too does the desire for games equal to the task of helping us escape it, if only for a while.
