What is it about stories, anyway? Anthropologists tell us that storytelling is central to human existence. That it’s common to every known culture. That it involves a symbiotic exchange between teller and listener—an exchange we learn to negotiate in infancy. Just as the brain detects patterns in the visual forms of nature—a face, a figure, a flower—and in sound, so too it detects patterns in information. Stories are recognizable patterns, and in those patterns we find meaning. We use stories to make sense of our world and to share that understanding with others. They are the signal within the noise.
So powerful is our impulse to detect story patterns that we see them even when they’re not there. In a landmark 1944 study, 34 humans—Massachusetts college students actually, though subsequent research suggests they could have been just about anyone—were shown a short film and asked what was happening in it. The film showed two triangles and a circle moving across a two-dimensional surface. The only other object onscreen was a stationary rectangle, partially open on one side.
Only one of the test subjects saw this scene for what it was: geometric shapes moving across a plane. Everyone else came up with elaborate narratives to explain what the movements were about. Typically, the participants viewed the triangles as two men fighting and the circle as a woman trying to escape the bigger, bullying triangle. Instead of registering inanimate shapes, they imagined humans with vivid inner lives. The circle was “worried.” The circle and the little triangle were “innocent young things.” The big triangle was “blinded by rage and frustration.”
But if stories themselves are universal, the way we tell them changes with the technology at hand. Every new medium has given rise to a new form of narrative. In Europe, the invention of the printing press and movable type around 1450 led to the emergence of periodicals and the novel. The invention of the motion picture camera around 1890 set off an era of feverish experimentation that led to the development of feature films by 1910. Television, invented around 1925, gave rise a quarter-century later to I Love Lucy and the highly stylized form of comedy that became known as the sitcom.
As each of these media achieved production and distribution on an industrial scale, we saw the emergence of twentieth-century mass media—newspapers, magazines, movies, music, TV. And with that, there was no role left for the consumer except to consume.
Then, just as we’d gotten used to consuming sequential narratives in a carefully prescribed, point-by-point fashion, came the Internet. The Internet is the first medium that can act like all media—it can be text, or audio, or video, or all of the above. It’s nonlinear, thanks to the World Wide Web and the revolutionary convention of hyperlinking. It’s inherently participatory—not just interactive, in the sense that it responds to your commands, but an instigator constantly encouraging you to comment, to contribute, to join in. And it is immersive—meaning that you can use it to drill down as deeply as you like into anything you want to know.
At first, like film and television in their earliest days, the Internet served mainly as a way of retransmitting familiar formats. For all the talk of “new media,” it functioned as little more than a new delivery mechanism for old media—newspapers, magazines, music. The emergence of P2P file-sharing networks encouraged a lot of people to get their deliveries for free. But as disruptive as the Net has been to media businesses, it’s only now having an impact on media forms.
Under its influence, a new type of narrative is emerging—one that’s told through many media at once in a way that’s nonlinear, participatory, and above all immersive. This is “deep media”: stories that take you deeper than an hour-long TV drama or a two-hour movie or a 30-second spot will permit.
The most talked-about ad campaign of the past year involved a former football player who for two days took questions about Old Spice on Twitter and responded to the best of them minutes later on YouTube. Nike+, a Web service that doubles as a marketing platform, functions as a branded corner of cyberspace where runners can keep their stats and tell their own stories. Tron: Legacy was preceded by Flynn Lives, an alternate reality game that engaged millions of people worldwide in the 18 months before the movie came out. The 2010 season of the BBC’s Doctor Who was made up of 13 television episodes plus 4 more that came in the form of downloadable video games. Lost told a story so convoluted that the audience had little choice but to work together to decipher it communally online.
“An artistic movement, albeit an organic and as-yet-unstated one, is forming,” David Shields writes in Reality Hunger: A Manifesto, a book whose truth to its time is underscored by the gleeful way it samples from other sources. “What are its key components?” Shields names several: randomness, spontaneity, and emotional urgency; reader/viewer participation and involvement; anthropological autobiography; a thirst for authenticity coupled with a love of artifice; “a blurring (to the point of invisibility) of any distinction between fiction and nonfiction: the lure and blur of the real.”
We stand now at the intersection of lure and blur. The future beckons, but we’re only partway through inventing it. We can see the outlines of a new art form, but its grammar is as tenuous and elusive as the grammar of cinema a century ago.
We know this much: people want to be immersed. They want to get involved in a story, to carve out a role for themselves, to make it their own. But how is the author supposed to accommodate them? What if the audience runs away with the story? And how do we handle the blur—not just between fiction and fact, but between author and audience, entertainment and advertising, story and game? A lot of smart people—in film, in television, in video games, in advertising, in technology, even in neuroscience—are trying to sort these questions out. The Art of Immersion is their story.
This excerpt previously appeared on Wired.com/Epicenter