
Video Games Used to Copy Movies. Now They’re Influencing Modern Action.

There’s always been a two-way street of influence between the film and video game industry. Action sequences from movies like ‘Edge of Tomorrow,’ ‘John Wick,’ and especially ‘Hardcore Henry’ make that connection obvious.

Dan Evans

This week on The Ringer, we’re hosting the Best Video Game Character Bracket—an expansive competition between the greatest heroes, sidekicks, and villains of the gaming world. And along with delving into some of those iconic figures, we’ll also explore and celebrate the gaming industry as a whole. Welcome to Video Game Week.


About 30 minutes into Extraction, Tyler Rake’s covert mission goes haywire. The black-market mercenary has been tasked with rescuing Ovi, a drug lord’s teenage son, but a compromised getaway plan leaves them exposed in a Bangladeshi forest. As a rival hit man and local authorities converge on the pair, Rake eludes gunfire and sprints for an escape route, leading Ovi to an unused car and accelerating into a busy urban center. For the next 12 minutes, the movie turns into a video game.

Using his background as a stunt performer and choreographer, director Sam Hargrave captures the ensuing car chases, gun battles, knife fights, and roof jumps in one long tracking shot. Without any visible cuts, the camera zips, hovers, and jumps in lockstep with its kinetic subjects, shifting between first- and third-person perspectives, as Rake (Chris Hemsworth) navigates crowded streets and maze-like apartment complexes to keep Ovi (Rudhraksh Jaiswal) under his protection. As though handling a PlayStation controller, Hargrave joysticks the frame through car windshields and hides it behind corners, tethering the camera’s movement to Rake’s spontaneous decision-making.

Prepping for the scene, Hargrave asked himself a question: “What would it be like for the audience to experience an extraction in real time?” For logistical reasons, the sequence isn’t a true “one shot”—Hargrave stitched and blended numerous complex scenes together—but the effect, and the movie itself, still has the look and feel of an immersive and interactive role-playing video game, one with clearly defined objectives, identifiable “levels,” and indestructible avatars moving full-speed ahead. “It was all meant to capture a feeling of being present in a very intense, physical, close-quarters combat situation,” he says. And though Hargrave didn’t model the shot on any specific video games, he acknowledges their influence has become “ingrained subconsciously.”

Watch any big-budget action movie from the past decade and it’ll become apparent that the genre has seamlessly adopted the aesthetic and narrative DNA of contemporary RPGs. It’s visible in the “open world” stream of 1917, the third-person shoot-outs of John Wick, the stylized fighting of Sucker Punch, and the physics-defying “cut scenes” of Marvel movies. And thanks to overlapping technologies, increased investments in actor stunt training, and moviegoers’ and gamers’ unquenchable thirst for total immersion, the trend has only continued to grow. “I definitely think about who my audience is,” says director Doug Liman, whose 2014 alien epic Edge of Tomorrow features a video-game-style story centered on the idea of respawning. “They’re different from an audience 20 years ago.”

This shift has been approaching for a while. As video games and their software have become more cinematic and cutting edge, filmmakers have followed their lead, using emerging digital tools and unique perspectives to enhance storytelling. At the same time, it’s a two-way street of influence: Role-playing games themselves—from Call of Duty to The Last of Us to God of War—have taken their own visual and narrative cues from Hollywood. Now more than ever, the two industries and their consumers are in constant dialogue with each other, and the line between them is becoming blurrier and blurrier.

When a wave of young, talented filmmakers arrived in the 1970s to make up what became New Hollywood, arcade-style video games were just beginning to reach the mainstream. Simplistic in its flat, left-to-right design, “ball and paddle” technology understandably had little influence on the era’s writers and directors. “I mean, I don’t know if there’s much Sidney Lumet or [Martin] Scorsese could have taken from Pong or Asteroids,” says John Wick director Chad Stahelski. “There weren’t a lot of cinematic trailers going on then for video games.”

It wasn’t until 1982, when Tron was released, that a large-scale motion picture attempted to engage and interface with the nascent video game industry. The movie’s protagonist, Kevin Flynn, gets transported into a virtual setting, and the majority of Tron’s running time is spent navigating the neon blues and reds of writer-director Steven Lisberger’s digitally enhanced world. “The spatial relationships of the characters in Tron, the camera’s point of view, and the actor’s relationship to the environment are key elements that define the film’s interiority,” writes David Sedman, an associate professor of film and media arts at Southern Methodist University. “Once Flynn becomes a part of the video game interface, the audience’s frame of reference becomes the gamespace.”

As that visual language began overlapping in the 1980s, designers recognized the value in engaging players with more narrative-driven games. Before the term even came into vogue, Pac-Man and Dragon’s Lair adopted noninteractive cutscenes that punctuated gameplay with quick storytelling transitions. Those transitions became longer and more prominent in the following decade, as games such as Night Trap and Wing Commander IV included full-motion videos with real actors, packaging scripts, lighting, and direction into virtual worlds. By 1995, video game maker Acclaim had built one of the earliest performance-capture studios, giving game avatars more lifelike, three-dimensional capability.

It wasn’t long before video games took on a more realistic, cinematic scope. Keith Arem, who began directing the Call of Duty franchise in 2005, was at the forefront of this new wave, combining voice-over acting with motion-capture performance and dropping it into World War II landscapes. “As motion capture became a significant part of animation and in-game action, I migrated a lot of my performance background with [actors] to incorporate into all the action sequences between gameplay or during gameplay,” Arem says. To enhance Call of Duty 2, he brought in actors from the HBO miniseries Band of Brothers, a clear inspiration for the game’s environments and story lines.

By the early 2000s, these newer technologies had migrated to big-budget Hollywood productions. For The Lord of the Rings, director Peter Jackson enhanced Acclaim’s performance-capture technology to allow Andy Serkis, as Gollum, to interact with actors in real time. The advances in photo-realism eventually turned blockbusters into showcases of nonhuman characters, an ever-growing phenomenon “where you’re seeing games and film really starting to cross over,” Arem says. “You see films like Avatar and other motion pictures that are taking what we’ve learned in games and exponentially modifying those to do motion-capture underwater, on animals, and over much larger spaces.”

But dialogue between major studios and game production companies isn’t limited to performance. As comic book and sci-fi movies have required more CGI, filmmakers have relied more on dynamic video game engines, capable of quickly rendering entire cityscapes, to communicate their expansive and otherworldly visions. The engines allow directors to collaborate throughout production with VFX and previsualization (previs, for short) departments, which can map out all the action sequences of a movie—in the same vein as old-school storyboarding—before anything is even shot on camera. “There are situations where the studio wants to start making a movie and they don’t have a director aboard, but they want to start with previs already,” says Ryan McCoy, a senior visualization supervisor at Halon Entertainment. “Often, we’re starting and there may not even be an art department.”

From a studio perspective, using game engines to craft blueprints makes financial sense. Marvel executives would prefer to see how Thor and Iron Man, for example, will interact flying above and around buildings on a computer before creating budgets for lighting, camera rigs, and staging. At Halon, which uses Unreal Engine, the same game engine used for Fortnite, McCoy has developed previs for countless action movies over the past decade—everything from Terminator: Salvation to Aquaman—often constructing realistic landscapes and pitching unique ways of moving the camera. “The whole point [for studios] is to figure out what you want the scene to be and how effective it is,” McCoy says. “And before you spend millions of dollars making that become reality, if you can spend a few hundred thousand in previs, then you’ve really solved that issue before it’s too late.”

Director involvement on these sequences varies. When Hargrave worked as a stunt coordinator on the last three Hunger Games films, he remembers director Francis Lawrence wanting stunt performers to mimic the previs simulations to see whether they were capable of re-creating them. “He’s like, ‘I want to make sure that the physical camera can execute the moves that I see in the previs, so I don’t fall in love with something that’s impossible to do practically,’” Hargrave remembers. “He saw it as a tool to inspire ideas. … A lot of these creative ideas are coming from the minds of young animators who probably have worked in the video game space.”

For scenes that require real actors and grounded setups, McCoy and the Halon team supply—like other previs companies—what they call “techvis,” which diagrams the kinds of cameras and filming equipment needed to pull off a previs shot. “Sometimes, thinking about real-world camera equipment can actually help make your shots better because it doesn’t feel like this weird, floating thing that’s bobbing all over the place in an unnatural way,” McCoy says. “How do we reverse-engineer that shot that you came up with? How do we make that something that’s actually possible?”

In 2021, as camera technology and capability continue to advance, nearly everything that previs companies and their designers think up can now be replicated by filmmakers in some way. “It’s never a conversation of what’s not possible—it’s just what’s in the budget, or what’s not, and do we want to make this full CG?” McCoy says. “It’s pretty cool what you can get away with.”

One of the most memorable scenes in 2014’s Kingsman: The Secret Service involves Colin Firth kicking ass inside a Kentucky church house. After a group of microchipped parishioners becomes violently programmed, Firth’s British agent draws his gun and begins a manic bloodbath. In an unlikely and dazzling display of aggression, Firth begins slicing throats, stabbing eye sockets, running over pews, and body slamming his targets with catlike reflexes—all as “Free Bird” shreds over the chaos. It’s a striking five-minute sequence, not just for its ultraviolence, but for the way the camera refuses to flinch from it all. Similar to an RPG perspective, director Matthew Vaughn’s lens hovers behind Firth, almost telepathically linked, creating a series of long takes interspersed with occasional first-person footage.

“It felt like a video game, and I’m sure that was the point,” says the movie’s editor, Eddie Hamilton.


In 2017, Atomic Blonde achieved a similar visceral feeling during a scene in which Charlize Theron takes out several attackers in a stairwell, a physical spectacle that director David Leitch keeps in frame without cutting for nearly 10 consecutive minutes. Two years later, Sam Mendes staged the entirety of 1917 to appear as one long tracking shot, with a third-person camera floating behind and rotating around two soldiers traversing trenches and abandoned towns torn apart by World War I. The sprint scene near the movie’s finale feels like a “quick-time event” asking you to “tap X” to complete your avatar’s mission.

Considering the popularity of game-streaming sites like Twitch, the inclination for filmmakers to pick up the controller makes sense. As Los Angeles Times game critic Todd Martens observed after the release of 1917, “watching games today is as much a part of the culture as playing them,” and Mendes’s filmmaking choice, influenced by the Western-inspired Red Dead Redemption, caters to an audience whose visual vocabulary has been built around livestreams. “People are getting really used to being able to move the camera where they want and having cameras zooming into a character’s face and then zooming out to a third-person view of them running and then flying out over a mountain,” McCoy, the Halon supervisor, says. “That’s become more integrated into our global alphabet in film language.”

As more directors have taken advantage of growing digital toolboxes, these kinds of forward-moving, immersive scenes and stylistic editing choices have saturated the action-movie genre. It’s a sharp distinction from the clunky blocking, quick-cutting, shaky-cam aesthetic of the 1990s and 2000s, epitomized by Paul Greengrass’s The Bourne Supremacy in 2004, which was copied to worsening effect in subsequent and similarly themed movies. “Most of the shaky cam [and] the fast editing are meant to hide things—hide the lack of time, hide the lack of rehearsals, hide the shitty fight moves, or hide something that you don’t want the audience to see—or to infuse energy and action and kineticism that didn’t exist before,” Stahelski says. “I think [Bourne] was very, very well done … [but] back in the day, before Paul had made that more in vogue, that was just called shitty camera work.”

Now, however, as more actors invest in training for stunt scenes and digital cameras offer innovative capabilities, filmmakers have less reason to hide. That’s never more apparent than when you’re watching Keanu Reeves eliminate Russian mafia members in John Wick. In that franchise’s first film, with a small budget and little production time, Stahelski leaned on Reeves’s experience and commitment to memorizing long Aikido-based maneuvers so he could shoot his fight sequences in long takes with little coverage. “We weren’t training actors to be martial arts people—we’re trying to be dancers that look like martial arts fighters,” says Stahelski, who grew up admiring Hong Kong action films. “We had a guy that could remember more than 50 moves from all his experience with The Matrixes. So we used that to our advantage.” The logistical limitations also let Stahelski experiment with third-person gameplay elements. “I do like creating an umbilical cord between camera and performer so that if he ducks, you duck,” Stahelski says. “If you go around the corner, a camera goes around the corner.”

Stahelski’s methods have caught on throughout Hollywood, and the former stuntman’s company, 87Eleven, which he started with Leitch, has shown the results of extensive, tailored training and memorization work. “The trend that they’ve started is taking longer with actors,” says Hargrave, who collaborated with 87Eleven as a stunt double on Captain America: The Winter Soldier and Civil War and as a stunt coordinator on Atomic Blonde. For Extraction, Hemsworth’s background in Muay Thai and kickboxing gave Hargrave the opportunity to coordinate his ambitious tracking shot without subbing in stunt performers. Meanwhile, in preparation for Kingsman, Firth trained for six months to master the skills needed to achieve Vaughn’s long takes. “When the performer is as prepared as a stunt performer would be,” Hargrave says, “it allows you to drop back a little bit and enjoy and appreciate the beauty of the moves and the cinematic quality of the choreography.”

It also helps when your star is a natural daredevil. Part of the reason the Mission: Impossible franchise can enlist smooth spider-cam shots of motorcycle chases and capture a HALO parachute fall in one shot is that Tom Cruise doesn’t require a double. His devotion to performing stunts often leads to breathtaking, occasionally first-person visuals, like in the upcoming Top Gun: Maverick, which placed—according to Hamilton, the movie’s editor—six small IMAX cameras straight into F-18 cockpits. “You’re watching actors in real F-18s doing real stunts, having trained for eight months to get their G tolerance up to do that,” Hamilton says. “It’s just a way of making the audience feel more immersed in the reality. What does it feel like to be in one of these planes and do this crazy shit?”

Video games have doubled down on these longer, cleaner shots, too. Unlike previous iterations, which showed off the massive scale of their environments, the latest God of War operates like a never-ending tracking shot, staying with the main character Kratos and his point of view throughout the entire game. Maneuvering Nathan Drake through Uncharted feels the same way, and it’s easy to mistake him for Cruise, who has pulled off similar combinations of athleticism, swinging from helicopters and hanging on to cliffs in one fell swoop.

“I have a number of previs friends that have gone to work for games in the cinematic departments. It becomes similar,” McCoy says. “There is a rich film language that’s already been established by movies and I think what’s been happening is that games have been trying to tap more and more into that and get grounded more into that film world, while still allowing for game cameras that are able to float around and don’t take you out of the experience.”

Much like their aesthetic convergence, the film and video game industries have also seen their narrative structures overlap. Though video games will always remain distinct from movies in their interactivity, plot-based RPGs are investing more in noninteractive cutscenes, guiding players through their designers’ script-focused, three-act structures. Similarly, more action movies have adopted storytelling devices designed to give audiences the familiar feeling that they’re in a game world—protagonists often have clear, emotion-based objectives, guided by screenplays with mission-based architecture. “I can talk all I want about John Wick, longer takes,” Stahelski says. “I still have to bring you in.”

When Doug Liman set out to make 2014’s Edge of Tomorrow, he knew the movie’s connection to video games was “screaming at the audience.” The director had become addicted to playing Grand Theft Auto and GoldenEye 007 in his 30s, and to this day lets himself play only “under specific circumstances” because of their addictive nature. That history informed the way he thought about his third feature, The Bourne Identity, which initially read like an RPG on paper. “I even toyed with possibly putting little icons on the screen, so that when Jason Bourne opened the safety deposit box, suddenly he’d have a gun and passports and money icons in the corner.”

But with Edge of Tomorrow (a title Liman hopes Warner Bros. will officially rebrand as Live Die Repeat), the director saw a better opportunity to create an immersive viewer experience. Based on the 2004 Japanese novel All You Need Is Kill, the movie follows William Cage, played by Tom Cruise, an officer with no combat history who’s thrust into military duty to fight an indestructible alien race. Upon his quick death, however, Cage is thrown into a time loop, forced to relive the previous day each time he dies. Much like a video game character respawning on the same level, Cage learns to become a better soldier in an attempt to escape his repetitive existence. “One of the things that happens is you get sent back to the beginning and you go, ‘Oh God, I’ve got to get through all of that to get back to where I was?’” Liman says. “I had to come up with ways to find stakes. Some of those ways I could pull from video games.”

Initially working with a team of 10 writers on a script eventually credited to Christopher McQuarrie and Jez and John-Henry Butterworth, Liman examined the emotional toll of living in a time loop and supplemented Cage’s isolated journey with that of another soldier (Emily Blunt) who had already been stuck in a loop. “We’re an unusual action film where avoiding dying isn’t the main goal,” Liman says. “It is a goal—because it hurts and you have to start over—but it’s more of an annoyance than game over.” It didn’t hurt that Liman’s admittedly amateurish gaming skills deepened his comprehension of Cage’s existential crisis. “For all the time I played that stupid game, I never got off the island in GTA,” he says. “So, I understood innately the frustration of being trapped on a level. Had I possibly been a better video game player, I wouldn’t have known that that was the trick to making Live Die Repeat.”

Game structure in action movies has been identifiable for the past three decades—consider the time-loop structure of Run Lola Run and the list of increasingly challenging enemies Uma Thurman must defeat in Kill Bill—but today it feels unavoidable. In Snowpiercer, each train car feels like its own distinct level; throughout the John Wick franchise, the Continental Hotel serves as a safe haven to heal up and acquire weapons; Mad Max: Fury Road has a boomerang road map that resembles a racing game; and in 1917, the plot hinges on a linear mission to deliver valuable news as protagonists stop at various checkpoints. Even Hargrave acknowledges that Extraction’s tracking shot and alternating perspectives mimic the feeling of a multiplayer game. “Yeah, it is Enter Player 3,” he says. “Now you’re in the game, and then Welcome Back, because Player 1 is chopping you in the neck.”

“We all stole it from old literature and old mythology,” Stahelski says. “That’s Odysseus … Minotaurs and sirens and cyclops, that’s your typical storytelling motif of mission to mission to mission. … And then each medium has taken it and kind of molded it to fit its goal.”

The logical culmination of immersive action movies arrived in theaters six years ago. Hardcore Henry, Ilya Naishuller’s punk-rock, sci-fi thrill ride, debuted to audiences with the unique distinction of being the first mainstream action movie shot entirely from a first-person perspective. Though other movies had used the technique in various capacities, the Russian director remained committed to keeping the lens solely on the point of view of his amnesiac, cyborgian protagonist for a full 90 minutes. “I just thought it would be a crazy great experience to have something completely fresh, yet something so familiar,” Naishuller says. “I’m a huge video game nerd, and I saw it as an opportunity to do something that hasn’t been done.”

Naishuller had experimented with the form before, most notably in a 2013 music video he directed for his band called “Bad Motherfucker.” When producer Timur Bekmambetov suggested he try the style as a full movie, the director initially balked. “The way I saw it was, I’m a serious person,” he says. “I’ll be a serious director making important movies with important things to say. I didn’t want my feature debut to be explosions, breasts, and violence.” But Bekmambetov pushed him on the concept, and Naishuller eventually warmed to the idea of turning a first-person shooter into something that could inspire a teenage audience, much like how The Matrix inspired him. “The concept drove the story rather than the other way [around], which is not how you’re supposed to make good movies,” he admits. “But at that moment, I was thinking this could be something very joyful and gleefully exciting for a young person to enjoy.”

The movie itself operates the way most first-person games do. Henry wakes up in a Moscow laboratory and spends the length of the movie following the commands of various guides (all played by Sharlto Copley), who provide him with instructions to defeat his next targets. With the exception of some fluid cuts, Naishuller keeps his speechless protagonist always moving forward, engaging in hand-to-hand combat and shotgunning his way through abandoned warehouses. Naishuller rigged GoPros to various stuntmen’s chins to pull off the effect, making sure that his frame didn’t get too shaky. “I think the biggest concern is you’re robbing the audience of the ability to empathize with the hero when you’re not seeing the hero’s face and you’re not having a great actor act,” Naishuller says. “We don’t learn much about him by the end of it, but I wanted to immerse as much as possible.”

Hardcore Henry made just $16.8 million at the box office, and a feature-length, first-person POV movie hasn’t been attempted since. But the “gimmick,” as Naishuller partially concedes, was useful to see how audiences reacted to a gamified vision without the interactivity attached. It was also another example of filmmakers continuing to experiment with the medium and genre, incorporating radical yet familiar perspectives—even the most personal, immediate version of them—into the theatrical experience. “When video game designers get to design action sequences and cutscenes, they are only limited by their imagination,” Naishuller says. “When the filmmakers get the same opportunity, then minds start to think alike.”

The ultimate goal—for action movies and games—is to create immersion, to connect with audiences through storytelling on a more visceral level, with innovative technology that aids that process. Though previs may account for Marvel’s more homogeneous look, it’s also brought to life previously untenable sequences straight out of comic books; motion-capture technology has improved to the point that facial features can be adjusted to de-age actors, swap visages, and account for different languages; and newer game engine advances like StageCraft technology—large LED backdrops that display real-time imagery—have even begun replacing green screens, giving actors better performative guides. “I think what you find in the game industry is that we’re constantly pioneering new technologies and new techniques,” Arem says. “They’re always going to be heralded or criticized based on how realistic they are. But it’ll always come back down to how well they serve the content.”

Hargrave agrees. The director, getting set to direct Extraction’s sequel, stays true to his stunt roots and is a strong advocate for capturing the physical feats of performers on camera. But he also knows the increased accessibility of high-def cameras and previs digital wizardry gives him the tools to supplement—not substitute—his footage.

“It’s going to just continue to allow for creative directors and filmmakers to up the ante with action,” he says. “I’m looking forward to it.”

Jake Kring-Schreifels is a sports and entertainment writer based in New York. His work has also appeared in Esquire.com, GQ.com, and The New York Times.
