It is 2016 and Brad Pitt's latest heist film, Ocean's Fifteen, is a roaring box-office success. Moviegoers can't get enough of his wit, charm and ever-youthful good looks, except that the man himself never set foot in front of a camera. Instead, Pitt's digital image was recreated on screen by teams of visual effects artists, leaving the actor free to work on other projects.
This scenario is quite plausible. Motion-capture technology has created realistic non-humans, such as Gollum in The Lord of the Rings and the Na'vi in Avatar. Now the time has come for a digital human to be cast in a leading role.
"We're definitely there, it just needs to be exploited," says Stephen Rosenbaum, a visual effects supervisor who won an Oscar in 2010 for his work on Avatar. "You look at the imagery and the performances and it's absolutely believable. It just takes a director that's going to be brave enough to cross that line."
Some directors have already come close. Shortly after winning his Oscar, Rosenbaum joined Digital Domain, the Los Angeles visual effects company behind two of the most believable examples of a digital human. In The Curious Case of Benjamin Button, released in 2008, they transformed Brad Pitt into an old man who then grew younger throughout the film. In 2010 they did the reverse, allowing Jeff Bridges to play a younger version of himself in Tron: Legacy.
Bridges wore a headset attached to four cameras that tracked his facial movements, which were then mapped onto a digital model of his younger self. The model was created by scanning his head and face and altering their appearance based on images and video from his earlier films.
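Digital Domain's solver and rig data aren't public, but a standard way to map tracked facial points onto a digital model is to solve for blendshape weights, finding the mix of pre-sculpted expressions that best matches the captured marker positions. A minimal sketch of that idea in Python with NumPy; the function names and toy data are hypothetical:

```python
import numpy as np

def solve_blendshape_weights(neutral, blendshapes, tracked):
    """Solve for blendshape weights that best reproduce tracked marker positions.

    neutral:     (M, 3) marker positions on the model's neutral face
    blendshapes: (K, M, 3) per-shape marker offsets from the neutral pose
    tracked:     (M, 3) marker positions captured by the head-rig cameras
    """
    K = blendshapes.shape[0]
    # Flatten each shape's offsets into one column of the solve matrix.
    B = blendshapes.reshape(K, -1).T          # (3M, K)
    delta = (tracked - neutral).reshape(-1)   # (3M,)
    # Least-squares fit, then clamp to the usual [0, 1] weight range.
    w, *_ = np.linalg.lstsq(B, delta, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Toy example: 4 markers, 2 blendshapes (say, "jaw open" and "smile").
neutral = np.zeros((4, 3))
shapes = np.random.default_rng(0).normal(size=(2, 4, 3))
target = neutral + 0.5 * shapes[0] + 0.2 * shapes[1]
print(solve_blendshape_weights(neutral, shapes, target))  # ~[0.5, 0.2]
```

Production solvers add temporal smoothing and anatomical constraints on top of this basic fit, but the principle of driving a digital face from sparse tracked points is the same.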
With motion capture, the head and body can be played by different actors. Pitt's and Bridges's characters used older and younger body doubles respectively, so that the body movements were right for each character's age.
"Benjamin Button made a huge leap in look, but benefited from the fact they had an older character," says Ben Morris, Oscar-winner and visual effects supervisor at the Framestore studio in London. Button's advanced age meant he didn't always have to be fully expressive. A younger Bridges proved more difficult. Morris says: "I think it was incredibly impressive, but I felt on occasion the character's eyes and facial animation didn't work."
Both characters can trace their technological roots to Gollum, the fully digital creature played by Andy Serkis in The Lord of the Rings trilogy, but advances in computing power, motion capture and digital scanning are now pushing visual effects to an entirely new level.
Take Light Stage, the digital scanning technology used in films like Tron: Legacy. It films the actor with three high-speed HD video cameras as they are lit from different angles. The first version used a single spotlight; the second featured 30 strobe lights and was used for facial capture in Spider-Man 2. The latest iteration has a sphere of 6500 LED lights and can capture full-body motion, not just faces.
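Published work on Light Stage-style capture describes recovering per-pixel surface normals from spherical gradient illumination: the LED sphere displays brightness ramps along each axis, and the ratio of each gradient-lit image to a uniformly lit one encodes the surface orientation at that pixel. A minimal sketch of that published principle, assuming pre-aligned grayscale frames:

```python
import numpy as np

def normals_from_gradient_illumination(i_full, i_x, i_y, i_z, eps=1e-6):
    """Estimate per-pixel surface normals from spherical gradient lighting.

    i_full: image under uniform full-on illumination, shape (H, W)
    i_x, i_y, i_z: images under gradient patterns whose brightness ramps
    from 0 to 1 along each world axis, so i_axis / i_full ~ (1 + n_axis) / 2
    for a diffuse surface.
    """
    denom = np.maximum(i_full, eps)
    n = np.stack([2.0 * i_x / denom - 1.0,
                  2.0 * i_y / denom - 1.0,
                  2.0 * i_z / denom - 1.0], axis=-1)
    # Normalise each pixel's vector to unit length.
    norm = np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), eps)
    return n / norm
```

It is these per-pixel normals, far finer than any scanned mesh, that let artists reconstruct skin at the level of pores and fine wrinkles.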
"The levels of detail are jumping by orders of magnitude," says Morris. "Now we can scan down to the mesoscopic level of skin detail, which is pore detail and very fine wrinkling."
Visual effects artists need such an incredible level of detail because even the smallest imperfection can ruin the illusion. This well-known effect, "the uncanny valley", describes how we recoil from digital characters that look almost but not quite human. "We're on the cusp of it being good, but I think for a few years movies will waver on either side of the line," says Simon Robinson, chief scientist at The Foundry in London, which produces software tools for the visual effects industry.
Robinson says the technology could allow young actors to archive their performances and then license the rights to use their digital self in future films, thereby prolonging their career.
Visually and technologically, movies and games are moving closer together. Performance-capture techniques from the latest games are already attracting the attention of film-makers (see "L.A. Noire leads the way"), and actors will soon be able to inhabit digital sets, rather than just stand in front of a blue or green screen. "You go blue box crazy," says Morris.
It's already possible for directors to watch their actors perform in real time on digital backgrounds, and Morris is using the same techniques to create the scene from a character's point of view, presenting it to the actor through virtual-reality goggles. It's an immersive experience that helps actors rehearse, says Morris, but it can't yet be used during live-action filming. "When we go for a take, they do have to take the apparatus off their head."
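The real-time previsualisation systems Morris describes are proprietary, but at their core sits the same step as any blue- or green-screen shot: pull a matte from the screen colour and composite the actor over the digital background. A deliberately crude hard-key sketch for illustration; production keyers use soft mattes, spill suppression and per-shot tuning:

```python
import numpy as np

def chroma_key_composite(frame, background, threshold=0.15):
    """Composite a green-screen frame over a digital background.

    frame, background: float RGB images in [0, 1], shape (H, W, 3).
    A pixel counts as 'screen' when green dominates red and blue
    by more than `threshold`.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    screen = (g - np.maximum(r, b)) > threshold
    matte = (~screen).astype(frame.dtype)[..., None]  # 1 = keep actor
    return matte * frame + (1.0 - matte) * background
```

Run per frame against a rendered background, this is the kind of step that lets a director watch a composited shot live on set rather than waiting for post-production.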
So when will we see the first digital human on our screens? "I think a lot of the tools are in place," says Morris. "It's actually down to a director with vision and a team of talented artists with enough time to observe and follow through."
Money is also a factor: "Like everything in Hollywood, it's rationalising the budgets," says Rosenbaum. "I've spoken with some of the best directors in the business, and they're ready." Virtual Brad Pitt, here we come.
L.A. Noire leads the way
Film directors have the luxury of working with live actors, but every character in a game has to be created from scratch, which can leave them looking lifeless. L.A. Noire is different: every character is played by a human actor whose performance is recreated using MotionScan, a new technique developed by the Australia-based firm Depth Analysis.
L.A. Noire's cast isn't photo-realistic, since gaming consoles can't render high detail in real time, but their subtle facial movements make them some of the most lifelike characters ever seen in a video game.
The system uses 32 high-definition cameras to capture an actor's performance from every angle and then stitches the images together to create a 3D model. For the moment the technology is restricted to faces, but full-body capture should be possible. "We think a full-body MotionScan capture would have a significant impact on what's possible in game and film-making," says Depth Analysis spokeswoman Jennie Kong. Kong says Depth Analysis is talking to visual effects artists about how the technology can be applied to digital humans in film. "Once everything is captured, the performance can be lit, relit, positioned at any angle for easy editing, so the director doesn't have to retake a strong performance from every angle."
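Depth Analysis hasn't published how MotionScan fuses its 32 views, but any multi-camera reconstruction rests on triangulation: the same feature seen from two calibrated cameras pins down a point in 3D. A textbook direct-linear-transform sketch of that step, not Depth Analysis's actual pipeline:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated cameras.

    P1, P2: (3, 4) camera projection matrices
    x1, x2: (2,) pixel coordinates of the same feature in each view
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Repeated across thousands of facial features and 32 views, this is how a set of 2D video frames becomes a moving 3D face that can be relit and viewed from any angle.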