Categories
Cinematography Film Editing Film Sound People Technology VFX

Here We Go Again: The Digital Cinema Revolution Begins

Happy Birthday to George Lucas! As we know, George was a big proponent of the use of digital technology in cinema. When I worked at Sony in the 1990s, we were on the cutting edge of using digital cameras for cinematography. Here is a video from Sony that highlights the development of the Sony cameras used in Star Wars.

Here is another video from ILM about all the areas George changed with digital technology for editing and VFX. Thank you, Mr. Lucas!

Categories
Animation Technology VFX

Face2Face: Real-time Face Capture

I have already blogged about software that allows actors’ facial expressions to be edited in post. Now take a look at Face2Face: Real-time Face Capture. It can map new facial expressions onto video in real time. While very interesting from a technological viewpoint, the idea of “photoshopping” video will certainly affect journalistic ethics and the trustworthiness of video evidence.

From Michael Zhang at PetaPixel.

Face swap camera apps are all the rage these days, and Facebook even acquired one this month to get into the game. But the technology is getting more and more creepy: you can now hijack someone else’s face in real-time video.

A team of researchers at the University of Erlangen-Nuremberg, Max Planck Institute for Informatics, and Stanford University are working on a project called Face2Face, which is described as “real-time face capture and reenactment of RGB videos.”


Basically, they’re working on technology that lets you take over the face of anyone in a video clip. By sitting in front of an ordinary webcam, you can, in real-time, manipulate the face of someone in a target video. The result is convincing and photo-realistic.


The face swap is done by tracking the facial expressions of both the subject and the target, doing a super fast “deformation transfer” between the two, warping the mouth to produce an accurate fit, and rerendering the synthesized face and blending it with real-world illumination.
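
Purely as a hedged illustration of that track-and-transfer pipeline (a toy sketch of my own in Python, not the researchers’ code), imagine reducing an expression to a vector of blendshape weights: “tracking” becomes fitting those weights to landmarks, and “deformation transfer” becomes applying the source actor’s expression offset to the target’s neutral face. All names and dimensions below are assumptions.

```python
import numpy as np

# Toy stand-in for the stages described above: "track" is a least-squares fit
# of blendshape weights to 2D landmarks, and "deformation transfer" is the
# source's expression offset added to the target's neutral weights. The real
# system fits a dense 3D face model and re-renders with scene illumination.

N_LANDMARKS = 68      # assumed landmark count
N_BLENDSHAPES = 46    # assumed size of the expression basis

def track_expression(landmarks, basis):
    """Fit blendshape weights to one frame of tracked 2D landmarks."""
    w, *_ = np.linalg.lstsq(basis, landmarks.ravel(), rcond=None)
    return np.clip(w, 0.0, 1.0)

def transfer_expression(source_w, source_neutral_w, target_neutral_w):
    """Apply the source's expression offset to the target's neutral face."""
    return np.clip(target_neutral_w + (source_w - source_neutral_w), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    basis = rng.normal(size=(N_LANDMARKS * 2, N_BLENDSHAPES))  # toy landmark model
    landmarks = rng.normal(size=(N_LANDMARKS, 2))              # one webcam frame
    source_w = track_expression(landmarks, basis)
    target_w = transfer_expression(source_w,
                                   source_neutral_w=np.zeros(N_BLENDSHAPES),
                                   target_neutral_w=np.full(N_BLENDSHAPES, 0.1))
    print("transferred blendshape weights:", np.round(target_w[:5], 3))
```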


To test the system, the researchers invited subjects to puppeteer the faces of famous people (e.g. George W. Bush, Vladimir Putin, and Arnold Schwarzenegger) in video clips found on YouTube. You can see the results (and an explanation of the technology) in this 6.5-minute video:

Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, June 2016.

Abstract

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.

See Matthias Nießner for more info.

 

It can also be done with two live cameras.

ACM Transactions on Graphics 2015 (TOG)

Abstract

We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.

Categories
Animation Disney Filmmaking Technology

New Software Can Actually Edit Actors’ Facial Expressions

FaceDirector software can seamlessly combine several takes to create nuanced blends of emotions, potentially cutting down on the number of takes necessary in filming.

New software from Disney Research, developed in conjunction with the University of Surrey, may help cut down on the number of takes necessary, thereby saving time and money. FaceDirector blends images from several takes, making it possible to edit precise emotions onto actors’ faces.

Shooting a scene in a movie can necessitate dozens of takes, sometimes more. In Gone Girl, director David Fincher was said to average 50 takes per scene. For The Social Network, actors Rooney Mara and Jesse Eisenberg acted the opening scene 99 times (directed by Fincher again; apparently he’s notorious for this). Stanley Kubrick’s The Shining involved 127 takes of the infamous scene where Wendy backs up the stairs swinging a baseball bat at Jack, widely considered the most takes of any scene in film history.

“Producing a film can be very expensive, so the goal of this project was to try to make the process more efficient,” says Derek Bradley, a computer scientist at Disney Research in Zurich who helped develop the software.

Disney Research is an international group of research labs focused on the kinds of innovation that might be useful to Disney, with locations in Los Angeles, Pittsburgh, Boston and Zurich. Recent projects include a wall-climbing robot, an “augmented reality coloring book” where kids can color an image that becomes a moving 3D character on an app, and a vest for children that provides sensations like vibrations or the feeling of raindrops to correspond with storybook scenes. The team behind FaceDirector worked on the project for about a year, before presenting their research at the International Conference on Computer Vision in Santiago, Chile this past December.

Figuring out how to synchronize different takes was the project’s main goal and its biggest challenge. Actors might have their heads cocked at different angles from take to take, speak in different tones or pause at different times. To solve this, the team created a program that analyzes facial expressions and audio cues. Facial expressions are tracked by mapping facial landmarks, like the corners of the eyes and mouth. The program then determines which frames can be fit into each other, like puzzle pieces. Each puzzle piece has multiple mates, so a director or editor can then decide the best combination to create the desired facial expression.
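
To make the “puzzle pieces” idea concrete, here is a small Python sketch of my own (an illustration, not Disney Research’s published algorithm). Each frame of two takes is summarized by a feature vector, which you can imagine as concatenated facial-landmark coordinates plus a simple audio cue, and dynamic time warping finds which frames of one take line up with which frames of the other.

```python
import numpy as np

# Illustrative take alignment: dynamic time warping over per-frame feature
# vectors (imagined here as landmarks plus one audio feature). The cheapest
# warping path gives the frame-to-frame correspondences between two takes.

def dtw_align(feats_a, feats_b):
    """Return a list of (i, j) frame pairs aligning take A to take B."""
    n, m = len(feats_a), len(feats_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(feats_a[i - 1] - feats_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the cheapest path to recover the correspondences.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    take_a = rng.normal(size=(40, 137))   # 40 frames, 68 landmarks * 2 + 1 audio cue
    take_b = rng.normal(size=(55, 137))   # a slower take of the same line
    print(dtw_align(take_a, take_b)[:5])
```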

To create material with which to experiment, the team brought in a group of students from Zurich University of the Arts. The students acted several takes of a made-up dialogue, each time doing different facial expressions—happy, angry, excited and so on. The team was then able to use the software to create any number of combinations of facial expressions that conveyed more nuanced emotions—sad and a bit angry, excited but fearful, and so on. They were able to blend several takes—say, a frightened one and a neutral one—to create rising and falling emotions.

The FaceDirector team isn’t sure how or when the software might become commercially available. The product still works best when used with scenes filmed while sitting in front of a static background. Moving actors and moving outdoor scenery (think swaying trees, passing cars) present more of a challenge for synchronization.

By Emily Matchar
smithsonian.com

From Disney Research

We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states. As an example, given sad and angry video takes of a scene, our method empowers a movie director to specify arbitrary weighted combinations and smooth transitions between the two takes in post-production. Our contributions include (1) a robust nonlinear audio-visual synchronization technique that exploits complementary properties of audio and visual cues to automatically determine robust, dense spatio-temporal correspondences between takes, and (2) a seamless facial blending approach that provides the director full control to interpolate timing, facial expression, and local appearance, in order to generate novel performances after filming. In contrast to most previous works, our approach operates entirely in image space, avoiding the need of 3D facial reconstruction. We demonstrate that our method can synthesize visually believable performances with applications in emotion transition, performance correction, and timing control.
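
The “arbitrary weighted combinations” in that abstract can be pictured with an even simpler sketch. Assuming two takes are already aligned frame to frame (for instance with something like the DTW toy above), a director-controlled weight curve decides how much of each take appears in every output frame. The real system blends in image space with facial-aware correspondence; the plain cross-dissolve below is only a stand-in for the weighting idea.

```python
import numpy as np

# Director-controlled blend between two aligned takes: weights[t] = 0 means
# fully the "sad" take, 1 means fully the "angry" take. A rising weight curve
# produces the kind of emotion transition described in the paper.

def blend_takes(frames_sad, frames_angry, weights):
    out = []
    for f_sad, f_angry, w in zip(frames_sad, frames_angry, weights):
        out.append((1.0 - w) * f_sad + w * f_angry)
    return np.stack(out)

if __name__ == "__main__":
    t = 60                                      # frames in the aligned scene
    frames_sad = np.zeros((t, 4, 4, 3))         # tiny dummy "video" clips
    frames_angry = np.ones((t, 4, 4, 3))
    weights = np.linspace(0.0, 1.0, t)          # anger rises across the scene
    blended = blend_takes(frames_sad, frames_angry, weights)
    print(blended.shape, round(float(blended[30, 0, 0, 0]), 3))  # mid-scene mix
```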

 

Download File: “FaceDirector: Continuous Control of Facial Performance in Video” [PDF, 13.22 MB]

 


 

Categories
Animation Filmmaking Technology VFX

“What Lives Inside” Episodes

I previously wrote about What Lives Inside here.

This is the future, folks: computer companies producing a movie to be shown on a streaming service.

Get ready to be taken to a world beyond your imagination. From Academy Award Winner Robert Stromberg, Dell presents What Lives Inside. Starring Academy Award Winner J.K. Simmons, Colin Hanks and Catherine O’Hara. Premiering March 25th only on Hulu.

This is the episode playlist and some behind-the-scenes material.

Here is the making-of video.


Intel Dell What Lives Inside – Behind the Scenes by CGMeetup


Intel Dell What Lives Inside – VFX Breakdown by CGMeetup

Categories
Animation Filmmaking Technology VFX

“What Lives Inside” Official Trailer and behind the scenes

This is the future, folks: computer companies producing a movie to be shown on a streaming service.

Get ready to be taken to a world beyond your imagination. From Academy Award Winner Robert Stromberg, Dell presents What Lives Inside. Starring Academy Award Winner J.K. Simmons, Colin Hanks and Catherine O’Hara. Premiering March 25th only on Hulu. Find out more here.

From Fast Co.Create.

What Lives Inside is the fourth installment of Intel’s “Inside Films” series, dating back to a partnership with Toshiba, and agency Pereira & O’Dell, that started in 2011 with Inside, starring Emmy Rossum and directed by D.J. Caruso. It was followed by 2012’s The Beauty Inside starring Topher Grace, and 2013’s The Power Inside starring Harvey Keitel.

This year’s film, divided into four episodes, is about the son (Hanks) of an absentee father (Simmons), a well-known and acclaimed children’s puppeteer who was widely celebrated for his creativity. After his father’s death, the son finds himself on a journey of self-discovery: he uncovers a mysterious world of his dad’s creation and embarks on an adventure that will soon unlock his own creativity.


Pereira & O’Dell chief creative officer PJ Pereira says one of the biggest challenges this year was finding another fresh way of bringing the same premise—Intel tagline “It’s what’s inside that counts”—to life. “We had to find a role to make the product not the subject of the story we are telling, but a character,” says Pereira. “Because characters are what the audience will remember and love months after the campaign is gone.”

In addition to Oscar-winning talent, each film in the Inside Films series featured a social element, soliciting submissions from people for the chance to see their photos and videos in the film, or even to audition for a part. This year, it worked a bit differently. “This time, because the central theme is their creativity, that’s what is on display. Their drawings, as if they were all kids that have submitted ideas to the character played by J.K. Simmons,” says Pereira.

Just six weeks after Stromberg issued a challenge online, the film received thousands of creature submissions and more user-generated content than the two previous films combined. “This project always felt more like a film than an ad, with its longer format, incredible cast and extensive visual effects,” says Stromberg, who won art direction Oscars for Alice in Wonderland and Avatar. “The whole interactive angle is also super interesting to me. We’ve had over 6,000 submissions of art work, which is crazy! I just think that’s a much better indicator of engagement than throwing a project into testing. I love how it lets people be an active part of the final product. Any time I can be a part of inspiring others to get in touch with their creative side, only inspires me more as an artist.”

The film debuts on Hulu, with new episodes weekly for four weeks, then starting May 6 the full series will be available on WhatLivesInside.com and YouTube.

Categories
Cinematography Technology VFX

VFX Legend Douglas Trumbull talks about the Future of Film … and Kubrick.

From the Sept. 12 issue of The Hollywood Reporter.

Trumbull drives me a short distance from his home to a full-size soundstage and escorts me into a screening room that he has constructed to meet his ideal specifications: a wide wall-to-wall and floor-to-ceiling curved screen, with surround sound, steeply rigged stadium seating and a 4K high-resolution projector. As I put on specially designed 3D glasses and settle into my seat, he tells me, with an unmistakable hint of nervousness, “You’re one of the first people on the planet to see this movie.”

Ten minutes later, the lights come back up and I sit in stunned silence. The short that I have just seen, UFOTOG (a blending of the words “UFO” and “fotog,” the latter slang for press photographer), is stunning not because of its story — we’ve all seen movies about UFOs — but because it shows, as it was designed to do, what movies can look like if theaters, studios and filmmakers embrace the MAGI process through which Trumbull brought it to the screen: bigger, brighter, clearer and with greater depth-of-field than anything ever seen in a cinema before.

All of the aforementioned conditions are part of the MAGI equation, but the most essential element is the rate of frames per second at which a film is projected. In the beginning, the Lumière brothers projected films at 18 fps, slow enough to result in the appearance of flickering — hence the early nickname for the movies, “the flickers” or “the flicks.” That figure eventually increased to 24 fps, and has remained there, for the most part, ever since.

In 2012, Peter Jackson dared to release The Hobbit‘s first installment at 48 fps, which was supposed to create a heightened sense of realism, but which instead struck many as strange-looking and some even as nauseating. Many deemed the experiment a failure. Trumbull disagreed. He felt that if a digitally shot film was projected even faster — markedly faster, as in 120 fps, via a bright projector and onto a big screen — then the movie screen itself would seemingly disappear and serve effectively as a window into a world on the other side that would appear as real as the world in which one sits.


To the Moon and Beyond featured a 70 mm circular image projected onto a dome screen and took viewers on a journey “from the Big Bang to the microcosm in 15 minutes.” Two of the thousands who saw it were Stanley Kubrick, the filmmaker, and Arthur C. Clarke, the writer, who came away from it convinced that an A-level sci-fi film — which eventually became 2001: A Space Odyssey — was possible. Kubrick contracted Graphic Films to produce conceptual designs for the project, but, once it got off the ground, moved it to London, at which point 23-year-old Trumbull cold-called the director and got a job on the film. His greatest contribution to it was devising a way to create a believable “Star Gate” effect, representing “the transformation of a character through time and space to another dimension.” Even though Kubrick alone claimed screen credit and an Oscar for the film’s VFX, Trumbull instantly became a name in the business.


A few years later, he made his directorial debut with Silent Running (1972), a well-received film that landed him deals at Fox, MGM and Warner Bros. — but all of them “unraveled for stupid reasons.” By 1975, “desperate because you can’t live on development deals,” he and Richard Yuricich proposed the creation of the Future General Corporation, through which they would try to identify ways to improve the technology used to make films. Paramount agreed to sponsor the endeavor — which, to them, was a tax write-off — in return for 80 percent ownership. Within the first nine months of its existence, Trumbull says, “We invented Showscan [a manner of projecting films at 60 fps]. We invented the first simulator ride. We invented the 3D interactive videogame. And we invented the Magicam process [by which actors can perform in front of a blue screen, onto which nonexistent locations can be projected to create virtual realities].” And yet, in the end, Paramount “saw no future in the future of movies” and failed to support their efforts, devastating Trumbull, who was under exclusive contract to the studio for the next six years. (The studio’s one gesture that he did appreciate: loaning him out to Columbia to do the special effects for Close Encounters of the Third Kind.)

Trumbull got out of his Paramount contract in 1979 thanks to Star Trek: The Motion Picture. The original effects team that had been engaged for the highly anticipated film couldn’t handle the job, something the studio realized only six months before its long-scheduled Christmas release date. The studio begged Trumbull to take over, and he agreed to do so — provided he was paid a considerable fee and released from his contract. He got what he requested and, to the detriment of his health, also got the job done on time.

Newly a free agent, Trumbull continued to take on special effects jobs for others — for instance, Ridley Scott‘s Blade Runner (1982) — but his primary focus was on directing a film of his own that would demonstrate the capabilities of Showscan. For the project, which he called Brainstorm, he secured a top-notch cast, led by Natalie Wood, and a major distributor, MGM. Production got underway and was almost completed when, on Nov. 29, 1981, tragedy struck: Wood drowned under circumstances that remain mysterious to this day. Since Wood had only a few small scenes left to shoot, Trumbull felt that he could easily finish the film, but MGM, which was in dire financial straits, filed what he deemed a “fraudulent insurance claim” because “they wanted to get out of it.”

Doug Trumbull on motion simulator base for “In Search of the Obelisk” (1993) VistaVision ridefilm at the Luxor Las Vegas.

Photo courtesy of Mice Chat.

Then, in 1990, he was approached about making a Back to the Future ride for Universal Studios venues in Florida, Hollywood and Japan. Others had been unable to conquer it, but he made it happen — and in a groundbreaking way: “It took you out of your seat and put you into the movie. You were in a DeLorean car. You became Marty McFly. You became a participant in the movie. The movie was all around you.” It ran for 15 years, he says, but was “dismissed as a theme park amusement.” He felt it was something more. “This was a moment where, for the first time in history, you went inside a movie.” Even though others failed to see larger possibilities, he says, “That kinda kept me going for a long time because it validated that we could be here in the Berkshires and make breakthroughs that no one else was able to do in Hollywood or anywhere else.”

In 2009, James Cameron‘s Avatar, a digitally shot 3D production that grossed a record $2.8 billion worldwide, changed everything. Its success spurred, at long last, filmmakers to transition en masse to digital photography and theaters to transition en masse to digital projection — at which point Trumbull made a crucial discovery. He realized that digital projectors run at 144 fps — more than twice as fast as Showscan’s 60 fps — but films were still being made at 24 fps, with each frame just flashing multiple times. “Could we do a new frame every flash?” he wondered. If so, he reasoned, it might just give people a reason to put down their smartphones, tablets and laptops and actually buy a ticket to see a movie in a theater.
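
A quick back-of-the-envelope check of that observation (my arithmetic, not a figure from the article):

```python
# A projector refreshing 144 times per second that is fed a conventional
# 24 fps film must repeat each frame; showing a new frame on every flash is
# what Trumbull's question points toward.
projector_flashes_per_second = 144
film_frames_per_second = 24
print(projector_flashes_per_second // film_frames_per_second,
      "flashes of the same frame")   # -> 6
print(projector_flashes_per_second,
      "unique frames per second possible at one new frame per flash")
```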

After years of work on his farm, Trumbull is finally ready to unveil UFOTOG. Its first public presentation will take place on Sept. 11 as part of the Toronto International Film Festival’s Future of Cinema conference (at which Trumbull will also give a keynote address), and it will also screen days later at the IBC Conference in Amsterdam. At both venues, he says, his message will be rather straightforward: “It’s not rocket science, guys. It’s just a different shape, a different size, a different brightness and a different frame rate. Abandon all that crud that’s leftover from 1927. We’re in the digital age. Get with it.”

The cost of these changes, he insists, will be rather negligible: projectors are already equipped to handle faster frame rates, and would require only slightly more data storage and render time; theaters are already adopting brighter projectors that employ laser illumination, which lasts far longer than a bulb and produces twice the amount of light; and theaters, he believes, will soon recognize that they are in the “real estate business” and that it is in their interest to have fewer total screens but more big screens, for which the public has demonstrated a willingness to pay a premium.

Trumbull’s main objective, though, is “to show the industry what it is possible to do” with MAGI. He says he’s “dying to show” UFOTOG to filmmakers such as Jackson, Cameron and Christopher Nolan, whom he regards as kindred souls. But mostly, he wants to challenge the industry one more time, warning it, “If you want people to come to theaters, you better do something different.”


Categories
Animation Technology VFX

The Congress, a film by Ari Folman.

The recent controversy over Andy Serkis’s comments about Dawn of the Planet of the Apes and Scarlett Johansson’s voice-only performance in Her are both indications of the rise of the “virtual actor.”

After seeing the fascinating trailer, I am looking forward to seeing this film. Please note the Fleischer and Kubrick comments in the interview selections below.

From the website for The Congress, a film by Ari Folman.

Robin Wright, a Hollywood actress who once held great promise (“The Princess Bride”, “Forrest Gump”), receives an unexpected offer in mid-life: Miramount Studios want to scan her entire being into their computers and purchase ownership of her image for an astronomical fee. After she is scanned, the studio will be allowed to make whatever films it wishes with the 3-D Robin, including all the blockbusters she chose not to make during her career. As if that were not inducement enough, the studio promises to keep the new 3-D Robin forever young in the movies. She will always be thirty-something, a stunning beauty who never grows old. In return, Robin will receive tons of money but she’ll be forbidden to appear on any kind of stage for all eternity. Despite her deep internal resistance, Robin eventually signs the contract, since she understands that in the economy of scanned actors it’s her only way to stay in the business; but even more crucial, Robin can give her son Aaron, who suffers from a rare disorder, the best treatment money can buy. The contract is valid for 20 years.

Twenty years later, Robin arrives at Abrahama, the animated city created by Miramount Nagasaki, once the Hollywood studio that signed Robin, and now the exclusive creator of the cinematic dream-world that controls all our emotions, from love and longings to ego and deathly anxieties. Miramount Nagasaki’s chemistry is everywhere, from the air-conditioning to the water sources. During the intervening two decades, the corporation has turned Robin Wright from a Hollywood actress with unfulfilled potential into an international superstar and fantasy. On-screen, she has remained forever young. In the animated world of the future, Miramount Nagasaki is celebrating a huge gathering in the heart of the desert, “The Futurist Congress.” At the event, Miramount Nagasaki’s genius scientists — once creators of movies, now computer programmers who have evolved into chemists and pharmacists — will declare the next stage in the chemical evolution: free choice! From now on, every viewer can create movies in his own imagination, thanks to chemical selection. Robin Wright is now a mere chemical formula that every person can consume by taking the correct prescription, then staging whatever story they desire: Snow White, personal family dramas, or porn. It’s all in the brain, all through chemicals.

The animated Robin Wright is an “elderly” woman of 66. When she arrives at the congress as the guest of honor, no one recognizes her as the stunning beauty admired by all, a star whose image is broadcast on screens in every corner of the congress. She is lonely, about to become a chemical formula, when out of nowhere, Miramount Nagasaki’s utopian plan is suddenly derailed: the thinking people, the resisters, the rebels who have been fighting the deceptive regime of the pharmaceutical world unite and turn the Futurist Congress into a fatally violent arena. The struggle for clarity of thought becomes a war of independence for the right to imagine. Out of the forgetting and the loss, Robin suddenly regains the ability to choose. Will she go back to living in the world of truth, a gray world devoid of chemistry, where she is an aging, anonymous actress caring for her sick 30-year-old son? Or will she surrender to the captivating lie of the chemical world and remain forever young?

The Congress by Ari Folman


In his novel The Futurological Congress, the great science fiction writer Stanislaw Lem foresaw a worldwide chemical dictatorship run by the leading pharmaceutical companies. Written in the late nineteen-sixties, the book depicted drug manufacturers’ complete control of our entire range of emotions, from love and longings, to jealousy and deadly fear. Lem, considered sci-fi’s greatest prophet and philosopher (alongside Philip K. Dick), could not have realized how prescient he was in predicting the start of the third millennium.

Into the psychochemical whirlwind foreseen by Lem, the film adaptation of his novel introduces the current cinematic technologies of 3-D and motion capture, which threaten to eradicate the cinema we grew up on. In the post-“Avatar” era, every filmmaker must ponder whether the flesh and blood actors who have rocked our imagination since childhood can be replaced by computer-generated 3-D images. Can these computerized characters create in us the same excitement and enthusiasm, and does it truly matter? The film, entitled The Congress, takes 3-D computer images one step further, developing them into a chemical formula that every customer may consume through prescription pills, thereby compiling in their minds the movies they have always wanted to see, staging their fantasies, and casting the actors they adore. In this world, these beloved creatures of stage and cinema become futile relics, lacking in content, remembered by no one. Where, then, do these actors go after selling their souls and identities to the studio devil?

The Congress comprises quasi-documentary live-action sequences that follow one such actress, Robin Wright, as she accepts an offer to be scanned and signs a contract selling her identity to the studio, then transitions into an animated world that depicts her tribulations after selling her image, up until the moment when the studio turns her into a chemical formula. Only the mesmerizing combination of animation — with the beautiful freedom it bestows on cinematic interpretation — and quasi-documentary live-action can illustrate the transition made by the human mind between psychochemical influence and deceptive reality. The Congress is primarily a futuristic fantasy, but it is also a cry for help and a profound cry of nostalgia for the old-time cinema we know and love.

INTERVIEW WITH ARI FOLMAN

THE CONGRESS presents a strongly dystopic vision of Hollywood and big studio movies – is that also how you view that part of the industry? Does your film reflect a fear for the future of cinema?

While searching for a suitable location in LA to shoot the scanning room scene, I was shocked to learn that such a room already exists. Actors have been scanned for a number of years now – this technology is already here. Flesh and blood actors are not really needed in this “post-Avatar era.” I guess it’s economics now that dictate whether the next generation of films will be with scanned actors, or with a completely new generation of actors “built from scratch.” As an optimist, I think the choice for a human actor will win out and I hope The Congress is our small contribution toward that goal.

So many details in THE CONGRESS are “futuristic” yet still very current – do you see any positive aspects of living in another reality, behind an online avatar for example? Do you think it approaches the film‘s idea of choosing your own reality?

I think the chemical world outlined in Lem‘s novel and in the film is a fantasy, but at the same time it’s still a major fear for those of us who travel in our imagination and our dreams. I have always had the feeling that everybody, everywhere lives in parallel universes: one where we function in real time, and the other where our mind takes us – with or without our control. Combining the two worlds into one is, for me, the biggest goal of being a filmmaker.

The film is unique but features what seems like an encyclopedia of significant references in terms of cinema and otherwise. Were there key films or other influences that served as guides or inspirations as you made this movie?

The animated part is a tribute to the great Fleischer Brothers‘ work from the ’30s. It‘s hand drawn, made in 8 different countries, and it took two and a half years to create 55 minutes of animation. It was by far the toughest mission of my life as a director. The team back home, led by the director of animation, Yoni Goodman, were working 24/7 to ensure the animation from a number of different studios had consistency in the characters from scene to scene. During the process we discovered that sleep is for mortals and animation for the insane! Elsewhere in the movie I try to pay tribute to my idol Stanley Kubrick twice; once with a reference to Dr. Strangelove and another to 2001: A Space Odyssey, still my favorite sci-fi movie ever.


For more behind-the-scenes material, go here.

Categories
Cinematography Disney Technology VFX

Lucid Dreams of Gabriel – Teaser

From Variety,

Disney and Swiss pubcaster SRF unveil experimental short at Locarno fest.

At the Locarno Film Festival, the Disney lab and SRF jointly unveiled an impressive experimental short titled “Lucid Dreams of Gabriel” (see teaser), which for the first time displayed local frame rate variation, local pixel timing, super slow motion effects, and a variety of artistic shutter functions showcasing the “Flow-of-Time” technique.

The project was created by the Disney Research lab in tandem with the formidable computer graphics lab at the Swiss Federal Institute of Technology Zurich (ETH) with SRF providing studio space, personnel, and other resources.

“We wanted to control the perception of motion that is influenced by the frame rate (how many images are shown per second) as well as by the exposure time,” said Markus Gross, vice president of research at Disney Research and director of Disney Research Zurich, at the presentation.

Use of the new technologies in the short, which is a surreal non-linear story about a mother achieving immortality in her son’s eyes after an accident in the spectacular Engadin Alpine valley, allowed director Sasha A. Schriber to avoid using green screen and to make the transition from reality (at 24 frames per second) to a supernatural world (at 48 frames per second).

“Lucid Dreams Of Gabriel,” an experimental short film created by Disney Research in collaboration with ETH, Zurich, was shot at 120fps/RAW with all effects invented and applied in-house at Disney Research Zurich. We sought to produce a visual effects framework that would support the film’s story in a novel way. Our technique, called “The Flow-Of-Time,” includes local frame rate variation, local pixel timing and a variety of artistic shutter functions.

Effects include:
• High dynamic range imaging
• Strobe and rainbow shutters
• Global and local framerate variations
• Flow motion effects
• Super slow motion
• Temporal video compositing

The following scenes of the teaser, indicated by the timecode, demonstrate different components of our new technology:

Shots with a dark corridor and a window (0:08); a man sitting on a bed (0:16):
Our new HDR tone-mapping technique makes use of the full 14 bit native dynamic range of the camera to produce an image featuring details in very dark as well as very bright areas at the same time. While previous approaches have been mostly limited to still photography or resulted in artifacts such as flickering, we present a robust solution for moving pictures.
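
As a rough picture of what a temporally stable tone mapper has to do (a generic Reinhard-style operator of my own, not Disney Research’s method), the sketch below squeezes a 14-bit linear frame into a displayable range while smoothing the exposure estimate across frames so the result does not flicker.

```python
import numpy as np

# Generic global tone mapping for illustration: normalize the 14-bit signal,
# estimate scene brightness, smooth that estimate over time to avoid flicker,
# then compress highlights so shadow and highlight detail both survive.

def tonemap_frame(raw14, prev_log_avg=None, key=0.18, smooth=0.9):
    lin = raw14.astype(np.float64) / (2 ** 14 - 1)     # 14-bit -> [0, 1]
    log_avg = np.exp(np.mean(np.log(lin + 1e-6)))       # log-average luminance
    if prev_log_avg is not None:                        # temporal smoothing
        log_avg = smooth * prev_log_avg + (1.0 - smooth) * log_avg
    scaled = key * lin / log_avg
    mapped = scaled / (1.0 + scaled)                    # Reinhard-style curve
    return (mapped * 255.0).astype(np.uint8), log_avg

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    state = None
    for _ in range(3):                                  # a few fake frames
        frame = rng.integers(0, 2 ** 14, size=(270, 480))
        out, state = tonemap_frame(frame, prev_log_avg=state)
    print(out.dtype, out.min(), out.max())
```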

A hand holding a string of beads (0:14):
As we experimented with novel computational shutters, the classic Harris-shutter was extended to make use of the full rainbow spectrum instead of the traditional limitation to just red, green, and blue. For this scene, the input was rate converted using our custom technology, temporally split and colored, then merged back into the final result.
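
Here is a toy version of that rainbow shutter (my own simplification with made-up hue weights, not the studio’s implementation): a run of high-frame-rate grayscale frames is split into temporal slices, each slice is tinted a different hue across the spectrum, and the tinted slices are merged into a single frame.

```python
import numpy as np

# Toy "rainbow" Harris shutter: split a high-frame-rate clip into temporal
# slices, tint each slice a different hue, and merge. Anything that moved
# between slices picks up a spectrum of colored trails.

def rainbow_shutter(frames, n_slices=6):
    """frames: (T, H, W) grayscale in [0, 1]; returns one (H, W, 3) image."""
    hues = np.linspace(0.0, 1.0, n_slices)
    colors = np.stack([np.clip(1.5 - 3 * hues, 0, 1),          # red fades out
                       1.0 - np.abs(2 * hues - 1.0),            # green peaks mid
                       np.clip(3 * hues - 1.5, 0, 1)], axis=1)  # blue fades in
    slices = np.array_split(frames, n_slices, axis=0)
    out = np.zeros(frames.shape[1:] + (3,))
    for sl, rgb in zip(slices, colors):
        out += sl.mean(axis=0)[..., None] * rgb
    return np.clip(out / n_slices, 0.0, 1.0)

if __name__ == "__main__":
    clip = np.random.default_rng(3).random((120, 90, 160))   # 1 s of 120 fps footage
    print(rainbow_shutter(clip).shape)                        # (90, 160, 3)
```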

The double swings scene (0:20):
Extending our experiments with computational shutters, this scene shows a variety of new techniques composed into a single shot. Taking full advantage of the original footage shot at 120 fps, the boy has been resampled at a higher frame rate (30fps) with a short shutter, resulting in an ultra-crisp, almost hyper-real appearance, while the woman was drastically resampled at a lower frame rate (6fps), featuring an extreme shutter which is physically not possible and adding a strong motion blur to make her appear more surreal.

Car driving backwards and a flower (0:30); a train (0:36):
For these scenes, we were experimenting with extreme computational shutters. The theoretical motion blur for the scenes was extended with a buoyancy component and modified through a physical fluid simulation, resulting in physically impossible motion blur. As shown, it is possible to apply this effect selectively on specific parts of the frame, as well as varying the physical forces.

Super slow motion closeup of the boy (0:44); a handkerchief with motion blur and super slow motion (0:47); an hourglass (0:50):
These shots show the classical application of optical flow – slow motion. However, with our new technology we have been able to achieve extremely smooth pictures with virtually no artifacts, equivalent of a shutter speed at 1000 fps. At the same time, artificial motion blur equivalent of a shutter of far more than 360 degrees can be added to achieve a distinct “stroby” look, if desired, while maintaining very fluent motion in all cases. We are also able to speed up or slow down parts of the scene, e.g. to play the background in slow-motion while the foreground runs at normal speed. All of these effects can be applied on a per-pixel basis, thus giving full freedom to the artist.
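
The per-pixel retiming claim at the end of that paragraph can be pictured with the small sketch below. It is my own stand-in: it blends the two nearest source frames per pixel instead of doing true optical-flow interpolation, which is what the actual technique relies on, but it shows how every pixel can carry its own playback speed.

```python
import numpy as np

# Per-pixel retiming stand-in: each pixel gets its own playback speed, and
# every output frame samples the 120 fps source at a per-pixel time index,
# linearly blending the two nearest source frames.

def retime(clip, speed_map, n_out):
    """clip: (T, H, W); speed_map: (H, W) playback speed per pixel."""
    t_max = clip.shape[0] - 1
    rows, cols = np.indices(speed_map.shape)
    out = np.empty((n_out,) + clip.shape[1:])
    for k in range(n_out):
        t = np.clip(k * speed_map, 0, t_max)       # per-pixel source time
        lo = np.floor(t).astype(int)
        hi = np.minimum(lo + 1, t_max)
        frac = t - lo
        out[k] = (1 - frac) * clip[lo, rows, cols] + frac * clip[hi, rows, cols]
    return out

if __name__ == "__main__":
    clip = np.random.default_rng(4).random((120, 60, 80))   # 120 fps source
    speed = np.full((60, 80), 0.25)    # start with 4x slow motion everywhere
    speed[:, 40:] = 1.0                # right half plays at normal speed
    print(retime(clip, speed, n_out=48).shape)               # (48, 60, 80)
```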

Additional info on the film:

“Lucid Dreams Of Gabriel” is a surrealistic and non-linear story about a mother achieving immortality through her son, unconditional love, and the fluidity of time.

Producer: Markus Gross
DOP: Marco Barberi
Script & Director: Sasha A. Schriber
Camera & lenses: Arri Alexa XT with Zeiss prime lenses
Original language: English
Length: 11 minutes

Categories
Cinematography Oscars

Congratulations to Alfonso Cuarón and Emmanuel Lubezki.

[Image: a stewardess walking in space-age Velcro shoes in 2001: A Space Odyssey]

Great minds think alike.

Facing the Void. From American Cinematographer.

The 3-D feature is enhanced by long takes and fluid camerawork that immerse the viewer in the beautiful but dangerous environment of space with a groundbreaking level of realism and detail. It is the fruit of a five-year collaboration involving director Alfonso Cuarón; cinematographer Emmanuel “Chivo” Lubezki, ASC, AMC; visual-effects supervisor Tim Webber, and their talented teams. Longtime friends Cuarón and Lubezki have worked together on six features to date, including Y Tu Mamá También and Children of Men (AC Dec. ’06). Webber supervised visual effects on the latter.

 

The technical and aesthetic accomplishments of Gravity become all the more impressive when Lubezki reveals that the only real elements in the space exteriors are the actors’ faces behind the glass of their helmets. Everything else in the exterior scenes — the spacesuits, the space station, the Earth — is CGI. Similarly, for a scene in which a suit-less Stone appears to float through a spaceship in zero gravity, Bullock was suspended from wires onstage, and her surroundings were created digitally. (Most of the footage in the space capsules was shot with the actors in a practical set.)

 

In many ways, Gravity provides a new paradigm for the expanding role of the cinematographer on films with significant virtual components. By all accounts, Lubezki was deeply involved in every stage of crafting the real and computer-generated images. In addition to conceiving virtual camera moves with Cuarón, he created virtual lighting with digital technicians, lit and shot live action that matched the CG footage, fine-tuned the final rendered image, supervised the picture’s conversion from 2-D to 3-D, and finalized the look of the 2-D, 3-D and Imax versions. “I was doing my work as a cinematographer on Gravity,” says Lubezki. “In the process, I had to learn to use some new tools that are part of what cinematography is becoming. I found it very exciting.”

 

Cuarón notes that whenever he was tempted “to do a camera move just because it was cool, Chivo would not allow that to happen.” He cites the example of the opening take, which ends with Stone drifting away toward open space. “When we were doing the previs, as she started floating away, I said, ‘We don’t need to cut. We can keep following her in the same shot, so the first two shots would be just one shot.’ But Chivo said, ‘I think when she’s floating away is the perfect moment to cut. If this were the chapter of a book, this would be the last phrase of the chapter.’ And he was right. Otherwise, we would have started calling attention to the long take and creating an expectation that that’s what the film was about. But that’s not what it’s about. The camerawork serves … I don’t want to say it serves the story, because I have my problems with that. For me, the story is like the cinematography, the sound, the acting and the color. They are tools for cinema, and what you have to serve is cinema, not story.”

 

Lubezki shot most of the live-action material in the film with Arri Alexa Classics and wide Arri Master Prime lenses, recording in the ArriRaw format to Codex recorders; the package was supplied by Arri Media in London. (Panavision London provided a Primo Close Focus lens that was used for a single shot.) He filmed a scene set on Earth on 65mm, using an Arri 765 and Kodak Vision3 500T 5219, to provide a visual contrast to the rest of the picture.

 

The robot arm was originally designed to assemble cars, according to Webber. He explains that Warner Bros. executive Chris DeFaria read about a San Francisco design-and-engineering studio, Bot & Dolly, which had used the arms to move a camera. Webber adds that the production worked with Bot & Dolly to add increased flexibility to the system, including the ability to adjust the speed of the preprogrammed moves so they could be adapted to the actors’ performances. To create even more options, they added a special remote head that was manned by camera operator Peter Taylor. Based on a Mo-Sys head, this remote unit was adapted to make it smaller and lighter, partly so that it would block less light. It could be operated live or set to play preprogrammed moves driven by the previs.

 

Gaffer John “Biggles” Higgins, who also worked with Lubezki on Children of Men, marvels that he has “never seen anything like the set of Gravity.” Apart from the LED Box, he notes, there were also other, slightly more traditional setups. For interiors of the space capsule as it hurtles to Earth, for instance, the filmmakers used an Alpha 4K HMI without its lens to simulate the sun, moving the source around the stationary capsule with a crane and a remote head. Higgins says they selected the Alpha because “it is the only head that can be operated shooting straight down.” He adds that Lubezki would provide ambient light by punching powerful tungsten 20Ks through 20’x 20′ frames, using two layers of diffusion, Half and Full Grid Cloth, as well as green and blue gels, to simulate sunlight. “These diffusions were mainly used on the real capsules,” explains Higgins. “The green and blue filters were stitched to the back of the closest diffusion, the 20-by-20 Full Grid.”

 

Full list of winners here.

 

 

Categories
Animation VFX

Life After Pi

I talked about the trailer for this film here, and now the film has been released. From hollywoodendingmovie.com.
“Life After Pi” is a short documentary about Rhythm & Hues Studios, the L.A.-based visual effects company that won an Academy Award for its groundbreaking work on “Life of Pi” — just two weeks after declaring bankruptcy. The film explores the rapidly changing forces impacting the global VFX community and the film industry as a whole.

This is only the first chapter of an upcoming feature-length documentary, “Hollywood Ending,” which delves into the larger, complex challenges facing the US film industry and the many professionals working within it, whose fates and livelihoods are intertwined.