Tag: Cinematography

The Real Fake Cameras Of Toy Story 4

Very good video on using virtual cameras in animation.


Vittorio Storaro ASC at CineGear 2016

I went to CineGear this year. It was great; I was able to catch up with some old friends and make new ones.

  • I got a picture of the new Leonard Nimoy street sign on the Paramount lot where the Expo was held.

Nimoy sign

He was a photographer as well as a director, and he did many projects at Disney, including Body Wars.

  • Zeiss was there with a cutaway of one of their lenses.

Zeiss cutaway

Stage 19 Kane

  • Panavision showed the new DXL 8K camera. The footage shown was very nice!

Panavision DXL

  • The best thing was seeing Vittorio Storaro ASC.

He talked about working with Woody Allen on his new film for Amazon Studios, Cafe Society.

This is Woody’s first digital feature, and Vittorio used the Sony F65:

“I had seen that the Sony F65 was capable of recording beautiful images in 4K and 16 bit-colour depth in 1:2, which is my favorite composition,” Storaro said. “So when Woody called me this year asking me to be the cinematographer of his new film with the working title ‘WASP 2015,’ my decision was already made. I convinced him to record the film in digital, so we can begin our journey together in the digital world. It’s time now for the Sony F65!”

He spoke about the Technicolor IB process, about light, shadow, and color, and said that digital makes it too easy.

He stated that a trend that has emerged with the use of digital cameras is that “people want to work faster or show that they can use less light, but they don’t look for the proper light the scene needs. That isn’t cinematography, that’s recording an image. … I was never happy in any set to just see available light,” said Storaro, who has won Oscars for Apocalypse Now, Reds and The Last Emperor. “Even in very important films that take Academy Awards, you can record an image without location lighting. But that’s not necessarily the right light for the character. We have to always move a story forward, not step back.”

Apocalypse Now
Star Wars: The Force Awakens

He elaborated on his work with Coppola and noted that he hasn’t used anamorphic lenses in many years. Sorry, Mr. Tarantino.

The best and most important part though, was when he got even more philosophical. He mentioned Mozart, the Lumiere brothers, Newton, Caravaggio, architecture, and Plato and the Cave. From his website:

Ever since Plato’s “Myth of the Cave” we are used to seeing Images in a specific space. In Plato’s myth, prisoners are kept in a cave facing an interior wall, while behind them, at the entrance to the cave, there is a lighted fire, with some people with statues and flags passing in front of the fire. At the same time, their shadows are projected onto the interior wall of the cave by fire’s light. The prisoners are looking at the moving shadows in that specific area of the wall. They are watching images as a simulation, a “simulacre” of reality, not reality itself. The myth of Plato is a metaphor for the Cinema.

He believes that film is a collaboration as opposed to the auteur theory and emphasized the importance of story.

“You need to find the balance of technology and art,” continued Storaro, who was inspiring and thought-provoking in his speech, also raising an argument against the use of the term ‘director of photography’ to define the role of the cinematographer. “That’s a major mistake. There cannot be two directors. … Let’s respect the director,” he asserted, saying that ‘cinematographer’ is the appropriate word, and adding that it’s not interchangeable with photographer. “Cinematography is motion, we need a journey and to arrive at another point. We don’t create a beautiful frame, but a beautiful film. That’s why I say ‘writing with light.'”

The Last Emperor

Kubrick and his lenses, with Joe Dunton BSC

ARRI IIC and lenses from the LACMA Kubrick exhibit.


Some of the questions are in French. Click on the arrows in the lower right to make full screen.

IN GLORIOUS TECHNICOLOR

Founded in 1915, the Technicolor Motion Picture Corporation transformed cinema forever with its revolutionary color processes. George Eastman House marks this important centennial with the exhibition In Glorious Technicolor, on view January 24 through April 26, 2015 in the special exhibition galleries.

The exhibition celebrates Technicolor’s vivid history, from the company’s early years through the making of such classics of the Hollywood studio era as The Wizard of Oz (1939), Gone With the Wind (1939), and Singin’ in the Rain (1952). Technicolor’s wide-ranging impact on the form and content of cinema is explored through original artifacts from the Technicolor Corporate Archive, projected video clips, and a range of stunning visual displays.

Highlights include the company’s evolving camera technology, from its early two-color camera of the 1920s to the massive Technirama widescreen system of the 1950s. Original costumes, production designs, posters, and photographs document how color was used creatively and presented to the public, while the vibrant dyes used to create Technicolor’s incomparable “look” shed light on the science behind the process. Rare tests from Douglas Fairbanks’s The Black Pirate (1926), behind-the-scenes stills from Errol Flynn’s The Adventures of Robin Hood (1938), and home movies from the set of The African Queen (1951) reveal the stars and filmmakers most associated with color. Additionally, the exhibition honors the achievements of Academy Award–winning cinematographers Ray Rennahan and Jack Cardiff, as well as Technicolor’s often overlooked engineers, whose work remained largely out of the limelight.


To complement the gallery exhibition, the Dryden Theatre is presenting a four-month series of Technicolor films, including some original Technicolor prints.

Magician: The Astonishing Life and Work of Orson Welles

The Lady from Shanghai (1947)

Update: The new Orson Welles documentary Magician opens in Los Angeles and New York City on December 10th!

More info: http://cohenmedia.net/films/magician


Magician: The Astonishing Life and Work of Orson Welles looks at the remarkable genius of Orson Welles on the eve of his centenary – the enigma of his career as a Hollywood star, a Hollywood director (for some a Hollywood failure), and a crucially important independent filmmaker. From Oscar-winner Chuck Workman.

With: Simon Callow, Christopher Welles Foder, Jane Hill Sykes, Norman Lloyd, Ruth Ford, Julie Taymor, Peter Bogdanovich, James Naremore, Steven Spielberg, Henry Jaglom, Elvis Mitchell, Beatrice Welles-Smith, Walter Murch, Costa-Gavras, Oja Kodar, Joseph McBride, Wolfgang Puck, Jonathan Rosenbaum, Michael Dawson, Paul Mazursky, Frank Marshall

VFX Legend Douglas Trumbull talks about the Future of Film … and Kubrick.

From the Sept. 12 issue of The Hollywood Reporter.

Trumbull drives me a short distance from his home to a full-size soundstage and escorts me into a screening room that he has constructed to meet his ideal specifications: a wide wall-to-wall and floor-to-ceiling curved screen, with surround sound, steeply rigged stadium seating and a 4K high-resolution projector. As I put on specially designed 3D glasses and settle into stadium seating, he tells me, with an unmistakable hint of nervousness, “You’re one of the first people on the planet to see this movie.”

Ten minutes later, the lights come back up and I sit in stunned silence. The short that I have just seen, UFOTOG (a blending of the words “UFO” and “fotog,” the latter slang for press photographer), is stunning not because of its story — we’ve all seen movies about UFOs — but because it shows, as it was designed to do, what movies can look like if theaters, studios and filmmakers embrace the MAGI process through which Trumbull brought it to the screen: bigger, brighter, clearer and with greater depth-of-field than anything ever seen in a cinema before.

All of the aforementioned conditions are part of the MAGI equation, but the most essential element is the rate of frames per second at which a film is projected. In the beginning, the Lumiere brothers projected films at 18 fps, slow enough to result in the appearance of flickering —  hence the early nickname for the movies, “the flickers” or “the flicks.” That figure eventually increased to 24 fps, and has remained there, for the most part, ever since.

In 2012, Peter Jackson dared to release The Hobbit‘s first installment at 48 fps, which was supposed to create a heightened sense of realism, but which instead struck many as strange-looking and some even as nauseating. Many deemed the experiment a failure. Trumbull disagreed. He felt that if a digitally shot film was projected even faster — markedly faster, as in 120 fps, via a bright projector and onto a big screen — then the movie screen itself would seemingly disappear and serve effectively as a window into a world on the other side that would appear as real as the world in which one sits.

To the Moon and Beyond

To the Moon and Beyond featured a 70 mm circular image projected onto a dome screen and took viewers on a journey “from the Big Bang to the microcosm in 15 minutes.” Two of the thousands who saw it were Stanley Kubrick, the filmmaker, and Arthur C. Clarke, the writer, who came away from it convinced that an A-level sci-fi film — which eventually became 2001: A Space Odyssey — was possible. Kubrick contracted Graphic Films to produce conceptual designs for the project, but, once it got off the ground, moved it to London, at which point 23-year-old Trumbull cold-called the director and got a job on the film. His greatest contribution to it was devising a way to create a believable “Star Gate” effect, representing “the transformation of a character through time and space to another dimension.” Even though Kubrick alone claimed screen credit and an Oscar for the film’s VFX, Trumbull instantly became a name in the business.

Silent Running (1972) poster

A few years later, he made his directorial debut with Silent Running (1972), a well-received film that landed him deals at Fox, MGM and Warner Bros. — but all of them “unraveled for stupid reasons.” By 1975, “desperate because you can’t live on development deals,” he and Richard Yuricich proposed the creation of the Future General Corporation, through which they would try to identify ways to improve the technology used to make films. Paramount agreed to sponsor the endeavor — which, to them, was a tax write-off — in return for 80 percent ownership. Within the first nine months of its existence, Trumbull says, “We invented Showscan [a manner of projecting films at 60 fps]. We invented the first simulator ride. We invented the 3D interactive videogame. And we invented the Magicam process [by which actors can perform in front of a blue screen, onto which nonexistent locations can be projected to create virtual realities].” And yet, in the end, Paramount “saw no future in the future of movies” and failed to support their efforts, devastating Trumbull, who was under exclusive contract to the studio for the next six years. (The studio’s one gesture that he did appreciate: loaning him out to Columbia to do the special effects for Close Encounters of the Third Kind.)

Trumbull got out of his Paramount contract in 1979 thanks to Star Trek: The Motion Picture. The original effects team that had been engaged for the highly anticipated film couldn’t handle the job, something the studio realized only six months before its long-scheduled Christmas release date. The studio begged Trumbull to take over, and he agreed to do so — provided he was paid a considerable fee and released from his contract. He got what he requested and, to the detriment of his health, also got the job done on time.

Newly a free agent, Trumbull continued to take on special effects jobs for others — for instance, Ridley Scott‘s Blade Runner (1982) — but his primary focus was on directing a film of his own that would demonstrate the capabilities of Showscan. For the project, which he called Brainstorm, he secured a top-notch cast, led by Natalie Wood, and a major distributor, MGM. Production got underway and was almost completed when, on Nov. 29, 1981, tragedy struck: Wood drowned under circumstances that remain mysterious to this day. Since Wood had only a few small scenes left to shoot, Trumbull felt that he could easily finish the film, but MGM, which was in dire financial straits, filed what he deemed a “fraudulent insurance claim” because “they wanted to get out of it.”

Doug Trumbull on motion simulator base for “In Search of the Obelisk” (1993) VistaVision ridefilm at the Luxor Las Vegas.

Photo courtesy of Mice Chat.

Then, in 1990, he was approached about making a Back to the Future ride for Universal Studios venues in Florida, Hollywood and Japan. Others had been unable to conquer it, but he made it happen — and in a groundbreaking way: “It took you out of your seat and put you into the movie. You were in a DeLorean car. You became Marty McFly. You became a participant in the movie. The movie was all around you.” It ran for 15 years, he says, but was “dismissed as a theme park amusement.” He felt it was something more. “This was a moment where, for the first time in history, you went inside a movie.” Even though others failed to see larger possibilities, he says, “That kinda kept me going for a long time because it validated that we could be here in the Berkshires and make breakthroughs that no one else was able to do in Hollywood or anywhere else.”

In 2009, James Cameron’s Avatar, a digitally shot 3D production that grossed a record $2.8 billion worldwide, changed everything. Its success spurred, at long last, filmmakers to transition en masse to digital photography and theaters to transition en masse to digital projection — at which point Trumbull made a crucial discovery. He realized that digital projectors flash at 144 fps — more than twice the rate Showscan had achieved — but films were still being made at 24 fps, with each frame just flashing multiple times. “Could we do a new frame every flash?” he wondered. If so, he reasoned, it might just give people a reason to put down their smartphones, tablets and laptops and actually buy a ticket to see a movie in a theater.
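
To make the arithmetic behind “a new frame every flash” concrete, here is a small sketch using the frame rates quoted above (the 144 Hz figure is the article’s, not an independent spec):

```python
# Flash-count arithmetic behind "a new frame every flash", using the
# frame rates quoted in the article above (144 Hz is the article's figure).
PROJECTOR_FLASH_RATE = 144  # flashes per second

for capture_fps in (24, 48, 60, 120):
    flashes_per_frame = PROJECTOR_FLASH_RATE / capture_fps
    print(f"{capture_fps:>3} fps capture -> each frame is flashed "
          f"{flashes_per_frame:.1f} times")

# 24 fps: every frame is flashed 6 times (the status quo Trumbull describes).
# 120 fps: nearly a fresh frame on every flash, which is the MAGI idea.
```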

After years of work on his farm, Trumbull is finally ready to unveil UFOTOG. Its first public presentation will take place on Sept. 11 as part of the Toronto International Film Festival’s Future of Cinema conference (at which Trumbull will also give a keynote address), and it will also screen days later at the IBC Conference in Amsterdam. At both venues, he says, his message will be rather straightforward: “It’s not rocket science, guys. It’s just a different shape, a different size, a different brightness and a different frame rate. Abandon all that crud that’s leftover from 1927. We’re in the digital age. Get with it.”

The cost of these changes, he insists, will be rather negligible: projectors are already equipped to handle faster frame rates, and would require only slightly more data time and render time; theaters are already adopting brighter projectors that employ laser illumination, which uses a longer-lasting bulb to produce twice the amount of light; and theaters, he believes, will soon recognize that they are in the “real estate business” and that it is in their interest to have fewer total screens but more big screens, for which the public has demonstrated a willingness to pay a premium.

Trumbull’s main objective, though, is “to show the industry what it is possible to do” with MAGI. He says he’s “dying to show” UFOTOG to filmmakers such as Jackson, Cameron and Christopher Nolan, whom he regards as kindred souls. But mostly, he wants to challenge the industry one more time, warning it, “If you want people to come to theaters, you better do something different.”


Microsoft Research First-person Hyperlapse Videos

Microsoft Researcher Johannes Kopf ascends Mount Shuksan in the North Cascades with a GoPro.

Standard video stabilization crops out the pixels on the periphery to create consistent frame-to-frame smoothness. But when applied to greatly sped up video, it fails to compensate for the wildly shaking motion.

Hyperlapse reconstructs how a camera moves throughout a video, as well as its distance and angle in relation to what’s happening in each frame. Then it plots out a smoother camera path and stitches pixels from multiple video frames to rebuild the scene and expand the field of view.

As you might imagine, working with raw video involves crunching a large amount of data; the original process required a compute cluster running for several hours to complete each video. Microsoft developed a series of new algorithms that led to a more efficient process without compromising image quality. The result is that Hyperlapse can now render a high-speed video in a fraction of the time, using a single PC.

The Interactive Visual Media Group focuses on the areas of computer vision, image processing, and statistical signal processing, specifically as they relate to things like enhancing images and video, 3D reconstruction, image-based modeling and rendering, and highly-accurate correspondence algorithms that are commonly used to “stitch” together images.

From Microsoft Research.

We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyper-lapse videos, i.e., time-lapse videos with a smoothly moving camera. At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.


Scene Reconstruction

Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video (shown in red) that is smooth and passes near the input cameras, while ensuring that the virtual camera looks in directions that can be rendered well from the input.

Proxy Geometry

Next, we compute geometric proxies for each input frame. These allow us to render the frames from the novel viewpoints on the optimized path.

Stitched & Blended

Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame. We present a number of results for challenging videos that cannot be processed using traditional techniques.
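
To make the path-optimization step a little more concrete, here is a minimal sketch of the kind of smoothing it implies. The quadratic objective, the `smoothness` weight, and the synthetic data are assumptions for illustration; the actual method also constrains the output camera to look in directions that can be rendered well from the input, which this sketch ignores.

```python
import numpy as np

def optimize_smooth_path(input_positions, smoothness=50.0):
    """Illustrative stand-in for the path optimization described above:
    find an output path p minimizing ||p - x||^2 + smoothness * ||D2 p||^2,
    i.e. a path that is smooth (small second differences) but stays near
    the reconstructed input camera centers x.  input_positions: (N, 3)."""
    x = np.asarray(input_positions, dtype=float)
    n = len(x)
    # Second-difference operator D2 (penalizes curvature of the output path).
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Closed-form least-squares solution: (I + s * D2^T D2) p = x.
    A = np.eye(n) + smoothness * (D2.T @ D2)
    return np.linalg.solve(A, x)

# Toy demo: a shaky forward walk becomes a smooth forward path.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 200)
    shaky = np.stack([t, np.zeros_like(t), 1.7 + np.zeros_like(t)], axis=1)
    shaky += 0.05 * rng.standard_normal(shaky.shape)      # simulated head shake
    smooth = optimize_smooth_path(shaky)
    print("input jitter: ", np.abs(np.diff(shaky, 2, axis=0)).max())
    print("output jitter:", np.abs(np.diff(smooth, 2, axis=0)).max())
```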

Disney Research Automatic Editing of Footage from Multiple Social Cameras

Disney Research demonstrated Automatic Editing of Footage from Multiple Social Cameras at SIGGRAPH.

Video cameras that people wear to record daily activities are creating a novel form of creative and informative media. But this footage also poses a challenge: how to expeditiously edit hours of raw video into something watchable. One solution, according to Disney researchers, is to automate the editing process by leveraging the first-person viewpoints of multiple cameras to find the areas of greatest interest in the scene.

The method they developed can automatically combine footage of a single event shot by several such “social cameras” into a coherent, condensed video. The algorithm selects footage based both on its understanding of the most interesting content in the scene and on established rules of cinematography.

“The resulting videos might not have the same narrative or technical complexity that a human editor could achieve, but they capture the essential action and, in our experiments, were often similar in spirit to those produced by professionals,” said Ariel Shamir, an associate professor of computer science at the Interdisciplinary Center, Herzliya, Israel, and a member of the Disney Research Pittsburgh team.

Whether attached to clothing, embedded in eyeglasses or held in hand, social cameras capture a view of daily life that is highly personal but also frequently rough and shaky. As more people begin using these cameras, however, videos from multiple points of view will be available of parties, sporting events, recreational activities, performances and other encounters.

“Though each individual has a different view of the event, everyone is typically looking at, and therefore recording, the same activity – the most interesting activity,” said Yaser Sheikh, an associate research professor of robotics at Carnegie Mellon University. “By determining the orientation of each camera, we can calculate the gaze concurrence, or 3D joint attention, of the group. Our automated editing method uses this as a signal indicating what action is most significant at any given time.”
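
One simple way to picture the “gaze concurrence” idea is as a least-squares intersection of the cameras’ viewing rays. The sketch below is my own illustration of that idea, not the paper’s exact formulation; it finds the 3D point closest to all of the rays.

```python
import numpy as np

def gaze_concurrence(centers, directions):
    """Estimate a 3D joint-attention point as the point closest, in the
    least-squares sense, to every camera's viewing ray.
    centers: (N, 3) camera positions; directions: (N, 3) viewing directions."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(np.asarray(centers, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Toy example: three head-mounted cameras all looking roughly at the ball.
centers = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [2.0, 3.0, 0.0]])
ball = np.array([2.0, 1.0, 1.5])
print(gaze_concurrence(centers, ball - centers))   # ~ [2.0, 1.0, 1.5]
```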

In a basketball game, for instance, players spend much of their time with their eyes on the ball. So if each player is wearing a head-mounted social camera, editing based on the gaze concurrence of the players will tend to follow the ball as well, including long passes and shots to the basket.

The algorithm chooses which camera view to use based on which has the best quality view of the action, but also on standard cinematographic guidelines. These include the 180-degree rule – shooting the subject from the same side, so as not to confuse the viewer by the abrupt reversals of action that occur when switching views between opposite sides.

Avoiding jump cuts between cameras with similar views of the action and avoiding very short-duration shots are among the other rules the algorithm obeys to produce an aesthetically pleasing video.
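
As a rough illustration of how such rules can drive camera selection, here is a greedy, cost-based sketch. The weights, the greedy strategy, and the input structures are assumptions for illustration; the actual system solves a proper optimization over the footage.

```python
def select_cameras(view_quality, crosses_line, similar_view,
                   switch_penalty=1.0, rule_penalty=10.0, min_shot_len=24):
    """Greedy, rule-based camera selection (illustrative only).
    view_quality[t][c]  - how well camera c sees the action at time t
    crosses_line[a][b]  - True if cutting from camera a to b crosses the 180-degree line
    similar_view[a][b]  - True if a cut from a to b would read as a jump cut"""
    chosen, current, shot_len = [], None, 0
    for qualities in view_quality:
        best, best_score = current, float("-inf")
        for cam, quality in enumerate(qualities):
            score = quality
            if current is not None and cam != current:
                score -= switch_penalty                    # discourage needless cuts
                if shot_len < min_shot_len:
                    score -= rule_penalty                  # avoid very short shots
                if crosses_line[current][cam]:
                    score -= rule_penalty                  # respect the 180-degree rule
                if similar_view[current][cam]:
                    score -= rule_penalty                  # avoid jump cuts
            if score > best_score:
                best, best_score = cam, score
        shot_len = shot_len + 1 if best == current else 1
        chosen.append(best)
        current = best
    return chosen

# Tiny example: camera 1 briefly looks slightly better at the third time step,
# but the cut is suppressed because the resulting shot would be too short.
no = [[False, False], [False, False]]
quality = [[0.9, 0.5], [0.9, 0.5], [0.8, 0.85], [0.9, 0.5]]
print(select_cameras(quality, crosses_line=no, similar_view=no))  # [0, 0, 0, 0]
```
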
The computation necessary to achieve these results can take several hours. By contrast, professional editors using the same raw camera feeds took an average of more than 20 hours to create a few minutes of video.

The algorithm also can be used to assist professional editors tasked with editing large amounts of footage.

Other methods available for automatically or semi-automatically combining footage from multiple cameras appear limited to choosing the most stable or best lit views and periodically switching between them, the researchers observed. Such methods can fail to follow the action and, because they do not know the spatial relationship of the cameras, cannot take into consideration cinematographic guidelines such as the 180-degree rule and jump cuts.

Automatic Editing of Footage from Multiple Social Cameras
Arik Shamir (DR Boston), Ido Arev (Efi Arazi School of Computer Science), Hyun Soo Park (CMU), Yaser Sheikh (DR Pittsburgh/CMU), Jessica Hodgins (DR Pittsburgh)
ACM Conference on Computer Graphics & Interactive Techniques (SIGGRAPH) 2014 – August 10-14, 2014
Paper [PDF, 25MB]

Lucid Dreams of Gabriel – Teaser

From Variety,

Disney and Swiss pubcaster SRF unveil experimental short at Locarno fest.

At the Locarno Film Festival, the Disney lab and SRF jointly unveiled an impressive experimental short titled “Lucid Dreams of Gabriel” (see teaser), which for the first time displayed local frame-rate variation, local pixel timing, super-slow-motion effects, and a variety of artistic shutter functions, showcasing the “Flow-of-Time” technique.

The project was created by the Disney Research lab in tandem with the formidable computer graphics lab at the Swiss Federal Institute of Technology Zurich (ETH) with SRF providing studio space, personnel, and other resources.

“We wanted to control the perception of motion that is influenced by the frame rate (how many images are shown per second) as well as by the exposure time,” said Markus Gross, who is Vice President Research, Disney Research and director of Disney Research, Zurich, at the presentation.

Use of the new technologies in the short, which is a surreal non-linear story about a mother achieving immortality in her son’s eyes after an accident in the spectacular Engadin Alpine valley, allowed director Sasha A. Schriber to avoid using green screen and to make the transition from reality (at 24 frames per second) to a supernatural world (at 48 frames per second).

“Lucid Dreams Of Gabriel,” an experimental short film created by Disney Research in collaboration with ETH, Zurich, was shot at 120fps/RAW with all effects invented and applied in-house at Disney Research Zurich. We sought to produce a visual effects framework that would support the film’s story in a novel way. Our technique, called “The Flow-Of-Time,” includes local frame rate variation, local pixel timing and a variety of artistic shutter functions.

Effects include:
• High dynamic range imaging
• Strobe and rainbow shutters
• Global and local frame-rate variations
• Flow motion effects
• Super slow motion
• Temporal video compositing

The following scenes of the teaser, indicated by the timecode, demonstrate different components of our new technology:

Shots with a dark corridor and a window (0:08); a man sitting on a bed (0:16):
Our new HDR tone-mapping technique makes use of the full 14 bit native dynamic range of the camera to produce an image featuring details in very dark as well as very bright areas at the same time. While previous approaches have been mostly limited to still photography or resulted in artifacts such as flickering, we present a robust solution for moving pictures.
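
For readers unfamiliar with tone mapping, the toy sketch below compresses 14-bit linear footage into display range with the classic global Reinhard operator. It is only meant to illustrate what tone mapping does; it is not the film’s technique and does not attempt the temporal robustness for moving pictures described above.

```python
import numpy as np

def reinhard_tonemap(frame_14bit, exposure=2.0):
    """Toy global tone mapping: normalize 14-bit linear values, compress
    highlights with L / (1 + L), and quantize to 8 bits for display.
    Shown only to illustrate the idea; it ignores the temporal-consistency
    problem that the film's method addresses."""
    linear = frame_14bit.astype(np.float64) / (2**14 - 1)
    l = exposure * linear
    return np.clip(l / (1.0 + l) * 255.0, 0, 255).astype(np.uint8)
```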

A hand holding a string of beads (0:14):
As we experimented with novel computational shutters, the classic Harris shutter was extended to make use of the full rainbow spectrum instead of the traditional limitation to just red, green, and blue. For this scene, the input was rate-converted using our custom technology, temporally split and colored, then merged back into the final result.
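
A minimal sketch of that idea: split a short, rate-converted run of frames into temporal bands, tint each band a different hue around the color wheel, and merge them into one exposure. The hue spacing and the simple averaging merge are assumptions for illustration, not the film’s actual pipeline.

```python
import colorsys
import numpy as np

def rainbow_shutter(frames, bands=6):
    """Rainbow-spectrum take on the Harris shutter (illustrative sketch).
    frames: array of shape (T, H, W, 3), RGB in [0, 1], with T >= bands."""
    frames = np.asarray(frames, dtype=np.float64)
    groups = np.array_split(frames, bands)                      # temporal split
    tints = np.array([colorsys.hsv_to_rgb(i / bands, 1.0, 1.0)  # one hue per band
                      for i in range(bands)])
    accum = np.zeros_like(frames[0])
    for group, tint in zip(groups, tints):
        accum += group.mean(axis=0) * tint                      # tint each band's exposure
    # Normalize so a static white scene stays white after merging the bands.
    return np.clip(accum / tints.sum(axis=0), 0.0, 1.0)
```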

The double swings scene (0:20):
Extending our experiments with computational shutters, this scene shows a variety of new techniques composed into a single shot. Making full use of the original footage shot at 120 fps, the boy has been resampled at a higher frame rate (30 fps) with a short shutter, resulting in an ultra-crisp, almost hyper-real appearance, while the woman was drastically resampled at a lower frame rate (6 fps) with an extreme shutter that is physically not possible, adding a strong motion blur to make her appear more surreal.

Car driving backwards and a flower (0:30); a train (0:36):
For these scenes, we were experimenting with extreme computational shutters. The theoretical motion blur for the scenes was extended with a buoyancy component and modified through a physical fluid simulation, resulting in physically impossible motion blur. As shown, it is possible to apply this effect selectively on specific parts of the frame, as well as varying the physical forces.

Super slow motion closeup of the boy (0:44); a handkerchief with motion blur and super slow motion (0:47); an hourglass (0:50):
These shots show the classical application of optical flow – slow motion. However, with our new technology we have been able to achieve extremely smooth pictures with virtually no artifacts, equivalent to a shutter speed of 1000 fps. At the same time, artificial motion blur equivalent to a shutter of far more than 360 degrees can be added to achieve a distinct “stroby” look, if desired, while maintaining very fluid motion in all cases. We are also able to speed up or slow down parts of the scene, e.g. to play the background in slow motion while the foreground runs at normal speed. All of these effects can be applied on a per-pixel basis, giving full freedom to the artist.
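
For readers curious about the basic mechanism, here is a deliberately crude optical-flow interpolation sketch using OpenCV (assumed available): estimate dense flow between two frames and warp halfway along it to synthesize one in-between frame. The production pipeline is far more sophisticated, with per-pixel speed control, occlusion handling, and artifact suppression, none of which is attempted here.

```python
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Synthesize one in-between frame via dense Farneback optical flow.
    Crude backward warp that assumes locally smooth flow; real slow-motion
    systems handle occlusions and flow errors far more carefully."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # A pixel that ends up here halfway through the motion started roughly
    # half a flow vector earlier, so sample frame_a there.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```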

Additional info on the film:

“Lucid Dreams Of Gabriel” is a surrealistic and non-linear story about a mother achieving immortality through her son, unconditional love, and the fluidity of time.

Producer: Markus Gross
DOP: Marco Barberi
Script & Director: Sasha A. Schriber
Camera & lenses: Arri Alexa XT with Zeiss prime lenses
Original language: English
Length: 11 minutes

Fireworks filmed with a drone.

UPDATE: It is illegal to do this.