Happy Birthday to George Lucas! As we know, George was a big proponent of the use of digital technology in cinema. When I worked at Sony in the 1990s, we were on the cutting edge of using digital cameras for cinematography.
Dann Cahn on editing I Love Lucy with the Three-Headed Monster, a Moviola that played the film from all three camera angles simultaneously and (hopefully) in sync with an optical track for sound.
FaceDirector can seamlessly blend several takes to create nuanced blends of emotions, potentially cutting down on the number of takes necessary in filming.
Interview with Walter Murch, who received the Camerimage festival’s Special Award to Editor with Unique Visual Sensitivity. He edited sound on American Graffiti and The Godfather: Part II, received his first Academy Award nomination for The Conversation, won his first Oscar for Apocalypse Now, and won an unprecedented double Oscar for Best Sound and Best Film Editing for his work on The English Patient.
What has three letters, many aliases and is of major significance to the sound community? You guessed it: ADR aka Automated Dialog Replacement aka Additional Dialog Recording aka Dubbing aka Looping. All of these monikers are understood as the process of re-recording dialog that cannot be salvaged from a production. To make one thing clear, there is nothing automated about it. ADR is an art.
In the last of three programmes in which composer Neil Brand celebrates the art of cinema music, Neil explores how changing technology has taken soundtracks in bold new directions and even altered our very idea of how a film should sound.
Neil tells the story of how the 1956 science fiction film Forbidden Planet ended up with a groundbreaking electronic score that blurred the line between music and sound effects, and explains why Alfred Hitchcock’s The Birds has one of the most effective soundtracks of any of his films – despite having no music. He shows how electronic music crossed over from pop into cinema with Midnight Express and Chariots of Fire, while films like Apocalypse Now pioneered the concept of sound design – that sound effects could be used for storytelling and emotional impact.
Neil tracks down some of the key composers behind these innovations to talk about their work, such as Vangelis (Chariots of Fire, Blade Runner), Carter Burwell (Twilight, No Country for Old Men) and Clint Mansell (Requiem for a Dream, Moon).
Sound of Cinema: The Music that Made the Movies
The Lady from Shanghai (1947)
Update: The new Orson Welles documentary MAGICIAN opens in Los Angeles and New York City on December 10th!
More info: http://cohenmedia.net/films/magician
Magician: The Astonishing Life and Work of Orson Welles looks at the remarkable genius of Orson Welles on the eve of his centenary – the enigma of his career as a Hollywood star, a Hollywood director (for some a Hollywood failure), and a crucially important independent filmmaker. From Oscar-winner Chuck Workman.
With: Simon Callow, Christopher Welles Feder, Jane Hill Sykes, Norman Lloyd, Ruth Ford, Julie Taymor, Peter Bogdanovich, James Naremore, Steven Spielberg, Henry Jaglom, Elvis Mitchell, Beatrice Welles-Smith, Walter Murch, Costa-Gavras, Oja Kodar, Joseph McBride, Wolfgang Puck, Jonathan Rosenbaum, Michael Dawson, Paul Mazursky, Frank Marshall
Disney Research demonstrated Automatic Editing of Footage from Multiple Social Cameras at SIGGRAPH.
Video cameras that people wear to record daily activities are creating a novel form of
creative and informative media. But this footage also poses a challenge: how to expeditiously
edit hours of raw video into something watchable. One solution, according to Disney researchers,
is to automate the editing process by leveraging the first-person viewpoints of multiple cameras
to find the areas of greatest interest in the scene.
The method they developed can automatically combine footage of a single event shot by
several such “social cameras” into a coherent, condensed video. The algorithm selects footage
based both on its understanding of the most interesting content in the scene and on established
rules of cinematography.
“The resulting videos might not have the same narrative or technical complexity that a
human editor could achieve, but they capture the essential action and, in our experiments, were
often similar in spirit to those produced by professionals,” said Ariel Shamir, an associate
professor of computer science at the Interdisciplinary Center, Herzliya, Israel, and a member of
the Disney Research Pittsburgh team.
Whether attached to clothing, embedded in eyeglasses or held in hand, social cameras
capture a view of daily life that is highly personal but also frequently rough and shaky. As more
people begin using these cameras, however, videos from multiple points of view will be
available of parties, sporting events, recreational activities, performances and other encounters.
“Though each individual has a different view of the event, everyone is typically looking
at, and therefore recording, the same activity – the most interesting activity,” said Yaser Sheikh,
an associate research professor of robotics at Carnegie Mellon University. “By determining the
orientation of each camera, we can calculate the gaze concurrence, or 3D joint attention, of the
group. Our automated editing method uses this as a signal indicating what action is most
significant at any given time.”
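The gaze-concurrence idea described above can be sketched numerically. Assuming each camera provides an estimated position and a unit-norm viewing direction (the paper recovers these from the footage itself; this is just an illustrative least-squares version, not Disney’s actual implementation), the 3D joint-attention point can be taken as the point closest to all of the viewing rays:

```python
import numpy as np

def joint_attention(positions, directions):
    """Estimate a 3D joint-attention point ("gaze concurrence") as the
    least-squares intersection of the cameras' viewing rays.

    positions  : iterable of (3,) camera centers
    directions : iterable of (3,) unit-norm viewing directions

    The distance from a point x to ray (p, d) is ||P (x - p)|| with the
    projector P = I - d d^T, so minimizing the summed squared distances
    gives the linear system (sum_i P_i) x = sum_i P_i p_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(positions, directions):
        P = np.eye(3) - np.outer(d, d)  # projector perpendicular to ray i
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

For example, three cameras at the origin, at (5, 0, 5) and at (0, 5, 5), looking along +z, -x and -y respectively, all stare at (0, 0, 5), and the solver recovers that point. In real footage the rays never intersect exactly, and the residual distance to each ray could serve as the "how interesting is this moment" signal the researchers describe.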
In a basketball game, for instance, players spend much of their time with their eyes on the
ball. So if each player is wearing a head-mounted social camera, editing based on the gaze
concurrence of the players will tend to follow the ball as well, including long passes and shots to the basket.
The algorithm chooses which camera view to use based on which has the best quality
view of the action, but also on standard cinematographic guidelines. These include the 180-
degree rule – shooting the subject from the same side, so as not to confuse the viewer by the
abrupt reversals of action that occur when switching views between opposite sides.
Avoiding jump cuts between cameras with similar views of the action and avoiding very
short-duration shots are among the other rules the algorithm obeys to produce an aesthetically pleasing result.
The computation necessary to achieve these results can take several hours. By contrast,
professional editors using the same raw camera feeds took an average of more than 20 hours to
create a few minutes of video.
The algorithm also can be used to assist professional editors tasked with editing large
amounts of footage.
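The selection rules described above can be illustrated with a toy greedy pass. This is not the researchers’ actual algorithm (which optimizes over the whole timeline); the per-frame quality scores, the side labels, and the similarity set are all hypothetical inputs standing in for quantities the real system derives from camera geometry:

```python
def select_shots(quality, sides, min_shot_len=30, similar=frozenset()):
    """Toy camera-selection pass illustrating the press release's rules:
      - prefer the camera with the best view-quality score,
      - hold each shot at least `min_shot_len` frames (no very short shots),
      - never cut between cameras with near-identical views (jump cuts),
      - 180-degree rule: only cut between cameras on the same side of
        the action line (`sides[cam]` is 'L' or 'R').

    quality : list of per-frame score lists, one score per camera
    similar : set of frozenset camera pairs considered too similar to cut between
    Returns a list of (start_frame, camera) shot boundaries.
    """
    cuts, current, held = [], None, 0
    for t, scores in enumerate(quality):
        ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
        for cam in ranked:
            if current is None or cam == current:
                break  # first shot, or staying on the current camera
            if (held >= min_shot_len
                    and sides[cam] == sides[current]
                    and frozenset((cam, current)) not in similar):
                break  # a legal cut to a better view
        if cam != current:
            cuts.append((t, cam))
            current, held = cam, 0
        held += 1
    return cuts
```

With two same-side cameras whose quality scores swap halfway through, the sketch cuts once at the crossover; label the cameras on opposite sides of the action line, or raise the minimum shot length past the crossover, and the cut is suppressed, mirroring the constraints the release describes.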
Other methods available for automatically or semi-automatically combining footage from
multiple cameras appear limited to choosing the most stable or best lit views and periodically
switching between them, the researchers observed. Such methods can fail to follow the action
and, because they do not know the spatial relationship of the cameras, cannot take into
consideration cinematographic guidelines such as the 180-degree rule and jump cuts.
Automatic Editing of Footage from Multiple Social Cameras
Arik Shamir (DR Boston), Ido Arev (Efi Arazi School of Computer Science), Hyun Soo Park (CMU), Yaser Sheikh (DR Pittsburgh/CMU), Jessica Hodgins (DR Pittsburgh)
ACM Conference on Computer Graphics & Interactive Techniques (SIGGRAPH) 2014 – August 10-14, 2014
Paper [PDF, 25MB]
A very good documentary about the EditDroid. When I worked at Disney Imagineering, we were using laser disc players in a lot of places, including EPCOT, starting in the 1980s. We were also innovators and early adopters of nonlinear editing and video-to-film matchback.
From the film’s website:
The EditDroid was (one of) the first nonlinear electronic editing systems and used several laser disc players loaded with the raw footage of a film. The simple computer interface was unique for its time. After a short period of success, the EditDroid disappeared from the film scene and George Lucas sold the machine’s patents to a small company called Avid.
I highly recommend the book Droidmaker: George Lucas And the Digital Revolution. It is also available as an iBook and on Kindle.
This book ventures into territory never explored, as Rubin, a former member of the Lucasfilm Computer Division, reconstructs the events in Hollywood, in Silicon Valley, and at Lucas’ private realm in Marin County, California, to track the genesis of modern media. With unprecedented access to images and key participants from Lucasfilm, Pixar and Zoetrope (from George Lucas and the executives who ran his company to the small team of scientists who made the technological leaps), Rubin weaves a tale of friendships, a love of movies, and the incessant forward movement of technology. This is a compelling story that takes the reader into an era of technological innovation almost completely unknown.
Special thanks to The Soundworks Collection for this video.
“Ray Dolby was a brilliant scientist whose inventions are in use every day in recording studios, sound editing suites, mix stages and cinemas worldwide,” said MPSE president Frank Morrone. “He was a giant in our industry and we take great pride in saluting his many contributions to our craft.”
Dolby, who passed away last September, was the founder of Dolby Laboratories. He is credited with developing a noise reduction system that delivered sound recordings with greater clarity and fidelity than was previously possible. The Academy Award winner also developed the first commercially viable surround-sound system, which led to the widespread use of 5.1- and 7.1-channel sound systems in theaters and homes.
In 2012, the home of the Academy Awards was renamed the Dolby Theatre, and the grand ballroom at Hollywood & Highland is now known as the Ray Dolby Ballroom.
Ray Dolby Tribute by Walter Murch.