Happy Birthday to George Lucas! As we know George was a big proponent of the use of digital technology in cinema. When I worked at Sony in the 1990’s, we were on the cutting edge of using digital cameras for cinematography. Here is a video from Sony that highlights the development of the Sony cameras used in Star Wars.
Here is another video from ILM about all the areas that George changed with digital technology for editing and VFX. Thank you Mr. Lucas!
The back stories of I Love Lucy are probably familiar to many, but the extent of Cahn’s influence and importance within the post-production community cannot be overstated. In 1951, the studio system was changing and the industry was in flux. Fewer audiences were going to movie theaters and more were staying home to watch the new medium, television. No one could have imagined then how television would impact the film industry, American culture, or the combination of events that would launch the historic and groundbreaking series, I Love Lucy. Most television shows were broadcast live from New York City, sometimes recorded onto very poor quality Kinescope to broadcast to the rest of the country.
Lucille Ball’s husband and executive producer, Desi Arnaz, had other ideas. He negotiated with CBS for Desilu Productions to pay the difference to shoot the show on 35mm film with an IA Hollywood crew, in front of a live audience. Filming episodes gave him flexibility and control; he could stay home in Los Angeles with his wife and newborn daughter Lucie, work with familiar crews and production facilities, and add spontaneity to each episode with a live audience. Arnaz also negotiated to own the negative, which would pay off in unexpected ways later with syndication and archiving. Of course, he could not have known that the series would one day be available on formats like VHS and DVD many years later.
I was introduced to Cahn at the Lucy-Desi Museum on the evening of the first day’s events. The museum has a permanent installation of the recreated sets of the Ricardos’ New York City apartment (the one with the window where Lucy, dressed as Superman, hid on the ledge outside) and the Beverly Palms Hotel (where Lucy and Harpo Marx mimed each other’s gestures), site of the “LA at Last” episodes. He was clearly thrilled to be “on set” again, and wasted no time indoctrinating me into his oeuvre with a personal guided tour.
Along the entire expanse of the wall opposite these sets is a life-sized black-and-white photograph taken during the filming of an episode in 1951, showing the audience with the three-camera setup and everyone involved in the show, including Cahn. Also opposite the sets is a seven-minute video loop in which he vibrantly explains how the first season’s episodes came together, working every day of the week for over 35 consecutive weeks, with a live audience. As we walked, Cahn identified everyone from that first season: Marc Daniels, the director; Jess Oppenheimer, the producer; and cinematographer Karl Freund, who won the 1938 Best Cinematography Academy Award for The Good Earth. I asked how the series’ crew came together. “Well,” he said, “Lucy and Desi wanted the best in the business, the people they had worked with before in film, so they approached those same people.” I then asked if the show was required to go under a union contract, and he said, “No, but they insisted on working with the best, and those people worked union, so that’s the way it was going to be.”
We continued the tour and came upon the infamous “Three-Headed Monster,” a Moviola which played the film from all three camera angles simultaneously and (hopefully) in sync with an optical track for sound. Dubbed “The Monster” by Dann because of its size (the props room was the only space on the stage large enough to accommodate it), the machine is enormous. It was daunting for me, a digital-age assistant editor, to imagine the reels of film from three cameras running through it, the editor marking and making changes, all the while determining and meeting the demands of a new workflow, which was really multi-cam editing in its beta stage. We had a long weekend ahead of us, so after the tour, we decided to save some of my questions for a one-on-one breakfast meeting the next day, before the other events began.
The first thing Cahn said to me at breakfast was, “Don’t ask me about the Three-Headed Monster, everyone asks about that.” So that scrapped my first question. Instead, I asked about the optical sound track, and how that worked, since I had never worked with one. His eyes widened and he said, “Oh well, you know we used to read the optical track, we read it with a sort of shorthand. We could actually see the sound that had been recorded by reading the levels on the print itself. So when we switched to Mag the second season we couldn’t read the lines anymore, and it was a lot more work for us! But we adjusted to it and saved a lot of money and time and the quality was much better!” This got me thinking about schedules and time constraints, so I asked him to describe a typical work week.
The schedule was tight, Cahn related, especially compared to the more familiar pace of feature film editing. A new episode had a table read on Monday, rehearsal on Tuesday, camera rehearsal on Wednesday, and a full camera run-through on Thursday. On Friday evening, in front of a live audience, the episode was filmed in scripted scene order; the film was processed, printed, and in the cutting room on Monday morning, usually by eight AM. Dann marked with a grease pencil; his assistant Bud made the cuts (with scissors) and pasted; cut scenes were then adjusted and fixed. The editor’s cut was ready to screen with the director by the time rehearsals for the next episode were already under way. Very quickly, due to demands on set, and with Cahn’s natural ability at cutting comedy and working fast, the director’s cut dissipated. A pattern of six-day work weeks and 14-hour days was unavoidably established.
In the context of the high-pressure schedule, he recalled, “They thought that the Monster would enable me to do everything, but it was just a tool, like the Avid is today; we couldn’t do everything within the time constraints! It’s expected today that picture editors do temp sound and music work.” The crew quickly increased to include an apprentice and an additional editor for sound effects and music. Dann remembered Desi’s remark to him: “Danny, you want a crew bigger than my band?” But that’s exactly what eventually happened as Desilu expanded its productions as well as Cahn’s role in the company.
Inevitably, just as the workload seemed more manageable with his expanded crew working on the first episode, it was decided that the second episode would air first. The reaction to the second episode was so strong that the sponsor and CBS decided to switch the air dates. The six-day editorial work week immediately shifted to seven days, and within four weeks all the editing and sound work, opticals, negative cutting and answer print was completed and delivered within hours of airtime. In addition to these unforeseen shakeups, Cahn also had to think creatively and act fast, especially when things didn’t run as smoothly as planned.

The first serious technical issue the editing team confronted was one still familiar to assistant editors today: fixing out-of-sync dailies. The three-camera setup used a “blue light” system instead of the traditional clapper; as the cameras rolled at the start of a scene, all the film rolls were buzzed and flashed with a light that was exposed onto a frame of film and soundtrack. The three cameras were interlocked so that the flash would occur on all three simultaneously. However, the flash from the three different cameras never actually wound up in the same place as intended, so the task of eye-synching most of the footage was added to the crunched schedule. After the first few shows, Cahn decided to go to the studio mill and make a giant-sized wooden clapper that would cover all three cameras, and the sync problem was resolved. He then recalled Karl Freund’s wisecrack to Jess Oppenheimer: “We’ve got a bright boy here; with this giant clapper he’s reinvented the wheel!”
I asked Dann about music cues and how that developed. “Director Marc Daniels’ experience was in live theater, and that kind of spontaneity was great for the show, but not for getting the music cues I needed for a cut,” he explained. “I’d get music with dailies, but they were never the right length and nothing ever matched. So to get around this, I’d cut the episode and take the timings to the set on Friday, just like we did in features; the band was set up, and I’d give them my list of cues to record. They had to learn that not everything could happen all at once in the cutting room; it wasn’t like live TV or theater. The show had to be scored just like a movie, and I was always adapting motion picture techniques to everything we did!”
Not all issues were necessarily technical problems to be solved. Sometimes it was inspiration out of necessity. I brought up the subjects of visual effects and opticals, and Dann offered two interesting examples. The first was the “LA at Last” episodes, when the Ricardos and the Mertzes traveled from New York City to Los Angeles in a convertible car. There was no time to send the cast to New York or anywhere else for these episodes. Location shots with a second unit would be faster and keep the story authentic. The location photography was assigned to Cahn, and he worked out the various angles with the DP and the second season director, Bill Asher.
The very first location was the George Washington Bridge, which provided the background plate for the first process photography for television. In the completed episode, we see the gang at the start of their trip, crossing the bridge, along with all the regular car traffic following and passing. Cahn shot the traffic from the back of a truck, and that became the film plate that was projected behind the gang in the convertible on set in Los Angeles. Another optical was for the sponsor commercials. “Every week we received new commercials from the Milton Biow Agency in New York City,” Cahn recalled. “These played an integral part and tied into each episode in a unique way week-to-week. The commercials were animated stick figures of Lucy and Desi doing different things, and the animation would peel away to reveal the upcoming scene.” These effects were not firsts for film, but they were for television.
As with the day before, everyone had questions. One of the last questions of the morning was about delivering prints for broadcast: “How did the prints get to their locations on time, and what if they didn’t?” Cahn folded his arms and smiled. “The prints flew out on planes, and because there were no jets, it was a long trip to New York, with stops along the way!” he said. We all looked at each other as if this was the one element we could finally relate to in our real day-to-day lives: how much times have changed and how different the world is. Cahn continued, “There was one close call, and the print arrived in New York only a couple of hours before air time, but we made it!”
FaceDirector software can seamlessly blend several takes to create nuanced blends of emotions, potentially cutting down on the number of takes necessary in filming.
A new software, from Disney Research in conjunction with the University of Surrey, may help cut down on the number of takes necessary, thereby saving time and money. FaceDirector blends images from several takes, making it possible to edit precise emotions onto actors’ faces.
Shooting a scene in a movie can necessitate dozens of takes, sometimes more. In Gone Girl, director David Fincher was said to average 50 takes per scene. For The Social Network, actors Rooney Mara and Jesse Eisenberg acted the opening scene 99 times (directed by Fincher again; apparently he’s notorious for this). Stanley Kubrick’s The Shining involved 127 takes of the infamous scene where Wendy backs up the stairs swinging a baseball bat at Jack, widely considered the most takes of any scene in film history.
“Producing a film can be very expensive, so the goal of this project was to try to make the process more efficient,” says Derek Bradley, a computer scientist at Disney Research in Zurich who helped develop the software.
Disney Research is an international group of research labs focused on the kinds of innovation that might be useful to Disney, with locations in Los Angeles, Pittsburgh, Boston and Zurich. Recent projects include a wall-climbing robot, an “augmented reality coloring book” where kids can color an image that becomes a moving 3D character on an app, and a vest for children that provides sensations like vibrations or the feeling of raindrops to correspond with storybook scenes. The team behind FaceDirector worked on the project for about a year, before presenting their research at the International Conference on Computer Vision in Santiago, Chile this past December.
Figuring out how to synchronize different takes was the project’s main goal and its biggest challenge. Actors might have their heads cocked at different angles from take to take, speak in different tones or pause at different times. To solve this, the team created a program that analyzes facial expressions and audio cues. Facial expressions are tracked by mapping facial landmarks, like the corners of the eyes and mouth. The program then determines which frames can be fit into each other, like puzzle pieces. Each puzzle piece has multiple mates, so a director or editor can then decide the best combination to create the desired facial expression.
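The frame-matching idea described above can be sketched in code. The following is a hypothetical illustration, not Disney's actual implementation: it assumes each take has been reduced to a sequence of per-frame feature vectors (say, facial-landmark coordinates, possibly combined with audio features), uses dynamic time warping to find dense frame-to-frame correspondences between two takes, and blends matched frames with a director-chosen weight. All function names and the feature representation are assumptions made for this sketch.

```python
import math

def frame_cost(a, b):
    """Euclidean distance between two per-frame feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def align_takes(take_a, take_b):
    """Dynamic time warping over two takes (lists of feature vectors).

    Returns a list of (i, j) index pairs: frame i of take A matched
    to frame j of take B, analogous to fitting 'puzzle pieces'.
    """
    n, m = len(take_a), len(take_b)
    INF = float("inf")
    # dp[i][j] = minimal accumulated cost of aligning the first i
    # frames of take A with the first j frames of take B
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = frame_cost(take_a[i - 1], take_b[j - 1])
            dp[i][j] = c + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    # Backtrack from the end to recover the warping path
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1])
        if step == dp[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == dp[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

def blend(frame_a, frame_b, w):
    """Weighted blend of two matched frames (w=0 -> take A, w=1 -> take B)."""
    return [(1 - w) * x + w * y for x, y in zip(frame_a, frame_b)]

# Toy example: two takes of different lengths, 1-D "landmark" features
take_a = [[0.0], [1.0], [2.0]]
take_b = [[0.0], [0.9], [1.1], [2.0]]
for i, j in align_takes(take_a, take_b):
    print(i, j, blend(take_a[i], take_b[j], 0.5))
```

In the real system the per-frame cost would combine landmark positions with audio cues, and the blend step would warp and composite actual image pixels rather than averaging feature vectors; this sketch only shows the alignment logic that gives the director a choice among matched frame combinations.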
To create material with which to experiment, the team brought in a group of students from Zurich University of the Arts. The students acted several takes of a made-up dialogue, each time doing different facial expressions—happy, angry, excited and so on. The team was then able to use the software to create any number of combinations of facial expressions that conveyed more nuanced emotions—sad and a bit angry, excited but fearful, and so on. They were able to blend several takes, say a frightened one and a neutral one, to create rising and falling emotions.
The FaceDirector team isn’t sure how or when the software might become commercially available. The product still works best when used with scenes filmed while sitting in front of a static background. Moving actors and moving outdoor scenery (think swaying trees, passing cars) present more of a challenge for synchronization.
We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states. As an example, given sad and angry video takes of a scene, our method empowers a movie director to specify arbitrary weighted combinations and smooth transitions between the two takes in post-production. Our contributions include (1) a robust nonlinear audio-visual synchronization technique that exploits complementary properties of audio and visual cues to automatically determine robust, dense spatio-temporal correspondences between takes, and (2) a seamless facial blending approach that provides the director full control to interpolate timing, facial expression, and local appearance, in order to generate novel performances after filming. In contrast to most previous works, our approach operates entirely in image space, avoiding the need of 3D facial reconstruction. We demonstrate that our method can synthesize visually believable performances with applications in emotion transition, performance correction, and timing control.
Download: “FaceDirector: Continuous Control of Facial Performance in Video” [PDF, 13.22 MB]
After the Camerimage international film festival’s special screening of The Talented Mr. Ripley, Anthony Minghella’s wonderful and haunting 1999 film with Matt Damon in the starring role, I had the unique pleasure and honor of seeing and listening to probably the greatest film editor and sound designer of the last half century. Walter Murch, the living legend of the filmmaking business whose career was built on films such as Apocalypse Now, The Conversation and The Godfather trilogy, was invited to Bydgoszcz, Poland to receive the festival’s Special Award to Editor with Unique Visual Sensitivity. This was the first time I had ever had the chance to see him in person, and, besides his coming off as a very nice and humble human being, listening to him talk about filmmaking, editing and the history of film was incredibly inspiring and satisfying.
Sitting at a small table on stage, with a glass of water at his side, Walter Murch engaged the audience, and the crowded theater—mind you, many in the audience were filmmakers themselves—bombarded him with questions, seeking his advice and wanting to soak up as much wisdom as possible. Murch briefly discussed his relationship with Minghella, calling him an extremely collaborative director who wanted and accepted input from his crew (but “still had strong vision and ideas”), recalling how they met and how Minghella explained to him that, when he found a perfect T-shirt, he’d buy hundreds of them, never to have to set out on the risky task of finding new clothes. The message was clear: if Murch proved to be a capable editor, Minghella would want to work with him for the rest of his life. They did three films together (The English Patient, The Talented Mr. Ripley, Cold Mountain), and would surely have collaborated again had it not been for the director’s tragic death in 2008.
One of the most interesting parts of the conversation was when Murch explained one of the things that inevitably changed with the rise of digital technology and its use in filmmaking. Back in the good old days, after a hard day’s work on set, the crew would gather and watch the ‘dailies,’ the material they had filmed that specific day. With minds clear and concentrated on the film, they would immerse themselves in the footage and discuss the material. Dailies became a part of history, as there’s no need for them when the crew can monitor what’s being filmed on set simultaneously on their screens. But since during filming people have tons of things on their minds and can hardly relax in front of the screen, Murch believes dailies should be brought back into practice, as they proved very useful in the past.
Walter Murch mixing Apocalypse Now in 1979
On the unsurprising question of what you need to be a good editor, Murch said you needed to be ready to spend 16 hours a day in a small, stuffy room with no windows, hearing the same things repeated over and over again like torture. Furthermore, a good editor has to have a good sense of rhythm because, after all, editing is basically choreographing a line of images. The other important thing is to be able to anticipate the audience’s reaction. According to Murch, the editor is the only representative of the audience in a film crew: his job is to predict how the viewer will respond to the movie, and to do so, he has to place himself in their shoes. Therefore, Murch tends to avoid seeing any part of filming and visits the set only if really necessary, believing too much information would prove to be a burden, as it would distance him from the position of the viewer, who will see the film without any knowledge of the size of the set or the sort of sandwiches served during breaks. The editor, Murch continues, is one of the few people with great effect on the film who can completely isolate himself if he wants to.
What I did not know was that Murch had some influence on the script for The Talented Mr. Ripley. As he was sent the screenplay six months prior to filming, he made a couple of suggestions regarding the way the film should open and how it should end, and Minghella listened. But it’s not strange, Murch says, that editors get the screenplay months, or even a year, in advance: it’s actually common practice nowadays.
Needless to say, I left the theater impressed like a schoolboy, as I should be in the presence of a professional of such caliber. This made me a little more nervous during our interview, but it turned out there was no need whatsoever to feel uncomfortable. That’s who Murch is—an editing genius capable of making you feel as if he’s your friend from elementary school.
Fellow USC alums Walter Murch and George Lucas
In an interesting interview you recently gave to Indiewire, you said that films are called motion pictures, but that they could be easily called emotion pictures since the point of every film should be to cause an emotional response in the audience. Do you think this should be top priority in any film?
Yes, with the proviso that it should be the correct emotion. Films are very good at stirring up emotion but you have to be careful about which emotion you’re stirring up. So in a sense the filmmakers, from the directors to anybody else, have to really say—what emotion are we going for here and why are we going for it? And how does that emotion relate to what we had in the previous and will have in the following scene? And can we also track not only the emotion but the logic of everything that’s happening, basically is the story understandable? So this dance between intellect and emotion, which is kind of basic to what human beings are, is something that we have to be very careful about. In a film, for instance, you could stage a murder in a very brutal way which would stir up emotions in the audience, but is that going to confuse things later on in the story?
You also talked about over-intentionality in movies, how it’s easy for the audience to feel manipulated into feeling something if things are edited in a certain way. How difficult is it for you not to cross that border, to cause an organic feeling in a viewer rather than a manipulated one?
It’s very difficult. Because films are evolving under our fingers, so to speak. And we want to communicate certain things and we’re anxious that the audience understands what we’re trying to say. And so many things are uncertain in the making of a film that you can sometimes hold on to a scene as being important, but you can learn later that, in fact, by removing that scene in a strange, sometimes mystifying way the whole film relaxes, and the audience gets everything you’re saying even without this very definite moment. I remember many years ago working on a film with Fred Zinnemann called Julia. As we worked, the arrows began to point at one scene in particular at the beginning of the film. Maybe we should lose this scene, because again, there was this over-intentionality to it. And so we, meaning Fred and I, said let’s take it out. So I was undoing the splices, back in the day when we made physical splices, and he observed, you know, when I read the script of this project, when I read this scene, I knew that I should do this film. In other words, the very scene he connected with was the scene we were now taking out. So I asked myself, am I removing the heart of the movie? Or am I removing the umbilical cord of the movie? This scene was important to connect Fred with the film, but let’s say, once the nutrients have flowed into the whole film, not only can you now remove the umbilical cord, you have to remove it. We walk around with the belly button, but not with the umbilical cord. So there are scenes like that that deliver their message very particularly, but you should be suspicious of those very scenes and wonder if the film can ride the bike without these training wheels.
A lot of big American movies these days treat the viewers as if they are incapable of connecting the dots, explaining far too much in the process. Do you see that trend in American cinema today?
Yeah, I think so. I think that’s partly down to everything we’ve just been talking about. It’s also that, in quotes, American cinema is also global cinema, in that American cinema is more than Chinese cinema, more than Indian cinema, more than European cinema. It’s the one cinema that goes all the way around the world so it has to be understandable by the Chinese, Africans, South-Americans, Europeans. Inevitably, there is a coarsening of the message there because of trying to adapt to all these different sensibilities and different ways of thinking on the different continents of the globe. But very often it’s simply lazy filmmaking. It’s hard to make it the other way because of the uncertainty of it all, because it’s risky. I find it much more interesting to make things this way precisely because it does involve the audience in the film. And really the last creative act of any film is viewing by the audience. The audience are really the ones who are creating the film, it doesn’t really exist on the screen, it exists in a kind of penumbra between the audience and the screen, the interaction of those two things. And exactly what you’re saying allows that interaction to take place. Otherwise, the audience is just blasted by the things coming from the screen, and they just have to sit there and take it.
Since Return to Oz wasn’t a critical or commercial success, the film practically blocked your potential directorial path. But it must be nice to see what happened to the film in the decades that followed. How do you feel about the project now?
I’m very happy that it has this afterlife. The film was made in the early 1980s, really at the dawn of home cinema. VHS had just come in at that point, I think. So I made it not knowing everything that was going to happen in the next thirty years with DVDs, Blu-rays, streaming and all of these other things that allowed people to see the film in a variety of different circumstances. On the other hand, it has to be good enough for people to want to see it. So I’m very pleased to see it has this afterlife to it. Ironically, one of the things that happened is that the studio, Disney, had changed management at the time of the film’s release, and the new management really had no interest in Return to Oz at all. It was kind of abandoned, but that meant, ironically, that I had more control over it, because if they hadn’t abandoned it, they would have been far more aggressive with me, trying to bend it this way or that, kind of like what happened with Orson Welles on Touch of Evil. The finished film is, as much as any film, pretty much as I wanted to make it.
But you said you had some projects you wanted to make but were forced to abandon. You stated one of the movies you wanted to make was about Nikola Tesla. Why him?
I’m just fascinated with him as a character. I discovered him in the process of doing research for Return to Oz, because the inspiration for the Emerald City, this fantastic place, was the World’s Columbian Exposition in Chicago in 1893. And that was the fair that Tesla appeared at, and he was the one that electrified the fair. This was the first World’s Fair to be electrified, with Tesla’s alternating current, and he was at the fair giving demonstrations. So he was arguably the living wizard of that festival, and he was called The Wizard. So I think L. Frank Baum, the author, who lived in Chicago, went to the fair and saw Tesla, and Tesla was the wizard. But the more I learned about Tesla and his story, the more fascinated I became. I wanted to do a kind of Mozart-Salieri story on the tension between Tesla and Edison, who were two very, very different personalities, both competing in the same territory.
This story might have made for a great film.
You’ve worked with a lot of great filmmakers in your career. Which collaboration holds a special place in your heart?
It has to be Francis Coppola, because the first feature film I worked on was his film, The Rain People, in 1969. And I worked with him in 2009 on Tetro, the last film. Which is… how many? Four decades of working together? And on some remarkable films. There’s a gap between Apocalypse Now and Apocalypse Now Redux. But he and I share many sensibilities, and he gives a great deal of control to the people who work with him. Working with Francis, I was astonished how much control he gave. He was, like, just go and do something.
A lot of trust.
Yes, a lot of trust, but the surprising thing about trust is, if you’re given all of this trust, you repay it; you know how much he has given you, and so you are anxious to fulfill, and even exceed, the trust he has given you. And that works the opposite way with directors who are always controlling everything: did you do this, I want this, I want that… At a certain point you say, OK, let’s all do what you want. But this other way of working, the Francis way, is a wonderful way of working.
When we compare what editing used to be to editing today, with the development of technology and the trend that movies resemble music videos, what would you say about contemporary, modern editing?
There is a shift. On the other hand, if you look across the decades, the fastest editing ever in a motion picture was Man with a Movie Camera, Dziga Vertov’s film from 1929. Well, not the whole film, but there’s a section of the film that’s so rapidly cut that you just kind of had to stand back the way you look at fireworks. We, meaning in the larger sense, are investigating the borderline between effect and comprehensibility. And it’s clear that, to achieve a certain effect, this kind of fireworks in editing—you can do that, but you lose comprehensibility. Things are happening on the screen and maybe you’ll capture a thing here or there. For brief periods of time this is fine in any film. But as a general principle, it’s something to be wary of. Without question, music videos and commercials, and even videos you see in clothing stores on video screens, have all affected the way we see edited images, and they’ve worked their way into the theaters. And we’re looking at films on very different mediums, on iPhones or 20-meter screens in a movie palace, or on virtual reality goggles. So all of those are very different formats, and yet at the moment we have to edit as if they are all the same. This creates dissonances with the rate of cutting.
For example, the videos on screens in clothing stores. They are rapidly cut with lots of movement, so as to make you look at them. So you’re in a store that’s mostly static, people moving fairly slowly, and yet over here there’s a screen going like this (waves his hand frantically), forcing you to look at it. Taking that sensibility, though, and transposing it into a movie palace, where that’s the only thing we’re looking at and the screen is sixty feet wide, can create undesirable side effects; people get sick looking at it. In the long term, we’ll figure all this out, and it does change from decade to decade. Dialogue, for instance, in the 1930s and 1940s was said much quicker than it is today. The cutting was slower, but people talked much faster, quick, quick, quick. His Girl Friday, for instance. Films just don’t sound like that today. That’s the dialogue equivalent of quick cutting. You don’t see that today. The closest thing would probably be The Social Network, those scenes very quickly paced in terms of dialogue.
The experience of watching feature motion pictures in theaters is barely one hundred years old. Birth of a Nation came out in 1915, and it’s 2015. And I’ve been working in films for half that time. (laughs) We’re still learning how to do this, and adapting to different circumstances, so it’s natural for the pendulum to swing far in one direction, and then far in the opposite direction. Iñárritu’s film last year had no edits in it at all; technically there were concealed edits in there, but the experience of watching it was that there were no cuts whatsoever.
Francis Ford Coppola and editor/re-recording mixer Walter Murch (back) in the Philippines during the shoot of Apocalypse Now in March 1977. Photo by Richard Beggs. Courtesy of Walter Murch
Would you say that Apocalypse Now was the most troublesome project you ever worked on?
It was troubled, but in a good way. Meaning, it’s a very contentious subject matter, especially at that time. And we were investigating all the possible ways to tell this story. It was turbulent and maybe troublesome, but in a good, creative way. In any film you’re working on, there’s a great deal of uncertainty. Can we do this, is this going to work, do we have time to do this… Everyone is wondering how it is going to work. But it was certainly the longest postproduction of any film I worked on; I was on it for two years, and Richie Marks was on it even a year longer. It was a long period, and you have to gauge your own energy level and focus on something that lasts that long. That was another kind of invisible challenge for all of us involved.
You mean coming back to ordinary life?
Sure, that’s an occupational hazard of any film: it completely occupies a great deal of real estate in your brain as you’re working on it, and then suddenly it’s over and all of that real estate is available, empty, and now you have to re-program your brain to get back to normal. It’s the equivalent, I think, of a kind of seasickness. You know you’re finished objectively, but your body is still working on something, and there’s nothing to work on. The collision between those two things, what you objectively know and what you feel… it usually takes from two or three weeks to two or three months for these things to come back into alignment.
How long a pause did you have to take after Apocalypse Now?
After that, I started writing a screenplay, one of the projects I was going to direct. So… six months. But at the end of those six months I started writing, which is different from making films, a different rhythm. So after Apocalypse, the next thing I did was Return to Oz. We began preproduction in 1983, so it was almost four years since Apocalypse. So, first I wrote an unproduced screenplay, then Return to Oz.
What was the screenplay about?
It was about an archaeologist in Egypt, a kind of a ghost story, but more along the lines of what you were talking about earlier, one that was ambiguous. There were not a lot of special effects in it, it was about a personality change. Was that down to an accident that happened, or did something spiritual happen to this person? But it ended up in a drawer somewhere.
Mr. Murch, thanks for your time. It was a pleasure.
Sound montage associate Mark Berger, left, Francis Ford Coppola and sound montage/re-recording mixer Walter Murch mixing The Godfather II in October 1974. Photo courtesy of Walter Murch
The art of ADR is much more than having a collection of microphones and knowing how to use them, although Doc’s mic cabinet is pretty impressive. It’s also more than having the latest and greatest hardware and software, but rest assured, Doc has all of the most modern bells and whistles. Perhaps even more important, and what some might argue qualifies ADR as an art, is the sensitivity to the client.
ABOUT DOC KANE:
What has three letters, many aliases and is of major significance to the sound community? You guessed it: ADR aka Automated Dialog Replacement aka Additional Dialog Recording aka Dubbing aka Looping. All of these monikers are understood as the process of re-recording dialog that cannot be salvaged from a production. To make one thing clear, there is nothing automated about it. ADR is an art. And here to tell us more about the art is an artist whose name also has only three letters and many aliases but nonetheless has made a significant impact on the sound community.
His name is Doc Kane but most just call him Doc. He has over 300 projects under his belt and a slew of awards and nominations, including four Academy Award nominations.
Tom Hanks talks about the fact that the voice of Woody for toys and games is sometimes actually the voice of his brother, Jim. He tells a story about what it is like working on Stage B when he is recording the voice of Woody for the Toy Story films.
In the last of three programmes in which composer Neil Brand celebrates the art of cinema music, Neil explores how changing technology has taken soundtracks in bold new directions and even altered our very idea of how a film should sound.
Neil tells the story of how the 1956 science fiction film Forbidden Planet ended up with a groundbreaking electronic score that blurred the line between music and sound effects, and explains why Alfred Hitchcock’s The Birds has one of the most effective soundtracks of any of his films – despite having no music. He shows how electronic music crossed over from pop into cinema with Midnight Express and Chariots of Fire, while films like Apocalypse Now pioneered the concept of sound design – that sound effects could be used for storytelling and emotional impact.
Neil tracks down some of the key composers behind these innovations to talk about their work, such as Vangelis (Chariots of Fire, Blade Runner), Carter Burwell (Twilight, No Country for Old Men) and Clint Mansell (Requiem for a Dream, Moon).
Magician: The Astonishing Life and Work of Orson Welles looks at the remarkable genius of Orson Welles on the eve of his centenary – the enigma of his career as a Hollywood star, a Hollywood director (for some a Hollywood failure), and a crucially important independent filmmaker. From Oscar-winner Chuck Workman.
With: Simon Callow, Christopher Welles Feder, Jane Hill Sykes, Norman Lloyd, Ruth Ford, Julie Taymor, Peter Bogdanovich, James Naremore, Steven Spielberg, Henry Jaglom, Elvis Mitchell, Beatrice Welles-Smith, Walter Murch, Costa-Gavras, Oja Kodar, Joseph McBride, Wolfgang Puck, Jonathan Rosenbaum, Michael Dawson, Paul Mazursky, Frank Marshall
Disney Research demonstrated Automatic Editing of Footage from Multiple Social Cameras at SIGGRAPH.
Video cameras that people wear to record daily activities are creating a novel form of creative and informative media. But this footage also poses a challenge: how to expeditiously edit hours of raw video into something watchable. One solution, according to Disney researchers, is to automate the editing process by leveraging the first-person viewpoints of multiple cameras to find the areas of greatest interest in the scene.
The method they developed can automatically combine footage of a single event shot by several such “social cameras” into a coherent, condensed video. The algorithm selects footage based both on its understanding of the most interesting content in the scene and on established rules of cinematography.
“The resulting videos might not have the same narrative or technical complexity that a human editor could achieve, but they capture the essential action and, in our experiments, were often similar in spirit to those produced by professionals,” said Ariel Shamir, an associate professor of computer science at the Interdisciplinary Center, Herzliya, Israel, and a member of the Disney Research Pittsburgh team.
Whether attached to clothing, embedded in eyeglasses or held in hand, social cameras capture a view of daily life that is highly personal but also frequently rough and shaky. As more people begin using these cameras, however, videos from multiple points of view will be available of parties, sporting events, recreational activities, performances and other encounters.
“Though each individual has a different view of the event, everyone is typically looking at, and therefore recording, the same activity – the most interesting activity,” said Yaser Sheikh, an associate research professor of robotics at Carnegie Mellon University. “By determining the orientation of each camera, we can calculate the gaze concurrence, or 3D joint attention, of the group. Our automated editing method uses this as a signal indicating what action is most significant at any given time.”
In a basketball game, for instance, players spend much of their time with their eyes on the ball. So if each player is wearing a head-mounted social camera, editing based on the gaze concurrence of the players will tend to follow the ball as well, including long passes and shots toward the basket.
The algorithm chooses which camera view to use based on which has the best quality view of the action, but also on standard cinematographic guidelines. These include the 180-degree rule – shooting the subject from the same side, so as not to confuse the viewer with the abrupt reversals of action that occur when switching views between opposite sides.
Avoiding jump cuts between cameras with similar views of the action and avoiding very short-duration shots are among the other rules the algorithm obeys to produce an aesthetically pleasing video.
The computation necessary to achieve these results can take several hours. By contrast, professional editors using the same raw camera feeds took an average of more than 20 hours to create a few minutes of video.
The algorithm also can be used to assist professional editors tasked with editing large amounts of footage.
Other methods available for automatically or semi-automatically combining footage from multiple cameras appear limited to choosing the most stable or best lit views and periodically switching between them, the researchers observed. Such methods can fail to follow the action and, because they do not know the spatial relationship of the cameras, cannot take into consideration cinematographic guidelines such as the 180-degree rule and jump cuts.
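As a loose illustration of the selection step (not the authors’ actual formulation – the scoring, penalty value and minimum shot length here are invented for the sketch), choosing a camera per time step under a cut penalty and a minimum-shot-length rule can be posed as a small dynamic program over (camera, shot-length) states:

```python
def select_shots(quality, cut_penalty=0.3, min_shot_len=2):
    """quality[t][c]: view-quality score of camera c at time t.

    Returns one camera index per time step, maximizing total quality
    minus a penalty per cut, while forbidding shots shorter than
    min_shot_len frames.
    """
    T, C = len(quality), len(quality[0])
    # DP state: (camera, run), run = frames in the current shot,
    # capped at min_shot_len since longer runs behave identically.
    best = {(c, 1): quality[0][c] for c in range(C)}
    back = {}
    for t in range(1, T):
        nxt = {}
        for (c, run), score in best.items():
            # Option 1: hold the current camera.
            s = (c, min(run + 1, min_shot_len))
            v = score + quality[t][c]
            if v > nxt.get(s, float('-inf')):
                nxt[s] = v
                back[(t, s)] = (c, run)
            # Option 2: cut away, only if the shot is long enough.
            if run >= min_shot_len:
                for c2 in range(C):
                    if c2 == c:
                        continue
                    s2 = (c2, 1)
                    v2 = score + quality[t][c2] - cut_penalty
                    if v2 > nxt.get(s2, float('-inf')):
                        nxt[s2] = v2
                        back[(t, s2)] = (c, run)
        best = nxt
    # Trace the best final state back into a per-frame camera track.
    state = max(best, key=best.get)
    track = [state[0]]
    for t in range(T - 1, 0, -1):
        state = back[(t, state)]
        track.append(state[0])
    return track[::-1]
```

The 180-degree rule and jump-cut avoidance would enter this scheme as additional penalties on cuts between specific camera pairs, which requires the cameras’ spatial relationship; that is omitted here for brevity.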
Automatic Editing of Footage from Multiple Social Cameras
Arik Shamir (DR Boston), Ido Arev (Efi Arazi School of Computer Science), Hyun Soo Park (CMU), Yaser Sheikh (DR Pittsburgh/CMU), Jessica Hodgins (DR Pittsburgh)
ACM Conference on Computer Graphics & Interactive Techniques (SIGGRAPH) 2014 – August 10-14, 2014
Paper [PDF, 25MB]
A very good documentary about the EditDroid. When I worked at Disney Imagineering, we were using laser disc players in a lot of places, including EPCOT, starting in the 1980s. We were also innovators and early adopters of nonlinear editing and video-to-film matchback.
From the film’s website:
The EditDroid was one of the first nonlinear electronic editing systems and used several laser disc players loaded with the raw footage of a film. The simple computer interface was unique for its time. After a short period of success the EditDroid disappeared from the film scene and George Lucas sold the machine’s patents to a small company called Avid.
I highly recommend the book Droidmaker: George Lucas And the Digital Revolution. It is also available as an iBook and on Kindle.
This book ventures into territory never explored, as Rubin, a former member of the Lucasfilm Computer Division, reconstructs the events in Hollywood, in Silicon Valley, and at Lucas’ private realm in Marin County, California, to track the genesis of modern media. With unprecedented access to images and key participants from Lucasfilm, Pixar and Zoetrope, from George Lucas and the executives who ran his company to the small team of scientists who made the technological leaps, Rubin weaves a tale of friendships, a love of movies, and the incessant forward movement of technology. This is a compelling story that takes the reader into an era of technological innovation almost completely unknown.
“Ray Dolby was a brilliant scientist whose inventions are in use every day in recording studios, sound editing suites, mix stages and cinemas worldwide,” said MPSE president Frank Morrone. “He was a giant in our industry and we take great pride in saluting his many contributions to our craft.”
Dolby, who passed away last September, is the founder of Dolby Laboratories. He is credited with developing a noise reduction system which delivered sound recordings with greater clarity and fidelity than was previously possible. The Academy Award winner also developed the first commercially viable surround-sound system, which led to the widespread use of 5.1- and 7.1-channel sound systems in theaters and homes.
In 2012, the home of the Academy Awards was renamed the Dolby Theatre, and the grand ballroom at Hollywood & Highland is now known as the Ray Dolby Ballroom.