Happy Birthday to George Lucas! As we know, George was a big proponent of the use of digital technology in cinema. When I worked at Sony in the 1990s, we were on the cutting edge of using digital cameras for cinematography. Here is a video from Sony that highlights the development of the Sony cameras used in Star Wars.
Here is another video from ILM about all the areas that George changed with digital technology for editing and VFX. Thank you Mr. Lucas!
A very good video on using virtual cameras in animation. Here Pixar uses computer animation to duplicate complex photographic tools, such as split diopters, just as you would in “real” cinematography, making the film look more authentic.
While I was at CineGear Expo, I met Michael Frediani at the SOC booth, thanked him for his research into Jerry Lewis and told him I would post his article on video assist. I have also included an article from the 695 Quarterly about Jim Songer and his development of through-the-lens video assist. There is a lot of debate about who “invented” video assist. As with most technical innovations, there is no single inventor, but many improvements from each contributor. Here is the earlier post about Jerry and video assist.
Jerry Lewis was an influence on Francis Ford Coppola.
Francis Ford Coppola later developed his own “electronic cinema” previsualization called Image and Sound Control.
As well as being an entertainer, “Jerry Lewis was a major innovator in motion pictures,” stated director Francis Ford Coppola. “His invention of putting a video camera next to the motion picture camera so he could play it back and direct himself, has been used for decades by every director in the movie industry. I watched him on the set of The Ladies Man in 1961 and was amazed by his groundbreaking innovation, the Video Assist.”
Video Village is a standard feature on the modern movie set. Producers, writers, clients and others can view the action clustered around a monitor far enough away from the set to stay out of trouble. Their segregation in the video ghetto allows camera people and others to go about their tasks without the distraction of people jockeying for position at the viewfinder. It also helps makeup and wardrobe personnel see how their work appears on camera, and it has become an essential tool for the director and continuity person. Even the sound crew benefits by having extension monitors to see the frame and position the boom microphone. All this is made possible by a video assist system perfected by Jimmie Songer, a Local 695 technician.

The advantages of using a video camera as an aid to directing movies were apparent from the very beginning. Several directors began to set up TV cameras adjacent to the film camera so they could see an approximate frame. This became a common practice, particularly on commercials, where the placement of the product is crucially important. To match the view and perspective, assistants would carefully adjust the aim and image size to closely approximate the view of the film camera.
Of course, that isn’t really a video assist system. The image is useful for the simplest shots but not much help when the camera moves or the lens is adjusted. Every setup change or lens adjustment necessitates a recalibration of video camera position and exposure settings. To be a fully functional system, both the video and film cameras would have to view the scene through the same lens to avoid parallax errors and exposure sensitivities would have to track together. This presents a series of technical challenges.
It was a cowboy from East Texas with little formal education who took on the challenge and worked out all the engineering obstacles. Jimmie Songer grew up on a ranch in Burleson, south of Fort Worth, with a keen interest in how radio and television worked. He and his friend, Don Zuccaro, would purchase crystal radio kits, assemble them and string the antenna wire along his mother’s clothesline.
As a teenager, he took a road trip that would set the course of his life. He and his friends traveled north as far as Bloomington, Indiana, when funds began to run out. Looking for a job to replenish assets, he applied to the RCA plant on Rogers Street. Ordinarily, his lack of formal training would have been an impediment, but RCA was just then experimenting with designs for color sets and there was no established technology to learn. By diagramming from memory the circuit design of a popular RCA model, he demonstrated familiarity with the major components and was hired on the spot to be a runner for the engineers developing the new color system.
His duties at RCA consisted largely of gathering components requested by the engineers and distributing them. Along the way, he asked questions about the function of each element and how it fit into the overall design. He stayed about a year, not long enough to see the model CTC4 they were developing go on sale. That didn’t happen until a couple of years later in 1955. But, when he did move back to Texas, he had a pretty good understanding of how video, and color video in particular, worked.
Graduating from crystal radio sets, he and his friend, Don Zuccaro, made a mail-order purchase of plans for a black & white television. Components were not readily available at that time but Jimmie and Don were ingenious and purchased a war surplus radar set with A&B scopes and cannibalized it for parts. The task of hand-winding the tuning coil was simplified because Fort Worth had only one TV station so there was no need to tune anything other than Channel 5.
With skills honed from building his own set and working at the RCA plant in Indiana, Jimmie Songer quickly found work with appliance shops in the Fort Worth area that were beginning to sell television sets but had no one to set them up, connect antennas and service them when needed. This led to an offer, in 1953, to help set up KMID, Channel 2, in the Midland-Odessa area. After a few years with KMID, he worked awhile in the Odessa area and then returned to Fort Worth, but he stayed only a year before setting out for Los Angeles in April 1963.
In Los Angeles, he worked at first for a TV repair shop in Burbank while he tinkered with his own experimental projects. Hearing that Dr. Richard Goldberg, the chief scientist at Technicolor, was looking for people with experience with color, he sought him out and secured a job calibrating the color printers. Dr. Goldberg was also developing a two-perforation pull-down camera for widescreen use. Songer became fascinated by the possibility of using that design at 48 fps to make alternate images, one atop the other, which might be used for 3D and built some experimental rigs to test the idea.
This work with Dr. Goldberg in the early ’60s brought him to the attention of Gordon Sawyer at Samuel Goldwyn Studios. Sawyer wanted him to help with an ongoing project for Stan Freberg involving simultaneous video and film recording. Freberg was using side-by-side cameras to create video records of film commercials. The side-by-side positioning produced parallax errors, but his commercials were mostly static. Generally, the results were good enough for timing and performance checks. But issues of accurately tracking motion would arise whenever the camera did move, and Stan Freberg wanted a better system.
Under general supervision from Gordon Sawyer, the team first addressed the issue by adjusting the position of the video camera. They attached a small Panasonic camera to the mount for an Obie light. This put the video lens exactly in line with the film camera lens and only a couple of inches above it. Left-right parallax was effectively eliminated and the vertical alignment could be adjusted to match the film camera with only minimal keystone effect. By affixing a mirror just above the lens mount at a 45-degree angle and mounting the video camera vertically to shoot into the mirror, they reduced vertical parallax to almost nothing. Jimmie Songer addressed the keystone problem by devising a circuit that slightly adjusted the horizontal scan, applying an opposite keystone effect to neutralize the optical effect that was a consequence of slightly tilting the video camera to match the film camera image. Most of the time, this system worked well but there were still limitations. The video system needed to be recalibrated with every lens change. Even with careful adjustment, use of a separate lens for the video meant that depth of field would be different so the video image would only approximate the film image. Blake Edwards knew Gordon Sawyer and approached the team to design a system suitable for movies with moving cameras and frequent lens changes.
The limitations could only be resolved if the video camera used the very same lens used by the film camera. Accomplishing that would require exact positioning of the video lens and adjusting sensitivity of the system both to obtain sufficient light for exposure and to track with the film exposure. Jimmie Songer set about developing a system that could be built into a Panavision Silent Reflex camera (PSR) that used a pellicle mirror to reflect the image to the viewfinder. They left the image path from the lens to the film completely untouched but introduced a second pellicle mirror to reflect the image from the ground glass to a video camera they built into the camera door. This one design change eliminated many of the limitations of previous systems in one stroke. Since the video used the film camera lens and picked up the exact image seen by the film and the camera operator, issues of parallax and matching depth of field were completely eliminated. There was no need to recalibrate the system with every lens change and the video camera was configured to use the same battery supply as the camera. The introduction of a second pellicle mirror did flip the image but Songer corrected this easily by reversing the wires on the deflection coil. But the issue of having sufficient light for the video image still remained.
In one way, a pellicle reflex system is ideal for video use. Unlike a mirror shutter, the pellicle system delivers an uninterrupted image to the viewfinder so there is no need to coordinate the 30-frame video system with a 24-frame film camera. While there would be more frames in a single second of video, the running times would match and that was all that was important. Furthermore, the video image would be free of the flicker seen in the viewfinder of a mirror shutter camera. However, the pellicle mirror used in the reflex path deflected only about one-third of the light to the viewfinder. That was no problem when filming outside in daylight but there was insufficient light when working interiors.
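The running-time argument above can be sketched in a few lines. This is my own illustration, not anything from Songer's notes, though the 24 fps and 30 fps figures are the ones given in the text: the two cameras capture different numbers of frames, but the elapsed time of a take is identical, so no frame synchronization is needed.

```python
# Sketch: a 30 fps video tap and a 24 fps film camera covering the same take.
# Frame COUNTS differ, but running time is identical.

FILM_FPS = 24   # film camera frame rate
VIDEO_FPS = 30  # NTSC-era video frame rate

def frames_for(seconds: float, fps: int) -> int:
    """Number of frames captured in a take of the given length."""
    return round(seconds * fps)

take_seconds = 10
film_frames = frames_for(take_seconds, FILM_FPS)    # more film frames pass per reel...
video_frames = frames_for(take_seconds, VIDEO_FPS)  # ...than video, per second? No: video has more.

# Different frame totals, same duration -- which is all that mattered:
assert film_frames / FILM_FPS == video_frames / VIDEO_FPS == take_seconds
```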
Jimmie Songer needed to make three refinements to the system to address the exposure issue. First, he replaced the vidicon tube that was normally fitted to the camera with a newly available saticon tube that was more sensitive and also provided 1,600 lines of resolution. That helped but wasn’t enough. He then adjusted the optics so that the image, rather than being spread over the full sensitive area of the tube, was delivered only to the center portion. By concentrating the image, he obtained more exposure, and adjusting the horizontal and vertical gain allowed him to spread the smaller image out to fill the monitor. But there are limits to how much can be gained by this approach. Even with a high-resolution saticon tube, the image will begin to degrade if magnified too far. There was still not enough light for an exposure, but the video system had been pushed to its limits, so Songer turned his attention to the film camera.
Recognizing that the ground glass itself absorbed a considerable amount of light, Songer contacted Panavision and asked them to fabricate a replacement imaging glass using fiber optic material. Although the potential of using optical fibers for light transmission had been recognized since the 19th century, the availability of sheets of tightly bundled fiber suitable for optics was a recent development in the 1960s. The fiber optic ground “glass” was the trick that made the system work, allowing the video camera to function with the light diverted to the viewfinder.
Jimmie Songer and his assistant used the system, first called “instant replay” but renamed “video assist” to avoid confusion with sports replay systems, on The Party in 1968 and then Darling Lili in 1970. It worked flawlessly, delivering the exact image of the main camera so the director, Blake Edwards, could follow the action as it happened. It never held up production; to the contrary, Edwards said that it streamlined production because certain knowledge of how the take looked freed him from making protection takes.
After Darling Lili, the key figures behind the project formed a company, Video West, to further develop the system. They met with representatives of the ASC to draw up a series of specifications for video assist systems. Don Howard was brought in to interface the camera with the playback system and operate it in the field. Harry Flagle, the inventor of Quad-Split viewing technology and one of the Ampex engineers who worked on the development of the Model VR-660 portable two-inch recorder, joined the team soon after.
They next used the system on Soldier Blue, directed by Ralph Nelson, and then Wild Rovers, again with Blake Edwards. It proved so popular with producers that Songer and Don Howard, his assistant who was primarily responsible for operating and cuing the video recorder, scheduled projects months in advance and went from film to film. The work was so tightly booked that they sometimes had to ship the camera directly from one project to the next without a return to the shop.
Jimmie Songer joined Local 695, sponsored by Gordon Sawyer, shortly after Darling Lili and continued as a member until his membership was transferred to Local 776 in 1997. In the course of his career, he obtained seventeen US patents for a variety of innovations in high-definition TV and 3D video imaging.
In 2002, he received a Technical Achievement Award from the Academy for his work developing video assist. He lives today on a ranch near Fort Worth but continues to refine the video engineering work that has been his life.
A quote, attributed to Tacitus, claims that success has many fathers while defeat is an orphan. It’s just so with the invention of video assist, which is claimed by several people. Jerry Lewis is often cited as the inventor, and he certainly incorporated simultaneous video recording in his filming practices very early. He began development work in 1956 and first used a video record and playback system during the filming of The Bellboy in 1960. He used the system to view and evaluate his own performance immediately after each take. But the system he used on The Bellboy was the simplest version: a video camera was lashed just above the main lens and adjusted to approximately match the view of the film camera lens with each setup. Later, Jerry Lewis also worked to develop a system that would use a pellicle mirror to view the image through the primary lens.
The assertion that Jerry Lewis “invented” video assist is overstated. The original patent for a video assist system dates to 1947 and subsequent patents in 1954 and 1955 added the refinements of merging optical systems to eliminate parallax and adding a second beamsplitter to permit simultaneous use of video and film viewfinders. The integrated video systems that came into general use in films were the work of many individuals each building on the accomplishments of predecessors. Jimmie Songer’s contributions were many and essential as recognized in 2002 by the Academy of Motion Picture Arts and Sciences.
Glossary for highlighted words
Deflection coil – In a CRT (cathode ray tube), the beam of electrons is aimed by magnetic fields generated by coils of wire surrounding the tube. Adjusting the electrical energy sent to different coils directs the electron stream.
Obie light – A diffuse light mounted very near the camera lens, typically just above the matte box, to provide soft fill on faces in close-ups. Lucien Ballard, ASC developed the light to photograph Merle Oberon after her face was scarred in an auto accident.
Pellicle mirror – A semi-transparent mirror used in optical devices. A pellicle reflects a certain percentage of light and allows the remainder to pass through. In the Panavision PSR camera, a pellicle mirror deflected approximately 30% of light to the viewfinder and passed about 70% to the film plane.
Saticon tube – A saticon tube is a refinement of the vidicon tube design that adds particular chemicals to the photosensitive surface to stabilize the signal.
Vidicon tube – A vidicon is one of the early image capture devices made for television cameras. An image focused on a photoconductive surface produces a charge-density pattern that may be scanned and read by an electron beam.
Panavision showed the new DXL 8K camera. The footage shown was very nice!
The best thing was seeing Vittorio Storaro ASC.
He talked about working with Woody Allen on his new film for Amazon Studios, Cafe Society.
This is Woody’s first digital feature, and Vittorio used the Sony F65:
“I had seen that the Sony F65 was capable of recording beautiful images in 4K and 16 bit-colour depth in 1:2, which is my favorite composition,” Storaro said. “So when Woody called me this year asking me to be the cinematographer of his new film with the working title ‘WASP 2015,’ my decision was already made. I convinced him to record the film in digital, so we can begin our journey together in the digital world. It’s time now for the Sony F65!”
He spoke of the Technicolor IB process, light, shadows and color and said that digital makes it too easy.
He stated that a trend that has emerged with the use of digital cameras is that “people want to work faster or show that they can use less light, but they don’t look for the proper light the scene needs. That isn’t cinematography, that’s recording an image. … I was never happy in any set to just see available light,” said Storaro, who has won Oscars for Apocalypse Now, Reds and The Last Emperor. “Even in very important films that take Academy Awards, you can record an image without location lighting. But that’s not necessarily the right light for the character. We have to always move a story forward, not step back.”
He elaborated on his work with Coppola and noted that he hasn’t used anamorphic lenses in many years. Sorry, Mr. Tarantino.
The best and most important part, though, was when he got even more philosophical. He mentioned Mozart, the Lumière brothers, Newton, Caravaggio, architecture, and Plato and the Cave. From his website:
Ever since Plato’s “Myth of the Cave” we are used to seeing Images in a specific space. In Plato’s myth, prisoners are kept in a cave facing an interior wall, while behind them, at the entrance to the cave, there is a lighted fire, with some people with statues and flags passing in front of the fire. At the same time, their shadows are projected onto the interior wall of the cave by the fire’s light. The prisoners are looking at the moving shadows in that specific area of the wall. They are watching images as a simulation, a “simulacre” of reality, not reality itself. The myth of Plato is a metaphor for the Cinema.
He believes that film is a collaboration, as opposed to the auteur theory, and emphasized the importance of story.
“You need to find the balance of technology and art,” continued Storaro, who was inspiring and thought-provoking in his speech, also raising an argument against the use of the term ‘director of photography’ to define the role of the cinematographer. “That’s a major mistake. There cannot be two directors. … Let’s respect the director,” he asserted, saying that ‘cinematographer’ is the appropriate word, and adding that it’s not interchangeable with photographer. “Cinematography is motion, we need a journey and to arrive at another point. We don’t create a beautiful frame, but a beautiful film. That’s why I say ‘writing with light.'”
Since we are now going through a film usage renaissance, I thought a good historical overview of color film processes was in order. Technicolor and Eastmancolor are just two of the many mentioned here (courtesy BFI). Most are now defunct, but of particular interest is tinting, used in The Adventures of Prince Achmed by Lotte Reiniger.
Do you know your Technicolor from your Kodachrome? The science of colour in film, which will be explored in a second annual event at BFI Southbank in March, has brought us many innovative systems over the past 120 years. Here’s an expert’s guide to 10 of the best.
This is an amended version of an article first published on 14 January 2015.
In early 2015, the BFI hosted Colour in Film, an enthusiastically received symposium held by the Colour Group, an interdisciplinary society bringing together experts in the field of colour. The event highlighted issues of colour film restoration, and how and where these related to the discipline of colour science, furthering the interaction between these two vibrant but thus far largely separated communities.
We are happy to announce that we can now embark on our next step to extend Colour in Film into a regular scholarly event to foster and grow the interaction between the colour film restoration and colour science circles. The international conference Colour in Film will begin on 2 March 2016, 14.00 at BFI Southbank with specially selected colour film screenings contextualised by expert introductions as well as an opening reception. It then continues with all-day presentations on 3 March, 9.00-17.00, at Friends House near London Euston Station.
Many of the colour systems featured in the conference appear in the below list, which we first published to coincide with the first Colour in Film session in 2015. Even within the year since this first conference, trends in film exhibition and restoration have shifted, so we have updated and amended the list below accordingly. Colour in Film is a joint initiative of the Colour Group, the British Film Institute and HTW – University of Applied Sciences, Berlin.
What IS colour? Colour is a sensation, a product of the human mind, as we shall learn in this year’s keynote address by Prof. Andrew Stockman. Yet it can be measured – or rather, derived from measurement – as we learned at the 2015 event and shall be reminded this year by Prof. Mike Pointer. Since the beginning of film and photography, attempts have been made to capture and reproduce colour. And some of these systems are especially beautiful where they do not reach their goal, ‘natural colour’, but rather achieve a heightened sensation of colour and emotion that we still struggle to understand and reproduce in restoration – the main, and vastly interdisciplinary focus of our event.
Such colour systems include:
10. Eastmancolor

So-called LADs (Laboratory Aim Density charts) with ‘China Girls’ allow assessment of select defined colours as well as of skin tones in these Eastmancolor clippings Credit: U. Ruedel
Often maligned, perhaps in comparison to the ‘Glorious’ Technicolor it replaced, or for the fading of some materials, Eastmancolor and similar colour negative processes are too easily dismissed, ignoring their revolutionary accomplishments. In the 1950s, they overcame the registration and fringing issues of Technicolor and, in this manner, made colour film ready for the widescreen revolution.
They developed further the chemistry pioneered by Kodachrome reversal and Agfacolor negative stocks by introducing the colour ‘mask’ that is responsible for the typical orange of colour negative films, making printing and copying in colour much more feasible. Though under severe threat from digital projection nowadays, modern analogue colour film still remains the top standard for any large-screen colour moving image experience, especially on the large 70mm film stock, just so gloriously revived by Quentin Tarantino for his The Hateful Eight (2015).
9. Dufaycolor

A Colour Box (1935), Dufaycolor reversal colour positive Credit: BFI National Archive. These and other colour treasures were documented by Professor Barbara Flueckiger in a joint project with the BFI for her Timeline of Historical Film Colors
With its mosaic of red, green and blue colour areas known as a réseau, Dufaycolor was an additive system, that is, one creating colour in the same manner as, say, the modern red, green and blue pixels of a computer monitor. It’s even tempting to see its mosaic colour pattern, which blends at sufficient viewing distance into the intended colours, as a precursor of the modern colour pixel.
This complex process emerged in 1933, though was soon to become outdated due to more effective subtractive systems such as Gasparcolor, Technicolor, Kodachrome and, eventually, colour negative film. But this was not before making some of the most beautiful British colour films possible, including the famous abstract films by animator Len Lye, such as 1935’s A Colour Box, seen in the above frame grab. Significant progress in the restoration of Dufaycolor has been made within Prof. Barbara Flueckiger’s DIASTOR project, and will be showcased in the 2016 Colour in Film screenings.
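The additive principle can be demonstrated in a few lines of code. This is my own idealised illustration, not Dufay's actual réseau geometry: averaging neighbouring pure-colour cells, which is roughly what the eye does at sufficient viewing distance, recovers the intended mixed colour with no mixed dyes involved.

```python
# Sketch: an additive mosaic approximated as pure red, green and blue cells.
# Averaging neighbouring cells (as the eye blends them at a distance)
# yields the intended colour.

def mix_additive(cells):
    """Average a list of (r, g, b) cells, as the eye blends a mosaic."""
    n = len(cells)
    return tuple(sum(cell[i] for cell in cells) / n for i in range(3))

red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Equal red, green and blue cells blend toward neutral grey:
assert mix_additive([red, green, blue]) == (1/3, 1/3, 1/3)

# Red and green cells blend toward yellow, with no yellow dye anywhere:
assert mix_additive([red, green]) == (0.5, 0.5, 0.0)
```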
8. Colour separations

Principle of colour separations, starting from an Eastmancolor LAD (digital simulation) Credit: U. Ruedel
It bears repeating that the most stable colour records are those separated into black-and-white film strips, representing the three primary colours, each as a black-and-white master positive or negative. Technicolor produced its negatives on three black-and-white film strips or (for animation) as three successive images on one strip, in each case separately recording and rendering three basic colours. They still hold up breathtakingly, copied onto modern film or remastered to the latest DCP or high-def formats. Gasparcolor used the same approach for its negatives, and to this day, copying colour materials onto three separate silver gelatin, black-and-white images remains the most durable colour record for moving images ever invented.
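How three black-and-white records recombine into colour can be sketched briefly. This is a hypothetical, minimal illustration using 0.0–1.0 pixel values, not any archive's actual remastering workflow: each separation is a greyscale frame holding one primary's record, and stacking them per pixel restores the full-colour image.

```python
# Sketch: merge three greyscale separation frames (nested lists of floats,
# 0.0 = no exposure, 1.0 = full) into one RGB frame of (r, g, b) pixels.

def recombine(red_sep, green_sep, blue_sep):
    """Stack three greyscale separations into a single colour frame."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red_sep, green_sep, blue_sep)
    ]

# A 1x2 frame: first pixel pure red, second pixel white.
red_sep   = [[1.0, 1.0]]
green_sep = [[0.0, 1.0]]
blue_sep  = [[0.0, 1.0]]

frame = recombine(red_sep, green_sep, blue_sep)
assert frame == [[(1.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
```

The durability argument in the text follows from this structure: each strip is plain silver gelatin black-and-white, so the colour information survives as long as the silver image does.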
7. Two-color Technicolor and Two-color Kodachrome
Much like Technicolor’s revolutionary colour system in the 1920s, the earliest Kodachrome (related to the later, better-known reversal process by name only) used only two primary colours, red and green-blue, with its respective photographic emulsions on two sides of a single film strip. Both systems are somewhat similar in their photographic-chemical approaches, if not in detail, and their limited colour rendition might not accurately represent the scene photographed, but could beautifully render skin tones in a glowing, sometimes nearly marble-like beauty.
Technicolor’s implementation (in two different versions known as Technicolor No. II and No. III) enjoyed quite some success in 1920s cinema, more often than not for select sequences highlighted by their ‘natural’ colour, like the religious scenes in Ben-Hur (1925). Famous features entirely shot in the process include Douglas Fairbanks’ The Black Pirate (1926), preserved at the BFI in its proper colours resembling period book illustrations, and Michael Curtiz’s pioneering early colour horror films, Doctor X (1932) and Mystery of the Wax Museum (1933). For the definitive history of two-color Technicolor, look no further than James Layton’s and David Pierce’s magnum opus published by the George Eastman Museum.
6. Gasparcolor

Ship of the Ether (1934) Credit: BFI National Archive. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
Admittedly short-lived after its 1933 introduction, but vibrant and pioneering, Gasparcolor was a so-called dye destruction system, requiring extensive exposure. Animation scholar William Moritz praised the system for its “perfect hues for animation,” and it was in this way that it was used by artists such as Oskar Fischinger, Len Lye and George Pal. For a while it was the only technically serious competitor to Technicolor. For the most recent restoration study, consult HTW graduate Andrea Krämer’s master thesis (in German), available through the Timeline of Historical Film Colors.
5. Kodachrome

Introduced in 1935, the Kodachrome reversal process was the first successful colour system employing what we know today as colour development, using so-called couplers that create the dyes within a film upon photographic development. Available as a reversal material only, it entered the amateur movie market while the cinema market was only slowly moving towards Technicolor.
Most Kodachrome films are vibrant (even a national park, Utah’s Kodachrome Basin, has been named after the system) and quite stable, and thus home or non-theatrical movies shot in the format can provide rare historic colour images such as those of the Second World War or the lives of ordinary people.
4. Agfacolor

Developed in Germany in the 1930s, but tainted by its emergence in the state-controlled cinema of the Nazi era, Agfacolor was the first successful colour negative material, and as such, a major innovation and a technical predecessor of the American Eastman colour negative introduced in the 1950s.
In the post-war era, the material’s typical pastel hues offered a beautiful alternative to America’s more candy-coloured Technicolor and Eastmancolor materials for the national cinemas of, say, Scandinavia or Germany, and its equivalent East German successor Orwocolor (renamed in 1964) has similarly been re-appraised recently, such as in the 2013 Emulsion Matters series at the Il Cinema Ritrovato festival in Bologna. One of the most important – and problematic – Agfacolor films is Veit Harlan’s Opfergang (1944). Extracts from the ongoing restoration of the film by Murnaustiftung will be shown publicly for the first time by Prof. Flueckiger during the 2 March Colour in Film screenings.
Opfergang (1944). Excerpts of the new restoration of this Veit Harlan film will be premiered during the 2016 conference, Colour in Film Credit: By courtesy of Murnaustiftung, Wiesbaden. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
Die verwitterte Melodie (1943). Orwocolor safety print of this Agfacolor film Credit: U. Ruedel
3. Tinting and toning
Even in the earliest silent movie era, the majority of films were in colour, though colours were subsequently applied to the black-and-white image rather than naturally photographed. Tinting would literally bathe the entire image in colour dyes, resulting in subtle or saturated, vibrant, incredibly transparent colours that often are still impossible to fully match with today’s photographic or digital imaging methods.
Colour in Film offers the unique opportunity to see extracts of German silent films chemically tinted following the historic processes. These will be screened on 2 March in comparison with modern analogue and digital restorations, presented by Anke Wilkening (Murnaustiftung) and Prof. Ulrich Ruedel (HTW Berlin). We will also be treated to the most recent restoration techniques to best digitally approximate the look of tinting, in Der Märchenwald, ein Schattenspiel (1919), presented by Prof. Flueckiger.
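As a rough idea of what a digital approximation of tinting involves, here is a minimal first-order sketch. This is my own assumed model, not the restoration teams' actual method, and the amber dye colour is an arbitrary choice: since tinting bathes the whole image in one dye, it can be approximated by multiplying each greyscale value by the tint's RGB colour, so highlights take the full dye colour while shadows stay black.

```python
# Sketch: simulate a uniform tint by scaling every greyscale pixel
# (0.0 = black, 1.0 = white) by a single dye colour.

def apply_tint(grey_frame, tint_rgb):
    """Multiply a greyscale frame by one dye colour, pixel by pixel."""
    return [[tuple(v * t for t in tint_rgb) for v in row]
            for row in grey_frame]

amber = (1.0, 0.6, 0.2)      # hypothetical dye colour, chosen for illustration
grey = [[0.0, 0.5, 1.0]]     # one row: black, mid-grey, white

tinted = apply_tint(grey, amber)

# Shadows remain black; highlights take the full tint:
assert tinted[0][0] == (0.0, 0.0, 0.0)
assert tinted[0][2] == amber
```

Real dye behaviour is not this linear, which is one reason the text notes that tinted colours are still so hard to match digitally.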
Der Märchenwald, ein Schattenspiel (1919), which will be screened during the 2016 conference, Colour in Film Credit: By courtesy of Deutsche Kinemathek, Berlin. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
Der Märchenwald, ein Schattenspiel (1919) Credit: By courtesy of Deutsche Kinemathek, Berlin. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
Toning, in contrast, would replace the black-and-white silver image with one equally crisp and defined, but composed of inorganic pigments such as the well-known sepia brown or Prussian blue. These chemistries were measured, and their palettes will be explored in depth in Prof. Ruedel’s 3 March presentation.
Tinting and toning could also be evocatively combined, yielding two-colour, yet artificial images of particular beauty, such as in the brand-new restoration of Fritz Lang’s melancholic masterpiece Destiny (1921), freshly restored by Murnaustiftung and to be explored by Anke Wilkening in depth in her 3 March talk.
Dänische Landschaften [Danish Landscapes] (1912) Credit: Joye Collection, BFI National Archive. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
Essentially a silent film technique, tinting would also occasionally be applied in the sound era, perhaps most recently in The Day the Earth Caught Fire (1961) to suggest the heat experienced on the Earth as it drifts toward the sun. For our recent sci-fi season, the film was digitally restored by the BFI, drawing on this rare original tinted print (note the typical image squeeze, meant to be stretched to CinemaScope dimensions on the screen) as a colour guide.
Tinted frame for The Day the Earth Caught Fire (1961) Credit: BFI National Archive
Adapting an approach well known from, say, photographic postcards of the 19th century (and dating back even to prehistoric cave paintings), hand and stencil colours were made from solutions of synthetic dyes applied to films shot on black-and-white materials, much as in tinting but selectively, to create the first ‘colour’ films.
The Cascades of the Houyoux (Province of Liège, Belgium) (1911) Credit: Joye Collection, BFI National Archive. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
The version done frame by frame, by hand, copy by copy, was the most laborious, but soon the French production company Pathé established its semi-automated stencil colour system: once stencils were cut for every colour of a given film, the dyes could more easily be brushed onto numerous copies, with very precise outlines. Artificial but incredibly beautiful, whether lavishly or selectively applied, naturalistically or fantastically, these colours still exert their magic spell on an audience after more than a century.
In the year of the 100th anniversary of the company that developed it through the years (‘Glorious Technicolor’ is actually the fourth Technicolor system, with two-color Technicolor systems No. 2 and 3 its most important predecessors, see above), Hollywood’s first enduring colour system easily takes the number one spot on this list. With prints that are essentially made in a lithographic, ‘dye transfer’ process, its vibrancy is reminiscent of the earlier, ‘unnatural’ applied dye colours, yet it was the first system to offer full ‘natural’ colour photographic moving images.
The Red Shoes (1948) Credit: BFI National Archive. Photograph by Barbara Flueckiger, Timeline of Historical Film Colors
Hollywood classics like Gone with the Wind (1939) are unthinkable without the system, but there were also, in their aesthetics, distinctly European and British implementations of it, such as The Red Shoes (1948). Even after the bulky Technicolor camera gave way to Eastman colour negatives in conventional film cameras, Technicolor printing remained the preferred way to ensure vibrant prints even from Eastman negatives, well into the 1970s, for everything from spaghetti westerns to Hammer horror. This may be the subject of a future Colour in Film edition.
Want to know more? Consult the Timeline of Historical Film Colors
The BFI National Archive’s vaults are home to a host of treasures reflecting the international history of film colour, including British contributions ranging from early colour systems such as Friese-Greene to the unique aesthetics achieved with American Technicolor by cinematographers such as Jack Cardiff.
In film restoration, rendering historical colours faithfully in modern photographic or digital copies remains a substantial challenge. Improvements have been made both within the analogue realm and through the extended possibilities digital restoration offers. Still, many existing copies reflect the original colour appearance only to a limited extent, and even in brand-new restorations, the colours of originals can turn out to be beyond the range of modern films or digital colour spaces. Thus, ‘passive’ conservation to protect original materials for the future in state-of-the-art film store facilities remains of the highest priority, but so does further research and documentation on the colours of 20th-century motion pictures towards improved understanding and more faithful restoration.
The BFI has always been engaged with these problems, ranging from issues related to the very earliest colour films to the authentic colour rendering in major BFI restoration projects, with scientific research and outreach informing such endeavours. Recognising the quantum leap towards documentation of historical colour systems facilitated with her Timeline of Historical Film Colors, the BFI’s conservation managers and curators were thus delighted to welcome the University of Zurich’s Professor Barbara Flueckiger to the J. Paul Getty Jr Conservation Centre in March 2014. In a joint project and with help from the conservation and collection teams, various colour systems evidenced in the rich collections were to be visually documented for dissemination within the Timeline website.
During two days on site, Professor Flueckiger thus captured numerous high-quality, colour-calibrated digital images of a carefully selected, yet extensive number of historic colour prints, often on volatile nitrate stock, from the BFI’s vaults. The high-quality images thus generated now form part of the BFI’s Collections and Information Database and are available both to specialists and the public through the galleries in the Timeline of Historical Film Colors.
Oscar-winning cinematographer Robert Richardson ASC digs into the technical nitty-gritty of the large-format anamorphic film process that hasn’t been used in nearly 50 years.
The comeback of motion picture film will get its biggest boost yet with the Ultra Panavision 70 release of celluloid defender Quentin Tarantino’s post-Civil War Western “The Hateful Eight.”
Shot on 65mm film with classic Panavision lenses in the widest aspect ratio of 2.76:1, this marks the first anamorphic 70mm theatrical release in nearly 50 years. The two-week roadshow engagement (they’re aiming for 100 theaters with the Cinerama Dome in contention for LA, of course) would be the best holiday gift for cinephiles.
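The 2.76:1 figure follows from the format’s geometry. A minimal sketch of the arithmetic, assuming the commonly cited ~2.21:1 projected area of spherical 65mm and Ultra Panavision’s 1.25× anamorphic squeeze (the variable names are mine):

```python
# Ultra Panavision 70: a 1.25x anamorphic squeeze applied to the
# ~2.21:1 spherical 65mm frame yields the format's famously wide ratio.
BASE_RATIO = 2.21   # approximate spherical 65mm projected aspect ratio
SQUEEZE = 1.25      # Ultra Panavision anamorphic squeeze factor

unsqueezed = BASE_RATIO * SQUEEZE
print(f"Projected aspect ratio: {unsqueezed:.2f}:1")  # → 2.76:1
```

The same 1.25× squeeze is why the lenses Richardson found at Panavision look so unusual: they compress the image far more gently than the 2× squeeze of standard anamorphic.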
“The Hateful Eight” will also pit three-time Oscar-winning cinematographer Robert Richardson (“Hugo,” “The Aviator,” “JFK”) in a shoot-out with Emmanuel “Chivo” Lubezki, who’s going for a third Oscar in a row for his own frozen wilderness adventure, “The Revenant,” from “Birdman” director Alejandro G. Iñárritu. (Both films are racing to the editorial finish line for a Christmas Day release.)
Richardson proclaimed that Ultra Panavision 70 more than reinforces the notion that film can coexist with digital: it provides such unparalleled scope, resolution and beauty that everyone should be using it. “When we saw Sam Jackson in a closeup — or anyone — it just aided the skin. It’s remarkable. We never used diffusion, the only filters we ever did were outside. It was stunning.”
The last Ultra Panavision 70 release was “Khartoum” (1966), the biopic with Charlton Heston as British Gen. Charles Gordon. The list also includes “Ben-Hur,” “Mutiny on the Bounty,” “It’s a Mad, Mad, Mad, Mad World,” “The Fall of the Roman Empire,” “The Greatest Story Ever Told” and “The Battle of the Bulge.”
In fact, Panavision took Tarantino into a screening room and surprised him with the chariot race from “Ben-Hur,” starting with the sides at the normal width and then spread out to expose the full frame — and the film nerd was totally hooked on Ultra Panavision 70.
But this all began accidentally: “We went in thinking we were going to shoot standard format for 65mm and one day I was with Gregor Tavenner, my first camera assistant, and Dan Sasaki [Panavision VP of optical engineering] was showing us standard Panavision lenses for 65mm and while looking at them, I slipped behind the curtain and saw this shelf filled with odd-shaped lenses [triangular with prisms]. They were Ultra Panavision lenses,” Richardson said.
Panavision also made a 2,000-foot magazine for the film cameras to accommodate Tarantino’s penchant for long takes.
The team brought a very analogue approach to shooting in Telluride and onstage at LA’s Red Studios, where they lowered the temperature to 30 degrees. They screened dailies in 70mm, with no digital intermediate, and the film is being color-timed photochemically, the old-school way, by FotoKem.
“There’s a great deal of interior landscape available and the actors loved it. Also, I think they enjoyed the visual feast that was given to them in terms of their own faces,” said Richardson, who admitted, though, that throwback photochemical color timing has been frightening.
“I’d become reliant on a digital intermediate for fixing things in post and you can let certain things go. For example, you realize that the backgrounds are blown out but you don’t want to take the time to put a hard gel up. You know you can rescue that with the window and tracking, or if your weather doesn’t quite match, it’s easier to work a look between sunny and overcast.”
But not when it came to this gorgeous look. And this is just the beginning, as Gareth Edwards’ “Rogue One: A Star Wars Story” is also reportedly being shot with Ultra Panavision 70 lenses.
Most of us know Lytro from its revolutionary stills camera which allowed for an image to be adjusted in post as never before – it allowed focus to be changed. It did this by capturing a Lightfield and it seemed to offer a glimpse into the future of cameras built on a cross of new technology and the exciting field of computational photography.
Why then did the camera fail? Heck, we sold ours about 8 months after buying it.
Lightfield technology did allow the image’s depth or focus to be adjusted in post, but many soon found that this was just delaying a decision that belonged on location. If you wanted to send someone a Lytro image, you almost always just picked the focus and sent a flat .jpeg; the only alternative was to send them a file which required a special viewer. The problem with the latter was simple: someone else ‘finished’ taking your photo for you – you had no control. It delayed the on-set focus decision to the point that you never decided at all! The problem with the former, i.e. rendering a JPEG, was that the actual image was no better than one could get from a good Canon or Nikon – actually a bit worse, as the Lightfield optics could not outgun your trusty Canon 5D.
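The refocus-in-post trick the stills camera relied on is usually described as shift-and-sum refocusing: each sub-aperture view of the light field is shifted in proportion to its offset from the lens centre, then all views are averaged, so objects at the chosen depth line up and everything else blurs. A minimal sketch with one angular and one spatial dimension and integer pixel shifts (the function name and these simplifications are mine, not Lytro’s actual pipeline):

```python
import numpy as np

def refocus(lightfield, slope):
    """Shift-and-sum refocusing of a simplified light field.

    lightfield: array of shape (U, X); row u is the 1-D image seen from
    sub-aperture u. slope: integer pixel shift per unit of u for the
    desired focal plane (larger slope = nearer focus).
    """
    U, X = lightfield.shape
    centre = U // 2
    acc = np.zeros(X)
    for u in range(U):
        # undo each view's parallax for the target depth, then average
        acc += np.roll(lightfield[u], -slope * (u - centre))
    return acc / U
```

A point whose parallax matches `slope` is reconstructed sharply; any other choice of `slope` smears it across the views, which is exactly the “pick the focus later” behaviour.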
In summary: the problem was that we had no reason to want the image left undecided. Lightfield was a solution looking for a problem. We needed somewhere it made sense not to ‘lock down’ the image, but to keep it ‘alive’ for the end user.
Enter VR – the problem that Lightfield solves.
Currently, much of the cutting-edge VR is computer generated – rigs that incorporate head tracking understand that you are moving your head to the side and render the right pair of images for your eyes. While a live action capture will allow you to spin on the spot and see in all directions, it did not (until now) allow you to lean to one side to dodge a slow-motion bullet traveling right at you, the way a CG scene could.
Live action was stereo and 360 but there was no parallax. If you wanted to see around a thing…you couldn’t. There are some key exceptions such as 8i which have managed to capture video from multiple cameras and then allow a live action playback with head tracking, parallax and the full six degrees of motion, thus becoming dramatically more immersive. However, 8i is a specialist rig which is effectively a concave wall or bank of cameras around someone, a few meters back from them. The new Immerge from Lytro is different – it is a ball of cameras on a stick.
Lytro Immerge seems to be the world’s first commercial professional Lightfield solution for cinematic VR, which will capture ‘video’ from many points of view at once and thereby provide a more lifelike presence for live action VR through six degrees of freedom. It is built from the ground up as a full workflow: camera, storage and even a NUKE compositing to color grading pipeline. This allows the blending of live action and computer graphics (CG) using Lightfield data, although details of how you will render your CGI to match the Lightfield-captured data are still unclear.
With this configurable capture and playback system, any of the appropriate display head rigs should support the new storytelling approach, since at the headgear end there is no new format; all the heavy lifting is done earlier in the pipeline.
How does it work?
The only solution for dynamic six degrees of freedom is to render the live action and CGI as needed, in response to the head unit’s render requests. In effect you have a render volume: imagine a one-meter box within which you can move your head freely. Once the data is captured, the system can solve for any stereo pair anywhere in that 3D volume. Conceptually, this is not that different from what happens now for live action stereo. Most VR rigs capture images from a set of cameras and then resolve a ‘virtual’ stereo pair from the 360-degree overlapping imagery. It is hard to do, but if you think of the level 360 panorama as a strip – a 360-degree mini-cinema screen that sits around you like a level ribbon of continuous imagery – then you just need to find the right places to interpolate between camera views.
Of course, if the cameras had captured the world as a nodal pan there would be no stereo to see. But no camera rig does this – given the physical size of cameras all sitting in a circle… a camera to the left of another sees a slightly different view and that offset, that difference in parallax, is your stereo. So if solving off the horizontal offset around a ring is the secret to stereo VR live action, then the Lytro Immerge does this not just around the outside ring but anywhere in the cube volume. Instead of interpolating between camera views it builds up a vast set of views from its custom lenses and then virtualizes the correct view from anywhere.
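That interpolation step can be illustrated with a toy sketch: given two neighbouring ring cameras and a single constant scene disparity, a virtual view a fraction t of the way between them is a blend of the two disparity-compensated images. The function and its assumptions are mine and purely illustrative – a real system like the Immerge solves per-pixel depth and handles occlusions, which is exactly the hard part:

```python
import numpy as np

def virtual_view(left, right, t, disparity):
    """Toy synthesis of a camera view a fraction t (0..1) of the way
    from 'left' to 'right', assuming one constant integer disparity for
    the whole scene. Images are 1-D arrays for simplicity."""
    # warp each real view toward the virtual camera position
    warped_left = np.roll(left, -int(round(t * disparity)))
    warped_right = np.roll(right, int(round((1 - t) * disparity)))
    # blend, weighting the nearer real camera more heavily
    return (1 - t) * warped_left + t * warped_right
```

With t=0 or t=1 this returns the real camera views; in between it approximates the parallax a camera placed there would have seen – the Immerge’s trick is doing this not just along the ring but anywhere inside the volume.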
Actually it goes even further. You can move outside the ‘perfect’ volume, but at that point the system starts to lack previously obstructed scene information. So if you look at some trees and then move your head inside the volume, you can see perfectly around one tree to another. But if you move too far, some part of the back forest was never captured and hence can’t be provided in the real-time experience; in a sense you get an elegant fall-off in fidelity as you ‘break’ the viewing cube.
VR was already a lot of data, but once you move to Lightfield capture it is vastly more, which is why Lytro has developed a special server, which will feed into editing pipelines and tools such as NUKE and which can record and hold one hour of footage. The server has a touch-screen interface, designed to make professional cinematographers feel at home. PCmag reports that it allows for control over camera functions via a panel interface, and “even though the underlying capture technology differs from a cinema camera, the controls—ISO, shutter angle, focal length, and the like—remain the same.”
Doesn’t this seem like a lot of work just for head tracking?
The best way to explain this is to say, it must have seemed like a lot of work to make B/W films become color…but it added so much there was no going back. You could see someone in black and white and read a good performance, but in color there was a richer experience, closer to the real world we inhabit.
With six degrees of freedom, the world comes alive. Having seen prototype and experimental Lightfield VR experiences all I can say is that it does make a huge difference. A good example comes from an experimental piece done by Otoy. Working with USC-ICT and Dr Paul Debevec they made a rig that effectively scanned a room. Instead of rows and rows of cameras in a circle and stacked on top of one another virtually, the team created a vast data set for Lightfield generation by having the one camera swung around 360 at one height – then lifted up and swung around again, and again all with a robotic arm. This sweeping meant a series of circular camera data sets that in total added up to a ball of data.
Unlike the new Lytro approach, this works only on a static scene – a huge limitation compared to the Immerge – but it is still a valid data set, and conceptually similar to the ball of data at the core of the Lytro solution. Unlike the Immerge, however, this was an experimental piece, and as such was completed earlier this year. What is significant is just how different this experience is from a normal stereo VR experience. For example, even though the room is static, as you move your head the specular highlights change and you can much more accurately sense the nature of the materials being used. In a stereo rig, I was no better able to tell you what a bench top was made of than by looking at a good-quality still, but in a Lightfield you adjust your head, see the subtle specular shift and break-up, and you are immediately informed as to what something might feel like. Specular highlights may seem trivial, but they are one of the key things we use to read faces. And this brings us to the core of why the Lytro Immerge is so vastly important: people.
VR can be boring. It may be unpopular to say so, but it is the truth. For all the whizz-bang uber tech, it can lack storytelling. Has anyone ever sent you a killer timelapse showreel? As a friend of mine once confessed, no matter how technically impressive, no matter how much you know it would have been really hard to make, after a short while you fast-forward through the timelapse to the end of the video. VR is just like this. You want to sit still and watch it, but it is not possible to hang in there for too long as it just gets dull – after you get the set up… amazing environment, wow… look around… wow, OK I am done now.
What would make the difference is story, and what we need for story is actors – acting. There is nothing stopping someone from filming VR now, and most VR is live action, but you can’t film actors talking and fighting, punching and laughing – and move your head to see more of what is happening – you can only look around, and then more often than not, look around in mono.
The new Lytro Immerge and the cameras that will follow it offer us professional kit that allows professional full immersive storytelling.
Right now an Oculus Rift DK2 is not actually that sharp to the eye. The image is OK, but the next generation of headsets has vastly better screens, and this will make the Lightfield technology even more important. Subtle but real specular changes are not relevant when you can’t make out a face that well due to low-res screens, but the prototype new Sony, Oculus and Valve systems are going to scream out for such detail.
Sure, they’ll be expensive, but then an original Sony F900 HDCAM was $75,000 when it came out, and now my iPhone does better video. Initially, you might only think about buying one if you had either a stack of confirmed paid work or a major rental market to service, but hopefully the camera will validate the approach and provide a much-needed professional solution for better stories.
How much and when?
No news yet on when production units will actually ship – many of the images released for the launch are actually concept renderings – but the company has one of the only track records for shipping actual Lightfield cameras, so the expectation is very positive that it can pull the Immerge off technically and deliver.
In The Verge, Vrse co-founder and CTO Aaron Koblin commented that “light field technology is probably going to be at the core of most narrative VR.” When a prototype version comes out in the first quarter of 2016, it’ll cost “multiple hundreds of thousands of dollars” and is intended for rental.
Lytro CEO Jason Rosenthal says the new cameras actually contain “multiple hundreds” of cameras and sensors and went on to suggest that the company may upgrade the camera quarterly.
Authors James Layton and David Pierce discuss how Herbert Kalmus, Walt Disney and Ray Rennahan, ASC, all contributed to Technicolor’s success.
by Jason Apuzzo
Over the course of its storied first century, Technicolor came to represent more than a motion-picture technology company. Marked by a vividness of color and an exuberant style, Technicolor became synonymous with an entire era of Hollywood filmmaking, the golden age of studio production from the late 1930s to the early 1950s. This era did not emerge overnight, however, and a new book by James Layton and David Pierce, The Dawn of Technicolor 1915-1935, published by George Eastman House to coincide with Technicolor’s 100th anniversary, documents the company’s earlier, groundbreaking “two-color” era.
It was during this formative period that Technicolor based its technology on the innovative use of red and green filters and dyes — colors chosen to prioritize accurate skin tone and foliage hues. Two-color Technicolor was achieved by way of a beam-splitting prism behind the camera lens that sent light through red and green filters, creating two separate red and green color records on a single strip of black-and-white film. Separate prints of these two color records (with their silver removed) were later cemented together in the final printing process, with red and green dyes then added; this was a complex and error-prone process that later gave way to a two-color “dye-transfer” printing process, in which the color dyes were pressed onto a single piece of film, one color at a time.
As Layton and Pierce’s book reveals, this early two-color system, which was unable to properly reproduce blues, purples or yellows, was eventually superseded by Technicolor’s more famous, three-color process. Yet surviving motion pictures from Technicolor’s two-color period, such as Douglas Fairbanks’ The Black Pirate (1926) and the color sequences in Ben-Hur (1925), reveal a subtlety and understated elegance unique to the technology.
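How much of the spectrum the two-color system discards is easy to see in a quick simulation. A rough sketch, assuming the red record is simply the R channel of a modern RGB image and the green record the G channel, with blue rebuilt from the green record as a crude stand-in for the blue-green dye (the real dye chemistry was far subtler than this):

```python
import numpy as np

def two_color_look(rgb):
    """Crude approximation of a two-colour Technicolor palette.

    rgb: float array of shape (..., 3). The red record carries R, the
    green record carries G; blue has no record of its own, so it is
    rebuilt from the green record, mimicking the blue-green dye.
    """
    red_record = rgb[..., 0]
    green_record = rgb[..., 1]
    out = np.empty_like(rgb)
    out[..., 0] = red_record
    out[..., 1] = green_record
    out[..., 2] = green_record  # blue carried only by the green dye
    return out
```

Feed this a pure blue pixel and it comes back black – a toy demonstration of why skies, purples and yellows defeated the two-color process.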
Lavishly illustrated and meticulously researched, and featuring an extensive filmography co-authored by Crystal Kui, The Dawn of Technicolor includes more than 400 illustrations, over half of which were made directly from negatives and original nitrate prints.
Rich in technical detail, the book also offers a fascinating glimpse into the early career of Ray Rennahan, ASC, arguably the most important of Technicolor’s early cinematographers. After cutting his teeth on early Technicolor efforts like the two-color scenes in Cecil DeMille’s The Ten Commandments (1923), Rennahan photographed the first three-color Technicolor feature, Becky Sharp (1935), and won two Academy Awards for Color Cinematography, for Gone with the Wind (1939) and Blood and Sand (1941).
Layton and Pierce sat down with AC to discuss their new book at the recent TCM Classic Film Festival, where they presented two-color Technicolor rarities to a packed house at the Egyptian Theatre.
American Cinematographer: So much attention has been paid to the introduction of sound in cinema, and comparatively less attention has been paid to color. Why do you think it has taken so long for a book as comprehensive as yours to be published?
James Layton: There have been books on Technicolor, but they tend to focus on the complete history, from the beginning to the present. As a result, they’ve been a bit more generalized. Our interest has always been in silent and early sound cinema, and in color and motion-picture technology in particular, so it was natural for us to gravitate to that period.
David Pierce: I think that we’re probably at the start of an examination of color and its effect upon audiences. So many of the films that included color, or tinting and toning, were preserved in black-and-white, so those [color processes] were kind of invisible to audiences …. Also, if you dupe a nitrate print of a two-color film, it doesn’t look great. But for various technical reasons, if you do a digital copy, you can actually create a very accurate representation of two-color. So I think the time is right to look at how directors and cinematographers used those tools.
Technicolor was invented by talented MIT researchers who applied their scientific knowledge of color and optics to motion-picture technology. How did the fact that these researchers came out of Boston’s hothouse academic environment, and not from Hollywood, shape their efforts?
Layton: They had a very practical and regimented approach. They knew what the end goal was that they wanted, and they just worked in steps. In the book, we talk about a ‘step development’ process. They didn’t try to achieve their goal overnight. It took a lot of money, a lot of research and development. They would just experiment, and keep experimenting, and if something worked, they’d proceed with that. Technicolor had the R&D team from the start. But another reason they were successful was that [Technicolor co-founder] Herbert Kalmus found the investment and secured the money for 15 years before the first year of profit. Most other companies would’ve shut down way before that.
So patience was a key factor in Technicolor’s success during this two-color era.
Pierce: Patient capital. The other firms ran out of capital, ran out of funding, before they ran out of technical problems to solve.
From the outset, Technicolor had its own proprietary film prints, cameras, lab processes, makeup requirements and shooting style, all of which required specially trained personnel who often worked outside the studio system. What kind of challenges did this create in terms of getting the industry to fully embrace color?
Layton: The reason they had all that in place was to maintain high quality, high standards, because if they’d just given it to the studios, the studios wouldn’t have known how to use it properly. The creators of other color processes did that. With Multicolor, for instance, you could adapt any camera, you could buy the stock, and you could shoot it with your own cameraman. And you got a cheaper product for it. Technicolor always wanted to maintain the best quality.
Yet there was reportedly some tension between the Hollywood crews and the Technicolor crews on set.
Layton: There was a certain mindset in the 1920s — you know, the black-and-white cameramen were ‘artists,’ sculpting with light, et cetera. The Technicolor cameramen were seen more as ‘technicians.’ They used their exposure meters when exposure meters weren’t common, because they had to get a certain amount of light. Also, color was initially seen as a gimmick, some flashy thing that studios were using to sell the film and excite audiences. It wasn’t perceived as artistic or creative.
Technicolor had to overcome many challenges during the 1920s and early 1930s, including print quality, competition, the Depression, and even audience ambivalence toward color itself. Technicolor also had to produce its own films in order to convince Hollywood that the technology was viable. It seems that the company’s success owes a great deal to sheer persistence.
Pierce: They had a master salesman in Herbert Kalmus, and Kalmus, who was a Ph.D. engineer, quite consciously developed himself into a very accomplished businessman, an excellent communicator. He knew how to cultivate mentors who helped him acquire captains of industry for the Technicolor board of directors, and he was able to communicate about the technology to non-technical people. So the industry saw him as the technical expert, the ‘genius,’ and the engineers saw him as the business leader. He was not the inventor of Technicolor; he was the one who marshaled the resources the engineers needed in order to deliver, and he got them enough time. In that way, he was very much like Steve Jobs. He didn’t write the code, but he hired the right people, he set the goals, and he was able to ensure that what they delivered had an impact on the industry and was what people wanted.
Another factor in Technicolor’s success seems to have been that it was championed by people like Douglas Fairbanks, Cecil B. DeMille, Jack Warner and Walt Disney, who really understood color’s potential. Among Technicolor’s patrons, who do you think was the most crucial?
Pierce: I think Disney was in many ways the most important, because the test film Flowers and Trees, where they didn’t know if they were going to release it in color or black-and-white, got such an enormous response that Disney decided to release it in color even though United Artists said, ‘We’ve already sold these pictures at a flat rate! We’re going to make less money by releasing this in color!’ Disney, as always, was thinking ahead. This really pulled the entire animation industry into color, and that built up the three-color printing and really helped Technicolor work out the bugs for three-color before there was much live action [in the format].
Do you consider Ray Rennahan the greatest of the Technicolor cameramen?
Pierce: Technicolor did. He started as an assistant and lab guy, and he had an instinctive understanding of all the skills of black-and-white and how to create effective color. And the quality of his work continued to improve as he grew into that role. Of the other people that Technicolor recruited, including some first-class cameramen, none was really able to work at Rennahan’s caliber. The black-and-white cameramen tended to look down on Technicolor because it didn’t require you to set up your three-point lighting to make the person stand out from the background — color does that for you. So [Technicolor] wasn’t seen as sophisticated in terms of the lighting you used and the knowledge you had to have. But it was much more sophisticated in some ways. For example, you had to understand how to use color balance to draw the viewer’s eye to the right things within the frame.
Filmmaking technology is in a constant state of evolution, and the story of Technicolor’s first 20 years seems to offer a world of lessons to modern innovators. What would you say those lessons are?
Pierce: I think investors are far less forgiving now than they were back then. Today there’s the need to deliver something that can go commercial faster. Technicolor was not a licensing company; it was a ‘we do it for you’ company. If you were doing something comparable now, you’d probably say, for instance, ‘We’ve come up with a camera phone. Do we build it ourselves so that we’re the only cell phone that has a camera, or do we decide to become a patents company and license it out to everybody?’ Given the technical hurdles, I think engineers coming up with Technicolor today would probably develop the components and then offer it out so that everyone else could become a color company.
Do you plan to follow this book with one that looks at the next era of Technicolor?
Layton: We’ve kind of joked about it. We’ve already found a lot of research material that was related to the later period. We’ve joked that we could do one that just covers the next five years, up to Gone with the Wind and The Wizard of Oz. There are a lot of stories!
Click here to watch a short video produced by George Eastman House about the two-color process.
Founded in 1915, the Technicolor Motion Picture Corporation transformed cinema forever with its revolutionary color processes. George Eastman House marks this important centennial with the exhibition In Glorious Technicolor, on view January 24 through April 26, 2015 in the special exhibition galleries.
The exhibition celebrates Technicolor’s vivid history, from the company’s early years through the making of such classics of the Hollywood studio era as The Wizard of Oz (1939), Gone With the Wind (1939), and Singin’ in the Rain (1952). Technicolor’s wide-ranging impact on the form and content of cinema is explored through original artifacts from the Technicolor Corporate Archive, projected video clips, and a range of stunning visual displays.
Highlights include the company’s evolving camera technology, from its early two-color camera from the 1920s to the massive Technirama widescreen system of the 1950s. Original costumes, production designs, posters, and photographs document how color was used creatively and presented to the public, while the vibrant dyes used to create Technicolor’s incomparable “look” shed light on the science behind the process. Rare tests from Douglas Fairbanks’s The Black Pirate (1926), behind-the-scenes stills from Errol Flynn’s The Adventures of Robin Hood (1938), and home movies from the set of The African Queen (1951) reveal the stars and filmmakers most associated with color. Additionally, the exhibition honors the achievements of Academy Award–winning cinematographers Ray Rennahan and Jack Cardiff, as well as Technicolor’s often overlooked engineers, whose work remained largely out of the limelight.
To complement the gallery exhibition, the Dryden Theatre is presenting a four-month series of Technicolor films, including some original Technicolor prints.