Category: Cinematography

VFX Legend Douglas Trumbull talks about the Future of Film … and Kubrick.

From the Sept. 12 issue of The Hollywood Reporter.

Trumbull drives me a short distance from his home to a full-size soundstage and escorts me into a screening room that he has constructed to meet his ideal specifications: a wide wall-to-wall and floor-to-ceiling curved screen, with surround sound, steeply rigged stadium seating and a 4K high-resolution projector. As I put on specially designed 3D glasses and settle into stadium seating, he tells me, with an unmistakable hint of nervousness, “You’re one of the first people on the planet to see this movie.”

Ten minutes later, the lights come back up and I sit in stunned silence. The short that I have just seen, UFOTOG (a blending of the words “UFO” and “fotog,” the latter slang for press photographer), is stunning not because of its story — we’ve all seen movies about UFOs — but because it shows, as it was designed to do, what movies can look like if theaters, studios and filmmakers embrace the MAGI process through which Trumbull brought it to the screen: bigger, brighter, clearer and with greater depth-of-field than anything ever seen in a cinema before.

All of the aforementioned conditions are part of the MAGI equation, but the most essential element is the rate of frames per second at which a film is projected. In the beginning, the Lumière brothers projected films at 18 fps, slow enough to result in the appearance of flickering — hence the early nickname for the movies, “the flickers” or “the flicks.” That figure eventually increased to 24 fps, and has remained there, for the most part, ever since.

In 2012, Peter Jackson dared to release The Hobbit’s first installment at 48 fps, which was supposed to create a heightened sense of realism, but which instead struck many as strange-looking and some even as nauseating. Many deemed the experiment a failure. Trumbull disagreed. He felt that if a digitally shot film was projected even faster — markedly faster, as in 120 fps, via a bright projector and onto a big screen — then the movie screen itself would seemingly disappear and serve effectively as a window into a world on the other side that would appear as real as the world in which one sits.


To the Moon and Beyond, produced by Graphic Films (where the young Trumbull was then working), featured a 70 mm circular image projected onto a dome screen and took viewers on a journey “from the Big Bang to the microcosm in 15 minutes.” Two of the thousands who saw it were Stanley Kubrick, the filmmaker, and Arthur C. Clarke, the writer, who came away from it convinced that an A-level sci-fi film — which eventually became 2001: A Space Odyssey — was possible. Kubrick contracted Graphic Films to produce conceptual designs for the project but, once it got off the ground, moved it to London, at which point the 23-year-old Trumbull cold-called the director and got a job on the film. His greatest contribution was devising a way to create a believable “Star Gate” effect, representing “the transformation of a character through time and space to another dimension.” Even though Kubrick alone claimed screen credit and an Oscar for the film’s VFX, Trumbull instantly became a name in the business.


A few years later, he made his directorial debut with Silent Running (1972), a well-received film that landed him deals at Fox, MGM and Warner Bros. — but all of them “unraveled for stupid reasons.” By 1975, “desperate because you can’t live on development deals,” he and Richard Yuricich proposed the creation of the Future General Corporation, through which they would try to identify ways to improve the technology used to make films. Paramount agreed to sponsor the endeavor — which, to them, was a tax write-off — in return for 80 percent ownership. Within the first nine months of its existence, Trumbull says, “We invented Showscan [a manner of projecting films at 60 fps]. We invented the first simulator ride. We invented the 3D interactive videogame. And we invented the Magicam process [by which actors can perform in front of a blue screen, onto which nonexistent locations can be projected to create virtual realities].” And yet, in the end, Paramount “saw no future in the future of movies” and failed to support their efforts, devastating Trumbull, who was under exclusive contract to the studio for the next six years. (The studio’s one gesture that he did appreciate: loaning him out to Columbia to do the special effects for Close Encounters of the Third Kind.)

Trumbull got out of his Paramount contract in 1979 thanks to Star Trek: The Motion Picture. The original effects team that had been engaged for the highly anticipated film couldn’t handle the job, something the studio realized only six months before its long-scheduled Christmas release date. The studio begged Trumbull to take over, and he agreed to do so — provided he was paid a considerable fee and released from his contract. He got what he requested and, to the detriment of his health, also got the job done on time.

Newly a free agent, Trumbull continued to take on special effects jobs for others — for instance, Ridley Scott’s Blade Runner (1982) — but his primary focus was on directing a film of his own that would demonstrate the capabilities of Showscan. For the project, which he called Brainstorm, he secured a top-notch cast, led by Natalie Wood, and a major distributor, MGM. Production got underway and was almost completed when, on Nov. 29, 1981, tragedy struck: Wood drowned under circumstances that remain mysterious to this day. Since Wood had only a few small scenes left to shoot, Trumbull felt that he could easily finish the film, but MGM, which was in dire financial straits, filed what he deemed a “fraudulent insurance claim” because “they wanted to get out of it.”

Doug Trumbull on motion simulator base for “In Search of the Obelisk” (1993) VistaVision ridefilm at the Luxor Las Vegas.

Photo courtesy of Mice Chat.

Then, in 1990, he was approached about making a Back to the Future ride for Universal Studios venues in Florida, Hollywood and Japan. Others had been unable to conquer it, but he made it happen — and in a groundbreaking way: “It took you out of your seat and put you into the movie. You were in a DeLorean car. You became Marty McFly. You became a participant in the movie. The movie was all around you.” It ran for 15 years, he says, but was “dismissed as a theme park amusement.” He felt it was something more. “This was a moment where, for the first time in history, you went inside a movie.” Even though others failed to see larger possibilities, he says, “That kinda kept me going for a long time because it validated that we could be here in the Berkshires and make breakthroughs that no one else was able to do in Hollywood or anywhere else.”

In 2009, James Cameron’s Avatar, a digitally shot 3D production that grossed a record $2.8 billion worldwide, changed everything. Its success spurred, at long last, filmmakers to transition en masse to digital photography and theaters to transition en masse to digital projection — at which point Trumbull made a crucial discovery. He realized that digital projectors refresh at 144 fps — more than twice the rate Showscan had achieved — yet films were still being made at 24 fps, with each frame simply flashed multiple times. “Could we do a new frame every flash?” he wondered. If so, he reasoned, it might just give people a reason to put down their smartphones, tablets and laptops and actually buy a ticket to see a movie in a theater.
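
A back-of-the-envelope sketch of that arithmetic, using only the figures cited above (144 flashes per second, 24 fps content); the snippet is purely illustrative:

```python
# Illustrative arithmetic only: how often a single source frame is repeated
# ("flashed") when 24 fps material is shown on a projector refreshing 144 times/second.
PROJECTOR_FLASHES_PER_SECOND = 144   # figure cited above
CONTENT_FPS = 24                     # conventional capture rate

flashes_per_frame = PROJECTOR_FLASHES_PER_SECOND // CONTENT_FPS
print(f"Each 24 fps frame is flashed {flashes_per_frame} times.")   # 6

# Trumbull's question -- "could we do a new frame every flash?" -- amounts to
# capturing and delivering motion at or near the projector's full refresh rate,
# so every flash shows new movement instead of a repeat.
print(f"A new frame per flash implies up to {PROJECTOR_FLASHES_PER_SECOND} fps content.")
```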

After years of work on his farm, Trumbull is finally ready to unveil UFOTOG. Its first public presentation will take place on Sept. 11 as part of the Toronto International Film Festival’s Future of Cinema conference (at which Trumbull will also give a keynote address), and it will also screen days later at the IBC Conference in Amsterdam. At both venues, he says, his message will be rather straightforward: “It’s not rocket science, guys. It’s just a different shape, a different size, a different brightness and a different frame rate. Abandon all that crud that’s leftover from 1927. We’re in the digital age. Get with it.”

The cost of these changes, he insists, will be rather negligible: projectors are already equipped to handle faster frame rates, which require only slightly more data and render time; theaters are already adopting brighter projectors that employ laser illumination, a longer-lasting light source that produces twice the amount of light; and theaters, he believes, will soon recognize that they are in the “real estate business” and that it is in their interest to have fewer total screens but more big screens, for which the public has demonstrated a willingness to pay a premium.

Trumbull’s main objective, though, is “to show the industry what it is possible to do” with MAGI. He says he’s “dying to show” UFOTOG to filmmakers such as Jackson, Cameron and Christopher Nolan, whom he regards as kindred souls. But mostly, he wants to challenge the industry one more time, warning it, “If you want people to come to theaters, you better do something different.”


Hyperlapse from Instagram

Product designer Chris Connolly and software engineers Thomas Dimson and Alex Karpenko.

Photo: Ariel Zambelich/WIRED

From Wired.

Today at 10 am PST, Instagram is lifting the veil on Hyperlapse, one of the company’s first apps outside of Instagram itself. Using clever algorithmic processing, the app makes it easy to use your phone to create tracking shots and fast, time-lapse videos that look as if they were shot by Scorsese or Michael Mann. What was once possible only with a Steadicam or a $15,000 tracking rig is now possible on your iPhone, for free. (Instagram hopes to develop an Android version soon, but that will require changes to the camera and gyroscope APIs on Android phones.) And that’s all thanks to some clever engineering and an elegantly pared-down interaction design. The product team shared their story with WIRED.

The Inspirations
By day, Thomas Dimson quietly works on Instagram’s data, trying to understand how people connect and spread content using the service. Like a lot of people working at the company, he’s also a photo and movie geek—and one of his longest-held affections has been for Baraka, an art-house ode to humanity that features epic tracking shots of peoples all across the world. “It was my senior year, and my friend who was an architect said, ‘You have to see it, it will blow you away,’” says Dimson. He wasn’t entirely convinced. The movie, after all, was famous for lacking any narration or plot. But watching the film in his basement, Dimson was awestruck. “Ever since, it’s always been in the back of my mind,” he says.

A sample shot from Baraka

By 2013, Dimson was at Instagram. That put him back in touch with Alex Karpenko, a friend from Stanford who had sold his start-up to Instagram in 2013. Karpenko and his firm, Luma, had created the first-ever image-stabilization technology for smartphone videos. That was obviously useful to Instagram, and the company quickly deployed it to improve video capture within the app. But Dimson realized that it had far greater creative potential. Karpenko’s technology could be used to shoot videos akin to all those shots in Baraka. “It would have hurt me not to work on this,” says Dimson.

Once you start using the app, you quickly see that replay speed itself becomes a novel, alluring tool: For pets and people, replaying at about 1x gives you the sense that you’re creating a tracking shot like that Copacabana scene in Goodfellas. The higher replay speeds work better for shooting the sky out your airplane window, the scenery scrolling past during a train ride, or anything else that’s moving slowly or at a distance. Where Instagram’s filters are all about changing color and light, Hyperlapse uses a simple speed slider as its main creative decision.

All of those choices must be built in up front with traditional camera rigs. Usually, capturing even a brief tracking shot requires intricate choreography between where you’ll move with the camera and what your subjects will be doing when you film them. Time-lapse setups are even more intense, requiring that a camera be set up on a track and programmed to move at a steady speed. Both of those art forms are hardly spontaneous, and spontaneity is supposed to be Instagram’s calling card.

Hyperlapse, by contrast, lets you create a tracking shot in less than a minute. “This is an app that lets you be in the moment in a different way,” says Instagram co-founder Mike Krieger. “We did that by taking a pretty complicated image processing idea, and reducing it to a single slider. That’s super Instagram-y.”


The Technology behind Hyperlapse from Instagram

Yesterday we released Hyperlapse from Instagram—a new app that lets you capture and share moving time lapse videos. Time lapse photography is a technique in which frames are played back at a much faster rate than that at which they’re captured. This allows you to experience a sunset in 15 seconds or see fog roll over hills like a stream of water flowing over rocks. Time lapses are mesmerizing to watch because they reveal patterns and motions in our daily lives that are otherwise invisible.

Hyperlapses are a special kind of time lapse where the camera is also moving. Capturing hyperlapses has traditionally been a laborious process that involves meticulous planning, a variety of camera mounts and professional video editing software. With Hyperlapse, our goal was to simplify this process. We landed on a single record button and a post-capture screen where you select the playback rate. To achieve fluid camera motion we incorporated a video stabilization algorithm called Cinema (which is already used in Video on Instagram) into Hyperlapse.
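
A rough sketch of the time-lapse half of that idea (not Instagram's actual code): once a clip has been captured and stabilized, picking a playback rate after the fact amounts to keeping every Nth frame.

```python
def time_lapse(frames, speedup):
    """Toy time-lapse: keep every `speedup`-th frame of an already-captured,
    already-stabilized clip, so 12x playback of a minute of video yields
    roughly five seconds of output. `frames` is any sequence of frames."""
    if speedup < 1:
        raise ValueError("speedup must be >= 1")
    return frames[::speedup]

# Example: a post-capture speed selector maps directly onto the subsampling factor.
captured = list(range(360))   # stand-in for 360 captured frames
for speed in (1, 6, 12):
    print(f"{speed}x -> {len(time_lapse(captured, speed))} output frames")
```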

In this post, we’ll describe our stabilization algorithm and the engineering challenges that we encountered while trying to distill the complex process of moving time lapse photography into a simple and interactive user interface.

Cinema Stabilization

Video stabilization is instrumental in capturing beautiful fluid videos. In the movie industry, this is achieved by having the camera operator wear a harness that separates the motion of the camera from the motion of the operator’s body. Since we can’t expect Instagrammers to wear a body harness to capture the world’s moments, we instead developed Cinema, which uses the phone’s built-in gyroscope to measure and remove unwanted hand shake.
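
A heavily simplified, generic sketch of gyro-driven stabilization (an illustration of the idea, not the Cinema algorithm itself): integrate the gyroscope's rotation rate to estimate per-frame orientation, low-pass that orientation to recover the intended motion, and counter-rotate each frame by the difference. Here the correction is reduced to a single in-plane roll angle.

```python
import numpy as np

def integrate_roll(gyro_rate_z, dt):
    """Integrate the gyro's z-axis rate (rad/s) into a per-frame roll angle."""
    return np.cumsum(gyro_rate_z) * dt

def low_pass(signal, window=15):
    """Moving-average estimate of the 'intended' camera motion. Real
    stabilizers use better filters or solve an optimization instead."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def roll_corrections(gyro_rate_z, dt):
    """Angle to counter-rotate each frame: smoothed roll minus measured roll."""
    roll = integrate_roll(gyro_rate_z, dt)
    return low_pass(roll) - roll

# Example with synthetic hand shake sampled at 30 fps.
dt = 1.0 / 30
rng = np.random.default_rng(0)
shake = 0.5 * np.sin(np.linspace(0, 40, 300)) + 0.05 * rng.standard_normal(300)
print("max correction (radians):", np.abs(roll_corrections(shake, dt)).max())
```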

Go here for more in-depth info and technical videos: http://instagram-engineering.tumblr.com/post/95922900787/hyperlapse

Microsoft Research First-person Hyperlapse Videos

Microsoft Researcher Johannes Kopf ascends Mount Shuksan in the North Cascades with a GoPro.

Standard video stabilization crops out the pixels on the periphery to create consistent frame-to-frame smoothness. But when applied to greatly sped up video, it fails to compensate for the wildly shaking motion.

Hyperlapse reconstructs how a camera moves throughout a video, as well as its distance and angle in relation to what’s happening in each frame. Then it plots out a smoother camera path and stitches pixels from multiple video frames to rebuild the scene and expand the field of view.

As you might imagine, working with raw video means crunching a fair amount of data; originally, each video required a compute cluster running for several hours. Microsoft developed a series of new algorithms that led to a more efficient process without compromising the image quality. The result is that Hyperlapse can now render a high-speed video in a fraction of the time, using a single PC.

The Interactive Visual Media Group focuses on the areas of computer vision, image processing, and statistical signal processing, specifically as they relate to things like enhancing images and video, 3D reconstruction, image-based modeling and rendering, and highly-accurate correspondence algorithms that are commonly used to “stitch” together images.

From Microsoft Research.

We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyper-lapse videos, i.e., time-lapse videos with a smoothly moving camera. At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.


Scene Reconstruction
Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video that is smooth and passes near the input cameras, while ensuring that the virtual camera looks in directions that can be rendered well from the input.

Proxy Geometry
Next, we compute geometric proxies for each input frame. These allow us to render the frames from the novel viewpoints on the optimized path.

Stitched & Blended
Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame. We present a number of results for challenging videos that cannot be processed using traditional techniques.
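
To make the "smooth path that passes near the input cameras" idea concrete, here is a toy one-dimensional version of that trade-off: a least-squares balance between staying close to the shaky input path and keeping second differences (sharp accelerations) small. It only illustrates the concept; the paper optimizes a full 6-DOF camera path with additional rendering constraints.

```python
import numpy as np

def smooth_path(input_path, smoothness=50.0):
    """Toy hyperlapse path planning on a single camera coordinate.

    Minimizes  ||p - q||^2 + smoothness * ||D2 p||^2,  where q is the shaky
    input path, p the planned output path and D2 the second-difference
    operator. Being quadratic, it has the closed-form solution
    (I + smoothness * D2^T D2) p = q.
    """
    q = np.asarray(input_path, dtype=float)
    n = len(q)
    D2 = np.zeros((n - 2, n))            # rows of [1, -2, 1]
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + smoothness * D2.T @ D2
    return np.linalg.solve(A, q)

# Example: a jittery forward track becomes a gently varying path.
rng = np.random.default_rng(0)
shaky = np.linspace(0.0, 10.0, 200) + 0.3 * rng.standard_normal(200)
planned = smooth_path(shaky)
print("input roughness :", np.abs(np.diff(shaky, 2)).mean())
print("output roughness:", np.abs(np.diff(planned, 2)).mean())
```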

Disney Research Automatic Editing of Footage from Multiple Social Cameras

Disney Research demonstrated Automatic Editing of Footage from Multiple Social Cameras at SIGGRAPH.

Video cameras that people wear to record daily activities are creating a novel form of creative and informative media. But this footage also poses a challenge: how to expeditiously edit hours of raw video into something watchable. One solution, according to Disney researchers, is to automate the editing process by leveraging the first-person viewpoints of multiple cameras to find the areas of greatest interest in the scene.

The method they developed can automatically combine footage of a single event shot by several such “social cameras” into a coherent, condensed video. The algorithm selects footage based both on its understanding of the most interesting content in the scene and on established rules of cinematography.

“The resulting videos might not have the same narrative or technical complexity that a human editor could achieve, but they capture the essential action and, in our experiments, were often similar in spirit to those produced by professionals,” said Ariel Shamir, an associate professor of computer science at the Interdisciplinary Center, Herzliya, Israel, and a member of the Disney Research Pittsburgh team.

Whether attached to clothing, embedded in eyeglasses or held in hand, social cameras capture a view of daily life that is highly personal but also frequently rough and shaky. As more people begin using these cameras, however, videos from multiple points of view will be available of parties, sporting events, recreational activities, performances and other encounters.

“Though each individual has a different view of the event, everyone is typically looking at, and therefore recording, the same activity – the most interesting activity,” said Yaser Sheikh, an associate research professor of robotics at Carnegie Mellon University. “By determining the orientation of each camera, we can calculate the gaze concurrence, or 3D joint attention, of the group. Our automated editing method uses this as a signal indicating what action is most significant at any given time.”

In a basketball game, for instance, players spend much of their time with their eyes on the ball. So if each player is wearing a head-mounted social camera, editing based on the gaze concurrence of the players will tend to follow the ball as well, including long passes and shots to the basket.
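
A rough sketch of what computing gaze concurrence could look like in code: given each camera's position and viewing direction, find the 3D point that all of the view rays pass closest to in a least-squares sense. This is a generic stand-in for the idea described above, not Disney's implementation.

```python
import numpy as np

def joint_attention_point(origins, directions):
    """Least-squares 3D point closest to a set of viewing rays.

    origins:    (N, 3) camera positions
    directions: (N, 3) viewing directions (need not be unit length)
    Returns the point minimizing the summed squared distance to all rays,
    a simple stand-in for the group's "3D joint attention".
    """
    origins = np.asarray(origins, dtype=float)
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, di in zip(origins, d):
        P = np.eye(3) - np.outer(di, di)   # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Example: three head-mounted cameras all looking toward roughly (5, 5, 1).
cams = [(0.0, 0.0, 1.7), (10.0, 0.0, 1.7), (5.0, 12.0, 1.7)]
ball = np.array([5.0, 5.0, 1.0])
looks = [ball - np.array(c) for c in cams]
print(np.round(joint_attention_point(cams, looks), 2))   # ~ [5. 5. 1.]
```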

The algorithm chooses which camera view to use based on which has the best quality view of the action, but also on standard cinematographic guidelines. These include the 180-degree rule – shooting the subject from the same side, so as not to confuse the viewer by the abrupt reversals of action that occur when switching views between opposite sides.

Avoiding jump cuts between cameras with similar views of the action and avoiding very short-duration shots are among the other rules the algorithm obeys to produce an aesthetically pleasing video.
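
One way to picture how such rules can drive an automatic edit is as a running choice of camera that rewards view quality while penalizing jump cuts and overly short shots. The greedy toy below is purely illustrative (the researchers' method is more sophisticated, and spatial rules such as the 180-degree rule would additionally require the cameras' geometry, which is omitted here).

```python
def auto_edit(quality, similar, min_shot_len=24, switch_penalty=0.5):
    """Toy shot selection over time.

    quality[t][c] -- how good camera c's view of the action is at time step t
    similar[a][b] -- True if cameras a and b have near-identical views, so a
                     cut between them would read as a jump cut
    Returns one chosen camera index per time step.
    """
    current, held, edit = 0, 0, []
    for scores in quality:
        best = max(range(len(scores)), key=lambda c: scores[c])
        gain = scores[best] - scores[current] - switch_penalty
        ok_to_switch = (
            best != current
            and held >= min_shot_len          # avoid very short shots
            and not similar[current][best]    # avoid jump cuts
            and gain > 0
        )
        if ok_to_switch:
            current, held = best, 0
        edit.append(current)
        held += 1
    return edit

# Example: camera 2 mirrors camera 0, so the edit skips it even when it scores
# highest, then cuts to camera 1 when the action clearly moves there.
quality = [[0.3, 0.2, 0.9]] * 30 + [[0.2, 0.95, 0.2]] * 30
similar = [[False, False, True],
           [False, False, False],
           [True,  False, False]]
print(sorted(set(auto_edit(quality, similar))))   # [0, 1]
```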

The computation necessary to achieve these results can take several hours. By contrast, professional editors using the same raw camera feeds took an average of more than 20 hours to create a few minutes of video.

The algorithm also can be used to assist professional editors tasked with editing large amounts of footage.

Other methods available for automatically or semi-automatically combining footage from multiple cameras appear limited to choosing the most stable or best-lit views and periodically switching between them, the researchers observed. Such methods can fail to follow the action and, because they do not know the spatial relationship of the cameras, cannot take into consideration cinematographic guidelines such as the 180-degree rule and jump cuts.

Automatic Editing of Footage from Multiple Social Cameras
Arik Shamir (DR Boston), Ido Arev (Efi Arazi School of Computer Science), Hyun Soo Park (CMU), Yaser Sheikh (DR Pittsburgh/CMU), Jessica Hodgins (DR Pittsburgh)
ACM Conference on Computer Graphics & Interactive Techniques (SIGGRAPH) 2014 – August 10-14, 2014
Paper [PDF, 25MB]

Lucid Dreams of Gabriel – Teaser

From Variety.

Disney and Swiss pubcaster SRF unveil experimental short at Locarno fest.

At the Locarno Film Festival, the Disney lab and SRF jointly unveiled an impressive experimental short titled “Lucid Dreams of Gabriel” (see teaser), which for the first time displayed local frame-rate variation, local pixel timing, super-slow-motion effects and a variety of artistic shutter functions, showcasing the “Flow-of-Time” technique.

The project was created by the Disney Research lab in tandem with the formidable computer graphics lab at the Swiss Federal Institute of Technology Zurich (ETH) with SRF providing studio space, personnel, and other resources.

“We wanted to control the perception of motion that is influenced by the frame rate (how many images are shown per second) as well as by the exposure time,” said Markus Gross, vice president of research at Disney Research and director of Disney Research Zurich, at the presentation.

Use of the new technologies in the short, which is a surreal non-linear story about a mother achieving immortality in her son’s eyes after an accident in the spectacular Engadin Alpine valley, allowed director Sasha A. Schriber to avoid using green screen and to make the transition from reality (at 24 frames per second) to a supernatural world (at 48 frames per second).

“Lucid Dreams Of Gabriel,” an experimental short film created by Disney Research in collaboration with ETH, Zurich, was shot at 120fps/RAW with all effects invented and applied in-house at Disney Research Zurich. We sought to produce a visual effects framework that would support the film’s story in a novel way. Our technique, called “The Flow-Of-Time,” includes local frame rate variation, local pixel timing and a variety of artistic shutter functions.

Effects include:
•High dynamic range imaging
•Strobe and rainbow shutters
•Global and local framerate variations
•Flow motion effects
•Super slow motion
•Temporal video compositing

The following scenes of the teaser, indicated by the timecode, demonstrate different components of our new technology:

Shots with a dark corridor and a window (0:08); a man sitting on a bed (0:16):
Our new HDR tone-mapping technique makes use of the full 14 bit native dynamic range of the camera to produce an image featuring details in very dark as well as very bright areas at the same time. While previous approaches have been mostly limited to still photography or resulted in artifacts such as flickering, we present a robust solution for moving pictures.
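
The tone mapper itself isn't detailed here, but a minimal toy example shows the flicker problem it has to solve: choosing each frame's exposure independently makes the image pump when bright objects enter or leave, while smoothing the exposure estimate over time keeps it stable. The operator below is a generic Reinhard-style curve, not Disney's technique.

```python
import numpy as np

def tone_map(frame, key):
    """Naive global operator: scale by the frame 'key', then compress highlights."""
    scaled = frame * (0.18 / key)      # aim mid-grey at 18%
    return scaled / (1.0 + scaled)     # simple Reinhard-style compression

def tone_map_video(frames, smoothing=0.9):
    """Tone-map a sequence of linear HDR frames.

    Using each frame's own log-average luminance as the key flickers when the
    content changes abruptly; exponentially smoothing the key over time is the
    simplest fix for moving pictures.
    """
    smoothed_key, out = None, []
    for f in frames:
        key = float(np.exp(np.mean(np.log(f + 1e-6))))   # log-average luminance
        smoothed_key = key if smoothed_key is None else (
            smoothing * smoothed_key + (1.0 - smoothing) * key)
        out.append(tone_map(f, smoothed_key))
    return out

# Example: synthetic linear frames with a sudden bright flash in frame 5.
rng = np.random.default_rng(1)
frames = [rng.uniform(0.0, 1.0, (4, 4)) for _ in range(10)]
frames[5] = frames[5] * 50.0
print([round(float(f.mean()), 3) for f in tone_map_video(frames)])
```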

A hand holding a string of beads (0:14):
As we experimented with novel computational shutters, the classic Harris shutter was extended to make use of the full rainbow spectrum instead of the traditional limitation to just red, green, and blue. For this scene, the input was rate-converted using our custom technology, temporally split and colored, then merged back into the final result.
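
For reference, the classic Harris shutter builds one image from the red, green and blue channels of three different moments in time; the extension described above spreads more temporal slices around the color wheel. The sketch below shows one way such a generalization could work on frames stored as float RGB arrays; it is an illustration, not Disney's code.

```python
import colorsys
import numpy as np

def rainbow_shutter(frames, slices=6):
    """Split a burst of frames into `slices` temporal groups, tint each group's
    luminance with a different hue around the color wheel, and combine them.
    Static areas stay neutral; anything that moved picks up colored fringes.
    With slices=3 (red, green, blue tints) this approximates the classic
    Harris shutter."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    composite = np.zeros_like(stack[0])
    tint_sum = np.zeros(3)
    for i, group in enumerate(np.array_split(stack, slices)):
        tint = np.array(colorsys.hsv_to_rgb(i / slices, 1.0, 1.0))
        luminance = group.mean(axis=0).mean(axis=-1, keepdims=True)  # grey image of this slice
        composite += luminance * tint
        tint_sum += tint
    return np.clip(composite / tint_sum, 0.0, 1.0)

# Example: a bright square drifting across an otherwise static grey scene.
frames = []
for t in range(12):
    img = np.full((32, 32, 3), 0.3)
    img[10:16, 2 + 2 * t: 8 + 2 * t] = 1.0
    frames.append(img)
print(rainbow_shutter(frames).shape)   # (32, 32, 3)
```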

The double swings scene (0:20):
Extending our experiments with computational shutters, this scene shows a variety of new techniques composed into a single shot. Taking full advantage of the original footage shot at 120 fps, the boy has been resampled at a higher frame rate (30 fps) with a short shutter, resulting in an ultra-crisp, almost hyper-real appearance, while the woman was drastically resampled at a lower frame rate (6 fps) with an extreme shutter that is not physically possible, adding a strong motion blur to make her appear more surreal.

Car driving backwards and a flower (0:30); a train (0:36):
For these scenes, we were experimenting with extreme computational shutters. The theoretical motion blur for the scenes was extended with a buoyancy component and modified through a physical fluid simulation, resulting in physically impossible motion blur. As shown, it is possible to apply this effect selectively on specific parts of the frame, as well as varying the physical forces.

Super slow motion closeup of the boy (0:44); a handkerchief with motion blur and super slow motion (0:47); an hourglass (0:50):
These shots show the classical application of optical flow: slow motion. With our new technology, however, we have been able to achieve extremely smooth pictures with virtually no artifacts, equivalent to footage captured at 1,000 fps. At the same time, artificial motion blur equivalent to a shutter of far more than 360 degrees can be added to achieve a distinct “stroby” look, if desired, while maintaining very fluent motion in all cases. We are also able to speed up or slow down parts of the scene, e.g. to play the background in slow motion while the foreground runs at normal speed. All of these effects can be applied on a per-pixel basis, thus giving full freedom to the artist.
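
At its core, flow-based slow motion synthesizes in-between frames by pushing pixels partway along their motion vectors. The bare-bones sketch below assumes a dense flow field is already available and uses nearest-pixel backward warping; real systems also estimate the flow, handle occlusions and blend forward and backward warps.

```python
import numpy as np

def warp_fraction(frame, flow, t):
    """Warp `frame` a fraction t (0..1) along a dense flow field by sampling
    each output pixel from position x - t * flow(x) (nearest-pixel backward warp).
    frame: (H, W) or (H, W, C) array; flow: (H, W, 2) array of (dx, dy)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def interpolate(frame_a, frame_b, flow_ab, n_inbetween):
    """Generate `n_inbetween` intermediate frames between two captured frames --
    the basic move behind flow-based slow motion. Applying it with different
    rates in different image regions gives per-region (even per-pixel) retiming."""
    out = [frame_a]
    for k in range(1, n_inbetween + 1):
        out.append(warp_fraction(frame_a, flow_ab, k / (n_inbetween + 1)))
    out.append(frame_b)
    return out

# Example: a horizontal gradient shifting 8 px to the right between two frames,
# expanded from 2 captured frames to 9 by adding 7 in-betweens.
h, w = 24, 24
frame_a = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
frame_b = np.roll(frame_a, 8, axis=1)
flow = np.zeros((h, w, 2))
flow[..., 0] = 8.0
print(len(interpolate(frame_a, frame_b, flow, 7)), "frames")
```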

Additional info on the film:

“Lucid Dreams Of Gabriel” is a surrealistic and non-linear story about a mother achieving immortality through her son, unconditional love, and the fluidity of time.

Producer: Markus Gross
DOP: Marco Barberi
Script & Director: Sasha A. Schriber
Camera & lenses: Arri Alexa XT with Zeiss prime lenses
Original language: English
Length: 11 minutes

Game of Drones. John Bailey ASC on drone safety.

Above: a photo I took of a drone at Cine Gear Expo 2012.

Recently, I ran a post that included spectacular footage of a fireworks show shot from a drone. The increasing use of drones carrying GoPros has become both a safety and a personal-privacy headache.

The Seattle Police Department recently spoke with an Amazon employee who flew a drone too close to the Space Needle.

DJI has a new system called Dropsafe, which lets drones deploy parachutes.

From IEEE Spectrum.

According to The Hollywood Reporter, there is now a new organization called the Society of Aerial Cinematographers, prompted by the increasing use of drones in movie and television production.

John Bailey, ASC, in his blog post called Drones, Drones, Drones, explains the many safety issues and FAA regulations about using camera drones.

…Thornier yet are certain looming questions of privacy rights, sexual “Peeping Tom” infringements, aural and visual harassments by drones buzzing around in public spaces, and the very real danger posed by malfunctioning, mis-piloted or mis-programmed drones — including higher-altitude (but still amateur) drones that already have nearly caused midair collisions with commercial jets

….I found videos of out-of-the-box amateur drones taking to the air (even as their new owners were still reading instructions) and crashing into high rises in midtown Manhattan, then falling onto the street below, nearly injuring a pedestrian.

…Just like the Steadicam before it, these small 4-rotor and 8-rotor drone helicopters mounted with HD cameras, from GoPros to  Canon 5Ds, are quickly changing the scale of imagery that can be photographed for feature films. Many productions that have been unable to afford traditional piloted helicopters with sophisticated camera-stabilizing systems can now engage a two-person ground-based crew of pilot and operator to shoot sweeping images that “open up” a film. But that is only a small part of drones’ potential as a new camera system.

Last winter, watching director Nabil Ayouch’s Horses of God, the Moroccan entry for the Academy’s foreign-film Oscar, I saw a shot that took my breath away. A group of boys are playing on a dirt soccer pitch in the Casablanca slum of Sidi Moumen. Everything is photographed at ground level, with long-lens panning shots intercut with wider-angle close coverage on the Steadicam to build up the action sequence. A very low-angle shot then follows several boys chasing the ball — and suddenly sweeps past them, rising above their heads to reveal the intricate warren of passageways in the slum beyond. The camera continues up higher for an overview of the slum and of downtown Casablanca. It is a stunning moment because it comes at the end of an eye-level sequence. It also sets up the disjunction between these still innocent, poor children playing soccer in a trash-ridden, dusty lot, with the indifferent modern city nearby. The film climaxes with a sequence set years later, in May 2003, when these same boys, now trained suicide bombers, simultaneously blow up several buildings in downtown Casablanca, killing themselves and 33 people. This single camera move, made with a small HD camera on a drone, set up the visual and narrative flow for the rest of the film.

Here is the trailer. There are several brief cuts of the boys on the pitch early on, and a very brief overhead drone shot tracking through the slum at 0:44:

Fireworks filmed with a drone.

UPDATE: It is illegal to do this.

UFOTOG, a new film by Douglas Trumbull

I have always been a great admirer of Douglas Trumbull. Here is the press release about his newest project.

SEATTLE, April 30, 2014 — Academy Award winner Douglas Trumbull announced today his forward-looking ten-minute demonstration movie UFOTOG. Produced at TRUMBULL STUDIOS in western Massachusetts, the experimental sci-fi adventure, written and directed by Trumbull in 4K 3D at 120 frames per second, demonstrates his new process called MAGI, which explores a new cinematic language that invites the audience to experience a powerful sense of immersion and impact that is not possible using conventional 24 fps or 3D standards.

UFOTOG is a dramatic short story about a lone man attempting to photograph UFOs. Trumbull felt that it would be ideal to premiere UFOTOG at Paul Allen’s iconic Seattle Cinerama Theater as the headlining event of the annual Science Fiction Film Festival on Sunday, May 11, 2014, in conjunction with special screenings of 2001: A SPACE ODYSSEY and CLOSE ENCOUNTERS OF THE THIRD KIND, both of which have alien-contact stories. In addition, on Friday, May 9, Trumbull will introduce his movie BRAINSTORM, which marked the beginning of his quest for immersive cinema.

Trumbull has embarked on a project to write, produce, and direct using the MAGI technology for the production of his own feature films at Trumbull Studios. Through his comprehensive understanding of the needed technological advances, Trumbull has constructed a laboratory/stage/studio where he can shoot 120 fps 4K 3D live action within virtual environments, and see the results on the large screen adjacent to the shooting space. Trumbull announced: “This way, we can explore and discover a new landscape of audience excitement, and do it inexpensively and quickly – we are pushing the envelope to condense movie production time, intending to make films at a fraction of current blockbuster costs, yet with a much more powerful result on the screen.”

Trumbull Studios partnered with Christie to explore the potential of 3D 4K 120 fps projection, using the latest Christie Mirage 4K35 projection system. A special Mirage system will be installed in the Seattle Cinerama Theater for the premiere of UFOTOG, and in the fall of this year the theater will become the first public installation of Christie’s new laser illumination system.

Trumbull Studios is committed to Eyeon Software, which has enabled the production of UFOTOG with the unheard-of impact of 3D in 4K at 120 fps, using Eyeon Fusion, Generation, Connection, and Dimension.

Trumbull’s pioneering work also included his Academy Award-winning 70mm, 60-frames-per-second SHOWSCAN process, which was widely acclaimed by industry professionals and which led to the development of the film BRAINSTORM, which was to debut the process worldwide, with Trumbull directing. Yet in the days of celluloid film and the attendant high 70mm print costs and projector upgrades, the process did not get traction. Now, with digital projectors regularly operating at 144 frames per second for 3D, implementing much higher frame rates and increasing resolution is proving to be a cost-effective way to improve movie impact and profitability.

One objective of Trumbull’s initiative is to demonstrate to the film industry that the successful future of the movie-going experience needs to be a “special event” on larger screens, at high brightness, and with ultra-high frame rate 2K and 4K presentations that cannot be emulated on television, laptops, tablets, or smartphones. “Today, the multiplex is in your pocket…” says Trumbull, “…so younger audiences are enjoying the benefits of low cost and convenience via downloading and streaming, causing tidal shifts in the entertainment industry, and particularly in theatrical exhibition. Theaters must offer an experience that is so powerful and overwhelming that people will see the reward of going out to a movie.”

Trumbull is legendary for his ground-breaking visual effects work on films such as 2001: A SPACE ODYSSEY, CLOSE ENCOUNTERS OF THE THIRD KIND, and BLADE RUNNER, as well as his directorial achievements on SILENT RUNNING and BRAINSTORM and special venue projects such as BACK TO THE FUTURE – THE RIDE, and a trilogy of giant screen high frame rate attractions at the Luxor Pyramid in Las Vegas. Trumbull has more recently been pursuing what he believes can usher in a powerful transformation of cinema itself. At a time when the major studios have embarked on a business model to produce only tent pole-franchise-superhero-comic book action films, theater attendance is in decline. Trumbull believes that a jolt of high technology energy is needed to improve the impact of these expensive productions via photographic and exhibition technology that fully delivers the production value that is presently being throttled down by 24 fps, 2K resolution identical to television, and low brightness 3D on small screens.

Trumbull Studios includes equipment provided by Christie, Dolby Laboratories, RealD, Eyeon, Stewart Filmscreen, Composite Components, Abel Cine, Vision Research, nVidia, 3Ality Technica, Codex, Motion Analysis, Virident, Limelight Productions, and many more. Facilities include a shooting stage, production offices, multiple workshops, screening rooms, editorial, compositing, and sound mixing.

UFOTOG was written and directed by Douglas Trumbull, produced by Julia Hobart Trumbull and Steve Roberts, executive producers Donald Rosenfeld and Andreas Roald, starring Ryan Winkles, director of photography Richard Sands, original music by Claes Nystrom, produced at Trumbull Studios, with special production services provided by Eyeon.

RELATED LINKS
http://www.douglastrumbull.com

Video Assist, Jerry Lewis 1966 behind the scenes featurette “Man in Motion”

A very good video about "Jerry's Noisy Toy." Here is a more advanced version, seen in the behind-the-scenes featurette "Man in Motion."


SOC 2014 – The Motion Picture Camera: Past, Present and Future

History of The Motion Picture Camera.
