Categories
Cinematography Film Editing Film Sound People Technology VFX

Here We Go Again: The Digital Cinema Revolution Begins

Happy Birthday to George Lucas! As we know, George was a big proponent of the use of digital technology in cinema. When I worked at Sony in the 1990s, we were on the cutting edge of using digital cameras for cinematography. Here is a video from Sony that highlights the development of the Sony cameras used in Star Wars.

Here is another video from ILM about all the areas that George changed with digital technology for editing and VFX. Thank you Mr. Lucas!

Categories
Film Editing Film Sound Filmmaking Technology

Happy 75th Birthday, George Lucas

Categories
Architecture Technology

Light Matters: Translating Tradition into Dynamic Facades

The Al Bahr Towers by Aedas Photo from ArchDaily

The Dynamic Facades are a great use of adaptive technology to solve an architectural problem.


The solar-responsive dynamic screen decreases the towers’ solar gain. According to Aedas, the lightly tinted glass reduces the incoming daylight at all times and not only for temperature-critical situations. The system even includes about 2,000 umbrella-like modules per tower driven by photovoltaic panels.

‘Mashrabiya’ facade at Al Bahr Towers, Abu Dhabi, UAE. Architecture: Aedas UK. Image © Christian Richters

Completed in June 2012, the 145-meter towers’ mashrabiya shading system was developed by the computational design team at Aedas. Using a parametric description for the geometry of the actuated facade panels, the team was able to simulate their operation in response to sun exposure and changing incidence angles during the different days of the year.
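As a rough illustration of the kind of rule such a parametric model encodes (this is a minimal sketch, not Aedas’s actual code; the geometry, the 75-degree threshold and the function names are my own assumptions), a panel’s shading deployment can be driven by the incidence angle between the sun direction and the facade normal:

```python
import math

def sun_vector(azimuth_deg, elevation_deg):
    """Unit vector pointing toward the sun (x = east, y = north, z = up)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az), math.cos(el) * math.cos(az), math.sin(el))

def shade_deployment(facade_normal, azimuth_deg, elevation_deg, threshold_deg=75):
    """Return 0.0 (umbrella folded) to 1.0 (fully deployed).

    Hypothetical rule: deploy the shade as sunlight strikes the panel more
    directly; fold it at night or when the sun is behind or grazing this face.
    """
    if elevation_deg <= 0:
        return 0.0                          # night: the screens fold, per Oborn's description
    s = sun_vector(azimuth_deg, elevation_deg)
    cos_inc = sum(n * c for n, c in zip(facade_normal, s))
    if cos_inc <= 0:
        return 0.0                          # sun is behind this face
    incidence = math.degrees(math.acos(min(1.0, cos_inc)))
    if incidence >= threshold_deg:
        return 0.0                          # only grazing light reaches the panel
    return 1.0 - incidence / threshold_deg  # more direct sun -> more shading

# East-facing panel at mid-morning: roughly half deployed
print(shade_deployment((1.0, 0.0, 0.0), azimuth_deg=100, elevation_deg=35))
```

A production model would of course work on the real triangulated panel geometry and likely add operational constraints, but the point is that sun position plus panel orientation is enough to script every panel for any hour of the year.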

© Aedas

The screen operates as a curtain wall, sitting two meters outside the buildings’ exterior on an independent frame.  Each triangle is coated with fiberglass and programmed to respond to the movement of the sun as a way to reduce solar gain and glare.  In the evening, all the screens will close.

“At night they will all fold, so they will all close, so you’ll see more of the facade.  As the sun rises in the morning in the east, the mashrabiya along the east of the building will all begin to close and as the sun moves round the building, then that whole vertical strip of mashrabiya will move with the sun,” said Peter Oborn, the deputy chairman of Aedas.

Responsive Facade © Aedas

It is estimated that such a screen will reduce solar gain by more than 50 percent and reduce the building’s need for energy-draining air conditioning. Plus, the shade’s ability to filter the light has allowed the architects to be more selective in glass finishes. “It (the screen) allows us to use more naturally tinted glass, which lets more light in so you have better views and less need of artificial light.”

Responsive Facade © Aedas

“The façade on Al Bahar, computer-controlled to respond to optimal solar and light conditions, has never been achieved on this scale before. In addition, the expression of this outer skin seems to firmly root the building in its cultural context,” explained Awards Juror Chris Wilkinson of Wilkinson Eyre Architects.

Responsive Facade © Aedas

Such an award acknowledges the importance of the necessary integration of architectural form, structure, systems, and sustainable design strategies.

Cite: Karen Cilento. “Al Bahar Towers Responsive Facade / Aedas.” 05 Sep 2012. ArchDaily.

Categories
Cinematography Filmmaking Technology

“On the Set with Video Assist” and Jimmie Songer


Photo from the Jerry Lewis Comedy Museum.

While I was at CineGear Expo, I met Michael Frediani at the SOC booth, talked with him about his research into Jerry Lewis, and told him I would post his article on video assist. I also included an article from the 695 Quarterly about Jim Songer and his development of through-the-lens video assist. There is a lot of debate on the topic of who “invented” video assist. Like most technical innovations, there is no one single inventor, but many improvements from each contributor. Here is the earlier post about Jerry and video assist.

Audio interview: Jerry Lewis + Peter Bogdanovich

Video assist section starts at 1:09:09.

Jerry Lewis was an influence on Francis Ford Coppola.

ONE FROM THE HEART, Francis Ford Coppola, 1982

Francis Ford Coppola later developed his own “electronic cinema” previsualization called Image and Sound Control. 

As well as being an entertainer, “Jerry Lewis was a major innovator in motion pictures,” stated director Francis Ford Coppola. “His invention of putting a video camera next to the motion picture camera so he could play it back and direct himself, has been used for decades by every director in the movie industry. I watched him on the set of The Ladies Man in 1961 and was amazed by his groundbreaking innovation, the Video Assist.”

The wonderful book DROIDMAKER by Michael Rubin has more info.

 

Two articles from Peter Glaskowsky at CNET.

Video assist predates Jerry Lewis ‘patent’

Jerry Lewis and the elusive Video Assist patent

 

This illustration, from an article written by Jim Songer for American Cinematographer magazine, shows a Panavision camera with the video assist subsystem integrated into the loading door. Jim Songer and Video West

Jimmie Songer and the Development of Video Assist

by David Waelder

From IATSE Local 695 Quarterly

Video Village is a standard feature on the modern movie set. Producers, writers, clients and others can view the action clustered around a monitor far enough away from the set to stay out of trouble. Their segregation in the video ghetto allows camera people and others to go about their tasks without the distraction of people jockeying for position at the viewfinder. It also helps makeup and wardrobe personnel to see how their work appears on camera and it has become an essential tool for the director and continuity person. Even the sound crew benefits by having extension monitors to see the frame and position the boom microphone. All this is made possible by a video assist system perfected by Jimmie Songer, a Local 695 technician.

The advantages of using a video camera as an aid to directing movies were apparent from the very beginning. Several directors began to set up TV cameras adjacent to the film camera so they could see an approximate frame. This became a common practice particularly on commercials where the placement of the product is crucially important. To match the view and perspective, assistants would carefully adjust the aim and image size to closely approximate the view of the film camera.

Of course, that isn’t really a video assist system. The image is useful for the simplest shots but not much help when the camera moves or the lens is adjusted. Every setup change or lens adjustment necessitates a recalibration of video camera position and exposure settings. To be a fully functional system, both the video and film cameras would have to view the scene through the same lens to avoid parallax errors and exposure sensitivities would have to track together. This presents a series of technical challenges.

It was a cowboy from East Texas with little formal education who took on the challenge and worked out all the engineering obstacles. Jimmie Songer grew up on a ranch in Burleson, south of Fort Worth, with a keen interest in how radio and television worked. He and his friend, Don Zuccaro, would purchase crystal radio kits, assemble them and string the antenna wire along his mother’s clothesline.

As a teenager, he took a road trip that would set the course of his life. He and his friends traveled north as far as Bloomington, Indiana, when funds began to run out. Looking for a job to replenish assets, he applied to the RCA plant on Rogers Street. Ordinarily, his lack of formal training would have been an impediment but RCA was just then experimenting with designs for color sets and there was no established technology to learn. By diagramming from memory the circuit design of a popular RCA model, he demonstrated familiarity with the major components and was hired on the spot to be a runner for the engineers developing the new color system.

His duties at RCA consisted largely of gathering components requested by the engineers and distributing them. Along the way, he asked questions about the function of each element and how it fit into the overall design. He stayed about a year, not long enough to see the model CTC4 they were developing go on sale. That didn’t happen until a couple of years later in 1955. But, when he did move back to Texas, he had a pretty good understanding of how video, and color video in particular, worked.

Graduating from crystal radio sets, he and his friend, Don Zuccaro, made a mail-order purchase of plans for a black & white television. Components were not readily available at that time but Jimmie and Don were ingenious and purchased a war surplus radar set with A&B scopes and cannibalized it for parts. The task of hand-winding the tuning coil was simplified because Fort Worth had only one TV station so there was no need to tune anything other than Channel 5.

With skills honed from building his own set and working at the RCA plant in Indiana, Jimmie Songer quickly found work with appliance shops in the Fort Worth area that were beginning to sell television sets but had no one to set them up, connect antennas and service them when needed. This led to an offer, in 1953, to work setting up KMID, Channel 2, in the Midland Odessa area. After a few years with KMID, he worked awhile in the Odessa area and then returned to Fort Worth but he stayed only a year before setting out for Los Angeles in April 1963.

In Los Angeles, he worked at first for a TV repair shop in Burbank while he tinkered with his own experimental projects. Hearing that Dr. Richard Goldberg, the chief scientist at Technicolor, was looking for people with experience with color, he sought him out and secured a job calibrating the color printers. Dr. Goldberg was also developing a two-perforation pull-down camera for widescreen use. Songer became fascinated by the possibility of using that design at 48 fps to make alternate images, one atop the other, which might be used for 3D and built some experimental rigs to test the idea.

This work with Dr. Goldberg in the early ’60s brought him to the attention of Gordon Sawyer at Samuel Goldwyn Studios. Sawyer wanted him to help with an ongoing project for Stan Freberg involving simultaneous video and film recording. Freberg was using side-by-side cameras to create video records of film commercials. The side-by-side positioning produced parallax errors but his commercials were mostly static. Generally, the results were good enough for timing and performance checks. But issues of accurately tracking motion would arise whenever the camera did move and Stan Freberg wanted a better system.

Under general supervision from Gordon Sawyer, the team first addressed the issue by adjusting the position of the video camera. They attached a small Panasonic camera to the mount for an Obie light. This put the video lens exactly in line with the film camera lens and only a couple of inches above it. Left-right parallax was effectively eliminated and the vertical alignment could be adjusted to match the film camera with only minimal keystone effect. By affixing a mirror just above the lens mount at a 45-degree angle and mounting the video camera vertically to shoot into the mirror, they reduced vertical parallax to almost nothing. Jimmie Songer addressed the keystone problem by devising a circuit that slightly adjusted the horizontal scan, applying an opposite keystone effect to neutralize the optical effect that was a consequence of slightly tilting the video camera to match the film camera image. Most of the time, this system worked well but there were still limitations. The video system needed to be recalibrated with every lens change. Even with careful adjustment, use of a separate lens for the video meant that depth of field would be different so the video image would only approximate the film image. Blake Edwards knew Gordon Sawyer and approached the team to design a system suitable for movies with moving cameras and frequent lens changes.

The limitations could only be resolved if the video camera used the very same lens used by the film camera. Accomplishing that would require exact positioning of the video lens and adjusting sensitivity of the system both to obtain sufficient light for exposure and to track with the film exposure. Jimmie Songer set about developing a system that could be built into a Panavision Silent Reflex camera (PSR) that used a pellicle mirror to reflect the image to the viewfinder. They left the image path from the lens to the film completely untouched but introduced a second pellicle mirror to reflect the image from the ground glass to a video camera they built into the camera door. This one design change eliminated many of the limitations of previous systems in one stroke. Since the video used the film camera lens and picked up the exact image seen by the film and the camera operator, issues of parallax and matching depth of field were completely eliminated. There was no need to recalibrate the system with every lens change and the video camera was configured to use the same battery supply as the camera. The introduction of a second pellicle mirror did flip the image but Songer corrected this easily by reversing the wires on the deflection coil. But the issue of having sufficient light for the video image still remained.

In one way, a pellicle reflex system is ideal for video use. Unlike a mirror shutter, the pellicle system delivers an uninterrupted image to the viewfinder so there is no need to coordinate the 30-frame video system with a 24-frame film camera. While there would be more frames in a single second of video, the running times would match and that was all that was important. Furthermore, the video image would be free of the flicker seen in the viewfinder of a mirror shutter camera. However, the pellicle mirror used in the reflex path deflected only about one-third of the light to the viewfinder. That was no problem when filming outside in daylight but there was insufficient light when working interiors.

Jimmie Songer needed to make three refinements to the system to address the exposure issue. First, he replaced the vidicon tube that was normally fitted to the camera with a newly available saticon tube that was more sensitive and also provided 1,600 lines of resolution. That helped but wasn’t enough. He then adjusted the optics so that the image, rather than being spread over the full sensitive area of the tube, was delivered only to the center portion. By concentrating the image, he obtained more exposure, and adjusting the horizontal and vertical gain allowed him to spread out the smaller image to fill the monitor. But, there are limits to how much can be gained by this approach. Even with a high-resolution saticon tube, the image will begin to degrade if magnified too far. There was still not enough light for an exposure but the video system had been pushed to its limits so Songer turned his attention to the film camera.

Recognizing that the ground glass itself absorbed a considerable amount of light, Songer contacted Panavision and asked them to fabricate a replacement imaging glass using fiber optic material. Although the potential of using optical fibers for light transmission had been recognized since the 19th century, the availability of sheets of tightly bundled fiber suitable for optics was a recent development in the 1960s. The fiber optic ground “glass” was the trick that made the system work, allowing the video camera to function with the light diverted to the viewfinder.

Jimmie Songer and his assistant used the system, first called “instant replay” but now renamed “video assist” to avoid confusion with sports replay systems, on The Party in 1968 and then Darling Lili in 1970. It worked flawlessly, delivering the exact image of the main camera so Blake Edwards, the Director, could follow the action as it happened. It never held up production; to the contrary, Edwards said that it streamlined production because the certain knowledge of how the take looked freed him from making protection takes.

After Darling Lili, the key figures behind the project formed a company, Video West, to further develop the system. They met with representatives of the ASC to draw up a series of specifications for video assist systems. Don Howard was brought in to interface the camera with the playback system and operate it in the field. Harry Flagle, the inventor of Quad-Split viewing technology and one of the Ampex engineers who worked on the development of the Model VR-660 portable two-inch recorder, joined the team soon after.

They next used the system on Soldier Blue, directed by Ralph Nelson, and then Wild Rovers, again with Blake Edwards. It proved so popular with producers that Songer and Don Howard, his assistant who was primarily responsible for operating and cuing the video recorder, scheduled projects months in advance and went from film to film. The work was so tightly booked that they sometimes had to ship the camera directly from one project to the next without a return to the shop.

Jimmie Songer joined Local 695, sponsored by Gordon Sawyer, shortly after Darling Lili and continued as a member until his membership was transferred to Local 776 in 1997. In the course of his career, he obtained seventeen US patents for a variety of innovations in high-definition TV and 3D video imaging.

In 2002, he received a Technical Achievement Award from the Academy for his work developing video assist. He lives today on a ranch near Fort Worth but continues to refine the video engineering work that has been his life.

Video Assist

A quote, attributed to Tacitus, claims that success has many fathers while defeat is an orphan. It’s just so with the invention of video assist which is claimed by several people. Jerry Lewis is often cited as the inventor and he certainly incorporated simultaneous video recording in his filming practices very early. He began development work in 1956 and first used a video record and playback system during the filming of The Bellboy in 1960. He used the system to view and evaluate his own performance immediately after each take. But the system he used on The Bellboy was the simplest version; a video camera was lashed just above the main lens and would be adjusted to approximately match the view of the film camera lens with each setup. Later, Jerry Lewis also worked to develop a system that would use a pellicle mirror to view the image through the primary lens.

The assertion that Jerry Lewis “invented” video assist is overstated. The original patent for a video assist system dates to 1947 and subsequent patents in 1954 and 1955 added the refinements of merging optical systems to eliminate parallax and adding a second beamsplitter to permit simultaneous use of video and film viewfinders. The integrated video systems that came into general use in films were the work of many individuals each building on the accomplishments of predecessors. Jimmie Songer’s contributions were many and essential as recognized in 2002 by the Academy of Motion Picture Arts and Sciences.


Glossary for highlighted words

Deflection coil – In a CRT (cathode ray tube), the beam of electrons is aimed by magnetic fields generated by coils of wire surrounding the tube. Adjusting the electrical energy sent to different coils directs the electron stream.

Obie light – A diffuse light mounted very near the camera lens, typically just above the matte box, to provide soft fill on faces in close-ups. Lucien Ballard, ASC developed the light to photograph Merle Oberon after her face was scarred in an auto accident.

Pellicle mirror – A semi-transparent mirror used in optical devices. A pellicle reflects a certain percentage of light and allows the remainder to pass through. In the Panavision PSR camera, a pellicle mirror deflected approximately 30% of light to the viewfinder and passed about 70% to the film plane.

Saticon tube – A saticon tube is a refinement of the vidicon tube design that adds particular chemicals to the photosensitive surface to stabilize the signal.

Vidicon tube – A vidicon is one of the early image capture devices made for television cameras. An image focused on a photoconductive surface produces a charge-density pattern that may be scanned and read by an electron beam.

 

Categories
Animation Technology VFX

Face2Face: Real-time Face Capture

I have already blogged about software that allows actors’ facial expressions to be edited in post. Now take a look at Face2Face: Real-time Face Capture. It can map new facial expressions in real time over video. While very interesting from a technological viewpoint, the idea of “photoshopping” video will certainly affect journalistic ethics and the trustworthiness of video evidence.

From Michael Zhang at PetaPixel.

Face swap camera apps are all the rage these days, and Facebook even acquired one this month to get into the game. But the technology is getting more and more creepy: you can now hijack someone else’s face in real-time video.

A team of researchers at the University of Erlangen-Nuremberg, Max Planck Institute for Informatics, and Stanford University are working on a project called Face2Face, which is described as “real-time face capture and reenactment of RGB videos.”


Basically, they’re working on technology that lets you take over the face of anyone in a video clip. By sitting in front of an ordinary webcam, you can, in real-time, manipulate the face of someone in a target video. The result is convincing and photo-realistic.


The face swap is done by tracking the facial expressions of both the subject and the target, doing a super fast “deformation transfer” between the two, warping the mouth to produce an accurate fit, and rerendering the synthesized face and blending it with real-world illumination.
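A heavily simplified way to picture the expression-transfer step (this is an illustrative sketch, not the authors’ dense photometric pipeline; the blendshape model, dimensions and function names are assumptions of mine) is to treat each tracked face as a neutral identity mesh plus a weighted sum of shared expression blendshapes, and to copy the source actor’s expression weights onto the target identity:

```python
import numpy as np

# Toy face model: a face is a neutral identity mesh plus a weighted sum of
# shared expression blendshapes (e.g. jaw-open, smile, brow-raise).
N_VERTS, N_EXPR = 500, 3
rng = np.random.default_rng(0)
blendshapes = rng.normal(scale=0.01, size=(N_EXPR, N_VERTS, 3))   # shared basis

source_neutral = rng.normal(size=(N_VERTS, 3))
target_neutral = rng.normal(size=(N_VERTS, 3))

def track_expression(frame_mesh, neutral):
    """Recover expression weights from a tracked mesh by least squares
    (a stand-in for the paper's dense photometric tracking)."""
    A = blendshapes.reshape(N_EXPR, -1).T          # (3V, E)
    b = (frame_mesh - neutral).reshape(-1)         # (3V,)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights

def reenact(source_frame_mesh):
    """Transfer the source actor's current expression onto the target identity."""
    w = track_expression(source_frame_mesh, source_neutral)
    return target_neutral + np.tensordot(w, blendshapes, axes=1)

# Simulate one source frame with a known expression and transfer it
true_w = np.array([0.8, 0.1, 0.3])
source_frame = source_neutral + np.tensordot(true_w, blendshapes, axes=1)
target_frame = reenact(source_frame)
print(np.allclose(track_expression(target_frame, target_neutral), true_w))  # True
```

The real system then retrieves the best-matching mouth interior from the target footage, warps it to fit, and re-renders the composited face under the scene’s estimated illumination; none of that appears in this toy.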


To test the system, the researchers invited subjects to puppeteer the faces of famous people (e.g. George W. Bush, Vladimir Putin, and Arnold Schwarzenegger) in video clips found on YouTube. You can see the results (and an explanation of the technology) in this 6.5-minute video:

Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, June 2016.

Abstract

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.

See Matthias Nießner for more info.

 

It can also be done with two live cameras.

ACM Transactions on Graphics 2015 (TOG)

Abstract

We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.

Categories
Animation Disney Filmmaking Technology

New Software Can Actually Edit Actors’ Facial Expressions

FaceDirector software can seamlessly blend several takes to create nuanced blends of emotions, potentially cutting down on the number of takes necessary in filming.

New software from Disney Research, developed in conjunction with the University of Surrey, may help cut down on the number of takes necessary, thereby saving time and money. FaceDirector blends images from several takes, making it possible to edit precise emotions onto actors’ faces.

Shooting a scene in a movie can necessitate dozens of takes, sometimes more. In Gone Girl, director David Fincher was said to average 50 takes per scene. For The Social Network, actors Rooney Mara and Jesse Eisenberg acted the opening scene 99 times (directed by Fincher again; apparently he’s notorious for this). Stanley Kubrick’s The Shining involved 127 takes of the infamous scene where Wendy backs up the stairs swinging a baseball bat at Jack, widely considered the most takes per scene of any film in history.

“Producing a film can be very expensive, so the goal of this project was to try to make the process more efficient,” says Derek Bradley, a computer scientist at Disney Research in Zurich who helped develop the software.

Disney Research is an international group of research labs focused on the kinds of innovation that might be useful to Disney, with locations in Los Angeles, Pittsburgh, Boston and Zurich. Recent projects include a wall-climbing robot, an “augmented reality coloring book” where kids can color an image that becomes a moving 3D character on an app, and a vest for children that provides sensations like vibrations or the feeling of raindrops to correspond with storybook scenes. The team behind FaceDirector worked on the project for about a year, before presenting their research at the International Conference on Computer Vision in Santiago, Chile this past December.

Figuring out how to synchronize different takes was the project’s main goal and its biggest challenge. Actors might have their heads cocked at different angles from take to take, speak in different tones or pause at different times. To solve this, the team created a program that analyzes facial expressions and audio cues. Facial expressions are tracked by mapping facial landmarks, like the corners of the eyes and mouth. The program then determines which frames can be fit into each other, like puzzle pieces. Each puzzle piece has multiple mates, so a director or editor can then decide the best combination to create the desired facial expression.
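A toy version of that matching step (the published system uses a more robust nonlinear audio-visual synchronization; the landmark features, audio-energy term and distance used here are placeholder assumptions) describes every frame of each take by a small feature vector and aligns the two takes with dynamic time warping:

```python
import numpy as np

def frame_features(landmarks, audio_energy):
    """Per-frame descriptor: mouth opening and eye opening from tracked
    landmark points (a stand-in for a real face tracker), plus audio energy."""
    mouth = np.linalg.norm(landmarks["mouth_top"] - landmarks["mouth_bottom"])
    eyes = np.linalg.norm(landmarks["eye_top"] - landmarks["eye_bottom"])
    return np.array([mouth, eyes, audio_energy])

def align_takes(feats_a, feats_b):
    """Dynamic time warping over per-frame features; returns (i, j) pairs
    mapping frames of take A to their best-fitting frames of take B."""
    n, m = len(feats_a), len(feats_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(feats_a[i - 1] - feats_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    pairs, i, j = [], n, m                     # backtrack the cheapest path
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Two fake takes: take B is delivered roughly twice as fast (toy data)
rng = np.random.default_rng(1)
take_a = rng.random((40, 3))
take_b = take_a[::2] + 0.01
print(align_takes(take_a, take_b)[:5])
```

Each matched pair is one of the “puzzle pieces”; an editor can then choose, per pair, how heavily to weight one take against the other to dial in the emotion.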

To create material with which to experiment, the team brought in a group of students from Zurich University of the Arts. The students acted several takes of a made-up dialogue, each time doing different facial expressions—happy, angry, excited and so on. The team was then able to use the software to create any number of combinations of facial expressions that conveyed more nuanced emotions—sad and a bit angry, excited but fearful, and so on. They were able to blend several takes—say, a frightened and a neutral—to create rising and falling emotions.

The FaceDirector team isn’t sure how or when the software might become commercially available. The product still works best when used with scenes filmed while sitting in front of a static background. Moving actors and moving outdoor scenery (think swaying trees, passing cars) present more of a challenge for synchronization.

By Emily Matchar
smithsonian.com

From Disney Research

We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states. As an example, given sad and angry video takes of a scene, our method empowers a movie director to specify arbitrary weighted combinations and smooth transitions between the two takes in post-production. Our contributions include (1) a robust nonlinear audio-visual synchronization technique that exploits complementary properties of audio and visual cues to automatically determine robust, dense spatio-temporal correspondences between takes, and (2) a seamless facial blending approach that provides the director full control to interpolate timing, facial expression, and local appearance, in order to generate novel performances after filming. In contrast to most previous works, our approach operates entirely in image space, avoiding the need of 3D facial reconstruction. We demonstrate that our method can synthesize visually believable performances with applications in emotion transition, performance correction, and timing control.

 

Download File “FaceDirector- Continuous Control of Facial Performance in Video-Paper”
[PDF, 13.22 MB]

 

Copyright Notice

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

 

Categories
Animation Art Disney Filmmaking Technology

Dali-Disney exhibition uses Virtual Reality

Visitors to a new exhibition at The Dali Museum in St. Petersburg won’t just be looking at art. Thanks to virtual reality, they’ll be exploring a Dali painting in a dreamy, three-dimensional world that turns art appreciation into an unforgettable, immersive experience.

The new exhibition, Disney and Dali: Architects of the Imagination, tells the story of the relationship between Salvador Dali, the surrealist artist, and Walt Disney, the great American animator and theme-park pioneer.

But the museum exhibition’s highlight comes after visitors have seen the Disney-Dali show’s paintings, story sketches, correspondence, photos and other artifacts. As visitors leave the exhibition area, they’ll be invited to don a headset to try the virtual reality experience.

Called “Dreams of Dali,” the VR experience takes viewers inside Dali’s 1935 painting Archeological Reminiscence of Millet’s ‘Angelus.’ The painting depicts two towering stone figures along with tiny human figures in a bare landscape with a moody sky. Users can move around inside the painting, using Oculus Rift headsets to navigate a trippy three-dimensional environment that includes motifs from other Dali works like elephants, birds, ants and his Lobster Telephone sculpture.

Accompanied by a haunting piano soundtrack punctuated by bird cries, the VR visuals also include a crescent moon, a stone tunnel and even an image of rocker Alice Cooper, whom Dali featured in a hologram he created in 1973.

“You actually have a three-dimensional feeling that you’re inside a painting,” said Jeff Goodby, whose firm Goodby Silverstein & Partners created the VR experience. “It’s not just like you’re inside a sphere with things being projected. It’s actually like there are objects closer and further away and you’re walking amidst them. It’s a vulnerable feeling you give yourself up to.”

Disney and Dali met in the 1940s in Hollywood, according to museum director Hank Hine. “Their sensibilities were very connected,” Hine said. “They wanted to take art off the palette, out of the canvas and into the world.” The exhibition looks at the castle motif that became a symbol of Disney parks, along with Dali’s Dream of Venus pavilion from the 1939 World’s Fair, which some consider a precursor of contemporary installation art.


This 1955 design for Disneyland castle is explored in a new exhibition at The Dali Museum about artist Salvador Dali’s friendship with Walt Disney. Walt Disney Imagineering Dali Museum.

Disney and Dali also collaborated on a short animated movie, Destino, (below) that was eventually completed by Disney Studios. The six-minute movie, which can be found on YouTube, features a dancing girl with long dark hair, a sundial motif and a song with the line, “You came along out of a dream. … You are my destino.” Clips will be played within the gallery for the Disney-Dali exhibition and the full short will be shown at the museum’s theater.

Archeological Reminiscence of Millet’s “Angelus,” 1933–35, Salvador Dalí. Photo: © Salvador Dalí/Fundació Gala-Salvador Dali/Artist Rights Society (ARS), 2015

The show also displays the Dali painting that inspired the VR experience, Archeological Reminiscence of Millet’s ‘Angelus.’


Virtual Reality Trailer: “Dreams of Dali”

“Dreams of Dali” is part of the museum’s new exhibit, Disney and Dali: Architects of the Imagination, running Jan. 23 through June 12. For more on the virtual reality experience, visit DreamsOfDali.org.

Virtual tour of the Dali Museum.
 

https://plus.google.com/101094337324295273794/posts/4mRBC1yMNHR

 

 

Categories
Technology

‘Fairy Lights’ Touchable Holograms using lasers

This is an amazing technology called ‘Fairy Lights’ that creates touchable holograms using lasers. Notice that the hologram is interactive; it can change state during and after the touch. No glasses or goggles are required. The possibilities of this for film, theater, video games and theme parks are nearly endless.

From IEEE Spectrum.

We’ve seen a few holographic technologies that have come close; they rely on optical tricks of one sort or another to make it seem like you’re seeing an image hovering in front of you.

There’s nothing wrong with such optical tricks (if you can get them to work), but the fantasy is to have true midair pixels that present no concerns about things like viewing angles. This technology does exist, and has for a while, in the form of laser-induced plasma displays that ionize air molecules to create glowing points of light. If lasers and plasma sound like a dangerous way to make a display, that’s because it is. But Japanese researchers have upped the speed of their lasers to create a laser plasma display that’s touchably safe.

Researchers from the University of Tsukuba, Utsunomiya University, Nagoya Institute of Technology, and the University of Tokyo have developed a “Fairy Lights” display system that uses femtosecond lasers instead. The result is a plasma display that’s safe to touch.

Each one of those dots (voxels) is being generated by a laser that’s pulsing in just a few tens of femtoseconds. A femtosecond is one millionth of one billionth of one second. The researchers found that a pulse duration that minuscule doesn’t result in any appreciable skin damage unless the laser is firing at that same spot at one shot per millisecond for a duration of 2,000 milliseconds. The Fairy Lights display keeps the exposure time (shots per millisecond) well under that threshold:

Our system has the unique characteristic that the plasma is touchable. It was found that the contact between plasma and a finger causes a brighter light. This effect can be used as a cue of the contact. One possible control is touch interaction in which floating images change when touched by a user. The other is damage reduction. For safety, the plasma voxels are shut off within a single frame (17 ms = 1/60 s) when users touch the voxels. This is sufficiently less than the harmful exposure time (2,000 ms).

Even cooler, you can apparently feel the plasma as you touch it:

Shock waves are generated by plasma when a user touches the plasma voxels. The user feels an impulse on the finger as if the light has physical substance. The detailed investigation of the characteristics of this plasma-generated haptic sensation with sophisticated spatiotemporal control is beyond the scope of this paper.

As you can see from the pics and video, these displays are tiny: the workspace encompasses just eight cubic millimeters. The spatiotemporal resolution is relatively high, though, at up to 200,000 voxels per second, and the image framerate depends on how many voxels your image needs.
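To put rough numbers on that (a back-of-the-envelope sketch using only the figures quoted above, not the researchers’ actual timing code):

```python
VOXELS_PER_SECOND = 200_000      # quoted spatiotemporal resolution
FRAME_TIME_MS = 1000 / 60        # voxels shut off within one 60 Hz frame (~17 ms)
HARMFUL_EXPOSURE_MS = 2000       # skin-damage threshold quoted by the researchers

def max_framerate(voxels_per_image):
    """Achievable refresh rate for an image of a given voxel count."""
    return VOXELS_PER_SECOND / voxels_per_image

print(max_framerate(1000))                  # a 1,000-voxel image -> 200 fps
print(max_framerate(10000))                 # a 10,000-voxel image -> 20 fps
print(FRAME_TIME_MS < HARMFUL_EXPOSURE_MS)  # touch shut-off is far below the threshold
```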

To become useful as the consumer product of our dreams, the display is going to need to scale up. The researchers suggest that it’s certainly possible to do this with different optical devices. We’re holding out for something that’s small enough to fit into a phone or wristwatch, and it’s not that crazy to look at this project and believe that such a gadget might not be so far away.

For more see Digital Nature Group

Categories
Broadcasting People Technology Television

Akio Morita and the end of Sony Betamax

A typical ad for Sony’s Betamax video recorder. Credit: Flickr/Nesster, CC BY 

Recently, Sony announced that they will stop making Betamax tapes. This made me reflect on how the introduction of the first VCRs was a huge change in the way people watched TV, allowing them to time shift, that is, record shows and play them back later. The “format war” between Betamax and VHS caused Betamax to lose market share: even though Betamax was technically the superior format, VHS could record more hours, and that made it more popular with consumers. Betamax evolved into Betacam, an analog component format used in news gathering and field production.

Betamax was also the format that started the infamous Sony v. Universal City Studios case, which went all the way to the Supreme Court. Fortunately the studios lost, which later allowed them to make money off of cassette rentals and sales. Ironically, Universal would later be bought by Matsushita, one of the world’s largest VCR manufacturers at that time.

Akio Morita was a founder and, for many years, the CEO of Sony. He was the Steve Jobs of Japan. During his tenure Sony came up with many consumer electronics advances such as the Trinitron, the Mavica still camera, the SDDS film sound system, DAT, the MiniDisc, the Walkman and, along with Philips, the S/PDIF audio interface, the CD and Blu-ray. (Full disclosure: I used to work at Sony developing HDTV.)


Akio Morita is not interviewed in part 3, but that segment can be seen here.

From Sony.

The introduction of the home-use VCR had caused the biggest stir and created the greatest expectations for Sony since the launch of the Trinitron. Sony sales branches throughout Japan were buzzing about Betamax, and how to launch it in their regions became their number one priority. From the pre-launch stage, study sessions and training seminars explaining how to connect a Betamax to a television were frequent. At that time, however, annual domestic demand for VCRs was still less than 100,000 units. Morita was brimming with confidence when he made his announcement about the upcoming video age. Would home-use VCRs become popular? The industry had its doubts. At any rate, full-scale production of Betamax looked ready to roll. However, in the same year, something happened which took Sony by surprise.

Categories
Cinematography Filmmaking Technology

Lytro Immerge for VR

From FXGuide.

Most of us know Lytro from its revolutionary stills camera which allowed for an image to be adjusted in post as never before – it allowed focus to be changed. It did this by capturing a Lightfield and it seemed to offer a glimpse into the future of cameras built on a cross of new technology and the exciting field of computational photography.

Why then did the camera fail? Heck, we sold ours about 8 months after buying it.


Lightfield technology did allow for the image to be adjusted in terms of depth or focus in post, but many soon found that this was just delaying a decision from on location. If you wanted to send someone a Lytro image you almost always just picked the focus and sent a flat .jpeg. The only alternative was to send them a file which required a special viewer. The problem with the latter was simple: someone else ‘finished’ taking your photo for you – you had no control. It was delaying an on-set focus decision to the point that you never decided at all! The problem with the former, i.e. rendering a JPEG, was that the actual image was not better than one could get from a good Canon or Nikon; actually it was a bit worse, as the optics for Lightfield could not outgun your trusty Canon 5D.

In summary: the problem was we did not have a reason to not want to lock down the image. Lightfield was a solution looking for a problem. We needed somewhere it made sense to not ‘lock down’ the image and keep it ‘alive’ for the end user.

Enter VR – it is the problem that Lightfield solves.

Currently much of the VR that is cutting edge is computer generated – the rigs that incorporate head movement can understand you are moving your head to the side and it renders the right pair of images for your eyes. While a live action capture will allow you to spin on the spot and see in all directions, a live action capture did not (until now) allow you to lean to one side to miss a slow motion bullet traveling right at you the way a CG scene could.

Live action was stereo and 360 but there was no parallax. If you wanted to see around a thing…you couldn’t. There are some key exceptions such as 8i which have managed to capture video from multiple cameras and then allow a live action playback with head tracking, parallax and the full six degrees of motion, thus becoming dramatically more immersive. However, 8i is a specialist rig which is effectively a concave wall or bank of cameras around someone, a few meters back from them. The new Immerge from Lytro is different – it is a ball of cameras on a stick.

Lytro Immerge seems to be the world’s first commercial professional Lightfield solution for cinematic VR, which will capture ‘video’ from many points of view at once and thereby provide a more lifelike presence for live action VR through six degrees of freedom. It is built from the ground up as a full workflow, camera, storage and even NUKE compositing to color grading pipeline. This allows the blending of live action and computer graphics (CG) using Lightfield data, although details on how you will render your CGI to match the Lightfield captured data is still unclear.

With this configurable capture and playback system, any of the appropriate display head rigs should support the new storytelling approach, since at the headgear end, there is no new format, all the heavy lifting is done earlier in the pipeline.

How does it work?

The only solution for dynamic six degrees of freedom is to render the live action and CGI as needed, in response to the head unit’s render requests. In effect you have a render volume. Imagine a meter square box within which you can move your head freely. Once the data is captured the system can solve for any stereo pair anywhere in the 3D volume. Conceptually, this is not that different from what happens now for live action stereo. Most VR rigs capture images from a set of cameras and then resolve a ‘virtual’ stereo pair from the 360 overlapping imagery. It is hard to do but if you think of the level 360 panorama view as a strip that is like a 360 degree mini-cinema screen that sits around you like a level ribbon of continuous imagery, then you just need to find the right places to interpolate between camera views.
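As a very rough picture of “interpolating between camera views” (a toy nearest-camera blend, not Lytro’s actual light-field reconstruction; the ring layout, image sizes and function names are assumptions), a virtual eye position inside the capture volume can be synthesized by distance-weighting the closest physical views:

```python
import numpy as np

def virtual_view(eye_pos, cam_positions, cam_images, k=2):
    """Blend the k captured views nearest to the requested eye position.
    cam_positions: (N, 3) camera centres on the rig; cam_images: (N, H, W, 3)."""
    dists = np.linalg.norm(cam_positions - eye_pos, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)      # closer cameras count for more
    weights /= weights.sum()
    return np.tensordot(weights, cam_images[nearest], axes=1)

# Toy rig: 8 cameras on a 10 cm ring; each "image" is just random pixels here
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
cams = np.stack([0.1 * np.cos(angles), 0.1 * np.sin(angles), np.zeros(8)], axis=1)
imgs = np.random.default_rng(2).random((8, 4, 4, 3))

# A stereo pair for a head slightly off-centre, eyes roughly 6.5 cm apart
left_eye = virtual_view(np.array([0.01, 0.0, 0.0]), cams, imgs)
right_eye = virtual_view(np.array([0.075, 0.0, 0.0]), cams, imgs)
```

A real light-field solve reprojects individual rays using scene depth rather than blending whole frames, which is what lets the Immerge reveal previously hidden detail as you lean; the toy only shows the idea of virtualizing a view from anywhere inside the volume.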


Of course, if the cameras had captured the world as a nodal pan there would be no stereo to see. But no camera rig does this – given the physical size of cameras all sitting in a circle… a camera to the left of another sees a slightly different view and that offset, that difference in parallax, is your stereo. So if solving off the horizontal offset around a ring is the secret to stereo VR live action, then the Lytro Immerge does this not just around the outside ring but anywhere in the cube volume. Instead of interpolating between camera views it builds up a vast set of views from its custom lenses and then virtualizes the correct view from anywhere.

Actually it even goes further. You can move outside the ‘perfect’ volume, but at this point it will start to not have previously obstructed scene information. So if you look at some trees, and then move your head inside the volume, you can see perfectly around one to another. But if you move too far there will be some part of the back forest that was never captured and hence can’t be used or provided in the real time experience; in a sense you have an elegant fall-off in fidelity as you ‘break the viewing cube’.

VR was already a lot of data, but once you move to Lightfield capture it is vastly more, which is why Lytro has developed a special server, which will feed into editing pipelines and tools such as NUKE and which can record and hold one hour of footage. The server has a touch-screen interface, designed to make professional cinematographers feel at home. PCmag reports that it allows for control over camera functions via a panel interface, and “even though the underlying capture technology differs from a cinema camera, the controls—ISO, shutter angle, focal length, and the like—remain the same.”

Doesn’t this seem like a lot of work just for head tracking?

The best way to explain this is to say, it must have seemed like a lot of work to make B/W films become color…but it added so much there was no going back. You could see someone in black and white and read a good performance, but in color there was a richer experience, closer to the real world we inhabit.

With six degrees of freedom, the world comes alive. Having seen prototype and experimental Lightfield VR experiences all I can say is that it does make a huge difference. A good example comes from an experimental piece done by Otoy. Working with USC-ICT and Dr Paul Debevec they made a rig that effectively scanned a room. Instead of rows and rows of cameras in a circle and stacked on top of one another virtually, the team created a vast data set for Lightfield generation by having the one camera swung around 360 at one height – then lifted up and swung around again, and again all with a robotic arm. This sweeping meant a series of circular camera data sets that in total added up to a ball of data.

 


Unlike the new Lytro approach, this works only on a static scene, a huge limitation compared to the Immerge, but still a valid data set. This ball of data is however conceptually similar to the ball of data that is at the core of the Lytro Immerge, but unlike the Lytro this was an experimental piece and as such was completed earlier this year. What is significant is just how different this experience is over a normal stereo VR experience. For example, even though the room is static, as you move your head the specular highlights change and you can much more accurately sense the nature of the materials being used. In a stereo rig, I was no better able to tell you what a bench top was made of than looking at a good quality still, but in a Lightfield you adjust your head, see the subtle spec shift and break up and you are immediately informed as to what something might feel like. Again, spec highlights seem trivial but they are one of the key things we use to read faces. And this brings us to the core of why the Lytro Immerge is so vastly important: people.

VR can be boring. It may be unpopular to say so but it is the truth. For all the whizz-bang uber tech, it can lack storytelling. Has anyone ever sent you a killer timelapse show reel? As a friend of mine once confessed, no matter how technically impressive, no matter how much you know it would have been really hard to make, after a short while you fast forward through the timelapse to the end of the video. VR is just like this. You want to sit still and watch it but it is not possible to hang in there for too long as it just gets dull – after you get the set up…amazing environment, wow…look around…wow, OK I am done now.

What would make the difference is story, and what we need for story is actors – acting. There is nothing stopping someone from filming VR now, and most VR is live action, but you can’t film actors talking and fighting, punching and laughing – and move your head to see more of what is happening – you can only look around, and then more often than not, look around in mono.

The new Lytro Immerge.

The new Lytro Immerge and the cameras that will follow it offer us professional kit that allows professional full immersive storytelling.

Right now an Oculus Rift DK2 is not actually that sharp to the eye. The image is OK but the next generation of headset gear has vastly better screens and this will make the Lightfield technology even more important. Subtle but real spec changes are not relevant when you can’t make out a face that well due to low-res screens, but the prototype new Sony, Oculus and Valve systems are going to scream out for such detail.

Sure they’ll be expensive, but then an original Sony F900 HDCAM was $75,000 when it came out and now my iPhone does better video. Initially, you might only even think about buying one if you had either a stack of confirmed paid work, or a major rental market to service, but hopefully the camera will validate the approach and provide a much needed professional solution for better stories.

How much and when?

There is no news on when the production units will actually ship, and many of the images released for the launch are actually concept renderings, but the company has one of the only track records for shipping actual Lightfield cameras, so the expectation is very positive about them pulling the Immerge off technically and delivering.

In The Verge, Vrse co-founder and CTO Aaron Koblin commented that “light field technology is probably going to be at the core of most narrative VR.” When a prototype version comes out in the first quarter of 2016, it’ll cost “multiple hundreds of thousands of dollars” and is intended for rental.

Lytro CEO Jason Rosenthal says the new cameras actually contain “multiple hundreds” of cameras and sensors and went on to suggest that the company may upgrade the camera quarterly.