Categories
Animation Art Disney Filmmaking Technology

Dali-Disney exhibition uses Virtual Reality

Visitors to a new exhibition at The Dali Museum in St. Petersburg won’t just be looking at art. Thanks to virtual reality, they’ll be exploring a Dali painting in a dreamy, three-dimensional world that turns art appreciation into an unforgettable, immersive experience.

The new exhibition, Disney and Dali: Architects of the Imagination, tells the story of the relationship between Salvador Dali, the surrealist artist, and Walt Disney, the great American animator and theme-park pioneer.

But the museum exhibition’s highlight comes after visitors have seen the Disney-Dali show’s paintings, story sketches, correspondence, photos and other artifacts. As visitors leave the exhibition area, they’ll be invited to don a headset to try the virtual reality experience.

Called “Dreams of Dali,” the VR experience takes viewers inside Dali’s 1935 painting Archeological Reminiscence of Millet’s ‘Angelus.’ The painting depicts two towering stone figures along with tiny human figures in a bare landscape with a moody sky. Users can move around inside the painting, using Oculus Rift headsets to navigate a trippy three-dimensional environment that includes motifs from other Dali works like elephants, birds, ants and his Lobster Telephone sculpture.

Accompanied by a haunting piano soundtrack punctuated by bird cries, the VR visuals also include a crescent moon, a stone tunnel and even an image of rocker Alice Cooper, whom Dali featured in a hologram he created in 1973.

“You actually have a three-dimensional feeling that you’re inside a painting,” said Jeff Goodby, whose firm Goodby Silverstein & Partners created the VR experience. “It’s not just like you’re inside a sphere with things being projected. It’s actually like there are objects closer and further away and you’re walking amidst them. It’s a vulnerable feeling you give yourself up to.”

Disney and Dali met in the 1940s in Hollywood, according to museum director Hank Hine. “Their sensibilities were very connected,” Hine said. “They wanted to take art off the palette, out of the canvas and into the world.” The exhibition looks at the castle motif that became a symbol of Disney parks, along with Dali’s Dream of Venus pavilion from the 1939 World’s Fair, which some consider a precursor of contemporary installation art.

Disneyland castle

This 1955 design for the Disneyland castle is explored in a new exhibition at The Dali Museum about Salvador Dali’s friendship with Walt Disney. Photo: Walt Disney Imagineering / The Dali Museum.

Disney and Dali also collaborated on a short animated movie, Destino, (below) that was eventually completed by Disney Studios. The six-minute movie, which can be found on YouTube, features a dancing girl with long dark hair, a sundial motif and a song with the line, “You came along out of a dream. … You are my destino.” Clips will be played within the gallery for the Disney-Dali exhibition and the full short will be shown at the museum’s theater.

Archeological Reminiscence of Millet’s “Angelus,” 1933–35, Salvador Dalí.
Photo: © Salvador Dalí/Fundació Gala-Salvador Dali/Artist Rights Society (ARS), 2015

The show also displays the Dali painting that inspired the VR experience, Archeological Reminiscence of Millet’s ‘Angelus.’

DALI-DISNEY

Virtual Reality Trailer: “Dreams of Dali”

“Dreams of Dali” is part of the museum’s new exhibit, Disney and Dali: Architects of the Imagination, running Jan. 23 through June 12. For more on the virtual reality experience, visit DreamsOfDali.org.

Virtual tour of the Dali Museum.
 


 

 

Categories
Technology

‘Fairy Lights’ Touchable Holograms using lasers

This is an amazing technology called ‘Fairy Lights’ that creates touchable holograms using lasers. Notice that the hologram is interactive: it can change state during and after a touch. No glasses or goggles are required. The possibilities for film, theater, video games and theme parks are nearly endless.

From IEEE Spectrum.

We’ve seen a few holographic technologies that have come close; they rely on optical tricks of one sort or another to make it seem like you’re seeing an image hovering in front of you.

There’s nothing wrong with such optical tricks (if you can get them to work), but the fantasy is to have true midair pixels that present no concerns about things like viewing angles. This technology does exist, and has for a while, in the form of laser-induced plasma displays that ionize air molecules to create glowing points of light. If lasers and plasma sound like a dangerous way to make a display, that’s because it is. But Japanese researchers have upped the speed of their lasers to create a laser plasma display that’s touchably safe.

Researchers from the University of Tsukuba, Utsunomiya University, Nagoya Institute of Technology, and the University of Tokyo have developed a “Fairy Lights” display system that uses femtosecond lasers instead. The result is a plasma display that’s safe to touch.

Each one of those dots (voxels) is generated by a laser pulsing for just a few tens of femtoseconds. A femtosecond is one millionth of one billionth of a second. The researchers found that a pulse duration that minuscule doesn’t cause any appreciable skin damage unless the laser fires at the same spot at one shot per millisecond for 2,000 milliseconds. The Fairy Lights display keeps exposure well under that threshold:

Our system has the unique characteristic that the plasma is touchable. It was found that the contact between plasma and a finger causes a brighter light. This effect can be used as a cue of the contact. One possible control is touch interaction in which floating images change when touched by a user. The other is damage reduction. For safety, the plasma voxels are shut off within a single frame (17 ms = 1/60 s) when users touch the voxels. This is sufficiently less than the harmful exposure time (2,000 ms).
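The safety margin in that passage is easy to make concrete. Below is a minimal sketch, assuming only the figures quoted above (the 2,000 ms harmful-exposure threshold and the one-frame, roughly 17 ms shutoff); the constant names and helper function are hypothetical, not from the paper.

```python
# Toy illustration of the Fairy Lights touch-safety margin described above.
# Only the 2,000 ms harmful-exposure threshold and the ~1/60 s frame time
# come from the quoted text; everything else is an assumption for clarity.

HARMFUL_EXPOSURE_MS = 2_000        # exposure at one spot said to risk skin damage
FRAME_TIME_MS = 1_000 / 60         # voxels shut off within a single frame on touch

def touch_is_safe(shutoff_ms: float = FRAME_TIME_MS) -> bool:
    """True if the voxel shutoff happens well before the harmful exposure time."""
    return shutoff_ms < HARMFUL_EXPOSURE_MS

if __name__ == "__main__":
    print(f"Shutoff after {FRAME_TIME_MS:.1f} ms -> safe: {touch_is_safe()}")
    print(f"Margin: about {HARMFUL_EXPOSURE_MS / FRAME_TIME_MS:.0f}x under the threshold")
```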

Even cooler, you can apparently feel the plasma as you touch it:

Shock waves are generated by plasma when a user touches the plasma voxels. The user feels an impulse on the finger as if the light has physical substance. The detailed investigation of the characteristics of this plasma-generated haptic sensation with sophisticated spatiotemporal control is beyond the scope of this paper.

As you can see from the pics and video, these displays are tiny: the workspace encompasses just eight cubic millimeters. The spatiotemporal resolution is relatively high, though, at up to 200,000 voxels per second, and the image framerate depends on how many voxels your image needs.
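That 200,000-voxels-per-second figure also makes the frame-rate trade-off easy to see: the more voxels a single image needs, the fewer times per second it can be redrawn. Here is a small sketch, where only the voxel rate comes from the article and the example image sizes are invented:

```python
# Hypothetical illustration of the voxel-budget / frame-rate trade-off.
# Only the 200,000 voxels-per-second figure comes from the article.

VOXELS_PER_SECOND = 200_000

def max_framerate(voxels_per_image: int) -> float:
    """Upper bound on frames per second for an image of the given voxel count."""
    return VOXELS_PER_SECOND / voxels_per_image

for n in (1_000, 4_000, 10_000):   # made-up example image sizes
    print(f"{n:>6} voxels per image -> up to {max_framerate(n):.0f} fps")
```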

To become useful as the consumer product of our dreams, the display is going to need to scale up. The researchers suggest that it’s certainly possible to do this with different optical devices. We’re holding out for something that’s small enough to fit into a phone or wristwatch, and it’s not that crazy to look at this project and believe that such a gadget might not be so far away.

For more see Digital Nature Group

Categories
Film Editing Filmmaking Interview People

Walter Murch at CAMERIMAGE 2015

by Sven Mikulec

After the Camerimage international film festival’s special screening of The Talented Mr. Ripley, Anthony Minghella’s wonderful and haunting 1999 film with Matt Damon in the starring role, I had the unique pleasure and honor of seeing and listening to probably the greatest film editor and sound designer of the last half century. Walter Murch, the living legend of the filmmaking business whose career was built on films such as Apocalypse Now, The Conversation and The Godfather trilogy, was invited to Bydgoszcz, Poland, to receive the festival’s Special Award to Editor with Unique Visual Sensitivity. It was the first time I had ever had the chance to see him in person, and, besides his coming across as a very nice and humble human being, listening to him talk about filmmaking, editing and the history of film was incredibly inspiring and satisfying.

Sitting at a small table on stage, with a glass of water at his side, Walter Murch engaged the audience, and the crowded theater—mind you, many in the audience were filmmakers themselves—bombarded him with questions, seeking his advice and wanting to soak up as much wisdom as possible. Murch briefly discussed his relationship with Minghella, calling him an extremely collaborative director who wanted and accepted input from his crew (but “still had strong vision and ideas”), recalling how they met and how Minghella explained to him that, when he found a perfect T-shirt, he’d buy hundreds of them, never to have to set out on the risky task of finding new clothes again. The message was clear: if Murch proved to be a capable editor, Minghella would want to work with him for the rest of his life. They did three films together (The English Patient, The Talented Mr. Ripley, Cold Mountain), and would surely have collaborated again had it not been for the director’s tragic death in 2008.

One of the most interesting parts of the conversation was when Murch explained one of the things that inevitably changed with the rise of digital technology in filmmaking. Back in the good old days, after a hard day’s work on set, the crew would gather and watch the ‘dailies,’ the material they had filmed that day. With minds clear and concentrated on the film, they would immerse themselves in the footage and discuss the material. Dailies have become a thing of the past, as there’s no need for them when the crew can monitor what’s being filmed on set in real time on their screens. But since people have tons of things on their minds during filming and can hardly relax in front of a screen, Murch believes dailies should be brought back into practice, as they proved very useful in the past.


Walter Murch mixing Apocalypse Now in 1979

On the unsurprising question of what you need to be a good editor, Murch said you need to be ready to spend 16 hours a day in a small, stuffy room with no windows, hearing the same things repeated over and over again like torture. Furthermore, a good editor has to have a good sense of rhythm because, after all, editing is basically choreographing a line of images. The other important thing is to be able to anticipate the audience’s reaction. According to Murch, the editor is the only representative of the audience in a film crew: his job is to predict how the viewer will respond to the movie, and to do so, he has to place himself in their shoes. Murch therefore tends to avoid seeing any of the filming and visits the set only if really necessary, believing too much information would prove to be a burden, as it would distance him from the position of the viewer, who will see the film without any knowledge of the size of the set or the sort of sandwiches served during breaks. The editor, Murch continues, is one of the few people with great effect on the film who can completely isolate himself from the set if he wants to.

What I did not know was that Murch had some influence on the script for The Talented Mr. Ripley. As he was sent the screenplay six months prior to filming, he made a couple of suggestions regarding the way the film should open and how it should end, and Minghella listened. But it’s not strange, Murch says, that editors get the screenplay months, or even a year, in advance: it’s actually common practice nowadays.

Needless to say, I left the theater impressed like a schoolboy, as one should be in the presence of a professional of such caliber. This made me a little more nervous during our interview, but it turned out there was no need whatsoever to feel uncomfortable. That’s who Murch is—an editing genius capable of making you feel as if he’s your friend from elementary school.


Fellow USC alums Walter Murch and George Lucas

In an interesting interview you recently gave to Indiewire, you said that films are called motion pictures, but that they could be easily called emotion pictures since the point of every film should be to cause an emotional response in the audience. Do you think this should be top priority in any film?
Yes, with the proviso that it should be the correct emotion. Films are very good at stirring up emotion but you have to be careful about which emotion you’re stirring up. So in a sense the filmmakers, from the directors to anybody else, have to really say—what emotion are we going for here and why are we going for it? And how does that emotion relate to what we had in the previous and will have in the following scene? And can we also track not only the emotion but the logic of everything that’s happening, basically is the story understandable? So this dance between intellect and emotion, which is kind of basic to what human beings are, is something that we have to be very careful about. In a film, for instance, you could stage a murder in a very brutal way which would stir up emotions in the audience, but is that going to confuse things later on in the story?

You also talked about over-intentionality in movies, how it’s easy for the audience to feel manipulated into feeling something if things are edited in a certain way. How difficult is it for you not to cross that border, to cause an organic feeling in a viewer rather than a manipulated one?
It’s very difficult. Because films are evolving under our fingers, so to speak. And we want to communicate certain things and we’re anxious that the audience understands what we’re trying to say. And so many things are uncertain in the making of a film that you can sometimes hold on to a scene as being important, but you can learn later that, in fact, by removing that scene, in a strange, sometimes mystifying way, the whole film relaxes, and the audience gets everything you’re saying even without this very definite moment. I remember many years ago working on a film with Fred Zinnemann called Julia. These arrows began to point at one scene in particular at the beginning of the film. Maybe we should lose this scene, because again, there was this over-intentionality to it. And so we, meaning Fred and I, said let’s take it out. So I was undoing the splices, back in the day when we made physical splices, and he observed, you know, when I read the script of this project, when I read this scene, I knew that I should do this film. In other words, the very scene he connected with was the scene we were now taking out. So I asked myself, am I removing the heart of the movie? Or am I removing the umbilical cord of the movie? This scene was important to connect Fred with the film, but let’s say, once the nutrients have flowed into the whole film, not only can you now remove the umbilical cord, you have to remove it. We walk around with the belly button, but not with the umbilical cord. So there are scenes like that, which deliver their message very particularly, but you should be suspicious of those very scenes and wonder whether the film can ride the bike without those training wheels.

A lot of big American movies these days treat the viewers as if they are incapable of connecting the dots, explaining far too much in the process. Do you see that trend in American cinema today?
Yeah, I think so. I think that’s partly down to everything we’ve just been talking about. It’s also that, in quotes, American cinema is also global cinema, in that American cinema is more than Chinese cinema, more than Indian cinema, more than European cinema. It’s the one cinema that goes all the way around the world so it has to be understandable by the Chinese, Africans, South-Americans, Europeans. Inevitably, there is a coarsening of the message there because of trying to adapt to all these different sensibilities and different ways of thinking on the different continents of the globe. But very often it’s simply lazy filmmaking. It’s hard to make it the other way because of the uncertainty of it all, because it’s risky. I find it much more interesting to make things this way precisely because it does involve the audience in the film. And really the last creative act of any film is viewing by the audience. The audience are really the ones who are creating the film, it doesn’t really exist on the screen, it exists in a kind of penumbra between the audience and the screen, the interaction of those two things. And exactly what you’re saying allows that interaction to take place. Otherwise, the audience is just blasted by the things coming from the screen, and they just have to sit there and take it.

Since Return to Oz wasn’t a critical or commercial success, the film practically blocked your potential directorial path. But it must be nice to see what happened to the film in the decades that followed. How do you feel about the project now?
I’m very happy that it has this afterlife. The film was made in the early 1980s, really at the dawn of home cinema. VHS had just come in at that point, I think. So I made it not knowing everything that was going to happen in the next thirty years with DVDs, Blu-rays, streaming and all of these other things that allowed people to see the film in a variety of different circumstances. On the other hand, it has to be good enough for people to want to see it. So I’m very pleased to see it has this afterlife to it. Ironically, one of the things that happened is that the studio, Disney, had changed management by the time of the film’s release, and the new management really had no interest in Return to Oz at all. It was kind of abandoned, but that meant, ironically, that I had more control over it, because if they hadn’t abandoned it, they would have been far more aggressive with me, trying to bend it this way or that, kind of like what happened with Orson Welles on Touch of Evil. The finished film is, as much as any film, pretty much as I wanted to make it.

But you said you had some projects you wanted to make and were forced to abandon. You stated that one of the movies you wanted to make was about Nikola Tesla. Why him?
I’m just fascinated with him as a character. I discovered him in the process of doing research for Return to Oz, because the inspiration for the Emerald City, this fantastic place, was the World’s Columbian Exposition in Chicago in 1893. And that was the fair that Tesla appeared at, and he was the one who electrified it. This was the first World’s Fair to be electrified, with Tesla’s alternating current, and he was at the fair giving demonstrations. So he was arguably the living wizard of that festival, and he was called The Wizard. So I think L. Frank Baum, the author, who lived in Chicago, went to the fair and saw Tesla, and Tesla was the wizard. But the more I learned about Tesla and his story, the more fascinated I became. I wanted to do a kind of Mozart-Salieri story on the tension between Tesla and Edison, who were two very, very different personalities, both competing in the same territory.

This story might have made for a great film.
Yeah.

You’ve worked with a lot of great filmmakers in your career. Which collaboration holds a special place in your heart?
It has to be Francis Coppola, because the first feature film I worked on was his film, The Rain People, in 1969. And I worked with him in 2009 on Tetro, the last film. Which is… how many? Four decades of working together? And on some remarkable films. There’s a gap between Apocalypse Now and Apocalypse Now Redux. But he and I share many sensibilities, and he gives a great deal of control to the people who work with him. Working with Francis, I was astonished how much control he gave. He was, like, just go and do something.

A lot of trust.
Yes, a lot of trust, but the surprising thing about trust is that if you’re given all of this trust, you repay it; you know how much he has given you, and so you are anxious to fulfill, and more than fulfill, the trust he has given you. And it works the opposite way with directors who are always controlling everything: did you do this, I want this, I want that… At a certain point you say, OK, let’s all just do what you want. But this other way of working, the Francis way, is a wonderful way of working.

When we compare what editing used to be to editing today, with the development of technology and the trend that movies resemble music videos, what would you say about contemporary, modern editing?
There is a shift. On the other hand, if you look across the decades, the fastest editing ever in a motion picture was Man with a Movie Camera, Dziga Vertov’s film from 1929. Well, not the whole film, but there’s a section of the film that’s so rapidly cut that you just kind of have to stand back the way you look at fireworks. We, meaning filmmakers in the larger sense, are investigating the borderline between effect and comprehensibility. And it’s clear that, to achieve a certain effect, this kind of fireworks in editing—you can do that, but you lose comprehensibility. Things are happening on the screen and maybe you’ll capture a thing here or there. For brief periods of time this is fine in any film. But as a general principle, it’s something to be wary of. Without question, music videos and commercials and even the videos you see on screens in clothing stores have all affected the way we see edited images, and they’ve worked their way into the theaters. And we’re looking at films on very different mediums: on iPhones, on 20-meter screens in a movie palace, or on virtual reality goggles. All of those are very different formats, and yet at the moment we have to edit as if they are all the same. This creates dissonances with the rate of cutting.

For example, the videos on screens in clothing stores. They are rapidly cut, with lots of movement, so as to make you look at them. So you’re in a store that’s mostly static, people moving fairly slowly, and yet over here there’s a screen going like this (waves his hand frantically), forcing you to look at it. Taking that sensibility, though, and transposing it into a movie palace, where that’s the only thing we’re looking at and the screen is sixty feet wide, can create undesirable side effects: people get sick looking at it. In the long term, we’ll figure all this out, and it does change from decade to decade. Dialogue, for instance, in the 1930s and 1940s was spoken much more quickly than it is today. The cutting was slower, but people talked much faster, quick, quick, quick. His Girl Friday, for instance. Films just don’t sound like that today. That’s the dialogue equivalent of quick cutting. You don’t hear that today. The closest thing would probably be The Social Network, with those scenes very quickly paced in terms of dialogue.

The experience of watching feature motion pictures in theaters is barely one hundred years old. The Birth of a Nation came out in 1915, and it’s 2015. And I’ve been working in films for half that time. (laughs) We’re still learning how to do this, and adapting to different circumstances, so it’s natural for the pendulum to swing far in one direction, and then far in the opposite direction. Iñárritu’s film last year had no edits in it at all; technically there were concealed edits in there, but the experience of watching it was that there were no cuts whatsoever.


Francis Ford Coppola and editor/re-recording mixer Walter Murch (back) in the Philippines during the shoot of Apocalypse Now in March 1977. Photo by Richard Beggs. Courtesy of Walter Murch

Would you say that Apocalypse Now was the most troublesome project you ever worked on?
It was troubled, but in a good way. Meaning, it’s a very contentious subject matter, especially at that time. And we were investigating all the possible ways to tell this story. It was turbulent and maybe troublesome, but in a good, creative way. In any film you’re working on, there’s a great deal of uncertainty. Can we do this, is this going to work, do we have time to do this… Everyone is wondering how it is going to work. But it was certainly the longest postproduction of any film I worked on: I was on it for two years, and Richard Marks was on it a year longer still. It was a long period, and you have to gauge your own energy level to focus on something that lasts that long. That was another kind of invisible challenge for all of us involved.

You mean coming back to ordinary life?
Sure, that’s an occupational hazard of any film: it occupies a great deal of real estate in your brain as you’re working on it, and then suddenly it’s over and all of that real estate is available, empty, and now you have to re-program your brain to get back to normal. It’s the equivalent, I think, of a kind of seasickness. You know you’re finished objectively, but your body is still working on something, though there’s nothing to work on. The collision between those two things, what you objectively know and what you feel… it usually takes from two or three weeks to two or three months for these things to come back into alignment.

How long a pause did you have to take after Apocalypse Now?
After that, I started writing a screenplay, one of the projects I was going to direct. So… six months. But at the end of those six months I started writing, which is different than making films, a different rhythm. So after Apocalypse, the next thing I did was Return to Oz. We began preproduction in 1983, so it was almost four years since Apocalypse. So, first I wrote an unproduced screenplay, then Return to Oz.

What was the screenplay about?
It was about an archaeologist in Egypt, a kind of a ghost story, but more along the lines of what you were talking about earlier, one that was ambiguous. There were not a lot of special effects in it; it was about a personality change. Was that down to an accident that happened, or did something spiritual happen to this person? But it ended up in a drawer somewhere.

Mr. Murch, thanks for your time. It was a pleasure.
Thank you.


Sound montage associate Mark Berger, left, Francis Ford Coppola and sound montage/re-recording mixer Walter Murch mixing The Godfather Part II in October 1974. Photo courtesy of Walter Murch

From Cinephilia & Beyond

I’ve seen Return to Oz, and the audience expectations set by the Disney name and the original MGM film were very different from what Murch made. It is a gloomy cult film, but not a bad one.

Here are some books about Walter Murch and Editing that I recommend.

Categories
Cinematography Filmmaking Technology

Lytro Immerge for VR

From FXGuide.

Most of us know Lytro from its revolutionary stills camera, which allowed an image to be adjusted in post as never before: it allowed focus to be changed after the shot was taken. It did this by capturing a Lightfield, and it seemed to offer a glimpse into a future of cameras built on a cross of new technology and the exciting field of computational photography.

Why then did the camera fail? Heck, we sold ours about 8 months after buying it.


Lightfield technology did allow the image to be adjusted in terms of depth or focus in post, but many soon found that this just delayed a decision that used to be made on location. If you wanted to send someone a Lytro image, you almost always just picked the focus and sent a flat JPEG. The only alternative was to send them a file that required a special viewer. The problem with the latter was simple: someone else ‘finished’ taking your photo for you, and you had no control. It delayed the on-set focus decision to the point that you never decided at all! The problem with the former, i.e. rendering a JPEG, was that the actual image was no better than one could get from a good Canon or Nikon; actually it was a bit worse, as the optics for Lightfield could not outgun your trusty Canon 5D.

In summary: the problem was that we had no reason to keep the image unlocked. Lightfield was a solution looking for a problem. We needed a use case where it made sense not to ‘lock down’ the image and to keep it ‘alive’ for the end user.

Enter VR: it is the problem that Lightfield solves.

Currently much of the cutting-edge VR is computer generated: rigs that incorporate head movement understand that you are moving your head to the side and render the right pair of images for your eyes. While a live action capture will allow you to spin on the spot and see in all directions, it did not (until now) allow you to lean to one side to dodge a slow-motion bullet traveling right at you the way a CG scene could.

Live action was stereo and 360, but there was no parallax. If you wanted to see around a thing… you couldn’t. There are some key exceptions, such as 8i, which has managed to capture video from multiple cameras and then allow live action playback with head tracking, parallax and the full six degrees of freedom, thus becoming dramatically more immersive. However, 8i is a specialist rig, effectively a concave wall or bank of cameras around someone, a few meters back from them. The new Immerge from Lytro is different: it is a ball of cameras on a stick.

Lytro Immerge seems to be the world’s first commercial professional Lightfield solution for cinematic VR. It will capture ‘video’ from many points of view at once and thereby provide a more lifelike presence for live action VR with six degrees of freedom. It is built from the ground up as a full workflow: camera, storage, and even a NUKE compositing and color grading pipeline. This allows the blending of live action and computer graphics (CG) using Lightfield data, although details on how you will render your CGI to match the Lightfield-captured data are still unclear.

With this configurable capture and playback system, any of the appropriate head-mounted displays should support the new storytelling approach, since at the headgear end there is no new format; all the heavy lifting is done earlier in the pipeline.

How does it work?

The only way to deliver dynamic six degrees of freedom is to render the live action and CGI as needed, in response to the head unit’s render requests. In effect you have a render volume. Imagine a box, a meter on each side, within which you can move your head freely. Once the data is captured, the system can solve for any stereo pair anywhere in that 3D volume. Conceptually, this is not that different from what happens now for live action stereo. Most VR rigs capture images from a set of cameras and then resolve a ‘virtual’ stereo pair from the 360-degree overlapping imagery. It is hard to do, but if you think of the level 360 panorama as a strip, like a 360-degree mini-cinema screen that sits around you as a level ribbon of continuous imagery, then you just need to find the right places to interpolate between camera views.


Of course, if the cameras had captured the world as a nodal pan, there would be no stereo to see. But no camera rig does this: given the physical size of the cameras all sitting in a circle, a camera to the left of another sees a slightly different view, and that offset, that difference in parallax, is your stereo. So if solving off the horizontal offset around a ring is the secret to stereo VR live action, the Lytro Immerge does this not just around the outside ring but anywhere in the cube volume. Instead of interpolating between camera views, it builds up a vast set of views from its custom lenses and then virtualizes the correct view from anywhere.

Actually it goes even further. You can move outside the ‘perfect’ volume, but at that point the system starts to lack scene information that was previously occluded. So if you look at some trees and then move your head inside the volume, you can see perfectly around one to another. But if you move too far, there will be some part of the back forest that was never captured and hence can’t be used or provided in the real-time experience; in a sense you get an elegant fall-off in fidelity as you ‘break’ the viewing cube.
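Lytro has not published the details of its reconstruction, but the interpolation idea described above can be sketched very loosely: pick the captured viewpoints nearest to the requested eye position inside the viewing volume and blend them, with more weight on the closer ones. Everything below (the array shapes, the inverse-distance weighting, the function name) is an invented toy for illustration, not the Immerge pipeline, which would resample actual light-field rays and handle occlusion properly.

```python
import numpy as np

# Toy sketch of synthesizing a view inside a capture volume by blending the
# nearest captured camera images. This is NOT Lytro's algorithm; it only
# illustrates the "interpolate between camera views" idea in the text above.

def synthesize_view(eye_pos, cam_positions, cam_images, k=4, eps=1e-6):
    """
    eye_pos       : (3,) requested eye position inside the capture volume
    cam_positions : (N, 3) positions of the captured viewpoints
    cam_images    : (N, H, W, 3) images captured at those viewpoints
    Returns an (H, W, 3) image blended from the k nearest captures, weighted
    by inverse distance (a crude stand-in for real light-field resampling).
    """
    dists = np.linalg.norm(cam_positions - eye_pos, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)
    weights /= weights.sum()
    return np.tensordot(weights, cam_images[nearest], axes=1)

# Example with fake data: 32 viewpoints in a 1 m cube, two eyes ~6 cm apart.
cams = np.random.rand(32, 3)
imgs = np.random.rand(32, 4, 4, 3)          # tiny placeholder images
left_eye  = synthesize_view(np.array([0.47, 0.5, 0.5]), cams, imgs)
right_eye = synthesize_view(np.array([0.53, 0.5, 0.5]), cams, imgs)
```

Solving a left view and a right view from the same data set is what gives you stereo anywhere inside the volume; moving either eye position re-runs the same blend, which is roughly what head tracking with parallax amounts to in this simplified picture.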

VR already involved a lot of data, but once you move to Lightfield capture there is vastly more, which is why Lytro has developed a special server that feeds into editing pipelines and tools such as NUKE and can record and hold one hour of footage. The server has a touch-screen interface designed to make professional cinematographers feel at home. PCMag reports that it allows for control over camera functions via a panel interface, and “even though the underlying capture technology differs from a cinema camera, the controls—ISO, shutter angle, focal length, and the like—remain the same.”

Doesn’t this seem like a lot of work just for head tracking?

The best way to explain this is to say, it must have seemed like a lot of work to make B/W films become color…but it added so much there was no going back. You could see someone in black and white and read a good performance, but in color there was a richer experience, closer to the real world we inhabit.

With six degrees of freedom, the world comes alive. Having seen prototype and experimental Lightfield VR experiences, all I can say is that it makes a huge difference. A good example comes from an experimental piece done by Otoy. Working with USC-ICT and Dr. Paul Debevec, they made a rig that effectively scanned a room. Instead of rows and rows of cameras in a circle, stacked virtually on top of one another, the team created a vast data set for Lightfield generation by having one camera swung around 360 degrees at one height, then lifted up and swung around again, and again, all with a robotic arm. This sweeping produced a series of circular camera data sets that in total added up to a ball of data.

 


Unlike the new Lytro approach, this works only on a static scene, a huge limitation compared to the Immerge, but still a valid data set. This ball of data is, however, conceptually similar to the ball of data at the core of the Lytro solution; unlike the Lytro, this was an experimental piece, and as such it was completed earlier this year. What is significant is just how different this experience is from a normal stereo VR experience. For example, even though the room is static, as you move your head the specular highlights change and you can much more accurately sense the nature of the materials being used. In a stereo rig, I was no better able to tell you what a bench top was made of than by looking at a good quality still, but in a Lightfield you adjust your head, see the subtle specular shift and break-up, and you are immediately informed as to what something might feel like. Again, specular highlights may seem trivial, but they are one of the key things we use to read faces. And this brings us to the core of why the Lytro Immerge is so vastly important: people.

VR can be boring. It may be unpopular to say so, but it is the truth. For all the whizz-bang uber tech, it can lack storytelling. Has anyone ever sent you a killer timelapse showreel? As a friend of mine once confessed, no matter how technically impressive, no matter how much you know it would have been really hard to make, after a short while you fast-forward through the timelapse to the end of the video. VR is just like this. You want to sit still and watch it, but it is not possible to hang in there for too long, as it just gets dull: after you get the setup… amazing environment, wow… look around… wow, OK, I am done now.

What would make the difference is story, and what we need for story is actors acting. There is nothing stopping someone from filming VR now, and most VR is live action, but you can’t film actors talking and fighting, punching and laughing, and move your head to see more of what is happening; you can only look around, and then, more often than not, look around in mono.

The new Lytro Immerge.

The new Lytro Immerge and the cameras that will follow it offer us professional kit that allows professional full immersive storytelling.

Right now an Oculus Rift DK2 is not actually that sharp to the eye. The image is OK, but the next generation of headset gear has vastly better screens, and this will make the Lightfield technology even more important. Subtle but real specular changes are not relevant when you can’t make out a face that well due to low-res screens, but the prototype new Sony, Oculus and Valve systems are going to scream out for such detail.

Sure, they’ll be expensive, but then an original Sony F900 HDCAM was $75,000 when it came out, and now my iPhone does better video. Initially, you might only think about buying one if you had either a stack of confirmed paid work or a major rental market to service, but hopefully the camera will validate the approach and provide a much-needed professional solution for better stories.

How much and when?

There is no news on when production units will actually ship, and many of the images released for the launch are concept renderings, but the company has one of the only track records for shipping actual Lightfield cameras, so expectations are high that it can pull the Immerge off technically and deliver.

In The Verge, Vrse co-founder and CTO Aaron Koblin commented that “light field technology is probably going to be at the core of most narrative VR.” When a prototype version comes out in the first quarter of 2016, it’ll cost “multiple hundreds of thousands of dollars” and is intended for rental.

Lytro CEO Jason Rosenthal says the new cameras actually contain “multiple hundreds” of cameras and sensors and went on to suggest that the company may upgrade the camera quarterly.

Categories
Animation Disney Technology VFX

Disney’s Augmented Reality Characters from Colored Drawings

Photo from The Verge.

A Disney Research team has developed technology that projects coloring book characters in 3D while you’re still working on coloring them. The process was detailed in a new paper called “Live Texturing of Augmented Reality Characters from Colored Drawings,” and it was presented at the IEEE International Symposium on Mixed and Augmented Reality on September 29th. That title’s a mouthful, but it’s descriptive: the live texturing technology allows users to watch as their characters stand and wobble on the page and take on color as they’re being colored in. You can see an example in the video above: the elephant’s pants are turning blue on the tablet screen just as they’re being filled on the page itself.

Coloring books capture the imagination of children and provide them with one of their earliest opportunities for creative expression. However, given the proliferation and popularity of digital devices, real-world activities like coloring can seem unexciting, and children become less engaged in them. Augmented reality holds unique potential to impact this situation by providing a bridge between real-world activities and digital enhancements. In this paper, we present an augmented reality coloring book App in which children color characters in a printed coloring book and inspect their work using a mobile device. The drawing is detected and tracked, and the video stream is augmented with an animated 3-D version of the character that is textured according to the child’s coloring. This is possible thanks to several novel technical contributions. We present a texturing process that applies the captured texture from a 2-D colored drawing to both the visible and occluded regions of a 3-D character in real time. We develop a deformable surface tracking method designed for colored drawings that uses a new outlier rejection algorithm for real-time tracking and surface deformation recovery. We present a content creation pipeline to efficiently create the 2-D and 3-D content. And, finally, we validate our work with two user studies that examine the quality of our texturing algorithm and the overall App experience.
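The texturing step the abstract describes, copying color from the tracked 2-D drawing onto both the visible and occluded parts of the 3-D character, can be pictured with a very rough sketch. Everything below (the UV-to-drawing lookup map, the array shapes, the function name) is hypothetical and stands in for Disney’s real pipeline, which also tracks the deforming page and rejects outliers.

```python
import numpy as np

# Loose, hypothetical sketch of "live texturing": each frame, sample the
# rectified camera image of the coloring page at precomputed positions to
# refresh the character's texture. Not Disney's implementation.

def update_texture(drawing_rgb, lookup_rc):
    """
    drawing_rgb : (H, W, 3) rectified camera image of the coloring page
    lookup_rc   : (TH, TW, 2) integer (row, col) into drawing_rgb per texel
    Returns the (TH, TW, 3) character texture sampled from the drawing.
    """
    rows = np.clip(lookup_rc[..., 0], 0, drawing_rgb.shape[0] - 1)
    cols = np.clip(lookup_rc[..., 1], 0, drawing_rgb.shape[1] - 1)
    return drawing_rgb[rows, cols]

# Placeholder data: a 64x64 texture fed from a 480x640 image of the page.
drawing = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
lookup = np.stack(np.meshgrid(np.linspace(0, 479, 64).astype(int),
                              np.linspace(0, 639, 64).astype(int),
                              indexing="ij"), axis=-1)
texture = update_texture(drawing, lookup)   # re-run as the child keeps coloring
```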

Download File “Live Texturing of Augmented Reality Characters from Colored Drawings-Paper”
[PDF, 1.72 MB]


Categories
Animation Disney Technology

Glen Keane ‘drawing sculptures’ in virtual reality with Tilt Brush

Photo from Wired

“I can put goggles on and I just step into the paper and now I’m drawing in it,” Keane says. “Today, all the rules have changed.”

“All directions are open now, just immersing myself in space is more like a dance. What is this amazing new world I just stepped into? When I draw in VR I draw all the characters real life size. They are that size in my imagination. The character can turn […] and even if you take the goggles off, I’m still remembering — she’s right there, she’s real.”

Over nearly four decades at Disney, Glen Keane animated some of the most compelling characters of our time: Ariel from The Little Mermaid, the titular beast in Beauty and the Beast, and Disney’s Tarzan, to name just a few. Keane has spent his career embracing new tools, from digital environments to 3D animation to today’s virtual reality, which finally enables him to step into his drawings and wander freely through his imagination. At FoST, he’ll explore how to tap into your own creativity, connecting to emotion and character more directly than ever before.

More on the FoST Summit here.

Futureofstorytelling.org website.

More about Tilt Brush

 

Categories
Film Sound Technology

Lucasfilm, Industrial Light & Magic and Skywalker Sound Launch ILMxLAB

From SoundWorks Collection

Industrial Light & Magic (ILM) and parent company Lucasfilm, Ltd. announce the formation of ILM Experience Lab (ILMxLAB), a new division that will draw upon the talents of Lucasfilm, ILM and Skywalker Sound. ILMxLAB combines compelling storytelling, technological innovation and world-class production to create immersive entertainment experiences. For several years, the company has been investing in real-time graphics – building a foundation that allows ILMxLAB to deliver interactive imagery at a fidelity never seen before. As this new dimension in storytelling unfolds, ILMxLAB will develop virtual reality, augmented reality, real-time cinema, theme park entertainment and narrative-based experiences for future platforms.

 

Click here for an exclusive interview with Rob Bredow at FX Guide.
