Cinematography Mailing List - CML


Compositing 101

Finally, a place for me: maybe this new list can serve as the CML Confessional as well? A place to fill in the weak spots, so to speak.

I'm not much on SFX, so I thought this might be a good place to ask something too basic for many on the mothership. I once shot a simple bluescreen plate for a spot and it came out well, but I'm afraid I'm unqualified to field questions from producers, let alone design treatments where mattes will be pulled in post.

As I recall, the matte I did shoot was pulled on an Avid. Since then I've become a basic NLE editor myself (from making my last reel); I usually work on Media 100. I'm curious as to how much of a bitch it would be to composite myself with a pro NLE.

I just figure if I work a composite scene out myself I'll understand more intuitively how to approach the photography in future.

So here's a question from an SFX idiot. If I shoot a wide shot of a DP jumping off a ladder (holding an Arri 16Bee El) on blue/green screen and another wide shot of the Golden Gate Bridge, would anyone care to comment on how they would approach this?

Anyone care to comment on the do-ability of combining the scene on an NLE by a weekend plinker?

thanks in advance,

caleb "10 hail Mary's" Crosby


Re: composting

I believe the process involves stacking layers of organic material, letting it decompose, mixing it regularly until you get a fine silt that you can use in your garden.


OK Caleb,

One point I'm not clear on: is this a POV of the person jumping off the bridge, or is it a shot of the bridge with someone jumping off it?

Anyway, if you want to shoot handheld then you have to put some kind of tracking marks on the blue/green screen.

These are clearly defined fixed points; they can be white crosses, but you'll find it easier to remove them later if they're something like fluorescent pink :-). You won't need to worry about what colour they are if the foreground object never passes over them, as they can be garbage matted out.

The compositing software then locks onto these foreground reference points and moves the background picture in sync. It's so easy to say, isn't it :-) a lot harder to do!
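
To make that concrete, here's a very rough sketch of the idea in Python with OpenCV; the file names, marker position and the keying step are all placeholders, not a recipe. It tracks one marker from frame to frame, then shifts the background plate by the same offset so it stays locked to the handheld move.

import cv2
import numpy as np

cap = cv2.VideoCapture("foreground_bluescreen.mov")   # hypothetical foreground plate
bg = cv2.imread("golden_gate_still.jpg")               # hypothetical background still

ok, first = cap.read()
x, y, w, h = 100, 80, 24, 24          # rough position of one tracking cross in frame 1
template = first[y:y+h, x:x+w].copy()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Find where the marker has moved to in this frame
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (tx, ty) = cv2.minMaxLoc(result)
    dx, dy = tx - x, ty - y
    # Shift the background by the same offset so it "sticks" to the camera move
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    bg_moved = cv2.warpAffine(bg, M, (frame.shape[1], frame.shape[0]))
    # ...pull the blue/green key on `frame` and composite it over bg_moved here...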

A number of the budget NLE systems now have these facilities built in. After Effects 4, Commotion and the lowest-priced Discreet software are all capable of this, but don't hold your breath waiting for them; they'll take hours, if not days, to render scenes.

Cheers

Geoff


Hi there!

I wait in eager anticipation for somebody to answer Caleb's question about compositing. I, for one, have never been lucky enough to shoot any matte shots, ever. I am, however, hoping to learn how to do this one day, as I see it as an absolutely necessary skill.

In addition to what Caleb has asked, I would also like to add this:

I have seen many examples in books, of the classic "glass shot" effect, where, say for this example, the top three stories of a building are painted onto a pane of glass and positioned in front of the camera to add the additional stories to a standing set of the ground floor. Now, if I wanted to use a computer generated image, instead of a glass shot, would I produce the cgi before shooting the shot, and use some form of image overlay to compose the shot? Also, if I wanted to move the camera as well, would this entail using a motion control rig to feed the cgi, or would I produce the cgi, then have the motion control rig move according to the cgi specs?

I guess for someone who hasn't shot all that much, this seems overly complicated, but at the same time, it's something that completely fascinates me, and I can't wait for your explanations!

Yours
Christian Lau
"The best way to start shooting films, is to start shooting films..."
Video Assist/Camera Assistant


>Now, if I wanted to use a computer generated image, instead of a glass shot, would I produce the cgi before shooting the shot, and use some form of image overlay to compose the shot?

Generally you would design the shot as a storyboard, then shoot the background plate first in order to have something to match your digital artwork to.

In the old days of the glass shot, the painting would be created first, of course, but usually for a predetermined time of day, probably with a photograph of the building as a guide -- at which point they would return at the correct time of the day so that the light in the painting matched the light on the building (I'm talking about day exterior shots of course.)
There are a few cases of the painting being done quickly on the spot, but that was usually for a minor adjustment to the image.

Jack Cardiff has talked about using glass paintings to quickly alter a shot, like adding a new sky with a sun in the frame, created by reflecting a lamp in the glass (this was for Vidor's "War & Peace".) I think he did the painting himself on set with a can of spray paint.

>Also, if I wanted to move the camera as well, would this entail using a motion control rig to feed the cgi, or would I produce the cgi, then have the motion control rig move according to the cgi specs?

Usually you would shoot the motion control shot first and then use that info to create the CGI shot. You could NOT use motion control, but you would end up having to manually plot the move (called "match-moving") in a laborious process almost like rotoscoping. Lucas, on "The Phantom Menace", felt that motion control would be too time-consuming for some of the shots of the skeletal C-3PO robot, which had a human operator standing behind the puppet, so the camera operator just attempted to recreate his camera moves manually after the take in order to provide ILM with a clean background plate of the set. This was next to useless and cost them a lot of time carefully digitally painting out the puppet operator frame by frame.

Matching later live-action to pre-created CGI animation is nearly impossible. There was some attempt for "The Phantom Menace" to design entire shots, camera moves and all, in a computer and then use the computer information to recreate that move on a miniature -- but it was discovered that real life is too unpredictable and that adjustments always have to be made when filming physical reality.

I remember in "Cinefex #1" back in 1980, Doug Trumbull talking about why the idea of pre-programming a move on a miniature didn't work that well -- because part of the art of miniature photography is making adjustments by eye. Also, the computer would have a tendency to crash the camera into the miniature because someone failed to take into consideration some minor detail like the height of the camera mag...

Obviously, it is a lot easier to create a matte painting in a static shot than in a moving shot.

There are a number of other old-fashioned techniques that still work quite well, sometimes better than any CGI work -- like hanging miniatures (a.k.a. forced perspective miniatures), sometimes combined using mirrors (the Schüfftan process).

And you'd be surprised to learn that a number of landscape & city shots in "The Phantom Menace" still relied on old-fashioned models and were not digitally created from scratch, only digitally touched-up or combined with CGI elements.

David Mullen
Cinematographer / L.A.


Am I correct then, in saying that the basic principle is to use the computer's inherent flexibility to adapt a digital image to the live shot?

What are the most important issues when you are shooting the live stuff? I'm assuming the presence of a vfx supervisor throughout this process...surely the capabilities of the digital artist and the machine being used to generate the image, have major implications for how you shoot the live stuff? Any rules of thumb? Colours? Light types? Movement limitations? Perspective issues?

Are motion control rigs capable of generating information which can be used to manipulate a digital image? I mean, if I want to track around the actor, with a bluescreen in the background, and add a cgi building in behind him, does the motion control rig give me any information of use (digital information, used in some or other graphics rendering machine)? Like how fast the different walls of the building should be made visible/invisible. I suppose this information would be very useful for altering perspectives and shadow/light movement in a cgi.

Forgive the many questions! (prompted by your great explanation)

Christian
Video Assist/Camera Assist


The motion control rig can generally output information on all its axes; these can then be tracked in 3D by the software.

It's great in theory, and does work, but it can take a lot of effort to get it to work in the first place.

Geoff


Much appreciated Geoff,

I'm printing your reply and taking it with me to Oklahoma in the morning. The director called this evening, fired the DP in the middle of the shoot, and hired me on a recco from a CMLer (remember Troy?). All I know is that I'll be shooting from a cam truck in a herd of horses with KKK (read: mounted klanners) at 11 in the morning.

Darndest thing is he asked about shooting blue screen on ext. location - and this was a few hours after I innocently posted my question. Weirdness.

Well, sorry for my abusive language to Pat; I hope at least Pat noticed the smile in my sign-off. I never used to swear - but then I started.

See y'all in 2 weeks.

Got a green director and crew, a 5 ton and an XTR- nothing digital in sight,

ahhhhh, caleb


David mentioned :

>There are a few cases of the painting being done quickly on the spot, but that was usually for a minor adjustment to the image.

I did six of these last year and, believe me, it is very difficult to do one of these off set. The way I was taught is that the camera is locked off so that no one can touch it. The glass matte painter then looks through the camera to see the shot, then does the painting. Of course this is a lot easier with video assist. The painter would have an assistant helping with the sketch-in. We did one off set due to cold weather, but we were only able to do it after the artist had done all the sketching and had tested the colors.
Our frame was built to pivot to allow the glass to be easily pulled and put back in place with little movement. This one ended up as one of my least favorites.

-JR Allen


>Are motion control rigs capable of generating information which can be used to manipulate a digital image?

I'm not sure whether you understand how CG animation programs work. They basically create an artificial "world", in which there is a camera that simulates a real world camera and objects and lights which simulate real world objects and lights. The "trick" is to match the parameters of the camera to the real world camera used for the live action, and the lighting (or at least the visible effects of the lighting) to the real world lighting.

In nearly every CG program, there are setup parameters for the camera that include field of view, aspect ratio of the visible area, lens length, and position in 3 dimensional space (height, pan, tilt, rotation (dutch angle), and an x-y-z coordinate location). To match any camera moves, the CG camera must be moved identically. This is done by a combination of methods, some automated, some manual.

Usually in large features a set survey is taken so that 3D geometry can be constructed for set features such as ground plane, buildings, etc. In smaller productions (and sometimes even in large ones) items are either placed in the shot as references or set pieces are measured and used for the same purpose. Camera information is taken to create a virtual matched camera. The CG camera is then moved so that the 3D geometry lines up with the real thing.
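
As a toy illustration of that "virtual camera" idea, the projection a CG program performs boils down to something like the Python below. The gate width, resolution and the no-rotation simplification are all invented for the example; a real match-move also applies pan/tilt/roll.

import numpy as np

def project(point_world, cam_pos, focal_mm, film_width_mm=24.0, image_width_px=2048):
    # Move the point into camera space (camera at cam_pos, looking down -Z)
    p = np.asarray(point_world, float) - np.asarray(cam_pos, float)
    px_per_mm = image_width_px / film_width_mm
    # Classic perspective projection: image position = focal length * X / depth
    u = focal_mm * p[0] / -p[2] * px_per_mm + image_width_px / 2
    v = focal_mm * p[1] / -p[2] * px_per_mm + image_width_px / 2
    return round(u), round(v)

# The same set corner seen from the same spot through a 25mm and a 50mm lens:
corner = [1.0, 0.5, -10.0]       # metres, 10m in front of the camera
print(project(corner, cam_pos=[0, 0, 0], focal_mm=25))   # wider framing
print(project(corner, cam_pos=[0, 0, 0], focal_mm=50))   # tighter framing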

There are various automated methods of doing this now available. If a mocon rig is used, the data can be translated to something the CG program understands. However, this is almost never as "automatic" as you might think; it almost always has imperfections that must be fixed by hand. A newer method is 3D tracking software, such as the built-in 3D Studio Max camera tracker, 3D Equalizer, Match Mover, or Maya Live. These programs all deduce movement in 3D space by tracking 2D points on the image and, using a combination of mathematical techniques, figuring out where the camera was and its characteristics. The more set information you can supply, the more accurate these programs are. Except for the 3D Studio Max tracker, which requires some set measurements between tracking points, they all work purely from the image itself. The results, although rarely perfect, are pretty impressive nonetheless. You should also be aware that these programs actually require the camera to move in order to do their job; they will not help you if the camera is static.
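
Under the hood, those trackers are solving something like the following. This is a deliberately simplified Python/OpenCV sketch with invented survey and tracking numbers; the real programs also solve for the point positions and the lens themselves, frame after frame.

import numpy as np
import cv2

# Surveyed set features, in metres (hypothetical survey data)
object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0],
                       [0, 1, 0], [1, 0.5, 1.5], [0.5, 1.0, 1.0]], dtype=np.float32)
# Where those same features were 2D-tracked in one frame, in pixels (hypothetical)
image_pts = np.array([[420, 610], [980, 605], [985, 330],
                      [415, 335], [700, 150], [560, 260]], dtype=np.float32)

# Idealised camera: focal length in pixels, principal point at the frame centre
K = np.array([[1800, 0, 960],
              [0, 1800, 540],
              [0, 0, 1]], dtype=np.float32)

# Recover the camera rotation and translation that best explain the 2D tracks
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec     # where the camera sat, in set coordinates
print(camera_position.ravel())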

In many, many cases, the shot can be set up in a way that 3D tracking is not really required, where 2D tracking (via Flame, After Effects, Digital Fusion, Chalice, or almost any other compositing program) can largely do the trick. Shots designed for this would avoid serious perspective moves in favor of long lenses and pan and tilt moves only. This can also be effective for adding 3D "enhancements" to objects in a shot, such as a 3D prosthetic.

By doing only the rotational moves in the 3D software (hand tracking) and doing the X-Y manipulation using motion tracking in 2D, a very accurate track can be created. I've added 3D noses and tongues this way numerous times.

I hope I haven't confused you.

Mike Most


>Also, if I wanted to move the camera as well, would this entail using a motion control rig to feed the cgi, or would I produce the cgi, then have the motion control rig move according to the cgi specs?

Unless you are doing a really serious perspective move, you can usually employ 2D tracking techniques to accomplish many set replacements and extensions. I did an entire scene (over 40 shots) in which we had to take actors shot on a baseball field in L.A. and put them into Boston's Fenway Park. We never locked down the camera, did all the normal rack focusing, and basically shot as if we were in Fenway. We then shot static plates in Boston and married the whole thing using 2D tracking exclusively. The results were totally convincing. No motion control in sight.

The key to all this was the use of long lenses to flatten out the perspective shifts, but it was admittedly a situation where this was the right shooting choice anyway due to the subject material. But certainly for pan and tilt moves on landscapes, or anything similar, motion control is completely unnecessary.

Mike Most


With regard to hanging miniatures and forced perspective sets and the like, you can often do completely manual camera moves as long as you are using a nodal head. The nodal head gets rid of parallax clues that give away the trick, so that you can pan and tilt (and even sometimes dolly) on a set that has as part of its elements hanging miniatures etc. Presumably if you were trying to use CGI elements to track in instead, your life might be made easier by using a nodal head for your foreground element photography so that you do not have to worry about the parallax shifts that would have to match in the painted in portion.

When I talk about dollying, by the way, I would be talking about tracking through a piece of 100% real set, stopping, and then nodal panning or tilting onto the part of the set that includes the hanging miniature or forced perspective stuff.

Mark"i just built a nodal bracket for a vista vision camera on this job" Weingartner


>I'm not sure whether you understand how CG animation programs work.

I had no clue, thanks for all that info!

If I understand it correctly, when you put the live image together with the CG image you then fine tune it by eye?

Christian


>If I understand it correctly, when you put the live image together with the CG image you then fine tune it by eye?

Basically, yes. But you do the fine tuning primarily during compositing.

Mike Most


Mike Most writes :

>A newer method is 3D tracking software, such as the built-in 3D Studio Max camera tracker, 3D Equalizer, Match Mover, or Maya Live...

Just shot some tests for 3D Equalizer. A 180-degree arcing move around a subject. It gave an amazingly accurate 3D track without any manual adjustments. We had 6 tracking points in frame (all strewn on the floor, all on one plane). No camera data or measurements taken, but I saw the 3D guys sneak back on stage later on to take a few quick surveys.

Did I mention that the shot was handheld (real rough - a Mitchell Fries underslung at my waist)? :-)

I think the lens distortion near the edges sometimes causes the "straight" 3D CGI world to _slip_ a bit... amongst other reasons. But the 3D operators didn't think they'd need too many man-hours to fix this.

Mark Doering-Powell


Actually, all of the 3D tracking programs really like handheld, because they depend on interframe movement to determine parallax between tracking points.

Except for Max (which requires measurements between points, although no actual camera information is needed), these programs only work if there is camera movement. I do find it interesting that all your tracking points were on one plane, because these programs usually want at least one point to be on a different ground plane (at least in the case of Max and Match Mover, which is what Maya Live is based on).

>I think the lens distortion near the edges sometimes causes the "straight" 3D CGI world to _slip_ a bit...

In most cases you would probably attempt to "fix" the distortion in the plate photography using 2D methods, composite the CG, then recreate the lens distortion.
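
A rough sketch of that undistort / composite / re-distort workflow is below. The distortion coefficient, camera numbers and file name are placeholders, and real pipelines derive them from properly shot lens grids rather than guesses.

import numpy as np
import cv2

K = np.array([[1800, 0, 960],
              [0, 1800, 540],
              [0, 0, 1]], dtype=np.float32)
dist = np.array([-0.12, 0, 0, 0], dtype=np.float32)   # k1 only: mild barrel distortion

plate = cv2.imread("plate_frame.png")                  # hypothetical scanned frame
h, w = plate.shape[:2]

# 1. Straighten the plate before tracking / adding CG
flat = cv2.undistort(plate, K, dist)

# 2. ...composite the (distortion-free) CG over `flat` here...
comp = flat

# 3. Push the finished composite back through the same lens distortion so it
#    matches the rest of the (distorted) footage. For every pixel of the output
#    we look up where it lives in the undistorted image.
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
pts = np.stack([xs.ravel(), ys.ravel()], axis=-1).reshape(-1, 1, 2)
undistorted_pos = cv2.undistortPoints(pts, K, dist, P=K).reshape(h, w, 2)
map_x = np.ascontiguousarray(undistorted_pos[..., 0])
map_y = np.ascontiguousarray(undistorted_pos[..., 1])
redistorted = cv2.remap(comp, map_x, map_y, cv2.INTER_LINEAR)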

Mike Most


Hello,

I'm a student in cinematography and I did some research about blue/green key compositing this year.

My teacher recently showed me an advertisement from KinoFlo (in the form of a newsletter). The paper was about a special way of lighting bluescreens. The screen is made of a fluorescent material and is lit by special KinoFlo tubes that emit ultraviolet radiation. According to this ad, the system greatly reduces blue-spill problems, to the point that glossy subjects can be filmed in front of the screen without prejudice to the matte (a picture of a miniature plane made of unpainted metal illustrated the point).

My teacher assumed that the light coming from the screen and reflecting off glossy objects was of such a nature that it wouldn't register on film. I find this hard to believe. I guess/assume that the UV light coming from the KinoFlos hits the fluorescent screen, is changed to visible radiation and reflected back to the camera, so the spill would also be made of visible light, and therefore could register on film. But then, what makes them say that this system reduces blue spill?

I'd like to hear your comments on that question, especially of course if some of you had the opportunity of using that system. What are the advantages of that system ?

Finally, I want to say that I'm not sure whether this post really belongs in the CML-101 category, but I'm sure I do, so I thought it would be less intrusive, as a student and quiet listener to the CML lists, to send this post to the CML-101 list.

Donat Van Bellinghen
Studying at IAD - Image section


I have used UV light to light orange screens, but the blue and green screen work that we do is generally done with narrow-spectrum fluorescent tubes that radiate in the visible light range. The blue tubes may have a bit of UV (though they claim to be UV free), but the energy is predominantly at the blue end of the visible light range.

>My teacher assumed that the light coming from the screen and reflecting on glossy objects was of such a nature that it wouldn't register on film.

UV light DOES register on film... unless one uses a UV filter on the lens. As you surmise, the screen itself fluoresces in response to the UV radiation, and the light reflected back to the camera is indeed visible and will register on film. Think of the screen as the light source and the UV radiation as the power that runs it... In a way, you are just putting the camera and the model inside the fluorescent tube.

So what's the spill savings? The big help this approach offers is in making it easier to light the screen itself without spilling front-light on the miniature. This is especially useful with wide angle shots where there is not room to get the matte screen back far enough to light it while keeping the miniature unlit.

The UV that hits the model and lights it in the UV spectrum is filtered out by the UV filter on the lens. The best way to minimize the visible contamination caused by the reflection of the screen on the surface of the miniature is to mask off the screen so that only as much of it as is needed to show the edge through the camera move is visible.

You never see that in the photos because it does not look as interesting or impressive.


There's a page on the cml web site with frame grabs demonstrating the Kino Blue and Green tubes at different exposure levels.

I can't remember the address of it; the brain's a bit frazzled.

You still get spill, but as you're working at a much lower level AND it's such a clearly delineated spectral thing, it's easy to remove.
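
For what it's worth, the classic spill-suppression trick the software applies is just a channel clamp, something like this bare-bones Python/NumPy sketch (not any particular package's method):

import numpy as np

def suppress_blue_spill(rgb):
    # rgb: float image in 0..1, shape (height, width, 3), R/G/B order.
    # Anywhere blue exceeds green is treated as screen contamination and
    # clamped down to the green level; neutral and warm tones are untouched.
    out = rgb.copy()
    out[..., 2] = np.minimum(out[..., 2], out[..., 1])
    return out

def suppress_green_spill(rgb):
    # For a green screen the usual variant clamps green to the average of R and B.
    out = rgb.copy()
    limit = (out[..., 0] + out[..., 2]) / 2.0
    out[..., 1] = np.minimum(out[..., 1], limit)
    return out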

Geoff


If a person is sitting or lying with a green-screen background, is there a way to get rid of, or at least minimize, the spill in the shadows? Do I pile the subject up with green boxes, or what?

Also advice on how to minimize the spill in extreme low key lighting situations is very welcome. F.ex. if there is only one lighting source from 90deg angle or behind.

If anyone has some basic rules for green/blue screen lighting/filming (video), I'd like to hear them.

Regards

Claus Lee Frederiksen


>If a person has to sit or lie at the g-screen, is there a way to get rid of, or at least minimize, the spill in the shadows...

I would investigate putting your subject onto mirrored plexi (on top of a 50cm platform) which reflects the bluescreen behind them. No spill / contamination problems; you just need to roto their reflection (which tends to be more static).

Also: put them on a clear plexi platform that's raised above a blue floor. I just did this when shooting 17 large rattlesnakes on bluescreen.

Mark



Copyright © CML. All rights reserved.