Cinematography Mailing List - CML
    advanced

Build Your Own AVATAR Virtual Camera Rig

 

Based on my own limited understanding of the Cameron virtual camera rig, and not having access to more material on the subject, here is a post that may inspire directors with a lower budget to do something similar.

Build your own...


http://bit.ly/8hcdlA

I was kindly reminded by a member that this is CML and I’m posting about using a gaming engine as the core for doing CGI movie "directing".

Actually, you would be surprised at where technology has gone.


The aim of using a gaming engine as powerful as CryEngine or Unreal Engine 3 is to help with previs and proof of concept for hybrid CGI/live-action movies, before spending millions on R&D or needing Cameron-sized budgets. In some cases even finished rendered scenes can be output (under license) where extreme close-up shots are not needed.

Do take a look; the same concepts can then be used if you insist on using "standard" offline rendering tools like Vue 7 for environments and Autodesk MotionBuilder.

best wishes.


Clyde DeSouza
Real Vision
Dubai, UAE
www.realvision.ae/blog


As the guy who ran the virtual camera on set for the entire three years of filming, I will just say - no, not even close.

The virtual camera was WAY more than just a window on the world.

WAY, WAY more.

David Stripinis


Hi David,


Thanks for replying. I can imagine that it was way more than what I've described. (I've not seen it documented anywhere as yet)

I'm just providing food for thought, a "seed idea" as stated.
Three years ago was a long time. Much has changed in technology since then.

Would love to hear more, or at least a bit about how much more was done with the virtual camera, if it's not "classified".

Regards
Clyde DeSouza
Real Vision,
UAE


I'd be really interested to know how the camera was tracked even in the live action scenes. Even in the greenscreen shots and command room set sequences I've seen on the making-of featurettes, I couldn't see any tracking markers and the matching was perfect.

Cheers,

Jon Rennie
VFX
Cardiff, UK


After much searching I did manage to find an interview with Cameron where he speaks about the "Simulcam", at around 19 minutes into the video here: http://www.youtube.com/watch?v=Aao0YSITuxc

The camera has infra-red markers, so they would not interfere with the set lighting (previously they were using retro-reflective markers). It took three months to figure this out? There are many other kinds of trackers (opto-electric, inertia-based, etc.), and as I stated, three years is a lot of time. You could use an Xsens tracker for this.

Do take a look at the article I've written: http://bit.ly/8hcdlA
and let me know "how much more" the Simulcam was, and what areas I have missed out on.

Forgive me for saying this, and I don't direct it at anyone in particular, but Hollywood does have a knack for glorifying anything it does!

Regards
Clyde DeSouza
Real Vision,
UAE


There are infrared LEDs on the camera being picked up by MotionAnalysis cameras running Giant Studios' realtime software. This solves for the camera and is fed out in real time to MotionBuilder to provide the CG element, and the realtime key is then displayed to the camera operator and video village.

There are tracking markers for the final solve, just subtle.
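
To picture the data flow being described, here is a minimal, purely illustrative Python sketch of the solve -> render -> key loop; the function names are invented stand-ins and are not the actual Giant Studios, MotionBuilder or keyer interfaces.

    # Illustrative only: one "frame" of a simulcam-style loop.
    # solve_camera(), render_cg() and key_over() are invented stand-ins.

    def solve_camera(ir_blob_positions):
        """Crude stand-in for the optical solve: average the detected IR
        blob positions into a single camera position (a real solver also
        recovers orientation and does far more filtering)."""
        n = len(ir_blob_positions)
        return tuple(sum(axis) / n for axis in zip(*ir_blob_positions))

    def render_cg(camera_position):
        """Stand-in for the realtime CG element rendered from the solve."""
        return {"camera": camera_position, "layer": "virtual set"}

    def key_over(live_frame, cg_background):
        """Stand-in for the realtime key sent to operator and video village."""
        return {"fg": live_frame, "bg": cg_background}

    blobs = [(1.0, 2.0, 3.0), (1.1, 2.0, 3.1), (0.9, 1.9, 2.9)]
    composite = key_over("live plate", render_cg(solve_camera(blobs)))
    print(composite)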

David Stripinis
Double Negative


The nearest I've found to an explanation is this:


http://www.fxguide.com/article583.html

which involves tracking markers placed on the ceiling. You could also do this:


http://www.boardsmag.com/screeningroom/tvfilm/8445.html

which involves putting tracking markers on the camera and using reference cameras placed around the set. I think the former would be more accurate and cost-effective, but not as much use in a studio if you need to have a covered set.

Jon Rennie
VFX
Cardiff, UK


Thanks David, for letting us know this.

This is exactly what I had already worked out and included in my article after finding that video interview with Cameron that I posted in this thread. I'm really surprised that the tech people needed that long to figure out that they needed to replace retro-reflective passive markers with IR emitters!

With the kind of budgets available to them, an Xsens tracking system could have been put in place within a day!

There really is not much "way more" to the virtual camera than what I have explained in my article. Nothing earth-shattering was done (not that I’m saying any such claims were made by the Avatar team).

Virtual Cameras were being used to do cinematic "cut scenes" a long time before Hollywood picked up on this!

The credit that I will give is that Cameron had the independent insight to actually use this for filming a Hollywood movie, and to chroma-key the live action with the background stereo CG plates.

@the member that mailed me off-list: you're welcome, I'm glad the article was of help. No, it won't be a few years before people shoot like this; it can already be done today and has been done (non-stereoscopically) for the past couple of years at gaming studios!
Thanks for the kind words though, and wish you the best!

Regards

Clyde DeSouza
Real Vision,
UAE


You guys are thinking about ONE aspect - tracking the camera. Not about syncing the performances, driving moco rigs, or the fact these were not static scenes, but dynamic, with the ability to change and
modify every detail in the scene live.

If you want to know more, you have to wait till I'm available and hire me. But a bunch of the smartest people I've ever worked with spent 3+ years coming up with the hardware, software and workflows for all this, and to think you can replicate it based on off-the-shelf solutions and reading a few paragraphs of an article is a friggin joke and pure arrogance and Hollywood hate.

You know, I listen to that red centre podcast, maybe I'll go build a 5K camera this weekend. Shouldn't be hard. Nah, it's Christmas, I'm busy.

David Stripinis


>>"You guys are thinking about ONE aspect - tracking the camera. Not about syncing the >>performances, driving moco rigs, or the fact these were not static scenes, but dynamic, with the >>ability to change and modify every detail in the scene live."...

What part of these did I not address in the article? Can you point them out? YES, DYNAMIC is what the engines are all about!


They import pre-made scenes from industry-standard 3D modelling packages, and you do rapid scene blocking, camera paths, and checks for stereoscopic disparities ALL IN REAL TIME. Then you send these "paths" back to your render farm to output in "cinematic" quality.

David, with all due respect, if one group of people spends 3+ years on something, it does not mean that they know more. I have spent 15+ years in stereoscopy and real-time graphics; add to that experience more than 3 years with LIDAR (laser scanning of architectural buildings), CAVE and Powerwall stereoscopic rooms, and you can see where the "experience" is coming from.

The difference is that I choose to make the knowledge free, and if nothing else, it does not detract but actually encourages others to BUILD on the ideas and tangible hardware/software that I mention.

This is not Hollywood bashing; this is simply arrogance and ego worship not being practiced, that's all.

Kind Regards
Clyde DeSouza
Real Vision
UAE,


> This is not Hollywood bashing; this is simply arrogance and ego worship not being practiced, that's all.

Gentlemen,

This is also the second list I'm on which reached its "Jane, you ignorant slut" moment today (to quote Dan Aykroyd's classic shtick). Time for a secret Santa?

Tim Sassoon
SFD
Santa Monica, CA


I get really fed up with threads like this.

Happily they don't happen that often.

If you're so bloody good and far ahead of everyone else, why aren’t you rich and famous?

Ah wait, it's the conspiracy that prevents you.

Doesn't matter which conspiracy, there are plenty to choose from.

Cheers

Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7831 562877
www.gboyle.co.uk
www.cinematography.net


Wow Geoff,
So "rich and famous" is your definition of life?

I make enough money doing something else. How do you know I'm not rich? Do you think wealth comes only from being involved in films?

FYI - my money is in real estate; technology and stereoscopy are my passion.

And if you take the time to read the article, you will learn something. Others have already sent their thanks and YES they are from Hollywood.

The world is bigger and more diverse than what you see in a camera's viewfinder. I wish you'd show some courtesy before you comment with such impunity.

Regards
Clyde
Real Vision
UAE


I like the term 'Hollywood hate'! I wasn't trying to trivialise the achievement here.

Personally, I'm very interested in the camera tracking, particularly for the live action scenes, because this is the area not really addressed in the Avatar behind-the-scenes movies (at least, not until the DVD release). The motion capture stage is featured a lot, but not the live action stuff. I spend a lot of time trying to match cameras for 3D effects, not always successfully, and I'm keen to understand how Avatar dealt with this for such a large number of composites for the base and the command centre. If the camera was tracked on every set using IR markers and then that camera data (including all the lens metadata) was parcelled up for each shot to the VFX studio, I'd very much like to see how I can replicate some of that, albeit on a much more basic level. It seems to me that this is what RED is inching towards by using the Cooke /i data and soon the Arri motion head data. I'd like to know how it all worked throughout post production.

The example articles I mentioned before were only a guide for how this workflow can be used without relying on post tracking. Avatar's is certainly a generation beyond that, but it is also out of reach for many smaller productions.

Jon Rennie
VFX
Cardiff, UK


> So "rich and famous" is your definition of life?

No, it was a flippant and glib throwaway line.

I have read the article.

Cheers

Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7831 562877
www.gboyle.co.uk
www.cinematography.net


>>Avatar's is certainly a generation beyond that, but it is also out of reach for many smaller productions.

I started in CGI and migrated over to live action, so a lot of this is familiar - but it's been so long since I have been in that world, I was wondering what it would take in terms of dollars for a smaller outfit to get into something like this. Say for starters - 3D Modeller, Game Engine, and some type of tracking apparatus / software. Anyone care to venture a guess?

After visiting some of the sites, I am led to believe that it would be a somewhat expensive proposition to do correctly (read - without hacking). Still, it does sound intriguing, and the idea of creating wildly fantastic worlds far beyond the limits imposed on the film maker of modest means does whet my appetite.

However, there seem to be some issues in regard to asset copyrights and usage that still present a large grey area. I cite the Wikipedia article listed below; scroll down to the legal issues.

http://en.wikipedia.org/wiki/Machinima

If that's the case, then it seems that anything other than a fan-based film / student film would be faced with a huge potential outlay in either time (to build their own assets) and/or money (to have assets built, or license them), which then contradicts the idea that this provides an inexpensive alternative for the average indie film maker.

Brent Reynolds
Tampa, FL
DP Producer


Hi All,


Without wanting to sound more controversial, or like a know-it-all, I would like to offer some thoughts.

The good thing about working with game engines is that they really are state of the art in what they do today. I am only familiar (and not an expert yet) with CryEngine and its real-time Sandbox editor. The bonus is that this comes free when you buy the DVD game at the local high street store. The same goes for any game using Unreal Engine.

Now, it's true that you need a license to use the final output, but as far as I know there is no restriction on doing in-house prototyping for the indie developer: "blocking shots", testing for stereo window violations, assigning a stereo depth budget etc., all in real time and with the live talent chroma/green-keyed over it.
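
To make "stereo window violations" and "depth budget" concrete, here is a small Python sketch using the standard parallel-rig parallax relation d = f * t * (1/C - 1/Z); the figures and the 1%-of-screen-width budget are illustrative assumptions, not values from any particular engine or show.

    # Illustrative depth-budget check for a parallel stereo rig converged by
    # horizontal image translation. All numbers below are made up.

    def screen_parallax_mm(focal_mm, interaxial_mm, convergence_m, subject_m,
                           sensor_width_mm, screen_width_mm):
        """Parallax on the projection screen in mm.
        Negative = subject appears in front of the screen (window-violation
        risk if it touches the frame edge); positive = behind the screen."""
        d_sensor = focal_mm * interaxial_mm * (1.0 / (convergence_m * 1000.0)
                                               - 1.0 / (subject_m * 1000.0))
        return d_sensor * (screen_width_mm / sensor_width_mm)

    # Example: 35 mm lens, 65 mm interaxial, converged at 4 m, subject at 2 m,
    # ~25 mm wide sensor, 10 m wide cinema screen, budget of 1% of screen width.
    p = screen_parallax_mm(35, 65, 4.0, 2.0, 25.0, 10000.0)
    budget = 0.01 * 10000.0
    print(f"{p:.0f} mm parallax:",
          "outside budget" if abs(p) > budget else "within budget")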

Yes, you would need to get a tracker for your physical camera rig, but they are available for rent or on instalments from many virtual reality equipment providers, or they are not that expensive (depending, of course, on how indie you are).

As an update - this is from Variety. Even Cameron viewed only "crude" realtime models of the virtual characters: http://www2.variety.com/avatar/avatar.html

This kind of visualization (virtual camera) has been done for a few years now BY cinematographers who produce "cut scenes" for today's multi-million-dollar game titles.

While the credit again does go to Cameron for taking a green-screen-over-CG preview and turning it into an intuitive device for mainstream Hollywood shooting, and for giving directors a tool they are familiar with, it still is (without intending to sound condescending) not ground-breaking.

I see and respect where established Hollywood talent is coming from. The only difference I am suggesting is that film making has now moved beyond the tools Hollywood has been familiar with for many years - CG, cameras, lighting etc. - and is in a position to be integrated with other tools that scientists, geologists and stereoscopy professionals have been using for years.

Stereoscopic tracking and superimposition of CG in realtime is not new to scientists and visualizers in the manufacturing (aircraft, vehicles) and oil and gas industries. They use multi-pipe "node rendering" machines to visualize complex data-sets running to gigabytes, much, much more than the Pandora world - in real time and with the accuracy needed in those fields.

My tip is this: these scientists and archaeologists are ignorant about the making of Hollywood films; they do not know the finer points of camera angles, creative storytelling etc. BUT if astute Hollywood producers/crew get in touch with them, you'll be surprised at how easily the problems get solved. I'm talking days, not months - for example, what Cameron had to face in his video interview when he spoke about how retro-reflective markers needed to be swapped for IR emitters. To a CAVE designer this is back-of-the-hand knowledge.

Getting back to tracking a camera and stability: yes, the motion path from most tracking will have to be refined to reduce jitter etc. But that's the beauty and luxury of post production. You get all the time you need to smooth the curve.
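
For what it's worth, the post smoothing described here can be as simple as a centred moving average over each tracked channel; the sketch below is a naive illustration (a real pipeline would filter rotations separately and use something better than a box filter).

    # Naive jitter smoothing of a tracked camera path: a centred moving
    # average over one translation channel. Illustrative only.

    def smooth_channel(samples, radius=2):
        """Centred box filter; radius=2 averages 5 frames per output frame."""
        out = []
        for i in range(len(samples)):
            lo = max(0, i - radius)
            hi = min(len(samples), i + radius + 1)
            window = samples[lo:hi]
            out.append(sum(window) / len(window))
        return out

    # Example: a jittery X-translation track (metres), one value per frame.
    raw_x = [0.00, 0.02, 0.01, 0.05, 0.04, 0.07, 0.06, 0.10]
    print([round(v, 3) for v in smooth_channel(raw_x)])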

In realtime, even Cameron had a 2-to-3-frame lag on his Simulcam. So why would the indie producer complain?


The good thing about the gaming community is that there is a vast resource out there of passion-driven people... people who will develop assets just for the heck of it, and the coolness factor. Money is not the motivation for them. But of course they are not dumb either, and fairness always wins their support.

It's today’s 17-to-25-year-olds who know what real-time is all about. Marry them with the skills of a veteran cinematographer and you have hybrid films that will be ground-breaking.

P.S. I have updated the article with a valuable PDF from Autodesk on virtual film making, if it interests anyone.


http://bit.ly/8hcdlA

Regards,
Clyde DeSouza
Real Vision
Dubai, UAE

(Thank you Geoff for hosting this forum, I will be more toned down in my opinions in the future)


Clyde wrote:

>>This kind of visualization (virtual camera) has been done for a few years now BY cinematographers who produce "cut scenes" for today's multi-million-dollar game titles.

Another overnight success :-) There have been a number of real-time visualization systems in use over the years... I was involved in a small way in getting the Encodacam system on its feet for I, Robot a number of years ago.... we did not use IR or passive retro-reflective tracking - that particular project was dependent on encoded live action heads, dollies, cranes, etc.... but the fact is a significant portion of the film had the camera operator framing on CG buildings and features that would not exist at full res until months later.... but this was real-time manipulation of 3D sets with only a frame or two of lag.... the operators could work with it without their heads exploding, and the director could see what he was doing in a heavily green-screen world.

We even puppeteered some CG-only elements - controlling them in the virtual reality viewing system in real time with encoder wheels... a useful tool for the director and operators.

Mind you, this was a number of years ago... The EncodaCam system and other real-time 3D CG systems like this have been used since, on Speed Racer and many other pictures. This is a constantly evolving field, and successive iterations of this type of capability move the ball down the field, but they are evolutions. I am always slightly amused every time someone INVENTS as brand new something that we have been doing in a more primitive way before....
This goes for so many innovations in the industry... we are all fortunate in that we stand on the shoulders of giants, midgets, and nerds... those who came before us.

I would love to take credit for all the cool shit that I designed over  the years, but I have always seen these things as clever engineering -  good system integration and applied cleverness... what we do for a  living, not miracles....

I am fortunate that people hire me for being a clever boy with an idea of where we can apply something that has already been invented... so I  don't have to claim that I am an inventor.

Mark H. Weingartner
LA-based VFX DP/Supervisor

http://schneiderentertainment.com/dirphoto.htm


Mark H. Weingartner wrote:

>> I am fortunate that people hire me for being a clever boy with an idea of where we can apply something that has already been invented... so I don't have to claim that I am an inventor

I thought you invented typing with that last post.


Steven Gladstone
New York Based Cinematographer
Gladstone films
http://www.gladstonefilms.com
917-886-5858


>>I am fortunate that people hire me for being a clever boy with an idea of where we can apply something that has already been invented... so I don't have to claim that I am an inventor.

Well stated. A good note to end on.

Jonathan Flack
http://www.TwoFourO.com
+1 310 359-3510


Hi all,


Maybe some of you have seen this, but I am sure others haven't, so I hope it is informative.

The SCP Camera project was presented at SIGGRAPH 2007 "Emerging Technologies". It's a project from a French university that I am quite sure turned into a commercial product.

http://www.youtube.com/watch?v=CB5_TF7nS28

Regarding the tracking system, I used it in 1999 for realtime virtual set applications (back then with the Accom Elset virtual set - now part of Orad - and the IR cameras, encoders and reflective balls by Thoma Filmtechnik, thoma.de).

The "sensors" for the camera are passive reflectant balls, that work with a group of high speed IR cameras which detect the position of camera in XYZ. There are encoders on the lenses that track Focus, Zoom and Iris. More than one camera can be tracked by using different ball configurations. Then a virtual set software understands the captured data to compose live action footage with green screen and the CG images generated. There was a 2 frame delay on average for such configuration (you needed a FD line on the architecture obviously).

Regarding the basis of previz work like that of ILM for I, Robot, Star Wars Episode III and A.I., there is a company called Brainstorm Multimedia (http://www.brainstorm.es) that - although I don't think they are authorized to talk about it due to confidentiality agreements - I am quite sure they, or Orad (http://www.orad.tv), supplied the main virtual set visualization technology behind those works (not many others were capable of doing that so well back then).

Jordi Alonso


Editor of cine3D.com (http://www.cine3D.com),
3DMagazine.com (http://www.3DMagazine.com) and other websites.
Researcher at Mediapro Research (http://research.mediapro.es)


Jordi Alonso wrote:

>> Regarding the basis of previz work like that of ILM for I, Robot, Star Wars Episode III and A.I., there is a company called Brainstorm

Let me clear this up a bit...

I was drafted as the on-set supervisor for the system first used on I,  Robot. This had nothing to do with ILM, but a great deal to do with  Brainstorm.

With the active support of the owners, code writers, and implementors of Brainstorm, a then [and now] leading player in real-time 3D CGI manipulation with a lot of broadcast experience, Joe Lewis of General Lift assembled a team and spent a tremendous amount of time, effort, and money developing an on-set real-time system for melding 3D CG with a live-action video tap or other video signal.

Members of the team included Paul Lacombe, who had a great deal of previous experience with 3D CG virtual sets and their implementation, Tetsuya Kamisawa, a broadcast veteran, and Jeff Platt, a motion control veteran. I played a part in integration, fine-tuning, kibitzing with regard to the custom GUI, and on-set packaging and operations.

We even dragged Bob Kertesz into the mix - he and I had worked on previous on-set live dx/keying solutions for certain shots on MI:2 and Vanilla Sky and we relied on his expertise as we designed parts of the system.

There were others, and we had great support from vendors and remote head operators in getting positional data out of a number of different remote heads.

Brainstorm is rightly proud of the work they have done in our quaint little backwater of an industry (compared to the broadcast ocean in which they swim). I think they even mention Encodacam on their web page.

Unlike many systems that people have assembled to try and do this work, this one had in its belly applications and hardware bits that were designed to work in a live-real-time-broadcast world, taking into account little things like sync and accurate clocking.

As processors and graphics cards have improved, this particular implementation of virtual set work has moved into the HD world as well. I was only involved with Encodacam for its first feature-film project, I, Robot, but the system has gone on to work on a number of other pictures. Versions of it are in use all over the world in broadcast virtual set applications, with cameras on peds, jibs etc. moving all over the place in live and taped broadcast applications.

It's pretty neat stuff, and Joe Lewis saw the potential for systems like this a long time ago and pushed this project forwards. This system was working on set nearly seven years ago, providing on-set dx splits and Ultimatte comps for the director and operator, and laying off clean plates for editorial, all without hampering the 1st and 2nd units' ability to keep moving and shooting their movie.

Arguably not the first system of its type, it may have been the first user-friendly system of its type, allowing crews to work with the variety of dollies, cranes, and heads that they wanted to work with.

Back then, installing IR mo-cap systems on all the different stages on which we were working - stages filled with lightning strikes and other overpowering RF & IR sources - was not realistic.

These days, the use of non-hard-wired systems for camera position information makes Encodacam and systems like it workable with hand-held and Steadicam cameras as well....

It ain't cheap, but it is do-able.

And as always, this is a brand new idea that various people have been implementing for a looooooong time with varying degrees of success...... Joe's team got it to work really, really well and very reliably.

Usual disclaimer - I have no financial ties with any of the people named, but enjoy working with them when the opportunities arise.

Mark H. Weingartner
LA-based VFX DP/Supervisor
Erstwhile Encodacam Supervisor (one long job a long time ago)
occasional meal-sharer with some of the above mentioned clever people

http://schneiderentertainment.com/dirphoto.htm


To further the accolades for Encodecam and Brainstorm: shortly after I, Robot, Joe Lewis of General Lift, Ray MacMillan of Red-D-Mix, and Encodecam/Brainstorm operator Jeffrey R. Cassidy set out to produce a children's television series by the name of "Wilbur". TLC, Discovery Kids, CBC and more carry the series.

Simply put: live action, 3' puppets being operated by three puppeteers in blue suits, against Erland Digital Blue Screens. Five roughly textured 3D sets, with encoded objects in each set file. At the time, the sets were all Maya based. We had a 30' Elouva crane, dolly, and camera zoom, and - new at the time - a Sony XD camera, all mapped and encoded. The output of the workflow also allows for a 2K/4K final rendered and keyed composite image to be delivered. Five days of prep, seven weeks of principal photography in total, and 2.5 seasons completed. Unedited footage perhaps, but all tracking of objects, matte paintings, timing, and physical placement of set objects were completed and verified on the day.

Mark is correct in stating that the crew out of General Lift is capable of delivering! Having a working in-camera previz gives the actors, operators and, most importantly, the director full control over what is or isn't happening in frame. Marks can be hit correctly, and changes that couldn't be made at one time are now possible!

This film technique is here to stay and will grow further. AVATAR is an exceptional example, of course, but smaller-budget shows will gain from this, as mentioned.

Jeffrey R. Cassidy
Toronto/Orlando
Encodecam/Brainstorm Operator
E-Film 2D/3D Compositor, DIT
HD Engineer, Digital Video Assist


This has become a very interesting read. Thank you, guys, for the info on the other - to me more complex - systems that were built quite a few years ago.

Interestingly, I did not account for the encoding of variables like camera zoom etc. (On one Orad system, I do vaguely know that they used a calibration grid on the bluescreen.)

On Avatar, I don't recall any zoom being used; I've only seen the movie once, and this part admittedly slipped my mind.

As a 3D movie, I would also expect that no zoom was used, as it would lead to undesirable side effects such as cardboarding etc. If this is the case, the virtual/Simulcam is that much easier to accomplish, with just XYZ co-ordinates being tracked.

(Still, I’m unsure if lens zoom was being tracked, and will investigate further.) Again, thanks for the open information from professionals who worked on movies such as I, Robot. This is the spirit of sharing knowledge.

Clyde DeSouza
Real Vision Dubai
UAE


In fact, this sort of technology has been available for a long time. I was involved in some of the early shoots developing and using it.

In the spring of 2000, I did a five camera shoot for a game show called "Paranoia". The shoot had five cameras total, four on pedestals and one on a crane. The ped cameras were all encoded for pan, tilt, zoom, and focus, and the crane camera was additionally encoded for chassis position, boom, and
telescope. The set was mostly greenscreen, and ran about 175 feet long in a semicircle, and about 60 feet high.

I ran five Ultimattes to do the compositing, and the backgrounds were generated from an Orad system using an SGI computer the size of a largish refrigerator. All the cameras were free to make whatever movement they wished, and all compositing of the virtual set into the backgrounds was done live in real time. We had to dial in a 3-4 frame delay in all the cameras to give the SGI refrigerator a chance to catch up, and it took the camera operators, who were watching the composited image, a little while to get used to that.

This was a LIVE show which ran once a day for an hour for two weeks. And by live, I mean LIVE, up to the satellite, then broadcast, with all five cameras appearing to be in the middle of the spherical game show set with full motion and tracking of the backgrounds, including selective and variable defocus as the lenses zoomed in, and literally dozens of "screens" open on the virtual walls showing live satellite video feeds of contestants from all over the country. All live, ten years ago.
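
The "dial in a 3-4 frame delay" trick described above is essentially a small FIFO on the live video so that it reaches the compositor in step with the late-arriving CG background; here is a toy Python model of the idea, illustrative only and not the actual Orad/Ultimatte signal path.

    from collections import deque

    # Toy model: delay the live feed by N frames so it lines up with a
    # renderer whose background arrives N frames late.

    class FrameDelay:
        def __init__(self, frames):
            self.buffer = deque(maxlen=frames + 1)

        def push(self, frame):
            """Feed one live frame in; once the buffer has filled, get back
            the frame from N frames ago."""
            self.buffer.append(frame)
            return self.buffer[0] if len(self.buffer) == self.buffer.maxlen else None

    delay = FrameDelay(frames=3)          # e.g. a 3-frame dialled-in delay
    for n in range(1, 9):
        delayed = delay.push(f"live frame {n}")
        if delayed is not None:
            print(f"CG background for frame {n - 3} keyed with", delayed)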

Around the same time, Dave Satin at SMA in New York had a small stage inside his facility using a smaller single camera Orad system with the Orad two-tone blue wall, although he did also encode the lens as backup (the two-tone wall worked, ummm, adequately). Dave did a lot of virtual shoots with that rig, and I worked there several times myself.

Back on this coast, I've been doing virtual set wraparounds with the Brainstorm system generating backgrounds for a show on the FX channel called "DVD On TV" for at least five or six years now. We used to use a fully encoded Technocrane, but a couple of years ago switched to a Technodolly when that became available because we wanted a smaller sized chassis, and didn't need the reach of the Technocrane. I do all the composites live on set, and the editing later consists of doing the occasional cut to an insert (which I also composite live). On a decent HP computer running Windows XP, the Brainstorm can move quite complex backgrounds with many open "monitor windows" showing rolling movie clips, with just a one frame delay in SD and HD.

In May of 2002, I did some camera tests for Robert Zemeckis and Allen Daviau for Polar Express using a system I believe was developed at the BBC with an LED sender/receiver mounted to the camera and a large number of reflective black and white circular targets on the ceiling. The LED package would derive positional info from the light reflected off the targets, and that data was
massaged and used to drive the computer backgrounds as the camera moved.
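
A present-day, off-the-shelf analogue of deriving camera position from known targets is a perspective-n-point solve. The sketch below uses OpenCV with made-up marker coordinates and synthesised detections; it is a generic illustration, not the BBC system described above.

    import numpy as np
    import cv2

    # Known 3D positions of ceiling targets in stage space (metres) -- made up.
    markers_3d = np.array([[0, 0, 4], [2, 0, 4], [2, 2, 4],
                           [0, 2, 4], [1, 1, 4], [3, 1, 4]], dtype=np.float64)

    # Simple pinhole intrinsics; a real system calibrates these per lens.
    K = np.array([[1500.0, 0.0, 960.0],
                  [0.0, 1500.0, 540.0],
                  [0.0,    0.0,   1.0]])
    dist = np.zeros(5)   # ignore lens distortion for the sketch

    # Pretend this is where the camera really is, and synthesise the marker
    # detections it would see (in a real rig these come from the sensor).
    true_rvec = np.array([[0.1], [0.05], [0.0]])
    true_tvec = np.array([[-1.0], [-0.5], [5.0]])
    detections, _ = cv2.projectPoints(markers_3d, true_rvec, true_tvec, K, dist)

    # Recover the camera pose from the detections: a perspective-n-point solve.
    ok, rvec, tvec = cv2.solvePnP(markers_3d, detections, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    R_true, _ = cv2.Rodrigues(true_rvec)
    print("true camera position (m):  ", (-R_true.T @ true_tvec).ravel().round(3))
    print("solved camera position (m):", (-R.T @ tvec).ravel().round(3))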

When I worked on that silly CNN "hologram" on election night 2008, the cameras used in the CNN New York studio were encoded using a modified mocap system with the reflective balls mounted all over the rigs, including on a handheld camera.

In fact, this sort of work originated a long time ago (in tech time). In the fall of 1990, almost 20 years ago, I did what I believe may have been the first on-set composited bluescreen previz, for ZZ Top's "She's Got Legs" music video for Propaganda Films. Tim Clawson, who was head of production at  the time, called me up and asked if it would be possible to see a rough composite on set as they were shooting, and I cobbled together a "gemini" system where the video camera and BL were set up side-by-side on a ubangi, and I was able to show them a decent if somewhat "parallaxed" composite on set using an Ultimatte. I was even able to build multi-layer composites on set for the
multiple moco passes. I eventually developed my own hi-rez tap and offered this as a service for 10 or 15 years before technical advances allowed video assist people to do very rough and crappy comps for a lot less money.

Interesting to see the general production community catching up.

Bob Kertesz
BlueScreen LLC
Hollywood, California


Bob said "Interesting to see the general production community catching up ."

I know how you feel! It's strange when suddenly things get "re-invented" or re-adapted, labelled as new technology and then shouted about.

By itself there's nothing wrong with that, but the astounding claims need to be toned down. As you correctly put it, in technology years what previously took years will now take weeks to recreate.

Nice example of the SGI fridge. A single air-cooled mini-PC now, with an Intel i7 processor and an Nvidia GPU, will do more. A bit of unrelated trivia: one very big reason SGI went belly up was their closed, black-box approach to visualization computing - until a single GPU board from Nvidia with stereoscopic support uprooted SGI's "multi-pipe" rendering architecture.

Nvidia cards, high-end CPUs and optimized realtime algorithms for tracking and placing virtual props, grass, characters, mountains etc., and for manipulating them in real time in STEREO 3D with live talent - that is what is possible today from a system that will fit in a half-height rack-mount case.

I look forward to the time when all this gets miniaturized enough to fit in a box that gets velcro-ed to the back of a camera!

Anyhow, I’m going off-topic, so I'll stop here.


Thank you all for contributing.

A vote of thanks to Cameron for putting it all together, thereby resurrecting and polishing previously forgotten technology!

Regards
Clyde DeSouza
Real Vision,
UAE


>>Nice example of the SGI fridge. A single air-cooled mini-PC now, with an Intel i7 processor and an Nvidia GPU, will do more.

I don't know about that.

Remember, that large SGI rig was able to run five simultaneous backgrounds, rendering each one for full motion plus selective and tracking defocus, with 3 or 4 frames of total delay in the entire system. SGI big iron was the perfect platform for that, and I haven't seen anything commercially available today that can compare.

Also, having seen lots of very expensive nvidia GPU cards in the last couple of years, I am underwhelmed by both their performance and especially their stability. Yes, they're fine for development systems and some previz applications, but put them on a set where real time performance is expected for take after take and day after 14 hour day, and they start to quickly
circle the drain. Not something I would even THINK of using for live or any real time applications where a client with money to spend was involved.

Maybe a few generations from now, when they figure out how to make the HD-SDI outputs good for feeding something other than monitors, not afterthoughts with endless jitter and other "so five years ago" issues, and the genlock circuitry does something besides aging me prematurely.

Bob Kertesz
BlueScreen LLC
Hollywood, California


Oh! I'm actually surprised that this is your experience with Nvidia GPUs. I'm not challenging it, and I believe it to be true if you say that this is what you went through.

But I use Nvidia Quadros, even the latest FX 295 range, and have not had any hiccups with them, even when running 24/7 in stereo render configs, from "experience rooms" to Powerwall scenarios.

Granted, they generate a lot of heat, but with ventilation and matching system RAM I've personally never had failures.

Sometime I hope to have the budget to take a look at the Nvidia Tesla. Even though it probably won’t do stereo 3D in SLI out of the box, it's a beast for real-time rendering.

A supercomputer the size of a bread box


http://www.nvidia.com/object/tesla_computing_solutions.html

Regards
Clyde DeSouza
Real Vision
UAE


Hi all,
Since the topic is hot, I just have to share some info with you, bordering a bit on the geeky side, but the real deal in brief is at 5:40 into the video.


http://www.youtube.com/watch?v=UlZ_IoY4XTg&feature=related

Cost? US$10,000! Even if you increase that by 400 percent, you can see what can be bought for the money.
This is stuff that "normal" companies and the "brains" hired by Hollywood won't tell you, because they know what money there is in feature films.

And for the unbelievers, in this video:


http://www.youtube.com/watch?v=VMygkWmsf2g&feature=related

...is what you see: a realtime Pandora-type world that can be imported from typical software packages such as Maya etc., and then EACH and every pebble, rock and tree can be shifted about in realtime, WHILE they are animated.

This is what you would then use to do scene blocking and moco pre-mapped to characters (or a live mocap stream, with the help of off-the-shelf programmers who can program in CUDA, Nvidia's GPU-computing extension of C/C++).

Hope this inspires indie production houses and even big-budget Hollywood studios to take another look at what the visualization community can offer cinematography.

kind Regards
Clyde DeSouza
Real Vision UAE


It's great to see the discussions about virtual production techniques. At Lightcraft, we build the Previzion virtual studio system that was used on 'V' (http://www.theasc.com/magazine_dynamic/December2009/PostFocus/page1.php). It fits in a suitcase (http://www.lightcrafttech.com/), uses a combination of optical and inertial tracking to correctly track the camera position and orientation, and machine vision techniques to fully map the zoom and focus characteristics of a modern cine lens.

We can do a very rapid integration with a given camera, lens, and stage, but it took us several years of heavy programming in order to be able to do so. My honest opinion of real time camera tracking and CG scene integration is that it is a very difficult problem that looks misleadingly straightforward. The human eye is very, very good at seeing motion mismatches between foreground and background, and to get everything to work cleanly, we had to write a custom keying, tracking, and rendering software application from scratch (Previzion), invent several new technologies for accurately calibrating cameras in stage environments, build our own high resolution gyro systems (Airtrack) to handle both high speed handheld tracking and slow pans, and finally fit it all into airplane carry-on luggage space. If there's someone out there who can do this in a few weeks, I'd like to talk to them about future employment :)
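
For anyone curious how "optical plus inertial" works in principle, the simplest version is a complementary filter: integrate the fast gyro every tick and, when a slower but drift-free optical solve arrives, pull the estimate back toward it. The sketch below is a generic textbook filter in one axis with made-up numbers; it is emphatically not the Previzion/Airtrack implementation.

    # Generic 1-axis complementary filter (e.g. camera pan, degrees).
    # Gyro: high rate but drifts. Optical solve: lower rate, drift-free.
    # Textbook sketch with invented numbers; not any vendor's algorithm.

    def fuse(gyro_rates_dps, optical_deg, dt, alpha=0.9):
        """optical_deg holds an absolute angle on ticks where an optical
        update arrived, and None otherwise."""
        estimate = optical_deg[0] if optical_deg[0] is not None else 0.0
        out = []
        for rate, optical in zip(gyro_rates_dps, optical_deg):
            estimate += rate * dt                      # integrate the gyro
            if optical is not None:                    # blend in the optical fix
                estimate = alpha * estimate + (1.0 - alpha) * optical
            out.append(estimate)
        return out

    # 200 Hz gyro with a 0.3 deg/s bias; optical solve every 10th tick only.
    dt = 1.0 / 200.0
    true_rate = 5.0                                    # deg/s
    gyro = [true_rate + 0.3] * 200
    optical = [true_rate * dt * i if i % 10 == 0 else None for i in range(200)]

    fused = fuse(gyro, optical, dt)[-1]
    gyro_only = sum(r * dt for r in gyro)
    print(f"after 1 s  fused: {fused:.2f}  gyro-only: {gyro_only:.2f}  true: ~5.00 deg")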

My cofounder and I have pretty extensive robotics backgrounds (I was the lead mechanical engineer on the iRobot Roomba, and Phil was the lead software engineer), and I frankly thought we'd have a complete solution in a couple of years. It actually took nearly 6 years of full time, 70+ hour weeks of engineering work to get there. The systems have to be adaptable enough to handle the myriad last minute changes that hit a production -- new lenses, whatever jib arm/remote head shows up, running handheld shots, Steadicam work, etc. The depth of field has to be artifact-free and photographically accurate. The rigging and setup have to be fast enough that the production doesn't really notice that you are there.

Just handling 'minor' details, like engineering the data transfer into external applications, requires a lot of painstaking work. We wrote a complete separate application just to transfer our Collada-formatted camera motion takes into Maya/Nuke/3ds max/After Effects, as each uses a separate input format, and users need to be able to automatically extract a small number of frames from a 5000+ frame take, with both a GUI and command line interface for scripting.
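
As a toy illustration of the "extract a small number of frames from a 5000+ frame take, scriptable from the command line" requirement, here is a hypothetical Python utility; it assumes a plain CSV of per-frame camera data (frame,x,y,z,pan,tilt,roll), which is an invented stand-in and not Previzion's actual Collada pipeline.

    #!/usr/bin/env python3
    # Hypothetical frame-range extractor for per-frame camera track data.
    # Assumes a simple CSV (frame,x,y,z,pan,tilt,roll); the real pipeline
    # described above uses Collada plus per-application exporters.
    # Example (file names are made up):
    #   python extract_range.py take42.csv shot.csv --first 1200 --last 1260
    import argparse
    import csv

    def extract(in_path, out_path, first, last):
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                if first <= int(row["frame"]) <= last:
                    writer.writerow(row)

    if __name__ == "__main__":
        p = argparse.ArgumentParser(description="Extract a frame range from a camera track CSV")
        p.add_argument("input")
        p.add_argument("output")
        p.add_argument("--first", type=int, required=True)
        p.add_argument("--last", type=int, required=True)
        a = p.parse_args()
        extract(a.input, a.output, a.first, a.last)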

Our web site needs some updating, but it's easy to see the systems running live in Los Angeles (we're in Venice, and have compatible stages set up in Burbank.) It's not that hard to set up demonstrations outside of L.A. either, given the system portability.

I'm quite glad that people are paying more attention to this area. The cost savings in walking away from a set with timecode-matched, final-quality tracking data, and foregrounds that are properly lit to match virtual backgrounds, are quite significant in a VFX-heavy production.

The best part, of course, is the creative freedom. With a real time composited output running into an on-board camera monitor, the camera teams can simply operate just as they would on a traditional set, and everyone gets to be on the same page visually. It's been great to see that become a reality.

Eliot Mack
Lightcraft Technology

Disclaimer -- I obviously have a direct business interest in this area, but I think my perspective is fairly accurate, given that I spent the last 6 years focusing entirely on this problem.


Hi Eliot,


Thank you for the informative post. I have updated the article to provide a link to the excellent Previzion system.

I can imagine that perfecting such a system when initial work started a few years ago would mean working against time, with available technology and with ingenuity.

There is no way to take up the challenge of doing or replicating something like that in a few weeks :-) The more non-cinematographic geeks you throw at such a problem, though, the easier it gets to fabricate such a system. This is the intention of the article I wrote... to generate "seed" ideas of what diverse technology is out there and how it can be fit together today, where previously it took years and there were technology limitations to overcome.

To put this into perspective: about 6 years ago, if you told someone they could be doing teraflop computing and HPC (high-performance computing) on a machine that would sit on their desk for under $10,000, they would have sent you to an asylum.

Regards,
Clyde DeSouza
Real Vision
Dubai, UAE




Copyright © CML. All rights reserved.