Tools for VFX On-Set Acquisition


With the bulk of VFX work occurring at the back end of a production’s lifecycle, it is easy to forget how much care should go into making sure that the assets gathered during the physical-production phase properly support the assets that will be created during post-production.

Ultimately, the end goal of any visual effect should be the seamless incorporation of computer-generated assets into an existing background plate of photography. In the olden days of Hollywood, this was usually accomplished with rear or front projection, or with elaborate tricks of camera placement that truly earned the moniker of “Hollywood Magic”.

Nowadays, mirrors, mixed-scale matte paintings, and light projections have given way to digital forms of manipulation. Today, the projections are onto geometry in 3D programs, and the matte paintings are completed almost entirely in off-the-shelf software like Photoshop and Nuke. But to ensure that the CG objects are properly built and composited, there are many digital and physical takeaways that the VFX Supervisor and team need to measure and bring back from the set.

HDRs and Image-Based Light References

The lighting and rendering of a CG object is one of the most important aspects of its integration into a scene, if not the most important. If its lighting does not exactly match its surroundings, it will immediately stand out as foreign and fake.

Therefore, steps need to be taken during physical production to capture data about the lighting conditions on set. To do this, the VFX team captures what is called an HDR: a series of High Dynamic Range images. The way Venture typically goes about this is by entering the scene immediately after the final shot, before any of the lights are struck. We usually work with a standard Canon 5D Mark II or III and a Sigma 8mm fisheye lens, so each shot covers a very wide angle of view. With that setup, we take several photos, making sure to cover a full 360-degree view around the set, including straight up and straight down. At each position, we bracket the exposure across a range of shutter speeds so that we capture the scene underexposed, correctly exposed, and overexposed.
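To give a sense of what that bracketing looks like in practice, here is a minimal sketch of how a bracketed shutter-speed series could be generated in even EV steps; the base speed and step size are illustrative assumptions, not Venture’s exact settings.

```python
# Sketch: build a bracketed shutter-speed series around a metered base exposure.
# Base speed and EV step are illustrative, not a prescribed on-set setting.
base_shutter = 1 / 60        # seconds, the "correctly exposed" reading
ev_steps = range(-6, 7, 2)   # -6 EV (dark) through +6 EV (bright), 2 stops apart

brackets = [base_shutter * (2 ** ev) for ev in ev_steps]
for ev, t in zip(ev_steps, brackets):
    print(f"{ev:+d} EV -> {t:.5f} s")
```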

Then, in post, we stitch all of these photographs together, creating a complete 360-degree panorama of the set. And since we have the full range of exposures, from underexposed to overexposed, we essentially have a record of all the available light at the camera’s position at that moment in time. From there, we can, in the simplest terms, “wrap” a CG element such as a robot or a desk lamp in the light from that room and make it look as though it was lit by the same lights, at the same time. This gives us a great basis from which to light the scene for extra drama or realism.
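As a rough illustration of the merging step, here is a minimal Python sketch using OpenCV’s Debevec method to combine one bracketed set into a floating-point HDR image; the filenames and shutter speeds are placeholders for a single camera position, and the stitching into a full panorama would happen in a dedicated tool.

```python
import cv2
import numpy as np

# Sketch: merge one bracketed set into a single HDR image (Debevec method).
# Filenames and exposure times below are hypothetical placeholders.
files = ["pos01_under.jpg", "pos01_normal.jpg", "pos01_over.jpg"]
times = np.array([1 / 500, 1 / 60, 1 / 8], dtype=np.float32)  # seconds

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Save as a floating-point Radiance .hdr file for use as an environment light.
cv2.imwrite("pos01.hdr", hdr)
```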

This also works great for matching reflections in objects like shiny cars, since you can “wrap” the car in the reflections of the environment from the scene. You can also apply HDRs from completely different environments to CG objects to give them different moods. HDRs open up a whole new creative territory when working with 3D and CG assets.
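To show the idea behind “wrapping” an object in reflections, here is a small sketch of how a reflection vector could be looked up in an equirectangular (lat-long) HDR panorama; the mapping convention, the vectors, and the hdr_panorama variable are illustrative assumptions, and in practice a renderer handles this lookup for you.

```python
import numpy as np

def sample_latlong(env, direction):
    """Sample an equirectangular HDR image along a unit direction vector (y up)."""
    x, y, z = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(x, -z) / (2 * np.pi)      # longitude -> horizontal coord
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi   # latitude  -> vertical coord
    h, w, _ = env.shape
    return env[int(v * (h - 1)), int(u * (w - 1))]

# Reflect a horizontal view ray off a 45-degree surface (standard reflection formula).
view = np.array([0.0, 0.0, -1.0])
normal = np.array([0.0, 1.0, 1.0]) / np.linalg.norm([0.0, 1.0, 1.0])
reflection = view - 2 * np.dot(view, normal) * normal

# colour = sample_latlong(hdr_panorama, reflection)   # hdr_panorama: the stitched HDR
```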

LIDAR and 3D Scanners

Now, let’s say that you are shooting in a major downtown metropolitan area and you want aliens to blow up the buildings… or to extend a street with futuristic architecture… something cool. Normally, to accomplish these tasks, you would have to build all of the assets from scratch in a 3D software package like Maya or 3ds Max. With LIDAR (Light Detection and Ranging) technology, however, you can sweep the area with laser pulses that measure the distance to every surface they hit and, in essence, build a point cloud of the surroundings for you.

You can then import the data from the LIDAR scan into your 3D program and have a rough version of the buildings and their surroundings already built for you. From there, you can add on and tidy up to your heart’s content.
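As an example of what that import step might look like, here is a minimal sketch using the Open3D library to turn an exported point cloud into a rough mesh suitable for cleanup in Maya or 3ds Max; the filenames and parameters are placeholders.

```python
import open3d as o3d

# Sketch, assuming the scan was exported to a standard point-cloud format (.ply).
pcd = o3d.io.read_point_cloud("downtown_block_scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.05)   # thin the cloud to a workable density
pcd.estimate_normals()                         # normals are required for surfacing

# Poisson reconstruction gives a rough mesh to refine in a 3D package.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("downtown_block_rough.obj", mesh)
```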

With LIDAR, one can make rough models of all types of large and complex structures.

Using the same scanning technology, you can also scan people to create digital human characters that take over for actual actors when the script calls for action that is too dangerous or impossible to perform on set.

These represent just a couple of the tools Venture uses on set to capture the data we bring back to post-production. It is also imperative that all of the camera data be recorded, including lens information (aperture, focal length, etc.), shutter speed, height, angle, and more. If there is camera movement, it is important to lay down tracking markers in appropriate places. Most importantly, a representative from the VFX team should always be present to advise and monitor, ensuring that no costly oversights are made.
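As an illustration of how that camera data might be logged per shot, here is a minimal sketch of a simple record structure; the field names and example values are assumptions rather than any studio standard.

```python
from dataclasses import dataclass, asdict
import json

# Sketch: one camera-data record per setup/take, mirroring the items listed above.
@dataclass
class CameraRecord:
    scene: str
    take: int
    focal_length_mm: float
    aperture: float          # f-stop
    shutter_speed_s: float
    camera_height_m: float
    tilt_deg: float
    notes: str = ""

record = CameraRecord("ext_downtown_day", 3, 35.0, 2.8, 1 / 48, 1.6, 5.0,
                      "tracking markers on building facade")
print(json.dumps(asdict(record), indent=2))
```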

When correctly gathered assets from physical production are combined with artistically created materials in post-production, the results are where the magic of Visual Effects truly lies.

To learn more or talk to us about ways we bring some VFX polish to your production, check out www.theventure.tv