Digital-Tutors: Stereoscopic 3D in After Effects download

Defining Stereoscopic
5. Explaining the different types of filters
6. Controlling Zero Parallax and Interaxial Separations
7. Covering the rules of Stereo
8. Utilizing Stereo Display Controls
9. Creating a custom Safe Stereo guide
Working with a converged Stereo camera
Using the parallel Stereo camera

In After Effects, it is much easier to change the convergence point of your stereoscopic 3D camera rig because you can change where the cameras are pointing quite easily.

You can set where the cameras converge by changing the Converge To property. Usually, it is easiest to have the left and right cameras converge to your master camera's point of interest, which is the default. Increasing the Convergence Z Offset value pushes the convergence point away from the camera, so all objects in the scene pop out toward you when you view it on a 3D monitor.

But it is useful to change it to camera position plus an offset, for example the focus distance, when trying to match the convergence point and depth of field. Likewise, you can tie the convergence point to the zoom to automatically keep the convergence the same while doing a perspective shift (changing the field of view of the camera while doing a dolly in). See the section on depth of field for more information. You can also use parallel virtual cameras. This technique is useful if you need to match live footage and add digital elements to that scene.

Keeping the virtual camera orientations consistent with the cameras used in the footage helps keep the perspective of the digital elements aligned with that of the stereo footage.

Changing the convergence plane with live footage is as simple as changing the horizontal alignment of the left and right images. Conceptually this makes sense: each object in the left and right images has a different horizontal offset depending on its depth, due to parallax.

If you align the left and right images so that a specific object in your footage appears in exactly the same location when overlapped, your convergence point is now located at the depth of that object: however far it was from the camera when you shot your footage, or however far it is from your virtual cameras.

You can change the 3D Glasses effect's Scene Convergence property to change the convergence plane of parallel cameras. Keep in mind, though, that because it simply offsets the final images, it acts as an additional change to the convergence if you have already converged using the Converge Cameras property with an offset.

In general, only change the 3D Glasses effect's Scene Convergence property when using live footage or when Converge Cameras is turned off. Increasing the Scene Convergence property moves the convergence plane farther away from the camera. Everything in the scene pops out from the screen toward the viewer. In general, your convergence plane with parallel cameras should ideally be at your camera's zoom distance.

However, when your cameras are parallel, there is an offset to take into account: the cameras are spaced apart, so the two perspectives are spaced apart as well. To get the correct convergence plane, you must change the scene convergence to counteract the separation of the cameras. Subtracting the stereo scene depth (the interaxial separation) does this and keeps the convergence point from moving when using parallel cameras and virtual 3D elements.

However, don't do this when using converged cameras. Set an expression on the 3D Glasses effect's Scene Convergence property to automatically account for this. After doing this, you can change the Stereo Scene Depth property and your scene convergence doesn't change.
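
A minimal sketch of such an expression, assuming the rig's control layer in the same composition is named "Stereo 3D Controls" (the name the stereo 3D rig normally creates); it simply subtracts the interaxial separation as described above:

    // Sketch for the 3D Glasses effect's Scene Convergence property (parallel cameras only).
    // "Stereo 3D Controls" is the assumed name of the rig's control layer and effect.
    value - thisComp.layer("Stereo 3D Controls").effect("Stereo 3D Controls")("Stereo Scene Depth")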

You should not see the black areas move back and forth, only the separation of the objects in front of or behind them. With this expression in place for parallel cameras, and the value of Scene Convergence set to 0, the convergence plane will be at the zoom distance of the camera. When working with converged cameras, it is much easier to know how far away your convergence plane is.

You have direct access to set the convergence point and offset. See the section on converged cameras to see how. When dealing with parallel cameras, it is difficult to tell how deep in the scene the convergence plane is.

Using a difference view of the two eyes (for example, setting the blending mode of the top eye layer in your output composition to Difference), objects that are aligned turn black; any object that is aligned is on the convergence plane. If you then change the Scene Convergence property by dragging the property value, you should see a darker band move through the scene. This band is the convergence plane moving back and forth through the scene.

If you switch to the 3D view and put on your glasses, objects on this convergence plane appear to be on the plane of the TV screen. A good thing to remember is that normally our eyes are about 6 to 6.5 cm apart. This fact is useful if you are trying to match camera separation from another program, like Maya. If you import cameras or nulls from Maya and they are not lining up with the stereo rig camera positions, try adding the following expression to the interaxial separation (Stereo Scene Depth) property to handle the conversion to After Effects units.
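
A hedged sketch of such an expression, set on the Stereo Scene Depth property itself:

    // Sketch: treat the value you type or drag as absolute units (pixels standing in
    // for Maya centimeters) and convert it to the percentage of composition width
    // that Stereo Scene Depth expects.
    value / thisComp.width * 100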

In this case, the Maya default units are centimeters, and you are dealing with absolute units, so it's necessary to counteract the composition-width percentage calculation. However, it is possible that you will have to rework any keyframes if you change your output size. Using this equation allows you to drag the property value as you normally would; it takes that value and modifies it as needed.

If your cameras are in the wrong location, verify where the master camera from Maya is in relation to the left and right cameras. Remember that you can change the configuration in your Stereo 3D Controls effect in After Effects so that the master camera is centered between the left and right cameras, in the same location as the left camera (hero left), or in the same location as the right camera (hero right).

To get any sort of realistic scene, you usually want to add depth of field, though it is usually subtle unless you are using a telephoto or macro lens. Usually, you want your focus to match the convergence plane of the cameras. With parallel cameras, it is more difficult, and a little bit of eyeballing is required. See the section on ETLAT and previewing the convergence plane with parallel cameras for more information.

When working with converged cameras, it is very easy to match your focus distance and convergence planes. Here are a few methods. If you want your focus distance to simply follow your point of interest, use the Link Focus Distance To Point Of Interest command, available by right-clicking the camera layer in the timeline. Then make sure that your Stereo 3D Controls effect properties are set to converge to the camera point of interest with a 0 offset.

Alternatively, set an expression on the Convergence Z Offset property to match the focus distance of the camera; your convergence point then follows your focus distance. Make sure to replace YourCompName with the correct name for your main composition.
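
A hedged sketch of such an expression, assuming the master camera layer in your main composition is named "Master Cam" (rename both names to match your project):

    // Sketch for the Convergence Z Offset property in the Stereo 3D Controls effect.
    // "YourCompName" and "Master Cam" are placeholder names.
    comp("YourCompName").layer("Master Cam").cameraOption.focusDistance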

If you have keyframed your Convergence Z Offset property, you can instead set an expression on your focus distance to match the convergence Z offset. Make sure to remember what your convergence point is anchored to.

If you are converging to the camera position, no further work is required beyond linking your focus distance as described earlier. However, if you are converging to the camera point of interest, add the distance between the camera's point of interest and its position to the Z offset in the focus distance expression, using the length function. If you are converging to the camera zoom, add the camera's zoom value to the Z offset in the focus distance expression.
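
Here is a hedged sketch of the point-of-interest case, set on the master camera's Focus Distance property; YourCompName stands for the stereoscopic 3D composition that contains the Stereo 3D Controls layer:

    // Sketch: focus distance follows the convergence point when converging to the
    // camera point of interest. "YourCompName" is a placeholder comp name.
    zOffset = comp("YourCompName").layer("Stereo 3D Controls").effect("Stereo 3D Controls")("Convergence Z Offset");
    length(transform.pointOfInterest, transform.position) + zOffset
    // When converging to the camera zoom, use cameraOption.zoom + zOffset instead.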

Make sure to replace YourCompName with the correct name for your stereoscopic 3D composition. You can work with real footage and integrate 3D elements in After Effects, though the workflow currently requires a little bit of manual work. In general, use your stereo footage as a background plate and composite your 3D elements on top of it. The reverse could be the case if, for example, you are trying to put a stereoscopic video (such as a TV-screen replacement) into a virtual stereoscopic scene and the convergence of the scene needs to be different from the convergence of the footage.

For the purpose of simplicity, here is the workflow using stereoscopic footage as a background plate. Import your stereoscopic left-eye and right-eye footage items. Drag your left-eye footage item into your Left Eye Comp composition and your right-eye footage item into your Right Eye Comp composition at the very bottom of your layer stack and leave them as 2D layers.

Now, if you switch to your stereo 3D view, you should see your 3D elements composited with your stereoscopic 3D footage.

One final thing needs to be done in order to truly control the convergence of the footage: set an expression on the X position of the left-eye and right-eye footage layers, driven by a footage convergence slider. The left layer adds the slider value converted into a percentage of composition width, and the right layer subtracts it. Hedged sketches of both expressions follow.
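
These sketches assume a Slider Control renamed "Footage Convergence" applied to the Stereo 3D Controls layer in the output composition (assumed here to be named "Stereo 3D Comp"), and that each footage layer's Position dimensions are separated so an X Position property exists. Rename everything to match your project.

    // Sketch for the left-eye footage layer's X Position (adds the shift).
    // "Stereo 3D Comp" and "Footage Convergence" are placeholder names.
    slider = comp("Stereo 3D Comp").layer("Stereo 3D Controls").effect("Footage Convergence")("Slider");
    transform.xPosition + slider / 100 * thisComp.width

    // Sketch for the right-eye footage layer's X Position (subtracts the shift).
    slider = comp("Stereo 3D Comp").layer("Stereo 3D Controls").effect("Footage Convergence")("Slider");
    transform.xPosition - slider / 100 * thisComp.width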

Now you can drag your footage convergence slider to change the convergence plane of your stereoscopic 3D footage, and use the Stereo 3D Controls effect to control the convergence of your 3D elements.

It is best to try to get the convergence planes to match as closely as possible in this situation. Doing so exactly would involve changing the interaxial separation of the cameras and shooting the footage with new perspectives for each camera.

It is very difficult to get different perspectives from an image that has already been recorded though there is research happening in this area. Your best option is to set the Stereo Scene Depth property of your 3D elements to match as closely as possible the separation of the cameras that were used on the shoot.

Matching it might be somewhat difficult. Normally, cameras are spaced about 6.5 cm apart, but depending on the camera size this can vary, especially if the body of the camera is wider and it is not possible to place the cameras that close together. It's also necessary to do some sort of calculation to compensate for the dimensions of the footage.

Also take into account correct units, as mentioned previously, since After Effects operates in units of pixels, not centimeters. It can be easiest just to adjust it manually in this situation.

Remember that to get the convergence point of the footage to match the camera zoom value, it's necessary to subtract the cameras' separation amount from your footage convergence. Using the difference mode is probably the easiest and fastest way to align the object you want to be on the convergence plane.

To get the best and least painful composite possible, make sure to match the convergence plane of your 3D elements with that of your stereo footage. For a good explanation of the different types of camera rigs that can be used, and for more stereo 3D workflows in Premiere Pro, check out this great video by David Helmly.

When editing with stereoscopic 3D, it is usually invaluable to be able to see exactly what is happening and how the parameters you are changing affect your stereoscopic 3D rig.

There is a simple way to get a sense of this in After Effects: open your original composition and switch the 3D view to one of the custom views so the camera layers are visible in the scene. At this point, you should be able to see three cameras: your master camera as well as your left and right ones. Changing your settings under Stereo 3D Controls should update the cameras in your initial scene. Try changing the Stereo Scene Depth property to see the cameras separating, or tweak your convergence options to see where the cameras are pointing.

This technique is especially useful when debugging problems, and when trying to match your depth of field to the convergence distance. Both the focus distance and the convergence point are shown when the cameras are converging. With parallel cameras, you can still see your focus distance or point of interest and you can see how this lines up with the perceived convergence point in your final output using the difference mode technique as described earlier. It's pretty simple to edit while previewing the stereoscopic 3D effects that you are changing.

Anaglyph mode is an inexpensive way to do this. If you happen to have a 3D TV accessible, follow these steps to see your composition and edit in stereoscopic 3D live.

Building a center camera rig
7. Cleaning up the project and adding render comps
8. Applying Safe Stereo rules to control maximum disparity
9. Building a Safe Stereo visualization tool
Creating a visualization representation of the Safe Zone
Cleaning up the final Stereo rig for easier animation
Defining Stereoscopics 1:22
Analyzing the different filter types
Covering the rules of Safe Stereo


