Preaching to the Converged


Stereoscopy is a great example of an old technology that went through a few misguided resurrections before its current incarnation as an important entertainment medium. But, as with any technology, before making an investment, it's vital to cut through the hype and misinformation and go back to the fundamentals: theory and foundations.

At its best, Stereoscopic 3D (S3D) brings audiences so much more than the third dimension. It can take us on a journey into another world; allow us to fly above the band; get us the best seat at a rugby match; even make us part of our hero's story. At its worst, S3D is a headache-creating, nausea-inducing, stormy ride.

The basics of stereoscopy can be broken down into a few essential categories: physiology, geometry, and optics, plus a dash of physics. We must understand human physiology: how our eyes do and don't work. We must at least understand that our world is curved, which is essential when figuring out how to integrate computer-generated imagery into captured photographs. Additionally, we must understand that we're dealing with actual objects: cameras, lenses, glass, mirrors, physical space, actors.


For instance, fusing a pair of two-dimensional pictures into a single stereoscopic image requires a few elements. Two views of the same scene must be photographed at slightly different angles, and they must share enough common imagery for the brain to blend the right and left views into a single three-dimensional image.

The distance between our eyes (the interocular) is between 60 and 70mm. If functioning correctly, our extraocular muscles allow our eyes to turn inwards to each other, but prevent them from diverging more than a few degrees. As such, a quick move from an esotropic (cross-eyed) to an exotropic (wall-eyed) position is painful, so this is best avoided. Apply this to S3D, and you can see why we are most comfortable watching images that maintain relatively reasonable degrees of parallax—images that don’t come flying off the screen at us before cutting abruptly to images of spacecraft soaring into a divergent distance.

Avoiding this kind of pain is pretty simple. The images on screen shouldn't be much more separated than the distance between our eyes (please note: this is not a rule, just a way of understanding; the maths get much more sophisticated if you want truly accurate numbers). With that simple thought we can understand that when objects diverge by 10% of the width of a 70-foot screen (or a 46-inch monitor, for that matter), it will probably hurt, especially if we've been given no time to get accustomed to that separation.
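The arithmetic behind that rule of thumb can be sketched in a few lines. The 65 mm interocular figure and the two screen widths here are assumptions chosen for illustration (a 70-foot-wide theatre screen and a 46-inch-diagonal 16:9 monitor), not production limits:

```python
import math

# Rough sketch of on-screen separation versus the eyes' spacing.
# All constants are illustrative assumptions, not industry standards.

INTEROCULAR_MM = 65.0  # typical adult eye separation

def separation_mm(screen_width_mm: float, parallax_fraction: float) -> float:
    """Physical left/right image separation for a given parallax fraction."""
    return screen_width_mm * parallax_fraction

theatre_width_mm = 70 * 304.8                          # 70 ft in mm
monitor_width_mm = 46 * 25.4 * 16 / math.hypot(16, 9)  # 46" diagonal, 16:9

# At 10% parallax, both displays push the separation well past the
# interocular distance, forcing the eyes to diverge:
for width in (theatre_width_mm, monitor_width_mm):
    assert separation_mm(width, 0.10) > INTEROCULAR_MM
```

Even the modest monitor fails the check at 10%, which is why the separation, not just the percentage, is what the audience's eyes ultimately feel.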


Therefore, we're wise to keep our parallax at about 1%, regardless of the intended screen size (staying conservative lets the same material play comfortably on both large and small screens). Also keep in mind that, since we don't actually see stereo depth beyond about 20 metres, our perception of depth comprises more than the distance between cameras. Monocular cues such as shadows, occlusion, desaturation, and relative size play as great a role as the binocular ones we use to understand depth in the real world; these cues have been used to place objects in space in projected images for centuries.
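A quick calculation (my numbers, not the author's) illustrates why stereo depth fades with distance: the vergence angle between the two eyes' lines of sight, for an assumed 65 mm interocular, shrinks to a tiny fraction of a degree by 20 metres:

```python
import math

IPD_M = 0.065  # assumed interocular distance in metres

def vergence_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight to a point straight ahead."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

near = vergence_deg(0.5)   # roughly 7.4 degrees at arm's length
far = vergence_deg(20.0)   # roughly 0.19 degrees at 20 metres
# Past ~20 m the binocular signal is nearly flat, so monocular cues
# (shadows, occlusion, relative size) carry the depth information.
```

The binocular signal at 20 metres is almost forty times weaker than at arm's length, which is why interaxial distance alone can't carry depth in wide shots.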

As creators, we must consider other facts as well. We must remember that our eyes are fixed horizontally and only move vertically together. When presented with vertical misalignments – the left and right images at different heights – we experience discomfort and are removed from our story (or maybe a really frustrating football match). We experience mismatched focus, field of view, luma, and colour differences as momentary discomforts, but unless they’re extreme, we stay in the story. If, however, an object comes into negative space (off the screen into our laps) and hits - or is bisected by - the edge of the screen, we are pulled right out of the experience.

Just remembering, and honouring, these few things can make the difference between giving an audience an enjoyable, or torturous, experience.

As creators of live-action S3D, we are bound by the physical world (until the VFX artists get hold of our work, anyway). Whether we're shooting fiction or live sport, we are bound by walls, depth, and light. It's important to consider that we see the world through wide-angle lenses: our eyes are prime lenses that don't zoom, while cameras are much more flexible. Understanding our physiology helps us realise why wide-angle lenses give more flexibility than trying to capture depth at the tight end of a telephoto lens.
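The wide-versus-telephoto difference can be made concrete with the standard rectilinear angle-of-view formula. The 36 mm sensor width (full frame) and the two focal lengths are assumptions chosen for illustration:

```python
import math

SENSOR_WIDTH_MM = 36.0  # assumed full-frame sensor width

def hfov_deg(focal_length_mm: float) -> float:
    """Horizontal angle of view for a rectilinear lens on the assumed sensor."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

wide = hfov_deg(24.0)    # about 74 degrees: lots of space to stage depth in
tele = hfov_deg(200.0)   # about 10 degrees: a narrow, flattened slice of it
```

The wide lens takes in roughly seven times the angle, which is part of why depth is easier to stage and capture at the wide end.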


So what about optics? It is essential to understand the way cameras capture light; the way light travels through the lens to the imager; the way each lens has its own distinct profile. When shooting with a single camera, these are non-issues. Add a second camera and another lens, position them on a scaffold (a highly tuned 3D rig), project them through a highly polished, half-mirrored piece of glass (a beam splitter), and you're in a whole new universe.

You still need to capture a single image, but your parameters have completely changed. Try to zoom two cameras simultaneously and the images will change size all along the way until their final stop, where they will not only be slightly different sizes but won't even be centred on your target. Additionally, you might notice that your edges look different; that one image looks super sharp while the other is softly out of focus; or that one horizon seems to list a little to the left.

All of these elements indicate a necessary correction to your alignment, because even the most minor of differences register in the brain as a problem. Most audiences will be kept in the story despite these annoyances (though they may well leave the theatre or turn off the TV wondering why their head hurts). Whether subtle or extreme, misaligned equipment compromises our experience, and if we’re creators, this only adds one more S3D sceptic who will think carefully before donning the glasses.


Once you’ve integrated all the biology, physics, and optics into the process, it’s time to get creative. We humans have varying tolerances for the change from negative to positive parallax (in your lap and in the distance). We want to be coddled, nurtured, given time to adjust to the changes. Mainly, we want to be in the story.

You don’t need a lot of distance between your cameras to give the sense of immersion.  Controlling and managing your equipment is only as useful as your understanding of depth. Finally, don’t forget everything you’ve learned in the years you’ve been shooting, now that you’re using two cameras instead of one.

Of course, we should now venture into the world of projection, but I'll leave that for another time. For now, consider all the components of S3D. Learn to align your gear using truly sophisticated analytical tools. Check your S3D with, and more importantly without, glasses, on both big and small screens. Remember: we all just want fun, immersive experiences that keep us hooked into a great narrative.

S3D isn't here to make bad stories tolerable. It's here to make good stories transcendent.


Jill Smolin is 3ality Digital’s Director of Education and travels extensively to teach and spread the S3D gospel to content makers everywhere.  She is based in Burbank, California, where she develops S3D curriculum and materials.  Jill may be contacted at