It took me a long time to recognize the appeal of video shooting. Even in a job where I have to use a camera’s video features, it was only fairly recently that I moved beyond just taking short clips (essentially stills with a little bit of movement in them) and started to think in terms of using video and editing to tell stories.
Given that most modern cameras offer at least rudimentary video tools, I wanted to share my experiences and perhaps encourage others to start thinking about shooting at 24 or more frames per second.
The good news is that a lot of what you learn as a photographer is immediately useful as you take your first steps in video shooting. But, as I discovered, almost every stage brings differences and additional factors to consider, many of which I wish someone had told me about when I started…
Stop shaking the camera, you’re making me feel sick
The first thing that became apparent when shooting video for the first time was the need to keep the camera steady. I remember my Dad teaching me how to keep my camera steady and be aware of my breathing when shooting relatively long exposures, but no amount of good breathing technique or bracing the camera against a pillar is enough to give steady video.
|Even if your camera is hand-holdable, don’t expect that to mean you’ll shoot it hand-held.|
This makes sense, of course: most stills shooting only requires you to hold your camera steady for fractions of a second whereas video lets the viewer see how steady you’ve been for seconds or minutes at a time.
What I’ve learned is that in-camera stabilization can be enough to stop your footage looking unwatchably juddery, but unless you’re aiming for a ‘run-and-gun’ aesthetic, you’ll need to use a tripod or some sort of stabilization rig.
Exposing some limitations
Exposure is another area where the lessons I’d learned from stills photography are useful but incomplete. You still get to control the same variables, but the range of control you have is somewhat restricted. It’s still a question of managing light, but with a greater risk of finding yourself with too much of the stuff.
For me it’s a question of shutter speed, which has a more obvious impact on the appearance of your footage than is usually the case in stills shooting. A fast shutter speed in stills photography will freeze motion, a slow one will allow the subject to blur but there’s often a large range in between these two extremes. In video, there’s a narrower range before the viewer starts to notice the difference.
The 180 degree shutter ‘rule,’ where you use a shutter speed that’s half the duration of each frame (so 1/48 sec for 24 fps shooting), isn’t an inviolable law, but the further you stray from it, the more jarring or muddled your footage will look. This can be a creative choice, of course, but it only counts as such if you’ve consciously made it.
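The rule itself is simple arithmetic: shutter speed equals shutter angle divided by (360 × frame rate). A quick sketch in Python (the function name is my own, purely illustrative):

```python
def shutter_speed(fps, shutter_angle=180):
    """Shutter speed (in seconds) implied by a shutter angle at a frame rate."""
    return shutter_angle / (360.0 * fps)

# 180 degrees at 24 fps gives the classic 1/48 sec
print(1 / shutter_speed(24))       # 48.0
# 180 degrees at 25 fps (PAL-land) gives 1/50 sec
print(1 / shutter_speed(25))       # 50.0
```

Halving the angle to 90 degrees doubles the shutter speed (1/96 sec at 24 fps), which is where the crisper, more staccato look of a small shutter angle comes from.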
This made me think back to when I was first experimenting with stills photography, and getting a feel for the boundaries set by the longest shutter speed I could hand-hold, the widest aperture I had available and the highest ISO setting I found acceptable. Once I was familiar with these, one of the first purchases I made was a faster lens (that’s right: a 50mm F1.8) to gather more light and extend those boundaries.
With video, and the further restriction on the fastest shutter speed I’m willing to use, it’s a decent ND filter I need to buy, to reduce the light level to fit within those boundaries.
|A neutral density (ND) filter allows you to use wide apertures and the relatively slow shutter speeds that a lot of videographers favor. An adjustable ND filter provides even more flexibility.|
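Working out how strong an ND filter you need is a matter of counting stops: each stop halves the light, so the number of stops is the base-2 log of the ratio between the shutter speed a bright scene demands and the slower one you want to use. A rough sketch (the function name and example values are mine):

```python
import math

def nd_stops(metered_shutter, target_shutter):
    """Stops of ND needed to drop from the shutter speed a bright scene
    meters at to the slower one you actually want to shoot at."""
    return math.log2(target_shutter / metered_shutter)

# A sunny scene metering at 1/1500 sec, shot at 1/48 sec for 24 fps:
stops = nd_stops(1 / 1500, 1 / 48)
print(round(stops, 1))  # roughly 5 stops, i.e. around an ND32 filter
```

The filter-naming convention follows the same doubling: ND2 is one stop, ND8 is three, ND32 is five, and so on.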
A return to JPEGs
Added to these exposure limitations has been another throw-back to my first days as a photographer: having to revert to an 8-bit, compressed shooting format. Having spent some time learning the distinctions between video file formats, the main lesson has been that none of the ones I’m likely to encounter are anything like Raw.
Once you’ve been spoiled by the seemingly endless dynamic range that can fit in a 14-bit Raw file, and the ability to set and adjust the white balance at the editing stage, it’s a shock to go back to having to get exposure and white balance right when you shoot.
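The gap between the two formats is bigger than ‘8 vs 14’ suggests, because each extra bit doubles the number of tonal levels available per channel. A quick illustration:

```python
# Tonal levels per channel at common bit depths: each extra bit doubles them
levels = {bits: 2 ** bits for bits in (8, 10, 12, 14)}
for bits, n in levels.items():
    print(f"{bits}-bit: {n} levels")
# 8-bit: 256 ... 14-bit: 16384
```

A 14-bit Raw file has 64 times as many values per channel as an 8-bit one to spread across the captured range, which is why pushing 8-bit footage around in the grade falls apart so much sooner.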
Flat tone curves and Log profiles provide a means of squeezing a bit more usable dynamic range into those 8-bit files, but they can make it even harder to judge correct exposure. I’d highly recommend shooting some test footage and trying to grade it back into something useful before committing yourself to the flattest tone curve you can find.
Finding focus
With modern cameras it’s easy to take autofocus for granted, even as someone who started shooting with a manual focus camera. Focus in video presents two challenges to the idea of pressing the shutter and expecting all to be well.
The first problem is one of time: the fast focus you need for stills would look startling if it occurred in the middle of a video clip. The other challenge is that most autofocus in video is terrible: contrast detection is inherently based on hunting, and there are currently few decisive phase-detection systems that offer a sensible way to specify or change the focus position.
|Most cameras’ autofocus during video is terrible and you’ll often find manual focus gives you a better result. Focus peaking is extremely helpful in this respect but a well-marked distance scale can work, too.|
Consequently I’ve found myself doing a lot of manual focusing, learned to really appreciate focus peaking and magnification during capture, and learned to loathe speed-sensitive focus-by-wire lenses. Mainly, though, I’ve learned to plan, position and stop down to minimize the need to refocus and, wherever possible, to reserve refocusing as an effect to draw the viewer’s attention.
Composition and movement
More than any of these changes, though, there are some things that photographic experience can only begin to help with. The basics of what works in terms of composition remain the same; however, there are several key additional considerations.
Firstly, I had to consider how my subjects move within the composition. Even when I’m shooting something whose movement I can’t control, I still get to choose where to position myself and how to frame the action, at least. That’s only the beginning, though.
You also have a choice over how your camera moves, or appears to. Static shots are easiest but may not be the right choice for every subject. Moving the camera brings many challenges of its own but can lend a sense of motion to a shot. However, while they may seem easy, zooming and excessive panning often look fairly unprofessional, even when done slowly and smoothly.
|A simple ‘dolly’ (which can be something as rudimentary as a skateboard) will allow you to add smooth movement to your footage. In most cases it will look a lot more professional than panning the tripod head.|
The other option, of course, is to take a series of static shots from different positions, then cut them together so that the viewer gets a sense of the movement from one shot to the next, without you actually showing it.
This, of course, leads into one of the biggest differences between stills and video shooting: I’m increasingly finding myself thinking about multiple shots needed for the film’s narrative. Instead of taking a single image, I’m trying to imagine a sequence that can be edited together. And that’s led me to realize that as well as choices relating to shooting style, there is vast scope for creativity when it comes to how the footage is edited together.
The entirely unfamiliar
One area that stills photography gave me no insight into was the importance of audio recording. In the same way that audiences are much more aware of shooting technique than you might expect, they’re also incredibly sensitive to poor audio.
|A Lavalier microphone (often just called a ‘Lav’ mic) lets you capture your subject’s speech with minimal background interference. The foam cover protects against wind noise, which can ruin your whole project if you only discover it at the edit stage.|
Bad or inconsistent audio is one of the surest ways of distracting your audience or distancing them from the experience your carefully edited footage is trying to create.
It’s the thing I continue to get wrong most often, but it’s something I put more thought into every time I prepare for a shoot. It’s also the reason we pay close attention to whether cameras include a mic input (essential for decent audio capture) and a headphone socket for monitoring the results, so I don’t arrive back at my computer with clipped sound, wind noise or the combined works of the aircraft and emergency services ensemble providing an unnoticed soundtrack.
The differences don’t end when you stop recording
Like audio, editing is another area where photographic experience doesn’t really prepare you. This is true not only when it comes to pre-visualizing the way I’m going to edit things together, but also in terms of the software I’ve had to learn.
Whereas still image software has developed from image processing and darkroom metaphors, video editing software has developed independently and often has tools designed for people with a totally different background.
This means many of the tools you’re likely to be used to, such as curves and white balance, may be absent or hidden. However, it also means encountering some unfamiliar but useful tools, such as waveforms and vectorscopes, which are powerful ways of interpreting what’s going on in your footage.
|Vectorscopes and waveforms will be unfamiliar to most stills shooters but they give a useful insight into the tonal and chromatic distribution in your footage.|
Some of these tools prove to be hugely useful, but I for one am looking forward to the day when editing software makers consider including the best tools from both workflows into a single piece of software, even if that means building some software that’s less geared toward their respective industries.
As it stands, the learning curves required to understand Final Cut Pro or Premiere are every bit as steep as the one I faced when I first encountered Photoshop, but both give that same impression of almost unlimited capability, once you start to find your feet.
A chance to challenge myself (even if I’m not expecting Oscars)
Masochistic though it may sound, I’m hugely enjoying the challenge of learning a totally new set of skills, and watching my results leap forward with each attempt. It would have been easy to get downhearted that my photographic knowledge didn’t immediately translate into videographic capability, but I’ve been lucky enough to have some very experienced colleagues to help me make progress.
My own efforts may be somewhat faltering at present, but I’ve been hugely enjoying the opportunity to experiment with shots and with editing. And, while I personally don’t like zooms, pans or high shutter speed (small shutter angle) footage, I’ve learned enough that I can use them as creative choices if I want to.
Beyond all of these lessons, the main one I’ve learned is the need to plan. With so many additional factors to think about, I need to plan the shots I’m going to get and plan how I’m going to capture the audio. It’s one thing to chase the blue or golden hours to get a single photo; it takes an order of magnitude more preparation if there are a number of video shots you need to get during that brief window.
Filmmakers often talk about filmmaking as having a grammar and, although I’m only starting to work out how to string very basic sentences together, it’s fascinating to explore a new means of expression. If you’ve got a camera with a [REC] button and access to some editing software, I can wholly recommend giving it a try: set yourself a project, try, fail, improve.