An interesting question got raised on Facebook today.
What shutter speed should I use if I am shooting at 50p so that my client can later convert the 50p to 50i? Of course this would also apply to shooting at 60p for 60i conversion.
Let's first of all make sure that we all understand what's being asked for here: shooting at 50 (60) progressive frames per second so that the footage can later be converted to 25 (30) frames per second interlace, which has 50 (60) fields.
If we just consider normal 50p or 60p shooting, the shutter speed that you would choose depends on many factors, including what you are shooting, how much light you have and personal preference.
1/48 or 1/50th of a second is normally considered the slowest shutter speed at which motion blur in a typical frame no longer significantly softens the image. This is why old point-and-shoot film cameras almost always had a 1/50th shutter: it was the slowest speed you could get away with.
Shooting with the shutter open for half the duration of each frame is also known as using a 180 degree shutter, a very necessary practice with a film movie camera due to the way the mechanical shutter must be closed while the film is physically advanced to the next frame. But it isn't essential to have the closed shutter period with an electronic camera, as there is no film to move, so you don't have to use a 180 degree shutter if you don't want to.
There is no reason why you can't use a 1/50th or 1/60th shutter when shooting at 50fps or 60fps, especially if you don't have a lot of light to work with. 1/50 (1/60) at 50fps (60fps) will give you the smoothest motion as there are no breaks in the motion between each frame. But many people like to sharpen up the image still further by using 1/100th (1/120th) to reduce motion blur, or they prefer the slightly steppy cadence this brings, as it introduces a small jump in motion between each frame. Of course 1/100th needs twice as much light. So there is no hard and fast rule: some shots will work better at 1/50th while others may work better at 1/100th.
However, if you are shooting at 50fps or 60fps so that it can be converted to 50i or 60i, with each frame becoming a field, then the "normal" shutter speed you should use is 1/50th or 1/60th, to mimic a 25fps-50i or 30fps-60i camera, which would typically have its shutter running at 1/50th or 1/60th. 1/100th (1/120th) at 50i (60i) can look a little over-sharp due to an increase in aliasing, because an interlace video field only has half the resolution of the full frame. This is particularly true of 50p converted to 50i, as there is no in-camera anti-aliasing and each frame simply has its resolution divided by 2 to produce the equivalent of a single field. When you shoot with a "real" 50i camera, line pairs on the sensor are combined and read out together as a single field line, and this slightly softens and anti-aliases each of the fields (it is also why 50i has lower vertical resolution than 25p). But with simple software conversions from 50p to 50i this anti-aliasing does not occur. Combine that with a faster than typical shutter speed and the interlaced image can start to look over-sharp, with jaggies or colour moire not present in the original 50/60p footage.
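To make the aliasing point concrete, here is a rough sketch (purely illustrative, not any particular software's actual algorithm) of what a naive 50p-to-50i conversion does: each progressive frame becomes one field by simply discarding alternate lines, with no vertical filtering, which is exactly why aliasing can appear.

```python
# Naive 50p -> 50i conversion sketch (hypothetical, for illustration only).
# Frame 0 supplies the top field, frame 1 the bottom field, and so on.
# No vertical filtering is applied - resolution is simply halved.

def frames_to_fields(frames):
    """frames: list of 2D images (lists of rows). Returns (parity, field) pairs."""
    fields = []
    for i, frame in enumerate(frames):
        parity = i % 2               # alternate top/bottom fields
        field = frame[parity::2]     # keep every other line, no anti-aliasing
        fields.append((parity, field))
    return fields

# Two tiny 4-line "frames"
f0 = [["a0"], ["a1"], ["a2"], ["a3"]]
f1 = [["b0"], ["b1"], ["b2"], ["b3"]]
fields = frames_to_fields([f0, f1])
print(fields[0])  # (0, [['a0'], ['a2']]) - half the lines of the original frame
print(fields[1])  # (1, [['b1'], ['b3']])
```

A real 50i camera would effectively average each line pair before readout; the simple line-drop above is what leaves the jaggies behind.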
If using a LUT to judge the exposure of a camera shooting log or raw it’s really important that you fully understand how that LUT works.
When a LUT is created it will expect a specific input range and convert that input range to a very specific output range. If you change the input range then the output range will be different and it may not be correct. As an example, a LUT designed and created for use with S-Log2 should not be used with S-Log3 material, as the higher middle grey level used by S-Log3 would mean that the mid range of the LUT's output would be much brighter than it should be.
Another consideration comes when you start offsetting your exposure levels, perhaps to achieve a brighter log exposure so that after grading the footage will have less noise.
Let's look at a version of Sony's 709(800) LUT designed to be used with S-Log3 for a moment. This LUT expects middle grey to come in at 41% and it will output middle grey at 43%. It will expect a white card to be at 61% and it will output that same shade of white at a little over 85%. Anything on the S-Log3 side brighter than 61% (white) is considered a highlight, and the LUT will compress the highlight range (almost 4 stops) into the output range between 85% and 109%, resulting in flat looking highlights. This is all perfectly fine if you expose at the levels suggested by Sony. But what happens if you expose brighter and try to use the same LUT, either in camera or in post production?
Well, if you expose 1.5 stops brighter on the log side, middle grey becomes around 54% and white becomes around 74%. Skin tones, which sit half way between middle grey and white, will be around 64% at the LUT's input. That's going to cause a problem! The LUT considers anything brighter than 61% at its input to be a highlight and it will compress it. As a result, at the output of your LUT your skin tones will not only be bright, they will also be compressed and flat looking, which makes them hard to grade. This is why, if you are shooting a bit brighter, it is much, much easier to grade your footage if your LUTs have offsets to allow for this over exposure.
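The arithmetic above can be sketched like this. The per-stop shift of roughly 8.6% is my approximation of S-Log3's near-logarithmic mid-range behaviour, chosen to reproduce the levels quoted here; it is not an exact transform.

```python
# Rough illustration of why an un-offset LUT compresses brighter-exposed skin
# tones. S-Log3 levels are from the article; STOP_SHIFT is an approximation.

SLOG3_GREY  = 41.0   # middle grey, % (normal exposure)
SLOG3_WHITE = 61.0   # white card, %
LUT_KNEE    = 61.0   # the 709(800) LUT treats anything above this as highlight

STOP_SHIFT = 8.6     # approx. % rise per stop of over exposure in the mid-range

def levels_at(offset_stops):
    grey  = SLOG3_GREY  + STOP_SHIFT * offset_stops
    white = SLOG3_WHITE + STOP_SHIFT * offset_stops
    skin  = (grey + white) / 2       # skin sits roughly half way between them
    return grey, white, skin

grey, white, skin = levels_at(1.5)   # expose 1.5 stops brighter
print(round(grey), round(white), round(skin))   # 54 74 64
print(skin > LUT_KNEE)  # True - skin lands in the LUT's compressed highlight zone
```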
If the camera has an EI mode (like the FS7, F5, F55 etc), the EI mode offsets the LUT's input, so you don't see this problem in camera. But there are other problems you can encounter if you are not careful, such as unintentional over exposure when using the Sony LC709 series of LUTs.
Sony's 709(800) LUT closely matches the gamma of most normal monitors and viewfinders, so 709(800) will deliver the correct contrast, i.e. contrast that matches the scene you are shooting, plus it will give conventional TV brightness levels when viewed on standard monitors or viewfinders.
If you use any of the LC709 LUTs you will have a mismatch between the LUT's gamma and the monitor's gamma, so the images will show lower contrast and the levels will be lower than conventional TV levels when exposed correctly. LC709 stands for low contrast gamma with 709 colour primaries; it is not 709 gamma!
Sony's LC709 Type A LUT is very popular as it mimics the way an Arri Alexa might look. That's fine, but you also need to be aware that the correct exposure levels for this non-standard LC gamma are middle grey at around 41% and white at 70%.
An easy trap to fall into is to set the camera to a low EI to gain a brighter log exposure and then to use one of the LC709 LUTs and try to eyeball the exposure. Because the LC709 LUTs are darker and flatter it's harder to eyeball the exposure, and often people will expose them as they would regular 709. This then results in a double over exposure: bright because of the intentional use of the lower EI, but even brighter because the LUT has been exposed at or close to conventional 709 brightness. If you were to mistakenly expose the LC709TypeA LUT with skin tones at 70%, white at 90% etc, that would add almost 2 stops to the log exposure on top of any EI offset.
Above middle grey with 709(800), a 1 stop exposure change results in a 20% change in brightness; with LC709TypeA the same exposure change gives only a just-over-10% change. As a result, over or under exposure is much less obvious and harder to measure or judge by eye with LC709. The camera's default zebra settings, for example, have a 10% window, so with LC709 you could easily be a whole stop out, while with 709(800) only half a stop.
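Using the figures above, a quick sketch shows how much exposure error can hide inside the default 10% zebra window for each LUT. The per-stop slopes are the approximate values quoted here, not measured data.

```python
# How many stops of exposure error fit inside a zebra window, given each LUT's
# approximate brightness change per stop above middle grey.

ZEBRA_WINDOW = 10.0   # camera's default zebra window, %

per_stop = {"709(800)": 20.0, "LC709TypeA": 10.0}   # % change per stop

for lut, slope in per_stop.items():
    max_error = ZEBRA_WINDOW / slope   # stops of error hidden inside the window
    print(f"{lut}: up to {max_error:.1f} stop(s) undetected")
```

The flatter the LUT's slope, the more exposure error the zebras fail to flag, which is the whole reason a contrasty LUT is easier to expose by.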
Personally, when shooting I don't really care too much about how the image looks in terms of brightness and contrast. I'm more interested in using the built-in LUTs to ensure my exposure is where I want it to be. So for exposure assessment I prefer to use the LUT that is going to show the biggest change when my exposure is not where it should be. For the "look" I will feed a separate monitor and apply any stylised looks there. To understand how my highlights and shadows, above and below the LUT's range, are being captured I use the Hi/Low Key function.
If you are someone that creates your own LUTs by shooting test shots and then grading those test shots to produce a LUT, it's really, really important that the test shots are very accurately exposed.
You have 2 choices here. You can either expose at the levels recommended by Sony and then use EI to add any offsets, or you can offset the exposure in camera and, rather than using EI, rely on the offset that will end up baked into the LUT. What is never a good idea is to add an EI offset to a LUT that was also offset.
This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid-price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let's look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/Pal area these are going to be frame rates you will be familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone these are not good frame rates to use.
Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.
The odd 25p cadence is most noticeable on smooth pans and tilts, where the pan speed will appear to jump slightly as the cadence flips between the x3 and x2 segments. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn't exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter, as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
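As a sketch of the arithmetic, here is one possible way a video subsystem might distribute 25 frames over 60 refreshes. The real pattern varies from system to system, so treat this as illustrative only.

```python
# One possible repeat pattern for showing 25p on a 60Hz screen.
# 60 refreshes across 25 frames means some frames show twice and some three
# times - that unevenness is the source of the stutter described above.

def repeat_counts(frames_per_sec=25, refresh_hz=60):
    base, extra = divmod(refresh_hz, frames_per_sec)   # 2 each, 10 left over
    counts = [base] * frames_per_sec
    for i in range(extra):                             # spread the extra repeats
        counts[i * frames_per_sec // extra] += 1
    return counts

counts = repeat_counts()
print(sum(counts), counts.count(2), counts.count(3))   # 60 15 10
```

15 frames shown twice plus 10 frames shown three times gives exactly 60 refreshes, but the frame-to-frame display time keeps changing, unlike 24p's regular 2:3 pattern.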
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
23.98p and 24p when shown on a 60Hz screen are shown by using 2:3 cadence where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way any other movie or feature film is shown on TV and it doesn’t look too bad.
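The 2:3 cadence can be sketched like this (illustrative only; real players and TVs may implement the pulldown slightly differently):

```python
# 2:3 pulldown sketch: mapping 24 progressive frames onto 60 screen refreshes
# by alternately showing each frame 2 then 3 times.

def pulldown_23(frames):
    shown = []
    for i, frame in enumerate(frames):
        repeats = 2 if i % 2 == 0 else 3   # the 2,3,2,3,... cadence
        shown.extend([frame] * repeats)
    return shown

refreshes = pulldown_23(list(range(24)))   # one second of 24p material
print(len(refreshes))   # 60 - exactly one second at 60Hz
```

Because the pattern repeats every 2 frames (5 refreshes), the motion stutter is regular rather than the uneven flip-flop you get with 25p on a 60Hz screen.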
30p or 29.97p footage will look smoother than 24p: all you need to do is show each frame twice to get to 60Hz, so there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality, which means larger files and possibly slower downloads, and this must be considered. 30p is a reasonable middle ground choice for a lot of productions: not as juddery as 24p but not as smooth as 60p.
24p or 23.98p for “The Film Look”.
Generally, if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of modern TVs feature motion compensation processes designed to eliminate judder. You might see things in the TV's literature such as "100Hz smooth motion" or similar. If this function is enabled in the TV it will take any low frame rate footage, such as 24p or 25p, and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film-like motion as the TV will do its best to smooth it out! Meanwhile, someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it's viewed.
Most TVs that have this feature will disable it when the footage is 60p, as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then export a 60p file, as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.
This post follows on from my previous post about sensors and was inspired by one of the questions asked following that post.
While sensor size does have some effect on low light performance, the biggest single factor is really the lens. It isn't really bigger sensors that have revolutionised low light performance; it's the lenses that we can now use that have changed our ability to shoot in low light. When we used to use 1/2″ or 2/3″ 3 chip cameras for most high end video production, the most common lenses were wide range zoom lenses. These were typically f1.8, reasonably fast lenses.
But the sensors were really small, so the pixels on those sensors were also relatively small, so having a fast lens was important.
Now we have larger sensors, super 35mm sensors are now common place. These larger sensors often have larger pixels than the old 1/2″ or 2/3″ sensors, even though we are now cramming more pixels onto the sensors. Bigger pixels do help increase sensitivity, but really the biggest change has been the types of lenses we use.
Let me explain:
The laws of physics play a large part in all of this.
We start off with the light in our scene which passes through a lens.
If we take a zoom lens of a certain physical size, with a fixed size front element and as a result a fixed light gathering ability (for example a typical 2/3″ ENG zoom), then a certain amount of light comes into the lens.
When the size of the image projected by the rear of the lens is small, it will be relatively bright, and as a result you get a large effective aperture.
Increase the size of the sensor and you have to increase the size of the projected image. So if we were to modify the rear elements of this same lens to create a larger projected image (increase the image circle) so that it covers a super 35mm sensor, the light we have is spread out "thinner" and as a result the projected image is dimmer. So the effective aperture of the same lens becomes smaller, and because the image is larger the focus is more critical and as a result the DoF is narrower.
But if we keep the sensor resolution the same, a bigger sensor will have bigger pixels that can capture more light, and this makes up for the dimmer image coming from the lens.
So where a small sensor camera (1/2″, 2/3″) will typically have an f1.8 zoom lens, when you scale up to a s35mm sensor by altering the projected image from the lens, the same lens becomes the equivalent of around f5.6. But because, for like for like resolution, the pixel size is much bigger, the large sensor will be 2 to 3 stops more sensitive. So the low light performance is almost exactly the same, the DoF remains the same and the field of view remains the same (the sensor is larger, so DoF decreases, but the aperture becomes smaller, so DoF increases again back to where we started). Basically, it's all governed by how much light the lens can capture and pass through to the sensor.
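A quick back-of-envelope check of this, assuming a roughly 3x linear scale factor between a 2/3″ sensor and super 35mm (an approximation, but close enough to illustrate the cancellation):

```python
import math

# Same lens, larger image circle: the effective aperture shrinks by the linear
# scale factor, but at equal resolution the pixel area grows by the same
# factor squared, so the stops lost and gained cancel out.

scale = 3.0                      # approx. linear scale, 2/3" sensor -> s35mm

f_small = 1.8                    # typical 2/3" ENG zoom aperture
f_large = f_small * scale        # same glass spread over the bigger image circle
stops_lost   = 2 * math.log2(scale)   # light per unit area falls as scale^2
stops_gained = 2 * math.log2(scale)   # pixel area grows by the same scale^2

print(round(f_large, 1))                     # 5.4, close to the f5.6 quoted
print(round(stops_lost - stops_gained, 1))   # 0.0 - net sensitivity unchanged
```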
It's actually the use of prime lenses, which are much more efficient at capturing light, that has revolutionised low light shooting, because the simplicity of a prime compared to a zoom makes fast lenses for large sensors affordable. When we moved to sensors much closer in size to those used on stills cameras, the range and choice of affordable lenses we could use increased dramatically. We were no longer restricted to expensive zooms designed specifically for video cameras.
Going the other way: if you were to take one of today's fast primes, like a common and normally quite affordable 50mm f1.4, and build an optical adapter of the "speedbooster" type so you could use it on a 2/3″ sensor, you would end up with the equivalent of a 10mm f0.5 lens that would turn that 2/3″ camera into a great low light system with performance similar to that of a s35mm camera with a 50mm f1.4.
Every now and again I get asked how to adjust the color matrix in a video camera. Back in 2009 I made a video on how to adjust the color matrix in Sony's EX series of cameras. This video is just as relevant today as it was then. The basic principles have not changed.
The exact menu settings and menu layout may be a little different in the latest cameras, but the matrix settings (R-G, G-R etc) have exactly the same effect in the latest cameras that provide matrix adjustments (FS7, F5, F55 and most of the shoulder mount and other broadcast cameras). So if you want a better understanding of how these settings and adjustments work, take a look at the video.
I’ll warn you now that adjusting the color matrix is not easy as each setting interacts with the others. So creating a specific look via the matrix is not easy and requires a fair bit of patience and a lot of fiddling and testing to get it just right.
Please don’t take this post the wrong way. I DO understand why some people like to try and emulate film. I understand that film has a “look”. I also understand that for many people that look is the holy grail of film production. I’m simply looking at why we do this and am throwing the big question out there which is “is it the right thing to do”? I welcome your comments on this subject as it’s an interesting one worthy of discussion.
In recent years, with the explosion of large sensor cameras with great dynamic range, it has become a very common practice to take the images these cameras capture and apply a grade or LUT that mimics the look of many of today's major movies. This is often simply referred to as the "film look".
This look seems to be becoming more and more extreme as creators attempt to make their film more film like than the one before, leading to a situation where the look becomes very distinct, as opposed to just a trait of the capture medium. A common technique is the "teal and orange" look, where the overall image is tinted teal and then skin tones and other similar tones are made slightly orange. This is done to create colour contrast between the faces of the cast and the background, as teal and orange are on opposite sides of the colour wheel.
Another variation of the "film look" is the flat look. I don't really know where this look came from, as it's not really very film like at all. It probably comes from shooting with a log gamma curve, which results in a flat, washed out looking image when viewed on a conventional monitor. Then, because shooting log is "cool", much of the flatness is left in the image in the grade because it looks different to regular TV (or it may simply be that it's easier to create a flat look than a good looking high contrast one). Later in the article I have a nice comparison of these two types of "film look".
Not Like TV!
Not looking like TV or video may be one of the biggest drivers for the "film look". We watch TV day in, day out. Well produced TV will have accurate colours, natural contrast (over a limited range at least) and, if the TV is set up correctly, should be pretty true to life. Of course there are exceptions to this, like many daytime TV or game shows where the saturation and brightness is cranked up to make the programmes vibrant and vivid. But the aim of most TV shows is to look true to life. Perhaps this is one of the drivers to make films look different, so that they are not true to life, more like a slightly abstract painting or other work of art. Colour and contrast can help set up different moods, dull and grey for sadness, bright and colourful for happy scenes etc, but this should be separate from the overall look applied to a film.
Another aspect of the TV look comes from the fact that most TV viewing takes place in a normal room where light levels are not controlled. As a result bright pictures are normally needed, especially for daytime TV shows.
But What Does Film Look Like?
But what does film look like? As some of you will know I travel a lot and spend a lot of time on airplanes. I like to watch a film or 2 on longer flights and recently I’ve been watching some older films that were shot on film and probably didn’t have any of the grading or other extensive manipulation processes that most modern movies go through.
Lets look at a few frames from some of those movies, shot on film and see what they look like.
The all time classic Lawrence of Arabia. This film is surprisingly colourful. Red, blues, yellows are all well saturated. The film is high contrast. That is, it has very dark blacks, not crushed, but deep and full of subtle textures. Skin tones are around 55 IRE and perhaps very slightly skewed towards brown/red, but then the cast are all rather sun tanned. But I wouldn’t call the skin tones orange. Diffuse whites typically around 80 IRE and they are white, not tinted or coloured.
When I watched Braveheart, one of the things that stood out to me was how green the foliage and grass was. The strong greens really stood out in this movie compared to more modern films. Overall it’s quite dark, skin tones are often around 45 IRE and rarely more than 55 IRE, very slightly warm/brown looking, but not orange. Again it’s well saturated and high contrast with deep blacks. Overall most scenes have a quite low peak and average brightness level. It’s quite hard to watch this film in a bright room on a conventional TV, but it looks fantastic in a darkened room.
Raiders of the Lost Ark does show some of the attributes often used for the modern film look. Skin tones are warm and have a slight orange tint and overall the movie is very warm looking. A lot of the sets use warm colours with browns and reds being prominent. Colours are well saturated. Again we have high contrast with deep blacks and those much lower than TV skin tones, typically 50-55IRE in Raiders. Look at the foliage and plants though, they are close to what you might call TV greens, ie realistic shades of green.
A key thing I noticed in all of these (and other) older movies is that overall the images are darker than we would use for daytime TV. Skin tones in movies seem to sit around 55IRE. Compare that to the typical use of 70% zebras for faces on TV. Also whites are generally lower, often diffuse white sitting at around 75-80%. One important consideration is that films are designed to be shown in dark cinema theatres where white at 75% looks pretty bright. Compare that to watching TV in a bright living room where to make white look bright you need it as bright as you can get. Having diffuse whites that bit lower in the display range leaves a little more room to separate highlights from whites giving the impression of a greater dynamic range. It also brings the mid range down a bit so the shadows also look darker without having to crush them.
Side Note: When using Sony's Hypergammas and Cinegammas, they are supposed to be exposed so that white is around 70-75%, with skin tones around 55-60%. If used like this, with a suitable colour matrix such as "cinema", they can look quite film like.
If we look at some recent movies the look can be very different.
The Revenant is a gritty film and it has a gritty look. But compare it to Braveheart and it's very different. We have the same much lower skin tone and diffuse white levels, but where has the green gone? And the sky is very pale. The sky and trees are all tinted slightly towards teal and de-saturated. Overall there is only a very small colour range in the movie. Nothing like the 70mm film of Lawrence of Arabia or the 35mm film of Braveheart.
In the latest instalment of the Pirates of the Caribbean franchise the images are very "brown". Notice how even the whites of the ladies' dresses or soldiers' uniforms are slightly brown. The sky is slightly grey (I'm sure the sky was much bluer than this). The palm fronds look browner than green, and Jack Sparrow looks like he's been using too much fake tan, as his face is borderline orange (and almost always also quite dark).
Wonder Woman is another very brown movie. In this frame we can see that the sky is quite brown, while the grass is pushed towards teal and de-saturated; it certainly isn't the colour of real grass. Overall, colours are subdued with the exception of skin tones.
These are fairly typical of most modern movies. Colours generally quite subdued, especially greens and blues. The sky is rarely a vibrant blue, grass is rarely a grassy green. Skin tones tend to be very slightly orange and around 50-60IRE. Blacks are almost always deep and the images contrasty. Whites are rarely actually white, they tend to be tinted either slightly brown or slightly teal. Steel blues and warm browns are favoured hues. These are very different looking images to the movies shot on film that didn’t go through extensive post production manipulation.
So the film look isn't really about making it look like it was shot on film; it's a stylised look that has become stronger and stronger in recent years, with most movies having elements of this look. So in creating the "film look" we are not really mimicking film, but copying a now almost standard colour grading recipe that has some film style traits.
BUT IS IT A GOOD THING?
In most cases these are not unpleasant looks and for some productions the look can add to the film, although sometimes it can be taken to noticeable and objectionable extremes. However we do now have cameras that can capture huge colour ranges. We also have the display technologies to show these enormous colour ranges. Yet we often choose to deliberately limit what we use and very often distort the colours in our quest for the “film look”.
HDR TVs with Rec2020 colour can show both a greater dynamic range and a greater colour range than we have ever seen before. Yet we are not making use of this range, in particular the colour range, except in some special cases like some TV commercials as well as high end wildlife films such as Planet Earth II.
This TV commercial for TUI has some wonderful vibrant colours that are not restricted to just browns and teal, yet it looks very film like. It does have an overall warm tint, but the other colours are allowed to punch through. It feels like the big budget production that it clearly was without having to resort to the modern de facto restrictive film look colour palette. Why can't feature films look like this? Why do they need to be dull with a limited colour range? Why do we strive to deliberately restrict our colour palette in the name of fashion?
What’s even more interesting is what was done for the behind the scenes film for the TUI advert…..
The producers of the BTS film decided to go with an extremely flat, washed out look, another form of modern "film look" that really couldn't be further from film. When a typical viewer watches this, do they get it in the same way as those of us who work in the industry do? Do they understand the significance of the washed out, flat, low contrast pictures or do they just see weird looking milky pictures that lack colour, with odd skin tones? The BTS film just looks wrong to me. It looks like it was shot with log and not graded. Personally, I don't think it looks cool or stylish; it just looks wrong and cheap compared to the lush imagery in the actual advert (perhaps that was the intention).
I often see people looking for a film look LUT. Often they want to mimic a particular film. That’s fine, it’s up to them. But if everyone starts to home in on one particular look or style then the films we watch will all look the same. That’s not what I want. I want lush rich colours where appropriate. Then I might want to see a subdued look in a period piece or a vivid look for a 70’s film. Within the same movie colour can be used to differentiate between different parts of the story. Take Woody Allen’s Cafe Society, shot by Vittorio Storaro for example. The New York scenes are grey and moody while the scenes in LA that portray a fresh start are vibrant and vivid. This is I believe important, to use colour and contrast to help tell the story.
Our modern cameras give us an amazing palette to work with, and we have tools such as DaVinci Resolve to manipulate those colours with relative ease. I believe we should be more adventurous with our use of colour. Reducing exposure levels a little compared to the nominal TV and video levels (skin tones at 70%, diffuse whites at 85-90%) helps replicate the film look and also leaves a bit more space in the highlight range to separate highlights from whites, which really helps give the impression of a more contrasty image. Blacks should be black, not washed out, but they shouldn't be crushed either.
Above all else learn to create different styles. Don’t be afraid of using colour to tell your story and remember that real film isn’t just brown and teal, it’s actually quite colourful. Great artists tend to stand out when their works are different, not when they are the same as everyone else.
Electronics and water are two things that just don’t match. We all know this and we all know that dropping a camera into a river or the sea probably isn’t going to do it a great deal of good. But one of the very real risks with any piece of electronics is hidden moisture, moisture you can’t see.
Most modern high definition or 4K pro video cameras have fans and cooling systems designed to keep them operating for long periods. But these cooling systems mean that the camera will be drawing air from the outside world into the camera's interior. Normally this is perfectly fine, but if you are operating in rain or a very wet environment, such as high humidity, spray, mist or fog, a lot of moisture will circulate through the camera and this can be a cause of problems.
If the camera is warm relative to the ambient temperature then humid air will generally pass through the camera (or any other electronics) without issue. But if the camera is colder than the air's dew point, some of the moisture in the air will condense on the camera's parts and turn into water droplets.
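To get a feel for when condensation becomes a risk, the dew point can be estimated from air temperature and relative humidity. This is just an illustrative sketch using the well known Magnus approximation – the figures are made-up examples, not from any camera manual:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in degrees C using the Magnus formula.

    The coefficients are the common Magnus constants for roughly
    -45..60 degrees C. Illustrative only, not meteorological-grade.
    """
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# Example: a camera chilled to 18 C in an air conditioned car, taken out
# into 30 C air at 70% humidity. The camera is below the dew point of the
# warm air, so moisture will condense on (and inside) it.
print(f"dew point: {dew_point_c(30.0, 70.0):.1f} C")  # roughly 24 C
```

Any camera body colder than that dew point will collect condensation, which is why the bin-bag trick mentioned below works: the camera warms above the dew point before it meets the humid air.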
A typical dangerous scenario is having the camera in a nice cool air conditioned car or building and then taking it outside to shoot on a warm day. As the warm air hits the slightly colder camera parts, moisture will form on both the outside and the inside of the camera's body.
Moisture on the outside of the camera is normally obvious, and it tends to dry off quite quickly. But moisture inside the camera can't be seen – you have no way of knowing whether it's there or not. If you only use the camera for a short period the moisture won't dry out, and once the fans shut down the camera's interior is no longer ventilated, so the moisture stays trapped inside.
Another damaging scenario is a camera that's been splashed with water – maybe you got caught in an unexpected rain shower. Water will find its way into the smallest of holes and gaps through capillary action, and a teeny, tiny droplet that gets inside the camera will stay there. Get the camera wet a couple of times and that moisture can start to build up, and it really doesn't take a lot to do some serious damage. Many of the components in modern cameras are the size of pin heads. Rain water, sea water etc contain chemicals that can react with the materials used in a camera's construction, especially if electricity is passing through the components or the water, and before you know it the camera stops working due to corrosion from water ingress.
Storing your delicate electronics inside a nice waterproof flight case such as a Pelicase (or any similar brand) might seem like a good idea, as these cases are waterproof. But a case that won't let water in also won't let water and moisture out. Put a damp camera inside a waterproof case and it will stay damp. It will never dry out. All that moisture is going to slowly start eating away at the metals used in a lightweight camera body and at some of the delicate electronic components. Over time this gets worse and worse until eventually the camera stops working.
So What Should You Do?
Try to avoid getting the camera wet. Always use a rain cover if you are using a camera in the rain, near the sea or in misty, foggy weather. Just because you can't see water flowing off your camera doesn't mean it's safe. Try to avoid taking a cold camera from inside an air conditioned office or car into a warmer environment. If you need to do this a lot, consider putting the camera in a waterproof bag (a bin bag will do) before taking it into the warmer environment, then allow the camera to warm up in the bag before you start to use it. If driving around from location to location, consider using less air conditioning so the car isn't so cold inside.
Don't store or put away a damp camera. Always, always thoroughly dry out any camera before putting it away. Consider warming it up and drying it with a hairdryer on a gentle/low heat setting (never let the camera get too hot to handle). Blow warm dry air gently into any vents to ensure the warm air circulates inside to remove any internal moisture. Leave the camera overnight in a warm, dry place with any flaps or covers open to allow it to dry out thoroughly.
If you know your camera is wet then turn it off. Remove the battery and leave it to dry out in a warm place for 24 hours. If it got really wet, consider taking it to a dealer or engineer to have it opened up to make sure it's dry inside before applying any power.
If you store your kit in waterproof cases, leave the lids open to allow air to circulate and prevent moisture building up inside the cases. Use Silica Gel sachets inside the cases to absorb any unwanted moisture.
If you live or work in a warm, humid part of the world it's tough. When I go storm chasing, moving from inside a cool car to the warm outdoors to shoot is not healthy for the camera. So at the end of each day I take extra care to make sure the camera is dry – not just free of any obvious moisture on the outside, but dry on the inside too. This normally means warming it up a little (not hot, just warm). Again a hairdryer is useful, or leave the camera powered up for a couple of hours in an air conditioned room (good quality aircon should mean the air in the room is dry). I keep silica gel sachets in my camera bags to help absorb any surplus moisture. Silica gel sachets should be baked in an oven periodically to dry them out and refresh them.
Fogged Up Lens?
Another symptom of unwanted moisture is a fogged up lens, and if the lens is fogged up there will almost certainly be moisture elsewhere. One thing that sometimes helps (other than a hairdryer) is to zoom in and out repeatedly if it's a zoom lens, or rack the focus back and forth. Moving the lens elements backwards and forwards inside the lens helps to circulate air and can speed up the time it takes to dry out.
Once upon a time the meaning of ISO was quite clear. It was a standardised sensitivity rating of the film stock you were using. If you wanted more sensitivity, you used film with a higher ISO rating. But today the meaning of ISO is less clear and we can’t swap our sensor out for more or less sensitive ones. So what does it mean?
The name ISO comes from the International Organization for Standardization, which specifies many, many standards for many different things. For example ISO 3166 covers country codes and ISO 50001 covers energy management.
But in our world of film and TV there are two ISO standards that we have blended into one and we just call it “ISO”.
ISO 5800:2001 is the system used to determine the sensitivity of color negative film found by plotting the density of the film against exposure to light.
ISO 12232:2006 specifies the method for assigning and reporting ISO speed ratings, ISO speed latitude ratings, standard output sensitivity values, and recommended exposure index values, for digital still cameras.
Note a key difference: ISO 5800 is the measurement of the actual sensitivity to light of film. ISO 12232 is a standardised way to report the speed rating, it is not a direct sensitivity measurement.
Within the digital camera ISO rating system there are five different methods a camera manufacturer can use when determining the ISO rating of a camera. The most commonly used is the Recommended Exposure Index (REI) method, which allows the manufacturer to specify a camera model's EI or base ISO arbitrarily, based on what the manufacturer believes produces a satisfactory image. So it's not really a measure of the camera's sensitivity, but a rating that, if used with a calibrated external light meter to set the exposure, will give a satisfactory looking image. This is very different from a sensitivity measurement, and opinions on what counts as a satisfactory image vary from person to person. So there is a lot of scope for movement in how an electronic camera might be rated.
As you cannot change the sensor in a digital camera, you cannot change the camera's efficiency at converting light into electrons (which is largely determined by the materials used and the physical construction). So you cannot change the actual sensitivity of the camera to light. Yet we have all seen how the ISO number of most digital cameras can normally be increased (and sometimes lowered) from the base ISO.
Raising and lowering the ISO in an electronic camera is normally done by adjusting the amplification of the signal coming from the sensor, typically referred to as "gain" in the camera. It's not actually a physical change in the camera's sensitivity to light; it's like turning up the volume on a radio to make the music louder. Dual ISO cameras that claim not to add gain when switching between ISOs typically do this by adjusting the way the signal from the sensor is converted from analog to digital. While it is true that this is different from a gain shift, it does typically alter the noise levels, because to make the picture brighter you need to sample the sensor's output lower down, closer to the noise floor. Once again though, it is not an actual sensitivity change – it does not alter the sensor's sensitivity to light, you are just picking a different part of its output range.
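As a rough illustration of the gain/ISO relationship, assuming the common video convention that 6dB of gain equals one stop, and a hypothetical camera with a native rating of ISO 800:

```python
def iso_for_gain(base_iso: int, gain_db: float) -> float:
    """ISO rating implied by added gain, assuming the usual video
    convention that +6 dB of gain = one stop = double the ISO number.
    base_iso is hypothetical, e.g. a camera rated ISO 800 at 0 dB."""
    return base_iso * 2 ** (gain_db / 6.0)

print(iso_for_gain(800, 0))   # 800.0  - native rating, no added gain
print(iso_for_gain(800, 6))   # 1600.0 - one stop of amplification
print(iso_for_gain(800, 12))  # 3200.0 - two stops of amplification
```

The point is that the ISO number climbs even though nothing about the sensor's actual sensitivity has changed – only the amplification applied to its output.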
Noise and Signal To Noise Ratio.
Most of the noise in the pictures we shoot comes from the sensor and the level of this noise coming from the sensor is largely unchanged no matter what you do (some dual ISO cameras use variations in the way the sensor signal is sampled to shift the noise floor up and down a bit). So the biggest influence on the signal to noise ratio is the amount of light you put on the sensor. More light = More signal. The noise remains the same but the signal is bigger so you get a better signal to noise ratio, up to the point where the sensor overloads.
But what about low light?
To obtain a brighter image when light levels are low and the picture coming from the sensor looks dark, the signal coming from the sensor is boosted or amplified (gain is added). This amplification makes the desirable signal bigger, but it also makes the noise bigger. If we make the desirable picture twice as bright we also make the noise twice as big. As a result the picture will be noisier and grainier than one where we had enough light to get the brightness we wanted.
The usable range also deteriorates, because the added amplification means the recording will clip more readily. Something that is close to the recording's clip point may be pushed above it by the added gain, so the range you can record shrinks while the noise gets bigger. The optimum exposure is now achieved with less light, so the equivalent ISO number is increased; if you were using a light meter you would increase the ISO setting on the meter to get the correct exposure. But the camera isn't more sensitive – it's just that the optimum amount of light for the "best" or "correct" exposure is reduced because of the added amplification.
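The distinction can be sketched numerically. The values below are purely illustrative (arbitrary amplitude units), but they show why more light improves the signal to noise ratio while added gain does not:

```python
import math

def snr_db(signal: float, noise: float) -> float:
    """Signal-to-noise ratio in decibels (20*log10 for amplitude values)."""
    return 20 * math.log10(signal / noise)

noise_floor = 1.0  # fixed sensor noise, arbitrary units

# Doubling the light doubles the signal: SNR improves by about 6 dB.
print(snr_db(100, noise_floor))  # 40.0 dB
print(snr_db(200, noise_floor))  # ~46 dB

# Adding gain multiplies signal AND noise equally: SNR is unchanged,
# the picture is just brighter (and the clip point is reached sooner).
gain = 4.0
print(snr_db(100 * gain, noise_floor * gain))  # still 40.0 dB
```

This is the "more light = more signal" point from above in number form: only extra light moves the signal away from the fixed noise floor.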
So with an electronic camera, ISO is a rating that will give you the correct brightness of recording for the amount of light and the amount of gain that you have. This is different to sensitivity. Obviously the two are related, but they are not quite the same thing.
Getting rid of noise:
To combat the inevitable noise increase as you add gain/amplification, most modern cameras use electronic noise reduction, applied more and more aggressively as you increase the gain. At low levels this goes largely un-noticed, but as you add more gain and thus more noise reduction you will start to degrade the image. It may become softer, it may become smeary, and you may start to see banding, ghosting or other artefacts.
Often as you increase the gain you may only see a very small increase in noise, as the noise reduction does a very good job of hiding it. But for every bit of noise that's reduced there will be another artefact replacing it.
Technically the signal to noise ratio is improved by the use of noise reduction, but this typically comes at a price and NR can be very problematic if you later want to grade or adjust the footage as often you won’t see the artefacts until after the corrections or adjustments have been made. So be very careful when adding gain. It’s never good to have extra gain.
I’ve been shooting with the Fujinon MK18-55mm lens on my PXW-FS7 and PXW-FS5 since the lens was launched. I absolutely love this lens, but one thing has frustrated me: I really wanted to be able to use it on my PMW-F5 to take advantage of the 16 bit raw. Finally my dreams have come true as both Duclos and MTF have started making alternate rear mounts for both the MK18-55mm and the MK50-135mm.
So, when Fujinon contacted me and asked if I would be interested in shooting a short film with these lenses on my F5 I jumped at the chance. The only catch was that this was just over a week ago and the video was wanted for IBC which means it needed to be ready yesterday. And of course it goes without saying that it has to look good – no pressure then!
First challenge – come up with something to shoot. Something that would show off the key features of these beautiful lenses – image quality, weight, macro etc. I toyed with hiring a model and travelling to the Irish or Welsh coast and filming along the cliffs and mountains. But it’s the summer holidays so there was a risk of not being able to get an isolated location all to ourselves, plus you never know what the weather is going to do. In addition there was no story, no beginning, middle or end and I really wanted to tell some kind of story rather than just a montage of pretty pictures.
So my next thought was to shoot an artist creating something. I spent a weekend googling various types of artistry until I settled on a blacksmith. The video was going to be shown in both SDR and HDR, and fire always looks good in HDR. So after dozens of emails and telephone calls I found an amazing looking metalwork gallery and blacksmith that was willing, for a reasonable fee, to let me and another cameraman take over their workshop for a day (BIG thank you to Adam and Lucy at Fire and Iron – check out their amazing works of art).
Normally I’d carry out a recce of a location before a shoot to take photos and figure out what kind of lights I would need as well as any other specialist or unusual equipment. But this time there simply wasn’t time. We would be shooting the same week and it was already a very busy week for me.
The next step before any shoot for me is some degree of planning. I like to have a concept for the video, at the very least some outline of the shots I need to tell the story, perhaps not a full storyboard, but at least some kind of structure. Once you have figured out the shots that you want to get you can then start to think about what kind of equipment you need to get those shots. In this case, as we would be shooting static works of art I felt that having ways to move the camera would really enhance the video. I have a small Jib as well as some track and a basic dolly that is substantial enough to take the weight of a fully configured PMW-F5 so these would be used for the shoot (I’m also now looking for a slider suitable for the F5/F55 that won’t break the bank, so let me know if you have any recommendations).
So the first items on my kit list after the camera and lenses (the lenses were fitted with Duclos FZ rear mounts) were the jib and dolly. To achieve a nice shallow depth of field I planned to shoot as close to the lenses' widest aperture of T2.9 as possible. This presents two challenges. The F5's internal ND filters go in 3 stop steps – that's a big step, and I don't want to end up at T5.6 when really I want T2.9 – so 1 stop and 2 stop ND filters and my gucci wood-finished Vocas matte box would be needed (the wood look does nothing to help the image quality, but it looks cool). Oh for the FS7 II's variable ND filter in my F5!
The second problem of shooting everything at T2.9 with a super 35mm sensor is that focus would be critical and I was planning on swinging the camera on a jib. So I splashed out on a new remote follow focus from PDMovie as they are currently on offer in the UK. This is something I’ve been meaning to get for a while. As well as the remote follow focus I added my Alphatron ProPull follow focus to the kit list. The Fujinon MK lenses have integrated 0.8 pitch gears so using a follow focus is easy. I now wish that I had actually purchased the more expensive PDMovie follow focus kit that has 2 motors as this would allow me to electronically zoom the lens as well as focus it. Oh well, another thing to add to my wish list for the future.
One other nice feature of the Fujinon MK’s is that because they are parfocal you can zoom in to focus and then zoom out for the wider shot and be 100% sure that there is no focus shift and that the image will be tack sharp. Something you can’t do with DSLR lenses.
Lighting: This was a daylight shoot. Now I have to say that I am still a big fan of old school tungsten lighting. You don’t get any odd color casts, it gives great skin tones, it’s cheap and the variety and types of lamp available is vast. But as we all know it needs a lot of power and gets hot. Plus if you want to mix tungsten with daylight you have to use correction gels which makes the lights even less efficient. So for this shoot I packed my Light and Motion Stella lamps.
The Stellas are daylight balanced LED lamps with nice wide 120 degree beams. You can then use various modifiers to change this. I find the 25 degree fresnel and the Stella 5000 a particularly useful combination. This is the equivalent to a 650W tungsten lamp but without the heat. The fresnel lens really helps when lighting via a diffuser or bounce as it controls the spill levels making it easier to control the overall contrast in the shot. The Stella lights have built in batteries or can be run from the mains. They are also waterproof, so even if it rained I would be able to have lights outside the workshop shining in through the windows if needed.
I always carry a number of pop-up diffusers and reflectors of various sizes along with stands and arms specifically designed to hold them. These are cheap but incredibly useful. I find I end up using at least one of these on almost every shoot that I do. As well as a couple of black flags I also carry black drapes to place on the floor or hang from stands to reduce reflections and in effect absorb unwanted light.
To check my images on set I use an Atomos Shogun Flame. Rather than mounting it on the camera, for this shoot I decided to pack an extra heavy duty lighting stand to support the Shogun. This would allow my assistant to use the Flame to check focus while I was swinging the jib. The HDR screen on the Shogun lets me see a close approximation of how my footage will look after grading. It also has peaking and a zoom function to help with focussing, which was going to be essential when the camera was up high on the jib and being focussed remotely. I also included a TV-Logic LUM171G, a 17″ grading quality monitor with 4K inputs. The larger screen is useful for focus, and its colour accuracy is helpful for checking exposure etc.
For audio I packed my trusty UWP-D wireless mic kit and a pair of headphones. I also had a shotgun mic and XLR cable to record some atmos.
As well as all the larger items of kit there's also all the small bits and bobs that help a shoot go smoothly. A couple of rolls of gaffer tape, crocodile clips, sharpies, spare batteries, extension cables etc. One thing I've found very useful is an equipment cart. I have a modified rock-n-roller cart with carpet covered shelves. Not only does this help move all the kit around but it also acts as a desk on location. This is really handy when swapping lenses or prepping the camera. It can save quite a bit of time when you have a mobile work area and somewhere you can put lenses and other frequently used bits of kit.
The day before the shoot I set everything up and tested it all. I checked the backfocus adjustment of the lenses, checked the camera was working as expected, and made sure I had the LUTs I wanted loaded into both the camera and the Gratical viewfinder. With the camera on the jib I made sure I had the right weights and that everything was smooth. I also checked that my light meter was still calibrated against the camera and that the lens apertures matched what I was expecting (which they did, perfectly). Colour temperature and colorimetry were checked on the TVLogic monitor.
It’s worth periodically checking these things as there would be nothing worse than rocking up for the shoot only to find the camera wasn’t performing as expected. If you rent a cinema camera package from a major rental house it would be normal to set the camera up on a test bench to check it over before taking it away. But it’s easy to get lazy if it’s your own kit and just assume it’s all OK. A full test like this before an important shoot is well worth doing and it gives you a chance to set everything up exactly as it will be on the shoot saving time and stress at the beginning of the shoot day.
On the morning of the shoot I loaded up the car. I drive a people carrier (minivan to my friends in the USA). Once you start including things like a jib, track and dolly, equipment cart, 6x tungsten lights, 4 x LED lights, plus camera, tripods (including a very heavy duty one for the jib) the car soon fills up. A conventional saloon would not be big enough! One word of caution. I was involved in a car crash many years ago when the car rolled over. I had camera kit in the back of the car and the heavy flight cases did a lot of damage crashing around inside the car. If you do carry heavy kit in the car make sure it’s loaded low down below the tops of the seats. You don’t want everything flying forwards and hitting you on the back of your head in a crash. Perhaps consider a robust steel grill to put between the cargo compartment and the passenger compartment.
On arrival at the location, while it’s very tempting to immediately start unloading and setting up, I like to take a bit of a break and have a tea or coffee first. I use this time to chat with the client or the rest of the crew to make sure everyone knows what’s planned for the day. Taking a few minutes to do this can save a lot of time later and it helps everyone to relax a little before what could be a busy and stressful day.
Now it’s time to unpack and setup. I find it’s better to unpack all the gear at this time rather than stopping and starting throughout the day to unpack new bits of kit. Going to the car, unlocking, unpacking, locking, back to the set etc wastes time. This is where the equipment cart can be a big help as you can load up the cart with all those bits and pieces you “might” need… and inevitably do need.
The blacksmith's workshop was a dark space about 6m x 5m with black walls, open on one side to the outside world. Blacksmiths' forges (so I learnt) are kept dark so that the blacksmith can see the glow of the metal as it heats up and gauge its temperature. On the one hand this was great – huge amounts of relatively soft light coming from one direction. On the other hand the dark side was very dark, which would really push the camera and lenses due to the extreme contrast this would create.
We set the jib up inside the workshop to shoot the various processes used by a blacksmith when working with iron and steel. Apparently there are only 7 different processes and anything a blacksmith does will use just these 7 processes or variations of them.
Most of the shots done on the jib would be shot using the Fujinon MK18-55mm, so that’s the lens we started with. For protection from flying sparks a clear glass filter was fitted to the lens. While the finished film would be a 24p film, most of the filming was 4K DCI at 60fps recording to 16 bit raw. This would give me the option to slow down footage to 24p in post if I wanted a bit of slow motion.
When we did need to do a lens swap it was really easy. The Vocas matte box I have is a swing-away matte box. So by releasing a lever on the bottom of the matte box it swings out of the way of the lens without having to remove it from the rods. Then I can remove the lens and swap it to the other lens. The MK50-135mm is the same size as the MK18-55mm. The pitch gears are also in the same place. So swapping lenses is super fast as the follow focus or any focus motors don’t need to be re-positioned and the matte box just swings back to exactly the same position on the lens. It’s things like this that really separate pro cinema lenses from DSLR and photography lenses.
For exposure I used the camera's built in LUTs and the 709(800) LUT. I set the camera to 800EI and used a grey card to establish a base exposure, exposing the grey card at 43% (measuring the 709 level). I used a Zacuto Gratical viewfinder, which has a great built in waveform display, much better than the one in the camera. I also double checked my light levels with a light meter. I don't feel that it's essential to use a light meter, but it's a useful safety net. The light meter is also handy for measuring contrast ratios across faces etc, but again, if you have a decent waveform display you don't have to have a light meter.
For the next 3 hours we shot the various processes seen in the video, trying to get a variety of different shots. But when each process is quite similar, usually involving the anvil and a large hammer, it was difficult to come up with shots that looked different.
In the afternoon we set up to shoot the interview sequence. The reason for doing this was to not only provide the narrative for the film but also to help show how the lenses reproduce skin tones. The Fujinon MK series lenses are what I would describe as “well rounded”. That is, not too sharp but not soft either. They produce beautifully crisp pictures without the pictures looking artificially sharp and this really helps when shooting people and faces. They just look really nice.
For the interview shot I used one of the Stella 5000 lights with the 25 degree fresnel lens aimed through a 1m wide diffuser to add a little extra light to supplement the daylight. This allowed me to get some nice contrast across the blacksmith's face and nice "catch light" highlights in his eyes. In addition, the little bit of extra light on his face meant that the back wall of the forge would appear just that bit darker due to the increased contrast between his face and the back wall. This is why we light: not just to ensure enough light to shoot with – I had plenty of light, if I remember right I had a 1 stop ND in the matte box – but to create contrast. It's contrast that gives the image depth, and contrast that makes an image look interesting.
The final stage was to shoot the treasure chest and ornate jars that would show off the lenses' macro and close up performance. The treasure chest is a truly amazing thing. It weighs around 80kg, the locking mechanism is quite fascinating, and I still struggle to believe that it was all hand made. The small metal jars are made out of folded and welded steel, and it's the folds in the metal that create the patterns that you see.
Once again we used the jib to add motion to the shots. I also used the macro function of both the MK18-55mm and MK50-135mm lenses. This function allows you to get within inches of the object that you are shooting. It’s a great feature to have and it really adds to the versatility of these lenses.
We wrapped at 7pm. Time to pack away the kit. It’s really important not to rush at this stage. Like everyone else I want to get home as quick as I can. But it’s important to pack your kit carefully and properly. There is nothing more annoying than when you start prepping for the next shoot finding that something has been broken or is missing because you rushed to pack up at the end of the previous shoot. Once you have packed everything away don’t forget to do that last walk through all the locations you’ve shot in to make sure you haven’t forgotten something.
I shot a little over an hour of material. As it was mostly 60p 4K raw, that came to about 1.5TB. This was backed up on site using a Nexto-DI NSB25, a stand alone device that makes two verified copies of everything on two different hard drives. The film was edited using Adobe Premiere CC, which handles Sony's raw very easily, and grading was completed using DaVinci Resolve. I spent two days editing and a day grading the first version of the film, then another day re-grading it for HDR and producing the different versions that would be needed. All in, including coming up with the concept, finding the location, prepping, shooting and post, it took about 7 to 8 full work days to put this simple 4 minute film together.
I have been asked whether you should still expose log a bit brighter than the recommended base levels on the Sony PXW-FS5 now that Sony have released new firmware that gives it a slightly lower base ISO. In this article I take a look at why it might be a good idea to expose log (with any camera) a bit brighter than perhaps the manufacturer recommends.
There are a couple of reasons to expose log nice and bright, and it's not just about noise. Exposing log brighter makes no difference to the dynamic range; that's determined by the sensor and the gain point at which the sensor is working. You want the camera at its native sensitivity, or 0dB gain, to get the maximum dynamic range.
Exposing brighter or darker doesn’t change the dynamic range but it does move the mid point of the exposure range up and down. Exposing brighter increases the under exposure range but decreases the over exposure range. Exposing darker decreases the under exposure range but increases the over exposure range.
Something that's important when thinking about dynamic range, and big dynamic ranges in particular, is that dynamic range isn't just about the highlights – it's also about the shadows. It isn't just over exposure, it's under exposure too. It's RANGE.
So why is a little bit of extra light often beneficial? You might call it “over exposure” but that’s not a term I like to use as it implies “too much exposure”. I prefer to use “brighter exposure”.
It’s actually quite simple, it’s about putting a bit more light on to the sensor. Most sensors perform better when you put a little extra light on them. One thing you can be absolutely sure of – if you don’t put enough light on the sensor you won’t get the best pictures.
Put more light on the sensor and the shadows come up out of the sensor's noise floor, so you will see further into the shadows. I've had people ask "why would I ever want to use the shadows, they are always noisy and grainy?" But that's the whole point – expose a bit brighter and the shadows will be much less noisy; they will come up out of the noise. Expose 1 stop brighter and you halve the shadow noise (for the same shadows at the previous exposure). Shadows are only ever noise ridden if you have under exposed them.
This is particularly relevant in controlled lighting. Say you light a scene for 9 stops. So you have 9 stops of dynamic range but a 14 stop sensor. Open up the aperture, put more light on the sensor, you get a better signal to noise ratio, less noisy shadows but no compromise of any type to the highlights because if the scene is 9 stops and you have 14 to play with, you can bring the exposure up by a couple of stops comfortably within the 14 stop capture range.
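The arithmetic in that example can be written out explicitly. The 6dB-per-stop figure is the usual rule of thumb for shadow noise improvement, and the scene and sensor numbers are just the ones from the paragraph above:

```python
# Illustrative headroom sum: a 9 stop lit scene on a 14 stop sensor.
sensor_stops = 14                       # total capture range of the sensor
scene_stops = 9                         # contrast range of the lit scene
headroom = sensor_stops - scene_stops   # 5 stops of spare range

# Open up by 2 stops: still comfortably inside the capture range,
# and shadow noise drops by roughly 6 dB per stop of extra exposure.
brighter_by = 2
assert brighter_by <= headroom          # confirms nothing will clip
shadow_noise_improvement_db = 6 * brighter_by

print(f"{headroom} stops spare; open up {brighter_by} stops "
      f"and shadows are ~{shadow_noise_improvement_db} dB quieter")
```

The same sum shows why this trick only works in controlled lighting: if the scene itself filled all 14 stops there would be no headroom left to trade.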
Look at the above diagram of Sony’s S-Log2 and S-Log3 curves. The vertical 0 line in the middle is middle grey. Note how above middle grey the log curves are more or less straight lines. That’s because above the nominal middle grey exposure level each stop is recorded with the same amount of data, this you get a straight line when you plot the curve against exposure stops. So that means that it makes very little difference where you expose the brighter parts of the image. Expose skin tones at stop + 1 or stop +3 and they will have a very similar amount of code values (I’m not considering the way dynamic range expands in the scene you shoot as you increase the light in the scene in this discussion). So it makes little difference whether you expose those skin tones at stop +1 or +3, after grading they will look the same.
Looking at the S-Log curve plots again note what happens below the “0” middle grey line. The curves roll off into the shadows. Each stop you go down has less data than the one before, roughly half as much. This mimics the way the light in a real scene behaves, but it also means there is less data for each stop. This is one of the key reasons why you never, ever want to be under exposed as if you are underexposed you mid range ends up in this roll off and will lack data making it not only noisy but also hard to grade as it will lack contrast and tonal information.
Open up by 1 additional stop and each of those darker stops is raised one stop higher up the recording curve, and every stop that was previously below middle grey roughly doubles its number of tonal values, so that’s 8 stops with around 2x more data than before. This gives you a nice fat (lots of data) mid range that grades much better, not just because it has less noise but because you have a lot more data where you really need it – in the mid range.
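Sony publishes the S-Log3 transfer function, so the per-stop code allocation described above can be checked directly. A minimal sketch (10-bit code values; formula taken from Sony’s S-Log3 technical paper, middle grey at 18% reflectance):

```python
import math

def slog3_code(x):
    """10-bit code value for linear scene reflectance x (0.18 = middle grey),
    per Sony's published S-Log3 formula."""
    if x >= 0.01125:
        return 420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5
    return x * (171.2102946929 - 95.0) / 0.01125 + 95.0

# Code values spent on each one-stop band, from 4 below to 4 above middle grey
for stop in range(-4, 4):
    lo = slog3_code(0.18 * 2 ** stop)
    hi = slog3_code(0.18 * 2 ** (stop + 1))
    print(f"stop {stop:+d} to {stop + 1:+d}: {hi - lo:5.1f} code values")
```

Run this and the stops above middle grey each get a near-constant allocation (in the high 70s of code values), while each stop below middle grey gets progressively fewer – the shadow roll-off in numbers.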
Note: Skin tones can cover a wide exposure range, but typically the mid point is around 1 to 1.5 stops above middle grey. In a high contrast lighting situation skin tones will start just under middle grey and extend to about 2 stops over. If you accidentally underexpose by 1 stop, or perhaps don’t have enough light for the correct exposure, you will seriously degrade the quality of your skin tones, as half of them will end up well below middle grey, down in the data roll-off.
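As a rough worked example using the spans above (stops measured relative to middle grey at 0, high-contrast case):

```python
# Skin-tone span from the text: just under middle grey to about +2 stops
skin = (-0.5, 2.0)   # (low, high) in stops relative to middle grey

def underexpose(span, stops):
    """Shift an exposure span down by the given number of stops."""
    lo, hi = span
    return (lo - stops, hi - stops)

shifted = underexpose(skin, 1.0)   # accidental 1-stop underexposure
fraction_below = -shifted[0] / (shifted[1] - shifted[0])
print(shifted, f"-> {fraction_below:.0%} of the skin range below middle grey")
```

A 1 stop underexposure pushes the span to (-1.5, +1.0), putting around 60% of the skin-tone range below middle grey and into the shadow roll-off.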
Now of course you do have to remember that if your scene has a very large dynamic range, opening up an extra stop might mean that some of the very brightest highlights end up clipped. But I’d happily give up a couple of specular highlights for a richer, more detailed mid range, because when it comes to highlights: A – you can’t show them properly anyway, as we don’t have 14 stop TV screens, and B – highlights are the least important part of our visual range.
A further consideration when we think about the highlights is that with log there is no highlight roll-off. Most conventional gamma curves incorporate a highlight roll-off to help increase the highlight range. These traditional roll-offs reduce the contrast in the highlights as the levels are squeezed together, and as a result the highlights contain very little tonal information. So even after grading they never look good, no matter what you do. But log has no highlight roll-off, so even the very brightest stop, the one right on the edge of clipping, contains just as much tonal information as each of the other brighter-than-middle-grey stops. As a result there is an amazingly large amount of detail that can be pulled out of these very bright stops, much more than you would ever be able to pull from most conventional gammas.
Compare log to standard gammas for a moment. Log has a shadow roll-off but no highlight roll-off; most standard gammas have a strong highlight roll-off. Log is the opposite of standard gammas. With standard gammas, because of the highlight roll-off, we normally avoid overexposure because it doesn’t look good. With log we need to avoid underexposure because of the shadow roll-off – the opposite of shooting with standard gammas.
As a result I strongly recommend you never, ever underexpose log. I normally like to shoot log between 1 and 2 stops brighter than the manufacturer’s base recommendation.
Next week: Why is a Sony camera like the FS7 or F5 rated at 800 ISO with standard gamma but 2000 ISO in log, and how does that impact the image?
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.