
What’s the difference between Latitude and Dynamic Range?

These two terms, latitude and dynamic range, are often confused and used interchangeably. Occasionally they can amount to the same thing, but more often they are quite different. So what is the difference, and why do you need to be careful to use the right term?

Let's start with dynamic range, as this is the simpler of the two. For a digital camera, the dynamic range is quite simply the total range from the darkest shadow to the brightest highlight that the camera can resolve in a single shot. To count towards the dynamic range, a brightness change must be discernible, either visually or on a scope, at both ends of the range. So a camera that can resolve 14 stops will be able to shoot a scene with a 14 stop brightness range and show some information at every one of those 14 stops. It is not just a measure of the camera's highlight handling; it includes both highlights and shadows. One camera may have very low noise, so it can see very far into the shadows, but not be so good with highlights. Another may be noisy, so unable to see as far into the shadows, but have excellent highlight handling. Despite these differences both might have the same dynamic range, because it is the total range we are measuring, not just one end or the other.

One note of caution with published dynamic range figures or measurements: while you may be able to discern some picture information in those deepest shadows or brightest highlights, just how useable both ends of the range are will depend on how the camera performs at its extremes. It is not uncommon for the darkest stop to be so close to the camera's noise floor that in reality it is barely useable, but because it can be measured it will still be included in the manufacturer's dynamic range figures.

This brings us on to latitude, because latitude is a measure of how flexible you can be with your exposure without significantly compromising the finished picture. The latitude will always be less than the camera's dynamic range. With a film camera, the film stock would have a sensitivity value or ISO. You would then use an exposure meter to determine the optimum exposure. The latitude would be how far you could over expose or under expose and still get an acceptable result. But what is "an acceptable result"? Here is one of the key problems with determining latitude: what some people find unacceptable others may be happy with, so it is difficult to quantify the exact latitude of a film stock or video camera precisely. What you can do is compare cameras, for example camera "A" has a stop more latitude than camera "B", provided you use a consistent "acceptable quality" assessment.

Anyone who has shot with a traditional ENG or home video camera will know that you really need to get your exposure right to get a decent looking picture. Take a simple interview shot: expose it correctly and it looks fine. Overexpose by 1 stop and it looks bad, and it will still look bad after grading. So in this example the camera has less than 1 stop of over exposure latitude. But if you underexpose a video camera the picture simply gets darker, and after a bit of work in post production it may well still look OK. In most cases it will depend on how noisy the picture becomes when you boost the levels in post to brighten it. Typically you might be able to go 1 to 1.5 stops under exposed and still have a useable image, so the camera would have around 1.5 stops of underexposure latitude. That gives our hypothetical camera a total latitude of around 2 to 2.5 stops.

But what if we increase the dynamic range of the camera, or use a camera with a very big dynamic range? Does my latitude increase?

Well, the answer is maybe. In some cases the latitude may actually decrease. How can that be possible? Surely with a bigger dynamic range my latitude must be greater?

Well, unless you're shooting linear raw (more on that in a bit) you will be using some kind of gamma curve. The gamma curve is there to allow you to squeeze a large dynamic range into a small amount of data. It does this by mimicking the way we perceive light, in a non-linear manner, using less data for the highlights, which are perceptually less important to us humans. Even uncompressed video normally has a gamma curve. Without one, the amount of data needed to record a decent looking picture would be huge, because every additional stop of dynamic range needs twice as much data as the previous one to be recorded faithfully.
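To put some rough numbers on that doubling, here is a quick Python sketch. It is an idealised model only (real cameras and codecs differ): it shares the 1024 code values of a 10 bit recording across 14 stops, first linearly and then evenly, which is roughly what a log or gamma curve aims for.

    STOPS = 14
    CODES = 2 ** 10          # 1024 code values in a 10 bit recording

    for stop in range(1, STOPS + 1):
        # Fraction of the total linear signal range occupied by this one stop.
        linear_share = 2 ** (stop - 1) / (2 ** STOPS - 1)
        linear_codes = linear_share * CODES
        log_codes = CODES / STOPS            # an even split across the stops
        print(f"stop {stop:2d}: linear {linear_codes:8.2f} codes, log {log_codes:5.1f} codes")

Recorded linearly, the brightest stop alone would use around half of the 1024 code values while the deepest shadow stop would get less than one, which is why some form of gamma or log curve is needed before a wide dynamic range will fit into an 8 or 10 bit recording.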

Cameras with larger dynamic ranges use things such as knee compression or special gamma curves like Hypergamma, Cinegamma or Log. The critical point with all of these is that the only way to squeeze that greater dynamic range into the same size recording bucket is to add extra compression to the recorded image.

This compression is normally restricted to the highlights (which are perceptually less important). Highlight compression now presents us with an exposure problem, because if we over expose the shot the picture won't look good due to the compression. This means that even though we might have increased the camera's dynamic range (by squeezing and compressing more information into the highlight range), we may have reduced the exposure latitude, as any over exposure pushes important mid range information into the highly compressed part of the gamma curve. So a bigger dynamic range does not mean greater latitude; in many cases it means less latitude.

Here’s the thing. Unless you make the recording data bucket significantly bigger (better codec and more bits, 10 bit 12 bit etc), you can’t put more data (dynamic range or stops) into that bucket without it overflowing or without squashing it. Given that most cameras used fixed 8 bit or 10 bit recording there is a finite limit to what can be squeezed into the codec without making some pretty big compromises.

Compression point with Hypergamma/Cinegamma.

With a standard gamma curve white is exposed at around 90% to 95% (remember a white card only reflects 90% of the light falling on it, not 100%). Middle grey perceptually appears half way between black and white, so it sits at around 40%-45%. Above 90% is where the knee normally acts, compressing the highlights to squeeze quite a large dynamic range into a very small recording range, so anything above 90% will be very highly compressed. Below 90% we are OK and we can safely use the full range. Expose a face below 90% and it will look natural; above 90% it will look washed out, low contrast and generally nasty due to the squeezing together of the contrast and dynamic range.
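If you want to see where those numbers come from, here is a small Python sketch using the standard Rec-709 transfer function. It ignores the knee and any camera specific tweaks, so treat it as an approximation rather than what any particular camera does.

    def rec709(linear):
        # linear: scene reflectance, where 0.18 is middle grey and 0.90 is a white card
        if linear < 0.018:
            return 4.5 * linear
        return 1.099 * linear ** 0.45 - 0.099

    print(f"18% middle grey -> {rec709(0.18) * 100:.0f}%")   # roughly 41%
    print(f"90% white card  -> {rec709(0.90) * 100:.0f}%")   # roughly 95%

Real cameras add their own tweaks, which is why middle grey gets quoted anywhere between 40% and 45%, but the shape of the curve is the same.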

But what about a Hypergamma or Cinegamma (or any other high dynamic range gamma curve)? Well, these don't have a knee. Instead they start to gradually introduce compression much lower down the curve, a little bit at first and then ever increasing amounts as we go up the exposure range. This allows them to squeeze in a much greater dynamic range in a pleasing way (provided you expose correctly). But it also means we can't afford to let faces etc go as high as with the standard gamma, because if we do they start to creep into the highly compressed part of the curve, so even slight over exposure will hurt the image. So even though they have greater dynamic range, these curves have less exposure latitude because we really can't afford to over expose them. Sony compensate for this to some degree by recommending a lower middle grey point, between 32% and 40% depending on the curve you use. This brings your overall exposure down so you're less likely to over expose, but it also means you have less under exposure range as you're already shooting a bit darker (white with the Hypergammas tends to fall lower, around 80%, so faces and skin tones that would normally be around 70% will be around 60%).

More highlight compression means exposure is still critical despite greater dynamic range

But what about Log?

Now let's look at S-Log2 and S-Log3. Most log curves are similar: very highly compressed gamma curves with huge amounts of highlight compression to squeeze in an exceptionally large dynamic range. With S-Log2, white is designed to sit at 59% and middle grey at 32%; with S-Log3 middle grey is 41% and white 61%. So faces will need to sit between around 40% and 50% to look their best. Now log is a little bit different. Log shooting is designed to be done in conjunction with LUTs (Look Up Tables) in post production. These LUTs convert the signal from log gamma to conventional gamma, and when you apply the correct LUT to correctly exposed log everything comes out looking good. What about over exposed log? This is where it can get tricky. If you have a good exposure correction LUT, or really know how to grade log properly (which can be tricky), then you can over expose log by one or two stops, but no more (in my opinion at least; two stops is a lot of over exposure for log, and I would try to stay under that). Over expose too much and the image gets really hard to grade and may start to lack contrast. One thing to note: when I say over exposed with respect to log, I'm not talking about a clipped picture, but simply an image much brighter than it should be. For example with S-Log3 faces will normally be around 52%. If you expose faces at 70% you're actually just over 2 stops over exposed, grading is going to start to get tricky, and you may find it hard to get your skin tones just right. So, when shooting log make sure you know what the recommended levels are for the curve you are using. I'm not saying you can't over expose a bit, just be aware of what is correct and that a level shift of just 7 or 8% may represent a whole stop of exposure change.
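If you want to check those S-Log3 numbers, the sketch below uses the S-Log3 formula from Sony's published technical summary (do check Sony's current documentation; the conversion to IRE percentages assumes the usual 64-940 legal range mapping, so treat the exact decimals as approximate):

    import math

    def slog3(reflectance):
        # S-Log3 encoding: input is scene reflectance, 0.18 = middle grey, 0.90 = white card.
        if reflectance >= 0.01125:
            code = (420.0 + 261.5 * math.log10((reflectance + 0.01) / 0.19)) / 1023.0
        else:
            code = (reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0
        return code  # fraction of the full 10 bit code range

    def to_ire(code):
        # Convert a full range code fraction to a percentage of the legal (64-940) video range.
        return (code * 1023.0 - 64.0) / (940.0 - 64.0) * 100.0

    print(f"18% middle grey          -> {to_ire(slog3(0.18)):.0f}%")   # about 41%
    print(f"90% white card           -> {to_ire(slog3(0.90)):.0f}%")   # about 61%
    print(f"one stop over grey (36%) -> {to_ire(slog3(0.36)):.0f}%")   # about 49%

Middle grey to one stop over middle grey is a shift of only around 8%, which is why a level error that would be trivial with a standard gamma can represent a whole stop with log.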

It's only when you stop shooting with conventional gamma curves and start shooting linear that the latitude really starts to open up. Cameras like the Sony F5/F55 can use linear raw recording, which does not have a gamma curve. When you have no gamma curve there is no highlight compression. So, for example, you could expose a face anywhere between, in conventional terms, say 45% (the point where perhaps it becomes too noisy if you expose any darker) and 100%, and it will look just fine after grading because at no point does it become compressed. This is a massive latitude increase over a camera using a gamma curve. It gets even better if the camera has very low noise, as you can afford to expose at an even lower level and bring it up in post. This is why raw is such a big deal. I find it much easier to work with and grade raw than log, because raw just behaves nicely.

In Conclusion:

Dynamic range is the range the camera can see from the deepest darkest shadows to the brightest highlights in the same shot. Latitude is the range within the dynamic range where we can expose and still get a useable image.

A camera with lower noise will allow you to expose darker and bring your levels up in post; this gives an increase in under exposure latitude.

Most video cameras have very limited over exposure latitude due to aggressive highlight compression. This is the opposite of negative film, which tolerates over exposure far better than under exposure.

Bigger dynamic range does not always mean greater latitude.

Cameras that shoot raw typically have much greater latitude than a camera shooting with a gamma curve. For example an F5 shooting S-Log2/3 has a much smaller exposure latitude than when shooting raw, even though the dynamic range is the same in both cases.

 

It’s all in the grade!

So I spent much of last week shooting a short film and commercial (more about the shoot in a separate article). It was shot in raw using my Sony F5, but it could have been shot with a wide range of cameras. The "look" for this production is very specific: much of it is set late in the day or in the evening and requires a gentle, romantic look.

In the past much of this look would have been created in camera. Shooting with a soft filter for the romantic look, shifting the white balance with warming cards or a dialled in white balance for a warm golden hour evening look. Perhaps a custom picture profile or scene file to alter the look of the image coming from the camera. These methods are still very valid, but thanks to better recording codecs and lower cost grading and post production tools, these days it’s often easier to create the look in post production.

When you look around on YouTube or Vimeo, most of the showreels and demo reels from people like me will almost always have been graded. Grading is now a huge part of the finishing process and it makes a big difference to the final look of a production. So don't automatically assume everything you see online looked like that when it was shot. It probably didn't, and a very big part of the look tends to be created in post these days.

One further way to work is to go half way to your finished look in camera and then finish the look off in post. For some productions this is a valid approach, but it comes with some risks: some things, once burnt into the recording, are hard to change later. For example, in camera sharpening is difficult to remove in post, as are crushed blacks or a skewed or offset white balance.

Also understand that there is a big difference between grading with the colour correction tools in an edit suite and using a dedicated grading package. For many years I graded using my editing software, simply because that was what I had. Plug-ins such as Magic Bullet Looks are great and offer a quick and effective way to get a range of looks, but while you can do a lot with a typical edit colour corrector, it pales into insignificance compared to what can be done with a dedicated grading tool, which lets you not only create a look but then adjust individual elements of the image.

When it comes to grading tools, DaVinci Resolve is probably the one most people have heard of. Resolve Lite is free, yet still incredibly capable (provided you have a computer that will run it). There are other options too, like Adobe SpeedGrade, but the key thing is that if you change your workflow to include lots of grading, then you need to change the way you shoot too. If you have never used a proper grading tool then I urge you to learn how to use one. As processing power improves and these tools become more and more powerful, they will play an ever greater role in video production.

So how should you shoot for a production that will be graded? I'm sure you will have come across the term "shoot flat", which is often said to be the way you should shoot when you're going to grade. Well, yes and no. It depends on the camera you are using, the codec, the noise levels and many other factors. If you are the DP, cinematographer or DIT, it's your job to know how footage from your camera will behave in post production so that you can provide the best possible blank canvas for the colourist.

What is shooting flat exactly? Let's say your monitor is a typical LCD monitor. It will be able to show 6 or 7 stops of dynamic range. Black at stop 0 will appear to be black and whites at stop 7 will appear bright white. If your camera has a 7 stop range then the blacks and whites from the camera will be mapped 1:1 with the monitor and the picture will have normal contrast. But what happens when you have a camera that can capture double that range, say 12 to 14 stops? The bright whites captured by the camera will be significantly brighter than before. If you then try to show that image on the same LCD monitor you have a problem: the LCD cannot go any brighter, so the much brighter whites from the high dynamic range shot are shown at the same brightness as those from the original low dynamic range shot. Not only that, but the now larger tonal range is squashed into the monitor's limited range. This reduces the contrast of the viewed image and as a result it looks flat.

That's a real "shoot flat" image (a wide dynamic range shown on a limited dynamic range monitor), but you have to be careful, because you can also create a flat looking image by raising the camera's black level or black gamma, or by reducing the white level. Doing this reduces the contrast in the shadows and mid tones and will make the pictures look low contrast and flat. But raising the black level or black gamma or reducing the white point rarely increases the dynamic range of a camera; most cameras' dynamic range is limited by the way they handle highlights and over exposure, not by the shadows, dark or white level. So beware: not all flat looking images bring real post production advantages. I've seen many examples of special "flat" picture profiles or scene files that don't actually add anything to the captured image. It's all about dynamic range, not contrast range. See this article for more in depth info on shooting flat.

If you're shooting for grading, shooting flat with a camera with a genuinely large dynamic range is often beneficial, as you provide the colourist with a broader dynamic range image that he/she/you can then manipulate so that it looks good on the typically small dynamic range TVs and monitors it will be viewed on. But excessively raising the black level or black gamma rarely helps the colourist, as it just introduces something that has to be corrected to restore good contrast rather than adding anything new or useful to the image.

You also need to consider that it's all very well shooting with a camera that can capture a massive dynamic range, but as there is no way to ever show that full range, compromises must be made in the grade so that the picture looks nice. An example of this would be a very bright sky. In order to show the clouds, the rest of the scene may need to be darkened, as the sky is always brighter than everything else in the real world. This might mean the mid tones have to be rather dark in order to preserve the sky. The other option would be to blow the sky out in the grade to get a brighter mid range. Either way, we don't have a way of showing the 14 stop range available from cameras like the F5/F55 with current display technologies, so a compromise has to be made in post, and this should be in the back of your mind when shooting scenes with large dynamic ranges. With a low dynamic range camera, you the camera operator would choose whether to let the highlights over expose to preserve the mid range, or to protect the highlights and put up with a darker mid range. With these high dynamic range cameras that decision largely moves to post production, but you should still be looking at your mid tones and, if needed, adding a bit of extra illumination so that the mids are not fighting the highlights.

In addition to shooting flat there is a lot of talk about using log gamma curves: S-Log, S-Log2, LogC etc. Again, IF the camera and recording codec are optimised for log then this can be an extremely good approach. Remember that if you choose to use a log gamma curve then you will also need to adjust the way you expose, to place skin tones etc in the correct part of the log curve. It's no longer about exposing for what looks good on the monitor or in the viewfinder, but about exposing the appropriate shades in the correct part of the log curve. I've written many articles on this so I'm not going to go into it here, other than to say log is not a magic fix for great results and log needs a 10 bit codec if you're going to use it properly. See these articles on log: S-Log and 8 bit or Correct Exposure with Log. Using log does allow you to capture the camera's full range, it will give you a flat looking image, and when used correctly it will give the colourist a large blank canvas to play with. When using log it is vital that you use a proper grading tool that will apply log based corrections to your footage, as applying linear corrections to log footage in a typical edit application will not give the best results.

So what if your camera doesn't have log? What can you do to help improve the way the image looks after post production? First of all get your exposure right. Don't over expose: anything that clips cannot be recovered in post. Something that's a little too dark can easily be brightened a bit, but if it's clipped it's gone for good, so watch those highlights. Don't under expose either, just expose correctly. If you're having a problem with a bright sky, don't be tempted to add a strong graduated filter to the camera to darken it. If the colourist adjusts the contrast of the image, the grad may become more extreme and objectionable. It's better to use a reflector or some lights to raise the foreground rather than a graduated filter to lower the highlight.

One thing that can cause grading problems is knee compression. Most video cameras by default use something called the "knee" to compress highlights. This does give the camera the ability to capture a greater dynamic range, but it does so by aggressively compressing the highlights, and it's essentially either on or off. If the light changes during the shot and the camera's knee is set to auto (as most are by default), the highlight compression will change mid shot, and this can be a nightmare to grade. So instead of using the camera's default knee settings, use a scene file or picture profile to set the knee to manual, or use an extended range gamma curve like a Hypergamma or Cinegamma that has no knee and instead uses a progressive type of highlight compression.

Another thing that can become an issue in the grading suite is image sharpening. In camera sharpening, such as detail correction, works by boosting contrast around edges. So if you take an already sharpened image into the grading suite and then boost the contrast in post, the sharpening becomes more visible and the pictures may take on more of a video look or appear over sharpened. It's just about impossible to remove image sharpening in post, but adding a bit of sharpening is quite easy. So, if you're shooting for post, consider either turning the detail correction circuits off altogether or at the very least reducing the levels applied by a decent amount.

Colour and white balance: one thing that helps keep things simple in the grade is a consistent image. The last thing you want is the white balance changing halfway through a shot, so as a minimum use a fixed or preset white balance. I find it better to shoot with preset white when shooting for a post heavy workflow, as even if the light changes a little from scene to scene or shot to shot the RGB gain levels remain the same, so any corrections applied have a similar effect and the colourist just tweaks the shots for any white balance differences. It's also normally easier to swing the white balance in post if preset is used, as there won't be any of the odd shifts that can sometimes be added when you white balance off a grey/white card.

Just as the brightness or luma of an image can clip if over exposed, so too can the colour. If you're shooting colourful scenes, especially shows or events with coloured lights, it will help if you reduce the saturation in the colour matrix by around 20%. This allows you to record stronger colours before they clip, and colour can be added back in the grade if needed.

Noise and grain: this is very important. The one thing above all others that will limit how far you can push your image in post is noise and grain. There are two sources of this: camera noise and compression noise. Camera noise depends on the camera's gain and the chosen gamma curve. Always strive to use as little gain as possible; remember that if the image is just a little dark you can always add gain in post, so don't add unnecessary gain in camera. A proper grading suite will have powerful noise reduction tools, and these normally work best when the original footage is clean and any gain is added in post, rather than trying to de-noise grainy camera clips.

The other source of noise and grain is compression noise. Generally speaking, the more highly compressed the video stream, the greater the noise will be. Compression noise is often more problematic than camera noise, as in many cases it has a regular pattern or structure which makes it visually more distracting than random camera noise. More often than not, the banding seen across skies or flat surfaces is caused by compression artefacts rather than anything else, and during grading such artefacts can become more visible. So try to use as little compression as possible; this may mean using an external recorder, but these can be purchased or hired quite cheaply these days. As always, before a big production test your workflow. Shoot some sample footage, grade it and see what it looks like. If you have a banding problem, suspect the codec or compression ratio first, not whether it's 8 bit or 10 bit; in practice it's usually not 8 bit that causes banding but too much or poor quality compression (so even a camera with only an 8 bit output like the FS700 will benefit from recording to a better quality external recorder).

RAW: of course the best way of providing the colourist (even if that's yourself) with the best blank canvas is to shoot with a camera that can record the raw sensor data. By shooting raw you do not bake in any in camera sharpening or gamma curves that might then need to be undone in post, and raw normally means capturing the camera's full dynamic range. But that's not possible for everyone, and it generally involves working with very large amounts of data. If you follow the guidelines above you should at least have material that allows a good range of adjustment and fine tuning in post. This isn't "fix it in post"; we are not putting right something that is wrong. We are shooting in a way that allows us to make use of the incredible processing power available in a modern computer to produce great looking images, making those last adjustments that make a picture look great on a nice big (hopefully calibrated) monitor, in a normally more relaxed environment than most shoots.

The way videos are produced is changing. Heavy duty grading used to be reserved for high end productions, drama and movies, but now it is commonplace, faster and easier than ever. Of course there are still many applications where there isn't time for grading, such as TV news, but grading is going to play an ever greater part in more and more productions, so it's worth learning how to do it properly and how to adjust your shooting setup and style to maximise the quality of the finished production.

Understanding the difference between Display Referenced and Scene Referenced.

This is really useful! Understand this and it will help you understand a lot more about gamma curves, log curves and raw. Even if you don't shoot raw, understanding this is very helpful in working out the differences between how we see the world, the way the world really is, and how a video camera sees the world.

So first of all, what is "Display Referenced"? As the name implies, this is all about how an image is displayed. The vast majority of gamma curves are display referenced. Most cameras are set up based on what the pictures look like on a monitor or TV; this is display referenced. It's all about producing a picture that looks nice when it is displayed. Cameras and monitors produce pictures that look nice by mimicking the way our own visual system works, which is why the pictures look natural.

Kodak Grey Card Plus.

If you’ve never used a grey card it really is worth getting one as well as a black and white card. One of the most commonly available grey cards is the Kodak 18% grey card. Look at the image of the Kodak Grey Card Plus shown here. You can see a white bar at the top, a grey middle and a black bar at the bottom.

What do you see? If your monitor is correctly calibrated, the grey patch should look like it's halfway between white and black. But this "middle" grey is also known as 18% grey because it actually reflects only 18% of the light falling on it, while a white card reflects 90%. If we assume black is black, you would think that a card reflecting only 18% of the light falling on it would look closer to black than white, but it doesn't; it looks halfway between the two. This is because our own visual system is tuned to shadows and the mid range and tends to ignore highlights and the brighter parts of the scenes we are looking at. As a result we perceive shadows and dark objects as brighter than they really are. Maybe this is because in the past the things that wanted to eat us lurked in the shadows, or simply because faces are more important to us than the sky and clouds.

To compensate for this, right now your monitor is only using 18% of its brightness range to show shades and hues that appear to be halfway between black and white. This is part of the gamma process that makes images on screens look natural, and this is "display referenced".

When we expose a video camera using a display referenced gamma curve (Rec-709 is display referenced) and a grey card, we would normally set the exposure level of the grey card at around 40-45%. It's not 50% because a white card only reflects 90% of the light falling on it, and halfway between black and that white card works out at about 45%.

We do this for a couple of reasons. In older analogue recording and broadcasting systems the signal is noisier closer to black, so if we recorded 18% grey at 18% it could be very noisy. Most scenes contain lots of shadows and objects less bright than white, so recording these at a higher level gives a less noisy picture and lets us use more bandwidth for those all important shadow areas. When the recording is then displayed on a TV or monitor, the levels are adjusted by the monitor's gamma curve so that mid tones appear as just that, mid tones.

So that middle grey recorded at around 45% gets reduced back down so that the display outputs only 18% of its available brightness range, and thus to us humans it appears to be halfway between black and white.
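As a rough worked example of that round trip, here is a simplified Python sketch assuming a pure power-law gamma of 2.2 rather than the exact Rec-709 and display curves, so the numbers are only approximate:

    GAMMA = 2.2                              # simplified camera/display gamma

    scene_grey = 0.18                        # 18% grey card, relative to a 100% reflector
    recorded = scene_grey ** (1.0 / GAMMA)   # camera side encoding
    displayed = recorded ** GAMMA            # monitor side decoding

    print(f"recorded level   : {recorded * 100:.0f}%")    # about 46%, close to the 40-45% discussed above
    print(f"light from screen: {displayed * 100:.0f}%")   # back to about 18% of the display's brightness

So the grey card is recorded up at around 45%, but the display's gamma pulls it back down so that the screen emits only roughly 18% of its maximum brightness, which we then perceive as mid grey.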

So are you still with me? All of the above is "Display Referenced": it's all about how it looks.

So what is “Scene Referenced”?

Think about our middle grey card again. It reflects only 18% of the light that falls on it, yet appears to be halfway between black and white. How do we know this? Because someone has used a light meter to measure it. A light meter is a device that captures photons of light and from that produces an electrical signal to drive a meter. What is a video camera? Every pixel in a video camera is a microscopic light meter that turns photons of light into an electrical signal. So a video camera is in effect a very sophisticated light meter.

Ungraded raw shot of a bike in Singapore; this is scene referred as it shows the scene as it actually is.

If we remove the camera's gamma curve and just record the data coming off the sensor, we are recording a measurement of the true light coming from the scene, just as it is. Sony's F5, F55 and F65 cameras record the raw sensor data with no gamma curve. This is linear raw data, a true representation of the actual light levels in the scene, and it is "Scene Referred". It's not about how the picture looks, but about recording the actual light levels in the scene. So a camera shooting scene referred will record the light coming off an 18% grey card at 18%.

If we do nothing else to that scene referred image and show it on a monitor with a conventional gamma curve, that 18% grey level will be taken down in level by the display's gamma and as a result look almost totally black (remember that in display referenced we record middle grey at around 45%, and the display gamma then brings the monitor output down to the correct brightness so that we perceive it as halfway between black and white).

This means that we cannot simply take a scene referenced shot and show it on a display referenced monitor. To get from scene referenced to display referenced we have to apply a gamma curve to the scene referenced footage. When you're working with linear raw this is normally done on the fly in the editing or grading software, so it's very rare to actually see the scene referenced footage as it really is. The big advantage of using scene referenced material is that because we have recorded the scene as it actually is, any grading we do does not have to deal with the distortions that a gamma curve adds; grading corrections behave in a much more natural and realistic manner. The downside is that without a gamma curve to shift our recording levels into a more manageable range, we need a lot more data to record the scene accurately.
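Using the same simplified 2.2 display gamma as in the earlier sketch, this is roughly what happens if you skip that conversion step (approximate numbers only):

    GAMMA = 2.2            # simplified display gamma
    scene_grey = 0.18      # scene referred: the grey card really is recorded at 18%

    # Linear scene data fed straight to a display referenced monitor:
    shown_directly = scene_grey ** GAMMA
    print(f"{shown_directly * 100:.1f}% of screen brightness")   # about 2% - almost black

Middle grey ends up at only a couple of percent of the screen's brightness, which is why ungraded linear raw looks nearly black until a gamma curve or LUT is applied to convert it for display.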

The Academy ACES workflow is based around using scene referenced material rather than display referenced. One of the ideas behind this is that scene referenced cameras from different manufacturers should all look the same. There is no artistic interpretation of the scene via a gamma curve; a scene referenced camera should be "measuring" and recording the scene as it actually is, so it shouldn't matter who makes it, they should all be recording the same thing. Of course in reality life is not that simple. Differences in the colour filters, pixel design etc mean that there are differences, but by using scene referred material you eliminate the gamma curve, so a grade applied to one camera will look very similar when applied to another, making it easier to mix multiple cameras within your workflow.

 

Aliasing when shooting 2K with a 4K sensor (FS700, PMW-F5, PMW-F55 and others).

There is a lot of confusion and misunderstanding about aliasing and moiré, so I've thrown together this article to try to explain what's going on and what you can (or can't) do about it.

Sony's FS700, F5 and F55 cameras all have 4K sensors. They also have the ability to shoot 4K raw as well as 2K raw when using Sony's R5 raw recorder, and the FS700 will also be able to shoot 2K raw to the Convergent Design Odyssey. At 4K these cameras have near zero aliasing; at 2K there is a risk of noticeable aliasing.

One key concept to understand from the outset is that when you are working with raw, the signal out of the camera comes more or less directly from the sensor. When shooting non-raw, the output is derived from the full sensor plus a lot of extra, very complex signal processing.

First of all let's look at what aliasing is and what causes it.

Aliasing shows up in images in different ways. One common effect is a rainbow of colours across a fine repeating pattern; this is called moiré. Another is lines and edges that are just a little off horizontal or vertical appearing to have stepped or jagged edges, sometimes referred to as "jaggies".

But what causes this and why is there an issue at 2K but not at 4K with these cameras?

Let's imagine we are going to shoot a test pattern that looks like this:

Test pattern, checked shirt or other similar repeating pattern.

And let's assume we are using a bayer sensor such as the one in the FS700, F5 or F55, with a pixel arrangement like the one below, although it's worth noting that aliasing can occur with any type of sensor pattern or even a 3 chip design:

Sensor with bayer pattern.

Now let's see what happens if we shoot our test pattern so that the stripes of the pattern line up with the pixels on the sensor. The top of the graphic below represents the pattern or image being filmed, the middle is the sensor pixels and the bottom is the output from the green pixels of the sensor:

Test pattern aligned with the sensor pixels.

As we can see, each green pixel sees either a white line or a black line, so the output is a series of black and white lines. Everything looks just fine… or does it? What happens if we move the test pattern or move the camera just a little bit? What if the camera is just a tiny bit wobbly on the tripod? What if this isn't a test pattern but a striped or checked shirt, and the person we are shooting is moving around? In the image below the pattern we are filming has been shifted to the left by a small amount.

Test pattern misaligned with the sensor pixels.

Now look at the output: it's nothing but grey, the black and white pattern has gone. Why? Simply because each green pixel is now seeing half of a white bar and half of a black bar. Half white plus half black equals grey, so every pixel sees grey. If we were to slowly pan the camera across this pattern, the output would alternate between black and white lines when the bars and pixels line up and grey when they don't. This is aliasing at work. Imagine the shot is of a person in a checked shirt: as the person moves about, the shirt will alternate between being patterned and being grey. As the shirt will not be perfectly flat and even, different parts of it will go in and out of sync with the pixels, so some parts will be grey and some patterned, and it will look blotchy. A similar thing happens with colours, as the red and blue pixels will sometimes see the pattern and at other times not, so the colours will flicker and produce strange patterns. This is the moiré that can look like a rainbow of colours.
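If you prefer numbers to diagrams, here is a tiny Python sketch of the same thing: a row of pixels sampling a black and white stripe pattern whose bars are exactly one pixel wide, first aligned with the pixels and then shifted by half a pixel. It's a simplified one dimensional model, not a real sensor simulation.

    def pixel_outputs(shift, pixels=8, samples_per_pixel=100):
        # Each pixel integrates (averages) the light falling across its own width.
        outputs = []
        for p in range(pixels):
            total = 0.0
            for s in range(samples_per_pixel):
                x = p + (s + 0.5) / samples_per_pixel + shift
                total += 1.0 if int(x) % 2 == 0 else 0.0   # white bar or black bar, 1 pixel wide
            outputs.append(total / samples_per_pixel)
        return outputs

    print("aligned:           ", pixel_outputs(0.0))   # alternates 1.0, 0.0 - the stripes are resolved
    print("shifted half pixel:", pixel_outputs(0.5))   # every pixel reads 0.5 - a flat grey

Aligned, the pixels reproduce the stripes; shifted by half a pixel, every pixel averages half a white bar and half a black bar and the whole pattern collapses to a uniform grey, exactly the flicker between detail and grey described above.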

So what can be done to stop this?

Well, what's done in the majority of professional level cameras is to add a special filter in front of the sensor called an optical low pass filter (OLPF). This filter works by significantly reducing the contrast of the image falling on the sensor at resolutions approaching the pixel resolution, so that the scenario above cannot occur. Basically the image falling on the sensor is blurred a little so that the pixels can never see only black or only white. This way we won't get flickering between black and white and then grey if there is any movement. The downside is that some contrast and resolution is lost, but this is better than having flickery jaggies and rainbow moiré. In effect the OLPF is a type of de-focusing filter (for the techies out there it is usually something called a birefringent filter). The design of the OLPF is a trade-off between how much aliasing is acceptable and how much resolution loss is acceptable. The OLPF cut-off isn't instant; it's a sharp but gradual roll-off that starts somewhere below the sensor resolution, so there is some undesirable but unavoidable contrast loss on fine detail. The OLPF will be optimised for a specific pixel size and thus image resolution, but it's a compromise. In a 4K camera the OLPF will start reducing the resolution/contrast before it gets to 4K.

(As an aside, this is one of the reasons why shooting with a 4K camera can result in better HD, because the OLPF in an HD camera cuts contrast as we approach HD, so the HD is never as sharp and contrasty as perhaps it could be. But shoot at 4K and down-convert and you can get sharper, higher contrast HD).

So that’s how we prevent aliasing, but what’s that got to do with 2K on the FS700, F5 and F55?

Well, the problem is this: when shooting 2K raw or in the high speed raw modes, Sony are reading out the sensor in a way that creates a larger "virtual" pixel. This almost certainly has to be done for the high speed modes to reduce the amount of data that needs to be transferred from the sensor into the camera's processing and recording circuits at high frame rates. I don't know exactly how Sony are doing this, but it might be something like my sketch below:

Using adjacent pixels to create larger virtual pixels.

So instead of reading individual pixels, Sony are probably reading groups of pixels together to create a 2K bayer image. This creates larger virtual pixels and in effect turns the sensor into a 2K sensor. It is probably done on the sensor during the read-out process (possibly simply by addressing 4 pixels at the same time instead of just one), and this makes high speed continuous shooting possible without overheating or overload as there is far less data to read out.
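Sony have not published exactly how the 2K read-out works, so purely as an illustration of the idea, here is what a same-colour 2x2 binning scheme along the lines of the sketch above might look like in code (using NumPy; every detail here is an assumption, not Sony's actual implementation):

    import numpy as np

    def bin_bayer_2x(mosaic):
        # Turn a 4K RGGB Bayer mosaic into a 2K one by averaging each group of four
        # same-colour photosites into one larger "virtual" photosite.
        h, w = mosaic.shape
        out = np.empty((h // 2, w // 2), dtype=float)
        # The four Bayer sub-planes: R at (even,even), G at (even,odd) and (odd,even), B at (odd,odd).
        for dy in (0, 1):
            for dx in (0, 1):
                plane = mosaic[dy::2, dx::2]                       # one colour, half resolution
                binned = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                          plane[1::2, 0::2] + plane[1::2, 1::2]) / 4.0
                out[dy::2, dx::2] = binned                         # rebuild a smaller Bayer mosaic
        return out

    mosaic_4k = np.random.rand(2160, 4096)    # stand-in for 4K raw sensor data
    mosaic_2k = bin_bayer_2x(mosaic_4k)       # a 1080 x 2048 Bayer mosaic

Whatever the exact scheme, the result is the same: each output photosite now covers a much larger area of the sensor, so the effective pixel pitch roughly doubles and the 4K OLPF is no longer matched to it.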

But now the standard OLPF, which is designed around the small 4K pixels, isn't really doing anything, because the new "virtual" pixels are much larger than the 4K pixels the OLPF was designed around. The standard OLPF cuts off at 4K, so it has no effect at 2K, and a 2K resolution pattern can fall directly on our 2K virtual bayer pixels and produce aliasing. (There's a clue in the filter name: optical LOW PASS filter, so it will PASS any signal that is LOWer than the cut-off. If the cut-off is 4K, then 2K will be passed as this is lower than 4K, but as the sensor is now in effect a 2K sensor we need a 2K cut-off.)

On the FS700 there isn't (at the moment at least) a great deal you can do about this. But on the F5 and F55 cameras Sony have made the OLPF replaceable. By loosening one screw, the 4K OLPF can be swapped for a 2K OLPF in just a few seconds. The 2K OLPF will control aliasing at 2K and in the high speed modes, and in addition it can be used if you want a softer look at 4K: the contrast/resolution reduction introduced by the 2K filter gives a softer, "creamier" look which might be nice for cosmetic, fashion, period drama or other similar shoots.

Replacing the OLPF on a Sony PMW-F5 or PMW-F55. Very simple.

FS700 owners wanting to shoot 2K raw will have to look at adding a little bit of diffusion to their lenses. Perhaps a low contrast filter, or a net or stocking over the front of the lens, will add enough diffusion to slightly soften the image and prevent aliasing. Maybe someone will bring out an OLPF that can be fitted between the lens and camera body, but for the time being, to prevent aliasing on the FS700 you need to soften, defocus or blur the image a little so the camera cannot resolve detail above 2K. Using a soft lens may work, or just very slightly de-focussing the image.

But why don’t I get aliasing when I shoot HD?

Well all these cameras use the full 4K sensor when shooting HD. All the pixels are used and read out as individual pixels. This 4K signal is then de-bayered into a conventional 4K video signal (NOT bayer). This 4K (non raw) video will not have any significant aliasing as the OLPF is 4K and the derived video is 4K. Then this conventional video signal is electronically down-converted to HD. During the down conversion process an electronic low pass filter is then used to prevent aliasing and moiré in the newly created HD video. You can’t do this with raw sensor data as raw is BEFORE processing and derived directly from the sensor pixels, but you can do this with conventional video as the HD is derived from a fully processed 4K video signal.
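That down-conversion filter does digitally what the OLPF does optically. A one dimensional Python illustration (real scalers use far better filters than this simple two-tap average):

    # Detail that alternates on every sample: fine at the original resolution,
    # beyond the limit once we halve the number of samples.
    signal = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

    # Naive down-conversion: just throw away every second sample.
    print(signal[0::2])   # [1.0, 1.0, 1.0, 1.0]
    print(signal[1::2])   # [0.0, 0.0, 0.0, 0.0] - shift the sampling by one and you get the opposite: that instability is aliasing

    # Low pass filter by averaging each neighbouring pair, which also halves the sample count.
    filtered = [(a + b) / 2.0 for a, b in zip(signal[0::2], signal[1::2])]
    print(filtered)       # [0.5, 0.5, 0.5, 0.5] - detail beyond the new resolution becomes a stable grey instead

The filtered result is the same whichever phase you sample at, which is exactly what the electronic low pass filter in the down-converter provides and what the missing 2K OLPF cannot provide for 2K raw.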

I hope you understand this. It’s not an easy concept to understand and even harder to explain. I’d love your comments, especially anything that can add clarity to my explanation.

UPDATE: It has been pointed out that it should be possible to take the 4K bayer and from that use an image processor to produce a 2K anti-aliased raw signal.

The problem is that, yes, in theory you can take a 4K signal from a bayer sensor into an image processor and from that create an anti-aliased 2K bayer signal. But the processing power needed to do this is considerable, as we are looking at taking 16 bit linear sensor data and converting it to new 16 bit linear data. That means using DSP with a massive bit depth and enough overhead to handle 16 bit in and 16 bit out, so as a minimum an extremely fast 24 bit DSP, or possibly a 32 bit DSP, working on 4K data in real time. This would carry a big heat and power penalty, and I suspect it is completely impractical in a compact video camera like the F5/F55. This is rack-mount, high power workstation territory at the moment. Anyone who has edited 4K raw will know how processor intensive it is to manipulate 16 bit 4K data.

When shooting HD you're taking 4K 16 bit linear sensor data but only generating 10 bit HD log or gamma encoded data, so the processing overhead is tiny in comparison and a 16 bit DSP can quite happily handle it. In fact you could use a 14 bit DSP by discarding the two LSBs, as these would have little effect on the final 10 bit HD.

For the high speed modes I doubt there is any way to read out every pixel from the sensor continuously without something overheating. It's probably not the sensor that would overheat but the DSP. The FS700 can actually do 120fps at 4K for 4 seconds, so clearly the sensor can be read in its entirety at 4K and 120fps, but my guess is that something gets too hot for extended shooting times.

So this means that somehow the data throughput has to be reduced. The easiest way to do this is to read adjacent pixel groups, which can be done during the read-out by addressing multiple pixels together (binning). If done as I suggest in my article, this adds a small degree of anti-aliasing. Sony have stated many times that they do not pixel or line skip and that every pixel is used at 2K, but either way, whether you line skip or pixel bin, the effective pixel pitch increases and so you must also change the OLPF to match.

Shimming Nikon to Canon Lens Adapters. Helps get your zooms to track focus.

I use a lot of different lenses on my large sensor video cameras. Over the years I've built up quite a collection of Nikon and Canon mount lenses. I like Nikon mount lenses because they still have an iris that can be controlled manually. I don't like them because most of them focus back-to-front compared to broadcast, PL and Canon lenses. The exception to this is Sigma: the vast majority of Sigma lenses with Nikon mounts focus the right way, anti-clockwise for infinity. If you go back just a few years you'll find a lot of Sigma Nikon mount lenses that focus the right way and have a manual iris ring. These are a good choice for use on video cameras: you don't need any fancy adapters with electronics or extra mechanical devices, and you know exactly what your aperture is.

But…. Canon lenses have some advantages too. First is the massive range of lenses out there. Then there is the ability to have working optical image stabilisation if you have an electronic mount, and the possibility to remotely control the iris and focus. The downside is that you need some kind of electronic mount adapter to make most of them work. But as I own a couple of Canon DSLRs, it is useful to have a few Canon lenses.

So for my F3 I initially used Nikon lenses. Then along came the FS100 and FS700 cameras plus the Metabones adapter for Canon, so I got some Canon lenses. Then came the MTF Effect control box for Canon lenses on the F5, and now I have my micro Canon controller with integrated speed booster for the F5 and F55. This all came to a head when, on an overseas shoot, I got out one of my favourite lenses to put on my F5, but the lens was a Nikon lens and I only had my Canon mounts with me (shame on me for not taking both). Continually swapping mounts is a pain, so I decided to permanently fit all of my Nikon lenses with Nikon to Canon adapters and then only use Canon mounts on the cameras. You can even get Nikon to Canon adapters that will control the manual iris pin on a lens with no iris ring.

Now, a problem with a lot of these adapters is that they are a little bit too thin. This is done to guarantee that the lens will reach infinity focus; if the adapter is too thick you won't be able to focus on distant objects. But it means that the focus marks on the lens and the distances you're focusing at don't line up. Typically you'll be focused on something 3m/9ft away but the lens markings will read 1m/3ft. It can also mean that the lens won't focus on close objects when really it should. If you're using a zoom lens, you will also see much bigger focus swings as you zoom in and out than you should. When the lens flange back distance (the distance from the back of the lens to the sensor) is correctly set, any focus shifts are minimised; if the flange back distance is wrong, the focus shifts can be huge.

Remove the 4 small screws as arrowed.

So what's the answer? Well, it's actually quite simple. All you need to do is split the front and rear halves of the adapter and insert a thin shim or spacer. Most of the lower cost adapters are made from two parts, and removing 4 small screws allows you to separate the two halves. Make sure you don't lose the little locking tab and its tiny spring!

 

 

The adapter split in two. The shim needs to fit just inside the lip arrowed.

Split the two halves apart. Then use the smaller inner part as a template for a thin card spacer that will go in between the two parts when you put the adapter back together. The thickness of the card you need will depend on the specific adapter you have, but in general I have found card that is about the same thickness as a typical business card or cereal packet to work well. I use a scalpel to cut around the smaller part of the adapter. Note that you will also need to cut a small slot in the card ring to allow for the locking tab. Also note that when you look at the face of the larger half of the adapter you will see a small lip or ridge that the smaller part sits in. Your spacer needs to fit just inside this lip/ridge.

 

The card spacer in place prior to reassembly. Needs a little tidy up at this stage!

With the spacer in place, offer up the two halves of the adapter. Then use a fine scalpel to "drill" out the screw holes in the card (a fine drill bit would also work) and screw the adapter back together. Don't forget to put the locking tab back in place before you screw the two halves together.

 

 

 

Gently widen the narrow slit between these parts to make the adapter a tight fit on the lens.

Before putting the adapter on the lens, use a very fine blade screwdriver to gently prise apart the lens locating tabs indicated in the picture. This will ensure the adapter is a nice tight fit on the lens. Finally attach the adapter to the lens and then to your Canon mount and check that you can still reach infinity focus. It might be right at the end of the lens's focus travel, but hopefully it will line up with the infinity mark on the lens. If you can't reach infinity focus then your shim is too thick. If infinity focus is reached short of the infinity mark then your shim is not thick enough. It's worth getting this right, especially on zoom lenses, as you'll get much better focus tracking from infinity to close up. Make up one adapter for each lens and keep the adapters on the lenses. You'll also need some Canon end caps to protect your now Canon mount lenses.

How To Become A Better Camera Operator.

….Learn to edit and grade properly!

OK, so if you already shoot and edit then you'll know this already. But I'm surprised at how many shooters there are that have no idea how to edit, or have never studied the editing and post production process. Very often I'll hear comments from people like "I shoot this way because it's the easiest" or "they can sort it out in post", with clearly little understanding of exactly what the implications for the poor post people are.

Modern post production workflows can be very powerful, with the ability to perform many corrections and adjustments, but very often these adjustments only work well when the footage was shot with them in mind. Just because you can adjust one type of shot in a particular way doesn't mean you can do that to any type of shot. You should shoot in a way that is sympathetic to the post production process that has been chosen for the project.

The other thing that learning to edit brings is an understanding of how a programme or film flows. It teaches the camera operator the kinds of shots that are needed to support the main part of an interview or drama scene, those all important cut-aways that help a scene flow properly. It's not just a case of shooting a bunch of random shots of the location, but thinking about how those shots will interact with the main shots when everything is edited together. If you're shooting drama it is a huge help if you can visualise how cuts between different shots of different characters, scenes or locations will work, and how framing and things like camera height can be used to change the tension or intimacy in a scene. I think one of the best ways to learn these things is by learning how to edit, and I don't mean just pressing the buttons or randomly dropping stuff into a sequence of clips. Learn how to pace a sequence and how to make a scene flow; understanding these things makes you a better shooter.

But don't stop at the edit. Follow the post production process through to its end. Learn how to grade with a proper grading tool, not just the colour corrections in the edit suite (although it's useful to know their limitations), but things like power windows and secondaries. You don't have to become a colourist, but by understanding the principles and limitations you will be able to adapt the way you shoot to fit within those limits. It is a good idea to find a friendly colourist who will let you sit in on a session and explain how and why he/she is doing the things being done. There is always the Lite version of Resolve, which you can download and use for free. If you don't have anything to edit or grade then go out and shoot something. Find a topic and make a short film about it. Try to include people, interviews or drama. Maybe get in touch with a local drama group and offer to shoot a performance.

Whatever you do, get out there and learn to edit, learn to grade, and then experiment and practice. Try a workflow where you create the finished look in camera, then try shooting a similar project very flat or with log/raw and take that through the post production process and compare the end results. I think the best shooters are normally also competent editors. By fully understanding the post process you will keep the editor and colourist happy, and if you make their lives easy the director and producers will see this when they sit in on the edit or grade, and you'll be more likely to get more work from them in the future.

One final thing. Even if you think you know it all, even if you do know it all, you should still speak to the post production people before you shoot whenever possible, to make sure everyone is clear about how they want you to deliver your rushes.

Raw is not log (but it might be), log is not raw. They are very different things.

Having just finished 3 workshops at Cinegear and a full day F5/F55 workshop at AbelCine, one thing became apparent: there is a lot of confusion over raw and log recording. I overheard many people talking about shooting raw using S-Log2/S-Log3, or simply interchanging raw and log as though they are the same thing.

Raw and Log are completely different things!

Generally what is being talked about is either raw recording, or recording using a log format such as S-Log2/S-Log3 as component or RGB full colour video. Raw simply records the raw image data coming off the video sensor; it's not even a colour picture as we know it. It is just the brightness information each pixel on the sensor captures, with each pixel sitting beneath a colour filter. It is an image bitmap, but to be able to see a full colour image it needs further extensive processing. This processing is normally done in post production, is called "de-bayering" or "de-mosaicing", and is a necessary step to make the raw useable.

S-Log, S-Log2/3, LogC or C-Log is a signal created by taking the same sensor output as above, processing it into an RGB or YCbCr signal by de-mosaicing in camera, and then applying a log gamma curve. It is conventional video, but instead of using a "normal" gamma curve such as Rec-709 it uses an alternative gamma, and just like any other conventional video format it is full colour. S-Log and other log gammas can be recorded using a compressed codec or uncompressed, but even when uncompressed it is still not raw; it is component or RGB video.

So why the confusion?

Well, if you tried to view the raw signal from a camera shooting raw in the viewfinder without processing it, it would not be a colour image and it would have a very strange brightness range; it would be impossible to use for framing and exposure. To get around this, a raw camera converts the raw sensor output to conventional video for monitoring. Sony’s cameras convert the raw to S-Log2/3 for monitoring, as only S-Log2/3 can show the camera’s full dynamic range. At the same time the camera may be able to record this S-Log2/3 signal to the internal recording media. But the raw recorded by the camera on the AXS cards or an external recorder is still just raw, nothing else.

UPDATE: Correction/Clarification. There is room for more confusion, as I have been reminded that ArriRaw as well as the latest versions of ProRes RAW use log encoding to compact the raw data and record it more efficiently. It is also likely that Sony’s raw uses data reduction for the higher stops via floating point maths or similar (as Sony’s raw is ACES compliant it possibly uses data rounding for the higher stops).

ArriRaw uses log encoding of the raw data to minimise wasted data and to squeeze a large dynamic range into just 12 bits, but it is still un-debayered sensor data; it has not been encoded into RGB or YCbCr. To become a useable colour image it will still need to be de-bayered in post production. Sony’s S-Log and S-Log2/3, Arri’s LogC, Canon’s C-Log and Cineon, on the other hand, are all encoded and processed RGB or YCbCr video.
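
Here is a toy example of why log encoding raw data is so much more efficient than straight linear quantisation into 12 bits. The curve below is just an even-codes-per-stop illustration, not Arri’s actual ArriRaw encoding.

```python
import numpy as np

def encode_12bit_linear(x, max_linear=4096.0):
    """Straight linear quantisation: half of all 4096 codes end up describing
    only the single brightest stop of the captured range."""
    return np.clip(np.round(np.asarray(x) / max_linear * 4095), 0, 4095).astype(int)

def encode_12bit_log(x, stops=12, black=1.0):
    """Illustrative log quantisation: every stop gets an equal share of the codes.
    NOT Arri's actual ArriRaw curve, just the general idea."""
    x = np.maximum(np.asarray(x, dtype=float), black)
    return np.clip(np.round(np.log2(x / black) / stops * 4095), 0, 4095).astype(int)

# Codes available to the darkest captured stop (1-2) and the brightest (2048-4096):
for lo, hi in [(1, 2), (2048, 4096)]:
    lin = encode_12bit_linear([lo, hi])
    log = encode_12bit_log([lo, hi])
    print(f"stop {lo}->{hi}: {lin[1] - lin[0]} linear codes, {log[1] - log[0]} log codes")
```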

How big a compromise is using a DSLR zoom on a 4K camera?

This came up as a question in response to the post about my prototype lens adapter. The adapter is based around an electronic Canon EF mount and the question was, what do I think about DSLR zooms?

There is a lot of variation between lenses when it comes to sharpness, contrast and distortion, and a zoom will always be a compromise compared to a prime lens. But DSLR lenses are designed to work with 24MP sensors, while a 4K camera only has around 9MP (4096 x 2160 is roughly 8.8 million photosites), so you are working well within the design limits of the lens even at 4K. A dedicated PL mount zoom like an Angenieux Optimo will most likely out-perform a similar DSLR zoom, but the difference at like-for-like apertures will not be huge when using smaller zoom ratios (say 4x). 10x and 14x zooms make more compromises in image quality, perhaps a bit of corner softness or more CA, and these imperfections will be better or worse at different focal lengths and apertures. At the end of the day zooms are compromises, but for many shoots it may simply be that it is only by accepting some small compromises that you will get the shots you want. Take my storm chasing shoots. I could use primes and get better image quality, but when you only have 90 seconds to get a shot there simply isn’t time to swap lenses, so if you end up with a wide on the camera when a long lens is what is really needed, you’re just not going to get the shot. Using a zoom means I will get the shot. It might not be the very best quality possible, but it will look good. It is going to be better than I could get with an HD camera, and a very slightly compromised shot is better than no shot at all.
If the budget would allow I would have a couple of cameras with different prime lenses ready to go. Or I would use a big, heavy and expensive PL zoom and have an assistant or team tasked solely with getting the tripod set up and ready asap. But my budget isn’t that big. I could spend weeks out storm chasing before I get a decent shot, so anything I can do to minimise costs is important.
It’s all about checks and balances. It is a compromise, but a necessary one, and it’s not a huge one: I suspect the end viewer is not going to look at the shot and ask “why is that so soft?” unless they have a side-by-side, like-for-like shot to compare it with. DSLR zooms are not that bad! So yes, a DSLR zoom is not going to match the quality of a similar dedicated PL zoom in most cases, but the difference is likely to be so small that the end viewer will never notice, and that’s a compromise I’m prepared to accept in order to get a portable camera that shoots 4K with a 14x zoom lens.

What about DSLR primes and why have I chosen the Canon Mount?

This is where the image performance gap gets even narrower. A high quality DSLR prime can perform just as well as many much more expensive PL mount lenses. The difference here is more about the usability of the lens. Some DSLR lenses can be tiny, which makes them fiddly to use. They come in all sorts of sizes, so swapping lenses may mean swapping matte boxes or follow focus positions. Talking of focus, very often the focus travel on a DSLR lens is very, very short, so focussing is fiddly. If the lens has an aperture ring it will probably have click stops, making smooth aperture changes mid-shot difficult. My prime lenses are de-clicked or never had clicks in the first place (like the Samyang cine primes). It’s not so much that I need a finer step than the one-stop click, but more the ability to pull aperture during the shot. It’s not something I need to do often, but if I suddenly find I need to do it, I want a smooth aperture change. That said, one of the issues with using Canon EF lenses with their electronic iris is that they operate in 1/8th stop steps, and this stepping is visible in the footage.

Ultimately I am still committed to using Canon mount lenses simply because there are so many to choose from and they focus in the right direction, unlike Nikon lenses which focus back to front. For primes I’m using the excellent and fully manual Samyang T1.5 cine primes. I really like these lenses and they produce beautiful images at a fraction of the price of a PL mount lens. My zoom selection is a bit of a mish-mash. One advantage of having a Canon mount on the camera is that I can still use Nikon lenses by fitting the lens with a low cost Nikon to Canon adapter ring. If you do this you can only use lenses with an actual iris ring, so generally these are slightly older lenses, but for example I have a nice Sigma 24-70mm f2.8 with a manual iris ring (and it focusses the RIGHT way, like most Sigmas but unlike most Nikon mount lenses). In addition I have a 70-300mm f4 Nikon mount Sigma as well as an old Tokina 28-70mm f2.6 (lovely lens, a little soft but with a very nice warm colour). One thing I have found is that most of the Nikon to Canon adapter rings are a little on the thin side. This prevents zooms from being parfocal as it puts the back focus out. Most of the adapters are made in two parts, and it’s quite easy to separate the front and back halves and add shims made of thin plastic sheet or even card between them to correct the back focus distance.

So there you have it. Overall, DSLR lenses are not a huge compromise. Of course I would love to own a flight case full of good quality, 4K-ready PL mount glass. Perhaps one day I will, but it’s a serious investment. Currently I use DSLR lenses for my own projects and hire in better glass where the budget allows. For any commercials or features this normally means renting a set of Ultra Primes or similar. I am keeping a close eye on the developments from Zunow. I like their 16-28mm f2.8, and the prototype PL primes I saw at NAB look very good. I also like the look of the Zeiss 15.5-45mm lightweight zoom. Then of course there is the excellent Fujinon 19-90mm Cabrio servo zoom, but these are all big bucks. Hopefully I’ll get some nice big projects to work on this year that will allow me to invest in some top end lenses.

Choosing the right gamma curve.

One of the most common questions I get asked is “which gamma curve should I use?”.

Well, it’s not an easy one to answer because it depends on many things. There is no one-size-fits-all gamma curve. Different gamma curves offer different contrast and dynamic ranges.

So why not just use the gamma curve with the greatest dynamic range, maybe log? Log and S-Log are also gamma curves, but even if you have Log or S-Log it’s not always going to be the best gamma to use. The problem is this: you have a limited size recording bucket into which you must fit all your data. Your data bucket, the codec or recording medium, will also affect your gamma choice.

If you’re shooting and recording with an 8 bit camera, anything that uses AVCHD or MPEG-2 (including XDCAM), then you have roughly 235 usable code values (digital levels) to record your signal. A 10 bit camera or 10 bit external recorder does a bit better with around 940 code values, but even so, it’s a limited size data bucket. The more dynamic range you try to record, the less data you will be using to record each stop. Take an 8 bit camera for example: try to record 8 stops and that’s about 30 code values per stop; try to extend that dynamic range out to 11 stops and now you only have about 21 code values per stop. It’s not quite as simple as this, as the more advanced gamma curves like hypergammas, cinegammas and S-Log all allocate more data to the mid range and less to the highlights, but the greater the dynamic range you try to capture, the less recorded information there will be for each stop.
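
To put some rough numbers on that data bucket, here is a quick back-of-envelope calculation. It assumes, as the figures above do, that the code values are shared out evenly between the stops; real hypergamma, cinegamma and log curves deliberately skew the split towards the mid range.

```python
# Ballpark code-values-per-stop, assuming an even split between the stops.
# Real hypergamma/cinegamma/log curves give the mid range more and the
# highlights less, so treat these as rough guides only.
def values_per_stop(code_values: int, stops: int) -> float:
    return code_values / stops

for label, code_values in (("8 bit (~235 values)", 235), ("10 bit (~940 values)", 940)):
    for stops in (7, 8, 11, 14):
        print(f"{label}, {stops} stops: ~{values_per_stop(code_values, stops):.0f} per stop")
```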

In a perfect world you would choose the gamma to match each scene you shoot. If you’re shooting in a studio where you can control the lighting, it makes a lot of sense to use a standard gamma (no knee, or knee off) with a range of up to 7 stops and then light your scene to suit. That way you are maximising the data per stop. Not only will this look good straight out of the camera, it will also grade well provided you’re not over exposed.

However, the real world is not always contained in a 7 stop range, so you often need to use a gamma with a greater dynamic range. If you’re going direct to air or will not be grading, the first consideration will be a standard gamma (Rec-709 for HD) with a knee. The knee adds compression to just the highlights and extends the over-exposure range by up to 2 or 3 stops, depending on the dynamic range of the camera. The problem with the knee is that because it’s either on or off, compressed or not compressed, it can look quite electronic, and it’s one of the dead giveaways of video over film.
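
As a rough illustration, here is what a hard knee does to the signal. The 85% knee point and 0.15 slope are made-up numbers for illustration, not any particular camera’s defaults.

```python
import numpy as np

def apply_knee(signal, knee_point=0.85, knee_slope=0.15):
    """Illustrative hard knee: below the knee point the signal is untouched,
    above it the highlights are squeezed by a fixed slope."""
    signal = np.asarray(signal, dtype=float)
    return np.where(signal <= knee_point,
                    signal,
                    knee_point + (signal - knee_point) * knee_slope)

# Everything below 85% passes straight through; a highlight that would have hit
# 200% is folded back to just over 100% - hence the abrupt, "electronic" look.
print(apply_knee([0.5, 0.85, 1.0, 2.0]))
```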

If you don’t like the look of the knee yet still need greater dynamic range, there are the various extended range gammas like Cinegamma, Hypergamma or Cinestyle. These extend the dynamic range by compressing highlights, but unlike the knee, the compression starts gradually and gets progressively stronger. This tends to look more film-like than the on/off knee as it rolls off the highlights much more gently. But to get this gentle roll-off the compression starts lower in the exposure range, so you have to be very careful not to over expose your mid range, as this can push faces and skin tones into the compressed part of the curve and things won’t look good. Another consideration is that as you are now moving away from the gamma used for display by most TVs and monitors, the pictures will look a little flat, so a slight grade often helps with these extended gammas. A sketch of this kind of progressive roll-off, for comparison with the hard knee above, is shown below.
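
This sketch uses made-up constants, not a real Cinegamma or Hypergamma curve, but it shows the key difference: compression begins lower and builds gradually rather than switching on at a fixed point.

```python
import numpy as np

def apply_soft_rolloff(signal, start=0.6, headroom=0.4):
    """Illustrative progressive highlight roll-off: compression begins gently at
    'start' and gets stronger the brighter the input, approaching but never
    exceeding start + headroom."""
    signal = np.asarray(signal, dtype=float)
    over = np.maximum(signal - start, 0.0)
    return np.where(signal <= start,
                    signal,
                    start + headroom * (1.0 - np.exp(-over / headroom)))

# Compression starts lower than the hard knee (60% here), so mid-tones placed
# too high get squeezed, but very bright highlights roll off smoothly.
print(apply_soft_rolloff([0.5, 0.7, 1.0, 2.0]))
```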

Finally we come to log gammas like S-Log, C-Log etc. These are a long way from display gamma, so they will need to be graded to look right. In addition they apply a lot of compression (log compression) to the image, so exposure becomes super critical. Normally you’ll find the specified recording levels for middle grey and white are much lower with log gammas than with conventional gammas; white with S-Log, for example, should only be exposed at 68%. The reason for this is the extreme amount of mid-to-highlight compression, so your mid range needs to be recorded lower to keep it out of the heavily compressed part of the log curve. Skin tones with log are often in the 40 – 50% range, compared to the 60 – 70% range commonly used with standard gammas. Log curves do normally provide the very best dynamic range (apart from raw), but they will need grading, and ideally you want to grade log footage in a dedicated grading package that supports log corrections. If you grade log in your edit suite using linear (normal gamma) effects, your end results won’t be as good as they could be. The other thing with log is that you are now recording anything up to 13 or 14 stops of dynamic range. With an 8 bit codec that’s only 17 – 18 code values per stop, which really isn’t a lot, so for log you really want to be recording with a very high quality 10 bit codec and possibly an external recorder. Remember, with a standard gamma you have over 30 code values per stop; now we’re looking at almost half that with log!

Shooting flat: There is a lot of talk about shooting flat. Some of this comes from people who have seen high dynamic range images from cameras with S-Log or similar, which do look very flat. The bigger the captured dynamic range, the flatter the images will look. Consider this: on a TV, with a camera with a 6 stop range, the brightest thing the camera can capture will appear as white and the darkest as black, with 5 stops between white and black. Now shoot the same scene with a camera with a 12 stop range and show it on the same TV. Again the brightest is white and black is black, but the original 6 stops that the first camera captured are now shown using only half of the available brightness range of the TV, because the new camera is capturing 12 stops in total, so those first 6 stops get only half the maximum display contrast. The pictures look flatter. If a camera truly has greater dynamic range then in general you will get a flatter looking image, but it’s also possible to get a flat looking picture simply by raising the black level or reducing the white level. In that case the picture looks flat but in reality has no more dynamic range than the original. Be very careful of modified gammas said to give a flat look and greater dynamic range from cameras that otherwise don’t have great DR. Often these flat gammas don’t increase the true dynamic range; they just make a flat picture with raised blacks, which means less data is assigned to the mid range and the finished images are less pleasing.
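
A quick sum makes the point, assuming, purely for illustration, that the display’s brightness range is shared out evenly between the captured stops.

```python
# How much of the display's brightness range each captured stop gets, assuming
# (purely for illustration) an even share of the display range per stop.
def display_share_per_stop(captured_stops: int, display_range: float = 100.0) -> float:
    return display_range / captured_stops

six_stop = display_share_per_stop(6)       # ~16.7% of the display range per stop
twelve_stop = display_share_per_stop(12)   # ~8.3% per stop

# The same 6-stop scene fills 100% of the display from the 6-stop camera,
# but only about 50% from the 12-stop camera - hence the flatter look.
print(6 * six_stop, 6 * twelve_stop)
```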

So the key points to consider are:

Where you can control your lighting, consider using standard gamma.

The bigger the dynamic range you try to capture, the less information per stop you will be recording.

The further you deviate from standard gamma, the more likely the need to grade the footage.

The bigger the dynamic range, the more compressed the gamma curve, the more critical accurate mid range exposure becomes.

Flat isn’t always better.

The practicalities of fast run and gun shooting with a large sensor camera.

Well, I’ve just returned home from NAB and a week of tornado chasing in the USA. For the tornado chasing I was shooting in 4K using my Sony F5. I’ve shot run and gun with my F3 and FS700 in the past when shooting air shows and similar events, but this was very different. Tornado chasing is potentially dangerous. You often only have seconds to grab a shot, which involves leaping out of a car, quickly setting up a tripod and camera and then framing and exposing the shot. You often only have time for one 30 second shot before you have to jump back into the car and move on out ahead of the storm. All of this may be happening in very strong winds and rain; the storms I chased last week had inflow winds rushing into them at 50+ MPH.

The key to shooting anything fast moving like this is having whatever camera kit you’re using well configured. You need to be able to find the crucial controls for exposure and focus quickly and easily. You need a way of measuring and judging exposure and focus accurately. And you need a zoom lens that will let you get the kinds of shots you need; there’s no time to swap lenses!

For my storm chasing shoot I used the Sony F5 with the R5 recorder. This was fitted with a Micron bridge plate as well as a Micron top cheese plate and “Manhandle”. Instead of the Sony viewfinder I used an Alphatron viewfinder, as this has a waveform display for exposure. My general purpose lens was a Sigma 18-200mm f3.5-f6.5 stabilised lens with a Canon mount, with an MTF Effect iris control box to control the iris. For weather protection I used a CamRade F5/F55 Wetsuit. The tripod for this shoot was a Miller 15 head on a set of Carbon Fibre Solo legs.

Storm chasing with a PMW-F5

Overall I was pleased with the way this setup worked. The F5’s ergonomics really help, as the logical layout makes it simple to use. The 18-200mm lens is OK. I wish it was faster for shooting in low light, but for the daytime and dusk shots f3.5 (at the wide end) is fine; the F5 is so sensitive that it copes well even with this slow lens. The CamRade wetsuit is excellent: plenty of clear windows so you can see the camera controls, and a well tailored yet loose fit that allows easy access to them. I’ve used Miller Solo legs before, and when you need portability they can’t be beaten. They are not quite as stable as twin-tube legged tripods, but for this role they are an excellent fit. The Miller 15 head was also just right, not too big and bulky, not too small, and the fluid motion of the head is really smooth.

Storm Chasing in the USA with the PMW-F5

So what didn’t work? Well, I used the Element Technica Micron bridge plate. I really like the Micron bridge plate as it allows you to re-balance the camera on the tripod very quickly, but it’s not really designed for quick release; it’s a little tricky to line up the bridge plate with the dovetail, so I ended up removing and re-fitting the camera via the tripod plate, which again is not ideal. The Micron bridge plate is not really designed for this type of application. When I go back storm chasing in May I’ll be using a baseplate that locks into a VCT-14 quick release plate. I’m not sure which one yet, so I have some investigating to do. The VCT-14 is not nearly as stable or as solid as the Micron, but for this application speed is of the essence and I’m prepared to sacrifice a little bit of stability. The Micron bridge plate is better suited to film style shooting, and in that role it is fantastic; it’s just not the right tool for this job.

Rainbow under a severe thunderstorm.

The MTF Effect unit is needed to control the aperture of the Canon mount lens, and it also powers the optical image stabiliser. But it’s a large square box, and I had it mounted on top of the camera, which is not the best place. I need to look at where to mount the box. I’m actually considering re-housing the unit in a custom made hand grip so I can use it to hold the camera with my left hand and have iris control via a thumbwheel. I also want to power it from one of the camera’s auxiliary outputs rather than from the internal AA batteries. The other option is the more expensive Optitek lens mount, which I’m hoping to try out soon.

I’m also getting a different lens. The Sigma was fine, but I’m going to get a Sigma 18-250mm (15x) f3.5-f6.5 for a bit more telephoto reach. The other option I could have used is my MTF B4 adapter and a 2/3″ broadcast zoom, but for 4K the Sigma will have better resolution than an HD lens; if I was just shooting HD then the broadcast lens would probably be the best option. After dark I swapped to my Sigma 24-70mm f2.8 for general purpose shooting, and this worked well in low light but with the loss of telephoto reach. I need to look into a fast long lens, but these tend to be expensive. If you have deep enough pockets the lens to get would probably be the Fujinon Cabrio 19-90 T2.9, but sadly at the moment my budget is blown and my pockets are just not that deep. The Cabrio is very similar to an ENG broadcast lens in that it has a servo zoom, but it’s PL mount and very high resolution. Another lens option would be the Canon CN-E30-105mm T2.8, but overall there isn’t a great deal of choice when it comes to getting a big zoom range and a large aperture at the same time in a hand-held package. If I was working with a full crew then I would consider using a much larger lens like the Arri Alura 18-80 or Angenieux Optimo 24-290, but then this is no longer what I would consider run and gun and would require an assistant to set up the tripod while I bring out the camera.

A Supercell thunderstorm looking like a flying saucer.

From an operating point of view, one thing I had to do was keep reminding myself to double check focus. If you think focus is critical in HD, it’s super critical for 4K. Thunderstorms are horrid things to try to focus on as they are low contrast and soft looking. I had to use a lot of peaking as well as the 1:1 pixel function of the Alphatron viewfinder; one of the neat things about the Alphatron is that peaking continues to work even in the 1:1 zoom mode. As I was shooting raw and using the camera’s Cine EI mode to make exposure simpler, I turned on the look-up tables on the HDSDI outputs and used the P1 LUT. I then exposed using the waveform monitor, keeping my highlights (for example the brighter clouds) at or below 100%. On checking the raw footage this looks to have worked well. Quite a few shots needed grading down by 1 to 1.5 stops, but this is not an issue as there is so much dynamic range that the highlights are still fine, and you get a cleaner, less noisy image. When shooting raw with the F5 and F55 cameras I’d rather grade down than up. These cameras behave much more like film cameras due to the massive dynamic range and raw recording, so a little bit of overexposure doesn’t hurt the images the way it would when shooting with standard gammas or even log. Grading down (bringing levels down) results in lower noise and a cleaner image.

Frame grab from the F5 of a Supercell storm with a grey funnel cloud beneath.

So you can run and gun in an intense, fast moving environment with a large sensor camera. It’s not as easy as with a 2/3″ or 1/2″ camera; you have to take a little more time double checking your focus. The F5 is so sensitive that using an f3.5-f6.5 lens is not a huge problem. A typical 1/2″ camera (EX1, PMW-200) is rated at about 300 ISO and has an f1.8 lens. The F5 in Cine EI mode is 2000 ISO, almost 3 stops more sensitive. So when you put an f3.5 lens on, the F5 ends up performing better in low light, and even at f6.5 it’s only effectively about one stop less sensitive. For this kind of subject matter you don’t want to be at f1.8 – f2.8 with a Super 35mm sensor anyway, as the storm scenes and shots involved work better with a deep depth of field rather than a shallow one.
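
As a rough sanity check of those figures, the sketch below treats ISO and f-stop differences as simple log2 ratios (ignoring T-stop losses and other real-world factors).

```python
from math import log2

# Back-of-envelope check of the sensitivity comparison above.
iso_gain = log2(2000 / 300)  # ~2.7 stops more sensitive than a 300 ISO camera

def lens_penalty(f_number, reference=1.8):
    """Stops of light lost relative to an f1.8 lens."""
    return log2((f_number / reference) ** 2)

for f in (3.5, 6.5):
    net = iso_gain - lens_penalty(f)
    print(f"f{f}: net {net:+.1f} stops versus a 300 ISO camera with an f1.8 lens")
```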

Having watched the footage from the shoot back in HD on a large screen monitor, I am delighted with the quality. Even in HD it has better clarity than any of my previous storm footage, which I believe is down to the use of a 4K sensor and the very low noise levels. I’d love to see the 4K material on a 4K monitor; it certainly looks good on my Mac’s Retina display. Hopefully I’ll get back out on the plains and prairies of Tornado Alley later in May for some more storm chasing. Anyone want to join me?