Banding in your footage: What causes it, and is it even there?

Once again it’s time to put pen to paper (or fingers to keyboard), as this is a subject that just keeps coming up.

People really seem to have a lot of problems with banding in footage, and I don’t fully understand why, as it’s something I only ever encounter when I’m pushing a piece of material really, really hard in post production. Generally, the vast majority of the content I shoot does not exhibit problematic banding – even the footage I shoot with 8 bit cameras.

First things first – don’t blame it on the bits. Even an 8 bit recording (from a good quality camera) shouldn’t exhibit noticeable banding. An 8 bit recording can contain over ten million tonal values (220 legal code values per channel, across three channels). It’s extremely rare for us to shoot luma only, but even if you do, you will still have 220 shades, and in standard dynamic range these steps are so small that most people cannot discern them, so you shouldn’t ever be able to see them. I think that when most people see banding they are not seeing these teeny, tiny, almost invisible steps; what most people see is something much more noticeable – so where is it coming from?
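To put rough numbers on that, here’s a quick back-of-the-envelope sketch (assuming standard 8 bit legal-range video, code values 16–235; the variable names are just for illustration):

```python
# 8 bit "legal range" video uses code values 16..235 per channel.
luma_shades = 235 - 16 + 1        # 220 distinct shades of grey
total_tones = luma_shades ** 3    # three channels combined

print(luma_shades)   # 220
print(total_tones)   # 10648000 - over ten million tonal values
```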

It’s worth considering at this stage that most TVs, monitors and computer screens are only 8 bit, sometimes less! So if you are looking at one camera and it’s banding-free, and then you look at another and you see banding, in both cases you are probably looking at an 8 bit image. It can’t just be the capture bit depth that’s causing the problem, as you can’t see 10 bit steps on an 8 bit monitor.

So what could it be?

A very common cause of banding is compression. DCT based codecs such as JPEG, MJPEG, H264 etc break the image up into small blocks of pixels called macro blocks. All the pixels in each block are then processed in a similar manner, and as a result there may sometimes be a small step between each block, or between groups of blocks, across a gradient. This can show up as banding. We often see this with 8 bit codecs because 8 bit codecs typically use older technology or are more highly compressed – it’s not because there are not enough code values. Decreasing the compression ratio will normally eliminate the stepping.
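To illustrate the mechanism, here’s a minimal sketch. It is not a real DCT codec – it just coarsely quantises a smooth gradient, which produces the same kind of stepping that heavy compression can create across macro blocks:

```python
# A toy model of compression stepping (NOT a real DCT codec): coarse
# quantisation of a smooth ramp creates wide, flat bands.
def quantise(value, step):
    """Round a value to the nearest multiple of 'step'."""
    return round(value / step) * step

gradient = [i * 2.55 for i in range(101)]        # smooth 0..255 ramp

light = {quantise(v, 1) for v in gradient}       # light quantisation
heavy = {quantise(v, 16) for v in gradient}      # heavy quantisation

print(len(light))   # 101 distinct levels - the ramp stays smooth
print(len(heavy))   # 17 wide bands - visible stepping
```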

Scaling between bit depths or frame sizes is another very common cause of banding, so it’s absolutely vital that you ensure your monitoring system is up to scratch. It’s very common to see banding in video footage on a computer screen because video data levels are different to computer data levels, and there may also be some small gamma differences, so the image has to be scaled on the fly. In addition, the computer desktop runs at one data range and the HDMI output at another, so all kinds of conversions are taking place that can lead to all kinds of problems as you go from a video clip, to computer levels, to HDMI levels. See this article to fully understand how important it is to get your monitoring pipeline properly sorted: https://www.xdcam-user.com/2017/06/why-you-need-to-sort-out-your-post-production-monitoring/
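As a sketch of how a levels conversion alone can introduce steps, here’s a naive integer conversion from 8 bit legal range (16–235) to full range (0–255). This is an assumption about what a simple player might do on the fly, not the maths of any specific application:

```python
# Naive legal-range (16..235) to full-range (0..255) stretch with
# integer rounding - illustrative only, not any real player's maths.
def video_to_full(y):
    return round((y - 16) * 255 / 219)

out = [video_to_full(y) for y in range(16, 236)]   # all 220 legal codes
unique = sorted(set(out))
gaps = [b - a for a, b in zip(unique, unique[1:])]

print(len(unique))   # 220 output codes spread across 0..255...
print(max(gaps))     # 2 - some output values are skipped entirely
```

Because 220 input codes are stretched over 256 output slots, 36 output values are never used, so some neighbouring shades land two steps apart after conversion – exactly the kind of small step that can read as banding on a gradient even when the capture itself was clean.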

Look Up Tables (LUTs) can also introduce banding. LUTs were never really intended to be used as a quick-fix grade; the intention was to use them as an on-set reference or guide, not for the final output. The 3D LUTs that we typically use for grading break the full video range into bands, and each band applies a slightly different correction to the footage than the band above or below it. These bands can show up as steps in the LUT’s output, especially with the most common 17x17x17 3D LUTs. This problem gets even worse if you apply a LUT and then grade on top – a really bad practice.
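To see why a 17-point lattice loses information, here’s a hypothetical 1D illustration (a 3D LUT interpolates the same way along each axis; the square-root “grade” is just an example curve, not a real LUT):

```python
# Hypothetical 1D illustration of LUT lattice coarseness. A 3D LUT
# interpolates linearly between nodes in the same way along each axis.
import math

def lut_1d(nodes, x):
    """Apply a 1D LUT (node outputs, inputs evenly spaced over 0..1)
    using linear interpolation between the two nearest nodes."""
    pos = x * (len(nodes) - 1)
    i = min(int(pos), len(nodes) - 2)
    frac = pos - i
    return nodes[i] * (1 - frac) + nodes[i + 1] * frac

# An example non-linear "grade" sampled at only 17 points, like one
# axis of a 17x17x17 LUT:
nodes17 = [math.sqrt(i / 16) for i in range(17)]

x = 0.01                        # deep shadows, where the curve bends most
exact = math.sqrt(x)            # what the grade should give
approx = lut_1d(nodes17, x)     # what the 17-point LUT gives
print(exact, approx)
```

Inside each of the 16 segments the LUT can only draw a straight line, so any curvature within a segment is lost. Those per-band errors are the steps that can show up in the output, and grading on top of them only amplifies the problem.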

Noise reduction – in-camera or post production noise reduction will also often introduce banding. Very often pixel averaging is used to reduce noise: if you have a bunch of pixels that are jittering up and down, taking an average value for all those pixels will reduce the noise, but you can then end up with steps across a gradient as you jump from one average value to the next. If you shoot log it’s really important that you turn off any noise reduction (if you can) when you are shooting, because when you grade the footage these steps will get exaggerated. Raising the ISO (gain) in a camera also makes this much worse, as the camera’s built-in NR will be working harder, increasing the averaging to compensate for the increased noise.
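Here’s a rough sketch of the effect: noise acts as natural dither across quantisation steps, and averaging it away leaves clean, hard band edges. All the numbers are illustrative only, not modelled on any real camera:

```python
# Noise dithers quantisation steps; averaging it away exposes them.
import random

random.seed(1)

def quantise(v, step=8):
    return round(v / step) * step

ramp = [i / 10 for i in range(1000)]             # slow 0..100 gradient
noisy = [v + random.gauss(0, 4) for v in ramp]   # add sensor-ish noise

# Crude "noise reduction": average each pixel with its neighbours.
denoised = []
for i in range(len(noisy)):
    lo, hi = max(0, i - 32), min(len(noisy), i + 32)
    denoised.append(sum(noisy[lo:hi]) / (hi - lo))

def transitions(seq):
    """Count how often the quantised value changes between neighbours."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

print(transitions([quantise(v) for v in noisy]))     # hundreds: dithered
print(transitions([quantise(v) for v in denoised]))  # few: hard band edges
```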

Coming back to 8 bit codecs again – of course a similar quality 10 bit codec will normally give you more picture information than an 8 bit one, so if you can shoot 10 bit you might get a better end result. But we have been using 8 bits for decades, largely without any problems – so also consider all the other factors I’ve mentioned above.

 


12 thoughts on “Banding in your footage. What Causes It, is it even there?”

  1. I recently saw ‘compression as banding culprit’ syndrome in action. While shopping around for a new camera and preparing to sell some existing gear, I connected my A7S (Mark 1) to a Blackmagic Hyperdeck 2 Shuttle. I recorded a variety of images including a ColorChecker Passport, tungsten lamps and some local views of the street with a cloudless blue sky, capturing the 8 bit 422 HDMI output from the Sony to the uncompressed flavour of ProRes (not 422 HQ, but uncompressed). It became abundantly clear just how much damage the internal codec on the Sony was doing in the banding department – and let’s be honest, the brilliant internal codec on the Sony does an exceptional job on a cost/benefit analysis. Whatever minor banding was visible direct from the Sony XAVC codec (and it was minor) had vanished completely from the uncompressed recording. This was borne out when viewed directly from a Blackmagic card on a 10 bit Dreamcolor panel. The difference was also visible when frames were grabbed from both files and exported to DPX and PSD files from After Effects. Interestingly, when they were shown side by side my test audience couldn’t really spot the difference and it had to be pointed out. Oddly though, they all picked the unlabelled ‘uncompressed’ as ‘richer’ and ‘like looking through a window’. But none of them particularly cared about the difference. Unlike my hard drives. Which cared a lot.

  2. Alister, thanks as always for your very informative posts. One thing jumps out at me though: “This problem gets even worse if you apply a LUT and then grade on top – a really bad practice.” I see this grading workflow of applying a LUT and then tweaking the grade recommended all over the place. If this is a bad workflow, then what should the best practice grading workflow be? Using curves and avoiding LUTs and preset looks entirely? Just wondering what best practice in grading you advise. Thank you!

    1. I agree. I generally use a compensating LUT and a conversion LUT (or a combined compensating/conversion LUT) before any additional image manipulation. I don’t have any problems with banding, even with 8 bit footage.

      1. But by grading after the LUT you no longer have the full colour or dynamic range of the original material, so your adjustment range is greatly restricted and is applied on top of any limitations or artefacts coming from the LUT.

    2. Best practice is always to grade before the LUT is applied. Once a LUT is applied you are no longer grading the original footage; you are grading the LUT’s output, with all the restrictions and artefacts that imposes. You can add the LUT and then grade, but the grading must take place “under” the LUT, so that the maths and processing of the video is done before it’s passed to the LUT. In Resolve this can be achieved by applying the LUT as an Output LUT, as a global timeline LUT, or as the last node in any grade. In Premiere it’s harder, but it can be done by adding the LUT on an adjustment layer, then grading in the layer underneath. Generally, if you add the LUT and then grade, the grading takes place on the LUT’s output rather than its input.

      I’m not saying you can’t use LUTs, but you need to be really careful how you do. There are some really bad practices in use day-in, day-out that have become standard simply because someone saw it on the internet, found it was easy, and it quickly became standard practice. People don’t test, don’t learn, don’t ask “why” anymore. They just copy, and this leads to a lot of fundamental mistakes.

  3. Really helpful Alister!
    The process you describe above – does it apply both to look LUTs and to LUTs for bringing log footage into Rec709 space?
    If I’m not mistaken, in PPro video tracks in the timeline have priority from top to bottom… so your recommendation is to put the LUT on an adjustment layer in a video track ABOVE the footage?
    Thanks!

  4. Thanks for this very interesting article on banding. I am confused when you say grading on top of a LUT is bad practice. My understanding was that it’s acceptable to place a LUT prior to further adjustment in a workflow by, for instance, placing a conversion LUT in node 2 in Resolve and then grading further in node 3, as Marc Weilage, paraphrasing Patrick Inhofer, suggests (https://forums.creativecow.net/thread/277/40569). Or do you mean layered adjustments?

    Thanks

    Sam

    1. Once you apply a LUT, if you grade the LUT’d footage you are grading material with a more restricted range. For example, if a LUT clips the highlights and you grade after the LUT, you won’t be able to restore those highlights, because the LUT has already clipped them. A node placed after the LUT won’t undo the LUT’s actions – it will assume the clipping was a desired effect that you applied. So you must always do your grading ahead of the LUT, where you have the full range of the original file.
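A toy numeric sketch of that (the clip point and grade values here are made up, not from any real LUT):

```python
# Toy illustration (not a real LUT): a LUT that clips highlights at 0.8,
# followed by a grade that pulls exposure down.
def clipping_lut(v):
    return min(v, 0.8)       # the LUT throws away everything above 80%

def pull_down(v):
    return v * 0.7           # a later corrective grade

bright, brighter = 0.90, 1.00

# Grading AFTER the LUT: both highlights were already clipped together.
after = [pull_down(clipping_lut(v)) for v in (bright, brighter)]

# Grading BEFORE the LUT: the grade sees the original values.
before = [clipping_lut(pull_down(v)) for v in (bright, brighter)]

print(after)    # two identical values - the highlight detail is gone
print(before)   # two distinct values - the detail survives
```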

  5. Using BRAW in Premiere, there is an option to “Enable LUT” at clip level. If you enable that and then tweak the raw settings, does the processing occur in the correct order?
