Should I shoot 8 bit UHD or 10 bit HD?

This question comes up time and time again, probably because the answer is rarely clear cut.

First let's look at exactly what the difference between an 8 bit and a 10 bit recording is.
Both will have the same dynamic range. Both will have the same contrast. Both will have the same colour range. One does not necessarily have more colour or contrast than the other. The only thing you can be sure of is the difference in the number of code values. An 8 bit video recording has a maximum of 235 code values per channel, giving around 13 million possible tonal values. A 10 bit recording has up to 970 code values per channel, giving up to 912 million tonal values.
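As a quick back of envelope check on those figures (a minimal sketch in Python; the per channel counts are simply the ones quoted above):

```python
# Tonal values = (code values per channel) cubed, one factor per channel.
print(235 ** 3)  # 12,977,875   -> roughly 13 million for 8 bit
print(970 ** 3)  # 912,673,000  -> roughly 912 million for 10 bit
```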
 
There is a lot of talk of 8 bit recordings resulting in banding because there are only 235 luma shades. This is a bit of a half truth. It is true that if you have a monochrome image there would only be 235 steps. But we are normally making colour images, so we are typically dealing with 13 million tonal values, not simply 235 luma shades. In addition, it is worth remembering that the bulk of our current video distribution and display technologies are 8 bit – 8 bit H.264, 8 bit screens and so on. There are more and more 10 bit codecs coming along, as well as more 10 bit screens, but the vast majority are still 8 bit.
Compression artefacts cause far more banding problems than too few steps in the recording codec. Most codecs use some form of noise reduction to reduce the amount of data that needs to be encoded, and this can result in banding. Many codecs also divide the image data into blocks, and the edges of these small blocks can lead to banding and stepping.
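A deliberately crude sketch of the block effect (this is not how any real codec works internally, and the 8 pixel block size is just an assumption for illustration, but it shows how discarding detail inside fixed blocks can turn a smooth ramp into visible steps):

```python
import numpy as np

# A smooth, shallow 8 bit luminance ramp across 32 pixels.
ramp = np.linspace(100, 104, 32)

# Imagine a codec that keeps only the average of each 8-pixel block,
# throwing away all of the detail inside the block.
block_means = ramp.reshape(-1, 8).mean(axis=1)
reconstructed = np.repeat(block_means, 8)

# Four flat 8-pixel-wide bands instead of a smooth gradient.
print(np.round(reconstructed).astype(int))
```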
 
Of course 10 bit can give you more shades. But then 4K gives you more shades too, so an 8 bit UHD recording can sometimes have more shades than a 10 bit HD recording. How is this possible? If you think about it, in UHD each object in the scene is sampled with twice as many pixels in each dimension. Imagine a gradient that spans 4 pixels in UHD. In UHD you will have 4 samples and 4 steps; in HD you will only have 2 samples and 2 steps, so the HD image might show a single big step while the UHD image may have 4 smaller steps. It all depends on how steep the gradient is and how it falls relative to the pixels. It then also depends on how you handle the footage in post production.
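Here is a small sketch of that sampling argument (the ramp values are made up purely for illustration):

```python
import numpy as np

# A shallow luminance ramp crossing a small patch of the scene,
# sampled with 4 pixels (UHD) versus 2 pixels (HD).
start, end = 0.500, 0.512  # normalised luminance at each side of the patch

uhd = np.round(np.linspace(start, end, 4) * 255).astype(int)
hd = np.round(np.linspace(start, end, 2) * 255).astype(int)

print("UHD 8 bit:", uhd)  # [128 129 130 131] -> four small steps
print("HD 8 bit:", hd)    # [128 131]         -> one bigger jump
```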
 
So it is not as clear cut as it is often made out to be. For some shots with lots of textures, 4K 8 bit might actually give you more data for grading than 10 bit HD. In other scenes 10 bit HD might be better.
 
Anyone who is getting “muddy” results in 4K compared to HD is doing something wrong. Going from 8 bit 4K to 10 bit HD should not change the image contrast, brightness or colour range. The images shouldn’t really look significantly different. Sure, the 10 bit HD recording might show some subtle tonal textures a little better, but then the 8 bit 4K might resolve more texture detail.
 
My experience is that both work and both have pros and cons. I started shooting 8 bit S-Log when the Sony PMW-F3 was introduced 7 years ago and have always been able to get great results, provided it is well exposed. 10 bit UHD would be preferable, I’m not suggesting otherwise (at least 10 GOOD bits are always preferable), but 8 bit works too.

6 thoughts on “Should I shoot 8 bit UHD or 10 bit HD?”

  1. What about data rates? Would you agree with this: HD has more colour depth and UHD more resolution, but the same frame gets a data rate of 50 Mb/s in one and 100 Mb/s in the other when recorded. So could 100 Mb/s be better?

    1. It will depend on what’s in the scene. UHD has 4x more pixels than HD, so you would think it might need 4x more data – which, if uncompressed, it would. However it’s easier to compress 8 bits than 10 bits. In addition, as you increase the resolution there is not always 4x more information in the image, because the real world scenes that we shoot have a limited number of shades and textures, so codecs tend to work more efficiently with higher resolution images. In most cases 50Mb/s HD will be comparable to 100Mb/s UHD in terms of added artefacts etc. BUT it really depends on what’s in the shot. A very finely textured, high detail scene may lead to more artefacts in the 100Mb/s UHD than in the 50Mb/s HD. Motion also plays a part in this, so it is difficult to give a clear answer.
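    To put rough numbers on the pixel budget (a back of envelope sketch assuming 25 frames per second and ignoring audio, GOP structure and codec overhead):

```python
# Bits available per pixel per frame, assuming 25p.
fps = 25
hd_bpp = (50e6 / fps) / (1920 * 1080)    # ~0.96 bits per pixel
uhd_bpp = (100e6 / fps) / (3840 * 2160)  # ~0.48 bits per pixel

# 100Mb/s UHD has roughly half the bits per pixel of 50Mb/s HD, so it
# leans on the extra redundancy found in high resolution images.
print(round(hd_bpp, 2), round(uhd_bpp, 2))
```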

  2. Hi Alister, I am confused by the 235 shades:
    In the case of 8 bit RGB we have 0-255, i.e. a full range of 256 values.
    In the case of YCbCr (digital YUV) we should have 16-235. This would leave a range of 219 possible shades of luminance. Did I get this wrong?

    Does Adobe Premiere expand YCbCr as 16-235 from black to white respectively, or does it expand from 0-235 to black and white?

    It would be great if you could shed some light on this.
    Kind regards, Andreas

  3. 16-235 is legal range (0-100%). Most modern cameras use full range, which is 16-255 (0-109%), and some log cameras use data range, which is 0-255, including Sony’s (which Adobe gets very wrong). Premiere is not consistent with its file handling: for example a ProRes log file will be handled as-is, but an XAVC file will not, which results in incorrect blacks.

    S-Log is a data range file, but S-Log3 uses CV95 (CV24) to CV890 (CV219) – 215 8 bit CVs.
    S-Log2 uses CV90 (CV22) to CV1000 (CV250) – 227 8 bit code values.
    (Values given above are approximate – but very, very close.)
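    A rough sketch of how those range conventions map onto output levels (the normalise helper below is hypothetical; the 109% figure matches the full range maths above):

```python
def normalise(cv, black, white):
    # Map an 8 bit code value onto 0.0-1.0 for a given range convention.
    return (cv - black) / (white - black)

# Legal range puts CV235 at 100%; full range pushes CV255 to ~109%.
print(normalise(235, 16, 235))  # 1.0   -> 100%
print(normalise(255, 16, 235))  # ~1.09 -> 109%
```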
