
XAVC-I or XAVC-L which to choose?

THE XAVC CODEC FAMILY

The XAVC family of codecs was introduced by Sony back in 2014. Until recently all flavours of XAVC were based on H264 compression, but more recently new XAVC-HS versions were introduced that use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. These have both been around for a while now and are well tried and well tested.

XAVC-I

XAVC-I is a very good intra frame codec where each frame is individually encoded. It's being used for Netflix shows, it has been used for broadcast TV for many years, and there are thousands and thousands of hours of great content that has been shot with XAVC-I without any issues. Most of the in-flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you make a lot of effort to find them. But it is a high compression codec, so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ and, as you can see, there is very little difference between the two, even after several encoding passes.

(Video: XAVC-I vs ProRes HQ multi-generation comparison.)

XAVC-L

XAVC-L is the long GoP version of XAVC-I. Long GoP (Group of Pictures) codecs fully encode a start frame and then, for the next group of frames (typically 12 or more), store only the differences between that start frame and the frames that follow, until the next full frame at the start of the next group. They record the changes between frames using tools such as motion prediction and motion vectors which, rather than recording new pixels, move existing pixels from the first fully encoded frame through the subsequent frames if there is movement in the shot. Do note that on the F5/F55, the FS5, FS7, FX6 and FX9, XAVC-L in UHD or 4K is 8 bit (while XAVC-I is 10 bit).
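
To illustrate the principle (this is a toy sketch of the idea, not Sony's actual encoder), a long GoP group can be thought of as one full keyframe followed by difference-only frames:

```python
import numpy as np

def encode_gop(frames):
    """Toy long GoP encode: keep the first frame whole (the I frame),
    then store only per-pixel differences for the rest of the group."""
    encoded = [("I", frames[0].copy())]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append(("P", cur - prev))      # difference frame only
    return encoded

def decode_gop(encoded):
    """Rebuild every frame by accumulating the differences onto the I frame."""
    frames = [encoded[0][1].copy()]
    for _, delta in encoded[1:]:
        frames.append(frames[-1] + delta)
    return frames

# A 12 frame group of a completely static scene: every difference frame is
# all zeros, which is why long GoP compresses so well when nothing moves.
frames = [np.full((4, 4), 118, dtype=np.int16) for _ in range(12)]
encoded = encode_gop(frames)
print(sum(np.count_nonzero(d) for _, d in encoded[1:]))   # 0
assert all(np.array_equal(a, b) for a, b in zip(frames, decode_gop(encoded)))
```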

Performance and Efficiency.

Long GoP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GoP is around 2.5x more efficient than the I frame version, and this is why the bit rate of XAVC-I is around 2.5x higher than that of XAVC-L, so that for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, then there will be situations where XAVC-L can perform better than XAVC-I.
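
As a rough worked example of that trade-off (the bit rates below are illustrative round numbers, not official figures for any particular camera or class):

```python
# Ballpark equivalence check, assuming the commonly quoted ~2.5x long GoP advantage.
GOP_EFFICIENCY = 2.5

xavc_l_mbps = 100              # illustrative UHD long GoP bit rate
xavc_i_mbps = 250              # illustrative UHD intra frame bit rate

print(xavc_i_mbps / xavc_l_mbps)       # 2.5 - the I frame stream needs ~2.5x the bits
print(xavc_l_mbps * GOP_EFFICIENCY)    # 250.0 - so on easy material both behave similarly
```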

Motion Artefacts.

BUT as soon as you add a lot of motion or a lot of extra noise (which looks like motion to a long GoP codec), long GoP codecs struggle, as they don't typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let's face it, the primary reason behind the use of long GoP encoding is to save space, and that's done by decreasing the bit rate. So generally long GoP codecs have much lower bit rates so that they actually provide those space savings. But that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GoP codec to process, as almost everything is different from frame to frame, including entirely new background information hidden behind the cars in one frame that becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water and flocks of birds are all very challenging and will often exhibit visible artefacts in a lower bit rate long GoP codec that you won't ever get in the higher bit rate I frame version.
Concatenation.

A further issue is concatenation. The artefacts that occur in long GoP codecs often move in the opposite direction to the object that’s actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters or anyone where there may be multiple compression passes using long GoP codecs such as H264 or H265.

Quality depends on the motion.

So, when things are just right and the scene suits XAVC-L it will perform well, and it might show marginally fewer artefacts than XAVC-I, but those artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. But when there is complex motion XAVC-L can produce visible artefacts. And it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts; your footage will be consistently good no matter what you shoot.

Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, and it can be beneficial if you do need to save card or disk space. But be aware of its limitations and avoid it if you are shooting handheld or shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, the FS5, FS7, FX6 and FX9, XAVC-L in UHD or 4K is 8 bit while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.

XAVC-I v ProRes HQ, multi-generation test.

I often hear people saying that XAVC-I isn’t good enough or that you MUST use ProRes or some other codec. My own experience is that XAVC-I is actually a really good codec and recording to ProRes only ever makes the very tiniest (if any) difference to the finished production.

I've been using XAVC-I for over 8 years and it has worked very well for me. I've also tested and compared it against ProRes many times and I know the differences are very small, so I am always confident that when using XAVC-I I will get a great result. But I decided to make this video to show just how close they are.

It was shot with a Sony FX6 using internal XAVC-I (Class 300) on an SD card alongside an external recording using ProRes HQ on a Shogun 7. I deliberately chose to use Cine EI and S-Log3 at the camera's high base ISO of 12,800, as noise stresses any codec that little bit harder, and adding a LUT adds another layer of complexity that might show up any issues, all just to make the test that little bit tougher. The slightly higher noise level of the high base ISO also lets you see more easily how each codec handles noise.

A sample clip of each codec was placed in the timeline (DaVinci Resolve) and a caption added. This was then rendered out, the ProRes HQ clip rendered using ProRes HQ and the XAVC-I files rendered to XAVC-I. So for most of the examples seen, the XAVC-I files have been copied and re-encoded 5 times, plus the encoding of the file uploaded to YouTube, plus YouTube's own encoding: a pretty tough test.

Because I don't believe many people will use XAVC-I in post production as an intermediate codec, I also repeated the tests with the XAVC-I rendered to ProRes HQ 5 times over, as this is probably more representative of a typical real world workflow. These examples are shown at the end of the video. Of course the YouTube compression will restrict your ability to see some of the differences between the two codecs. But this is how many people will be distributing their content, if not via YouTube then via other highly compressed means, so it's not an unfair test and it reflects many real world applications.

Where the s709 LUT has been added, it was added AFTER each further copy of the clip, so this is really a "worst case scenario". Overall the ProRes HQ and XAVC-I are remarkably similar in performance. In the 300% blow up you can see differences between the XAVC-I that is 6 generations old and the 6th generation ProRes HQ if you look very carefully at the noise. But the differences are very, very hard to spot, and going through 6 generations of XAVC-I is not realistic; it was designed as a camera codec. In the same test where the XAVC was rendered to ProRes HQ for each post production generation, any difference is incredibly hard to find even when magnified 300%. I am not claiming that XAVC-I Class 300 is as good as ProRes HQ. But I think it is worth considering what you need when shooting. Do you really want to have to use an external recorder? Do you really want to have to deal with files that are 3 to 4 times larger? Do you want to have to remember to switch recording methods between slow motion and normal speeds? For most productions I very much doubt that the end viewer would ever be able to tell the difference between material shot using XAVC-I Class 300 and ProRes HQ. And that audience certainly isn't going to feel they are watching a substandard image, and that's what counts.

There is so much emphasis placed on using “better” codecs that I think some people are starting to believe that XAVC-I is unusable or going to limit what they can do. This isn’t the case. It is a pretty good codec and frankly if you can’t get a great looking image when using XAVC then a better codec is unlikely to change that.

What are the benefits of ProRes Raw with the PXW-FS5?

There has been a lot of discussion recently, and a few videos posted, that perhaps give the impression that if you shoot with S-Log2 on an FS5 and compare it to raw shot on the FS5 there is very little difference.

Many of the points raised in the videos are correct. ProRes Raw won't give you any more dynamic range. It won't improve the camera's low light performance. There are features, such as automatic lens aberration correction, that are applied when shooting internally but not when shooting raw. Plus it's true that shooting ProRes Raw requires an external recorder that makes the diminutive little FS5 much more bulky.

So why in that case shoot ProRes Raw?

Frankly, if all you are doing is producing videos that will be compressed to within an inch of their life for YouTube, S-Log2 can do an excellent job when exposed well, it can be graded and can produce a nice image.

But if you are trying to produce the highest quality images possible then well shot ProRes raw will give you more data to work with in post production with fewer compression artefacts than the internal 8 bit UHD XAVC.

I was looking earlier today at some shots that I did in preparation for my recent webinar on ProRes Raw, and at first glance there isn't perhaps much difference between the UHD 8 bit XAVC S-Log2 files and the ProRes Raw files that were shot within seconds of each other. But look more closely and there are some important differences, especially if skin tones are important to you.

Skin tones sit half way between middle grey and white and typically span around 2 to 3 stops. So with S-Log2 and an 8 bit recording a face would span around 24 to 34 IRE and have somewhere between 24 and 35 code values. Yes, that's right, maybe as few as 24 shades in each of the R, G and B channels. If you apply a basic LUT to this and then view it on a typical 8 bit monitor it will probably look OK.

But compare that to a 12 bit linear raw recording and the same face, with 2 to 3 stops across it, will have anywhere up to 10 to 20 times as many code values (somewhere around 250 to 500 code values depending on exactly how it's exposed). Apply the same LUT as for the S-Log2 and on the surface it looks pretty much the same. Or does it?
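
For anyone who wants to see roughly where numbers like these come from, here is a small sketch. The 12 bit linear depth comes from the text above; the exposure placement (how far below full scale the brightest part of the face sits) is my own assumption for illustration:

```python
# Counting code values across a face in a linear raw recording.
# Assumption (mine, illustrative): the brightest part of the face sits at ~10% of full scale.
def linear_codes_across(stops, bit_depth, top_fraction):
    full_scale = 2 ** bit_depth - 1
    top = top_fraction * full_scale        # brightest skin value
    bottom = top / (2 ** stops)            # each stop down halves the linear value
    return round(top - bottom)

print(linear_codes_across(3, 12, 0.10))    # ~358 code values across a 3 stop face
print(linear_codes_across(2, 12, 0.10))    # ~307 across a 2 stop face
# Compare that with the roughly 24-35 values the same face gets in 8 bit S-Log2.
```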

Look closely and you will see more texture in the 12 bit raw. If you are lucky enough to have a 10 bit monitor the differences are even more apparent. Sure, it isn’t an in-your-face night and day difference but the 12 bit skin tones look less like plastic and much more real, they just look nicer, especially if it’s someone with a good complexion.

In addition, looking at my test material I am also seeing some mpeg compression artefacts on the skin tones in the 8 bit XAVC that have a smoothing effect on the skin tones, reducing some of the subtle textures and adding to the slightly un-real, video look.

The other issue with a lack of code values and H264 compression is banding. Take 8 bit S-Log2 and start boosting the contrast in a sky scene, perhaps to bring out some cloud details, and you will start to see banding and stair stepping if you are not very careful. You will also see it across walls and other textureless surfaces. You can even see this on your grading suite waveform scopes in many cases. You won't get this with well exposed 12 bit linear raw (for any normal grading at least).

None of these are huge deals perhaps. But what is it that makes a great picture? Take Sony's Venice or the Arri Alexa as examples. We know these to be great cameras that produce excellent images. But what is it that makes the images so good? The short answer is that it is a combination of a wide range of factors, each done as well as possible. Good DR, good colour, good skin tones etc. So what you want to do is record whatever the sensor in your camera can deliver, as well as you can. 8 bit UHD compressed to only 100Mb/s is not really that great. 12 bit raw will give you more texture in the mid range and highlights. It does have some limitations in the shadows, but that is easily overcome with a nice bright exposure and printing down in post.

And it’s not just about image quality.

Don't forget that ProRes Raw makes shooting at 4K DCI possible. If you hope to ever release your work for cinema display, perhaps on the festival circuit, you are going to be much better off shooting in the cinema DCI 4K standard rather than the UHD TV standard. It also allows you to shoot 60fps in 4K (I'm in the middle of a very big 4K 60p project right now). Want to shoot even faster? Well, with ProRes Raw you can shoot at up to 120fps in 4K. So there are many other benefits to the raw option on the FS5 and recording to ProRes Raw on a Shogun Inferno.

There is also the acceptability of 8 bit UHD. No broadcaster that I know of will ever consider 8 bit UHD unless there is absolutely no other way to get the material. You are far more likely to be able to get them to accept 12 bit raw.

Future proofing is another consideration. I am sure that ProRes raw decoders will improve and support in other applications will eventually come. By going back to your raw sensor data with better software you may be able to gain better image quality from your footage in the future. With Log you are already somewhat limited as the bulk of the image processing has already been done and is baked into the footage.

It’s late on Friday afternoon here in the UK and I’ve promised to spend some time with the family this evening. So no videos today. But next week I’ll post some of the examples I’ve been looking at so that you can see where ProRes raw elevates the image quality possible from the FS5.

Banding in your footage. What causes it, is it even there?

Once again it’s time to put pen to paper or fingers to keyboard as this is a subject that just keeps coming up again and again.

People really seem to have a lot of problems with banding in footage and I don't really fully understand why, as it's something I only ever really encounter if I'm pushing a piece of material really, really hard in post production. Generally the vast majority of the content I shoot does not exhibit problematic banding, even the footage I shoot with 8 bit cameras.

First things first: don't blame it on the bits. Even an 8 bit recording (from a good quality camera) shouldn't exhibit noticeable banding. An 8 bit recording can contain up to around 13 million tonal values (roughly 235 usable levels in each of the three colour channels). It's extremely rare for us to shoot luma only, but even if you do it will still have 235 shades, and in standard dynamic range these steps are too small for most people to discern, so you shouldn't ever be able to see them. I think that when most people see banding they are not seeing teeny, tiny, almost invisible steps; what most people see is something much more noticeable. So where is it coming from?
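
The arithmetic behind that figure is straightforward, assuming roughly 235 usable levels per channel in a legal-range 8 bit recording:

```python
# Why an 8 bit video recording still has millions of tonal values.
levels_per_channel = 235           # approximate legal-range levels per channel
print(levels_per_channel ** 3)     # 12977875 - i.e. ~13 million combined RGB values

# Luma only is a different story: just 235 discrete grey shades, yet in
# standard dynamic range each step is still too small to see.
```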

It's worth considering at this stage that most TVs, monitors and computer screens are only 8 bit, sometimes less! So if you are looking at one camera and it's banding free, and then you look at another and you see banding, in both cases you are probably looking at an 8 bit image, so it can't just be the capture bit depth that is causing the problem, as you can't see 10 bit steps on an 8 bit monitor.

So what could it be?

A very common cause of banding is compression. DCT based codecs such as JPEG, MJPEG, H264 etc break the image up into small blocks of pixels called macro blocks. All the pixels in each block are then processed in a similar manner and, as a result, there may sometimes be a small step between each block, or between groups of blocks, across a gradient. This can show up as banding. Often we see this with 8 bit codecs because typically 8 bit codecs use older technology or are more highly compressed; it's not because there are not enough code values. Decreasing the compression ratio will normally eliminate the stepping.
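
A toy example of the effect (simple block averaging rather than a real DCT quantiser, but it shows how per-block processing can turn a smooth ramp into bands):

```python
import numpy as np

# Quantise a smooth ramp in 8 pixel blocks, the way heavy compression can
# flatten detail within macro blocks.
width, block = 64, 8
gradient = np.linspace(100, 116, width)                  # smooth ramp

blocky = gradient.copy()
for start in range(0, width, block):
    blocky[start:start + block] = round(float(gradient[start:start + block].mean()))

print(np.unique(blocky))    # the ramp collapses into a few flat bands - visible steps
```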

Scaling between bit depths or frame sizes is another very common cause of banding. It's absolutely vital that you ensure that your monitoring system is up to scratch. It's very common to see banding in video footage on a computer screen because video data levels are different to computer data levels, and in addition there may also be some small gamma differences, so the image has to be scaled on the fly. On top of that the computer desktop runs at one bit range and the HDMI output at another, so all kinds of conversions are taking place that can lead to all kinds of problems when you go from a video clip, to computer levels, to HDMI levels. See this article to fully understand how important it is to get your monitoring pipeline properly sorted: https://www.xdcam-user.com/2017/06/why-you-need-to-sort-out-your-post-production-monitoring/

Look Up Tables (LUTs) can also introduce banding. LUTs were never really intended to be used as a quick fix grade; the intention was to use them as an on-set reference or guide, not for the final output. The 3D LUTs that we typically use for grading break the full video range into bands, and each band will apply a slightly different correction to the footage than the band above or below. These bands can show up as steps in the LUT's output, especially with the most common 17x17x17 3D LUTs. This problem gets even worse if you apply a LUT and then grade on top, which is a really bad practice.
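
To put a number on how coarse that is, each axis of a 17x17x17 cube has only 17 nodes across the full signal range, so everything in between has to be interpolated:

```python
# Node spacing of a 17 point 3D LUT axis.
nodes = 17
spacing = 1.0 / (nodes - 1)
print(spacing)                    # 0.0625 - each node is 6.25% of the range from the next
print(round(spacing * 1023))      # ~64 code values apart in 10 bit terms
```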

Noise reduction, in camera or in post production, will also often introduce banding. Very often pixel averaging is used to reduce noise. If you have a bunch of pixels that are jittering up and down, taking an average value for all those pixels will reduce the noise, but then you can end up with steps across a gradient as you jump from one average value to the next. If you shoot log it's really important that you turn off any noise reduction (if you can) when you are shooting, because when you grade the footage these steps will get exaggerated. Raising the ISO (gain) in a camera also makes this much worse, as the camera's built in NR will be working harder, increasing the averaging to compensate for the increased noise.

Coming back to 8 bit codecs again – Of course a similar quality 10 bit codec will normally give you more picture information than an 8 bit one. But we have been using 8 bits for decades, largely without any problems. So if you can shoot 10 bit you might get a better end result. But also consider all the other factors I’ve mentioned above.

 

How can 16 bit X-OCN deliver smaller files than 10 bit XAVC-I?

Sony's X-OCN (X Original Camera Negative) is a new type of codec. Currently it is only available via the R7 recorder, which can be attached to a Sony PMW-F5, F55 or the new Venice cinema camera.

It is a truly remarkable codec that brings the kind of flexibility normally only available with 16 bit linear raw files, but with a file size that is smaller than many conventional high end video formats.

Currently there are two variations of X-OCN.

X-OCN ST is the standard version and X-OCN LT is the "light" version. Both are 16 bit and both contain 16 bit data based directly on what comes off the camera's sensor. The LT version is barely distinguishable from a 16 bit linear raw recording and the ST version is "visually lossless". Having that sensor data in post production allows you to manipulate the footage over a far greater range than is possible with traditional video files. Traditional video files will already have some form of gamma curve as well as a colour space and white balance baked in. This limits the scope of how far the material can be adjusted and reduces the amount of picture information you have (relative to what comes directly off the sensor).

Furthermore most traditional video files are 10 bit with a maximum of 1024 code values or levels within the recording. There are some 12 bit codecs but these are still quite rare in video cameras. X-OCN is 16 bit which means that you can have up to 65,536 code values or levels within the recording. That’s a colossal increase in tonal values over traditional recording codecs.

But the thing is that X-OCN LT files are a similar size to Sony’s own XAVC-I (class 480) codec, which is already highly efficient. X-OCN LT is around half the size of the popular 10 bit Apple ProRes HQ codec but offers comparable quality. Even the high quality ST version of X-OCN is smaller than ProRes HQ. So you can have image quality and data levels comparable to Sony’s 16 bit linear raw but in a lightweight, easy to handle 16 bit file that’s smaller than the most commonly used 10 bit version of ProRes.

But how is this even possible? Surely such an amazing 16 bit file should be bigger!

The key to all of this is that the data contained within an X-OCN file is based on the sensor's output rather than traditional video. The cameras that produce the X-OCN material all use Bayer sensors. In a traditional video workflow the data from a Bayer sensor is first converted from the luminance values that the sensor produces into a YCbCr or RGB signal.

So if the camera has a 4096×2160 Bayer sensor, in a traditional workflow this pixel level data gets converted to 4096×2160 of Green, plus 4096×2160 of Red, plus 4096×2160 of Blue (or the same of Y, Cb and Cr). In total you end up with around 26 million data points which then need to be compressed using a video codec.

However if we bypass the conversion to a video signal and just store the data that comes directly from the sensor, we only need to record a single set of 4096×2160 data points – 8.8 million. This means we only need to store 1/3rd as much data as in a traditional video workflow, and it is this huge data saving that is the main reason why it is possible for X-OCN to be smaller than traditional video files while retaining amazing image quality. It's simply a far more efficient way of recording the data from a Bayer camera.
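
The data point arithmetic is easy to check:

```python
# Data points per frame for a 4096x2160 Bayer sensor.
width, height = 4096, 2160

rgb_or_ycbcr = width * height * 3    # three full channels after conversion to video
bayer_raw = width * height           # one value per photosite, straight off the sensor

print(rgb_or_ycbcr)                  # 26542080 - roughly 26 million data points
print(bayer_raw)                     # 8847360  - roughly 8.8 million data points
print(bayer_raw / rgb_or_ycbcr)      # 0.333... - only a third of the data to store
```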

Of course this does mean that the edit or playback computer has to do some extra work because, as well as decoding the X-OCN file, it has to be converted to a video file. But Sony developed X-OCN to be easy to work with, which it is. Even a modest modern workstation will have no problem working with X-OCN. And the fact that you have that sensor data in the grading suite means you have an amazing degree of flexibility. You can even adjust the way the file is decoded to tailor whether you want more highlight or shadow information in the video file that will be created after the X-OCN is decoded.

Why isn't 16 bit much bigger than 10 bit? Normally a 16 bit file will be bigger than a 10 bit file. But within a video image there are often areas of information that are very similar. Video compression algorithms take advantage of this and, instead of recording a value for every pixel, will record a single value that represents all of the similar pixels. When you go from 10 bit to 16 bit you do have more bits of data to record, but a greater percentage of the code values will be the same or similar, and as a result the codec becomes more efficient. So the file size does increase a bit, but not as much as you might expect.

So, X-OCN, out of the gate, only needs to store 1/3rd of the data points of a similar traditional RGB or YCbCr codec. Increasing the bit depth from the typical 10 bits of a regular codec to the 16 bits of X-OCN does then increase the amount of data needed to record it. But the use of a clever algorithm to minimise the data needed for those 16 bits means that the end result is a 16 bit file only a bit bigger than XAVC-I but still smaller than ProRes HQ, even at its highest quality level.

Latest Apple Pro Video Formats Update Adds MXF Playback.

If you are running the latest macOS Sierra, the recent Pro Video Formats update, version 2.0.5, adds the ability to play back MXF OP1a files in QuickTime Player without the need to transcode.

Direct preview of an XAVC MXF file in the Finder of macOS Sierra.

You can also preview MXF files in the Finder window directly! This is a big deal and very welcome; finally you don't need special software to play back files wrapped in one of the most commonly used professional media wrappers. Of course you must have the codec installed on your computer, it won't play a file you don't have the codec for, but XAVC, ProRes and many other pro codecs are included in the update.

At the moment I am able to play back most MXF files, including most XAVC and ProRes MXFs. However some of my XAVC MXFs are showing up as audio only files. I can still play back these files with 3rd party software, there is no change there. But for some reason I can't play back every XAVC MXF file directly in QuickTime Player; some play as audio only. I'm not sure why some files are fine and others are not, but this is certainly a step in the right direction. Why it's taken so long to make this possible I don't really know, although I suspect it is now possible due to changes in the core QuickTime components of macOS Sierra. You can apply this same Pro Video Formats update to earlier OS versions but you don't gain the MXF playback.

Thanks to reader Mark for the heads-up!

Apple Pro Video Formats 2.0 – MXF in Quick Time, XAVC in FCP7???

So, Apple have released an update to the Pro Video Formats available in quicktime. http://support.apple.com/kb/DL1396?viewlocale=en_US&locale=en_US

The main thrust of this update appears to be to include MXF support in QuickTime, including native support for XAVC in FCP7 and QuickTime 7! I honestly never thought this would happen, so I'm somewhat surprised by this. But it's good news for FCP7 users and XAVC shooters in general. It does beg the question now as to whether you need the ProRes options for the FS7 or F5/F55.

Using QuickTime Player 7 you can play back XAVC MXFs on a Mac, even in 4K, so VLC may no longer be required, and the playback is smooth even with 4K clips. I'm not sure why, but QuickTime Player 10 does not recognise 4K XAVC clips at all, and HD XAVC clips get transcoded to .mov first. So download and install QT Player 7 for XAVC playback.

Currently it looks like the support is mainly for XAVC-I.

More Codec and Gamma Tests.

More Gemini, Samurai, AC-Log and S-Log sample frame grabs. See download box at bottom of post.
I had thought, when I first wrote this post, that I had discovered a strange issue where the 444 RGB recordings from the Gemini had more dynamic range than 422 recordings. I didn't think this was right, but it was what my NLEs (FCP and CS5.5) were telling me. Anyway, to cut a long story short, what was happening was that when I dropped the Gemini RGB files into the timeline the levels got mapped to legal levels, i.e. nothing over 100%, while the YCbCr 422 clips went into the timeline at their original levels. The end result was that it appeared that the 422 clips were clipping before the 444 clips. Thanks to Waho for suggesting that it may be a conversion issue with the frame grabs; that led me to see that it was simply the way the NLEs (both CS5.5 and FCP behaved in the same way) were clipping off anything in the 422 clips above 100%, both in the frame grabs and also on the monitor output. As the RGB files were all below 100% they were not clipped, so they appeared to have greater dynamic range.

Anyway, below is a new set of frame grabs layered up in a single Photoshop file showing how the various recorders and codecs perform. The levels in these have been normalised at 100% to avoid any dodgy clipping issues. I've included F3 Cinegamma 4, plus my AC-Log picture profile, plus Samurai ProRes, Gemini S-Log and F3 internally recorded S-Log of a very extreme contrast situation. Use the link below to download the Photoshop layers file. You'll need to be a registered user to access the link.

[downloads_box title=”More codec test grabs.”]
Photoshop Layered Frame Grabs v3
[/downloads_box]

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes to the gamma or gain within the image. As you are starting with 8 bits, or 240 shades of grey (bits 16 to 255, assuming recording to 109%), and encoding to 240 shades, the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let's say, level 128 should now be level 128.5, this can't be done; we can only record whole bits, so it's rounded up or down to the closest whole bit. This rounding leads to a reduction in the number of shades recorded overall and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth graduation:

128,    129,   130,   131,   132,   133.

Imagine you are doing some grading and you plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5

But we can't record half bits, only whole ones, so for 8 bit these get rounded to the nearest bit:

129,   129,   129,   132,   132,   134

You can see how easily banding will occur; our smooth gradation now has some marked steps.

If you render to 10 bit instead you retain more in-between steps. When level 128 is determined to be 128.5 by the plugin, this can now actually be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So, approximately translating to 10 bit, level 128 would be 499 and 128.5 would be 501:
128.5 = 501

129 = 503

129.4 = 505

131.5 = 513

132 = 515

133.5 = 521

So you can see that we now retain in-between steps which are not present when we render to 8 bit so our gradation remains much smoother.
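
The same rounding effect can be shown with a few lines of code. The 3.9 scaling factor follows the approximation used above, and halves are rounded up as in the worked example:

```python
def half_up(value, scale=1.0):
    """Round to the nearest whole level, with halves rounding up (as in the text)."""
    return int(value * scale + 0.5)

graded = [128.5, 129, 129.4, 131.5, 132, 133.5]

eight_bit = [half_up(v) for v in graded]        # whole 8 bit levels only
ten_bit = [half_up(v, 3.9) for v in graded]     # ~3.9 10 bit steps per 8 bit step

print(eight_bit)   # [129, 129, 129, 132, 132, 134] - duplicates create visible bands
print(ten_bit)     # [501, 503, 505, 513, 515, 521] - the in-between steps survive
```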

The 8 bit or 10 bit debate.

Over the years there have been many, often heated debates over the differences between 8 bit and 10 bit codecs. This is my take on the situation, from the acquisition point of view.

The first thing to consider is that a 10 bit codec requires a 30% higher bitrate to achieve the same compression ratio as the equivalent 8 bit codec. So recording 10 bit needs bigger files for the same quality. The EBU recently evaluated several different 8 bit and 10 bit acquisition codecs and their conclusion was that for acquisition there was little to be gained by using any of the commonly available 10 bit codecs over 8 bit because of the data overheads.

My experience in post production has been that what limits what you can do with your footage, more than anything else, is noise. If you have a noisy image and you start to push and pull it, the noise in the image tends to limit what you can get away with. If you take two recordings, one at a nominal 100Mb/s and another at say 50Mb/s, you will be able to do more with the 100Mb/s material because there will be less noise. Encoding and compressing material introduces noise, often in the form of mosquito noise as well as general image blockiness. The more highly compressed the image, the more noise and the more blockiness. It's this noise and blockiness that will limit what you can do with your footage in post production, not whether it is 10 bit or 8 bit. If you have a 100Mb 10 bit compressed HD recording and a comparable 100Mb 8 bit recording, you will be able to do more with the 8 bit recording because it will be in effect 30% less compressed, which will give a reduction in noise.

Now if you have a 100Mb 8bit recording and a 130Mb 10 bit recording things are more evenly matched and possibly the 10 bit recording if it is from a very clean, noise free source will have a very small edge, but in reality all cameras produce some noise and it’s likely to be the camera noise that limits what you can do with the images so the 10 bit codec has little advantage for acquisition, if any.

I often hear people complaining about the codec they are using, citing that they are seeing banding across gradients such as white walls or the sky. Very often this is nothing to do with the codec; very often it is being caused by the display they are using. Computers seem to be the worst culprits. Often you are taking an 8 bit YUV codec, crudely converting that to 8 bit RGB and then further converting it to 24 bit VGA or DVI, which then gets converted back down to 16 bit by the monitor. It's very often all these conversions between YUV and RGB that cause banding on the monitor, and not the fact that you have shot at 8 bit.

There is certainly an advantage to be had by using 10 bit in post production for any renders, grading or effects. Once in the edit suite you can afford to use larger codecs running at higher bit rates. ProRes HQ or DNxHD at 185Mb/s or 220Mb/s are good choices, but these often wouldn't be practical as shooting codecs, eating through memory cards at over 2Gb per minute. It should also be remembered that these are "I" frame only codecs, so they are not as efficient as long GoP codecs. From my point of view I believe that to get something the equivalent of 8 bit Mpeg 2 at 50Mb/s you would need a 10 bit I frame codec running at over 160Mb/s. How do I work that out? Well, if we consider that Mpeg 2 long GoP is 2.5x more efficient than I frame only then we get to 125Mb/s (50 x 2.5). Next we add the required 30% overhead for 10 bit (125 x 1.3), which gives 162.5Mb/s. This assumes the minimum long GoP efficiency of x2.5; very often the long GoP advantage is closer to x3.
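
That calculation as a few lines of code, using the same assumed figures as above:

```python
# Back-of-the-envelope sums only; the efficiency and overhead figures are the
# article's assumptions, not measured values.
MPEG2_LONG_GOP_MBPS = 50    # the 8 bit long GoP reference point
GOP_ADVANTAGE = 2.5         # assumed minimum long GoP efficiency over I frame only
TEN_BIT_OVERHEAD = 1.3      # ~30% extra bit rate needed to keep quality at 10 bit

i_frame_8bit = MPEG2_LONG_GOP_MBPS * GOP_ADVANTAGE
i_frame_10bit = i_frame_8bit * TEN_BIT_OVERHEAD
print(i_frame_8bit)         # 125.0 Mb/s for an equivalent 8 bit I frame codec
print(i_frame_10bit)        # 162.5 Mb/s for the equivalent 10 bit I frame codec
```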

So I hope you can see that 8 bit still makes sense for acquisition. In the future as cameras get less noisy, storage gets cheaper and codecs get better the situation will change. Also if you are studio based and can record uncompressed 10 bit then why not? Do though consider how you are going to store your media in the long term and consider the overheads needed to throw large files over networks or even the extra time it takes to copy big files compared to small files.