Category Archives: Workflow

Why Doesn’t My Footage Grade Well?

So this came up on social media. Someone had been playing with some sample raw footage provided by a camera manufacturer and was concerned because they felt that this manufacturer supplied footage graded much better than their own Sony XAVC-I footage.

There is a lot to a situation like this, but very often the issue isn’t that the other footage was raw while their own footage was XAVC-I S-Log. Raw doesn’t normally have more colour or more dynamic range than log. The colour range and the shadow range won’t be significantly different as that tends to be limited by the camera’s sensor rather than the recording codec. But what you might have if the raw is 12 bit or greater is a larger bit depth or less compression, perhaps a bit of both, and that can sometimes give some extra precision or finer gradations as well as a bit less noise (although compression can reduce noise). This may come into play when you really start to push and pull the footage very hard in the grade, but generally, if you can’t get the image you want out of 10 bit XAVC-I, 12 bit raw isn’t going to help you. Raw might make it a bit quicker to get to where you want to be, and I do love working with the 16 bit X-OCN (raw) from Venice and Burano, but I have never really felt that XAVC S-Log3 is lacking. Even a deeper bit depth might not be all it seems. The sensors in most video cameras under $20K only have 12 bit analog to digital converters and that tends to be the main image quality bottleneck (and this is where Venice really shines with its 14 bit A to D).
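
To put some rough numbers on that (a generic illustration, not figures for any particular camera), here is how the count of possible code values per channel grows with bit depth. Whatever the codec wraps around it, a 12 bit A to D stage is what ultimately limits the precision coming off the sensor:

# Possible code values per channel at each bit depth
# (full range, ignoring legal/video range restrictions).
for bits in (8, 10, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} code values per channel")
# 8-bit: 256, 10-bit: 1,024, 12-bit: 4,096, 14-bit: 16,384, 16-bit: 65,536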

Sony’s XAVC-I S-Log3 grades really well, really, really well. A big issue however is the reliance on LUTs. 3D LUTs divide the colour range up into a grid of 33 or 65 adjustment points along each axis and it is then down to the grading software to interpolate between those points. This can introduce artefacts into the image.
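
For anyone curious what that interpolation actually involves, here is a minimal sketch of generic trilinear interpolation through a 3D LUT (illustrative Python, not the code of any particular grading application):

import numpy as np

def apply_3d_lut(rgb, lut):
    # lut is an N x N x N x 3 array (N is typically 33 or 65), values 0-1.
    # Colours that land between nodes are estimated by blending the 8
    # surrounding nodes, and it is this estimation that can introduce artefacts.
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = pos - lo
    out = np.zeros(3)
    for corner in range(8):
        weight = 1.0
        idx = []
        for axis in range(3):
            if (corner >> axis) & 1:
                idx.append(hi[axis])
                weight *= frac[axis]
            else:
                idx.append(lo[axis])
                weight *= 1.0 - frac[axis]
        out += weight * lut[idx[0], idx[1], idx[2]]
    return out

# Example: an identity LUT with 33 nodes per axis returns the input unchanged.
grid = np.linspace(0.0, 1.0, 33)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.5, 0.75], identity))   # ~[0.25, 0.5, 0.75]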

Some people simply skip doing a proper colourspace transform altogether, and this may introduce twists into the gamma and colourspace which then make it hard to get the colours or contrast they really want, as it can be hard to bend the colours in a pleasing way without them going “weird”.

Colour managed workflows help to maintain the full range of the original content within the correct output colourspace without any unwanted twists, and are often the best way to fully realise the potential of the content you have shot.

Plus not all grading software is created equal. I was an Adobe Premiere user for years until I needed to do a lot more grading. When DaVinci Resolve became affordable I switched to Resolve for grading and have never looked back – after all it is a proper grading tool, not edit software with a bunch of plugins bolted on.

But as always the real key is how it was shot. Manufacturer supplied sample content is likely to have been shot very well and highly optimised, after all they want it to make their camera look as good as possible. When comparing footage from different sources you really do need to consider just how well it was shot. Was the most appropriate exposure index used for the type of scene? Was it shot at the best possible time of day for the best sun positioning? How much attention went into things like the careful choice of the colours in the scene to provide pleasing colour contrast? How much time was spent with negative fill to bring down the shadow areas, what filtration was used to bleed off highlights or polarise the sky or windows, what lenses were used? All these things will have a massive impact on how gradeable the footage will be.

Updates for Catalyst Browse and Resolve 18.5 Beta

This is just a quick heads up as I’m on the road right now.

Sony have released a major update for Catalyst Browse and Catalyst Prepare that is packed full of bug fixes. https://www.sonycreativesoftware.com/catalystbrowse
 

In addition Blackmagic Design have just released the public beta of DaVinci Resolve 18.5. With this update you can now use the raw grading controls to adjust the ISO/White Balance/Tint etc. of S-Log3 footage from the FX series cameras. This makes it so easy to adjust for any exposure offsets. https://www.blackmagicdesign.com/support/family/davinci-resolve-and-fusion

Catalyst Browse Invalid Licence Issues

I’m just putting this here in case it is of use to someone stuck with an Invalid Licence message when starting a new installation of Sony’s Catalyst Browse or Prepare software. Typically you get this error if you have migrated the operating system to a new computer via a backup or have uninstalled and then re-installed one of the Catalyst products.

On a Mac

Press and hold Shift + Command (⌘) while launching the application from the application folder.

You should then be prompted to log in to your Sony Creative Software account, after which the activation should proceed as normal.

If that does not correct the issue, you may need to remove the licence files from your system.

Locate this folder:

/Users/Shared/

This folder contains files that correspond to the Catalyst products installed on your system and they will have the extension .LICENSE. Delete the .license files and retry the online registration process.

Please note that the .license files are hidden. To make them visible, press Shift + Command (⌘) + Period (.)

On a Windows PC

Press and hold Shift + the Windows key while launching the application.

This should open a window prompting you to log in to your Sony Creative Software account to activate your software as normal.

If that does not correct this issue, you may need to manually remove the licence files from your system.

Locate the following folder:

C:\ProgramData\Sony\

This folder contains files that correspond with the Catalyst products you have installed on your system and they will have the extension .LICENSE. Delete the .license files and retry the online registration process.

The ProgramData folder is a hidden folder. If you can’t see it, you will need to adjust your Folder Options to allow hidden files, folders, and drives to be displayed.
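
If you would rather script the clean-up than hunt for hidden files by hand, a small Python sketch along these lines should list the .license files in the folders named above (the --delete flag is my own hypothetical convenience, not a Sony tool, so check the list before using it):

import pathlib
import platform
import sys

# Folders named in the steps above.
folder = pathlib.Path("/Users/Shared/") if platform.system() == "Darwin" \
    else pathlib.Path(r"C:\ProgramData\Sony")

# The .license files may be hidden, but listing the folder directly still finds them.
licence_files = [p for p in folder.iterdir() if p.suffix.lower() == ".license"]

for f in licence_files:
    print(f)
    if "--delete" in sys.argv:
        f.unlink()
        print(f"  deleted {f}")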

XAVC-I or XAVC-L: which to choose?

THE XAVC CODEC FAMILY

The XAVC family of codecs was introduced by Sony back in 2014.  Until recently all flavours of XAVC were based on H264 compression. More recently new XAVC-HS versions were introduced that use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. These have both been around for a while now and are well tried and well tested.

XAVC-I

XAVC-I is a very good intra frame codec where each frame is individually encoded. It’s being used for Netflix shows, it has been used for broadcast TV for many years and there are thousands and thousands of hours of great content that has been shot with XAVC-I without any issues. Most of the in-flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you make a lot of effort to find them. But it is a high compression codec so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ and as you can see there is very little difference between the two, even after several encoding passes.


 

XAVC-L

XAVC-L is a long GOP version of XAVC-I. Long GOP (Group of Pictures) codecs fully encode a starting frame and then for the next group of frames (typically 12 or more) only store the differences between frames until the next full frame at the start of the next group. They record the changes between frames using things like motion prediction and motion vectors which, rather than recording new pixels, move existing pixels from the first fully encoded frame through the subsequent frames if there is movement in the shot. Do note that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K XAVC-L is 8 bit (while XAVC-I is 10 bit).

Performance and Efficiency.

Long GOP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GOP is around 2.5x more efficient than the I frame version, and this is why the bit rate of XAVC-I is around 2.5x higher than XAVC-L, so that for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, then there will be situations where XAVC-L can perform better than XAVC-I.
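
A quick storage comparison makes the trade-off concrete. The bit rates below are purely illustrative placeholders to show the ratio, not official Sony figures, so check the actual numbers for your camera, codec class and frame rate:

def gb_per_hour(mbps):
    # megabits per second -> gigabytes per hour
    return mbps * 3600 / 8 / 1000

xavc_i_mbps = 250   # assumed I-frame bit rate for this example
xavc_l_mbps = 100   # assumed long GOP bit rate, roughly 2.5x lower

print(f"XAVC-I: ~{gb_per_hour(xavc_i_mbps):.0f} GB per hour")   # ~112 GB
print(f"XAVC-L: ~{gb_per_hour(xavc_l_mbps):.0f} GB per hour")   # ~45 GB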

Motion Artefacts.

BUT as soon as you add a lot of motion or a lot of extra noise (which looks like motion to a long GOP codec) long GOP codecs struggle as they don’t typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let’s face it, the primary reason behind the use of long GOP encoding is to save space. And that’s done by decreasing the bit rate. So generally long GOP codecs have much lower bit rates so that they will actually provide those space savings. But that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GOP codec to process as almost everything is different from frame to frame, including entirely new background information hidden behind the cars in one frame that becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water, flocks of birds are all very challenging and will often exhibit visible artefacts in a lower bit rate long GOP codec that you won’t ever get in the higher bit rate I frame version.

Concatenation.
 
A further issue is concatenation. The artefacts that occur in long GoP codecs often move in the opposite direction to the object that’s actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters or anyone where there may be multiple compression passes using long GoP codecs such as H264 or H265.

Quality depends on the motion.

So, when things are just right and the scene suits XAVC-L it will perform well and it might show marginally fewer artefacts than XAVC-I, but those artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. But when there is complex motion XAVC-L can produce visible artefacts. And it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts, your footage will be consistently good no matter what you shoot.

Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, and it can be beneficial if you do need to save card or disk space. But be aware of its limitations and avoid it if you are shooting handheld, or shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K XAVC-L is 8 bit while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.

HELP! There is banding in my footage – or is there?

I’ve written about this before, but it’s worth bringing up again as I keep coming across people that are convinced there is a banding issue with their camera or their footage. Most commonly they have shot a clear blue sky or a plain wall and when they start to edit or grade their content they see banding in the footage.

Most of the cameras on the market today have good quality 10 bit codecs and there is no reason why you should ever see banding in a 10 bit recording; it’s actually fairly uncommon even in 8 bit recordings unless they are very heavily compressed or a lot of noise reduction has been used.
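
The arithmetic behind that claim is simple (these are the standard legal/video range luma levels, which is what most cameras record):

# Legal (video range) luma code values per channel.
eight_bit_levels = 235 - 16 + 1    # 220 usable levels
ten_bit_levels = 940 - 64 + 1      # 877 usable levels
print(eight_bit_levels, ten_bit_levels)
# A gentle sky gradient that only spans a few dozen 8 bit levels gets roughly
# four times as many steps in 10 bit, so the steps become too fine to see.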

So – why are these people seeing banding in their footage? 

99% of the time it is because of their monitoring. 

Don’t be at all surprised if you see banding in footage if you view the content on a computer monitor or other monitor connected via a computer’s own HDMI port or a graphics card HDMI port. When monitoring this way it is very, very common to see banding that isn’t really there. If this is what you are using there will be no way to be sure whether any banding you see is real or not (about the only exception to this is the screen of the new M1 laptops). There are so many level translations between the colourspace and bit depth of the source video files, the computer desktop, the HDMI output and the monitor’s setup that banding is often introduced somewhere in the chain. Very often the source clips will be 10 bit YCbCr, the computer might be using a 16 bit or 24 bit colour mode and then the HDMI might only be 8 bit RGB. Plus the gamma of the monitor may be badly matched and the monitor itself of unknown quality.

For a true assessment of whether footage has banding or not you want a proper, good quality video monitor connected via a proper video card such as a Blackmagic DeckLink card or a device such as a Blackmagic UltraStudio. When using a proper video card (not a graphics card) you bypass all the computer processing and go straight from the source content to the correct output. This way you will go from the 10 bit YCbCr source direct to a 10 bit YCbCr output, so there won’t be extra conversion and translation stages adding phantom artefacts to your footage.

If you are seeing banding, to try to understand whether the banding you are seeing is in the original footage or not try this: Take the footage into your grading software, using a paused (still) frame enlarge the clip so that the area with banding fills the monitor and note exactly where the edges of the bands are. Then slowly change the contrast of the clip. If the position of the edges of the bands moves, they are not in the original footage and something else is causing them. If they do not move, then they are baked in to the original material.

Adobe to drop colour management by default.

At IBC I had an interesting chat with Adobe about how they implemented the colour management of footage in Adobe Premiere. They seem to realise that the way it is implemented by default isn’t very popular, isn’t well documented and isn’t particularly well implemented. So things will revert back to everything being handled as Rec-709 in a Rec-709 project by default in the near future. This will come very soon (it may already be in the public Beta but I haven’t checked). This does of course mean that once again your Log footage will look flat, which is actually incorrect; Log is not flat, you are just viewing it incorrectly. Those delivering in HDR will have to figure out how to best manage their footage so it isn’t unnecessarily restricted by passing through a Rec-709 project/timeline, and most will end up using LUTs with all the restrictions that they impose once again. Perhaps Adobe will return to colour management in the future once they have figured out how to implement it in a more user friendly way.

More on Adobe’s new color managed workflow.

I’ve written about this before, but the way Adobe Premiere manages colourspaces has changed, it hasn’t been well documented, and it’s causing a lot of confusion.

When importing Log footage into the latest versions of Adobe Premiere, instead of the log footage looking flat and washed out as it used to, it now looks contrasty and well saturated. If it has been exposed correctly (according to the manufacturer’s specifications) then it will look like normal Rec-709 footage rather than the flat look that most people associate with log. This is confusing people; many assume Adobe is now adding a LUT to the footage by default. It isn’t. What is happening is a fundamental change to the way Premiere handles different colorspaces.

NOT ADDING A LUT.

Premiere is NOT adding a LUT. It is transforming between the captured colorspace and the display colorspace so that the footage looks correct with the right contrast, colour and brightness on your display. Your footage remains in its native colorspace at all times (unless you force it into an alternate and possibly wrong colorspace by using the interpret footage function).

Your display could be 709, HDR10, HLG or SGamut3/S-Log3 and in each case the footage would, within the limitations of the display’s format, have the same basic contrast and colour. The footage would look the same whether viewing in SDR, HDR or Log because Premiere maps it to the correct levels for the output colorspace you are using to view your content.
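
To give a feel for what such a transform actually does (this is a generic sketch using Sony’s published S-Log3 formula, not a description of Adobe’s internal code), the first step is usually to convert the log code values back to linear light before mapping that linear light into the display colorspace:

def slog3_to_linear(code):
    # S-Log3 code value (normalised 0-1) -> linear scene reflection,
    # per Sony's S-Log3 technical summary.
    if code >= 171.2102946929 / 1023.0:
        return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    return (code * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

# 18% grey sits at roughly 41% in S-Log3:
print(slog3_to_linear(420.0 / 1023.0))   # ~0.18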

OLD BROKEN WORKFLOWS.

The issue is that previously we have been using very broken workflows that are normally incapable of showing capture colorspaces other than Rec-709 correctly. This has made people believe that log formats are supposed to look flat – they are not! When viewed correctly they should have the same contrast as 709 etc. Log is not flat, but we have been viewing it incorrectly because most workflows have been incapable of mapping different source colorspaces to our chosen working/viewing colorspace.

LUTs ARE A QUICK FIX – WITH LIMITATIONS.

Up to now, to fix these broken workflows, we have added LUTs to convert our beautiful, high dynamic range, vast colorspace source formats into restricted, reduced dynamic range display formats. Once you add that 709 LUT to your S-Log3 footage it is no longer SGamut3/S-Log3, it is now Rec-709 with all the restrictions that 709 has, such as limited dynamic range and limited colorspace, and that may limit what you can do with it in the grade. Plus it limits you to only ever outputting in SDR 709.

But what we have now in a colour managed workflow is our big range log being displayed correctly on a 709 display or any other type of display, including HDR or DCI-P3 etc. Because the footage is still in its native colorspace you will have much greater grading latitude, there’s no knee added to the highlights, no shadow roll off, no artificial restriction to the source colorspace. So you can more easily push and pull the material far further during adjustment and grading (raw workflows have always been color managed out of necessity as the raw footage can’t be viewed correctly without first being converted into a viewable colorspace).

HERE’S THE RUB!

But the rub is – you are not now adding someone else’s carefully crafted LUT, which is a combination of creative and artistic corrections that give a pleasing look combined with the Log to Rec 709 conversion.

So – you’re going to have to learn how to grade for yourself – but you will have a much bigger colour and contrast range to work with as your footage will remain in its full native capture range.

And – if you need to deliver in multiple formats, which you will need to start doing very soon if you are not already, it is all so much easier, as in a colour managed workflow all you do is switch the output format from 709 to HDR10 or HLG or DCI-P3 to get whatever format you want, without having to re-grade everything or use different LUTs for each format.

HOW LONG CAN YOU STAY JUST IN REC-709?

And when you consider that almost all new TVs and the majority of new phones and tablets have HDR screens, and that this is all now supported on YouTube and Vimeo etc, how much longer do you think you will be able to cling on to only delivering in SDR Rec-709 using off-the-shelf SDR LUTs? If you ever want to do work for Netflix, Amazon etc you will need to figure out how to work in both SDR and HDR.

IT’S HERE TO STAY

Adobe have done a shockingly bad job of documenting and explaining this new workflow, but it is the future and once you learn how to use it properly it should improve the quality of what you deliver and at the same time expand the range of what you can deliver.

I have to deliver both SDR and HDR for most of my clients and I’ve been using colour managed workflows for around 6 years now (mostly ACES in Resolve). I could not go back to the restrictions of a workflow that doesn’t allow you to output in multiple colorspaces or requires you to perform completely separate grades for SDR and HDR. The great thing about ACES is that it is a standardised and well documented workflow, so you can use LUTs designed for the ACES workflow if you wish. But until Adobe better document their own colour managed workflow it is difficult to design LUTs for use in the Adobe workflow. Plus LUTs that work with the Adobe workflow probably won’t work elsewhere. So – it’s never been a better time to learn how to grade properly or think about what workflow you will use to do your grading.

The bottom line is the days of using LUTs that add both an artistic look and convert your footage from its source colorspace to a single delivery colorspace are numbered. Colour managed workflows offer far greater flexibility for multi format delivery. Plus they retain the full range and quality of your source material, no matter what colorspace you shot it in or work in.

New SxS/AXS drivers and New Raw Viewer

Over the last few days Sony have been busy releasing new drivers and new software to support not just Venice 2 but also the AXS-R7 and newer AXS-R1 SxS card readers on Apple’s M1 Macs as well as Windows 11.

There is a new and updated version of Sony’s Raw Viewer software that includes support for the 8K Venice 2 files and which runs correctly on Apple’s newer M1 silicon. This can be downloaded from here: https://www.sonycreativesoftware.com/download/rawviewer

In addition Sony have released a new AXSM utility tool with new drivers for the AR1/AR3 to support the latest cards and formats as well as support for Apple M1 silicon and Windows 11. This is an essential update if you are using these new readers or Venice 2. This can be downloaded from here: https://www.sonycreativesoftware.com/axsmdrive#download

Timecode doesn’t synchronise anything!!!

There seems to be a huge misunderstanding about what timecode is and what timecode can do. I lay most of the blame for this on manufacturers that make claims such as “Our Timecode Gadget Will Keep Your Cameras in Sync” or “by connecting our wireless timecode device to both your audio recorder and camera everything will remain in perfect sync”. These claims are almost never actually true.

What is “Sync”.

First we have to consider what we mean when we talk about “sync” or synchronisation. A dictionary definition would be something like “the operation or activity of two or more things at the same time or rate.” For film and video applications, if we are talking about two cameras they would be said to be in sync when both start recording each frame that they record at exactly the same moment in time, and then over any period of time they record exactly the same number of frames, each frame starting and ending at precisely the same moment.

What is “Timecode”.

Next we have to consider what timecode is. Timecode is a numerical value that is attached to each frame of a video recording, or laid alongside an audio recording, to give it a time value in hours, minutes, seconds and frames. It is used to identify individual frames and each frame must have a unique numerical value. Each successive frame’s timecode value MUST be “1” greater than the frame before (I’m ignoring drop frame for the sake of clarity here). A normal timecode stream does not feature any form of sync pulse or sync control, it is just a number value.
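
Because it is just a count, converting between a timecode value and a frame number is simple arithmetic (non drop frame, as above):

def timecode_to_frames(tc, fps):
    # "HH:MM:SS:FF" (non drop frame) -> total frame count
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

print(timecode_to_frames("01:00:00:12", 25))   # 90012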

Controlling the “Frame Rate”.


And now we have to consider what controls the frame rate that a camera or recorder records at. The frame rate the camera records at is governed by the camera’s internal sync or frame clock. This is normally a circuit controlled by a crystal oscillator. It’s worth noting that these circuits can be affected by heat, and at different temperatures there may be very slight variations in the frequency of the sync clock. Also, this clock starts when you turn the camera on, so the exact starting moment of the sync clock depends on the exact moment the camera is switched on. If you were to randomly turn on a bunch of cameras their sync clocks would all be running out of sync. Even if you could press the record button on each camera at exactly the same moment, each would start recording the first frame at a very slightly different moment in time depending on where in the frame rate cycle the sync clock of each camera is. In higher end cameras there is often a way to externally control the sync clock via an input called “Genlock”. Applying a synchronisation signal to the camera’s Genlock input will pull the camera’s sync clock into precise sync with the sync signal and then hold it in sync.

And the issue is………..

Timecode doesn’t perform a sync function. To SYNCHRONISE two cameras, or a camera and an audio recorder, you need a genlock sync signal, and timecode isn’t a sync signal, timecode is just a frame count number. So timecode cannot synchronise two devices. The camera’s sync/frame clock might be running at a very slightly different frame rate to the clock of the source of the timecode. When feeding timecode to a camera, the camera might already be part way through a frame when the timecode value for that frame arrives, making it too late to be added, so there will be an unavoidable offset. Across multiple cameras this offset will vary, so it is completely normal to have a +/- 2 frame (sometimes more) offset amongst several cameras at the start of each recording.

And once you start to record the problems can get even worse…

If the camera’s frame clock is running slightly faster than the clock of the TC source then perhaps the camera might record 500 frames but only receive 498 timecode values – so what happens for the 2 extra frames the camera records in this time? The answer is that the camera will give each frame in the sequence a unique numerical value that increments by 1, so the extra frames will have the necessary 2 additional TC values. And as a result the TC in the camera at the end of the clip will be an additional 2 frames different to that of the TC source. The TC from the source and the TC from the camera won’t exactly match, they won’t be in sync or “two or more things at the same time or rate”, they will be different.

The longer the clip that you record, the greater these errors become as the camera and TC source drift further apart.
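
To put a rough number on that drift (the 20 ppm clock tolerance below is an assumed, illustrative figure, not a measured spec for any particular camera or TC box):

fps = 25
clock_error_ppm = 20                # assumed difference between the two clocks
seconds_recorded = 60 * 60          # a one hour take

drift_seconds = seconds_recorded * clock_error_ppm / 1_000_000
drift_frames = drift_seconds * fps
print(f"~{drift_frames:.1f} frames of drift after one hour")   # ~1.8 frames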

Before you press record on the camera, the camera’s TC clock will follow the external TC input. But as soon as you press record, every recorded frame MUST have a unique new numerical value 1 greater than the previous frame, regardless of what value is on the external TC input. So the camera’s TC clock will count the frames recorded. And the number of frames recorded is governed by the camera’s sync/frame clock, NOT the external TC.

So in reality the ONLY way to truly synchronise the timecode across multiple cameras or audio devices is to use a sync clock connected to the GENLOCK input of each device.

Connecting an external TC source to a camera’s TC input is likely to result in much closer TC values for both the audio recorder and camera(s) than no connection at all. But don’t be surprised if you see small 1 or 2 frame errors at the start of clips due to the exact timing of when the TC number arrives at the camera relative to when the camera starts to record the first frame, and then possibly much larger errors at the ends of clips; these errors are expected and normal. If you can’t genlock everything with a proper sync signal, a better way to do it is to use the camera as the TC source and feed the TC from the camera to the audio recorder. Audio recorders don’t record in frames, they just lay the TC values alongside the audio. As an audio recorder doesn’t need to count frames, the TC values will always be in the right place in the audio file to match the camera’s TC frame count.