Category Archives: Workflow

Timecode doesn’t synchronise anything!!!

There seems to be a huge misunderstanding about what timecode is and what timecode can do. I lay most of the blame for this on manufacturers that make claims such as “Our Timecode Gadget Will Keep Your Cameras in Sync” or “by connecting our wireless timecode device to both your audio recorder and camera everything will remain in perfect sync”. These claims are almost never actually true.

What is “Sync”?

First we have to consider what we mean when we talk about “sync” or synchronisation. A dictionary definition would be something like “the operation or activity of two or more things at the same time or rate”. For film and video applications, two cameras would be said to be in sync when both start recording each frame at exactly the same moment in time, and over any period of time they record exactly the same number of frames, with each frame starting and ending at precisely the same moment.

What is “Timecode”?

Next we have to consider what timecode is. Timecode is a numerical value attached to each frame of a video recording, or laid alongside the audio in an audio recorder, to give it a time value in hours, minutes, seconds and frames. It is used to identify individual frames and each frame must have a unique numerical value. Each successive frame’s timecode value MUST be “1” greater than the frame before (I’m ignoring drop frame for the sake of clarity here). A normal timecode stream does not contain any form of sync pulse or sync control, it is just a number value.
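To make the “it’s just a number” point concrete, here is a minimal sketch (in Python, purely illustrative, assuming a simple 25fps non-drop-frame count) of how a running frame count maps to an hours:minutes:seconds:frames value. Notice there is no sync information anywhere in it:

```python
# Illustrative only: timecode as nothing more than a per-frame number.
# Assumes 25 fps, non-drop-frame, for simplicity.
FPS = 25

def frames_to_timecode(frame_count, fps=FPS):
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = (frame_count // (fps * 3600)) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# Each successive frame's value is simply 1 greater than the previous frame's.
print(frames_to_timecode(0))      # 00:00:00:00
print(frames_to_timecode(1))      # 00:00:00:01
print(frames_to_timecode(90000))  # 01:00:00:00
```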

Controlling the “Frame Rate”.


And now we have to consider what controls the frame rate that a camera or recorder records at. The frame rate the camera records at is governed by the camera’s internal sync or frame clock. This is normally a circuit controlled by a crystal oscillator. It’s worth noting that these circuits can be affected by heat, so at different temperatures there may be very slight variations in the frequency of the sync clock. This clock also starts when you turn the camera on, so the exact starting moment of the sync clock depends on the exact moment the camera is switched on. If you were to randomly turn on a bunch of cameras their sync clocks would all be running out of sync. Even if you could press the record button on each camera at exactly the same moment, each would start recording its first frame at a very slightly different moment in time, depending on where in the frame rate cycle the sync clock of each camera is. In higher end cameras there is often a way to externally control the sync clock via an input called “Genlock”. Applying a synchronisation signal to the camera’s Genlock input will pull the camera’s sync clock into precise sync with the sync signal and then hold it in sync.

And the issue is…

Timecode doesn’t perform a sync function. To SYNCHRONISE two cameras, or a camera and audio recorder, you need a genlock sync signal, and timecode isn’t a sync signal, it is just a frame count number. So timecode cannot synchronise 2 devices. The camera’s sync/frame clock might be running at a very slightly different frame rate to the clock of the source of the timecode. When feeding timecode to a camera, the camera might already be part way through a frame when the timecode value for that frame arrives, making it too late to be added, so there will be an unavoidable offset. Across multiple cameras this offset will vary, so it is completely normal to have a +/- 2 frame (sometimes more) offset amongst several cameras at the start of each recording.

And once you start to record the problems can get even worse…

If the camera’s frame clock is running slightly faster than the clock of the TC source then the camera might record 500 frames but only receive 498 timecode values – so what happens for the 2 extra frames the camera records in this time? The answer is that the camera will give each frame in the sequence a unique numerical value that increments by 1, so the extra frames will get the necessary 2 additional TC values. As a result, the TC in the camera at the end of the clip will be a further 2 frames different to that of the TC source. The TC from the source and the TC from the camera won’t exactly match, they won’t be in sync or “two or more things at the same time or rate”, they will be different.

The longer the clip that you record, the greater these errors become as the camera and TC source drift further apart.
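To put a rough number on that drift, here is a back-of-envelope sketch. The 100 ppm clock error is a made-up illustrative figure, not a spec for any real camera, but it shows how two free-running clocks produce different frame counts over the same hour:

```python
# Two free-running 25 fps clocks, one with a hypothetical +100 ppm error.
RECORD_SECONDS = 3600                  # a one hour recording
NOMINAL_FPS = 25.0
CAMERA_PPM_ERROR = 100                 # illustrative crystal error for the camera clock

camera_fps = NOMINAL_FPS * (1 + CAMERA_PPM_ERROR / 1_000_000)

frames_recorded = round(RECORD_SECONDS * camera_fps)        # counted by the camera
tc_values_received = round(RECORD_SECONDS * NOMINAL_FPS)    # counted by the TC source

print(frames_recorded, tc_values_received, frames_recorded - tc_values_received)
# 90009 90000 9  ->  by the end of the hour the camera's TC is 9 frames away from the source's
```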

Before you press record on the camera, the camera’s TC clock will follow the external TC input. But as soon as you press record, every recorded frame MUST have a unique new numerical value 1 greater than the previous frame, regardless of what value is on the external TC input. So the camera’s TC clock will count the frames recorded. And the number of frames recorded is governed by the camera’s sync/frame clock, NOT the external TC.

So in reality the ONLY way to truly synchronise the timecode across multiple cameras or audio devices is to use a sync clock connected to the GENLOCK input of each device.

Connecting an external TC source to a camera’s TC input is likely to result in much closer TC values for both the audio recorder and camera(s) than no connection at all. But don’t be surprised if you see small 1 or 2 frame errors at the start of clips, due to the exact timing of when the TC number arrives at the camera relative to when the camera starts to record its first frame, and then possibly much larger errors at the ends of clips. These errors are expected and normal. If you can’t genlock everything with a proper sync signal, a better approach is to use the camera as the TC source and feed the TC from the camera to the audio recorder. Audio recorders don’t record in frames, they just lay the TC values alongside the audio. As an audio recorder doesn’t need to count frames, the TC values will always be in the right place in the audio file to match the camera’s TC frame count.

Premiere Pro 2022 and Issues With S-Log3 – It’s Not A Bug, It’s A Feature!

This keeps cropping up more and more as users of Adobe Premiere Pro upgrade to the 2022 version.

What people are finding is that when they place S-Log3 (or almost any other log format such as Panasonic V-Log or Canon C-Log) into a project, instead of looking flat and washed out as it would have done in previous versions of Premiere, the log footage looks more like Rec-709, with normal looking contrast and normal looking color. Then when they apply their favorite LUT to the S-Log3 it looks completely wrong, or at least very different to the way it looked in previous versions.

So, what’s going on?

This isn’t a bug, this is a deliberate change. Rec-709 is no longer the only colourspace that people need to work in, and more and more new computers and monitors support other colourspaces such as P3 or Rec2020. The MacBook Pro I am writing this on has a wonderful HDR screen that supports Rec2020 or DCI P3 and it looks fantastic when working with HDR content!
 
Color Management and Colorspace Transforms.
 
Premiere 2022 isn’t adding a LUT to the log footage, it is doing a colorspace transform so that the footage you shot in one colorspace (S-Log3/SGamut3.cine/V-Log/C-Log/Log-C etc) gets displayed correctly in the colorspace you are working in.

S-Log3 is NOT flat.
 
A common misconception is that S-Log3 is flat or washed out. This is not true. S-Log3 has normal contrast and normal colour.
 
The only reason it appears flat is that more often than not people view it in a mismatched color space. The mismatch you get when you display material shot in the S-Log3/SGamut3 colorspace using the Rec-709 colorspace causes it to be displayed incorrectly, and the result is images that appear flat and lacking in contrast and colour. In fact your S-Log3 footage isn’t flat, it has lots of contrast and lots of colour. You are just viewing it in the incorrect colorspace.
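You can see why it looks flat with a little arithmetic. The sketch below (Python, tone curve only, using Sony’s published S-Log3 formula and the standard Rec-709 encoding curve – it ignores the SGamut3 to Rec-709 gamut conversion and is not Adobe’s actual code) decodes an S-Log3 code value back to scene-linear light and re-encodes it for a Rec-709 display:

```python
# Tone-curve-only sketch of a colorspace transform: decode S-Log3 to linear,
# then re-encode for the display. Gamut conversion omitted for brevity.

def slog3_to_linear(v):
    """v = normalised (0-1) S-Log3 code value, returns scene-linear reflectance."""
    cv = v * 1023.0
    if cv >= 171.2102946929:
        return (10.0 ** ((cv - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    return (cv - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def rec709_oetf(lin):
    """Standard Rec-709 encoding curve."""
    return 4.5 * lin if lin < 0.018 else 1.099 * (lin ** 0.45) - 0.099

# A 90% white card encodes to roughly 58% of the full S-Log3 code value range,
# so viewed untransformed on a 709 monitor it looks like a dull grey...
white_card_slog3 = 0.584
lin = slog3_to_linear(white_card_slog3)           # ~0.9 scene-linear reflectance
print(round(lin, 2), round(rec709_oetf(lin), 2))  # ~0.9 -> ~0.95 after the transform
# ...but after the transform it displays up near 95%, which is why the
# colour managed image has normal looking contrast.
```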

So, what is Premiere 2022 doing to my log footage?
 
What is now happening is that Premiere 2022 reads the clip’s metadata to determine its native colorspace and then adds a colorspace transform to convert it to the display colourspace determined by your project settings.
 
The footage is still S-Log3, but now you are seeing it as it is actually supposed to look, albeit within the limitations of the display gamma. S-Log3 isn’t flat, it’s just that previously you were viewing it incorrectly. Now, with the correct colorspace transform being applied to match the project settings and the type of monitor you are using, the S-Log3 is being displayed correctly, having been transformed from S-Log3/SGamut3 to Rec-709 or whatever your project is set to.
 
If your project is an HDR project, perhaps HDR10 to be compatible with most HDR TVs or for a Netflix commission, then the S-Log3 would be transformed to HDR10 and would be seen as HDR on an HDR screen without any grading being necessary. If you then changed your project settings to DCI-P3 then everything in your project would be transformed to P3 and would look correct without grading on a P3 screen. Change to Rec-709 and again it all looks correct without grading – the S-Log3 doesn’t look flat, because in fact it isn’t.

Color Managed Workflows will be the new “normal”.
 
Colour managed workflows such as this are now normal in most high end edit and grading applications and it is something we need to get used to because Rec709 is no longer the only colorspace that people need to deliver in. It won’t be long before delivery in HDR (which may mean one of several different gamma and gamut combinations) becomes normal. This isn’t a bug, this is Premiere catching up and getting ready for a future that won’t be stuck in SDR Rec-709. 

A color managed workflow means that you no longer need to use LUTs to convert your log footage to Rec-709, you simply grade your clips within the colorspace you will be delivering in. A big benefit of this comes when working with multiple sources. For example, S-Log3 and Rec-709 material in the same project will now look very similar. If you mix log footage from different cameras they will all look quite similar and you won’t need separate LUTs for each type of footage or for each final output colorspace.

The workaround if you don’t want to change.
 
If you don’t want to adapt to this new, more flexible way of working then you can force Premiere to ignore the clip’s metadata by right-clicking on your clips, going to “Modify” and “Interpret Footage”, then selecting “Colorspace Override” and setting this to Rec-709. When you use the Interpret Footage function on an S-Log3 clip to set the colorspace to Rec-709, what you are doing is forcing Premiere to ignore the clip’s metadata and treat the S-Log3 as though it were a standard dynamic range Rec-709 clip. In a Rec-709 project this re-introduces the gamut mismatch that most are used to and results in the S-Log3 appearing flat and washed out. You can then apply your favourite LUTs to the S-Log3, the LUT transforms the S-Log3 to the project’s Rec-709 colorspace, and you are back to where you were previously.
 
This is fine, but you do need to consider that at some point you will likely need to learn how to work across multiple colorspaces, and using LUTs as colorspace transforms is very inefficient as you will need separate LUTs and separate grades for every colorspace and every different type of source material that you wish to work with. Colour managed workflows such as this new one in Premiere, or ACES etc, are the way forward as LUTs are no longer needed for colorspace transforms, the edit and grading software looks after this for you. Arri Log-C will look like S-Log3, which will look like V-Log, and the same grade can be applied no matter what camera or colorspace was used. It will greatly simplify workflows once you understand what is happening under the hood, and it allows you to output both SDR and HDR versions without having to completely re-grade everything.

Unfortunately I don’t think the way Adobe are implementing their version of a colour managed workflow is very clear. There are too many automatic assumptions about what you want to do and how you want to handle your footage. On top of this there are insufficient controls for the user to force everything into a known set of settings. Instead, different things are in different places and it’s not always obvious exactly what is going on under the hood. The color management tools are all small add-ons here and there, and there is no single place where you can go for an overview of the start-to-finish pipeline and settings, as there is in DaVinci Resolve for example. This makes it quite confusing at times and it’s easy to make mistakes or get an unexpected result. There is more information about what Premiere 2022 is doing here: https://community.adobe.com/t5/premiere-pro-discussions/faq-premiere-pro-2022-color-management-for-log-raw-media/

DaVinci Resolve Frame Rendering Issue and XAVC

There is a bug in some versions of DaVinci Resolve 17 that can cause frames in some XAVC files to be rendered in the wrong order. This results in renders where the video appears to stutter or the motion may jump backwards for a frame or two. This has now been fixed in version 17.3.2, so all users of XAVC and DaVinci Resolve are urged to upgrade to at least version 17.3.2.

https://www.blackmagicdesign.com/uk/support/family/davinci-resolve-and-fusion

XAVC-I v ProResHQ, multi-generation test.

I often hear people saying that XAVC-I isn’t good enough or that you MUST use ProRes or some other codec. My own experience is that XAVC-I is actually a really good codec and recording to ProRes only ever makes the very tiniest (if any) difference to the finished production.

I’ve been using XAVC-I for over 8 years and it has really worked very well for me. I’ve also tested and compared it against ProRes many times and I know the differences are very small, so I am always confident that when using XAVC-I I will get a great result. But I decided to make this video to show just how close they are.

It was shot with a Sony FX6 using internal XAVC-I (Class 300) on an SD card alongside an external ProRes HQ recording on a Shogun 7. I deliberately chose to use Cine EI and S-Log3 at the camera’s high base ISO of 12,800, as noise will stress any codec that little bit harder, and adding a LUT adds another layer of complexity that might show up any issues – all just to make the test that little bit tougher. The slightly higher noise level of the high base ISO also allows you to see more easily how each codec handles noise.

A sample clip of each codec was placed in the timeline (DaVinci Resolve) and a caption added. This was then rendered out, the ProRes HQ rendered to ProRes HQ and the XAVC-I rendered to XAVC-I. So for most of the examples seen, the XAVC-I files have been copied and re-encoded 5 times, plus the encoding of the file uploaded to YouTube, plus YouTube’s own encoding – a pretty tough test.

Because I don’t believe many people will use XAVC-I as an intermediate codec in post production, I also repeated the tests with the XAVC-I rendered to ProRes HQ 5 times over, as this is probably more representative of a typical real world workflow. These examples are shown at the end of the video. Of course the YouTube compression will restrict your ability to see some of the differences between the two codecs. But this is how many people will be distributing their content – even if not via YouTube, then via other highly compressed means – so it’s not an unfair test and it reflects many real world applications.

Where the s709 LUT has been added, it was added AFTER each further copy of the clip, so this is really a “worst case scenario”. Overall the ProRes HQ and XAVC-I are remarkably similar in performance. In the 300% blow up you can see differences between the XAVC-I that is 6 generations old and the 6th generation ProRes HQ if you look very carefully at the noise. But the differences are very, very hard to spot, and going 6 generations of XAVC-I is not realistic as it was designed as a camera codec. In the same test, where the XAVC was rendered to ProRes HQ for each post production generation, any difference is incredibly hard to find even when magnified 300%. I am not claiming that XAVC-I Class 300 is as good as ProRes HQ. But I think it is worth considering what you need when shooting. Do you really want to have to use an external recorder? Do you really want to have to deal with files that are 3 to 4 times larger? Do you want to have to remember to switch recording methods between slow motion and normal speeds? For most productions I very much doubt that the end viewer would ever be able to tell the difference between material shot using XAVC-I Class 300 and ProRes HQ. And that audience certainly isn’t going to feel they are watching a substandard image, and that’s what counts.

There is so much emphasis placed on using “better” codecs that I think some people are starting to believe that XAVC-I is unusable or going to limit what they can do. This isn’t the case. It is a pretty good codec and frankly if you can’t get a great looking image when using XAVC then a better codec is unlikely to change that.

Catalyst Browse and Catalyst Prepare Updated.

Timed to coincide with the release of the ILME-FX6 camcorder, Sony have updated both Catalyst Browse and Catalyst Prepare. These new and long awaited versions add support for the FX6’s rotation metadata and clip flag metadata as well as numerous bug fixes. It should be noted that a GPU that supports OpenGL is required for correct operation. Also, while the new versions support MacOS Catalina, there is no official support for Big Sur. Catalyst Browse is free while Catalyst Prepare is not. Prepare can perform more complex batch processing of files, checksum and file verification, and per-clip adjustments, as well as offering other additional features.

For more information go to the Sony Creative Software website.

Raw Isn’t Magic. With the right tools Log does it too.

Raw can be a brilliant tool, I use it a lot. High quality raw is my preferred way of shooting. But it isn’t magic, it’s just a different type of recording codec.
 
All too often – and I’m as guilty as anyone – people talk about raw as “raw sensor data”, a term that implies that raw really is something very different to a normal recording. In reality it’s not that different. When shooting raw, all that happens is that the video frames from the sensor are recorded before they are converted to a colour image. A raw frame is still a picture, it’s just that it’s a bitmap image made up of brightness values, with each pixel represented by a single brightness code value, rather than a colour image where each location in the image is represented by 3 values, one for each of Red, Green and Blue or Luma, Cb and Cr.

As that raw frame is still nothing more than a normal bitmap, all the camera’s settings such as white balance, ISO etc are in fact baked in to the recording. Each pixel only has one single value and that value will have been determined by the way the camera is set up. Nothing you do in post production can change what was actually recorded. Most CMOS sensors are daylight balanced, so unless the camera adjusts the white balance prior to recording – which is what Sony normally do – your raw recording will be daylight balanced.
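If it helps to picture it, here is a tiny illustrative sketch (assuming a hypothetical Bayer-type sensor; real sensors and Sony’s actual raw formats differ in the details) of the difference in what gets stored:

```python
import numpy as np

# A "raw" frame: a single plane of brightness code values, one number per
# photosite (12-bit-style values shown). A demosaiced colour image carries
# three values per location instead.
H, W = 4, 6
raw = np.random.randint(0, 4096, size=(H, W), dtype=np.uint16)
rgb = np.zeros((H, W, 3), dtype=np.uint16)

print(raw.shape, raw.size)   # (4, 6)     24 stored values
print(rgb.shape, rgb.size)   # (4, 6, 3)  72 stored values, roughly 3x the data points

# Whatever white balance and exposure the camera used is already baked into
# the single value at each photosite; post production can only reinterpret it.
```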

Modern cameras when shooting log or raw also record metadata that describes how the camera was set when the image was captured. 

So the recorded raw file already has a particular white balance and ISO. I know lots of people will be disappointed to hear this or simply refuse to believe it, but that’s the truth about a raw bitmap image: a single code value for each pixel, and that value is determined by the camera settings.

This can be adjusted later in post production, but the adjustment range is not unlimited and it is not the same as making an adjustment in the camera. Plus there can be consequences to the image quality if you make large adjustments. 

Log can also be adjusted extensively in post. For decades, feature films shot on film were scanned using 10 bit Cineon log (the log gamma curve that S-Log3 is based on), and 10 bit log was used for post production until 12 bit and then 16 bit linear intermediate formats such as OpenEXR came along. So this should tell you that log can actually be graded very well and very extensively.

But then many people will tell you that you can’t grade log as well as raw. Often they will point to photography as an example, where there is a huge difference between what you can do with a raw photo and a normal image. But we have to remember this is typically comparing a highly compressed 8 bit JPEG file with an often uncompressed 12 or 14 bit raw file. It’s not a fair comparison, of course you would expect the 14 bit file to be better.

The other argument often given is that it’s very hard to change the white balance of log in post, it doesn’t look right or it falls apart. Often these issues are nothing to do with the log recording but more to do with the tools being used.

When you work with raw in your editing or grading software you will almost always be using a dedicated raw tool or raw plugin designed for the flavour of raw you are using. As a result, everything you do to the file is optimised for the exact flavour of raw you are dealing with. It shouldn’t come as a surprise that to get the best from log you should be using tools specifically designed for the type of log you are using. In the example below you can see how Sony’s Catalyst Browse can correctly change the white balance and exposure of S-Log material with simple sliders, just as effectively as most raw formats.
 
On the left is the original S-Log3 clip with the wrong white balance (3200K) and on the right is the corrected image. The only corrections made are via the Temperature slider and the Exposure slider.
 
Applying the normal linear or power law (709 is power law) corrections found in most edit software to log won’t have the desired effect, and basic edit software rarely has proper log controls. You need to use a proper grading package like Resolve and its dedicated log controls. Better still, use some form of colour managed workflow like ACES, where your specific type of log is precisely converted on the fly to a special digital intermediate and the corrections are made to the intermediate. There is no transcoding, you just tell ACES what the footage was shot on and magic happens under the hood. Once you have done that you can change the white balance or ISO of log material in exactly the same way as raw. There is very, very little difference.
 
The same S-Log3 clip as in the above example, this time in DaVinci Resolve using ACES. The only corrections being made are via the Temp slider for the white balance change and the Log-Offset wheel which in ACES provides a precise exposure adjustment.
 
When people say you can’t push log, more often than not it isn’t a matter of can’t, it’s a case of can – but you need to use the right tools.
 
This is what log shot with completely the wrong white balance and slightly overexposed looks like after using nothing but the WB and ISO sliders in Catalyst Browse. I don’t believe raw would have looked any different.
 
Less compression or a greater bit depth are where the biggest differences between a log and a raw recording come from, not so much whether the data is log or raw. Don’t forget raw is often recorded using log, which kind of makes the “you can’t grade log” argument a bit daft.
 
Camera manufacturers and raw recorder manufacturers are perfectly happy to allow everyone to believe raw is magic and, worse still, let people believe that ANY type of raw must be better than all other types of recordings. Read through any camera forum and you will see plenty of examples of “it’s raw so it must be better” or “I need raw because log isn’t as good” without any comprehension of what raw is, and how in reality it’s the way the raw is compressed and the bit depth that really matters.

If we take ProRes Raw as an example: For a 4K 24/25fps file the bit rate is around 900Mb/s. For a conventional ProRes HQ file the bit rate is around 800Mb/s. So the file size difference between the two is not at all big.
 
But the ProRes Raw file only has to store around 1/3 as many data points as the component ProRes HQ file. As a result, even though the ProRes Raw file often has a higher bit depth, which in itself usually means a better quality recording, it is also much, much less compressed and as a result will have fewer artefacts.
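As a rough sanity check on that, here is some back-of-envelope arithmetic using the approximate bit rates quoted above (illustrative figures only, not official codec specifications):

```python
# Bits available per stored sample, 4K 25p, using the rough rates quoted above.
FPS = 25
WIDTH, HEIGHT = 3840, 2160
RAW_MBPS, PRORES_HQ_MBPS = 900, 800

raw_samples = WIDTH * HEIGHT           # one value per photosite
rgb_samples = WIDTH * HEIGHT * 3       # ~3x as many values for a full colour image
# (ProRes HQ is actually 4:2:2, so ~2x the luma sample count; either way the
# raw stream spreads its bits over far fewer values.)

raw_bits_per_sample = RAW_MBPS * 1e6 / (raw_samples * FPS)
hq_bits_per_sample = PRORES_HQ_MBPS * 1e6 / (rgb_samples * FPS)

print(round(raw_bits_per_sample, 2), round(hq_bits_per_sample, 2))
# ~4.34 vs ~1.29 bits per sample: similar file sizes, far gentler compression on the raw.
```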

It’s the reduced compression and deeper bit depth possible with raw that can lead to higher quality recordings, and as a result may bring some grading advantages compared to a normal ProRes or other compressed file. The best bit is there is no significant file size penalty. So you have the same amount of data, but your data should be of higher quality. Given that you won’t need more storage, which should you use? The higher bit depth, less compressed file or the more compressed file?

But, not all raw files are the same. Some cameras feature highly compressed 10 bit raw, which frankly won’t be any better than most other 10 bit recordings as you are having to do all the complex math to create a colour image starting with just 10 bit. Most cameras do this internally at at least 12 bit. I believe raw needs to be at least 12 bit to be worth having.

If you could record uncompressed 12 bit RGB or 12 bit component log from these cameras that would likely be just as good and just as flexible as any raw recording. But the files would be huge. It’s not that raw is magic, it’s just that raw is generally much less compressed and, depending on the camera, may also have a greater bit depth. That’s where the benefits come from.

Changing the FX6’s base look in Custom Mode using LUTs

This is extremely cool! You can change the FX6’s base look in custom mode using LUTs. This is not the same as baking in a LUT in Cine EI as in custom mode you can change the gain or ISO just as you would with any other gamma. But there’s more than that – you can even adjust the look of the LUT by changing the detail settings, black level, matrix and multi-matrix. Watch the video to see how it’s done.


The LUTs used in the video can be downloaded from here: https://www.xdcam-user.com/2014/11/new-film-look-luts-for-the-pxw-fs7-pmw-f5-and-pmw-f55/

Or from here: https://pro.sony/en_SC/technology/alister-chapman-blockbuster-lut-v2

Don’t Upgrade FCP-X or OSX!

UPDATE 29th Sept 2020.
The issues have now been resolved so it is safe to update.


27th Aug 2020
If you are a Mac user, and especially if you use it to edit footage from a Sony camera, I recommend that you do not upgrade the operating system to OSX 10.15.6, Pro Video Codecs to 2.1.2 or FCP-X to version 10.4.9 at this time.

At the moment there is clearly an issue with footage from the FX9 after these updates. It is not clear whether this is due to the new Pro Video Codecs package 2.1.2 that comes as part of the update to OSX 10.15.6 or whether it is just related to the FCP-X 10.4.9 update. Some users are reporting that some FX9 MXF files cannot be previewed in Finder after updating, as well as not being visible in FCP-X.

So far I have only seen reports that footage from the FX9 is affected, but it wouldn’t surprise me if Venice material is also affected.

I would suggest waiting for a few weeks after the release of any update before updating, and never do an update halfway through an important project.

UPDATE: Sony know about the issue and are working with Apple to resolve it. It only seems to affect some FX9 footage and possibly some Venice footage. It appears that the culprit is the Pro Video Codecs update, but this is yet to be confirmed. I would still suggest waiting before upgrading, even if you are using a different camera.

How To Live Stream With The Sony PXW-Z90 and NX80.

The Sony PXW-Z90 is a real gem of a camcorder. It’s very small yet packs a 1″ sensor , has real built in ND filters, broadcast codecs and produces a great image. On top of all that it can also stream live directly to Facebook and other similar platforms. In this video I show you how to set up the Z90 to stream live to YouTube. Facebook is similar. The NX80 from Sony is very similar and can also live stream in the same way.