Category Archives: Workflow

From a good looking image to a stylised look. It’s not easy!

I’ve been running a lot of workshops recently looking at creating LUTs and scene files for the FS7, F5 and F55. One interesting observation is that when creating a stylised look, the way the footage looks before you grade almost always has a big impact on how far you are prepared to push your grade to create that stylised look.

What do I mean by this? Well, if you start off in your grading suite looking at nicely exposed footage with accurate color and a realistic representation of the original scene, then as you push and pull the colors the pictures start to look a little “wrong”, and this can restrict how far you are prepared to push things; it goes against human nature to make things look wrong.

If, on the other hand, you were to bring all your footage into the grading suite with a highly stylised look straight from the camera, you would probably be more inclined to stylise it further. Because it’s already unlike the real world, and you have never seen the material accurately represent the real world, you don’t notice that it doesn’t look “right”.

An interesting test to try is to bring some footage into the grade, apply a very stylised look via a LUT and then grade the footage. Try to avoid viewing the footage with a vanilla, true-to-life LUT if you can.

Then bring in the same or similar footage with a vanilla, true-to-life LUT and see how far you are prepared to push the material before you start getting concerned that it no longer looks right. You will probably find that you push the stylised footage further than the normal looking material.

As another example, take almost any recent blockbuster movie and start to analyse the look of the images: you will find that most use a very narrow palette of orange skin tones along with blue/green and teal. Imagine what you would think if your TV news was graded this way; I’m sure most people would think the camera was broken. If a movie were to intercut the stylised “look” images with nicely exposed, naturally colored images, I think the stylised images would be the ones most people would find objectionable, as they just wouldn’t look right. But when you watch a movie and everything has the same coherent stylised look, it works and it can look really great.

In my workshops, when I introduce some of my film style LUTs for the first time (after looking at normal images), sometimes people really don’t like them because they look wrong. “The colors are off”, “it’s all a bit blue”, “it’s too contrasty” are all common comments. But if you show someone a video that uses the same stylised look throughout, most people like the look. So when assessing a look or style, try to view it in the right context and without seeing a “normal” picture first. I find it helps to go and make a coffee between viewing the normal footage and viewing the same material with a stylised look.

Another thing that happens is the longer you view a stylised look the more “normal” it becomes as your brain adapts to the new look.

In fact, while typing this I have the TV on. In the commercial break that’s just been on, most of the ads used a natural color palette. Then one ad came on that used a film style palette (orange/teal). The film style palette looked really, really odd in the middle of the normal looking ads. But on its own that ad does have a very film like quality to it. It’s just that when surrounded by normal looking footage it really stands out and as a result looks wrong.

I have some more LUTs to share in the coming days, so check back soon for some film like LUTs for the FS7/F5/F55 and A7s.


Tales of exposure from the grading suite.

I had the pleasure of listening to Pablo Garcia Soriano, the resident DIT/colorist at the Sony Digital Motion Picture Center at Pinewood Studios, talk about grading modern digital cinema cameras during last week’s WTS event.

The thrust of his talk was about exposure, and how getting the exposure right during the shoot makes a huge difference to how much you can grade the footage in post. His main observation was that many people are underexposing their cameras, and this leads to excessive noise which makes the pictures hard to grade.

There is no real way to reduce the noise in a video camera, because nothing you normally do can change the sensitivity of the sensor or the amount of noise it produces. Sure, noise reduction can mask noise, but it doesn’t really get rid of it and it often introduces other artefacts. So the only way to change the all important signal to noise ratio, if you can’t change the noise, is to change the signal.

In a video camera that means opening the aperture and letting in more light. More light means a bigger video signal and as the noise remains more or less constant that means a better signal to noise ratio.
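As a rough illustration of the arithmetic (a minimal sketch, assuming the noise floor really does stay constant, which is broadly true at a fixed gain setting), each stop of extra light doubles the signal, and doubling the signal buys roughly 6dB of signal to noise ratio. In Python:

import math

def snr_gain_db(stops: float) -> float:
    # Each stop doubles the signal; with constant noise that is
    # 20 * log10(2), about 6.02 dB, of SNR improvement per stop.
    return 20 * math.log10(2 ** stops)

for stops in (0.5, 1, 2):
    print(f"+{stops} stop(s) of exposure = ~{snr_gain_db(stops):.1f} dB better SNR")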

If you are shooting log or raw then you have a fair amount of leeway with your exposure. You can’t go completely crazy with log, but you can often over expose by a stop or two with no major issues. I really don’t like using the term “over expose” in these situations, but that’s effectively what you might want to do: let in up to 2 stops more light than you normally would.

In photography, photographers shooting raw have been using a technique called exposing to the right (ETTR) for a long time. The term comes from the use of a histogram to gauge exposure, then exposing so that the signal goes as far to the right on the histogram as possible (the right being the “bright” side of the scale). If you really wanted the best possible signal to noise ratio you could use this method for video too. But ETTR means setting your exposure based on your brightest highlights, and as highlights differ from shot to shot, the mid range of your shot will go up and down in exposure depending on how bright the highlights are. This is a nightmare for the colorist, because the mid-tones are the most important part of the image; they are what the viewer notices more than anything else. If these are all over the place the colorist has to work very hard to normalise the levels, and it can lead to a lot of variability in the footage. So while ETTR might be the best way to get the very best signal to noise ratio (SNR), you still need to be consistent from shot to shot, so really you need to expose for mid range consistency but shift that mid range a little brighter to get a better SNR.
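If you’re curious how much ETTR headroom a frame actually has, you can look at where the brightest meaningful pixels sit relative to clipping. A minimal sketch, assuming linear (raw) data loaded as a NumPy array; the 99.9th percentile cut-off is just an illustrative choice to stop a few hot pixels dominating:

import numpy as np

def ettr_headroom_stops(frame: np.ndarray, clip_value: float = 255.0) -> float:
    # Only meaningful on linear sensor data; gamma or log encoded
    # footage does not map stops to code values this simply.
    near_peak = np.percentile(frame, 99.9)  # ignore isolated hot pixels
    return float(np.log2(clip_value / max(near_peak, 1.0)))

# A synthetic linear frame peaking well below clipping:
frame = np.clip(np.random.normal(40, 10, (1080, 1920)), 0, 255)
print(f"~{ettr_headroom_stops(frame):.1f} stops before the highlights clip")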

Pablo told his audience that just about any modern digital cinema camera will happily tolerate at least 3/4 of a stop of over exposure and he would always prefer footage with very slightly clipped highlights rather than deep shadows lost in the noise. He showed a lovely example of a dark red car that was “correctly” exposed. The deep red body panels of the car were full of noise and this made grading the shot really tough even though it had been exposed by the book.

When I shoot with my F5 or FS7 I always rate them a stop slower than the native ISO of 2000. So I set my EI to 1000 or even 800 and this gives me great results. With the F55 I rate it at 800 or even 640 EI, and the F65 at 400 EI.

If you ever get offered a chance to see one of Pablo’s demos at the DMPCE, go and have a listen. He’s very good.

What is “Exposure”?

What do we really mean when we talk about exposure?

If you come from a film background you will know that exposure is the measure of how much light is allowed to fall on the film. This is controlled by two things: the shutter speed and the aperture of the lens. How you set these is determined by how sensitive the film stock is to light.

But what about in the video world? Well, exposure means exactly the same thing: it’s how much light we allow our video sensor to capture, controlled by shutter speed and aperture. The amount of light we need to let fall on the sensor depends on the sensitivity of the sensor, much like film. But with video there is another variable, and that is the gamma curve… or is it?

This is an area where a lot of video camera operators have trouble, especially when you start dealing with more exotic gamma curves such as log. The reason for the problem is down to the fact that most video camera operators are taught or have learnt to expose their footage at specific video levels. For example if you’re shooting for TV it’s quite normal to shoot so that white is around 90%, skin tones are around 70% and middle grey is in the middle, somewhere around the 45% mark. And that’s been the way it’s been done for decades. It’s certainly how I was taught to expose a video camera.

If you have a video camera with different gamma curves, try a simple test. Set the camera to its standard TV gamma (Rec-709 or similar). Expose the shot so that it looks right, then change the gamma curve without changing the aperture or shutter speed. What happens? The pictures get brighter or darker; there are brightness differences between the different gamma curves. This isn’t an exposure change. After all, you haven’t changed the amount of light falling on the sensor. It is a change in the gamma curve and the values at which it records different brightnesses.

An example of this would be setting a camera to Rec-709 and exposing white at 90% then switching to S-log3 (keeping the same ISO for both) and white would drop down to 61%. The exposure hasn’t changed, just the recording levels.

It’s really important to understand that different gammas are supposed to have different recording levels. Rec-709 has a 6 stop dynamic range (without adding a knee), so between 0% and around 100% we fit 6 stops, with white falling at 85-90%. So if we want to record 14 stops, where do we fit the extra 8 stops that S-Log3 offers when we are already using 0 to 100% for 6 stops with 709? The answer is we shift the range. By putting the 6 stops that 709 can record between around 15% and 68%, with white falling at 61%, we make room above and below the original 709 range to fit in another 8 stops.
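If you want to see where numbers like 61% come from, Sony publishes the S-Log3 transfer function in its technical summary, and it’s easy to evaluate. A minimal sketch in Python; the IRE conversion assumes legal range 10 bit video, where 0% is code value 64 and 100% is code value 940:

import math

def slog3_code_value(reflectance: float) -> float:
    # Sony's published S-Log3 curve: linear scene reflectance -> 10 bit code value.
    if reflectance >= 0.01125:
        return 420.0 + math.log10((reflectance + 0.01) / 0.19) * 261.5
    return reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0

def cv_to_ire(cv: float) -> float:
    # 10 bit code value -> IRE %, legal range (CV64 = 0%, CV940 = 100%).
    return (cv - 64.0) / (940.0 - 64.0) * 100.0

for label, ref in (("18% grey", 0.18), ("90% white", 0.90)):
    cv = slog3_code_value(ref)
    print(f"{label}: CV{cv:.0f} = {cv_to_ire(cv):.0f}%")

That prints roughly 41% for middle grey and 61% for 90% white, against the 45% and 90% we are used to with Rec-709.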

So a difference in image brightness when changing gamma curves does not represent a change in exposure; it represents a change in recording range. The only way to really change the exposure is to change the aperture and shutter speed. It’s really, really important to understand this.

Furthermore, your exposure will only ever look visibly correct when the gamma curve of the display device matches the capture gamma curve. So if you are shooting log and viewing on a normal TV or viewfinder, which typically has 709 gamma, the picture will not look right. Not only are the levels different to those we have become used to with traditional video, but the picture looks wrong too.

As more and more exotic (or at least non-standard) gamma curves become commonplace, it’s very important that we learn to think about what exposure really is. It isn’t how bright the image is (although this is related to exposure); it is about letting the appropriate amount of light fall on the sensor. How do we determine the correct amount of light? We need to measure it using a waveform scope, zebras etc., but we must also know the correct reference levels of a white or middle grey target for the gamma we are using.

You might also like to read this article on understanding log and exposure levels.

Notes on Timecode sync with two cameras.

When you want two cameras to have matching timecode you need to synchronise not just the timecode but also the frame rates of both cameras. Remember, timecode is a counter that counts the frames the camera is recording. If one camera is recording more frames than the other, then even with a timecode cable between the two cameras the timecode will drift during long takes. So for perfect timecode sync you must also ensure the frame rates of both cameras are identical by using genlock to synchronise them.

Genlock is only going to make a difference if it is kept connected at all times. As soon as you disconnect the genlock the cameras will start to drift. If using genlock, first connect the Ref output to the Genlock in. Then, while this is still connected, connect the TC out to the TC in. Both cameras should be set to Free Run timecode, with the TC on the master camera set to the time of day or whatever time you wish both cameras to have. If you are not going to keep the genlock cable connected for the duration of the shoot, then don’t bother with it at all, as it will make no difference just connecting it for a few minutes while you sync the TC.

In the case of a Sony camera when the TC out is connected to the TC in of the slave camera, the slave camera will normally display EXT-LK when the timecode signals are locked.

Genlock: Synchronises the precise timing of the frame rates of the cameras. Taking a reference out from one camera and feeding it to the Genlock input of another will cause both cameras to run precisely in sync while they remain connected together. While connected by genlock, the frame counts of both cameras (and the timecode counts) will remain in sync. As soon as you remove the genlock cable the cameras will start to drift apart. The amount of sync (and timecode) drift will depend on many factors, but with a Sony camera it will most likely be on the order of at least a few seconds a day, sometimes as much as a few minutes.

Timecode: Connecting the TC out of one camera to the TC in of another will cause the timecode in the receiving camera to sync to the nearest possible frame number of the sending camera, provided the receiving camera is set to free run and is in standby. When the TC is disconnected, both cameras’ timecode will continue to count according to the frame rate each camera is running at. If the cameras are genlocked, then as the frame sync and frame count are the same, so too will be the timecode counts. If the cameras are not genlocked then the timecode counts will drift by the same amount as the sync drift.

Timecode sync alone can be problematic on long takes. If the timecodes of two cameras are jam sync’d but there is no genlock, then on long takes timecode drift may become apparent. When you press the record button the timecodes of both cameras will normally be in sync, forced into sync by the timecode signal. But once the cameras are rolling, the timecode counts the actual frames recorded and ignores the timecode input. So if the cameras are not synchronised via genlock, one camera may be running fractionally faster than the other, and on long clips there may be timecode differences as one camera records more frames than the other in the same time period.
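To put rough numbers on this, clock accuracy is usually quoted in parts per million (ppm). A quick sketch; the 10ppm figure is purely an illustrative assumption, not the spec of any particular camera:

def tc_drift_seconds(ppm_difference: float, hours: float) -> float:
    # Drift between two free-running cameras whose clocks differ
    # by ppm_difference parts per million, over a given running time.
    return ppm_difference * 1e-6 * hours * 3600.0

# Two cameras 10 ppm apart drift ~0.86 seconds a day,
# i.e. roughly one frame every ~70 minutes at 25p.
print(f"{tc_drift_seconds(10, 24):.2f} seconds per day")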

Using S-log2 and S-Log3 from the A7S (or any Alpha camera) in post production.

If you have followed my guide to shooting S-Log2 on the A7s then you may now be wondering how to use the footage in post production.

This is not going to be a tutorial on editing or grading. Just an outline guide on how to work with S-log2, mainly with Adobe Premiere and DaVinci Resolve. These are the software packages that I use. Once upon a time I was an FCP user, but I have never been able to get on with FCP-X. So I switched to Premiere CC which now offers some of the widest and best codec support as well as an editing interface very similar to FCP. For grading I like DaVinci Resolve. It’s very powerful and simple to use, plus the Lite version is completely free. If you download Resolve it comes with a very good tutorial. Follow that tutorial and you’ll be editing and grading with Resolve in just a few hours.

The first thing to remember about S-Log2/S-Gamut material is that it uses a different gamma and colour space to almost every TV and monitor in use today. So to get pictures that look right on a TV we need to convert the S-Log2 to the standard used by normal HD TVs, which is known as Rec-709. The best way to do this is via a Look Up Table, or LUT.

Don’t be afraid of LUTs. They might be a new concept for you, but really LUTs are easy to use and, when used right, they bring many benefits. Many people like myself share LUTs online, so do a Google search and you will find many different looks and styles that you can download for your project.

So what is a LUT? It’s a simple table of values that converts one set of signal levels to another. You may come across different types of LUTs: 1D, 3D, Cube etc. At a basic level these all do the same thing; there are some differences, but at this stage we don’t need to worry about them. For grading and post production correction, in the vast majority of cases you will want to use a 3D Cube LUT, the most common type. The LUTs that you use must be designed for the gamma curve and colour space the material was shot in, and the gamma curve and colour space you want to end up in. So, in the case of a Sony camera, be that an A7s, A7r, A6300 or whatever, we want LUTs designed for either S-Log2 and S-Gamut or S-Log3 and SGamut3.cine. LUTs designed for anything else will still transform the footage, but the end results will be unpredictable as the table’s input values will not match the correct values for S-Log2/S-Log3.
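Conceptually that really is all a LUT is: a table, plus interpolation between the entries. Here is a toy sketch of a 1D LUT in Python (a real .cube file is essentially a text listing of values like these, and a 3D LUT indexes all three colour channels at once):

import numpy as np

# A toy 5-entry 1D LUT: evenly spaced input levels mapped to new output levels.
lut_inputs = np.linspace(0.0, 1.0, 5)
lut_outputs = np.array([0.00, 0.18, 0.45, 0.70, 1.00])

def apply_1d_lut(pixel_values: np.ndarray) -> np.ndarray:
    # Look each value up in the table, interpolating between entries.
    return np.interp(pixel_values, lut_inputs, lut_outputs)

print(apply_1d_lut(np.array([0.125, 0.5, 0.9])))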

One of the nice things about LUTs is that they are non-destructive. That is to say, when you add a LUT to a clip you are not actually changing the original clip, you are simply altering the way the clip is displayed. If you don’t like the way the clip looks you can just try a different LUT.

If you followed the A7s shooting guide you will remember that S-Log2 or S-Log3 takes a very large scene dynamic range (14 stops) and squeezes it down to fit in a standard video recording range. When this squeezed, compressed-together range is shown on a conventional Rec-709 TV with a relatively small dynamic range (6 stops), the end result is a flat looking, low contrast image where the overall levels are shifted down a bit. So as well as being low contrast and flat, the pictures may also look dark.

 

Figure: To make room for the extra dynamic range and the ability to record very bright objects, white and mid tones are shifted down in level.

Figure: The on screen contrast appears reduced as the capture contrast is greater than the display contrast.

To make the pictures on our conventional 709 TV or computer monitor have a normal contrast range, in post production we need to expand the squeezed, recorded S-Log2/S-Log3 range back out to the display range of Rec-709. To do this we apply an S-Log2 or S-Log3 to Rec-709 LUT to the footage during the post production process. The LUT shifts the S-Log input values to the correct Rec-709 output values. This can be done either with your edit software or dedicated grading software. But we may need to do more than just add the LUT.

Figure: Adding a LUT in post production expands the squeezed S-Log2 recording back to a normal contrast range.

There is a problem, because normal TVs have a limited display range, often smaller than the recorded image range. When we expand the squeezed S-Log2/S-Log3 footage back to a normal contrast range, the dynamic range in the recording exceeds the dynamic range the TV can display, so the highlights and brighter parts of the picture are lost. They are no longer seen, and as a result the footage may now look over exposed.

Figure: With the dynamic range now expanded by the LUT, the recording’s brightness range exceeds the range that the TV or monitor can show, so while the contrast is correct, the pictures may look over exposed.

But don’t panic! The brightness information is still there in your footage. It hasn’t been lost, it just can’t be displayed. So we need to tweak and fine tune the footage to bring the brighter parts of the image back into range. This is typically called “grading” or color correcting the material.

Normally you want to grade the clip before it passes through the LUT, as prior to the LUT the full range of the footage is always retained. The normal procedure is to add the LUT to the clip or footage as an output LUT, that is to say the LUT sits on the output from the grading system. Although it’s preferable to have the LUT after any corrections, don’t worry too much about where your LUT goes. Most edit and grading software will retain the full range of everything you recorded, even if you can’t always see it on the TV or monitor.

Figure: By grading or adjusting the footage before it enters the LUT we can bring the highlights back within the range that the TV or monitor can show.

If you choose to deliberately over expose the camera by a stop or two to get the best from the 8 bit recordings (see part one of the guide), then the LUT you use should also incorporate compensation for this over exposure. The LUT sets that I have provided for the Sony Alpha cameras include LUTs with compensation for +1 and +2 stops of over exposure.

IN PRACTICE.

So how do we do this in practice?

First of all you need some LUTs. If you haven’t already downloaded my LUTs, please download one or both of my LUT sets:

20 Cube LUTs for S-Log2 and A7 (also work with any S-Log2 camera).

Or

LUTs for S-Log3 (These work just fine with S-Log3 material from the Alpha cameras, although I recommend using S-Log2 with any 8 bit camera)

To start off with you can just edit your S-Log footage as you would normally. Don’t worry too much about adding a LUT at the edit stage. Once the edit is locked down you have two choices. You can either export your edit to a dedicated grading package or, if your edit package supports LUTs, you can add the LUTs directly in the edit application.

Applying LUTs in the edit application.

In FCP, Premiere CS5 and CS6 you can use the free LUT Buddy plug-in from Red Giant to apply LUTs to your clips.

In FCP-X you can use a plugin called LUT Utility from Colorgrading Central.

In Premiere CC you use the built-in Lumetri filter plugin found under the “filters”, “color correction filters” tab (not the Lumetri Looks).

In all the above cases you add the filter or plugin to the clip and then select the LUT that you wish to use. It really is very easy. Once you have applied the LUT you can then further fine tune and adjust the clip using the normal color correction tools. To apply the same LUT to multiple clips simply select a clip that already has the LUT applied and hit “copy” or “control C” and then select the other clips that you wish to apply the LUT to and then select “paste – attributes” to copy the filter settings to the other clips.

Exporting Your Project To Resolve (or another grading package).

This is my preferred method for grading, as you will normally find much better correction tools in a dedicated grading package. What you don’t want to do is render out your edit project and then take that render into the grading package. What you really want to do is export an edit list or XML file that contains the details of your project. Then you open that edit list or XML file in the grading package. This should open the original source clips as an edited timeline that matches the timeline in your edit software, so that you can work directly with the original material. Again, you just edit as normal in your edit application and then export the project or sequence, preferably as an XML file, otherwise as a CMX EDL. XML is preferred and has the best compatibility with other applications.

Once you have imported the project into the grading package you then want to apply your chosen LUT. If you are using the same LUT for the entire project then the LUT can be added as an “output” LUT for the whole project. This way the LUT acts on the output of your project as a final, global LUT, and any grading you do happens prior to the LUT, which is the best way to do things. If you want to apply different LUTs to different clips then you can add a LUT to individual clips. If the grading application uses nodes, the LUT should be on the last node so that any grading takes place in nodes prior to the LUT.

Once you have added your LUTs and graded your footage you have a couple of choices. You can normally either render out a single clip that is a compilation of all the clips in the edit, or you can render the graded footage out as individual clips. I normally render out individual clips with the same file names as the original source clips, just saved in a different folder. This way I can return to my edit software and swap the original clips for the rendered, graded clips in the same project. Doing this allows me to make changes to the edit or add captions and effects that may not be possible in the grading software.

How to create a user LUT for the PMW-F5 or F55 in Resolve (or other grading software).

It’s very easy to create your own 3D LUT for the Sony PMW-F5 or PMW-F55 using DaVinci Resolve or just about any grading software with LUT export capability. The LUT should be a 17x17x17 or 33x33x33 .cube LUT (this is what Resolve creates by default).

Simply shoot some test S-Log2 or S-Log3 clips at the native ISO. You must use the same S-Log and color space as you will be using in the camera.

Import and grade the clips in Resolve as you wish the final image to look. Then, once you’re happy with your look, right click on the clip in the timeline and choose “Export LUT”. Resolve will then create a .cube LUT.

Then place the .cube LUT file created by the grading software on an SD card, in the PMWF55_F5 folder. You may need to create the following folder structure on the SD card: first a PRIVATE folder, inside that a SONY folder, and so on.

PRIVATE / SONY / PRO / CAMERA / PMWF55_F5
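If you do this often, a couple of lines of Python (or just your file manager) will build the structure for you. A small sketch; the mount point and LUT filename are placeholders for your own:

import pathlib, shutil

card = pathlib.Path("/Volumes/SDCARD")  # wherever your SD card mounts
lut_folder = card / "PRIVATE" / "SONY" / "PRO" / "CAMERA" / "PMWF55_F5"
lut_folder.mkdir(parents=True, exist_ok=True)  # creates the whole tree if missing
shutil.copy("MyLook.cube", lut_folder)  # placeholder name for your exported LUT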

Put the SD card in the camera, then go to the File menu, go to “Monitor 3D LUT” and select “Load SD Card”. The camera will offer you a choice of destination memories 1 to 4; choose 1, 2, 3 or 4, this is the location where the LUT will be saved. You should then be presented with a list of all the LUTs on the SD card. Select your chosen LUT to save it from the SD card to the camera.

Once it’s loaded into the camera, when you choose 3D User LUTs you can select between user LUT memories 1, 2, 3 and 4. Your LUT will be in the memory you selected when you copied it from the SD card to the camera.

Flicker, jaggies and moiré in down converted 4K.

I kind of feel like we have been here once before. That’s probably because we have and I wrote about it first time around.

A typical video camera has a special filter in it called an optical low pass filter (OLPF). This filter deliberately reduces the contrast of fine details in the image that comes from the camera’s lens and hits the sensor, to prevent aliasing, jagged edges and moiré rainbow patterns. It’s a very important part of the camera’s design. An HD camera will have a filter designed to significantly reduce contrast in parts of the image that approach the limits of HD resolution, so very fine HD details will be low contrast and slightly soft.

When you shoot with a 4K camera, the camera will have an OLPF that operates at 4K. So the camera captures lots of very fine, very high contrast HD information that would be filtered out by an HD OLPF. There are pros and cons to this. It does mean that if you down convert from 4K or UHD to HD you will have an incredibly sharp image with lots of very fine, high contrast detail. But that fine detail might cause aliasing or moiré if you are not careful.

The biggest issue will be with consumer or lower cost 4K cameras that add some image sharpening so that when viewed on a 4K screen the 4K footage really “pops”. When these sharpened and very crisp images are scaled down to HD the image can appear to flicker or “buzz”. This will be especially noticeable if the sharpening on the HD TV is set too high.

So what can you do? The most important thing is to include some form of anti-aliasing when you down scale from 4K to HD. You need to use a scaling process that performs good quality pixel blending, image re-sampling or another form of anti-aliasing. A straight re-size will result in aliasing, which can appear as flicker, moiré or a combination of both. Another alternative is to apply a 2 or 3 pixel blur to the 4K footage BEFORE re-sizing the image to HD. This seems a drastic measure but is very effective and has little impact on the sharpness of the final HD image. Also make sure that the sharpening on your TV is set reasonably low.
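As a sketch of the blur-then-resize approach in Python with OpenCV (cv2.INTER_AREA already averages pixels as it scales, so the Gaussian blur is the extra belt-and-braces step described above; the filenames are placeholders):

import cv2

frame_4k = cv2.imread("frame_uhd.png")  # placeholder: a 3840x2160 source frame
softened = cv2.GaussianBlur(frame_4k, (3, 3), 0)  # small blur removes detail HD cannot carry
frame_hd = cv2.resize(softened, (1920, 1080), interpolation=cv2.INTER_AREA)  # area averaging gives an anti-aliased downscale
cv2.imwrite("frame_hd.png", frame_hd)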

I previously wrote about this very same subject when HD cameras were being introduced and many people were using them for SD productions. The same issues occurred then. Here are the original articles:

Getting good SD from HD Part 1.

Getting good SD from HD Part 2.

Remember to take a look in the TECH NOTES for info like this. There’s a lot of information in the XDCAM-USER archives now.

Are You Screwing Up Your Footage In Resolve?

First of all let me say that DaVinci Resolve is a great piece of software. Very capable, very powerful and great quality. BUT there is a hidden “Gotcha” that not many are aware of and even more are totally confused by (including me for a time).

This has taken me days of research, fiddling, googling and messing around to finally be sure of exactly what is going on inside Resolve. I am NOT a Resolve expert, so if anyone thinks I have this wrong do please let me know, but here goes……

These are the important things to understand about Resolve.

Internally Resolve Always Works With Data Levels (code value 0 to code value 1023 in 10 bit terms, or CV0-CV1023; CV stands for Code Value).

Resolve’s Scopes Always Measure The Internal Data Levels – These are NOT necessarily the Output Levels.

There Are 3 Data Ranges Used For Video – Data CV0 to CV1023, Legal Video 0-100IRE = CV64 to CV940 and Extended Range Video 0-109IRE CV64 to CV1023 (1019 over HDSDI).

Most Modern Video Cameras Record Using Extended Range Video, 0-109IRE or CV64 to CV1019.

Resolve Only Has Input Options For Data Levels or Legal Range Video. There is no option for Extended Range video.

If Transcoding Footage You May Require Extended Range Video Export. For example converting SLog video or footage from the majority of modern cameras which record up to 109IRE.

Resolve Only Has Output Options For Data Levels or Legal Range Video. There is no simple option to output, monitor or export using just the 64 to 1019 range as a 64 to 1019 range.

So, clearly anyone wanting to work with Extended Range Video has a problem. Not so much for grading perhaps, but a big issue if you want to transcode anything. Do remember that almost every modern video camera makes use of the full extended video range. It’s actually quite rare to find a modern camera that does not go above 100IRE.

So why not just use data levels for everything? Well, that is an option. You can set your clips’ attributes (in the media pane) to Data Levels, set your monitor output to Data Levels and choose Data Levels when you render. In fact this is what YOU MUST DO if you want to convert files from one format to another without any scaling or level shifts. But be warned: never, ever grade like this unless you add a Soft Clip LUT (more on that in a bit), as you will end up with illegal super blacks, blacks that are blacker than black and will not display correctly on most devices.

There are probably an awful lot of people out there using Resolve to convert XAVC or other formats to ProRes and in the process unwittingly making a mess of their footage, especially S-Log2 and hypergammas.

On input you can set the clip attributes to Data 0-1023 or Video 64-940, as well as Auto (in most cases, if Resolve detects luma levels under 64 the footage is treated as Data, otherwise as video levels). Anything set to, or detected as, video levels gets scaled from the source’s CV64-940 range to Resolve’s internal CV0-1023 range.
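That scaling is simple linear arithmetic, and seeing it written out makes it obvious why extended range material suffers. A sketch; the formulas just map CV64-940 linearly onto CV0-1023 and back:

def video_to_data(cv: float) -> float:
    # 'Video' input scaling: legal range CV64-940 -> internal CV0-1023.
    return (cv - 64.0) * 1023.0 / (940.0 - 64.0)

def data_to_video(cv: float) -> float:
    # 'Video' output scaling: internal CV0-1023 -> legal CV64-940.
    return cv * (940.0 - 64.0) / 1023.0 + 64.0

# An extended range highlight at CV1019 (109IRE) maps above the internal ceiling:
print(video_to_data(1019))  # ~1115, beyond 1023, hence the lost highlights
print(video_to_data(940))   # 1023.0, legal white maps to internal full scale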

As Resolve’s waveform/vector scopes always measure the internal scaled range, there is no way to tell just by looking at the scopes what range your original material was in or whether it’s been scaled. If you do want to check the range of a source clip, try reducing the video level in the colour panel. If your clip is extended range then you should be able to see the previously hidden high range by pulling the levels down. A legal range clip, on the other hand, will have nothing above Resolve’s 1023, so the peak level will just drop.

On output you can choose Data 0-1023 or Legal Video 64-960 for your output or monitoring range (Resolve uses 960, which is the CbCr maximum; Y is 940). For Resolve to handle the majority of modern cameras, and the many modern workflows where outputting 64-1023 may be required, there is no option! So if you are working with video levels, anything in the extended range ends up either scaled on input, or clipped and range restricted with crushed blacks on output.

For example:

Import Hypergamma or S-Log footage, which is 64-1023, don’t touch or grade it, then export using video levels and the range is clipped: the output will no longer have the highlights recorded above 100IRE in the original. The input files will be CV64-1023 but the video range output files will be CV64-940; the range is clipped off at 940 (100IRE). If you set the clip attributes to “video 64-940” then on input CV940 is mapped to CV1023 in Resolve, so anything you shot between 100 and 109IRE (940-1019) goes out of range and is not seen on the output (it’s still there inside Resolve, but you can’t get to it unless you grade the footage). There just isn’t a correct option to pass Full Range video through 1:1, unless you use data in, data out, but then you run the risk of having illegal super blacks. If you leave the clip attributes as video and then export using Data Levels, your original CV64 black gets pulled down to CV0, so your blacks are crushed; however, you do then retain the material above 100IRE.

If you’re using Resolve to convert XAVC S-Log2 or S-Log3 to something else, ProRes perhaps, this means that any Look Up Tables used in the downstream application will not behave as expected, because your output clip will have the wrong levels. So for file conversions you MUST use data levels on the input clip attributes and data levels on output to pass the video through as per the original, even though you are working with footage that complies with perfectly correct Extended Range video standards. But you must never edit or grade like this, as you will get super blacks on your output… unless you generate a Soft Clip LUT.

 If you import a full range video clip that goes from CV64 to CV1019(1023) (0 to 109IRE) and do nothing to it then it will come out of Resolve as either data levels CV0 to CV1023 (-7IRE to 109IRE) or legal video CV64 to CV940 (0 – 100IRE), neither of which is ideal when transcoding footage. 

 So what can you do if you really need an Extended Range workflow? Well you can generate a Soft Clip LUT in Resolve to manage your output range. For this to work correctly you need to work entirely with data levels. Clip attributes must be set to Data levels, Monitor out to Data Levels and Exports should be at Data Levels. This is NOT necessary for direct 1:1 transcoding as the assumption is that you want a direct 1:1 copy of the original data, just in a different format.

You use Resolve’s Soft Clip LUT generator (on the Look Up Tables settings page) to create a 1D LUT with a Black Clip of 64 and a White Clip of 1019. This LUT is then applied as a 1D Output LUT. If you are already using an output LUT (1D or 3D), you can use the Soft Clip LUT generator to make a modified version of that existing LUT, adding the 64 and 1019 clip levels.
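If you ever need an equivalent LUT outside Resolve, a 1D clip LUT is trivial to write by hand. A minimal sketch that hard clips at the same 64 and 1019 levels (note that Resolve’s generator applies a soft roll-off rather than a hard clip, and that .cube files expect values normalised to 0-1.0):

N = 1024  # table size for a 10 bit 1D LUT
lo, hi = 64 / 1023, 1019 / 1023  # the black and white clip points, normalised

with open("clip_64_1019.cube", "w") as f:
    f.write(f"LUT_1D_SIZE {N}\n")
    for i in range(N):
        v = min(max(i / (N - 1), lo), hi)  # pass through, pinned at CV64 and CV1019
        f.write(f"{v:.6f} {v:.6f} {v:.6f}\n")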

 So what is it doing?

As you are working at Data Levels your clips and footage come into Resolve 1:1. A clip with a range of CV0-1023 will come in as CV0-1023, a CV64-940 clip will come in as CV64-940 and a CV64-1019 clip as CV64-1019. Most video clips from a modern camera will use CV64-1019. A clip using CV64-1019 will be imported and handled as CV64-1019 within the full 0-1023 range, but the levels are not shifted or altered, so if it’s CV220 in the original it will be CV220 inside Resolve. One immediate benefit is that Resolve’s scopes now show the actual original levels of the source clip, as shot. Phew, that’s a lot of CVs in one paragraph; I hope you’re following along OK.

You grade your footage as normal. The Soft Clip LUT will clip anything below CV64 (0IRE, video black) but allow the full extended video range up to CV1019(1023) to be used. It won’t shift the levels, it just won’t allow anything to go below CV64. If grading for output, do make sure that you really do want extended range (if you want to stay broadcast safe, use video range).

The output to your HDSDI monitor will be unscaled data, CV0-1019, but because of the LUT clipping at 64 there will be nothing below 64, no super blacks. This is how it should be; this is correct and what you want for an extended range workflow, perhaps for passing your footage on to another video editing application for finishing, or where it will be mixed with other full range footage. The majority of grading workflows, however, will probably be conventional Legal Video Range.

When you render a file using data levels, the file will go from CV0-1019, but again because of the Soft Clip LUT there will be nothing below 64 (black), while the full range above CV940 remains available, so super whites etc. will be passed through correctly to the rendered file. This way you can make use of the complete extended video range.

 In Summary:

If you want to use Resolve to convert files from one codec to another, without changing your levels you must ensure the Clip Attributes are set to Data, your monitor out must be set to Data Levels and you must Render using Data Levels. If you don’t there is a very high likelihood that your levels will be incorrect or altered, almost certainly different to what you shot.

If you wish to grade and output anything above 100IRE (perhaps when mixing graded footage with full range camera footage) then again you must use data levels throughout the workflow but you should add a Soft Clip LUT with CV1019 as the upper clip and CV64 as the lower clip to prevent illegal black levels but retain full video range to 109IRE.

It would be so much simpler if Resolve had an extended range video out option.

ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It’s designed to act as a common standard that will work with any camera, so that colourists can use the same grades on any camera with the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of all the necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUTs to convert your footage to the right output standard, and where you place these LUTs in your workflow can have a big impact on your ability to grade your footage and the quality of your output. ACES takes care of most of this for you, so you don’t need to worry about making sure you are grading “under the LUT” etc.

ACES works on footage in Scene Referred Linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony’s Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, there is still an IDT to go from Sony’s variation of scene referred linear to the ACES variation, but this is a far simpler conversion with fewer losses or image degradation as a result.

The IDT is a type of LUT that converts from the camera’s own recording space to ACES linear space. The camera manufacturer has to provide detailed information about the way the camera records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the manufacturer’s colour science or matrix/gamma tables can create one. In theory, after converting to ACES, all cameras should look very similar, and the same grades and effects can be applied to any camera or gamma with the same end result. In practice, variations between colour filters, dynamic range etc. mean there will still be individual characteristics to each camera, but any such variation is minimised by using ACES.

“Scene Referred” means linear light, as per the actual light coming from the scene. No gamma, no color shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we make them as close as possible, as now the pictures should be a true to life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are “Display Referred”, where the recordings or output are tailored through the use of gamma curves and looks etc. so that they look nice on a monitor that complies with a particular standard, for example 709. To some degree a display referred camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These “enhancements” to the image can sometimes make grading harder, as you may need to remove or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony’s S-Log2 and S-Log3 will behave almost exactly the same. There will still be differences in the data spread due to the different curves used in the camera and differences in the recording gamut etc., but despite this, the same grade or corrections can be used on any gamma/gamut combination with very, very similar end results. (According to Sony’s white paper, SGamut3 should work better in ACES than SGamut. In general though, the same grades should work more or less the same whether the original is S-Log2 or S-Log3.)

In an ACES workflow the grade is performed in linear space, so exposure shifts etc. are much easier to do. You can still use LUTs to apply a common “look” to a project, but you don’t need a LUT within ACES for the grade, as ACES takes care of the output transformation from the linear, scene referred grading domain to your chosen display referred output domain. The output process is a two stage conversion. First, from ACES linear to the RRT or Reference Rendering Transform. This is a very computationally complex transformation that goes from linear to a “film like” intermediate stage with a very large range, in excess of most final output ranges. The idea is that the RRT is a fixed and well defined standard, and all the complicated maths is done getting to the RRT. From the RRT you then add a LUT called the ODT or Output Device Transform to convert to your final chosen output type: Rec-709 for TV, DCI-XYZ for cinema DCP etc. This means you do one grading pass and then just select the type of output look you need for each type of master.

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.

This all sounds very complicated, and to a degree what’s going on under the hood of your software is quite sophisticated. But for the colourist it’s often just as simple as choosing ACES as your grading mode and then selecting your desired output standard, 709, DCI-P3 etc. The software then applies all the necessary LUTs and transforms in all the right places so you don’t need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT; you don’t need different LUTs or looks for different cameras. I recommend that you give ACES a try.

It’s all in the grade!

So I spent much of last week shooting a short film and commercial (more about the shoot in a separate article). It was shot in raw using my Sony F5, but could have been shot with a wide range of cameras. The “look” for this production is very specific: much of it is set late in the day or in the evening and requires a gentle, romantic look.

In the past much of this look would have been created in camera. Shooting with a soft filter for the romantic look, shifting the white balance with warming cards or a dialled in white balance for a warm golden hour evening look. Perhaps a custom picture profile or scene file to alter the look of the image coming from the camera. These methods are still very valid, but thanks to better recording codecs and lower cost grading and post production tools, these days it’s often easier to create the look in post production.

When you look around on YouTube or Vimeo, most of the showreels and demo reels from people like me will have been graded. Grading is now a huge part of the finishing process, and it makes a huge difference to the final look of a production. So don’t automatically assume everything you see online looked like that when it was shot. It probably didn’t, and a very, very big part of the look tends to be created in post these days.

One further way to work is to go halfway to your finished look in camera and then finish off the look in post. For some productions this is a valid approach, but it comes with some risks, as some things, once burnt into the recording, can be hard to change in post. For example, any in camera sharpening is difficult to remove, as are crushed blacks or a skewed or offset white balance.

Also understand that there is a big difference between trying to grade using the color correction tools in an edit suite and using a dedicated grading package. For many, many years I used to grade using my editing software, simply because that was what I had. Plug-ins such as Magic Bullet Looks are great and offer a quick and effective way to get a range of looks, but while you can do a lot with a typical edit color corrector, it pales into insignificance compared to what can be done with a dedicated grading tool, which lets you not only create a look but then adjust individual elements of the image.

When it comes to grading tools, DaVinci Resolve is probably the one most people have heard of. Resolve Lite is free, yet still incredibly capable (provided you have a computer that will run it). There are other options too, like Adobe SpeedGrade, but the key thing is that if you change your workflow to include lots of grading, then you need to change the way you shoot too. If you have never used a proper grading tool then I urge you to learn how to use one. As processing power improves and these tools become more and more powerful, they will play an ever greater role in video production.

So how should you shoot for a production that will be graded? I’m sure you will have come across the term “shoot flat” and this is often said to be the way you should shoot when you’re going to grade. Well, yes and no. It depends on the camera you are using, the codec, noise levels and many other factors. If you are the DP, Cinematographer or DiT, then it’s your job to know how footage from your camera will behave in post production so that you can provide the best possible blank canvas for the colourist.

What is shooting flat exactly? Let’s say your monitor is a typical LCD monitor. It will be able to show 6 or 7 stops of dynamic range. Black at stop 0 will appear to be black and whites at stop 7 will appear bright white. If your camera has a 7 stop range then the blacks and whites from the camera will be mapped 1:1 with the monitor and the picture will have normal contrast. But what happens when you have a camera that can capture double that range, say 12 to 14 stops? The bright whites captured by the camera will be significantly brighter than before. If you then take that image and try to show it on the same LCD monitor you have an issue, because the LCD cannot go any brighter, so the much brighter whites from the high dynamic range shot are shown at the same brightness as those of the original low dynamic range shot. Not only that, but the larger tonal range is now squashed together into the monitor’s limited range. This reduces the contrast in the viewed image, and as a result it looks flat.

That’s a real “shoot flat” image (a wide dynamic range shown on a typical dynamic range monitor), but you have to be careful, because you can also create a flat looking image by raising the camera’s black level or black gamma, or by reducing the white level. Doing this reduces the contrast in the shadows and mid tones and will make the pictures look low contrast and flat. But raising the black level or black gamma or reducing the white point rarely increases the dynamic range of a camera; most cameras’ dynamic range is limited by the way they handle highlights and over exposure, not by shadows or the black or white levels. So just beware: not all flat looking images bring real post production advantages. I’ve seen many examples of special “flat” picture profiles or scene files that don’t actually add anything to the captured image. It’s all about dynamic range, not contrast range. See this article for more in depth info on shooting flat.

If you’re shooting for grading, shooting flat with a camera with a genuinely large dynamic range is often beneficial, as you provide the colourist with a broader dynamic range image that he/she/you can then manipulate so that it looks good on typically small dynamic range TVs and monitors. Excessively raising the black level or black gamma rarely helps the colourist though, as this just introduces an area that will need to be corrected to restore good contrast, rather than adding anything new or useful to the image.

You also need to consider that it’s all very well shooting with a camera that can capture a massive dynamic range, but as there is no way to ever show that full range, compromises must be made in the grade so that the picture looks nice. An example of this would be a very bright sky. In order to show the clouds in the sky, the rest of the scene may need to be darkened, as the sky is always brighter than everything else in the real world. This might mean the mid tones have to be rather dark in order to preserve the sky. The other option would be to blow the sky out in the grade to get a brighter mid range. Either way, we don’t have a way of showing the 14 stop range available from cameras like the F5/F55 with current display technologies, so a compromise has to be made in post, and this should be in the back of your mind when shooting scenes with large dynamic ranges. With a low dynamic range camera, you the camera operator would choose whether to let the highlights over expose to preserve the mid range, or whether to protect the highlights and put up with a darker mid range. With these high dynamic range cameras that decision largely moves to post production, but you should still be looking at your mid tones and, if needed, adding a bit of extra illumination so that the mids are not fighting the highlights.

In addition to shooting flat there is a lot of talk about using log gamma curves: S-Log, S-Log2, LogC etc. Again, IF the camera and recording codec are optimised for log then this can be an extremely good approach. Remember that if you choose to use a log gamma curve then you will also need to adjust the way you expose, to place skin tones etc. in the correct part of the log curve. It’s no longer about exposing for what looks good on the monitor or in the viewfinder, but about exposing the appropriate shades in the correct part of the log curve. I’ve written many articles on this so I’m not going to go into it here, other than to say log is not a magic fix for great results, and log needs a 10 bit codec if you’re going to use it properly. See these articles on log: S-Log and 8 bit or Correct Exposure with Log. Using log does allow you to capture the camera’s full range, it will give you a flat looking image and, when used correctly, it will give the colourist a large blank canvas to play with. When using log it is vital that you use a proper grading tool that applies log based corrections to your footage, as adding linear corrections to log footage in a typical edit application will not give the best results.

So what if your camera doesn’t have log? What can you do to help improve the way the image looks after post production? First of all, get your exposure right. Don’t over expose. Anything that clips cannot be recovered in post. Something that’s a little too dark can easily be brightened a bit, but if it’s clipped it’s gone for good. So watch those highlights. Don’t under expose either, just expose correctly. If you’re having a problem with a bright sky, don’t be tempted to add a strong graduated filter to the camera to darken the sky. If the colorist tries to adjust the contrast of the image the grad may become more extreme and objectionable. It’s better to use a reflector or some lights to raise the foreground rather than a graduated filter to lower the highlight.

One thing that can cause grading problems is knee compression. Most video cameras by default use something called the “knee” to compress highlights. This does give the camera the ability to capture a greater dynamic range, but it works by aggressively compressing the highlights together, and it’s either on or off. If the light changes during the shot and the camera’s knee is set to auto (as most are by default), then the highlight compression will change mid shot, and this can be a nightmare to grade. So instead of using the camera’s default knee settings, use a scene file or picture profile to set the knee to manual, or use an extended range gamma curve like a Hypergamma or Cinegamma that does not have a knee and instead uses a progressive type of highlight compression.
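For reference, a knee is just a piecewise gain curve applied to the video signal. A toy sketch; the knee point and slope values are illustrative, not any camera’s actual settings:

def apply_knee(level: float, knee_point: float = 0.85, slope: float = 0.15) -> float:
    # Toy video knee: linear below the knee point, aggressively
    # compressed above it. Real cameras work on a similar principle.
    if level <= knee_point:
        return level
    return knee_point + (level - knee_point) * slope

# A highlight at 150% of the linear range is folded back under 100%:
print(apply_knee(1.5))  # ~0.95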

Another thing that can become an issue in the grading suite is image sharpening. In camera sharpening, such as detail correction, works by boosting contrast around edges. So if you take an already sharpened image into the grading suite and then boost the contrast in post, the sharpening will become more visible and the pictures may take on more of a video look or become over sharpened. It’s just about impossible to remove image sharpening in post, but adding a bit of sharpening is quite easy. So, if you’re shooting for post, consider either turning off the detail correction circuits altogether or at the very least reducing the levels applied by a decent amount.

Color and white balance: One thing that helps keep things simple in the grade is having a consistent image. The last thing you want is the white balance changing half way through the shot, so as a minimum use a fixed white balance or preset white balance. I find it better to shoot with preset white when shooting for a post heavy workflow as even if the light changes a little from scene to scene or shot to shot the RGB gain levels remain the same so any corrections applied have a similar effect, the colourist then just tweaks the shots for any white balance differences. It’s also normally easier to swing the white balance in post if preset is used as there won’t be any odd shifts added as can sometimes happen if you have used a grey/white card to white balance.

Just as the brightness or luma of an image can clip if over exposed, so too can the colour. If you’re shooting colourful scenes, especially shows or events with coloured lights, it will help if you reduce the saturation of the colour matrix by around 20%, as this allows you to record stronger colours before they clip. Colour can then be added back in the grade if needed.
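Adding that saturation back in post is just a matter of scaling the chroma around the luma. A small sketch using the Rec-709 luma weights; the 1.25 factor roughly undoes a 20% in camera reduction, since 0.8 x 1.25 = 1.0:

import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def scale_saturation(rgb: np.ndarray, factor: float) -> np.ndarray:
    # Scale chroma around Rec-709 luma: factor < 1 desaturates, > 1 saturates.
    luma = (rgb @ REC709_LUMA)[..., None]
    return luma + (rgb - luma) * factor

pixel = np.array([[0.9, 0.3, 0.2]])            # a saturated warm tone
toned_down = scale_saturation(pixel, 0.8)      # roughly what the -20% matrix does in camera
restored = scale_saturation(toned_down, 1.25)  # the grade puts it back: 0.8 * 1.25 = 1.0
print(restored)  # ~[[0.9, 0.3, 0.2]]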

Noise and grain: This is very important. The one thing above all others that will limit how far you can push your image in post is noise and grain. There are two sources of this: camera noise and compression noise. Camera noise is dependent on the camera’s gain and chosen gamma curve. Always strive to use as little gain as possible; remember that if the image is just a little dark you can always add gain in post, so don’t go adding unnecessary gain in camera. A proper grading suite will have powerful noise reduction tools, and these normally work best if the original footage is noise free and gain is added in post, rather than trying to de-noise grainy camera clips.

The other source of noise and grain is compression noise. Generally speaking, the more highly compressed the video stream, the greater the noise will be. Compression noise is often more problematic than camera noise, as in many cases it has a regular pattern or structure, which makes it visually more distracting than random camera noise. More often than not, the banding seen across skies or flat surfaces is caused by compression artefacts rather than anything else, and during grading artefacts like these can become more visible. So try to use as little compression as possible; this may mean using an external recorder, but these can be purchased or hired quite cheaply these days. As always, before a big production, test your workflow. Shoot some sample footage, grade it and see what it looks like. If you have a banding problem, suspect the codec or compression ratio first, not whether it’s 8 bit or 10 bit; in practice it’s not 8 bit that causes banding but too much or poor quality compression (so even a camera with only an 8 bit output, like the FS700, will benefit from recording to a better quality external recorder).

RAW: Of course, the best way of providing the colourist (even if that’s yourself) with the best blank canvas is to shoot with a camera that can record the raw sensor data. By shooting raw you do not add any in camera sharpening or gamma curves that may then need to be removed in post. In addition, raw normally means capturing the camera’s full dynamic range. But that’s not possible for everyone, and it generally involves working with very large amounts of data. If you follow my guidelines above you should at least have material that allows a good range of adjustment and fine tuning in post. This isn’t “fix it in post”; we are not putting right something that is wrong. We are shooting in a way that allows us to make use of the incredible processing power available in a modern computer to produce great looking images. You are making those last adjustments that make a picture look great using a nice big (hopefully calibrated) monitor, in a normally more relaxed environment than on most shoots.

The way videos are produced is changing. Heavy duty grading used to be reserved for high end productions, drama and movies. But now it is commonplace, faster and easier than ever. Of course there are still many applications where there isn’t time for grading, such as TV news, but grading is going to play an ever greater part in more and more productions, so it’s worth learning how to do it properly, and how to adjust your shooting setup and style to maximise the quality of the finished production.