Raw can be a brilliant tool and I use it a lot; high quality raw is my preferred way of shooting. But it isn’t magic, it’s just a different type of recording codec.
All too often – and I’m as guilty as anyone – people talk about raw as “raw sensor data”, a term that implies raw really is something very different to a normal recording. In reality it’s not that different. When shooting raw, the video frames from the sensor are simply recorded before they are converted to a colour image. A raw frame is still a picture; it’s just a bitmap made up of brightness values, with each pixel represented by a single brightness code value, rather than a colour image where each location is represented by three values, one each for Red, Green and Blue, or for Luma, Cb and Cr.
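To make that difference concrete, here’s a quick Python sketch. The frame size and pixel values are made up purely for illustration: a raw frame stores one code value per photosite, while a demosaicked colour frame needs three values per pixel.

```python
import numpy as np

# Hypothetical 4x4 crop of a 12 bit Bayer sensor: one brightness code
# value per photosite, no colour information stored per pixel.
raw_frame = np.array([
    [1024,  512, 1024,  512],
    [ 256, 2048,  256, 2048],
    [1024,  512, 1024,  512],
    [ 256, 2048,  256, 2048],
], dtype=np.uint16)

# The same 4x4 area as a demosaicked RGB image: three values per pixel.
rgb_frame = np.zeros((4, 4, 3), dtype=np.uint16)

print(raw_frame.size)   # 16 stored values, one per pixel
print(rgb_frame.size)   # 48 stored values, three per pixel
```

The colour image needs three times as many data points for the same picture area, which is the key to why raw files can afford less compression at similar bit rates.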
As that raw frame is still nothing more than a normal bitmap, all of the camera’s settings such as white balance, ISO etc are in fact baked into the recording. Each pixel has only one value, and that value will have been determined by the way the camera was set up. Nothing you do in post production can change what was actually recorded. Most CMOS sensors are daylight balanced, so unless the camera adjusts the white balance prior to recording – which is what Sony normally do – your raw recording will be daylight balanced.
Modern cameras when shooting log or raw also record metadata that describes how the camera was set when the image was captured.
So the recorded raw file already has a particular white balance and ISO. I know lots of people will be disappointed to hear this, or will simply refuse to believe it, but that’s the truth: a raw bitmap image has a single code value for each pixel, and that value is determined by the camera settings.
This can be adjusted later in post production, but the adjustment range is not unlimited and it is not the same as making an adjustment in the camera. Plus there can be consequences to the image quality if you make large adjustments.
Log can also be adjusted extensively in post. For decades, feature films shot on film were scanned using 10 bit Cineon log (the log gamma curve that S-Log3 is based on), and 10 bit log was used for post production until 12 bit and then 16 bit linear intermediates such as OpenEXR came along. This should tell you that log can in fact be graded very well and very extensively.
But then many people will tell you that you can’t grade log as well as raw. Often they will point to photography as an example, where there is a huge difference between what you can do with a raw photo and a normal image. But we have to remember that this is typically comparing a highly compressed 8 bit JPEG file with an often uncompressed 12 or 14 bit raw file. It’s not a fair comparison; of course you would expect the 14 bit file to be better.
The other argument often given is that it’s very hard to change the white balance of log in post: it doesn’t look right, or it falls apart. Often these issues have nothing to do with the log recording and everything to do with the tools being used.
When you work with raw in your editing or grading software you will almost always be using a dedicated raw tool or raw plugin designed for the flavour of raw you are using. As a result, everything you do to the file is optimised for the exact flavour of raw you are dealing with. It shouldn’t come as a surprise, then, that to get the best from log you should be using tools specifically designed for the type of log you are using. In the example below you can see how Sony’s Catalyst Browse can correctly change the white balance and exposure of S-Log material with simple sliders, just as effectively as most raw formats.
Applying the normal linear or power law corrections found in most edit software (709 is power law) to log won’t have the desired effect, and basic edit software rarely has proper log controls. You need to use a proper grading package like Resolve and its dedicated log controls. Better still, use some form of colour managed workflow such as ACES, where your specific type of log is precisely converted on the fly to a special digital intermediate and the corrections are made to that intermediate. There is no transcoding: you just tell ACES what the footage was shot on and the magic happens under the hood. Once you have done that you can change the white balance or ISO of log material in exactly the same way as raw. There is very, very little difference.
When people say you can’t push log, more often than not it isn’t a matter of can’t, it’s a case of can – but you need to use the right tools.
The biggest differences between a log and a raw recording come from the amount of compression and the bit depth, not so much from whether the data is log or raw. Don’t forget that raw is often recorded using log, which makes the “you can’t grade log” argument look a bit daft.
Camera manufacturers and raw recorder manufacturers are perfectly happy to let everyone believe raw is magic and, worse still, to let people believe that ANY type of raw must be better than all other types of recording. Read through any camera forum and you will see plenty of examples of “it’s raw so it must be better” or “I need raw because log isn’t as good”, without any comprehension of what raw is, or of how in reality it’s the way the raw is compressed and its bit depth that really matter.
If we take ProRes Raw as an example: For a 4K 24/25fps file the bit rate is around 900Mb/s. For a conventional ProRes HQ file the bit rate is around 800Mb/s. So the file size difference between the two is not at all big.
But the ProRes Raw file only has to store around a third as many data points as the component ProRes HQ file. As a result, even though the ProRes Raw file often has a higher bit depth, which in itself usually means a better quality recording, it is also much, much less compressed and so will have fewer artefacts.
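As a rough back-of-envelope sketch of why that matters, assume UHD 3840×2160 at 25fps and the approximate bit rates above. The sample counts here are my own assumptions: a Bayer raw frame stores one value per photosite, while 4:2:2 component video stores two values per pixel (Y plus alternating Cb/Cr). On those assumptions the raw file ends up far less compressed per stored sample:

```python
# Rough bits-per-stored-sample comparison under the stated assumptions.
width, height, fps = 3840, 2160, 25

raw_samples_per_sec = width * height * 1 * fps      # Bayer: 1 value/pixel
prores_samples_per_sec = width * height * 2 * fps   # 4:2:2: 2 values/pixel

raw_bits_per_sample = 900e6 / raw_samples_per_sec
prores_bits_per_sample = 800e6 / prores_samples_per_sec

# The raw file has well over twice as many bits available per sample,
# i.e. it needs far less aggressive compression at a similar bit rate.
print(round(raw_bits_per_sample, 2))
print(round(prores_bits_per_sample, 2))
```

The exact numbers depend on resolution, frame rate and codec overheads, but the direction of the result is the point: similar file size, far gentler compression for the raw data.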
It’s the reduced compression and deeper bit depth possible with raw that can lead to higher quality recordings, and as a result may bring some grading advantages compared to a normal ProRes or other compressed file. The best bit is that there is no significant file size penalty. You have roughly the same amount of data, but your data should be of higher quality. So, given that you won’t need more storage, which should you use: the higher bit depth, less compressed file, or the more compressed one?
But not all raw files are the same. Some cameras feature highly compressed 10 bit raw, which frankly won’t be any better than most other 10 bit recordings, as you are having to do all the complex maths to create a colour image starting from just 10 bits. Most cameras do this internally at 12 bits or more. I believe raw needs to be at least 12 bit to be worth having.
If you could record uncompressed 12 bit RGB or 12 bit component log from these cameras that would likely be just as good and just as flexible as any raw recordings. But the files would be huge. It’s not that raw is magic, it’s just that raw is generally much less compressed and depending on the camera may also have a greater bit depth. That’s where the benefits come from.
It’s amazing how often people will tell you how easy it is to change the white balance or adjust the ISO of raw footage in post. But can you? Is it really true, and is it somehow different to changing the ISO or white balance of log footage?
Let’s start with ISO. If ISO is sensitivity, or the equivalent of sensitivity, how on earth can you change the sensitivity of the camera once you get into post production? The answer is: you can’t.
But then we have to consider how ISO works on an electronic camera. You can’t change the sensor in a video camera, so in reality you can’t change how sensitive an electronic camera is (I’m ignoring cameras with dual ISO for a moment). All you can do is adjust the gain or amplification applied to the signal from the sensor. You can add gain in post production too. So, when you adjust the exposure or use the ISO slider on your raw footage in post, all you are doing is adjusting how much gain you are adding. But you can do the same with log or any other gamma.
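In other words, a post ISO or exposure slider is just multiplying the recorded values by a gain factor. A minimal sketch, assuming linear light values normalised to the 0–1 range:

```python
import numpy as np

def apply_post_gain(pixels, stops):
    # One stop = a doubling of the signal. Clipping at 1.0 mimics the
    # fixed range of the recorded file: gain cannot recover anything
    # that was never captured.
    return np.clip(pixels * (2.0 ** stops), 0.0, 1.0)

frame = np.array([0.1, 0.2, 0.4, 0.9])
brighter = apply_post_gain(frame, 1.0)   # "+1 stop" of post gain
print(brighter)   # the 0.9 value clips at 1.0 and is lost
```

Note that the brightest value clips: post gain redistributes what was recorded, it never adds information that wasn’t captured.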
One thing that makes a difference with raw is that the gain is applied in such a way that what you see looks like an actual sensitivity change, no matter what gamma you are transforming the raw to. This makes it a little easier to make changes to the final brightness in a pleasing way. But you can do exactly the same thing with log footage. Anything you do in post is altering the recorded file; it can never actually change what you captured.
Changing the white balance in post: white balance is no different to ISO. You can’t change in post what the camera captured; all you can do is modify it through the addition or subtraction of gain.
Think about it. A sensor must have a certain response to light and the colours it sees depending on the material it’s made from and the colour filters used. There has to be a natural fixed white balance or a colour temperature that it works best at.
The silicon that video sensors are made from is almost always more sensitive at the red end of the spectrum than at the blue end. As a result, almost all sensors tend to produce their best results with light that has a lot of blue (to make up for the lack of blue sensitivity) and not too much red. So most cameras naturally perform best in daylight, and most sensors are considered daylight balanced.
If a camera produces a great image under daylight how can you possibly get a great image under tungsten light without adjusting something? Somehow you need to adjust the gain of the red and blue channels.
Do it in camera and what you record is optimised for your choice of colour temperature at the time of shooting. But you can always undo or change this in post by subtracting or adding to whatever was added in the camera.
If the camera does not move away from its native response, then you will be recording at the camera’s native white balance, and anything other than that native response will have to be created in post. If you want a different colour temperature, you need to add or subtract gain in the R and B channels in post to alter it.
Either way, what you record has a nominal white balance, and anything you do in post is skewing what you have recorded using gain. There is no such thing as a camera with no native white balance; all cameras will favour one particular colour temperature. So even if a manufacturer claims that the white balance isn’t baked in, what they mean is that they don’t offer the ability to adjust the recorded signal. If you want the very best image quality, the best method is to adjust at the time of recording. As a result, a lot of camera manufacturers will skew the gain of the sensor’s red and blue channels in the camera when shooting raw, as this optimises what you are recording. You can then skew it again in post should you want a different balance.
With either method if you want to change the white balance from what was captured you are altering the gain of the red and blue channels. Raw doesn’t magically not have a white balance, so shooting with the wrong white balance and correcting it in post is not something you want to do. Often you can’t correct badly balanced raw any better than you can correct incorrectly balanced log.
How far you can adjust or correct raw depends on how it’s been compressed (or not), the bit depth, whether it’s log or linear and how noisy it is. Just like a log recording really, it all depends on the quality of the recording.
The big benefit raw can have is that the amount of data that needs to be recorded is considerably reduced compared to conventional component or RGB video recordings. As a result it’s often possible to record with a greater bit depth or with much less compression, and it is the greater bit depth or reduced compression that really makes the difference. 16 bit data can have up to 65,536 luma gradations; compare that to the 4,096 of 12 bit or the 1,024 of 10 bit and you can see how a 16 bit recording can hold so much more information than a 10 bit one. And that makes a difference. But 10 bit log versus 10 bit raw? Well, it depends on the compression, but well compressed 10 bit log will likely outperform 10 bit raw, as the all-important colour processing will have been done in the camera at a much higher bit depth than 10 bit.
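Those gradation counts are just powers of two, which a quick snippet confirms:

```python
# Distinct code values available at each bit depth (full/data range).
gradations = {bits: 2 ** bits for bits in (8, 10, 12, 16)}
for bits, levels in gradations.items():
    print(f"{bits} bit: {levels} gradations")
```

Every extra bit doubles the number of gradations, which is why the jump from 10 bit to 16 bit is a factor of 64, not just a modest improvement.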
I see this all the time: “which LUT should I use to get this look?” or “I like that, which LUT did you use?”. Don’t get me wrong, I use LUTs and they are a very useful tool, but the now almost default reversion to adding a LUT to log and raw material is killing creativity.
In my distant past I worked in, and helped run, a very well known post production facility. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client’s description, create the look they wanted with the controls on the equipment we had. This was a Digibeta tape to tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.
But the thing is, we didn’t have LUTs. We had knobs, dials and switches. We had to understand how to use the tools we had to get to where the client wanted to be. As a result every project had a unique look.
Today the software available to us is incredibly powerful and a tiny fraction of the cost of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.
You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows, for example FCPX automatically applying a default LUT to log footage. Another is the belief that LUTs are how you grade, with everyone using the same few LUTs on everything they shoot.
This creates two issues.
1: Everything looks the same – BORING!!!!
2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.
How many of the “slap on a LUT” clan realise that LUTs are camera and exposure specific? How many realise that LUTs can introduce banding and other image artefacts into footage that might otherwise be pristine?
If LUTs didn’t exist, people would have to learn how to grade. And when I say “grade” I don’t mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation. For example, separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently. People would need to learn how to create colour contrast as well as brightness contrast, and how to make highlights roll off in a pleasing way: all those things that go into creating great looking images from log or raw footage.
Then, perhaps, because people are doing their own grading, they would start to better understand colour, gamma, contrast and so on. Most importantly, because the look created would be their look, built from scratch, it would be unique. Different projects from different people would actually look different again, instead of each being a clone of someone else’s work.
LUTs are a useful tool, especially on set for an approximation of how something could look. But in post production they restrict creativity, and many people have no idea how to grade and how they can manipulate their material.
Blackmagic Design have just released the latest update to DaVinci Resolve. If you have been experiencing crashes when using XAVC material from the PXW-FX9 I recommend you download and install this update.
If you are not a Resolve user and are struggling with grading or getting the very best from any log or raw camera, then I highly recommend you take a look at DaVinci Resolve. It’s also a very powerful edit package. The best bit is the free version supports most cameras. If you need full MXF support you will need to buy the studio version, but with a one off cost of only $299 USD it really is a bargain and gets you away from any horrid subscription services.
There is a video on YouTube right now in which the author claims that Sony Alpha cameras don’t record correctly internally when shooting S-Log2 or S-Log3. The information in this video is highly misleading, and the conclusion that the problem lies with the way Sony record internally is incorrect. There really isn’t anything wrong with the way Sony do their recordings, and neither is there anything wrong with the HDMI output. While centred on the Alpha cameras, the information below is also important for anyone that records S-Log2 or S-Log3 externally with any other camera.
Some background: within the video world there are two primary ranges that can be used to record a video signal.
Legal range uses code value 16 for black and code value 235 for white (anything above CV235 is classed as a super-white; super-whites can still be recorded, but are considered to be beyond 100%).
Full or data range uses code value 0 for black and code value 255 for white, or 100%.
Most cameras and most video systems are based on legal range. ProRes recordings are almost always legal range. Most Sony cameras use legal range and include super-whites for some curves, such as the Cinegammas or Hypergammas, to gain a bit more dynamic range. The vast majority of video recordings use legal range, so most software defaults to legal range.
But very, very importantly: S-Log2 and S-Log3 are always full/data range.
Most of the time this doesn’t cause any issues. When you record internally, the camera’s recordings carry metadata that tells the playback, editing or grading software that the S-Log files were recorded using full range, so the software plays the files back and processes them at the correct levels. However, if you record the S-Log with an external recorder, the recorder doesn’t always know that what it is receiving is full range rather than legal range; it just records it exactly as it comes out of the camera. That causes a problem later on, because the externally recorded file doesn’t have the metadata needed to ensure that the full range S-Log material is handled correctly, and most software will default to legal range if it knows no different.
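The two ranges, and the correct conform between them, are simple arithmetic. A sketch in 8 bit values (real software works at higher precision, so treat this as illustrative only):

```python
LEGAL_BLACK, LEGAL_WHITE = 16, 235   # super-whites sit above CV235
FULL_BLACK, FULL_WHITE = 0, 255

def full_to_legal(cv):
    # Correctly conform a full range code value into legal range.
    return LEGAL_BLACK + cv * (LEGAL_WHITE - LEGAL_BLACK) / FULL_WHITE

print(round(full_to_legal(0)))     # full range black lands on legal CV16
print(round(full_to_legal(255)))   # full range white lands on legal CV235
```

The trouble described below comes from software applying this kind of scaling in the wrong direction, because the file lacks the metadata saying which range it actually uses.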
Let’s have a look at what happens when you import an internally recorded S-Log2 .mp4 file from a Sony A7S into Adobe Premiere:
A few things to note here. One is Adobe’s somewhat funky scopes, where the 8 bit code values don’t line up with the IRE values normally used for video production. Normally 8 bit code value 235 would be 100IRE or 100%, but for some reason Adobe have code value 255 lined up with 100%. My suspicion is that the scope’s % scale is not video % or IRE but RGB%, which is really confusing. A further complication is that Adobe have code value 0 as black; again I think, but am not sure, that this is RGB code value 0 (in the world of video, black should be code value 16). Either way, the scopes appear to work such that 0 is black and 100 is full scale video out, and anything above 100 or below 0 will be clipped in any file you render out.
Looking at the scopes in the screen grab above, the top step on the grey scale chart is around code value 252. That is the code value you would expect; it lines up nicely with where the peak of an S-Log2 recording should be. This all looks correct, nothing goes above 100 or below 0, so nothing will be clipped.
So now let’s look at an external ProRes recording, made at exactly the same time as the internal recording, and see what Premiere does with it:
OK, we can see straight away that something isn’t quite right here. In an 8 bit recording it should be impossible to have a code value higher than 255, yet the scopes suggest the recording has a peak code value of around CV275. That is impossible, so alarm bells should be ringing. In addition the S-Log2 appears to go above 100, which means that if I were simply to export this as a new file, the top of the recording would be clipped and it wouldn’t match the original. This is very clearly not right.
Now let’s take a look at what happens in Adobe Premiere when you apply Sony’s standard S-Log2 to Rec-709 LUT to a correctly exposed internal recording:
This all looks good and as expected. Blacks are sitting just above the 0 line (which I think we can safely assume is black) and the whites of the picture are around code value 230, or 90 on the % scale, whatever that really means. But they are certainly nice and bright and not in the range that will be clipped, so I can believe this is more or less correct and as expected.
So next I’m going to add the same standard LUT to the external recording to see what happens.
OK, this is clearly not right. Our blacks now go below the 0 line and they look clipped. The highlights don’t look totally out of place, but clearly something goes very, very wrong when we apply this normal LUT to this correctly exposed external recording. There is no way our blacks should be going below zero, and they look crushed and clipped. The internal recording didn’t behave like this, so what is going on with the external recording?
To try to figure this out, let’s take a look at the same files in DaVinci Resolve. For a start I trust the scopes in Resolve much more, and it is a far better programme for managing different types of files. First we will look at the internal S-Log2 recording:
Once again the levels of the internal S-Log2 recording look absolutely fine. Our peak is around code value 1010, which would be 252 in 8 bit: right where the brightest parts of an S-Log2 file should be. Now let’s take a look at the external recording.
If you compare the two screen grabs above you can see that the levels are exactly the same. Our peak level is around CV1010/CV252, just where it should be, and the blacks look the same too. The internal and external recordings have the same levels and look the same; there is no difference (other than perhaps less compression and fewer artefacts in the ProRes file). There is nothing wrong with either of these recordings, and certainly nothing wrong with the way Sony record S-Log2 internally. This is exactly what I expect to see.
BUT – I’ve been a little bit sneaky here. Because I knew the external recording was full range, I told DaVinci Resolve to treat it as such: in the media bin I right clicked on the clip and, under “Clip Attributes”, changed the input range from “Auto” to “Full”. If you don’t do this, DaVinci Resolve will assume the ProRes file is legal range and will scale the clip incorrectly, in the same way Premiere does. But if you tell Resolve the clip is full range, it is handled correctly.
This is what it looks like if you allow Resolve to guess at what range the S-Log2 full range clip is by leaving the input range setting to “auto”:
In the above image we can see how the clip becomes clipped in Resolve, because in a legal range recording anything over CV235/CV940 would be an illegal super-white. Resolve is scaling the clip and pushing anything in the original file that was above CV235/CV940 off the top of the scale. The scaling is incorrect because Resolve doesn’t know the clip is supposed to be full range and therefore should not be scaled. If we compare this to what Premiere did with the external recording, it’s actually very similar: Premiere also scaled the clip, only Premiere shows all those “illegal” levels above its 100 line instead of clipping them as Resolve does. That’s why Premiere can show those “impossible” 8 bit code values going up to CV275.
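You can check that CV275 figure with a little arithmetic. This sketch applies the legal-to-full stretch that software performs when it wrongly assumes a full range clip is legal range:

```python
def legal_to_full(cv):
    # The stretch applied when a clip is assumed to be legal range:
    # CV16 maps to 0 and CV235 maps to 255.
    return (cv - 16) * 255 / (235 - 16)

# The S-Log2 peak in a full range file sits around CV252...
print(round(legal_to_full(252)))   # ...which gets stretched to roughly CV275

# Resolve performs the same stretch but clips anything above CV255
# instead of showing it over the top of the scale as Premiere does.
print(min(round(legal_to_full(252)), 255))
```

So both applications are doing the same incorrect scaling; they just display the out-of-range result differently.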
Just to be complete, I also tested the internal .mp4 recordings in Resolve, switching between “Auto” and “Full” range, and in both cases the levels stayed exactly the same. This shows that Resolve is correctly handling the internally recorded full range S-Log as full range.
What about if you add a LUT? Well, you MUST tell Resolve to treat the S-Log2 ProRes clip as a full range clip, otherwise the LUT will not be right. If your footage is S-Log3 you also have to tell Resolve that it is full range:
Both the internal and external recordings are actually exactly the same. Both have the same levels; both use FULL range. There is absolutely nothing wrong with Sony’s internal recordings. The problem stems from the way most software assumes that ProRes files are legal range, when an S-Log2 or S-Log3 recording will in fact be full (data) range. Handling a full range clip as legal range means that highlights will be too bright or clipped and blacks will be crushed, so it’s really important that your software handles the footage correctly. If you are shooting S-Log3 this problem is harder to spot, as S-Log3 has a peak recording level that is well within the legal range, so you often won’t realise it’s being scaled incorrectly because it won’t necessarily look clipped. If you use LUTs and your ProRes clips look crushed, or the highlights look clipped, you need to check that the input scaling is correct. It’s really important to get this right.
Why is there no difference between the levels when you shoot with a Cinegamma? Well, when you shoot with a Cinegamma the internal recordings are legal range, so the internal recordings get treated as legal range and so do the external recordings; they don’t appear to be different. (In the YouTube video that led to this post, the author discovers that if you start recording with a normal profile and then switch to a log profile while recording, the internal and external files will match. But this is because the internal recording now has incorrect metadata, so it too gets scaled incorrectly: both the internal and external files are now wrong, but the same.)
Once again: There is nothing wrong with the internal recordings. The problem is with the way the external recordings are being handled. The external recordings haven’t been recorded incorrectly, they have been recorded as they should be. The problem is the edit software is incorrectly interpreting the external recordings. The external recordings don’t have the necessary metadata to mark the files as full range because the recorder is external to the camera and doesn’t know what it’s being sent by the camera. This is a common problem when using external recorders.
What can we do in Premiere to make Premiere work right with these files?
You don’t need to do anything in Premiere for the internal .mp4 recordings; they are handled correctly. But Premiere isn’t handling the full/data range ProRes files correctly.
My approach has always been to use the legacy Fast Color Corrector filter to transform the input range to the required output range. If you apply the Fast Color Corrector to a clip you can use the input and output level sliders to set the input and output ranges. In this case we need to set the output black level to CV16 (legal range black) and the output white to CV235 (legal range white). If you do this you will see that the external recording now has almost exactly the same values as the internal recording. However, there is some non-linearity in the transform; it’s not quite perfect, so if anyone knows of a better way to do this please do let me know.
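Numerically, that Fast Color Corrector fix is just the inverse of the incorrect stretch: mapping the clip’s 0–255 back onto 16–235. A sketch of the idealised maths (as noted above, the real filter isn’t perfectly linear):

```python
def fast_cc_levels(cv, out_black=16, out_white=235):
    # Rescale 0-255 onto the chosen output black and white points.
    return out_black + (cv / 255) * (out_white - out_black)

# Premiere's wrong legal range assumption stretches the external S-Log2
# peak from about CV252 to about CV275; the corrector squeezes it back.
stretched_peak = (252 - 16) * 255 / (235 - 16)
print(round(fast_cc_levels(stretched_peak)))   # back to roughly CV252
```

Applying one scaling to undo the other is exactly why the external clip then matches the internal one so closely.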
Now when you apply a LUT, the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost, because there is a slight hue shift and I don’t know where it comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift; in Premiere they are not quite the same, and I don’t know why. My recommendation: use Resolve. It’s so much better for anything that needs any form of grading or colour correction.
Blackmagic Design’s DaVinci Resolve is a really amazing piece of software, especially given that there is a free version that packs in almost all of the power of the paid Studio version.
Today, post production grading is becoming an ever more important part of the video production process. In the past, the basic colour correction functions of most edit applications were enough for most people. But now, if you are shooting log or raw, it’s very important that you have the right toolset to take advantage of the benefits that log and raw offer.
For decades I have used Adobe Premiere for my editing and it has allowed me to create many great videos from broadcast TV series to simple corporates. As an edit application it’s still pretty solid. But now I shoot almost everything using log and raw and I have never been completely happy with the results from Premiere, even with Lumetri.
So I started to do my grading in Resolve and I have never looked back. The degree of control I have in Resolve is much greater. There are wonderful features such as DaVinci’s own colour managed workflow, or the ACES workflow, which make dealing with log and raw from virtually any camera a breeze. If you want a film look, choose ACES; for punchier looks, choose DaVinci colour management. You don’t need LUTs, exposure adjustments are easy, and you can then add all kinds of secondary corrections such as power windows quickly and easily. The colour managed workflows are particularly beneficial if you wish to produce HDR versions of your productions.
But until recently mine was a two stage workflow: edit in Premiere, then grade and finish in Resolve. The last couple of versions of Resolve, however, have seen some huge advances in editing speed and capabilities. The editor is now as good as anyone else’s, so I am now editing in Resolve too. It’s very similar to Premiere, so it didn’t take long to make the switch.
One question that I am often asked is where to find good training information and guides for Resolve. Well clearly Blackmagic Design have been listening as they have now released a series of videos that will help guide anyone new to Resolve through the basics. In total there are 8 hours of easy to follow video. The manual is also pretty good!
If you have never tried Resolve then I really urge you to give it a go. It is an incredibly powerful piece of software. It isn’t difficult to master once you see how it’s laid out, how the different “rooms” work and how to use nodes. When I started with it I really found it all quite logical. You start in the “media” room to bring in your material, then progress on to the edit room for editing, finishing in the deliver room to encode and produce your master files and other output versions.
So do take a look at the videos linked below if you want to learn more about Resolve, and do give it a try. Remember, the free version will do almost everything that the full version will. The full Studio version isn’t expensive and features one of the best suites of noise reduction tools anywhere. It costs a one off payment of $299.00 USD, with no silly subscription fees to keep paying as with Adobe!
One last thing before I get to the videos: If you do a lot of grading you really should get a proper control panel. I have the Blackmagic micro panel and this really speeds up my grading. If you don’t have a panel you can only adjust a single grading parameter at a time. With a panel you can do things like bringing up the gain while pulling down the black level. This allows you to see the interaction between your different adjustments much more dynamically and it’s just plain faster. Most of the key functions have dedicated controls so you can quickly dial in a bit of contrast, switch to log mode, bypass a node and boost the saturation all through direct controls, very much quicker than with just a mouse. The use of the micro panel has probably halved the amount of time it takes me to grade a typical project – and – I’m getting a better result because it’s more intuitive.
So here are the videos:
Introduction to Editing.
Fusion Part 1. VFX and Graphics
Fusion Part 2. 3D FX
Fairlight Audio Part 1.
Fairlight Audio Part 2.
Delivery and Encoding.
DaVinci Resolve Mini Panel.
I’m often asked at the various workshops I run why I don’t grade in Adobe Premiere. Here’s why – they can’t even get basic import levels right.
Below are two screen grabs. The first is from Adobe Premiere CC 2019 and shows an ungraded, as shot, HLG clip. Shot with a Sony Z280 (love that little camera). Note how the clip appears grossly over exposed with a nuclear looking sky and clipped snow – it doesn’t look nice. Also note that the waveform suggests the clip’s peak levels exceed 110%. Now I know for a fact that if you shoot HLG with any Sony camera white will never exceed 100%.
The second screen grab is from DaVinci Resolve and it shows the same clip. Note how in Resolve, although bright, the clip certainly doesn’t look over exposed as it does in Premiere. Note also how the levels shown by the waveform no longer exceed code value 869 (100% white is 940). These are the correct and expected levels; this is how the clip is supposed to look. Not the utter nonsense that Adobe creates.
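For anyone wanting to sanity check their own waveforms, the relationship between 10-bit code values and percentage levels is easy to work out. Here is a quick illustrative sketch using the standard narrow-range levels mentioned above (black at code value 64, 100% white at 940):

```python
# Mapping 10-bit "narrow range" video code values to percentage levels.
BLACK_CV = 64    # 0% black in 10-bit narrow range
WHITE_CV = 940   # 100% white in 10-bit narrow range

def cv_to_percent(cv: int) -> float:
    """Convert a 10-bit narrow-range code value to a percentage level."""
    return (cv - BLACK_CV) / (WHITE_CV - BLACK_CV) * 100.0

# The waveform peak of 869 seen in Resolve works out to roughly 92%:
print(round(cv_to_percent(869), 1))  # ~91.9
print(round(cv_to_percent(940), 1))  # 100.0
```

So a peak at code value 869 sits at roughly 92%, comfortably below 100% white – exactly what you would expect from a correctly interpreted HLG recording.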
Why can’t Adobe get this right? This problem has existed for ages and it really screws up your footage. If you are using S-Log and you try to add a LUT then things get even worse, as the LUT expects the correct levels, not these totally incorrect ones.
Take the SDI or raw out from the camera and record a ProRes file on something like a Shogun while recording XAVC internally and the two files look totally different in Premiere but they look the same in Resolve. Come on Adobe – you should be doing better than this.
If they can’t even bring clips in at the correct levels, what hope is there of being able to get a decent grading output? I can make the XAVC clips look OK in Premiere but I have to bring the levels down to do this. I shouldn’t have to. I exposed it right when I shot it so I expect it to look right in my edit software.
I have been editing with Adobe Premiere since around 1994. I took a rather long break from Premiere between 2001 and 2011 and switched over to Apple and Final Cut Pro, which in many ways used to be very similar to Premiere (I think some of the same software writers worked on FCP as on Premiere). My FCP edit stations were always multi-core Mac towers – the old G5s first, then later the Intel towers. Then along came FCP-X. I just didn’t get along with FCP-X when it first came out. I’m still not a huge fan of it now, but will happily concede that FCP-X is a very capable, professional edit platform.
So in 2011 I switched back to Adobe Premiere as my edit platform of choice. Along the way I have also used various versions of Avid’s software, which is another capable platform.
But right now I’m really not happy with Premiere. Over the last couple of years it has become less stable than it used to be. I run it on a MacBook Pro, which is a well defined hardware platform, yet I still get stability issues. I’m also experiencing problems with gamma and level shifts that just shouldn’t be there. In addition Premiere is not very good with many long GOP codecs – FCP-X seems to make light work of XAVC-L compared to Premiere. Furthermore Adobe’s Media Encoder, which once used to be one of the first encoders to get new codecs or features, is now lagging behind. Apple’s Compressor can now produce the full range of HDR files; Media Encoder can only do HDR10. If you don’t know, it is possible to buy Compressor on its own.
Meanwhile DaVinci Resolve has been my grading platform of choice for a few years now. I have always found it much easier to get the results and looks that I want from Resolve than from any edit software – this isn’t really a surprise as after all that’s what Resolve was originally designed for.
The last few versions of Resolve have become much faster thanks to some major processing changes under the hood, and in addition there has been a huge amount of work on Resolve’s edit capabilities. It can now be used as a fully featured edit platform. I recently used Resolve to edit some simpler projects that were going to be graded, as this way I could stay in the same software for both processes – and you know what, it’s a pretty good editor. There are however a few things that I find a bit funky and frustrating in the edit section of Resolve at the moment. Some of that may simply be because I am less familiar with it for editing than I am with Premiere.
Anyway, on to my point. Resolve is getting to be a pretty good edit platform and it’s only going to get better. We all know that it’s a really good and very powerful grading platform and with the recent inclusion of the Fairlight audio suite within Resolve it’s pretty good at handling audio too. Given that the free version of Resolve can do all of the edit, sound and grading functions that most people need, why continue to subscribe to Adobe or pay for FCP-X?
The cost of the latest generations of Apple computers is expanding the price gap between them and similarly specced Windows machines, and the new MacBooks lack built in ports like HDMI and USB 3 that we all use every day (you now have to use adapters and dongles). The Apple ecosystem is just not as attractive as it used to be. Resolve is cross platform, so a Mac user can stay with Apple if they wish, or move over to Windows or Linux whenever they want with Resolve. You can even switch platforms mid project if you want. I could start an edit on my MacBook and then do the grade on a PC workstation, staying with Resolve through the complete process.
Even if you need the extra features of the full version like very good noise reduction, facial recognition, 4K DCI output or HDR scopes then it’s still good value as it currently only costs $299/£229 which is less than a years subscription to Premiere CC.
But what about the rest of the Adobe Creative suite? Well you don’t have to subscribe to the whole suite – you can just get Photoshop or After Effects. But there are also many alternatives. Again Blackmagic Design have Fusion 9, which is a very impressive VFX package used for many Hollywood movies, and like Resolve there is also a free version with a very comprehensive tool set. Or again, for just $299/£229 you get the full version with all its retiming tools etc.
For a Photoshop replacement you have GIMP, which can do almost everything that Photoshop can do. You can even use Photoshop filters within GIMP. The best part is that GIMP is free and works on both Macs and PCs.
So there you have it – it looks like Blackmagic Design are really serious about taking a big chunk of Adobe Premiere’s users. Resolve and Fusion are cross platform so, like Adobe’s products, it doesn’t matter whether you want to use a Mac or a PC. But for me the big thing is that you own the software. You are not going to be paying out rather a lot of money month on month for something that right now is, in my opinion, somewhat flakey.
I’m not quite ready to cut my Creative Cloud subscription yet, maybe on the next version of Resolve. But it won’t be long before I do.
With the first of the production Venice cameras now starting to find their way to some very lucky owners it’s time to take a look at some features that are not always well understood, or that perhaps no one has told you about yet.
Dual Native ISOs: What does this mean?
An electronic camera uses a piece of silicon to convert photons of light into electrons of electricity. The efficiency at doing this is determined by the material used. Then the amount of light that can be captured and thus the sensitivity is determined by the size of the pixels. So, unless you physically change the sensor for one with different sized pixels (which will in the future be possible with Venice) you can’t change the true sensitivity of the camera. All you can do is adjust the electronic parameters.
With most video cameras the ISO is changed by increasing the amount of amplification applied to the signal coming off the sensor. Adding more gain or increasing the amplification will result in a brighter picture. But if you add more amplification/gain then the noise from the sensor is also amplified by the same amount. Make the picture twice as bright and normally the noise doubles.
In addition there is normally an optimum amount of gain where the full range of the signal coming from the sensor will be matched perfectly with the full recording range of the chosen gamma curve. This optimum gain level is what we normally call the “Native ISO”. If you add too much gain the brightest signal from the sensor would be amplified too much and exceed the recording range of the gamma curve. Apply too little gain and your recordings will never reach the optimum level and darker parts of the image may be too dark to be seen.
As a result the Native ISO is where you have the best match of sensor output to gain. Not too much, not too little, and hopefully low noise. This is typically also referred to as 0dB gain in an electronic camera, and normally there is only one gain level where this perfect harmony between sensor, gain and recording range is achieved – this becomes the native ISO.
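The relationship between gain in dB and the linear amplification applied is logarithmic and worth keeping in mind: every 6dB of added gain roughly doubles both the signal and the noise. A quick illustrative sketch of the conversion:

```python
import math

def db_to_linear(db: float) -> float:
    """Voltage-style gain: +6dB is roughly a doubling (about one stop)."""
    return 10 ** (db / 20)

def linear_to_db(ratio: float) -> float:
    """Inverse conversion: linear amplification factor to dB."""
    return 20 * math.log10(ratio)

print(round(db_to_linear(6), 2))   # ~2.0: one stop brighter, noise also roughly doubled
print(round(linear_to_db(4), 1))   # ~12.0: 12dB of gain is a 4x amplification
```

This is why simply cranking up the gain to reach a higher ISO carries such a visible noise penalty on a conventional camera.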
Side Note: On an electronic camera ISO is an exposure rating, not a sensitivity measurement. Enter the camera’s ISO rating into a light meter and you will get the correct exposure. But it doesn’t really tell you how sensitive the camera is, as ISO makes no allowance for increasing noise levels, which will limit the darkest thing a camera can see.
Tweaking the sensor.
However, there are some things we can tweak on the sensor that affect how big the signal coming from the sensor is. The sensor’s pixels are analog devices. A photon of light hits the silicon photo receptor (pixel) and it gets converted into an electron of electricity, which is then stored within the structure of the pixel as an analog signal until the pixel is read out by a circuit that converts the analog signal to a digital one, at the same time adding a degree of noise reduction. It’s possible to shift the range that the A to D converter operates over and the amount of noise reduction applied to obtain a different readout range from the sensor. By doing this (and/or other similar techniques – Venice may use some other method) it’s possible to produce a single sensor with more than one native ISO.
A camera with dual ISOs will have two different operating ranges, one tuned for higher light levels and one tuned for lower light levels. Venice will have two exposure ratings: 500 ISO for brighter scenes and 2500 ISO for shooting when you have less light. With a conventional camera, to go from 500 ISO to 2500 ISO you would need to add around 14dB of gain and this would increase the noise by a factor of about 5. However with Venice and its dual ISOs, as we are not adding gain but instead altering the way the sensor is operating, the noise difference between 500 ISO and 2500 ISO will be very small.
You will have the same dynamic range at both ISOs. But you can choose whether to shoot at 500 ISO for super clean images at a sensitivity not that dissimilar to traditional film stocks. This low ISO makes it easy to run lenses at wide apertures for the greatest control over the depth of field. Or you can choose to shoot at the equivalent of 2500 ISO without incurring a big noise penalty.
One of Venice’s key features is that it’s designed to work with anamorphic lenses. Anamorphic lenses are typically not as fast as their spherical counterparts. Furthermore some anamorphic lenses (particularly vintage lenses) need to be stopped down a little to prevent excessive softness at the edges. So having a second, higher ISO rating will make it easier to work with slower lenses or in lower light.
COMBINING DUAL ISO WITH 1-STOP NDs.
Next it’s worth thinking about how you might want to use the cameras ND filters. Film cameras don’t have built in ND filters. An Arri Alexa does not have built in ND’s. So most cinematographers will work on the basis of a cinema camera having a single recording sensitivity.
The ND filters in Venice provide uniform, full spectrum light attenuation. Sony are incredibly fussy over the materials they use for their ND filters and you can be sure that the filters in Venice do not degrade the image. I was present for the pre-shoot tests for the European demo film and a lot of time was spent testing them. We couldn’t find any issues. If you introduce 1 stop of ND, the camera becomes 1 stop less sensitive to light. In practice this is no different to having a camera with a sensor 1 stop less sensitive. So the built in ND filters, can if you choose, be used to modify the base sensitivity of the camera in 1 stop increments, up to 8 stops lower.
So with the dual ISOs and the NDs combined you have a camera that you can set up to operate at the equivalent of 2 ISO all the way up to 2500 ISO in 1 stop steps (by using 2500 ISO and 500 ISO together you can have approximately half-stop steps between 10 ISO and 650 ISO). That’s an impressive range, and at no stage are you adding extra gain. There is no other camera on the market that can do this.
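To put some illustrative numbers on that, here is a quick sketch of the effective sensitivity ladder you get by combining the two base ISOs with up to 8 stops of ND, each stop of ND halving the effective ISO. These are my own back-of-envelope figures, not official Sony specifications:

```python
# Effective ISO ladder: two base ISOs, each combined with 0-8 stops of ND.
BASE_ISOS = [500, 2500]
MAX_ND_STOPS = 8

ladder = sorted({iso / 2 ** nd for iso in BASE_ISOS
                 for nd in range(MAX_ND_STOPS + 1)})

print(round(min(ladder), 1), "-", max(ladder))  # runs from roughly 2 up to 2500
# Between roughly 10 and 625 the two ladders interleave,
# giving finer-than-1-stop steps.
```

Because 2500 is not a whole number of stops above 500 (it is about 2.3 stops), the two ND ladders never land on the same value, which is what produces the interleaved intermediate steps.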
On top of all this we do of course still have the ability to alter the Exposure Index of the camera’s LUTs to offset the exposure and move the exposure mid point up and down within the dynamic range. Talking of LUTs, I hope to have some very interesting news about the LUTs for Venice. I’ve seen a glimpse of the future and I have to say it looks really good!
The raw and X-OCN material from a Venice camera (and from a PMW-F55 or F5 with the R7 recorder) contains a lot of dynamic metadata. This metadata tells the decoder in your grading software exactly how to handle the linear sensor data stored in the files. It tells your software where in the recorded data range the shadows start and finish, where the mid range sits and where the highlights start and finish. It also informs the software how to decode the colors you have recorded.
I recently spent some time with Sony Europe’s color grading guru Pablo Garcia at the Digital Motion Picture Center in Pinewood. He showed me how you can manipulate this metadata to alter the way the X-OCN is decoded, changing the look of the images you bring into the grading suite. Using a beta version of Blackmagic’s DaVinci Resolve software, Pablo was able to go into the clip’s metadata in real time and, simply by scrubbing over the metadata settings, adjust the shadows, mids and highlights BEFORE the X-OCN was decoded. It was really incredible to see the amount of data that Venice captures in the highlights and shadows. By adjusting the metadata you are tailoring the way the file is being decoded to suit your own needs and getting the very best video information for the grade. Need more highlight data – you got it. Want to boost the shadows – you can, at the file data level, before it’s converted to a traditional video signal.
It’s impressive stuff, as you are manipulating the way the 16 bit linear sensor data is decoded rather than following a traditional workflow, which is to decode the footage to a generic intermediate file and then adjust that. This is just one of the many features that X-OCN from the Sony Venice offers. It’s even more incredible when you consider that a 16 bit linear X-OCN LT file is similar in size to 10 bit XAVC-I (Class 480) and around half the size of Apple’s 10 bit ProRes HQ. X-OCN LT looks fantastic and in my opinion grades better than XAVC S-Log. Of course for a high end production you will probably use the regular X-OCN ST codec rather than the LT version, but ST is still smaller than ProRes HQ. What’s more, X-OCN is not particularly processor intensive – it’s certainly much easier to work with X-OCN than cDNG. It’s a truly remarkable technology from Sony.
Next week I will be shooting some more tests with a Venice camera as we explore the limits of what it can do. I’ll try and get some files for you to play with.
Please don’t take this post the wrong way. I DO understand why some people like to try and emulate film. I understand that film has a “look”. I also understand that for many people that look is the holy grail of film production. I’m simply looking at why we do this and am throwing the big question out there which is “is it the right thing to do”? I welcome your comments on this subject as it’s an interesting one worthy of discussion.
In recent years, with the explosion of large sensor cameras with great dynamic range, it has become very common practice to take the images these cameras capture and apply a grade or LUT that mimics the look of many of today’s major movies. This is often simply referred to as the “film look”.
This look seems to be becoming more and more extreme as creators attempt to make their film more film like than the one before, leading to a situation where the look becomes very distinct as opposed to just a trait of the capture medium. A common technique is the “teal and orange” look, where the overall image is tinted teal and then skin tones and other similar tones are made slightly orange. This is done to create colour contrast between the faces of the cast and the background, as teal and orange are on opposite sides of the colour wheel.
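As an aside, the basic mechanics of that split-tone are simple enough to sketch. This is purely illustrative toy code operating on a single normalized RGB pixel – real grades are done on log footage with qualifiers and proper colour management, not like this:

```python
def teal_orange(r: float, g: float, b: float, strength: float = 0.1):
    """Crude teal/orange split-tone: shadows pushed toward teal (cool),
    brighter skin-like tones pushed toward orange (warm)."""
    clamp = lambda v: min(max(v, 0.0), 1.0)
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec.709 luma weights
    w = (luma - 0.5) * 2.0                        # -1 in shadows, +1 in highlights
    # Orange adds red and removes blue; teal does the opposite.
    return clamp(r + strength * w), g, clamp(b - strength * w)
```

Run on a dark grey pixel this nudges it toward blue/teal; run on a bright one it warms it up. Colourists of course have far more control (qualifiers, curves, separate saturation handling), but the colour-contrast principle is the same.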
Another variation of the “film look” is the flat look. I don’t really know where this look came from, as it’s not really very film like at all. It probably comes from shooting with a log gamma curve, which results in a flat, washed out looking image when viewed on a conventional monitor. Then, because shooting on log is “cool”, much of the flatness is left in the image in the grade because it looks different to regular TV (or it may simply be that it’s easier to create a flat look than a good looking high contrast look). Later in the article I have a nice comparison of these two types of “film look”.
Not Like TV!
Not looking like TV or Video may be one of the biggest drivers for the “film look”. We watch TV day in, day out. Well produced TV will have accurate colours, natural contrast (over a limited range at least) and if the TV is set up correctly should be pretty true to life. Of course there are exceptions to this like many daytime TV or game shows where the saturation and brightness is cranked up to make the programmes vibrant and vivid. But the aim of most TV shows is to look true to life. Perhaps this is one of the drivers to make films look different, so that they are not true to life, more like a slightly abstract painting or other work of art. Colour and contrast can help setup different moods, dull and grey for sadness, bright and colourful for happy scenes etc, but this should be separate from the overall look applied to a film.
Another aspect of the TV look comes from the fact that most TV viewing takes place in a normal room where light levels are not controlled. As a result bright pictures are normally needed, especially for daytime TV shows.
But What Does Film Look Like?
But what does film look like? As some of you will know I travel a lot and spend a lot of time on airplanes. I like to watch a film or 2 on longer flights and recently I’ve been watching some older films that were shot on film and probably didn’t have any of the grading or other extensive manipulation processes that most modern movies go through.
Lets look at a few frames from some of those movies, shot on film and see what they look like.
The all time classic Lawrence of Arabia. This film is surprisingly colourful. Red, blues, yellows are all well saturated. The film is high contrast. That is, it has very dark blacks, not crushed, but deep and full of subtle textures. Skin tones are around 55 IRE and perhaps very slightly skewed towards brown/red, but then the cast are all rather sun tanned. But I wouldn’t call the skin tones orange. Diffuse whites typically around 80 IRE and they are white, not tinted or coloured.
When I watched Braveheart, one of the things that stood out to me was how green the foliage and grass was. The strong greens really stood out in this movie compared to more modern films. Overall it’s quite dark, skin tones are often around 45 IRE and rarely more than 55 IRE, very slightly warm/brown looking, but not orange. Again it’s well saturated and high contrast with deep blacks. Overall most scenes have a quite low peak and average brightness level. It’s quite hard to watch this film in a bright room on a conventional TV, but it looks fantastic in a darkened room.
Raiders of the Lost Ark does show some of the attributes often used for the modern film look. Skin tones are warm and have a slight orange tint and overall the movie is very warm looking. A lot of the sets use warm colours with browns and reds being prominent. Colours are well saturated. Again we have high contrast with deep blacks and those much lower than TV skin tones, typically 50-55IRE in Raiders. Look at the foliage and plants though, they are close to what you might call TV greens, ie realistic shades of green.
A key thing I noticed in all of these (and other) older movies is that overall the images are darker than we would use for daytime TV. Skin tones in movies seem to sit around 55IRE. Compare that to the typical use of 70% zebras for faces on TV. Also whites are generally lower, often diffuse white sitting at around 75-80%. One important consideration is that films are designed to be shown in dark cinema theatres where white at 75% looks pretty bright. Compare that to watching TV in a bright living room where to make white look bright you need it as bright as you can get. Having diffuse whites that bit lower in the display range leaves a little more room to separate highlights from whites giving the impression of a greater dynamic range. It also brings the mid range down a bit so the shadows also look darker without having to crush them.
Side Note: When using Sony’s Hypergammas and Cinegammas they are supposed to be exposed so that white is around 70-75% with skin tones around 55-60%. If used like this with a suitable colour matrix such as “cinema” they can look quite film like.
If we look at some recent movies the look can be very different.
The Revenant is a gritty film and it has a gritty look. But compare it to Braveheart and it’s very different. We have the same much lower skin tone and diffuse white levels, but where has the green gone? The sky is very pale. The sky and trees are all tinted slightly towards teal and de-saturated. Overall there is only a very small colour range in the movie. Nothing like the 70mm film of Lawrence of Arabia or the 35mm film of Braveheart.
In the latest instalment of the Pirates of the Caribbean franchise the images are very “brown”. Notice how even the whites of the ladies dresses or soldiers uniforms are slightly brown. The sky is slightly grey (I’m sure the sky was much bluer than this). The palm tree fronds look browner than green and Jack Sparrow looks like he’s been using too much fake tan as his face is border line orange (and almost always also quite dark).
Wonder Woman is another very brown movie. In this frame we can see that the sky is quite brown. Meanwhile the grass is pushed towards teal and de-saturated; it certainly isn’t the colour of real grass. Overall colours are subdued, with the exception of skin tones.
These are fairly typical of most modern movies. Colours generally quite subdued, especially greens and blues. The sky is rarely a vibrant blue, grass is rarely a grassy green. Skin tones tend to be very slightly orange and around 50-60IRE. Blacks are almost always deep and the images contrasty. Whites are rarely actually white, they tend to be tinted either slightly brown or slightly teal. Steel blues and warm browns are favoured hues. These are very different looking images to the movies shot on film that didn’t go through extensive post production manipulation.
So the film look, isn’t really about making it look like it was shot on film, it’s a stylised look that has become stronger and stronger in recent years with most movies having elements of this look. So in creating the “film look” we are not really mimicking film, but copying a now almost standard colour grading recipe that has some film style traits.
BUT IS IT A GOOD THING?
In most cases these are not unpleasant looks and for some productions the look can add to the film, although sometimes it can be taken to noticeable and objectionable extremes. However we do now have cameras that can capture huge colour ranges. We also have the display technologies to show these enormous colour ranges. Yet we often choose to deliberately limit what we use and very often distort the colours in our quest for the “film look”.
HDR TV’s with Rec2020 colour can show both a greater dynamic range and a greater colour range than we have ever seen before. Yet we are not making use of this range, in particular the colour range except in some special cases like some TV commercials as well as high end wild life films such as Planet Earth II.
This TV commercial for TUI has some wonderful vibrant colours that are not restricted to just browns and teal, yet it looks very film like. It does have an overall warm tint, but the other colours are allowed to punch through. It feels like the big budget production that it clearly was, without having to resort to the modern de facto restrictive film look colour palette. Why can’t feature films look like this? Why do they need to be dull with a limited colour range? Why do we strive to deliberately restrict our colour palette in the name of fashion?
What’s even more interesting is what was done for the behind the scenes film for the TUI advert…..
The producers of the BTS film decided to go with an extremely flat, washed out look – another form of modern “film look” that really couldn’t be further from film. When a typical viewer watches this, do they get it in the same way as those of us who work in the industry do? Do they understand the significance of the washed out, flat, low contrast pictures, or do they just see weird looking milky pictures that lack colour with odd skin tones? The BTS film just looks wrong to me. It looks like it was shot with log and not graded. Personally, I don’t think it looks cool or stylish; it just looks wrong and cheap compared to the lush imagery in the actual advert (perhaps that was the intention).
I often see people looking for a film look LUT. Often they want to mimic a particular film. That’s fine, it’s up to them. But if everyone starts to home in on one particular look or style then the films we watch will all look the same. That’s not what I want. I want lush rich colours where appropriate. Then I might want to see a subdued look in a period piece or a vivid look for a 70’s film. Within the same movie colour can be used to differentiate between different parts of the story. Take Woody Allen’s Cafe Society, shot by Vittorio Storaro, for example. The New York scenes are grey and moody while the scenes in LA that portray a fresh start are vibrant and vivid. This is, I believe, important: to use colour and contrast to help tell the story.
Our modern cameras give us an amazing palette to work with. We have the tools such as DaVinci Resolve to manipulate those colours with relative ease. I believe we should be more adventurous with our use of colour. Reducing exposure levels a little compared to the nominal TV and video levels (skin tones at 70%, diffuse whites at 85-90%) helps replicate the film look and also leaves a bit more space in the highlight range to separate highlights from whites, which really helps give the impression of a more contrasty image. Blacks should be black, not washed out, and they shouldn’t be crushed either.
Above all else learn to create different styles. Don’t be afraid of using colour to tell your story and remember that real film isn’t just brown and teal, it’s actually quite colourful. Great artists tend to stand out when their works are different, not when they are the same as everyone else.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.