There is a bug in some versions of DaVinci Resolve 17 that can cause frames in some XAVC files to be rendered in the wrong order. The result is a render where the video appears to stutter or the motion jumps backwards for a frame or two. This has now been fixed in version 17.3.2, so all users of XAVC and DaVinci Resolve are urged to upgrade to at least version 17.3.2.
As that raw frame is still nothing more than a normal bitmap, all the camera's settings such as white balance, ISO etc are in fact baked into the recording. Each pixel has only one single value, and that value will have been determined by the way the camera is set up. Nothing you do in post production can change what was actually recorded. Most CMOS sensors are daylight balanced, so unless the camera adjusts the white balance prior to recording – which is what Sony normally do – your raw recording will be daylight balanced.
Modern cameras when shooting log or raw also record metadata that describes how the camera was set when the image was captured.
So the recorded raw file already has a particular white balance and ISO. I know lots of people will be disappointed to hear this or simply refuse to believe it, but that's the truth about a raw bitmap image: a single code value for each pixel, and that value is determined by the camera settings.
This can be adjusted later in post production, but the adjustment range is not unlimited and it is not the same as making an adjustment in the camera. Plus there can be consequences for image quality if you make large adjustments.
But then many people will tell you that you can't grade log as well as raw. Often they cite photography as an example, where there is a huge difference between what you can do with a raw photo and a normal image. But we also have to remember that this is typically comparing a highly compressed 8 bit JPEG with an often uncompressed 12 or 14 bit raw file. It's not a fair comparison; of course you would expect the 14 bit file to be better.
The other argument often given is that it's very hard to change the white balance of log in post: it doesn't look right or it falls apart. Often these issues are nothing to do with the log recording and more to do with the tools being used.
It's the reduced compression and deeper bit depth possible with raw that can lead to higher quality recordings, and as a result may bring some grading advantages compared to a normal ProRes or other compressed file. The best bit is there is no significant file size penalty. So you have the same amount of data, but your data should be of higher quality. Given that you won't need more storage, which should you use? The higher bit depth, less compressed file or the more compressed file?
But not all raw files are the same. Some cameras feature highly compressed 10 bit raw, which frankly won't be any better than most other 10 bit recordings, as you are having to do all the complex math to create a colour image starting with just 10 bits. Most cameras do this internally at 12 bits or more. I believe raw needs to be at least 12 bit to be worth having.
It's amazing how often people will tell you how easy it is to change the white balance or adjust the ISO of raw footage in post. But can you? Is it really true, and is it somehow different to changing the ISO or white balance of log footage?
Let's start with ISO. If ISO is sensitivity, or the equivalent of sensitivity, how on earth can you change the sensitivity of the camera once you get into post production? The answer is you can't.
But then we have to consider how ISO works on an electronic camera. You can't change the sensor in a video camera, so in reality you can't change how sensitive an electronic camera is (I'm ignoring cameras with dual ISO for a moment). All you can do is adjust the gain or amplification applied to the signal from the sensor. You can add gain in post production too. So, when you adjust the exposure or use the ISO slider for your raw footage in post, all you are doing is adjusting how much gain you are adding. But you can do the same with log or any other gamma.
One thing that makes a difference with raw is that the gain is applied in such a way that what you see looks like an actual sensitivity change no matter what gamma you are transforming the raw to. This makes it a little easier to make changes to the final brightness in a pleasing way. But you can do exactly the same thing with log footage. Anything you do in post must be altering the recorded file; it can never actually change what you captured.
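To put some numbers on that, here is a minimal Python sketch of what an "ISO" or exposure adjustment in post actually is; the values are made up for illustration, not real sensor data.

```python
import numpy as np

# "Changing ISO" in post is just gain: a multiplication of the recorded
# values. The numbers here are illustrative, not real sensor data.
linear_raw = np.array([0.02, 0.18, 0.45, 0.90])  # normalised linear values

stops = 1.0               # a one-stop "ISO" increase
gain = 2.0 ** stops       # +1 stop = 2x gain (the same as +6dB in camera)

brighter = np.clip(linear_raw * gain, 0.0, 1.0)
print(brighter)           # [0.04 0.36 0.9  1.  ] - the 0.90 value clips

# Note the consequence: 0.90 * 2 = 1.8 has been clipped to 1.0. Gain in
# post can never recover anything beyond what the sensor captured, and
# exactly the same multiplication can be applied to log or any other gamma.
```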
Changing the white balance in post: white balance is no different to ISO – you can't change in post what the camera captured. All you can do is modify it through the addition or subtraction of gain.
Think about it. A sensor must have a certain response to light and the colours it sees depending on the material it’s made from and the colour filters used. There has to be a natural fixed white balance or a colour temperature that it works best at.
The silicon that video sensors are made from is almost always more sensitive at the red end of the spectrum than the blue end. Almost all sensors therefore tend to produce the best results with light that has a lot of blue (to make up for the lack of blue sensitivity) and not too much red. So most cameras naturally perform best with daylight, and as a result most sensors are considered daylight balanced.
If a camera produces a great image under daylight how can you possibly get a great image under tungsten light without adjusting something? Somehow you need to adjust the gain of the red and blue channels.
Do it in camera and what you record is optimised for your choice of colour temperature at the time of shooting. But you can always undo or change this in post by subtracting or adding to whatever was added in the camera.
If the camera does not move away from its native response, you will be recording at the camera's native white balance. If you want anything other than that native response, you will have to create it in post by adding or subtracting gain in the R and B channels.
Either way, what you record has a nominal white balance, and anything you do in post is skewing what you have recorded using gain. There is no such thing as a camera with no native white balance; all cameras will favour one particular colour temperature. So even if a manufacturer claims that the white balance isn't baked in, what they mean is that they don't offer the ability to make any adjustments to the recorded signal. If you want the very best image quality, the best method is to adjust at the time of recording. As a result, a lot of camera manufacturers will skew the gain of the red and blue channels of the sensor in the camera when shooting raw, as this optimises what you are recording. You can then skew it again in post should you want a different balance.
With either method if you want to change the white balance from what was captured you are altering the gain of the red and blue channels. Raw doesn’t magically not have a white balance, so shooting with the wrong white balance and correcting it in post is not something you want to do. Often you can’t correct badly balanced raw any better than you can correct incorrectly balanced log.
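As a rough illustration of white balance being nothing more than channel gain, here is a minimal Python sketch; the patch values and gains are assumptions for the example, not real camera coefficients.

```python
import numpy as np

# A hypothetical "white" patch shot under tungsten light with a
# daylight-balanced sensor: too much red, not enough blue.
patch = np.array([0.48, 0.40, 0.28])        # linear R, G, B

# White balancing = scaling R and B so the patch becomes neutral.
r_gain = patch[1] / patch[0]                # ~0.83
b_gain = patch[1] / patch[2]                # ~1.43

balanced = patch * np.array([r_gain, 1.0, b_gain])
print(balanced)                             # -> [0.4 0.4 0.4], neutral

# Whether this happens in the camera before recording or in post on the
# recorded file, it is the same operation: adding and subtracting gain.
# Large corrections amplify noise in whichever channel gets the most gain.
```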
How far you can adjust or correct raw depends on how it’s been compressed (or not), the bit depth, whether it’s log or linear and how noisy it is. Just like a log recording really, it all depends on the quality of the recording.
The big benefit raw can have is that the amount of data that needs to be recorded is considerably reduced compared to conventional component or RGB video recordings. As a result it's often possible to record using a greater bit depth or with much less compression, and it is the greater bit depth or reduced compression that really makes a difference. 16 bit data can have up to 65,536 luma gradations; compare that to the 4,096 of 12 bit or the 1,024 of 10 bit and you can see how a 16 bit recording can hold so much more information than a 10 bit one. And that makes a difference. But 10 bit log vs 10 bit raw? Well, it depends on the compression, but well compressed 10 bit log will likely outperform 10 bit raw, as the all important colour processing will have been done in the camera at a much higher bit depth than 10 bit.
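The arithmetic behind those figures is simply two raised to the bit depth:

```python
# Gradations (code values) available per channel at each bit depth.
for bits in (8, 10, 12, 16):
    print(f"{bits:2d} bit: {2 ** bits:>6,} gradations")
# Prints:
#  8 bit:    256 gradations
# 10 bit:  1,024 gradations
# 12 bit:  4,096 gradations
# 16 bit: 65,536 gradations
```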
I see this all the time: "which LUT should I use to get this look" or "I like that, which LUT did you use". Don't get me wrong, I use LUTs and they are a very useful tool, but the now almost default resort to slapping a LUT on log and raw material is killing creativity.
In my distant past I worked in, and helped run, a very well known post production facilities company. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client's description, create the look they wanted with the controls on the equipment we had. This was a DigiBeta tape to tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.
But the thing is, we didn't have LUTs. We had knobs, dials and switches. We had to understand how to use the tools we had to get to where the client wanted to be. As a result every project would have a unique look.
Today the software available to us is incredibly powerful and a tiny fraction of the cost of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.
You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows – for example FCP X automatically applying a default LUT to log footage. Another is the belief that LUTs are how you grade, with everyone using the same few LUTs on everything they shoot.
This creates two issues.
1: Everything looks the same – BORING!!!!
2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.
How many of the "slap on a LUT" clan realise that LUTs are camera and exposure specific? How many realise that LUTs can introduce banding and other image artefacts into footage that might otherwise be pristine?
If LUTs didn't exist, people would have to learn how to grade. And when I say "grade" I don't mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation – for example separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently, as in the sketch below. People would need to learn how to create colour contrast as well as brightness contrast, how to make highlights roll off in a pleasing way – all those things that go into creating great looking images from log or raw footage.
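As a very crude sketch of the kind of targeted adjustment I mean – the hue range and the amounts here are made-up assumptions, and a real grade would use a proper qualifier and power windows rather than a hard hue key:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def grade_skin_separately(rgb_frame):
    """rgb_frame: float array (H, W, 3), values 0.0-1.0."""
    hsv = rgb_to_hsv(rgb_frame)
    # Crude orange-ish hue key standing in for a proper skin qualifier.
    skin = (hsv[..., 0] > 0.02) & (hsv[..., 0] < 0.11)
    hsv[..., 1] = np.where(skin,
                           hsv[..., 1] * 0.85,   # soften skin saturation
                           hsv[..., 1] * 1.15)   # push the rest of the scene
    return hsv_to_rgb(np.clip(hsv, 0.0, 1.0))

frame = np.random.rand(4, 4, 3)     # stand-in for a real video frame
graded = grade_skin_separately(frame)
```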
Then, perhaps, because people are doing their own grading they would start to better understand colour, gamma, contrast etc, etc. Most importantly because the look created will be their look, from scratch, it would be unique. Different projects from different people would actually look different again instead of each being a clone of someone else’s work.
LUTs are a useful tool, especially on set for an approximation of how something could look. But in post production they restrict creativity, and many people have no idea how to grade and how they can manipulate their material.
Blackmagic Design have just released the latest update to DaVinci Resolve. If you have been experiencing crashes when using XAVC material from the PXW-FX9 I recommend you download and install this update.
If you are not a Resolve user and are struggling with grading or getting the very best from any log or raw camera, then I highly recommend you take a look at DaVinci Resolve. It's also a very powerful edit package. The best bit is the free version supports most cameras. If you need full MXF support you will need to buy the Studio version, but with a one-off cost of only $299 USD it really is a bargain and gets you away from any horrid subscription services.
There is a video on YouTube right now where the author claims that the Sony Alpha cameras don't record correctly internally when shooting S-Log2 or S-Log3. The information contained in this video is highly misleading, and the conclusion that the problem is with the way Sony record internally is incorrect. There really isn't anything wrong with the way Sony do their recordings. Neither is there anything wrong with the HDMI output. While centered around the Alpha cameras, the information below is also important for anyone that records S-Log2 or S-Log3 externally with any other camera.
Some background: within the video world there are two primary ranges that can be used to record a video signal.
Legal Range uses code value 16 for black and code value 235 for white (anything above CV235 is classed as a super-white; these can still be recorded but are considered to be beyond 100%).
Full or Data Range uses code value 0 for black and code value 255 for white or 100%.
Most cameras and most video systems are based on legal range. ProRes recordings are almost always legal range. Most Sony cameras use legal range and do include super-whites for some of the curves such as Cinegammas or Hypergammas to gain a bit more dynamic range. The vast majority of video recordings use legal range. So most software defaults to legal range.
But very, very importantly – S-Log2 and S-Log3 are always full/data range.
Most of the time this doesn't cause any issues. When you record internally, the camera's files carry metadata that tells the playback, editing or grading software that the S-Log has been recorded using full range. Because of this metadata the software will play the files back and process them at the correct levels. However if you record the S-Log with an external recorder, the recorder doesn't always know that what it is getting is full range and not legal range; it just records it as it is, exactly as it comes out of the camera. That causes a problem later on, because the externally recorded file doesn't have the metadata needed to ensure that the full range S-Log material is handled correctly, and most software will default to legal range if it knows no different.
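In code, the two ranges and the scaling between them look something like this – a minimal 8 bit sketch of the level maths, not any particular application's actual implementation:

```python
import numpy as np

# Full/data range: black = CV0,  100% white = CV255 (S-Log2/S-Log3).
# Legal range:     black = CV16, 100% white = CV235 (most video).

def full_to_legal(cv):
    """Compress full range (0-255) into legal range (16-235)."""
    return 16.0 + np.asarray(cv, dtype=float) * (235.0 - 16.0) / 255.0

def legal_to_full(cv):
    """Stretch legal range (16-235) out to full range (0-255)."""
    return (np.asarray(cv, dtype=float) - 16.0) * 255.0 / (235.0 - 16.0)

print(full_to_legal([0, 255]))    # -> [ 16. 235.]
print(legal_to_full([16, 235]))   # -> [  0. 255.]

# The trouble described below comes from software applying legal_to_full()
# to a clip that is already full range, because the metadata doesn't say so.
```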
Let's have a look at what happens when you import an internally recorded S-Log2 .mp4 file from a Sony A7S into Adobe Premiere:
A few things to note here. One is Adobe's somewhat funky scopes, where the 8 bit code values don't line up with the IRE values normally used for video production. Normally 8 bit code value 235 would be 100IRE or 100%, but for some reason Adobe have code value 255 lined up with 100%. My suspicion is that the scope's % scale is not video % or IRE but RGB%. This is really confusing. A further complication is that Adobe have code value 0 as black – again I think, but am not sure, that this is RGB code value 0; in the world of video, black should be code value 16. But the scopes appear to work such that 0 is black and 100 is full scale video out. Anything above 100 or below 0 will be clipped in any file you render out.
Looking at the scopes in the screen grab above, the top step on the grey scale chart is around code value 252. That is the code value you would expect it to be; it lines up nicely with where the peak of an S-Log2 recording should be. This all looks correct – nothing goes above 100 or below 0, so nothing will be clipped.
So now let's look at an external ProRes recording, recorded at exactly the same time as the internal recording, and see what Premiere does with that:
OK, so we can see straight away that something isn't quite right here. In an 8 bit recording it should be impossible to have a code value higher than 255, but the scopes are suggesting that the recording has a peak code value of around CV275. That is impossible, so alarm bells should be ringing. In addition the S-Log2 appears to be going above 100, so if I were to simply export this as a new file, the top of the recording would be clipped and it wouldn't match the original. This is very clearly not right.
Now let's take a look at what happens in Adobe Premiere when you apply Sony's standard S-Log2 to Rec-709 LUT to a correctly exposed internal recording:
This all looks good and as expected. Blacks are sitting down just above the 0 line (which I think we can safely assume is black) and the whites of the picture are around code value 230, or 90 on Adobe's scale, whatever that means. But they are certainly nice and bright and are not in the range that will be clipped. So I can believe this as being more or less correct and as expected.
So next I’m going to add the same standard LUT to the external recording to see what happens.
OK, this is clearly not right. Our blacks now go below the 0 line and they look clipped. The highlights don't look totally out of place, but clearly something is going very, very wrong when we apply this normal LUT to this correctly exposed external recording. There is no way our blacks should be going below zero, and they look crushed/clipped. The internal recording didn't behave like this. So what is going on with the external recording?
To try and figure this out let's take a look at the same files in DaVinci Resolve. For a start I trust the scopes in Resolve much more, and it is a far better program for managing different types of files. First we will look at the internal S-Log2 recording:
Once again the levels of the internal S-Log2 recording look absolutely fine. Our peak is around code value 1010, which would be 252 in 8 bit – right where the brightest bits of an S-Log2 file should be. Now let's take a look at the external recording.
If you compare the two screen grabs above you can see that the levels are exactly the same. Our peak level is around CV1010/CV252, just where it should be, and the blacks look the same also. The internal and external recordings have the same levels and look the same. There is no difference (other than perhaps less compression and fewer artefacts in the ProRes file). There is nothing wrong with either of these recordings and certainly nothing wrong with the way Sony record S-Log2 internally. This is absolutely what I expect to see.
BUT – I’ve been a little bit sneaky here. As I knew that the external recording was a full range recording I told DaVinci Resolve to treat it as a full range recording. In the media bin I right clicked on the clip and under “clip attributes” I changed the input range from “auto” to “full”. If you don’t do this DaVinci Resolve will assume the ProRes file to be legal range and it will scale the clip incorrectly in the same way as Premiere does. But if you tell Resolve the clip is full range then it is handled correctly.
This is what it looks like if you allow Resolve to guess at what range the S-Log2 full range clip is by leaving the input range setting to “auto”:
In the above image we can see how in Resolve the clip becomes clipped, because in a legal range recording anything over CV235/CV940 would be an illegal super-white. Resolve is scaling the clip and pushing anything in the original file that was above CV235/CV940 off the top of the scale. The scaling is incorrect because Resolve doesn't know the clip is supposed to be full range and therefore not scaled. If we compare this to what Premiere did with the external recording it's actually very similar. Premiere also scaled the clip, only Premiere will show all those "illegal" levels above its 100 line instead of clipping them as Resolve does. That's why Premiere can show those "impossible" 8 bit code values going up to CV275.
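Run the numbers and you can see exactly where both behaviours come from – a sketch assuming a simple linear legal-to-full stretch; 10 bit code values are just the 8 bit values multiplied by four, so legal white CV235 becomes CV940:

```python
# Treating an already-full-range clip as legal range stretches it:
# out = (cv - legal_black) * full_scale / (legal_white - legal_black)

# 8 bit (Premiere's behaviour - out-of-range values are displayed):
cv = 252                                  # the recorded S-Log2 peak
print((cv - 16) * 255 / (235 - 16))       # -> ~274.8, the "impossible" CV275

# 10 bit (Resolve's behaviour - out-of-range values are clipped):
cv10 = 1010                               # the same peak in 10 bit
stretched = (cv10 - 64) * 1023 / (940 - 64)
print(min(stretched, 1023))               # -> 1023: everything above
                                          #    CV940 is pushed off the scale
```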
Just to be complete here, I did also test the internal .mp4 recordings in Resolve, switching between "auto" and "full" range, and in both cases the levels stayed exactly the same. This shows that Resolve is correctly handling the internally recorded full range S-Log as full range.
What about if you add a LUT? Well, you MUST tell Resolve to treat the S-Log2 ProRes clip as a full range clip, otherwise the LUT will not be right. If your footage is S-Log3 you also have to tell Resolve that it is full range:
Both the internal and external recordings are actually exactly the same. Both have the same levels, both use FULL range. There is absolutely nothing wrong with Sony's internal recordings. The problem stems from the way most software will assume that the ProRes files are legal range, when an S-Log2 or S-Log3 recording will in fact be full (data) range. Handling a full range clip as legal range means that highlights will be too high/bright or clipped and blacks will be crushed. So it's really important that your software handles the footage correctly. If you are shooting S-Log3 this problem is harder to spot, as S-Log3 has a peak recording level that is well within the legal range, so you often won't realise it's being scaled incorrectly as it won't necessarily look clipped. If you use LUTs and your ProRes clips look crushed or the highlights look clipped, you need to check that the input scaling is correct. It's really important to get this right.
Why is there no difference between the levels when you shoot with a Cinegamma? Well, when you shoot with a Cinegamma the internal recordings are legal range, so both the internal and the external recordings get treated as legal range and they don't appear to be different. (In the YouTube video that led to this post the author discovers that if you record with a normal profile first and then switch to a log profile while recording, the internal and external files will match. But this is because the internal recording now has the incorrect metadata, so it too gets scaled incorrectly – both the internal and external files are now wrong, but the same.)
Once again: There is nothing wrong with the internal recordings. The problem is with the way the external recordings are being handled. The external recordings haven’t been recorded incorrectly, they have been recorded as they should be. The problem is the edit software is incorrectly interpreting the external recordings. The external recordings don’t have the necessary metadata to mark the files as full range because the recorder is external to the camera and doesn’t know what it’s being sent by the camera. This is a common problem when using external recorders.
What can we do in Premiere to make Premiere work right with these files?
You don’t need to do anything in Premiere for the internal .mp4 recordings. They are handled correctly but Premiere isn’t handling the full/data range ProRes files correctly.
My approach for this has always been to use the legacy fast color corrector filter to transform the input range to the required output range. If you apply the fast color corrector filter to a clip you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is legal range black) and set output white to CV235 to match legal range white. If you do this you will then see that the external recording appears to have almost exactly the same values as the internal recording. However there is some non-linearity in the transform; it's not quite perfect. So if anyone knows of a better way to do this, do please let me know.
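What the fast color corrector trick is effectively doing is the remap below – a minimal sketch of the level maths only, clearly not an exact model of Adobe's processing given the slight non-linearity I see:

```python
import numpy as np

def remap_full_to_legal(frame):
    """Compress a full range (0-255) 8 bit frame into legal range (16-235)."""
    f = frame.astype(float)
    return np.round(16.0 + f * (235.0 - 16.0) / 255.0).astype(np.uint8)

# The S-Log2 peak of CV252 now lands at CV232, safely inside legal range,
# so software that assumes legal range no longer pushes it off the scale.
print(remap_full_to_legal(np.array([0, 252, 255])))   # -> [ 16 232 235]
```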
Now when you apply a LUT the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift. I don’t know where the hue shift comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift. In Premiere they are not quite the same. The hue is slightly different and I don’t know why. My recommendation – use Resolve, it’s so much better for anything that needs any form of grading or color correction.
Blackmagic Design's DaVinci Resolve is a really amazing piece of software, especially given that there is a free version that packs in almost all of the power of the full paid Studio version.
Today, post production grading is becoming an ever more important part of the video production process. In the past the basic colour correction functions of most edit applications were enough for most people, but now, if you are shooting log or raw, it's very important that you have the right toolset to take advantage of the benefits they offer.
For decades I have used Adobe Premiere for my editing and it has allowed me to create many great videos from broadcast TV series to simple corporates. As an edit application it’s still pretty solid. But now I shoot almost everything using log and raw and I have never been completely happy with the results from Premiere, even with Lumetri.
So I started to do my grading in Resolve and I have never looked back. The degree of control I have in Resolve is much greater. There are wonderful features such as DaVinci's own colour managed workflow or the ACES workflow, which make dealing with log and raw from virtually any camera a breeze. If you want a film look choose ACES; for more punchy looks choose DaVinci Color Managed. You don't need LUTs, exposure adjustments are easy, and you can then add all kinds of different secondary corrections such as power windows quickly and easily. The colour managed workflows are particularly beneficial if you wish to produce HDR versions of your productions.
But until recently mine was a two stage workflow: edit in Premiere, then grade and finish in Resolve. The last couple of versions of Resolve have seen some huge advances in editing speed and capabilities, though. The editor is now as good as anyone else's, so I am now editing in Resolve too. It's very similar to Premiere, so it didn't take long to make the switch.
One question that I am often asked is where to find good training information and guides for Resolve. Well clearly Blackmagic Design have been listening as they have now released a series of videos that will help guide anyone new to Resolve through the basics. In total there are 8 hours of easy to follow video. The manual is also pretty good!
If you have never tried Resolve then I really urge you to give it a go. It is an incredibly powerful piece of software. It isn’t difficult to master once you see how it’s laid out, how the different “rooms” work and how to use nodes. When I started with it I really found it all quite logical. You start in the “media” room to bring in your material, then progress on to the edit room for editing, finishing in the deliver room to encode and produce your master files and other output versions.
So do take a look at the videos linked below if you want to learn more about Resolve, and do give it a try. Remember the free version will do almost everything that the full version will. The full Studio version isn't expensive and features one of the best suites of noise reduction tools anywhere. It costs a one-off payment of $299.00 USD – no silly subscription fees to keep having to pay, as with Adobe!
One last thing before I get to the videos: If you do a lot of grading you really should get a proper control panel. I have the Blackmagic micro panel and this really speeds up my grading. If you don’t have a panel you can only adjust a single grading parameter at a time. With a panel you can do things like bringing up the gain while pulling down the black level. This allows you to see the interaction between your different adjustments much more dynamically and it’s just plain faster. Most of the key functions have dedicated controls so you can quickly dial in a bit of contrast, switch to log mode, bypass a node and boost the saturation all through direct controls, very much quicker than with just a mouse. The use of the micro panel has probably halved the amount of time it takes me to grade a typical project – and – I’m getting a better result because it’s more intuitive.
So here are the videos:
Introduction to Editing
Colour Grading
Fusion Part 1: VFX and Graphics
Fusion Part 2: 3D FX
Fairlight Audio Part 1
Fairlight Audio Part 2
Delivery and Encoding
Media Management
DaVinci Resolve Mini Panel
I’m often asked at the various workshops I run why I don’t grade in Adobe Premiere. Here’s why – they can’t even get basic import levels right.
Below are two screen grabs. The first is from Adobe Premiere CC 2019 and shows an ungraded, as shot, HLG clip shot with a Sony Z280 (love that little camera). Note how the clip appears grossly over exposed, with a nuclear looking sky and clipped snow – it doesn't look nice. Also note that the waveform suggests the clip's peak levels exceed 110%. Now I know for a fact that if you shoot HLG with any Sony camera, white will never exceed 100%.
The second screen grab is from DaVinci Resolve and it shows the same clip. Note how in Resolve, although bright, the clip certainly doesn't look over exposed as it does in Premiere. Note also how the levels shown by the waveform no longer exceed code value 869 (100% white is 940). These are the correct and expected levels; this is how the clip is supposed to look, not the utter nonsense that Adobe creates.
Why can't Adobe get this right? This problem has existed for ages and it really screws up your footage. If you are using S-Log and you try to add a LUT, things get even worse, as the LUT expects the correct levels, not these totally incorrect ones.
Take the SDI or raw out from the camera and record a ProRes file on something like a Shogun while recording XAVC internally and the two files look totally different in Premiere but they look the same in Resolve. Come on Adobe – you should be doing better than this.
If they can’t even bring clips in at the correct levels, what hope is there of being able to get a decent grading output? I can make the XAVC clips look OK in Premiere but I have to bring the levels down to do this. I shouldn’t have to. I exposed it right when I shot it so I expect it to look right in my edit software.
I have been editing with Adobe Premiere since around 1994. I took a rather long break from Premiere between 2001 and 2011 and switched over to Apple and Final Cut Pro, which in many ways used to be very similar to Premiere (I believe some of the same software developers worked on FCP as on Premiere). My FCP edit stations were always multi-core Mac towers – the old G5s first, then later the Intel towers. Then along came FCP-X. I just didn't get along with FCP-X when it first came out. I'm still not a huge fan of it now, but will happily concede that FCP-X is a very capable, professional edit platform.
So in 2011 I switched back to Adobe Premiere as my edit platform of choice. Along the way I have also used various versions of Avid's software, which is another capable platform.
But right now I'm really not happy with Premiere. Over the last couple of years it has become less stable than it used to be. I run it on a MacBook Pro, which is a well defined hardware platform, yet I still get stability issues. I'm also experiencing problems with gamma and level shifts that just shouldn't be there. In addition Premiere is not very good with many long GOP codecs; FCP-X seems to make light work of XAVC-L compared to Premiere. Furthermore Adobe's Media Encoder, which once used to be one of the first encoders to get new codecs or features, is now lagging behind. Apple's Compressor can now produce the full range of HDR files, while Media Encoder can only do HDR10. If you don't know, it is possible to buy Compressor on its own.
Meanwhile DaVinci Resolve has been my grading platform of choice for a few years now. I have always found it much easier to get the results and looks that I want from Resolve than from any edit software – this isn’t really a surprise as after all that’s what Resolve was originally designed for.
The last few versions of Resolve have become much faster thanks to some major processing changes under the hood, and in addition there has been a huge amount of work on Resolve's edit capabilities. It can now be used as a fully featured edit platform. I recently used Resolve to edit some simpler projects that were going to be graded, as this way I could stay in the same software for both processes – and you know what, it's a pretty good editor. There are however a few things that I find a bit funky and frustrating in the edit section of Resolve at the moment. Some of that may simply be because I am less familiar with it for editing than I am with Premiere.
Anyway, on to my point. Resolve is getting to be a pretty good edit platform and it’s only going to get better. We all know that it’s a really good and very powerful grading platform and with the recent inclusion of the Fairlight audio suite within Resolve it’s pretty good at handling audio too. Given that the free version of Resolve can do all of the edit, sound and grading functions that most people need, why continue to subscribe to Adobe or pay for FCP-X?
The cost of the latest generations of Apple computers is expanding the price gap between them and similar spec Windows machines, and the new MacBooks lack built in ports like HDMI and USB3 that we all use every day (you now have to use adapters and dongles). The Apple ecosystem is just not as attractive as it used to be. Resolve is cross platform, so a Mac user can stay with Apple if they wish, or move over to Windows or Linux whenever they want. You can even switch platforms mid project if you want: I could start an edit on my MacBook and then do the grade on a PC workstation, staying in Resolve through the complete process.
Even if you need the extra features of the full version, like very good noise reduction, facial recognition, 4K DCI output or HDR scopes, it's still good value as it currently costs only $299/£229, which is less than a year's subscription to Premiere CC.
But what about the rest of the Adobe Creative suite? Well, you don't have to subscribe to the whole suite; you can just get Photoshop or After Effects. But there are also many alternatives. Again, Blackmagic Design have Fusion 9, a very impressive VFX package used for many Hollywood movies, and like Resolve there is a free version with a very comprehensive tool set – or again, for just $299/£229 you get the full version with all its retiming tools etc.
For a Photoshop replacement you have GIMP, which can do almost everything that Photoshop can do. You can even use Photoshop filters within GIMP. The best part is that GIMP is free and works on both Macs and PCs.
So there you have it – it looks like Blackmagic Design are really serious about taking a big chunk of Adobe Premiere's users. Resolve and Fusion are cross platform, so, like Adobe's products, it doesn't matter whether you want to use a Mac or a PC. But for me the big thing is that you own the software. You are not going to be paying out rather a lot of money month after month for something that right now is, in my opinion, somewhat flakey.
I’m not quite ready to cut my Creative Cloud subscription yet, maybe on the next version of Resolve. But it won’t be long before I do.
With the first of the production Venice cameras now starting to find their way to some very lucky owners it’s time to take a look at some features that are not always well understood, or that perhaps no one has told you about yet.
Dual Native ISOs: What does this mean?
An electronic camera uses a piece of silicon to convert photons of light into electrons of electricity. Its efficiency at doing this is determined by the material used, and the amount of light that can be captured – and thus the sensitivity – is determined by the size of the pixels. So, unless you physically change the sensor for one with different sized pixels (which will in the future be possible with Venice), you can't change the true sensitivity of the camera. All you can do is adjust the electronic parameters.
With most video cameras the ISO is changed by increasing the amount of amplification applied to the signal coming off the sensor. Adding more gain or increasing the amplification will result in a brighter picture. But if you add more amplification/gain then the noise from the sensor is also amplified by the same amount. Make the picture twice as bright and normally the noise doubles.
In addition there is normally an optimum amount of gain where the full range of the signal coming from the sensor will be matched perfectly with the full recording range of the chosen gamma curve. This optimum gain level is what we normally call the “Native ISO”. If you add too much gain the brightest signal from the sensor would be amplified too much and exceed the recording range of the gamma curve. Apply too little gain and your recordings will never reach the optimum level and darker parts of the image may be too dark to be seen.
As a result the native ISO is where you have the best match of sensor output to gain – not too much, not too little, and hopefully low noise. This is typically also referred to as 0dB gain in an electronic camera, and normally there is only one gain level where this perfect harmony between sensor, gain and recording range is achieved, this becoming the native ISO.
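The relationship between ISO and added gain on a conventional single-native-ISO camera is easy to put into numbers – a sketch, assuming the usual convention that doubling ISO is one stop and one stop is 6dB:

```python
import math

def gain_db(iso, native_iso):
    """Gain (dB) a conventional camera adds to reach a given ISO rating."""
    return 20.0 * math.log10(iso / native_iso)

print(gain_db(1000, 500))   # +1 stop -> ~6.0 dB, noise roughly doubles
print(gain_db(2500, 500))   # 5x gain -> ~14.0 dB, noise roughly 5x
# It is this ~14dB / 5x noise penalty that a second native ISO avoids.
```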
Side Note: On an electronic camera ISO is an exposure rating, not a sensitivity measurement. Enter the camera's ISO rating into a light meter and you will get the correct exposure. But it doesn't really tell you how sensitive the camera is, as ISO makes no allowance for increasing noise levels, which will limit the darkest thing a camera can see.
Tweaking the sensor.
However, there are some things we can tweak on the sensor that affect how big the signal coming from the sensor is. The sensor's pixels are analog devices. A photon of light hits the silicon photo receptor (pixel) and is converted into an electron of electricity, which is then stored within the structure of the pixel as an analog signal until the pixel is read out by a circuit that converts the analog signal to a digital one, adding a degree of noise reduction at the same time. It's possible to shift the range that the A to D converter operates over and the amount of noise reduction applied to obtain a different readout range from the sensor. By doing this (and/or other similar techniques – Venice may use some other method) it's possible to produce a single sensor with more than one native ISO.
A camera with dual ISOs will have two different operating ranges, one tuned for higher light levels and one tuned for lower light levels. Venice will have two exposure ratings: 500 ISO for brighter scenes and 2500 ISO for shooting when you have less light. With a conventional camera, to go from 500 ISO to 2500 ISO you would need to add around 14dB of gain, and this would increase the noise by a factor of about five. However with Venice and its dual ISOs, as we are not adding gain but instead altering the way the sensor operates, the noise difference between 500 ISO and 2500 ISO will be very small.
You will have the same dynamic range at both ISOs. You can choose to shoot at 500 ISO for super clean images at a sensitivity not that dissimilar to traditional film stocks – this low ISO makes it easy to run lenses at wide apertures for the greatest control over depth of field – or you can choose to shoot at the equivalent of 2500 ISO without incurring a big noise penalty.
One of Venice's key features is that it's designed to work with anamorphic lenses. Anamorphic lenses are typically not as fast as their spherical counterparts, and some (particularly vintage lenses) need to be stopped down a little to prevent excessive softness at the edges. So having a second, higher ISO rating will make it easier to work with slower lenses or in lower light.
COMBINING DUAL ISO WITH 1 STOP NDs.
Next it's worth thinking about how you might want to use the camera's ND filters. Film cameras don't have built in ND filters, and an Arri Alexa does not have built in NDs either. So most cinematographers will work on the basis of a cinema camera having a single recording sensitivity.
The ND filters in Venice provide uniform, full spectrum light attenuation. Sony are incredibly fussy over the materials they use for their ND filters, and you can be sure that the filters in Venice do not degrade the image. I was present for the pre-shoot tests for the European demo film and a lot of time was spent testing them; we couldn't find any issues. If you introduce 1 stop of ND, the camera becomes 1 stop less sensitive to light. In practice this is no different to having a camera with a sensor 1 stop less sensitive. So the built in ND filters can, if you choose, be used to modify the base sensitivity of the camera in 1 stop increments, up to 8 stops lower.
So with the dual ISOs and the NDs combined you have a camera that you can set up to operate at the equivalent of anything from 2 ISO all the way up to 2500 ISO in 1 stop steps (by using 2500 ISO and 500 ISO together you can have approximately half stop steps between 10 ISO and 650 ISO), as laid out in the sketch below. That's an impressive range, and at no stage are you adding extra gain. There is no other camera on the market that can do this.
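Here is that sensitivity ladder written out – a sketch of the arithmetic only, where each stop of ND halves the effective rating:

```python
# Effective sensitivity ladder: two native ISOs, each dropped in 1 stop
# steps by the built in NDs (up to 8 stops).
for base in (500, 2500):
    ladder = [base / 2 ** nd for nd in range(9)]
    print(base, [round(iso, 1) for iso in ladder])
# 500  [500.0, 250.0, 125.0, 62.5, 31.2, 15.6, 7.8, 3.9, 2.0]
# 2500 [2500.0, 1250.0, 625.0, 312.5, 156.2, 78.1, 39.1, 19.5, 9.8]
```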
On top of all this we do of course still have the ability to alter the Exposure Index of the camera's LUTs to move the exposure mid point up and down within the dynamic range. Talking of LUTs, I hope to have some very interesting news about the LUTs for Venice. I've seen a glimpse of the future and I have to say it looks really good!
The raw and X-OCN material from a Venice camera (and from a PMW-F55 or F5 with the R7 recorder) contains a lot of dynamic metadata. This metadata tells the decoder in your grading software exactly how to handle the linear sensor data stored in the files. It tells your software where in the recorded data range the shadows start and finish, where the mid range sits and where the highlights start and finish. It also informs the software how to decode the colors you have recorded.
I recently spent some time with Sony Europe's color grading guru Pablo Garcia at the Digital Motion Picture Center in Pinewood. He showed me how you can manipulate this metadata to alter the way the X-OCN is decoded, changing the look of the images you bring into the grading suite. Using a beta version of Blackmagic's DaVinci Resolve software, Pablo was able to go into the clip's metadata in real time and, simply by scrubbing over the metadata settings, adjust the shadows, mids and highlights BEFORE the X-OCN was decoded. It was really incredible to see the amount of data that Venice captures in the highlights and shadows. By adjusting the metadata you are tailoring the way the file is being decoded to suit your own needs and getting the very best video information for the grade. Need more highlight data – you got it. Want to boost the shadows – you can, at the file data level, before it's converted to a traditional video signal.
It's impressive stuff, as you are manipulating the way the 16 bit linear sensor data is decoded, rather than the traditional workflow of decoding the footage to a generic intermediate file and then adjusting that. This is just one of the many features that X-OCN from the Sony Venice offers. It's even more incredible when you consider that a 16 bit linear X-OCN LT file is similar in size to 10 bit XAVC-I (Class 480) and around half the size of Apple's 10 bit ProRes HQ. X-OCN LT looks fantastic and in my opinion grades better than XAVC S-Log. Of course for a high end production you will probably use the regular X-OCN ST codec rather than the LT version, but even ST is smaller than ProRes HQ. What's more, X-OCN is not particularly processor intensive; it's certainly much easier to work with X-OCN than cDNG. It's a truly remarkable technology from Sony.
Next week I will be shooting some more tests with a Venice camera as we explore the limits of what it can do. I'll try and get some files for you to play with.