Tag Archives: grading

Sony’s Internal Recording Levels Are Correct.

There is a video on YouTube right now where the author claims that the Sony Alpha cameras don’t record correctly internally when shooting S-Log2 or S-Log3. The information contained in this video is highly misleading, and the conclusion that the problem is with the way Sony record internally is incorrect. There really isn’t anything wrong with the way Sony do their recordings. Neither is there anything wrong with the HDMI output. While centered around the Alpha cameras, the information below is also important for anyone that records S-Log2 or S-Log3 externally with any other camera.

Some background: Within the video world there are 2 primary ranges that can be used to record a video signal.

Legal Range uses code value 16 for black and code value 235 for white (anything above CV235 is classed as a super-white; super-whites can still be recorded but are considered to be beyond 100%).

Full or Data Range uses code value 0 for black and code value 255 for white or 100%.
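To make the two ranges concrete, here is a minimal sketch (Python, my own illustration, nothing from any camera or NLE) of how an 8 bit code value maps between full range and legal range:

```python
# Illustrative only: mapping 8 bit code values between full and legal range.
LEGAL_BLACK, LEGAL_WHITE = 16, 235   # legal (video) range
FULL_BLACK, FULL_WHITE = 0, 255      # full (data) range

def full_to_legal(cv):
    """Squeeze a full range code value into the legal range."""
    return round(LEGAL_BLACK + cv * (LEGAL_WHITE - LEGAL_BLACK) / FULL_WHITE)

def legal_to_full(cv):
    """Stretch a legal range code value out to full range."""
    return round((cv - LEGAL_BLACK) * FULL_WHITE / (LEGAL_WHITE - LEGAL_BLACK))

print(full_to_legal(255))  # 235
print(legal_to_full(235))  # 255
```

The two functions are each other’s inverse, and as we will see, trouble starts when software applies one of them where none is needed.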

Most cameras and most video systems are based on legal range. ProRes recordings are almost always legal range. Most Sony cameras use legal range and do include super-whites for some of the curves such as Cinegammas or Hypergammas to gain a bit more dynamic range. The vast majority of video recordings use legal range. So most software defaults to legal range.

But very, very importantly – S-Log2 and S-Log3 are always full/data range.

Most of the time this doesn’t cause any issues. When you record internally in the camera the internal recordings have metadata that tells the playback, editing or grading software that the S-Log files have been recorded using full range. Because of this metadata the software will play the files back and process them at the correct levels. However if you record the S-Log with an external recorder, the recorder doesn’t always know that what it is getting is full range rather than legal range; it just records it, as it is, exactly as it comes out of the camera. That then causes a problem later on, because the externally recorded file doesn’t have the right metadata to ensure that the full range S-Log material is handled correctly, and most software will default to legal range if it knows no different.

Let’s have a look at what happens when you import an internally recorded S-Log2 .mp4 file from a Sony A7S into Adobe Premiere:

Internal S-Log2 in Premiere.

A few things to note here. One is Adobe’s somewhat funky scopes, where the 8 bit code values don’t line up with the IRE values normally used for video production. Normally 8 bit code value 235 would be 100IRE or 100%, but for some reason Adobe have code value 255 lined up with 100%. My suspicion is that the scope % scale is not video % or IRE but RGB%. This is really confusing. A further complication is that Adobe have code value 0 as black, and again I think, but am not sure, that this is RGB code value 0. In the world of video, black should be code value 16. But the scopes appear to work such that 0 is black and 100 is full scale video out. Anything above 100 or below 0 will be clipped in any file you render out.

Looking at the scopes in the screen grab above, the top step on the grey scale chart is around code value 252. That is the code value you would expect it to be; it lines up nicely with where the peak of an S-Log2 recording should be. This all looks correct: nothing goes above 100 or below 0, so nothing will be clipped.

So now let’s look at an external ProRes recording, recorded at exactly the same time as the internal recording, and see what Premiere does with that:

External ProRes in Adobe Premiere

OK, so we can see straight away that something isn’t quite right here. In an 8 bit recording it should be impossible to have a code value higher than 255, but the scopes are suggesting that the recording has a peak code value of around CV275. That is impossible, so alarm bells should be ringing. In addition the S-Log2 appears to be going above 100, which means that if I were to simply export this as a new file, the top of the recording would be clipped and it wouldn’t match the original. This is very clearly not right.
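Those “impossible” values are exactly what you would predict if the software wrongly assumes the clip is legal range and stretches it to full range. A quick back-of-the-envelope check (my own arithmetic, not anything Adobe document):

```python
# If full range data is wrongly treated as legal range, CV16-235 gets
# stretched out to CV0-255, pushing genuine full range values off the scale.
def stretch_as_if_legal(cv):
    return (cv - 16) * 255 / 219

print(stretch_as_if_legal(252))  # ~274.8, the S-Log2 peak lands around CV275
print(stretch_as_if_legal(255))  # ~278.3, the absolute maximum
```

The S-Log2 peak at CV252 comes out at roughly CV275, which is just what the Premiere scopes are showing.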

Now let’s take a look at what happens in Adobe Premiere when you apply Sony’s standard S-Log2 to Rec-709 LUT to a correctly exposed internal recording:

Internal S-Log2 with 709 LUT applied.

This all looks good and as expected. Blacks are sitting down just above the 0 line (which I think we can safely assume is black) and the whites of the picture are around code value 230, or 90 on the scope’s % scale, whatever that means. But they are certainly nice and bright and not in the range that will be clipped. So I can believe this is more or less correct and as expected.

So next I’m going to add the same standard LUT to the external recording to see what happens.

External S-Log2 with standard 709 LUT applied.

OK, this is clearly not right. Our blacks now go below the 0 line and they look clipped. The highlights don’t look totally out of place, but clearly something is going very, very wrong when we apply this normal LUT to this correctly exposed external recording. There is no way our blacks should be going below zero, and they look crushed/clipped. The internal recording didn’t behave like this. So what is going on with the external recording?

To try and figure this out, let’s take a look at the same files in DaVinci Resolve. For a start I trust the scopes in Resolve much more, and it is a far better programme for managing different types of files. First we will look at the internal S-Log2 recording:

Internal S-Log2, all looks good.

Once again the levels of the internal S-Log2 recording look absolutely fine. Our peak is around code value 1010, which would be 252 in 8 bit, right where the brightest parts of an S-Log2 file should be. Now let’s take a look at the external recording.

External ProRes S-Log2 (Full Range)

If you compare the two screen grabs above you can see that the levels are exactly the same. Our peak level is around CV1010/CV252, just where it should be, and the blacks look the same too. The internal and external recordings have the same levels and look the same. There is no difference (other than perhaps less compression and fewer artefacts in the ProRes file). There is nothing wrong with either of these recordings, and certainly nothing wrong with the way Sony record S-Log2 internally. This is absolutely what I expect to see.

BUT – I’ve been a little bit sneaky here. As I knew that the external recording was a full range recording I told DaVinci Resolve to treat it as a full range recording. In the media bin I right clicked on the clip and under “clip attributes” I changed the input range from “auto” to “full”. If you don’t do this DaVinci Resolve will assume the ProRes file to be legal range and it will scale the clip incorrectly in the same way as Premiere does. But if you tell Resolve the clip is full range then it is handled correctly.
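If you have a whole bin full of external clips this is worth scripting. Resolve has a Python scripting API, and the sketch below is the shape I would expect such a script to take. Treat the property name “Data Level” and the value “Full” as assumptions to be checked against the scripting documentation for your version of Resolve:

```python
# Hypothetical sketch: set every clip in the current media pool folder to
# full range via Resolve's scripting API. Verify the "Data Level" property
# name and "Full" value against your version's scripting README.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
folder = project.GetMediaPool().GetCurrentFolder()

for clip in folder.GetClipList():
    clip.SetClipProperty("Data Level", "Full")  # rather than "Auto" or "Video"
```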

This is what it looks like if you allow Resolve to guess at what range the S-Log2 full range clip is by leaving the input range setting to “auto”:

External ProRes S-Log2 Auto Range

In the above image we can see how in Resolve the clip becomes clipped, because in a legal range recording anything over CV235/CV940 would be an illegal super-white. Resolve is scaling the clip and pushing anything in the original file that was above CV235/CV940 off the top of the scale. The scaling is incorrect because Resolve doesn’t know that the clip is supposed to be full range and therefore should not be scaled. If we compare this to what Premiere did with the external recording it’s actually very similar. Premiere also scaled the clip, only Premiere shows all those “illegal” levels above its 100 line instead of clipping them as Resolve does. That’s why Premiere can show those “impossible” 8 bit code values going up to CV275.
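In 10 bit the same mistaken stretch looks like this. Note how anything above CV940 has nowhere to go and simply clips (again, my own illustrative arithmetic):

```python
# 10 bit version: legal range is CV64-940. Stretching full range data as if
# it were legal range pushes anything above CV940 past 1023, where it clips.
def stretch_10bit(cv):
    return min(1023, round((cv - 64) * 1023 / 876))

print(stretch_10bit(940))   # 1023, already at the very top of the scale
print(stretch_10bit(1010))  # 1023, the S-Log2 peak is clipped off
```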

Just to be complete here, I did also test the internal .mp4 recordings in Resolve, switching between “auto” and “full” range, and in both cases the levels stayed exactly the same. This shows that Resolve is correctly handling the internally recorded full range S-Log as full range.

What about if you add a LUT? Well, you MUST tell Resolve to treat the S-Log2 ProRes clip as a full range clip, otherwise the LUT will not be right. If your footage is S-Log3 you also have to tell Resolve that it is full range:

Resolve: Internal recording with the standard 709 LUT applied, all is exactly as expected. Deep shadows and white right at the top of the range.
Resolve: External recording with the standard 709 LUT applied, clip input range set to “full”. Everything is once again as you would expect. Deep shadows and white at the top of the range. Also note that it is a near perfect match to the internal recording. No hue or color shift (Premiere introduces a color shift, more on that later).
Resolve: External recording with the standard 709 LUT applied, clip input range set to “auto”. This is clearly not right. The highlights are clipped and the blacks are crushed and clipped. It is so important to get the input range right when working with LUT’s!!

CONCLUSIONS:

Both the internal and external recordings are actually exactly the same. Both have the same levels, both use FULL range. There is absolutely nothing wrong with Sony’s internal recordings. The problem stems from the way most software will assume that ProRes files are legal range, when an S-Log2 or S-Log3 recording is in fact full (data) range. Handling a full range clip as legal range means that highlights will be too high/bright or clipped and blacks will be crushed. So it’s really important that your software handles the footage correctly. If you are shooting S-Log3 this problem is harder to spot, as S-Log3 has a peak recording level that is well within the legal range, so you often won’t realise it’s being scaled incorrectly as it won’t necessarily look clipped. If you use LUT’s and your ProRes clips look crushed or the highlights look clipped, you need to check that the input scaling is correct. It’s really important to get this right.

Why is there no difference between the levels when you shoot with a Cinegamma? Well, when you shoot with a Cinegamma the internal recordings are legal range, so the internal recordings get treated as legal range and so do the external recordings, and they don’t appear to be different. (In the YouTube video that led to this post the author discovers that if you record with a normal profile first and then switch to a log profile while recording, the internal and external files will match. But this is because the internal recording now has the incorrect metadata, so it too gets scaled incorrectly, so both the internal and external files are wrong – but the same.)

Once again: There is nothing wrong with the internal recordings. The problem is with the way the external recordings are being handled. The external recordings haven’t been recorded incorrectly, they have been recorded as they should be. The problem is the edit software is incorrectly interpreting the external recordings. The external recordings don’t have the necessary metadata to mark the files as full range because the recorder is external to the camera and doesn’t know what it’s being sent by the camera. This is a common problem when using external recorders.

What can we do in Premiere to make Premiere work right with these files?

You don’t need to do anything in Premiere for the internal .mp4 recordings; they are handled correctly. But Premiere isn’t handling the full/data range ProRes files correctly.

My approach for this has always been to use the legacy fast color corrector filter to transform the input range to the required output range. If you apply the fast color corrector filter to a clip you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is legal range black) and the output white to CV235 to match legal range white. If you do this you will see that the external recording now has almost exactly the same values as the internal recording. However there is some non-linearity in the transform; it’s not quite perfect. So if anyone knows of a better way to do this, do please let me know.

Using the legacy “fast color corrector” filter to transform the external recording to the correct range within Premiere.
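What the fast color corrector is effectively doing is the inverse of the incorrect stretch: mapping 0-255 back into 16-235 so that, after Premiere’s wrong scaling, everything lands back where it started. A rough sketch of the round trip (my own arithmetic; the filter’s internal maths may well differ slightly, which would explain the non-linearity):

```python
# Round trip: Premiere wrongly stretches the full range clip (16-235 -> 0-255),
# then the fast color corrector squeezes it back (0-255 -> 16-235).
def premiere_wrong_stretch(cv):
    return (cv - 16) * 255 / 219

def fast_color_corrector(cv):
    return 16 + cv * 219 / 255

for cv in (0, 64, 128, 235, 255):
    print(cv, round(fast_color_corrector(premiere_wrong_stretch(cv))))
# Each value comes back out at (almost exactly) the level it went in at.
```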

Now when you apply a LUT the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift. I don’t know where the hue shift comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift. In Premiere they are not quite the same. The hue is slightly different and I don’t know why. My recommendation – use Resolve, it’s so much better for anything that needs any form of grading or color correction.

New Training Videos For DaVinci Resolve.

Blackmagic Design’s DaVinci Resolve is a really amazing piece of software, especially given that there is a free version that packs in almost all of the power of the full paid Studio version.

Today, post production grading is becoming an ever more important part of the video production process. In the past basic colour correction functions of most edit applications were enough for most people. But now if you are shooting using log or raw it’s very important that you have the right toolset to take advantage of the benefits that log and raw offer.

For decades I have used Adobe Premiere for my editing and it has allowed me to create many great videos from broadcast TV series to simple corporates. As an edit application it’s still pretty solid. But now I shoot almost everything using log and raw and I have never been completely happy with the results from Premiere, even with Lumetri.

So I started to do my grading in Resolve and I have never looked back. The degree of control I have in Resolve is much greater. There are wonderful features such as DaVinci’s own Colour Managed workflow or the ACES workflow, which make dealing with log and raw from virtually any camera a breeze. If you want a film look choose ACES; for more punchy looks choose DaVinci Color Managed. You don’t need LUT’s, exposure adjustments are easy, and you can then add all kinds of different secondary corrections such as power windows quickly and easily. The colour managed workflows are particularly beneficial if you wish to produce HDR versions of your productions.

But until recently mine was a 2 stage workflow: edit in Premiere, then grade and finish in Resolve. The last couple of versions of Resolve have seen some huge advances in editing speed and capabilities though. The editor is now as good as anyone else’s, so I am now editing in Resolve too. It’s very similar to Premiere, so it didn’t take long to make the switch.

One question that I am often asked is where to find good training information and guides for Resolve. Well clearly Blackmagic Design have been listening as they have now released a series of videos that will help guide anyone new to Resolve through the basics. In total there are 8 hours of easy to follow video. The manual is also pretty good!

If you have never tried Resolve then I really urge you to give it a go. It is an incredibly powerful piece of software. It isn’t difficult to master once you see how it’s laid out, how the different “rooms” work and how to use nodes. When I started with it I really found it all quite logical. You start in the “media” room to bring in your material, then progress on to the edit room for editing, finishing in the deliver room to encode and produce your master files and other output versions.

So do take a look at the videos linked below if you want to learn more about Resolve, and do give it a try. Remember the free version will do almost everything that the full version will. The full Studio version isn’t expensive and features one of the best suites of noise reduction tools anywhere. It only costs a one-off payment of $299.00 USD, with no silly subscription fees to keep having to pay as with Adobe!

One last thing before I get to the videos: If you do a lot of grading you really should get a proper control panel. I have the Blackmagic micro panel and this really speeds up my grading. If you don’t have a panel you can only adjust a single grading parameter at a time. With a panel you can do things like bringing up the gain while pulling down the black level. This allows you to see the interaction between your different adjustments much more dynamically and it’s just plain faster. Most of the key functions have dedicated controls so you can quickly dial in a bit of contrast, switch to log mode, bypass a node and boost the saturation all through direct controls, very much quicker than with just a mouse. The use of the micro panel has probably halved the amount of time it takes me to grade a typical project – and – I’m getting a better result because it’s more intuitive.

So here are the videos:

Introduction to Editing.
Colour Grading.
Fusion Part 1. VFX and Graphics.
Fusion Part 2. 3D FX.
Fairlight Audio Part 1.
Fairlight Audio Part 2.
Delivery and Encoding.
Media Management.
DaVinci Resolve Mini Panel.

Adobe still can’t get XAVC levels right!

I’m often asked at the various workshops I run why I don’t grade in Adobe Premiere. Here’s why – they can’t even get basic import levels right.

Below are two screen grabs. The first is from Adobe Premiere CC 2019 and shows an ungraded, as shot, HLG clip, shot with a Sony Z280 (love that little camera). Note how the clip appears grossly over exposed, with a nuclear looking sky and clipped snow; it doesn’t look nice. Also note that the waveform suggests the clip’s peak levels exceed 110%. Now I know for a fact that if you shoot HLG with any Sony camera, white will never exceed 100%.

Incorrect levels with an XAVC clip in Adobe Premiere.

The second screen grab is from DaVinci Resolve and shows the same clip. Note how in Resolve, although bright, the clip certainly doesn’t look over exposed as it does in Premiere. Note also how the levels shown by the waveform no longer exceed code value 869 (100% white is 940). These are the correct and expected levels; this is how the clip is supposed to look, not the utter nonsense that Adobe creates.

Same XAVC clip in Resolve and now the levels are correct.
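For reference, converting a 10 bit code value to a percentage shows just how sane the Resolve levels are (standard 10 bit legal range numbers, my own arithmetic):

```python
# 10 bit legal range: black = CV64, 100% white = CV940.
def cv10_to_percent(cv):
    return (cv - 64) / (940 - 64) * 100

print(round(cv10_to_percent(869), 1))  # 91.9, bright but comfortably below 100%
print(round(cv10_to_percent(940), 1))  # 100.0
```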

Why can’t Adobe get this right? This problem has existed for ages and it really screws up your footage. If you are using S-Log and you try to add a LUT then things get even worse, as the LUT expects the correct levels, not these totally incorrect ones.

Take the SDI or raw out from the camera and record a ProRes file on something like a Shogun while recording XAVC internally and the two files look totally different in Premiere but they look the same in Resolve. Come on Adobe – you should be doing better than this.

If they can’t even bring clips in at the correct levels, what hope is there of being able to get a decent grading output? I can make the XAVC clips look OK in Premiere but I have to bring the levels down to do this. I shouldn’t have to. I exposed it right when I shot it so I expect it to look right in my edit software.

Can DaVinci Resolve steal the edit market from Adobe and Apple?

I have been editing with Adobe Premiere since around 1994. I took a rather long break from Premiere between 2001 and 2011 and switched over to Apple and Final Cut Pro, which in many ways used to be very similar to Premiere (I think some of the same software writers worked on FCP as on Premiere). My FCP edit stations were always multi-core Mac towers, the old G5’s first, then later on the Intel towers. Then along came FCP-X. I just didn’t get along with FCP-X when it first came out. I’m still not a huge fan of it now, but will happily concede that FCP-X is a very capable, professional edit platform.

So in 2011 I switched back to Adobe Premiere as my edit platform of choice. Along the way I have also used various versions of Avid’s software, which is another capable platform.

But right now I’m really not happy with Premiere. Over the last couple of years it has become less stable than it used to be. I run it on a MacBook Pro, which is a well defined hardware platform, yet I still get stability issues. I’m also experiencing problems with gamma and level shifts that just shouldn’t be there. In addition Premiere is not very good with many long GOP codecs; FCP-X seems to make light work of XAVC-L compared to Premiere. Furthermore Adobe’s Media Encoder, which once used to be one of the first encoders to get new codecs or features, is now lagging behind. Apple’s Compressor now has the ability to produce the full range of HDR files, while Media Encoder can only do HDR10. If you don’t know, it is possible to buy Compressor on its own.

Meanwhile DaVinci Resolve has been my grading platform of choice for a few years now. I have always found it much easier to get the results and looks that I want from Resolve than from any edit software – this isn’t really a surprise as after all that’s what Resolve was originally designed for.

DaVinci Resolve is great grading software, and its edit capabilities are getting better and better.

The last few versions of Resolve have become much faster thanks to some major processing changes under the hood, and in addition there has been a huge amount of work on Resolve’s edit capabilities. It can now be used as a fully featured edit platform. I recently used Resolve to edit some simpler projects that were going to be graded, as this way I could stay in the same software for both processes, and you know what, it’s a pretty good editor. There are however a few things that I find a bit funky and frustrating in the edit section of Resolve at the moment. Some of that may simply be because I am less familiar with it for editing than I am with Premiere.

Anyway, on to my point. Resolve is getting to be a pretty good edit platform and it’s only going to get better. We all know that it’s a really good and very powerful grading platform and with the recent inclusion of the Fairlight audio suite within Resolve it’s pretty good at handling audio too. Given that the free version of Resolve can do all of the edit, sound and grading functions that most people need, why continue to subscribe to Adobe or pay for FCP-X?

With the cost of the latest generations of Apple computers expanding the price gap between them and similarly specced Windows machines, and with the new MacBooks lacking built in ports like HDMI and USB3 that we all use every day (you now have to use adapters and dongles), the Apple ecosystem is just not as attractive as it used to be. Resolve is cross platform, so a Mac user can stay with Apple if they wish, or move over to Windows or Linux whenever they want. You can even switch platforms mid project if you want. I could start an edit on my MacBook and then do the grade on a PC workstation, staying with Resolve through the complete process.

Even if you need the extra features of the full version, like very good noise reduction, facial recognition, 4K DCI output or HDR scopes, it’s still good value as it currently only costs $299/£229, which is less than a year’s subscription to Premiere CC.

But what about the rest of the Adobe Creative suite? Well, you don’t have to subscribe to the whole suite; you can just get Photoshop or After Effects. But there are also many alternatives. Again Blackmagic Design have Fusion 9, which is a very impressive VFX package used for many Hollywood movies, and like Resolve there is also a free version with a very comprehensive toolset, or again for just $299/£229 you get the full version with all its retiming tools etc.

Blackmagic Design’s Fusion is a very impressive video effects package for Mac and PC.

For a Photoshop replacement you have GIMP, which can do almost everything that Photoshop can do. You can even use Photoshop filters within GIMP. The best part is that GIMP is free and works on both Macs and PCs.

So there you have it – it looks like Blackmagic Design are really serious about taking a big chunk of Adobe Premiere’s users. Resolve and Fusion are cross platform so, like Adobe’s products, it doesn’t matter whether you want to use a Mac or a PC. But for me the big thing is that you own the software. You are not going to be paying out rather a lot of money month on month for something that right now is, in my opinion, somewhat flakey.

I’m not quite ready to cut my Creative Cloud subscription yet, maybe on the next version of Resolve. But it won’t be long before I do.

Sony Venice. Dual ISO’s, 1 stop ND’s and Grading via Metadata.

With the first of the production Venice cameras now starting to find their way to some very lucky owners it’s time to take a look at some features that are not always well understood, or that perhaps no one has told you about yet.

Dual Native ISO’s: What does this mean?

An electronic camera uses a piece of silicon to convert photons of light into electrons of electricity. The efficiency at doing this is determined by the material used. Then the amount of light that can be captured and thus the sensitivity is determined by the size of the pixels. So, unless you physically change the sensor for one with different sized pixels (which will in the future be possible with Venice) you can’t change the true sensitivity of the camera. All you can do is adjust the electronic parameters.

With most video cameras the ISO is changed by increasing the amount of amplification applied to the signal coming off the sensor. Adding more gain or increasing the amplification will result in a brighter picture. But if you add more amplification/gain then the noise from the sensor is also amplified by the same amount. Make the picture twice as bright and normally the noise doubles.

In addition there is normally an optimum amount of gain where the full range of the signal coming from the sensor will be matched perfectly with the full recording range of the chosen gamma curve. This optimum gain level is what we normally call the “Native ISO”. If you add too much gain the brightest signal from the sensor would be amplified too much and exceed the recording range of the gamma curve. Apply too little gain and your recordings will never reach the optimum level and darker parts of the image may be too dark to be seen.

As a result the Native ISO is where you have the best match of sensor output to gain. Not too much, not too little and hopefully low noise. This is typically also referred to as 0dB gain in an electronic camera and normally there is only 1 gain level where this perfect harmony between sensor, gain and recording range is achieved, this becoming the native ISO.

Side Note: On an electronic camera, ISO is an exposure rating, not a sensitivity measurement. Enter the camera’s ISO rating into a light meter and you will get the correct exposure. But it doesn’t really tell you how sensitive the camera is, as ISO makes no allowance for increasing noise levels, which will limit the darkest thing a camera can see.

Tweaking the sensor.

However, there are some things we can tweak on the sensor that affect how big the signal coming from the sensor is. The sensor’s pixels are analog devices. A photon of light hits the silicon photo receptor (pixel) and gets converted into an electron of electricity, which is then stored within the structure of the pixel as an analog signal until the pixel is read out by a circuit that converts the analog signal to a digital one, at the same time adding a degree of noise reduction. It’s possible to shift the range that the A to D converter operates over, and the amount of noise reduction applied, to obtain a different readout range from the sensor. By doing this (and/or other similar techniques, Venice may use some other method) it’s possible to produce a single sensor with more than one native ISO.

A camera with dual ISO’s will have two different operating ranges, one tuned for higher light levels and one tuned for lower light levels. Venice will have two exposure ratings: 500 ISO for brighter scenes and 2500 ISO for shooting when you have less light. With a conventional camera, to go from 500 ISO to 2500 ISO you would need to add around 14dB of gain, and this would increase the noise by a factor of 5. However with Venice and its dual ISO’s, as we are not adding gain but instead altering the way the sensor is operated, the noise difference between 500 ISO and 2500 ISO will be very small.

You will have the same dynamic range at both ISO’s. But you can choose whether to shoot at 500 ISO for super clean images at a sensitivity not that dissimilar to traditional film stocks. This low ISO makes it easy to run lenses at wide apertures for the greatest control over the depth of field. Or you can choose to shoot at the equivalent of 2500 ISO without incurring a big noise penalty.

One of Venice’s key features is that it’s designed to work with anamorphic lenses. Anamorphic lenses are typically not as fast as their spherical counterparts. Furthermore some anamorphic lenses (particularly vintage ones) need to be stopped down a little to prevent excessive softness at the edges. So having a second, higher ISO rating will make it easier to work with slower lenses or in lower light levels.

COMBINING DUAL ISO WITH 1 STOP ND’s.

Next it’s worth thinking about how you might want to use the camera’s ND filters. Film cameras don’t have built in ND filters, and an Arri Alexa does not have built in ND’s either. So most cinematographers will work on the basis of a cinema camera having a single recording sensitivity.

The ND filters in Venice provide uniform, full spectrum light attenuation. Sony are incredibly fussy over the materials they use for their ND filters and you can be sure that the filters in Venice do not degrade the image. I was present for the pre-shoot tests for the European demo film and a lot of time was spent testing them; we couldn’t find any issues. If you introduce 1 stop of ND, the camera becomes 1 stop less sensitive to light. In practice this is no different to having a camera with a sensor 1 stop less sensitive. So the built in ND filters can, if you choose, be used to modify the base sensitivity of the camera in 1 stop increments, up to 8 stops lower.

So with the dual ISO’s and the ND’s combined you have a camera that you can set up to operate at the equivalent of anything from 2 ISO all the way up to 2500 ISO in 1 stop steps (by using the 2500 ISO and 500 ISO bases together you can have approximately half stop steps between 10 ISO and 650 ISO). That’s an impressive range, and at no stage are you adding extra gain. There is no other camera on the market that can do this.
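The arithmetic behind that claim is simple enough to tabulate. Here is a little sketch (illustrative only; “effective ISO” here just means the base ISO divided by the ND attenuation):

```python
# Effective exposure rating when combining a base ISO with N stops of ND.
def effective_iso(base_iso, nd_stops):
    return base_iso / 2 ** nd_stops

for base in (500, 2500):
    ladder = [round(effective_iso(base, nd), 1) for nd in range(9)]
    print(base, ladder)
# 500  -> [500.0, 250.0, 125.0, 62.5, 31.2, 15.6, 7.8, 3.9, 2.0]
# 2500 -> [2500.0, 1250.0, 625.0, 312.5, 156.2, 78.1, 39.1, 19.5, 9.8]
```

Interleave the two ladders and you can see where the roughly half stop steps between the two bases come from.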

On top of all this we do of course still have the ability to alter the Exposure Index of the cameras LUT’s to offset the exposure to move the exposure mid point up and down within the dynamic range. Talking of LUT’s I hope to have some very interesting news about the LUT’s for Venice. I’ve seen a glimpse of the future and I have to say it looks really good!

METADATA GRADING.

The raw and X-OCN material from a Venice camera (and from a PMW-F55 or F5 with the R7 recorder) contains a lot of dynamic metadata. This metadata tells the decoder in your grading software exactly how to handle the linear sensor data stored in the files. It tells your software where in the recorded data range the shadows start and finish, where the mid range sits and where the highlights start and finish. It also informs the software how to decode the colors you have recorded.

I recently spent some time with Sony Europe’s color grading guru Pablo Garcia at the Digital Motion Picture Center in Pinewood. He showed me how you can manipulate this metadata to alter the way the X-OCN is decoded, changing the look of the images you bring into the grading suite. Using a beta version of Blackmagic’s DaVinci Resolve software, Pablo was able to go into the clip’s metadata in real time and, simply by scrubbing over the metadata settings, adjust the shadows, mids and highlights BEFORE the X-OCN was decoded. It was really incredible to see the amount of data that Venice captures in the highlights and shadows. By adjusting the metadata you are tailoring the way the file is decoded to suit your own needs and getting the very best video information for the grade. Need more highlight data – you got it. Want to boost the shadows – you can, at the file data level, before it’s converted to a traditional video signal.

It’s impressive stuff, as you are manipulating the way the 16 bit linear sensor data is decoded, rather than the traditional workflow of decoding the footage to a generic intermediate file and then adjusting that. This is just one of the many features that X-OCN from the Sony Venice offers. It’s even more incredible when you consider that a 16 bit linear X-OCN LT file is similar in size to 10 bit XAVC-I (Class 480) and around half the size of Apple’s 10 bit ProRes HQ. X-OCN LT looks fantastic and in my opinion grades better than XAVC S-Log. Of course for a high end production you will probably use the regular X-OCN ST codec rather than the LT version, but ST is still smaller than ProRes HQ. What’s more, X-OCN is not particularly processor intensive; it’s certainly much easier to work with X-OCN than cDNG. It’s a truly remarkable technology from Sony.

Next week I will be shooting some more tests with a Venice camera as we explore the limits of what it can do. I’ll try and get some files for you to play with.

Why do we strive to mimic film? What is the film look anyway?

 

Please don’t take this post the wrong way. I DO understand why some people like to try and emulate film. I understand that film has a “look”. I also understand that for many people that look is the holy grail of film production. I’m simply looking at why we do this and am throwing the big question out there which is “is it the right thing to do”? I welcome your comments on this subject as it’s an interesting one worthy of discussion.

In recent years with the explosion of large sensor cameras with great dynamic range it has become a very common practice to take the images these cameras capture and apply a grade or LUT that mimics the look of many of todays major movies. This is often simply referred to as the “film look”.

This look seems to be becoming more and more extreme as creators attempt to make their film more film like than the one before, leading to a situation where the look becomes very distinct as opposed to just a trait of the capture medium. A common technique is the “teal and orange” look, where the overall image is tinted teal and then skin tones and other similar tones are made slightly orange. This is done to create colour contrast between the faces of the cast and the background, as teal and orange are on opposite sides of the colour wheel.

Another variation of the “film look” is the flat look. I don’t really know where this look came from, as it’s not really very film like at all. It probably comes from shooting with a log gamma curve, which results in a flat, washed out looking image when viewed on a conventional monitor. Then, because shooting on log is “cool”, much of the flatness is left in the image in the grade because it looks different to regular TV (or it may simply be that it’s easier to create a flat look than a good looking high contrast one). Later in the article I have a nice comparison of these two types of “film look”.

Not Like TV!

Not looking like TV or video may be one of the biggest drivers for the “film look”. We watch TV day in, day out. Well produced TV will have accurate colours, natural contrast (over a limited range at least) and, if the TV is set up correctly, should be pretty true to life. Of course there are exceptions to this, like many daytime TV or game shows where the saturation and brightness are cranked up to make the programmes vibrant and vivid. But the aim of most TV shows is to look true to life. Perhaps this is one of the drivers to make films look different, so that they are not true to life, more like a slightly abstract painting or other work of art. Colour and contrast can help set up different moods, dull and grey for sadness, bright and colourful for happy scenes etc, but this should be separate from the overall look applied to a film.

Another aspect of the TV look comes from the fact that most TV viewing takes place in a normal room where light levels are not controlled. As a result bright pictures are normally needed, especially for daytime TV shows.

But What Does Film Look Like?

But what does film look like? As some of you will know I travel a lot and spend a lot of time on airplanes. I like to watch a film or 2 on longer flights and recently I’ve been watching some older films that were shot on film and probably didn’t have any of the grading or other extensive manipulation processes that most modern movies go through.

Let’s look at a few frames from some of those movies, shot on film, and see what they look like.

Lawrence of Arabia.

The all time classic Lawrence of Arabia. This film is surprisingly colourful; reds, blues and yellows are all well saturated. The film is high contrast. That is, it has very dark blacks, not crushed, but deep and full of subtle textures. Skin tones are around 55 IRE and perhaps very slightly skewed towards brown/red, but then the cast are all rather sun tanned. I wouldn’t call the skin tones orange though. Diffuse whites are typically around 80 IRE and they are white, not tinted or coloured.

Braveheart.

When I watched Braveheart, one of the things that stood out to me was how green the foliage and grass was. The strong greens really stood out in this movie compared to more modern films. Overall it’s quite dark, skin tones are often around 45 IRE and rarely more than 55 IRE, very slightly warm/brown looking, but not orange. Again it’s well saturated and high contrast with deep blacks. Overall most scenes have a quite low peak and average brightness level. It’s quite hard to watch this film in a bright room on a conventional TV, but it looks fantastic in a darkened room.

Raiders Of The Lost Ark

Raiders of the Lost Ark does show some of the attributes often used for the modern film look. Skin tones are warm and have a slight orange tint and overall the movie is very warm looking. A lot of the sets use warm colours with browns and reds being prominent. Colours are well saturated. Again we have high contrast with deep blacks and those much lower than TV skin tones, typically 50-55IRE in Raiders. Look at the foliage and plants though, they are close to what you might call TV greens, ie realistic shades of green.

A key thing I noticed in all of these (and other) older movies is that overall the images are darker than we would use for daytime TV. Skin tones in movies seem to sit around 55IRE. Compare that to the typical use of 70% zebras for faces on TV. Also whites are generally lower, often diffuse white sitting at around 75-80%. One important consideration is that films are designed to be shown in dark cinema theatres where  white at 75% looks pretty bright. Compare that to watching TV in a bright living room where to make white look bright you need it as bright as you can get. Having diffuse whites that bit lower in the display range leaves a little more room to separate highlights from whites giving the impression of a greater dynamic range. It also brings the mid range down a bit so the shadows also look darker without having to crush them.

Side Note: When using Sony’s Hypergammas and Cinegammas, they are supposed to be exposed so that white is around 70-75% with skin tones around 55-60%. If used like this with a suitable colour matrix such as “cinema” they can look quite film like.

If we look at some recent movies the look can be very different.

The Revenant

The Revenant is a gritty film and it has a gritty look. But compare it to Braveheart and it’s very different. We have the same much lower skin tone and diffuse white levels, but where has the green gone? And the sky is very pale. The sky and trees are all tinted slightly towards teal and de-saturated. Overall there is only a very small colour range in the movie. Nothing like the 70mm film of Lawrence of Arabia or the 35mm film of Braveheart.

Dead Men Tell No Tales.

In the latest instalment of the Pirates of the Caribbean franchise the images are very “brown”. Notice how even the whites of the ladies’ dresses or the soldiers’ uniforms are slightly brown. The sky is slightly grey (I’m sure the real sky was much bluer than this). The palm fronds look browner than green, and Jack Sparrow looks like he’s been using too much fake tan as his face is borderline orange (and almost always quite dark).

Wonder Woman.

Wonder Woman is another very brown movie. In this frame we can see that the sky is quite brown. Meanwhile the grass is pushed towards teal and de-saturated; it certainly isn’t the colour of real grass. Overall colours are subdued, with the exception of skin tones.

These are fairly typical of most modern movies. Colours generally quite subdued, especially greens and blues. The sky is rarely a vibrant blue, grass is rarely a grassy green. Skin tones tend to be very slightly orange and around 50-60IRE. Blacks are almost always deep and the images contrasty. Whites are rarely actually white, they tend to be tinted either slightly brown or slightly teal. Steel blues and warm browns are favoured hues. These are very different looking images to the movies shot on film that didn’t go through extensive post production manipulation.

So the film look, isn’t really about making it look like it was shot on film, it’s a stylised look that has become stronger and stronger in recent years with most movies having elements of this look. So in creating the “film look” we are not really mimicking film, but copying a now almost standard colour grading recipe that has some film style traits.

BUT IS IT A GOOD THING?

In most cases these are not unpleasant looks and for some productions the look can add to the film, although sometimes it can be taken to noticeable and objectionable extremes. However we do now have cameras that can capture huge colour ranges. We also have the display technologies to show these enormous colour ranges. Yet we often choose to deliberately limit what we use and very often distort the colours in our quest for the “film look”.

HDR TV’s with Rec2020 colour can show both a greater dynamic range and a greater colour range than we have ever seen before. Yet we are not making use of this range, in particular the colour range, except in some special cases like some TV commercials as well as high end wildlife films such as Planet Earth II.

This TV commercial for TUI has some wonderful vibrant colours that are not restricted to just browns and teal, yet it looks very film like. It does have an overall warm tint, but the other colours are allowed to punch through. It feels like the big budget production that it clearly was, without having to resort to the modern de facto restrictive film look colour palette. Why can’t feature films look like this? Why do they need to be dull with a limited colour range? Why do we strive to deliberately restrict our colour palette in the name of fashion?

What’s even more interesting is what was done for the behind the scenes film for the TUI advert…..

The producers of the BTS film decided to go with an extremely flat, washed out look, another form of modern “film look” that really couldn’t be further from film. When a typical viewer watches this, do they get it in the same way as those of us who work in the industry do? Do they understand the significance of the washed out, flat, low contrast pictures, or do they just see weird looking milky pictures that lack colour, with odd skin tones? The BTS film just looks wrong to me. It looks like it was shot with log and not graded. Personally, I don’t think it looks cool or stylish; it just looks wrong and cheap compared to the lush imagery in the actual advert (perhaps that was the intention).

I often see people looking for a film look LUT. Often they want to mimic a particular film. That’s fine, it’s up to them. But if everyone starts to home in on one particular look or style then the films we watch will all look the same. That’s not what I want. I want lush rich colours where appropriate. Then I might want to see a subdued look in a period piece or a vivid look for a 70’s film. Within the same movie colour can be used to differentiate between different parts of the story. Take Woody Allen’s Cafe Society, shot by Vittorio Storaro for example. The New York scenes are grey and moody while the scenes in LA that portray a fresh start are vibrant and vivid. This is I believe important, to use colour and contrast to help tell the story.

Our modern cameras give us an amazing palette to work with. We have tools such as DaVinci Resolve to manipulate those colours with relative ease. I believe we should be more adventurous with our use of colour. Reducing exposure levels a little compared to nominal TV and video levels (skin tones at 70%, diffuse whites at 85-90%) helps replicate the film look and also leaves a bit more space in the highlight range to separate highlights from whites, which really helps give the impression of a more contrasty image. Blacks should be black, not washed out, and they shouldn’t be crushed either.

Above all else learn to create different styles. Don’t be afraid of using colour to tell your story and remember that real film isn’t just brown and teal, it’s actually quite colourful. Great artists tend to stand out when their works are different, not when they are the same as everyone else.

 

Big Update for Sony Raw Viewer.

Sony’s Raw Viewer for raw and X-OCN file manipulation.

Sony’s Raw Viewer is an application that has just quietly rumbled away in the background. It’s never been a headline app, just one of those useful tools for viewing or transcoding Sony’s raw material. I’m quite sure that the majority of users of Sony’s raw material do their raw grading and processing in something other than Raw Viewer.

But this new version (2.3) really needs to be taken very seriously.

Better Quality Images.

For a start, Sony have always had the best de-bayer algorithms for their own raw content. If you de-bayer Sony raw in Resolve and compare it to the output from previous versions of Raw Viewer, the Raw Viewer content always looked just that little bit cleaner. The latest version of Raw Viewer is even better, as new and improved algorithms have been included! It might not render as fast, but it does look very nice and can certainly be worth using for any “problem” footage.

Class 480 XAVC and X-OCN.

Raw Viewer version 2.3 adds new export formats and support for Sony’s X-OCN files. You can now export to both XAVC Class 480 and Class 300, 10 or 12 bit ProRes (HD only unfortunately), DPX and SStP. XAVC Class 480 is a new higher quality version of XAVC-I that could be used as a ProRes HQ replacement in many instances.

Improved Image Processing.

Color grading is now easier than ever thanks to support for Tangent Wave tracker ball control panels, along with new grading tools such as Tone Curve control. There is support for EDL’s and batch processing, with all kinds of process queue options allowing you to prioritise your renders. Although Raw Viewer doesn’t have the power of a full grading package, it is very useful for dealing with problem shots as the higher quality de-bayer provides a cleaner image with fewer artefacts. You can always take advantage of this by transcoding from raw to 16 bit DPX or OpenEXR, so that the high quality de-bayer takes place in Raw Viewer, and then do the actual grading in your chosen grading software.

HDR and Rec.2100

If you are producing HDR content, version 2.3 also adds support for the PQ and HLG gamma curves and Rec.2100. It also now includes HDR waveform displays. You can use Raw Viewer to create HDR LUT’s too.

So all-in-all Raw Viewer has become a very powerful tool for Sony’s raw and X-OCN content, one that can bring a noticeable improvement in image quality compared to de-bayering in many of the more commonly used grading packages.

Download Link for Sony Raw Viewer: http://www.sonycreativesoftware.com/download/rawviewer

 

ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It’s designed to act as a common standard that will work with any camera, so that colourists can use the same grades on any camera and get the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of all the necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUT’s to convert your footage to the right output standard. Where you place these LUT’s in your workflow can have a big impact on your ability to grade your footage and on the quality of your output. ACES takes care of most of this for you, so you don’t need to worry about making sure you are grading “under the LUT” etc.

ACES works on footage in scene referred linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony’s Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, then there is still an IDT to go from Sony’s variation of scene referenced linear to the ACES variation, but this is a far simpler conversion with fewer losses and less image degradation as a result.

The IDT is a type of LUT that converts from the camera’s own recording space to ACES Linear space. The camera manufacturer has to provide detailed information about the way it records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the camera manufacturers colour science or matrix/gamma tables can create an IDT. In theory, after converting to ACES, all cameras should look very similar and the same grades and effects can be applied to any camera or gamma and the same end result achieved. However variations between colour filters, dynamic range etc will mean that there will still be individual characteristics to each camera, but any such variation is minimised by using ACES.

“Scene Referred” means linear light, as per the actual light coming from the scene. No gamma, no color shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we should be making them as close as possible, as now the pictures should be a true to life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are “Display Referenced” where the recordings or output are tailored through the use of gamma curves and looks etc so that they look nice on a monitor that complies to a particular standard, for example 709. To some degree a display referenced camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These “enhancements” to the image can sometimes make grading harder as you may need to remove them or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony’s SLog2 and SLog3 will behave almost exactly the same. But there will still be differences in the data spread due to the different curves used in the camera and due to differences in the recording Gamut etc. Despite this the same grade or corrections would be used on any type of gamma/gamut and very, very similar end results achieved. (According to Sony’s white paper, SGamut3 should work better in ACES than SGamut. In general though the same grades should work more or less the same whether the original is Slog2 or Slog3).

In an ACES workflow the grade is performed in Linear space, so exposure shifts etc are much easier to do. You can still use LUT’s to apply a common “Look” to a project, but you don’t need a LUT within ACES for the grade as ACES takes care of the output transformation from the Linear, scene referenced grading domain to your chosen display referenced output domain. The output process is a two stage conversion. First from ACES linear to the RRT or Reference Rendering Transform. This is a very computationally complex transformation that goes from Linear to a “film like” intermediate stage with very large range in excess of most final output ranges. The idea being that the RRT is a fixed and well defined standard and all the complicated maths is done getting to the RRT. From the RRT you then add a LUT called the ODT or Output Device Transform to convert to your final chosen output type. So Rec709 for TV, DCI-XYZ for cinema DCP etc. This means you just do one grading pass and then just select the type of output look you need for different types of master.

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.
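If it helps, the whole chain can be thought of as nothing more than function composition. This is a conceptual sketch only (the function names are mine, not a real API), but it is the shape of what the grading software is doing for you:

```python
# Conceptual shape of an ACES pipeline. The functions are placeholders;
# the real transforms are defined by the ACES specification.
def idt_slog3_to_aces(px):  # camera specific: S-Log3/S-Gamut3 -> ACES linear
    return px

def grade(px):              # your corrections, performed in linear space
    return px

def rrt(px):                # the fixed, film like Reference Rendering Transform
    return px

def odt_rec709(px):         # Output Device Transform for a Rec709 display
    return px

camera_px = [0.18]  # a mid grey pixel, for illustration
display_px = odt_rec709(rrt(grade(idt_slog3_to_aces(camera_px))))
```

To deliver a DCI cinema master instead of Rec709 you would swap only the final ODT; everything upstream stays exactly the same.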

This all sounds very complicated and complex and to a degree what’s going on under the hood of your software is quite sophisticated. But for the colourist it’s often just as simple as choosing ACES as your grading mode and then just selecting your desired output standard, 709, DCI-P3 etc. The software then applies all the necessary LUT’s and transforms in all the right places so you don’t need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT, you don’t need different LUT’s or Looks for different cameras. I recommend that you give ACES a try.

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes to the gamma or gain within the image. As you are starting with 8 bits, or 240 shades of grey (bits 16 to 255, assuming recording to 109%), and encoding to 240 shades, the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let’s say, level 128 should now be level 128.5, this can’t be done; we can only record whole bits, so it’s rounded up or down to the closest whole bit. This rounding leads to a reduction in the number of shades recorded overall and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth gradation:

128,    129,   130,   131,   132,   133.

Imagine you are doing some grading and your plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5
But we can’t record half bits, only whole ones, so for 8 bit these get rounded to the nearest whole bit:

129,   129,   129,   132,   132,   134

You can see how easily banding will occur: our smooth gradation now has some marked steps.

If you render to 10 bit instead you get more in-between steps. When level 128 is determined to be 128.5 by the plugin, it can now actually be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So, approximately, translating to 10 bit, level 128 would be 499 and the new values become:
128.5 = 501
129 = 503
129.4 = 505
131.5 = 513
132 = 515
133.5 = 521

So you can see that in 10 bit we retain the in-between steps that are lost when we render to 8 bit, so our gradation remains much smoother.
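You can watch this happen with a few lines of code. The sketch below (my own illustration) applies a gentle gamma style adjustment to a smooth 8 bit ramp and counts how many distinct levels survive when the result is quantised back to 8 bits versus 10 bits:

```python
import numpy as np

# A smooth 8 bit ramp, 0-255.
ramp = np.arange(256, dtype=np.float64)

# A mild gamma style adjustment, the kind of thing a grade might apply.
adjusted = 255.0 * (ramp / 255.0) ** 0.9

levels_8bit = np.unique(np.round(adjusted)).size
levels_10bit = np.unique(np.round(adjusted * 1023.0 / 255.0)).size

print(levels_8bit)   # fewer than 256: neighbouring shades have merged (banding)
print(levels_10bit)  # all 256 survive: the in-between steps are preserved
```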