I have written about this many times before, but I’ll try to be a bit more concise here.
So – you have recorded S-Log2 or S-Log3 in your Sony camera and at the same time recorded to an external ProRes recorder such as an Atomos or Blackmagic device. But the pictures look different and they don’t grade in the same way. It’s a common problem. Often the external recording will look more contrasty, and when you add a LUT the blacks and shadow areas come out very differently.
Video signals can be recorded using several different data ranges, and S-Log2 and S-Log3 signals are always Data Range. When you record in the camera, the camera adds information to the recording called metadata that tells your editing or grading software that the material is Data Range. This way the edit and grading software knows how to handle the footage correctly and how to apply any LUTs.
However, an external recorder doesn’t have this extra metadata. It will record the Data Range signal that comes from the camera, but it doesn’t flag it as such. The ProRes codec is normally used for Legal Range video, so by default, unless there is metadata that says otherwise, edit and grading software will assume any ProRes recording to be Legal Range.
So your edit software takes the file, assumes it’s Legal Range and handles it as a Legal Range file when in fact the data in the file is Data Range. The recorded levels get transposed to incorrect levels for processing. When you add a LUT it will look wrong, perhaps with very dark shadows or very bright, overexposed-looking highlights. It can also limit how much you can grade the footage.
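To see what that mishandling does to the numbers, here is a minimal sketch in Python using illustrative 8 bit code values (not taken from any particular clip) of how a Data Range file gets stretched when software decodes it as Legal Range:

```python
# A minimal sketch of the mishandling, assuming simple 8 bit code values.
# Software that believes a clip is Legal Range stretches CV16-CV235 out to
# full scale when it decodes it:  displayed = (cv - 16) * 255 / 219
# Applied to a Data (full) Range recording, the shadows get pushed below
# black and anything above CV235 gets pushed past the top of the scale.

def legal_range_decode(cv: float) -> float:
    """How an 8 bit code value is remapped when a clip is assumed to be Legal Range."""
    return (cv - 16) * 255.0 / 219.0

for cv in (0, 16, 235, 252):   # CV252 is roughly where S-Log2 peaks
    print(f"recorded CV{cv:<3} -> handled as CV{legal_range_decode(cv):6.1f}")

# recorded CV0   -> handled as CV -18.6   (crushed below black)
# recorded CV252 -> handled as CV 274.8   (stretched past the top, clipped on export)
```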
What Can We Do About It?
Premiere CC.
You don’t need to do anything in Premiere for the internal .mp4 or MXF recordings. They are handled correctly but Premiere isn’t handling the ProRes files correctly.
My approach for this has always been to use the legacy fast color corrector filter to transform the input range to the required output range. If you apply the fast color corrector filter to a clip you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is legal range black) and we need to set output white to CV235 to match legal range white. If you do this you will then see that the external recording appears to have almost exactly the same values as the internal recording. However there is some non-linearity in the transform, it’s not quite perfect.
Using the legacy “fast color corrector” filter to transform the external recording to the correct range within Premiere.
Now when you apply a LUT the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift. I don’t know where the hue shift comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift. In Premiere they are not quite the same. The hue is slightly different and I don’t know why. My recommendation – use Resolve, it’s so much better for anything that needs any form of grading or color correction.
DaVinci Resolve:
It’s very easy to tell Resolve to treat the clips as Data Range recordings. In the media bin, right click on the clip and under “clip attributes” change the input range from “auto” to “full”. If you don’t do this DaVinci Resolve will assume the ProRes file to be legal range and it will scale the clip incorrectly in the same way as Premiere does. But if you tell Resolve the clip is full range then it is handled correctly.
This comes up again and again, hence why I am writing about it once again.
Raw should never be converted to log before recording if you want any benefit from the raw. You may as well just record the 10 bit log that most cameras are capable of internally, or take the log output via the camera’s 10 bit output (if it has one) and record that directly on the ProRes recorder. It doesn’t matter how you do it: if you convert between different recording types you will always reduce the image quality, and this is about as bad a way to do it as you can get. This mainly relates to cameras like the PXW-FS7. The FS5 is different because its internal UHD recordings are only 8 bit, so even though the raw is still compromised by converting it to ProRes log, this can still be better than the internal 8 bit log.
S-Log, like any other log curve, is a compromise recording format. Log was developed to squash a big dynamic range into the same sized recording bucket as would normally be used for conventional low dynamic range gammas. It does this by discarding a lot of tonal and textural information from everything brighter than 1 stop above middle grey: instead of the amount of data doubling for each stop you go up in exposure, it is held at a roughly constant amount. Normally this is largely transparent as human vision is less acute in the highlight range, but it is still a compromise.
The idea behind linear raw is that it should give nothing away: each stop SHOULD contain double the data of the one below. But if you only have 12 bit data that would only allow you to record around 11 stops of dynamic range before you run out of code values. So Sony have to use floating point math, or something very similar, to reduce the size of each stop by dividing down the number of code values each stop has. This has almost no impact on highlights, where you start off with hundreds or thousands of values, but in the shadows, where a stop may only have 8 or 16 values, dividing by 4 means you now only have 2 or 4 tonal levels. So once again this is a compromise recording format. To record a big dynamic range using linear what you really need is 16 bit data.
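To put some rough numbers on that, here is a quick sketch. It is illustrative only: it assumes a simple halving of code values per stop, not Sony’s actual FS-Raw encoding.

```python
# A rough illustration of why pure linear encoding starves the shadows.
# With 12 bit linear data the brightest stop uses half of all 4096 code
# values, the next stop half of what is left, and so on down the range.

BITS = 12
values_in_stop = 2 ** (BITS - 1)            # top stop: 2048 code values

for stops_below_clip in range(12):
    print(f"{stops_below_clip:2d} stops below clip: ~{values_in_stop} code values")
    values_in_stop = max(values_in_stop // 2, 1)

# The deepest stops end up with only a handful of values, which is why
# dividing them down further (or lifting them in the grade) quickly
# leads to visible banding.
```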
In summary so far:
S-Log reduces the number of highlight tonal values to fit a big dynamic range into a normal sized bucket.
Sony’s FS-Raw, 12 bit linear, reduces the number of tonal values across the entire range to fit it into a compact 12 bit recording bucket, with the assumption that the recording will stay at least 12 bit. The greatest impact of the reduction is in the shadows.
Convert 12 bit linear to 10 bit S-Log and now you are compromising both the highlight range and the shadow range. You have the worst of both: 10 bit S-Log but with much less shadow data than the S-Log straight from the camera. It’s really not a good thing to do, and the internally generated S-Log won’t have its shadows compromised in the same way.
If you have even the tiniest bit of under exposure, or you attempt to lift the shadows in any way, this will accentuate the reduced shadow data and banding is highly likely as the values become stretched even further apart when you bring them up the output gamma range.
If you expose brightly and then bring the shadows down, the values are compressed closer together as they are pushed further down the output curve, and this reduces banding. This is one of the reasons why exposing more brightly can often help both log and raw recordings. So a bit of over exposure might help, but any under exposure is really, really going to hurt. Again, you would probably be better off using the internally generated S-Log.
To make matters worse there is also often an issue with S-Log in a ProRes file.
If all that is not enough, there is also a big problem with the way ProRes files record S-Log. S-Log should always be recorded as full range data. When you record an internal XAVC file, the metadata in the clip tells the edit or grading software that the file is full range. Then when you apply a LUT or do your grading the correct transforms occur and all shadow textures are preserved. But ProRes files are by default treated as legal range files. So when you record full range S-Log inside a ProRes file there is a high likelihood that your edit or grading software will handle the data in the clip incorrectly, and this too can lead to problems in the shadows including truncated data, clipping and banding, even though the actual recorded data may be OK. This is purely a metadata issue; grading software such as DaVinci Resolve can be forced to treat the ProRes files as full range.
It’s a common problem. You are shooting a performance or event where LED lighting has been used to create dramatic coloured lighting effects. The intense blue from many types of LED stage lights can easily overload the sensor and instead of looking like a nice lighting effect the blue light becomes an ugly splodge of intense blue that spoils the footage.
Well there is a tool hidden away in the paint settings of many recent Sony cameras that can help. It’s called “adaptive matrix”.
When adaptive matrix is enabled and the camera sees intense blue light, such as the light from a blue LED fixture, the matrix adapts and reduces the saturation of the blue colour channel in the problem areas of the image. This can greatly improve the way such lights and lighting look. But be aware that if you are trying to shoot objects with very bright blue colours, perhaps even a bright blue sky, the adaptive matrix may desaturate them. Because of this the adaptive matrix is normally turned off by default.
If you want to turn it on, it’s normally found in the camera’s paint and matrix settings and it’s simply a case of setting adaptive matrix to on. I recommend turning it back off again when you don’t actually need it.
Most of Sony’s broadcast quality cameras produced in the last 5 years have the adaptive matrix function, including the FS7, FX9, Z280, Z450, Z750 and many others.
Sony have just released the latest versions of their free viewing, copying and transcoding software Catalyst Browse and the more fully featured paid software Catalyst Prepare. These new versions include support for the PXW-FX9’s metadata based image stabilisation. Hopefully the new Mac versions are also optimised for Catalina.
Sony’s XLR-K3M kit includes an MI Shoe relocation cable.
This is something a lot of people have been asking for. An extension or relocation cable that allows you to place devices that will be connected to a camera via the MI Shoe away from the shoe itself.
But in order to get the MI Shoe relocation cable you have to buy the whole XLR-K3M XLR adapter kit; you can’t get the cable on its own. This is a shame as I would like to use the cable with my UWP-D series radio mics. I’m not a fan of having the radio mic receiver right on top of the handle as it tends to stick out and get in the way when you put the camera into most camera bags. But I don’t really need the XLR adapter.
Anyway, here’s a link to the XLR-K3M for those that really need that cable (or the new XLR adapter).
A completely useless bit of trivia for you is that the “E” in E-mount stands for eighteen. 18mm is the E-mount flange back distance. That’s the distance between the sensor and the face of the lens mount. The fact that the E-mount is only 18mm, while most other DSLR systems have a flange back distance of around 40mm, means there are 20mm or more in hand that can be used for adapters to go between the camera body and 3rd party lenses with different mounts.
Here’s a little table of some common flange back distances:
This topic comes up a lot. Whenever I have been in discussion with those that should know within Sony they have made it clear that the FS-Raw system is designed around S-Log2 for monitoring and post production etc. This stems from the fact that FS-Raw, the 12 bit linear raw from the FS700, FS7 and FS5 was first developed for the FS700 and that camera only had SGamut and S-Log2. S-Log3 didn’t come until a little later.
The idea is that if the camera is set to SGamut + S-Log2 it is optimised for the best possible performance. The raw signal is then passed to the raw recorder where it is recorded. If the raw recorder is going to convert the raw to ProRes or DNxHD, it converts it to SGamut + S-Log2 so that it will match any internal recordings.
Finally, in post, the grading software would take the FS-Raw and convert it to SGamut + S-Log2 for further grading. By keeping everything as SGamut and S-Log2 throughout the workflow your brightness levels, the look of the image and any LUTs that you might use will be the same. Internal and external recordings will look the same. And this has been my experience: use PP7 with SGamut and S-Log2 and the workflow works as expected.
What about the other Gamuts?
However, the FS5 also has SGamut3, SGamut3.cine and S-Log3 available in the picture profiles. When shooting log many people prefer S-Log3 and SGamut3.cine. Some people find it easier to grade S-Log3 and there are more LUTs available for S-Log3/SGamut3.cine than for SGamut and S-Log2. So there are many people that like to use PP8 or PP9 for internal S-Log.
However, switching the FS5’s gamma from S-Log2 to S-Log3 makes no difference to the raw output. And it won’t make your recorder convert the raw to ProRes/DNxHD as S-Log3, if that’s what you are hoping for. But changing the gamut does have an effect on the colors in the image.
But shouldn’t raw be just raw sensor data?
For me this is interesting, because if the camera is recording the raw sensor output, changing the Gamut shouldn’t really change what’s in the raw recording. So the fact that the image changes when you change the Gamut tells me that the camera is doing some form of processing or gain/gamma adjustment to the signal coming from the sensor. So to try and figure out what is happening and whether you should still always stick to SGamut I decided to do a little bit of testing. The testing was only done on an FS5 so the results are only applicable to the FS5. I can’t recall seeing these same changes with the FS7.
DSC Labs Chroma Tru Test Chart.
For the tests I used a DSC Labs Chroma-tru chart as this allows you to see how the colors and contrast in what you record changes both visually and with a vectorscope/waveform. As well as the chart that you shoot, you download a matching reference overlay file that you can superimpose over the clip in post to visually see any differences between the reference overlay and the way the shot has been captured and decoded. It is also possible to place another small reference chart directly in front of the monitor screen if you need to evaluate the monitor or any other aspects of your full end to end production system. It’s a very clever system and I like it because as well as being able to measure differences with scopes you can also see any differences quite clearly without any sophisticated measuring equipment.
Test workflow:
The chart was illuminated with a mix of mostly real daylight and a bit of 5600K daylight balanced light from a Stella LED lamp. I wanted a lot of real daylight to minimise any errors that could creep in from the spectrum of the LED light (the Stellas are very good, but you can’t beat real daylight). The camera was set to 2000 ISO. The raw signal was passed from the camera to an Atomos Shogun Inferno where the clips were recorded both as ProRes Raw and, using the recorder’s built in conversion to S-Log2, as ProRes HQ. I did one pass of correctly exposed clips and a second pass where the clips were under exposed by 1 stop to assess noise levels. The lens was the 18-105mm kit lens, which without the camera’s built in lens compensation does show a fair bit of barrel distortion, as you will see!
The ProRes clips were evaluated in DaVinci Resolve using the DaVinci color managed workflow, with the input colorspace set to S-Log2/SGamut for every clip and the output colorspace set to 709. I also had to set the input range of the ProRes clips to Full Range, as this is what S-Log2 files always are. If I didn’t change the input range to Full Range the clips exhibited crushed, clipped blacks after conversion to 709, which confirms that the clips recorded by the Shogun were Full Range, as per the S-Log specifications.
I did also take a look at the clips in Adobe Premiere and saw very similar results to Resolve.
I will do a separate report on my findings with the ProRes Raw in FCP as soon as I get time to check out the ProRes raw files properly.
So, what did I find?
In the images below the reference file has been overlaid on the very center of the clip. It can be a little hard to see. In a perfect system it would be impossible to see. But you can never capture the full contrast of the chart 1:1 and all cameras exhibit some color response imperfections. But the closer the center overlay is to the captured chart, the more accurate the system is. Note you can click on any of the capture examples to view a larger version.
This is the reference file (by the time it gets posted on my website as a jpeg I can no longer guarantee the colors etc.). When you look at the images below you will see this superimposed over the center of the clips.
Below is Picture Profile 6 (PP6), SGamut with S-Log2. It’s a pretty good match. The camera didn’t quite capture the full contrast of the chart and that’s to be expected; reflections etc. make it very difficult to get perfect blacks and shadow areas. But color wise it looks quite reasonable, although the light blues are a little weak/pink.
SGamut and S-Log2
Below is Picture Profile 7 (PP7), SGamut3 with S-Log3. Straight away we can see that even though the camera was set to S-Log3, the contrast is the same in the S-Log2 color managed workflow, proving that the gamma of the ProRes recording from the Shogun is actually S-Log2. This confirms what we already know: changing the log curve in the camera makes no difference to the raw recording and no difference to the raw to ProRes conversion in the recorder.
Note the extra noise in the greens. The greens appear to have more color, but they also appear a little darker. If you reduce the brightness of a color without altering the saturation the color appears to be deeper, and I think that is what is happening here: it is a lightness change rather than just a saturation change. There is also more noise in the darker bars; grey and black really are quite noisy. The light blues have the same weak/pink appearance and there is a distinct green tint to the white, grey and black bars.
SGamut3 with S-Log3
Below is when the camera was set to SGamut3.cine with S-Log3. Again we can see that the recording gamma is obviously S-Log2. The greens are still a touch stronger looking, but now there is less noise in them. Cyan and reds are slightly lighter than SGamut and yellows appear a bit darker. This is also a little noisier overall than SGamut, but not as bad as SGamut3. When you play the 3 clips, overall SGamut has the least noise, SGamut3.cine is next and SGamut3 is clearly the noisiest. As with SGamut there is a distinct green tint to the white, grey and black bars.
SGamut3.cine with S-Log3
So that’s what the images look like; what do the scopes tell us? Again I will start with SGamut, and we can see that the color response is pretty accurate. This suggests that Atomos do a good job of converting the raw to S-Log2/SGamut before it’s recorded, and confirms that this is clearly how the system is designed to work. Note how the red strip falls very close to the R box on the 2x vectorscope, yellow almost in Y, green very close to G, blue almost in B. Magenta isn’t so clever, and this probably explains why the pinky blues at the top of the chart are not quite right. Do remember that all these tests were done with the preset white balance, so it’s not surprising to see some small offsets as the white balance won’t have been absolutely perfect. But that imperfection will be the same across all of my test examples.
SGamut + S-Log2
Below is SGamut3. The first thing I noticed was all the extra noise on the right side of the waveform where the greens are. The waveform also shows the difference in lightness compared to SGamut, with different colors being reproduced at different brightness levels. The greens are being reproduced at a slightly lower luma level and this is probably why they appear more saturated. Also notice how much fuzzier the vectorscope is; this is due to some extra chroma noise. There is a bit more red, and magenta is closer to its target box, but all the other key colors are further from their boxes. Yellow, green and cyan are all a long way from their target boxes. Overall the color is much less accurate than SGamut and there is more chroma noise.
And finally, below is SGamut3.cine. There is less noise on the green side of the waveform than SGamut and SGamut3, but we still have a slightly lower luma level for green, making green appear more saturated. Again, overall color accuracy is not as good as SGamut. And the vectorscope is still quite fuzzy due to chroma noise.
Under Exposure:
I just want to show you a couple of under exposed examples. These have had the under exposure corrected in post. Below is SGamut and as you can see it is a bit noisy when under exposed. That shouldn’t be a surprise: under expose and you will get noisy pictures.
SGamut with S-Log2 1 stop under (exposure corrected in post)
Below is SGamut3 and you can really see how much noisier this is than SGamut. I recommend clicking on the images to see a full screen version. You will see that as well as the noise in the greens there is more chroma noise in the blacks and greys. There also seems to be a stronger shift towards blue/green in the whites/greys in the under exposed SGamut3.
SGamut3 with S-Log3 1 stop under (exposure corrected in post)
Conclusions:
Clearly changing the gamut makes a difference to the raw output signal. In theory this shouldn’t really happen; raw is supposed to be the unprocessed sensor output. But these tests show that there is a fair bit of processing going on in the FS5 before the raw is output. It’s already known that the white balance is baked in. This is quite easy to do, as changing the white balance is largely just a matter of changing the gain on the pixels that represent red and blue relative to green. This can be done before the image is converted to a color image.
What I believe I am seeing in this test is something more complex than that. I’m seeing changes in the luminance and gain levels of different colors relative to each other. So what I suspect is happening is that the camera is making some independent adjustments to the gamma of the red, green and blue pixels before the raw signal is output. This is probably a hangover from adjustments that need to be made when recording S-Log2 and S-Log3 internally, rather than something being done to deliberately adjust the raw output. But I didn’t design the camera so I can’t be sure that this is really the case. Only Sony would know the truth.
Does it matter?
Yes and no. If you have been using SGamut3.cine and have been getting the results you want, then no, it doesn’t really matter. I would probably avoid SGamut3; it really is very noisy in the greens and shadows compared to the other two. I would be a little concerned by the green tint in the parts of the image that should be colour free in both SGamut3 and SGamut3.cine. That would make grading a little tougher than it should be.
So my advice remains unchanged and continues to match Sony’s recommendation, which is that you should use PP7 with SGamut and S-Log2 when outputting raw. That doesn’t mean you can’t use the other gamuts, and your mileage may vary, but for me at least these tests confirm my reasons for sticking with PP7.
Both Premiere and Resolve show the same behaviour. Next I want to take a look at what happens in FCP with the ProRes Raw clips. This could prove interesting as FCP decodes and converts the FS-Raw to S-Log3 and SGamut3.cine rather than S-Log2/SGamut by default. Whether this will make any difference I don’t know. What I do know is that having a recorder that converts to S-Log2 for display and software that converts to S-Log3 is very confusing, as you need different LUTs for post and for the recorder if you want to use LUTs for your monitoring. But FCP will have to wait for another day. I have paying work to do first.
There is a video on YouTube right now where the author claims that the Sony Alpha cameras don’t record correctly internally when shooting S-Log2 or S-Log3. The information contained in this video is highly misleading and the conclusion that the problem is with the way Sony record internally is incorrect. There really isn’t anything wrong with the way Sony do their recordings. Neither is there anything wrong with the HDMI output. While centered around the Alpha cameras, the information below is also important for anyone that records S-Log2 or S-Log3 externally with any other camera.
Some background: Within the video world there are 2 primary ranges that can be used to record a video signal.
Legal Range uses code value 16 for black and code value 235 for white (anything above CV235 is classed as a super-white; these can still be recorded but are considered to be beyond 100%).
Full or Data Range uses code value 0 for black and code value 255 for white or 100%.
Most cameras and most video systems are based on legal range. ProRes recordings are almost always legal range. Most Sony cameras use legal range and do include super-whites for some of the curves such as Cinegammas or Hypergammas to gain a bit more dynamic range. The vast majority of video recordings use legal range. So most software defaults to legal range.
But very, very importantly – S-Log2 and S-Log3 are always full/data range.
Most of the time this doesn’t cause any issues. When you record internally in the camera the internal recordings have metadata that tells the playback, editing or grading software that the S-Log files have been recorded using full range. Because of this metadata the software will play the files back and process them at the correct levels. However if you record the S-Log with an external recorder the recorder doesn’t always know that what it is getting is full range and not legal range, it just records it, as it is, exactly as it comes out of the camera. That then causes a problem later on because the externally recorded file doesn’t have the right metadata to ensure that the full range S-Log material is handled correctly and most software will default to legal range if it knows no different.
Let’s have a look at what happens when you import an internally recorded S-Log2 .mp4 file from a Sony A7S into Adobe Premiere:
Internal S-Log2 in Premiere.
A few things to note here. One is Adobe’s somewhat funky scopes, where the 8 bit code values don’t line up with the IRE values normally used for video production. Normally 8 bit code value 235 would be 100IRE or 100%, but for some reason Adobe have code value 255 lined up with 100%. My suspicion is that the scope % scale is not video % or IRE but RGB%. This is really confusing. A further complication is that Adobe place black at code value 0, which again I think (but am not sure) is RGB code value 0. In the world of video black should be code value 16. But the scopes appear to work such that 0 is black and 100 is full scale video out. Anything above 100 and below 0 will be clipped in any file you render out.
Looking at the scopes in the screen grab above, the top step on the grey scale chart is around code value 252. That is the code value you would expect it to be; it lines up nicely with where the peak of an S-Log2 recording should be. This all looks correct: nothing goes above 100 or below 0, so nothing will be clipped.
So now let’s look at an external ProRes recording, recorded at exactly the same time as the internal recording, and see what Premiere does with that:
External ProRes in Adobe Premiere
OK, so we can see straight away that something isn’t quite right here. In an 8 bit recording it should be impossible to have a code value higher than 255, but the scopes are suggesting that the recording has a peak code value of around CV275. That is impossible, so alarm bells should be ringing. In addition the S-Log2 appears to be going above 100, so if I were to simply export this as a new file, the top of the recording would be clipped and it wouldn’t match the original. This is very clearly not right.
Now let’s take a look at what happens in Adobe Premiere when you apply Sony’s standard S-Log2 to Rec-709 LUT to a correctly exposed internal recording:
Internal S-Log2 with 709 LUT applied.
This all looks good and as expected. Blacks are sitting just above the 0 line (which I think we can safely assume is black) and the whites of the picture are around code value 230, or 90 on Adobe’s scale, whatever that means. But they are certainly nice and bright and are not in the range that will be clipped. So I can believe this as being more or less correct and as expected.
So next I’m going to add the same standard LUT to the external recording to see what happens.
External S-Log2 with standard 709 LUT applied.
OK, this is clearly not right. Our blacks now go below the 0 line and they look clipped. The highlights don’t look totally out of place, but clearly something is going very, very wrong when we apply this normal LUT to this correctly exposed external recording. There is no way our blacks should be going below zero, and they look crushed/clipped. The internal recording didn’t behave like this. So what is going on with the external recording?
To try and figure this out, let’s take a look at the same files in DaVinci Resolve. For a start I trust the scopes in Resolve much more, and it is a far better programme for managing different types of files. First we will look at the internal S-Log2 recording:
Internal S-Log2, all looks good.
Once again the levels of the internal S-Log2 recording look absolutely fine. Our peak is around code value 1010, which would be 252 in 8 bit. Right where the brightest parts of an S-Log2 file should be. Now let’s take a look at the external recording.
External ProRes S-Log2 (Full Range)
If you compare the two screen grabs above you can see that the levels are exactly the same. Our peak level is around CV1010/CV252, just where it should be, and the blacks look the same too. The internal and external recordings have the same levels and look the same. There is no difference (other than perhaps less compression and fewer artefacts in the ProRes file). There is nothing wrong with either of these recordings and certainly nothing wrong with the way Sony record S-Log2 internally. This is absolutely what I expect to see.
BUT – I’ve been a little bit sneaky here. As I knew that the external recording was a full range recording I told DaVinci Resolve to treat it as a full range recording. In the media bin I right clicked on the clip and under “clip attributes” I changed the input range from “auto” to “full”. If you don’t do this DaVinci Resolve will assume the ProRes file to be legal range and it will scale the clip incorrectly in the same way as Premiere does. But if you tell Resolve the clip is full range then it is handled correctly.
This is what it looks like if you allow Resolve to guess at what range the S-Log2 full range clip is by leaving the input range setting to “auto”:
External ProRes S-Log2 Auto Range
In the above image we can see how in Resolve the clip becomes clipped, because in a legal range recording anything over CV235/CV940 would be an illegal super white. Resolve is scaling the clip and pushing anything in the original file that was above CV235/CV940 off the top of the scale. The scaling is incorrect because Resolve doesn’t know the clip is supposed to be full range and therefore should not be scaled. If we compare this to what Premiere did with the external recording it’s actually very similar. Premiere also scaled the clip, only Premiere shows all those “illegal” levels above its 100 line instead of clipping them as Resolve does. That’s why Premiere can show those “impossible” 8 bit code values going up to CV275.
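As a rough sanity check on those numbers, here is the same legal range scaling applied to 10 bit code values. This is only a back-of-the-envelope sketch; the exact maths inside Resolve or Premiere may differ slightly.

```python
# A sketch of the 10 bit version of the same incorrect scaling (illustrative only).
# A legal range decode maps CV64-CV940 out to CV0-CV1023:
#     scaled = (cv - 64) * 1023 / 876
# Applied to the full range S-Log2 peak of ~CV1010 the result lands above
# CV1023, so Resolve clips it, while Premiere simply displays it above its
# 100 line (CV1010 in 10 bit is ~CV252 in 8 bit, i.e. the CV275 seen earlier).

def legal_range_decode_10bit(cv: float) -> float:
    return (cv - 64) * 1023.0 / 876.0

peak = 1010
print(f"CV{peak} -> CV{legal_range_decode_10bit(peak):.0f} (above CV1023, so it gets clipped)")
```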
Just to be complete here, I did also test the internal .mp4 recordings in Resolve, switching between “auto” and “full” range, and in both cases the levels stayed exactly the same. This shows that Resolve is correctly handling the internally recorded full range S-Log as full range.
What about if you add a LUT? Well, you MUST tell Resolve to treat the S-Log2 ProRes clip as a full range clip, otherwise the LUT will not be right. If your footage is S-Log3 you also have to tell Resolve that it is full range:
Resolve: Internal recording with the standard 709 LUT applied. All is exactly as expected: deep shadows and white right at the top of the range.
Resolve: External recording with the standard 709 LUT applied, clip input range set to “full”. Everything is once again as you would expect: deep shadows and white at the top of the range. Also note that it is a near perfect match to the internal recording. No hue or color shift (Premiere introduces a color shift, more on that later).
Resolve: External recording with the standard 709 LUT applied, clip input range set to “auto”. This is clearly not right. The highlights are clipped and the blacks are crushed and clipped. It is so important to get the input range right when working with LUTs!
CONCLUSIONS:
Both the internal and external recordings are actually exactly the same. Both have the same levels; both use FULL range. There is absolutely nothing wrong with Sony’s internal recordings. The problem stems from the way most software will assume that ProRes files are legal range, when an S-Log2 or S-Log3 recording will in fact be full (data) range. Handling a full range clip as legal range means that highlights will be too high/bright or clipped and blacks will be crushed. So it’s really important that your software handles the footage correctly. If you are shooting S-Log3 this problem is harder to spot, as S-Log3 has a peak recording level that is well within the legal range, so you often won’t realise it’s being scaled incorrectly because it won’t necessarily look clipped. If you use LUTs and your ProRes clips look crushed or the highlights look clipped you need to check that the input scaling is correct. It’s really important to get this right.
Why is there no difference between the levels when you shoot with a Cinegamma? Well, when you shoot with a Cinegamma the internal recordings are legal range, so the internal recordings get treated as legal range and so do the external recordings, and as a result they don’t appear to be different. (In the YouTube video that led to this post the author discovers that if you start recording with a normal profile and then switch to a log profile while recording, the internal and external files will match. But this is because the internal recording now has the incorrect metadata, so it too gets scaled incorrectly. Both the internal and external files are now wrong, just wrong in the same way.)
Once again: There is nothing wrong with the internal recordings. The problem is with the way the external recordings are being handled. The external recordings haven’t been recorded incorrectly, they have been recorded as they should be. The problem is the edit software is incorrectly interpreting the external recordings. The external recordings don’t have the necessary metadata to mark the files as full range because the recorder is external to the camera and doesn’t know what it’s being sent by the camera. This is a common problem when using external recorders.
What can we do in Premiere to make Premiere work right with these files?
You don’t need to do anything in Premiere for the internal .mp4 recordings. They are handled correctly but Premiere isn’t handling the full/data range ProRes files correctly.
My approach for this has always been to use the legacy fast color corrector filter to transform the input range to the required output range. If you apply the fast color corrector filter to a clip you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is legal range black) and we need to set output white to CV235 to match legal range white. If you do this you will then see that the external recording appears to have almost exactly the same values as the internal recording. However there is some non-linearity in the transform, it’s not quite perfect. So if anyone knows of a better way to do this do please let me know.
Using the legacy “fast color corrector” filter to transform the external recording to the correct range within Premiere.
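For anyone who wants to see why this works, here is a rough sketch of the level math involved. It assumes simple linear remaps, which is roughly what the output level sliders do, and that Premiere preserves the out-of-range values internally until the correction is applied; as noted above, the real result is not quite perfect.

```python
# A rough sketch of why mapping output black to CV16 and output white to
# CV235 compensates for the incorrect legal range decode (illustrative
# 8 bit values, assuming simple linear remaps throughout).

def legal_range_decode(cv: float) -> float:
    """The incorrect stretch applied when the full range clip is assumed to be legal."""
    return (cv - 16) * 255.0 / 219.0

def output_levels_16_to_235(cv: float) -> float:
    """Roughly what setting output black/white to CV16/CV235 does."""
    return 16.0 + cv * (235.0 - 16.0) / 255.0

for cv in (0, 88, 235, 252):
    recovered = output_levels_16_to_235(legal_range_decode(cv))
    print(f"recorded CV{cv:<3} -> after correction CV{recovered:6.1f}")

# In this idealised linear picture the two remaps cancel out, so the
# corrected external clip lands back on (almost) the same values as the
# internal recording.
```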
Now when you apply a LUT the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift. I don’t know where the hue shift comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift. In Premiere they are not quite the same. The hue is slightly different and I don’t know why. My recommendation – use Resolve, it’s so much better for anything that needs any form of grading or color correction.
This has been asked a couple of times: how do I record the slow motion S&Q output of my PXW-FS5 to an external recorder if I don’t have the raw option or don’t want to use raw?
Well it is possible and it’s quite easy to do. You can do it with either an SDI or HDMI recorder, both will work. The example here is for the new Atomos Ninja V recorder, but the basic idea is the same for most recorders.
Just to be absolutely clear this isn’t a magic trick to give you raw with a conventional non raw recorder. But it will allow you to take advantage of the higher quality codec (normally ProRes) in the external recorder.
Oh and by the way – The Ninja V is a great external monitor and recorder if you don’t want raw or you need something smaller than the Inferno.
So here’s how you do it:
In the camera menu under “Rec Set”, set the file format to XAVC HD and the Rec Format to 1080/50p or 1080/60p. It MUST be 50p or 60p for this to work correctly.
In “Video Out” select HDMI (for the Ninja; if your recorder has SDI then this works with SDI too).
Set the SDI/HDMI output to 1080p/480i or 1080p/576i. It MUST be p, not i.
Set HDMI TC Output to ON
Set SDI/HDMI Rec Control to ON
Connect the Ninja (or other recorder) via HDMI and on the Ninja under the input settings set the record trigger to HDMI – ON. If you are using a recorder with SDI you should have similar options for the SDI input.
So now what will happen is that when you use the S&Q mode at 100fps or higher the camera will behave as normal; you will still need an SD card in the camera. But when the camera copies the slow motion footage from its internal buffer to the SD card, the external recorder will automatically go into record at the same time and record the output stream of the buffer. Once the buffer stream stops, the recorder will stop.
The resulting file will be 50p/60p. So if you want to use it in a 24/25/30p project and get the full slow-mo benefit you will need to tell the edit software to treat the file as a 24/25/30p file to match the other clips in your project. Typically this is done by right clicking on the clip and using the “interpret footage” function to set the frame rate to match the frame rate of your project or other footage.
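As a quick worked example of the playback maths (hypothetical numbers chosen just to illustrate the idea):

```python
# A quick worked example of the slow motion factor (hypothetical numbers).
# The S&Q buffer is captured at a high frame rate but played out, and
# recorded externally, at the 50p/60p record format rate.

capture_fps = 100      # S&Q capture rate on the camera
file_fps = 50          # frame rate of the file the external recorder makes
interpreted_fps = 25   # frame rate we tell the edit software to use

print(f"As recorded: {capture_fps / file_fps}x slow motion")                        # 2.0x
print(f"Interpreted at {interpreted_fps}p: {capture_fps / interpreted_fps}x slow motion")  # 4.0x
```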
And that’s it. It’s pretty simple to do and you can improve the quality of your files over the internal recordings, although I have to say you’ll be hard pushed to see any difference in most cases as the XAVC is already pretty good.
“Color Science” is one of those currently in fashion phrases that gets thrown around all over the place today. First of all – what the heck is color science anyway? Simply put it’s how the camera sees the colors in a scene, mixes them together, records them – and then how your editing or grading software interprets what is in the recording and finally how the TV or other display device turns the digital values it receives back into a color image. It’s a combination of optical filters such as the low pass filter, color filters, sensor properties, how the sensor is read out and how the signals are electronically processed both in the camera, by your edit/grading system and by the display device. It is no one single thing, and it’s important to understand that your edit process also contributes to the overall color science.
Color science is something we have been doing since the very first color cameras; it’s not anything new. However, we end users now have a much greater ability to modify that color science thanks to better post production tools and in camera adjustments such as picture profiles or scene files.
Recently, Sony cameras have been seen by some as having less advanced or poor color science compared to cameras from some other manufacturers. Is this really the case? For Sony, part of the color science issue is that historically Sony have deliberately designed their newest cameras to match previous generations, so that a large organisation with multiple cameras can use new cameras without having them look radically different to their old ones. It has always been like this and all the manufacturers do it: Panasonic cameras have a certain look, as do Canon etc. New and old Panasonics tend to look the same, as do old and new Canons, but the Canons look different to the Panasonics, which look different to the Sonys.
Sony have a very long heritage in broadcast TV and that’s how their cameras look out of the box, like Rec-709 TV cameras with colors that are similar to the tube cameras they were producing 20 years ago. Sony’s broadcast color science is really very accurate – point one at a test chart such as a Chroma DuMonde and you’ll see highly repeatable, consistent and accurate color reproduction with all the vectors on a vector scope falling exactly where they should, including the skin tone line.
On the one hand this is great if you are that big multi-camera business wanting to add new cameras to old ones without problems, where you want your latest ENG or self-shooters’ cameras to have the same colors as your perhaps older studio cameras, so that any video inserts into a studio show cut in and out smoothly with a consistent look.
But on the other hand it’s not so good if you are a one man band shooter that wants something that looks different. Plus, accurate is not always “pretty”, and you can’t get away from the fact that the pictures look like Rec-709 television pictures in a new world of digital cinematography where TV is perhaps seen as bad and the holy grail is now a very different kind of look that is more stylised and much less true to life.
So Sony have been a bit stuck. The standard look you get when you apply any of the standard off-the-shelf S-Log3 or S-Log2 LUTs will, by design, be based on the Sony color science of old, so you get the Sony look. Most edit and grading applications use transforms for S-Log2/3 based on Sony’s old standard Rec-709 look to maintain this consistency. This isn’t a mistake. It’s by design: it’s a Sony camera, so it’s supposed to look like other Sony cameras, not different.
But for many this isn’t what they want. They want a camera that looks different, perhaps the “film look” – whatever that is?
Recently we have seen two new cameras from Sony that out of the box look very different from all the others: Sony’s high end Venice camera and the lower cost FS5 MK II. The FS5 MK II in particular proves that it’s possible to have a very different look with Sony’s existing colour filters and sensors. The FS5 MK II has exactly the same sensor with exactly the same electronics as the MK I. The only difference is in the way the RGB data from the sensor is being processed and mixed together (determined by the different firmware in the MK I and MK II) to create the final output.
The sensors Sony manufacture and use are very good at capturing color. Sony sensors are found in cameras from many different manufacturers. The recording systems in the Sony cameras do a fine job of storing those colors within the files the camera records, with different code values representing what the sensor saw. Take that data into almost any half decent grading software and you can change the way it looks by modifying the data values. In post production I can turn almost any color I want into any other color. It’s really up to us how we translate the code values in the files into the colors we see on the screen, especially when recording using log or raw. A 3D LUT can change tones and hues very easily by shifting and modifying the code values. So really there is no reason why you have to have the Sony 709 look.
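To illustrate just how arbitrary that mapping can be, here is a toy sketch of a 3D LUT as a simple lookup table. The look it builds is invented for the example; it is not any real camera or Venice LUT.

```python
import numpy as np

# A toy illustration: a 3D LUT is just a table that maps input RGB values to
# whatever output colors you choose, so the "look" is a grading decision, not
# something fixed by the camera.

SIZE = 17                                      # 17x17x17 is a common 3D LUT size
lut = np.zeros((SIZE, SIZE, SIZE, 3))

for r in range(SIZE):
    for g in range(SIZE):
        for b in range(SIZE):
            # An arbitrary creative choice: swap red and blue and lift the shadows.
            lut[r, g, b] = [b / (SIZE - 1), g / (SIZE - 1), 0.05 + 0.95 * r / (SIZE - 1)]

def apply_lut(rgb):
    """Nearest-neighbour lookup; real grading tools interpolate between entries."""
    idx = np.clip(np.rint(np.asarray(rgb) * (SIZE - 1)).astype(int), 0, SIZE - 1)
    return lut[idx[0], idx[1], idx[2]]

print(apply_lut([0.8, 0.2, 0.1]))              # a warm red comes out as a cool blue
```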
My Venice emulation LUTs will make S-Log3 from an FS5 or FS7 look quite different to the old Sony broadcast look. I also have LUTs for Sony cameras that emulate different Fuji and Kodak film stocks; apply one of these and it really looks nothing like a Sony broadcast camera. Another alternative is to use a color managed workflow such as ACES, which will attempt to make just about every camera on the market look the same by applying the ACES film style look and highlight roll-off.
We have seen it time and time again: Sony footage is graded well and it then becomes all but impossible to identify which camera shot it. If you have Netflix, take a look at “The Crown”, shot on Sony’s F55 (which has the same default Sony look as the FS5 MK I, FS7 etc). Most people find it hard to believe The Crown was shot on a Sony because it has not even the slightest hint of the old Sony broadcast look.
If you use default settings, standard LUTs etc. it will look like a Sony, because it’s supposed to! But you have the freedom to choose from a vast range of alternative looks, or better still create your own looks and styles with your own grading choices.
But for many this can prove tricky, as often they will start with a standard Sony LUT or standard Sony transform, so the image they start with has the old Sony look. When you start to grade or adjust this it can sometimes look wrong, perhaps because you have become used to the original Sony image and anything else just doesn’t seem right. In addition, if you add a LUT and then grade, elements of the LUT’s look may be hard to remove; things like the highlight roll off will be hard baked into the material, so you do need to think carefully about how you use LUTs. So try to break away from standard LUTs. Try ACES or try some other starting point for your grade.
Going forward I think it is likely that we will see the new Venice look become standard across all of the cinema style cameras from Sony, but it will take time for this to trickle down into all the grading and editing software that currently uses transforms for S-Log2/3 based on the old Sony Rec-709 broadcast look. But if you grade your footage yourself you can create just about any look you want.