Why is log sometimes hard to grade?

This comes up a lot. People shoot in log, take it into the grading suite, or worse still the edit suite, try to grade it and are less than happy with the end result. Some people really struggle to make log look good.

Why is this? Well, we normally view our footage on a TV, monitor or computer screen that uses a gamma curve following what is known as a “power law”. While a power law isn’t a truly linear type of curve, it most certainly is not a log curve. Rec-709 is a “power law” curve.

The key to understanding why log can be tricky to grade is this: in the real world, the world that we see, each stop you go up in brightness represents twice as much light. A power law gamma such as 709 follows this fairly closely, as each brighter stop is recorded using considerably more data than the one below it. Log is quite different: to save space, log uses more or less the same amount of data for each stop, with the exception of the darkest stops, which have very little data anyway. So conventional gamma = much more data per brighter stop; log gamma = the same data for each stop.

So find somewhere quiet to sit down before trying to follow this crude explanation. It’s not totally scientifically accurate, but I hope you will get the point and see why trying to grade log in a conventional edit suite might not be the best thing to do.

Let’s consider a scene where the brightness is represented by some simple values, and record this scene with a conventional gamma curve. The values recorded might go something like this, each additional stop being double the previous:

CONVENTIONAL RECORDING:  1  –  2  –  4  –  8  –  16

Then in post production we decide it’s a bit dark, so we increase the gain by a factor of two to make the image brighter. The output becomes:

CONVENTIONAL AFTER 2x GAIN:  2  –  4  –  8  –  16  –  32

Notice that the sequence still doubles for each stop, the numbers are just bigger. In an image this would equate to a brighter picture with the same contrast.

Now let’s consider recording in log. Log uses the same amount of data per stop, so the recorded levels for exactly the same scene would be something like this:

LOG RECORDING (“2” for each stop):  1  –  2  –  4  –  6  –  8.

If in post production we add a factor of two gain adjustment, we get the same peak brightness as our uncorrected conventional recording, both reach 16, but look at the middle numbers: they are different, so the CONTRAST will be different.

LOG AFTER 2x GAIN:  2  –  4  –  8  –  12  –  16.

It gets even worse if we want to make the log footage as bright as the corrected conventional footage. To make the log image equally bright we have to use 4x gain. Then we get:

LOG AFTER 4x GAIN:  4  –  8  –  16  –  24  –  32

So now we have the same final brightness for both the corrected conventional and corrected log footage, but the contrast is very different. The darks and mids from the log have become brighter than they should be (compare them to the conventional values after 2x gain). The contrast has changed. This is the problem with log: applying simple gain adjustments to log footage changes both contrast and brightness.
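
If it helps to see the arithmetic, here is a tiny Python sketch using the toy numbers from above (they are illustrative values, not real code levels):

```python
# Toy values from the example above - not real code levels.
conventional = [1, 2, 4, 8, 16]   # doubling per stop
log = [1, 2, 4, 6, 8]             # roughly the same data added per stop

def apply_gain(levels, gain):
    return [v * gain for v in levels]

# Stop-to-stop ratios reveal contrast: the conventional recording keeps its
# 2x ratio per stop after gain, the gained-up log recording does not.
for name, levels in [("conventional x2", apply_gain(conventional, 2)),
                     ("log x4", apply_gain(log, 4))]:
    ratios = [round(b / a, 2) for a, b in zip(levels, levels[1:])]
    print(name, levels, ratios)

# conventional x2 [2, 4, 8, 16, 32] [2.0, 2.0, 2.0, 2.0]
# log x4 [4, 8, 16, 24, 32] [2.0, 2.0, 1.5, 1.33]
```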

So when grading log footage you will typically need to make separate corrections to the low, middle and high ranges. You want a lift control to adjust the blacks and deep shadows, a mid point level shift for the mid range and a high end level shift for the highlights. You don’t want to use gain, as it will not only make the picture brighter or darker but also more or less contrasty.

One way to grade log is to use a curve tool to alter the shape of the gamma curve, pulling down the blacks while stretching out the whites. In DaVinci Resolve you have a set of log grading color wheels as well as the conventional primary color wheels. Another way to grade log is to apply a LUT to it and then grade in more conventional 709 space, although arguably any grading is best done prior to the LUT.
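
By way of illustration, here is a crude S-shaped remap in Python. It is entirely made up for this example (the 0.41 pivot assumes S-Log3-style levels) and is not the math of Resolve or any other grading tool:

```python
# A made-up S-shaped remap on normalized (0..1) log code values: pull the
# blacks down and stretch the whites out around a pivot near middle grey.
def s_curve(x, pivot=0.41, strength=1.6):
    if x < pivot:
        return pivot * (x / pivot) ** strength   # darken the shadows
    # stretch out the highlights above the pivot
    return 1.0 - (1.0 - pivot) * ((1.0 - x) / (1.0 - pivot)) ** strength

for v in [0.10, 0.41, 0.61, 0.90]:
    print(v, "->", round(s_curve(v), 3))
# 0.1 -> 0.043, 0.41 -> 0.41, 0.61 -> 0.696, 0.9 -> 0.966
```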

Possibly the best way is to use ACES. The Academy’s ACES workflow takes your footage, whether log or linear, and converts it to true linear within the software. All corrections then take place in linear space, where grading is much more intuitive, before finally being output from ACES with a film curve applied.
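
The appeal of working in linear is that exposure becomes simple multiplication: one stop is just a 2x gain, with no contrast side effects. A small sketch of the round trip, based on Sony’s published S-Log3 transfer function (worth double-checking against Sony’s own technical summary):

```python
import math

# S-Log3 <-> linear conversions, based on Sony's published formula.
# Code values are normalized 0..1 over the full 10-bit range.
def slog3_to_linear(cv):
    if cv >= 171.2102946929 / 1023.0:
        return (10 ** ((cv * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
    return (cv * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def linear_to_slog3(lin):
    if lin >= 0.01125:
        return (420.0 + math.log10((lin + 0.01) / 0.19) * 261.5) / 1023.0
    return (lin * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

grey = linear_to_slog3(0.18)                              # middle grey, ~0.41
one_stop_up = linear_to_slog3(slog3_to_linear(grey) * 2)  # a 2x multiply in linear
print(round(grey, 3), round(one_stop_up, 3))              # 0.411 0.485
```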

More info on CMOS sensor grid artefacts.

Cameras with bayer CMOS sensors can, in certain circumstances, suffer from an image artefact that appears as a grid pattern across the image. The actual artefact is normally the result of red and blue pixels that are brighter than they should be, which gives a magenta type flare effect. However, re-scaling an image containing this artefact can result in what looks like a grid pattern, as pixels may be dropped or added together during the re-scaling, making the artefact show up as a grid superimposed over the image.

Grid type artefact.

The cause of this artefact is most likely off-axis light somehow falling on the sensor. This off-axis light could come from an internal reflection within the camera or the lens. It’s known that on the F5/F55 and FS7 cameras a very strong light source just out of shot, just above or below the image frame, can in some circumstances, with some lenses, result in this artefact. But this problem can occur with almost any CMOS bayer camera; it’s not just a Sony problem.

The cure is actually very simple: use a flag or lens hood to prevent off-axis light from entering the lens. This is best practice anyway.

So what’s going on, why does it happen?

When white light falls on a bayer sensor it passes through color filters before hitting the pixels that measure the light level. The color filters sit slightly above the pixels. For white light, the amount of light that passes through each color filter is different. I don’t know the actual ratios, they will vary from sensor to sensor, but green is the predominant color with red and blue considerably lower. I’ve used some made-up values in the illustration to show what is going on; these are not the true values, but they should illustrate the point.

In the illustration above, when the blue pixel sees 10%, green sees 70% and red 20%, after processing the output would be white. If the light falling on the sensor is on axis, i.e. coming directly, straight through the lens, then everything is fine.

But if somehow the light falls on the sensor off axis, at an oblique angle, it is possible that light that passes through the blue filter may fall on a green pixel, or light from the green filter may fall on a red pixel, and so on. So instead of nice white light, the sensor pixels would appear to be seeing light with an unusually high red and blue component. If you viewed the image pixel for pixel it would have very bright red pixels, bright blue pixels and dark green pixels. Combined together, instead of white you would get pink or blue. This is the kind of pattern that results in the grid type artefact seen on many CMOS bayer sensors when there are problems with off axis light.

This is a very rare problem and only occurs in certain circumstances. But when it does occur it can spoil an otherwise good shot. It happens more with full frame lenses than with lenses designed for Super 35mm or APS-C, and wide angles tend to be the biggest offenders, as their wide field of view (FoV) allows light to enter the optical path at acute angles. It’s a particular problem with DSLR lenses designed to cover a sensor much taller than the various widescreen formats we shoot video in today. All that extra light above and below the desired widescreen frame, if it isn’t prevented from entering the lens, has to go somewhere. Unfortunately, once it enters the camera’s optical path it can be reflected off things like the very edge of the optical low pass filter, the ND filters or the face of the sensor itself.

The cure is very simple and should be standard practice anyway. Use a sun shade, matte box or other flag to stop light from outside the frame entering the lens. This will prevent the problem and will also reduce flare and maximise contrast. Those expensive matte boxes we all like to dress our cameras up with really can help when used and adjusted correctly.

I have found that adding a simple mask in front of the lens, or using a matte box such as any of the Vocas matte boxes with eyebrows, will eliminate the issue. Many matte boxes can be fitted with a 16:9 or 2.40:1 mask (also known as a matte, hence the name matte box) ahead of the filter trays. It’s one of the key reasons why matte boxes were developed.

Note the clamp inside the hood for holding a mask in front of the filters on this Vocas MB216 matte box. Note also how the matte box’s aperture is 16:9 rather than square to help cut out-of-frame light.
Arri Matte Box with Matte selection.

You should also try to make sure the size of the matte box you use is appropriate to the FoV of the lenses you are using. An excessively large matte box isn’t going to cut as much light as a correctly sized one. I made a number of screw-on masks for my lenses by taking a clear glass or UV filter and adding a couple of strips of black electrical tape to the rear of the filter to mask the top and bottom of the lens. With zoom lenses, if you make the mask so that it can’t be seen in the shot at the wide end, it will be effective throughout the entire zoom range.


Many cinema lenses include a mask for 17:9 or a similar wide screen aperture inside the lens.

 

What is “Exposure”?

What do we really mean when we talk about exposure?

If you come from a film background you will know that exposure is the measure of how much light is allowed to fall on the film. This is controlled by two things, the shutter speed and the aperture of the lens. How you set these is determined by how sensitive the film stock is to light.

But what about in the video world? Well, exposure means exactly the same thing: it’s how much light we allow our video sensor to capture, controlled by shutter speed and aperture. The amount of light we need to allow to fall on the sensor is dependent on the sensitivity of the sensor, much like film. But with video there is another variable, and that is the gamma curve… or is it?

This is an area where a lot of video camera operators have trouble, especially when you start dealing with more exotic gamma curves such as log. The reason for the problem is that most video camera operators are taught, or have learnt, to expose their footage at specific video levels. For example, if you’re shooting for TV it’s quite normal to expose so that white is around 90%, skin tones are around 70% and middle grey is in the middle, somewhere around the 45% mark. And that’s been the way it’s been done for decades. It’s certainly how I was taught to expose a video camera.

If you have a video camera with different gamma curves try a simple test. Set the camera to its standard TV gamma (rec-709 or similar). Expose the shot so that it looks right, then change the gamma curve without changing the aperture or shutter speed. What happens? Well the pictures will get brighter or darker, there will be brightness differences between the different gamma curves. This isn’t an exposure change, after all you haven’t changed the amount of light falling on the sensor, this is a change in the gamma curve and the values at which it records different brightnesses.

An example of this would be setting a camera to Rec-709 and exposing white at 90% then switching to S-log3 (keeping the same ISO for both) and white would drop down to 61%. The exposure hasn’t changed, just the recording levels.

It’s really important to understand that different gammas are supposed to have different recording levels. Rec-709 has a 6 stop dynamic range (without adding a knee), so between 0% and around 100% we fit 6 stops, with white falling at 85-90%. So if we want to record 14 stops, where do we fit the extra 8 stops that S-Log3 offers when we are already using 0 to 100% for 6 stops with 709? The answer is we shift the range. By putting the 6 stops that 709 can record between around 15% and 68%, with white falling at 61%, we make room above and below the original 709 range to fit in another 8 stops.

So a difference in image brightness when changing gamma curves does not represent a change in exposure, it represents a change in recording range. The only way to really change the exposure is to change the aperture and shutter speed. It’s really, really important to understand this.

Furthermore, your exposure will only ever look visibly correct when the gamma curve of the display device matches the capture gamma curve. So if you shoot log and view it on a normal TV or viewfinder that typically has 709 gamma, the picture will not look right. Not only are the levels different to those we have become used to with traditional video, the picture looks wrong too.

As more and more exotic (or at least non-standard) gamma curves become commonplace, it’s very important that we learn to think about what exposure really is. It isn’t how bright the image is (although brightness is related to exposure), it is about letting the appropriate amount of light fall on the sensor. How do we determine the correct amount of light? We measure it using a waveform scope, zebras etc., BUT you must also know the correct reference levels for a white or middle grey target with the gamma you are using.

You might also like to read this article on understanding log and exposure levels.

Ultimate Guide to CineEI on the Sony PXW-FS7 (Updated May 2016).

INTRODUCTION:

This guide to Cine-EI is based on my own experience with the Sony PXW-FS7. There are other methods of using LUT’s and CineEI. The method I describe below, to the best of my knowledge, follows standard industry practice for working with a camera that uses EI gain and LUT’s.

If you find the guide useful, please consider buying me a beer or a coffee. It took quite a while to prepare this guide and writing can be thirsty work.


Through this guide I hope to help you get the very best from the Cine EI mode on the PXW-FS7.

The camera has two very distinct shooting modes, Cine EI and Custom Mode. In Custom Mode the camera behaves much like any other traditional video camera, where what you see in the viewfinder is what’s recorded on the cards. In Custom Mode you can change many of the camera’s settings such as gamma, matrix, sharpness etc. to create the look you are after in-camera. “Baking in” the look of your image in camera is great for content that will go direct to air or for fast turnaround productions. But a baked-in look can be difficult to alter in post production. In addition, it is very hard in this mode to squeeze every last drop of the picture information that the sensor can capture into the recordings.

The other mode, Cine-EI, is primarily designed to allow you to record as much information about the scene as possible. The footage from the camera becomes, in effect, a “digital negative” that can be developed in post production, where the final, highly polished look of the film or video is created. In addition, the Cine-EI mode mimics the way a film camera works, giving the cinematographer the ability to rate the camera at different ISOs to those specified by Sony. This can be used to alter the relative noise levels in the footage or to help deal with difficult lighting situations.

One further “non-standard” way to use Cine-EI is to use a LUT (Look Up Table) to create an in-camera look that can be baked in to the footage while you shoot. This offers an alternative to custom mode. Some users will find it easier to create a specific look for the camera using a LUT than they would by adjusting camera settings such as gamma and matrix.

MLUT’s and LOOK’s (both are types of Look Up Tables) are only available in the Cine-EI mode.

 

 

THE SIMPLIFIED VERSION:

Before I go through all the “whys” and “hows”, first of all let me just say that actually, CineEI is easy. I’ve gone into a lot of extra detail here so that you can fully master the mode and the concepts behind it.

But in its simplest form, all you need to do is turn on the MLUT’s. Choose the MLUT that you like the look of, or that is closest to the final look you are after. Expose so that the picture in the viewfinder or on your monitor looks how you want, and away you go.

Then in post production bring in your S-Log footage. Apply the same LUT as you used when you shot and the footage will look as shot. Or just grade the footage as desired without a LUT; it is not essential to use a LUT in post production. As the footage you have shot is either raw or S-Log, you have a huge range of adjustment available to you in post.

THAT’S IT! If you want, it’s that simple (well almost).

If you want to get fancy you can create your own LUT and that’s really easy too (see the end of the document). If you want less noise in your pictures use a lower EI. I shoot using 800EI on my FS7 almost all the time.

Got an issue with a very bright scene and strong highlights? Shoot with a high EI (this should only ever be a last resort; try to avoid using an EI higher than 2000EI).

Again, it’s really simple.

But anyway, let’s learn more about it and why it works the way it works.

LATITUDE AND SENSITIVITY.

The latitude and sensitivity of the PXW-FS7, like those of most cameras, are primarily governed by the latitude and sensitivity of the sensor. The latitude of the sensor in the FS7 is around 14 stops. Adding different amounts of conventional camera gain or using different ISOs does not alter the sensor’s actual sensitivity to light, only how much the signal from the sensor is amplified. This is like turning the volume up or down on a radio: the sound level gets higher or lower, but the strength of the radio signal stays the same. Turn it up loud and not only does the music get louder but so does any hiss or noise; the ratio of signal to noise does not change. Turn it up too loud and it will distort; don’t turn it up loud enough and you can’t hear it, but the radio signal itself never changes. It’s the same with a video camera’s sensor: it always has the same sensitivity. With a conventional camera, or when the FS7 is in Custom Mode, we can add or take away gain (the volume control) to make the pictures brighter or darker (louder), but the noise levels will go up and down too.

NATIVE ISO:

Sony’s native ISO rating for the FS7 of 2000 ISO has been chosen to give a good trade-off between sensitivity, noise and over/under exposure latitude. In general the native ISO will give excellent results. But there may be situations where you want or need different performance. For example, you might prefer to trade off a little over exposure headroom for a better signal to noise ratio, giving a cleaner, lower noise picture. Or you might need a very large amount of over exposure headroom to deal with a scene with lots of bright highlights.

The Cine EI mode allows you to change the effective ISO rating of the camera, without altering the dynamic range.

With film stocks, the manufacturer determines the sensitivity of the film and gives it an Exposure Index, which is normally the equivalent of the film’s measured ASA/ISO. A skilled cinematographer can rate the film stock at a higher or lower ISO than the manufacturer’s rating to vary the look or to compensate for filters and other factors, then adjust the film developing and processing to give a correctly exposed looking image. This is a common tool used by cinematographers to modify the look of the film, but the film stock itself does not actually change its base sensitivity; it’s still the same film stock with the same base ASA/ISO.

Sony’s Cine EI mode and the EI modes on Red and Arri cameras are very similar. While EI has many similarities to adding conventional video camera gain, the outcome and effect can be quite different. If you have not used it before it can be a little confusing, but once you understand the way it works it is very useful and a great way to shoot. Again, a key thing to remember is that the actual sensitivity of the sensor itself never changes.

 

CONVENTIONAL VIDEO CAMERA GAIN.

Increasing conventional camera gain reduces the camera’s dynamic range: something recorded at maximum brightness (109%) at the native ISO or 0dB would be pushed up above the peak recording level, and we can’t record a signal larger than 109%. But as the true sensitivity of the sensor does not change, the darkest object the camera can actually detect remains the same. Dark objects may appear a bit brighter, but there is still a limit to how dark an object the camera can actually see, and this is governed by the sensor’s noise floor and signal to noise ratio (how much noise there is in the image coming from the sensor).

Any very dark picture information will be hidden in the sensor’s noise. Adding gain brings up both the noise and the darkest picture information, so anything hidden in the noise at the native ISO (or 0dB) will still be hidden in the noise at a higher gain or ISO, as both the noise and the small signal are amplified by the same amount. So adding gain does not extend the ability to see further into the shadows, but it does decrease the ability to record bright highlights. The net result of adding gain is a decrease in dynamic range.

Using negative gain or going lower than the native ISO may also reduce the dynamic range, as picture information very close to black will be shifted down below black when you subtract gain or lower the ISO. At the same time, there is a limit to how much light the sensor can deal with before the sensor itself overloads. So even though reducing the ISO or gain may make the picture darker, the sensor’s clipping/overload point remains the same; there is no change to the upper dynamic range, just a reduction in recording level. The net result is that you lose shadow information and don’t gain any highlight information, which again means a reduction in dynamic range.
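
To make the arithmetic concrete, here is a toy Python sketch with made-up numbers: a simplified model of a fixed recording window with gain applied before recording, not real camera behaviour.

```python
# Toy model of a fixed recording window - values are purely illustrative.
CLIP = 1.09              # top of the recording window (109%)
FLOOR = CLIP / 2 ** 13   # bottom of the window: dimmest level recorded at 0dB

# 14 one-stop scene levels, the brightest sitting just below clipping
stops = [CLIP / 2 ** n for n in range(14)]

def recordable(gain):
    # count the stops that still fit inside the window after gain
    return sum(1 for s in stops if FLOOR <= s * gain <= CLIP)

print(recordable(1.0))  # 14 stops at 0dB (native ISO)
print(recordable(2.0))  # 13 stops at +6dB: the brightest stop now clips
print(recordable(0.5))  # 13 stops at -6dB: the darkest stop drops below the floor

# Note: +6dB doesn't add a new stop at the bottom either - as described above,
# anything darker is buried in sensor noise, and gain amplifies that noise too.
```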

See also this article on gain and dynamic range.

As Sony’s S-Log2 and S-Log3 are tailored to capture the camera’s full 14 stop range, these gamma curves will ONLY work as designed and deliver the maximum dynamic range when the camera is at its native ISO. At any other recording ISO or gain level the dynamic range will be reduced. I.e. if you were to use S-Log2 or S-Log3 with the camera in Custom Mode and move away from the native ISO by adding gain or changing the ISO from 2000, you would not get the full 14 stop range that the camera is capable of delivering.

EXPOSURE LEVELS FOR DIFFERENT GAMMA CURVES AND CONTRAST RANGES.

It’s important to understand that different gamma curves with different contrast ranges will require different exposure levels. The TV system we use today is based around a standard known as Rec-709. This standard specifies the contrast range that a TV set or monitor can show, and which recording levels represent which display brightness levels. Most traditional TV cameras are also based on this standard. Rec-709 does have some serious restrictions: the brightness and contrast range is very limited, as the standard is based around TV technologies developed 50 years ago. To get around this, most TV cameras use methods such as a “knee” to compress some of the brighter parts of the scene into a very small recording range.

A traditional TV camera with a limited dynamic range compresses only a small highlight range.

As you can see in the illustration above, only a very small part of the recording “bucket” is used to hold a moderately large compressed highlight range. In addition, a typical TV camera can’t capture all of the range in many scenes anyway. The most important part of the scene, from black to white (such as a white piece of paper), is captured more or less “as is”. This leaves just a tiny bit of space above white to squeeze in a few highly compressed highlights. The black to white range represents about 5 stops, and these are the most important stops, as the majority of things that matter fall in this range: faces, skin tones, plants, buildings etc. Anything brighter than white must be a direct light source such as the sky, a reflection or a lamp.

The signal from the TV camera is then passed directly to the TV, and as the shadows, mid range, skin tones etc. are all at more or less the same levels as captured, the bulk of the scene looks OK on the TV/monitor. Any highlights or other brighter-than-white areas such as direct light sources may look a little “electronic” due to the very large amount of compression used.

But what happens if we want to record more of the scenes range or compress the highlights less? As the size of the recording “bucket”, the codec etc, does not change, in order to capture a greater range and fit it in to the same space, we have to re-distribute how we record things.

Recording a greater dynamic range into the same sized bucket.

Above you can see instead of just compressing a small part of the highlights we are now capturing the full dynamic range of the scene. To do this we have altered the levels that everything is recorded at. Blacks and shadows are recorded lower, greys and mids are lower and white is a lot lower. By bringing all these levels down, we make room in our recording bucket for the highlights and the really bright stuff without them being excessively compressed.

The problem with this, though, is that when you output the picture to a monitor or TV it looks odd. It will lack contrast, as the really bright stuff is displayed at the same brightness as the conventional 709 highlights. White is now darker than faces would be with a conventional TV camera.

This is how the Hypergammas work. By re-distributing the recording levels we can squeeze a much bigger dynamic range into the same size recording bucket. But it won’t look right when viewed directly on a standard TV or monitor; it may look a little dark and perhaps a bit washed out. This is because the camera’s gamma curve no longer matches the monitor’s gamma curve.

I hope you can also see from this that whenever the camera’s gamma curve does not match that of the TV/monitor, the picture might not look quite right. Even when correctly exposed, white may sit at different levels depending on the gamma being used, especially if the gamma curve has a greater range than the normal Rec-709 used in old school TV cameras.

S-Log uses recording levels very different to conventional gammas.

S-Log takes this a step further. Instead of using a highlight roll-off, knee or other form of highlight compression, S-Log takes every stop that is brighter than middle grey and records each with the same amount of data. This is “log” encoding, and it is very different to the way a conventional gamma curve works. To fit a big dynamic range into our restricted recording bucket, each of the recorded stops is kept relatively small. Because the way the data is distributed is very different to the levels a normal Rec-709 TV expects, S-Log doesn’t look great when viewed on a Rec-709 TV: it looks flat and lacks contrast. However, it is worth understanding that if the TV actually had S-Log as its gamma curve, the picture would look no different to a picture recorded with normal Rec-709; it would not be flat, it would have lots of contrast. The only difference is that it would have a bigger dynamic range, so if the TV could show it, the highlights would be brighter.

THE CORRECT EXPOSURE LEVELS FOR SLOG-2 and SLOG-3.

Before we go any further, let’s just look at the correct exposure levels for S-Log2 and S-Log3 as recommended by Sony. As these gamma curves have a very large dynamic range, the recording levels they use are very different to the levels used by the normal 709 gamma curve for conventional TV. As a result, correctly exposed S-Log looks flat and low contrast on a conventional monitor or in the viewfinder. The table below has the correct levels for middle grey (a grey card) and 90% reflectance white (a white card or white piece of paper) for the different types of S-Log.

Correct exposure levels for Sony’s S-Log.

The white level in particular is a lot lower than we would normally use for TV gamma. This is done to give extra space above white in the recording bucket to fit in the extended range that the camera is capable of capturing, all those bright highlights, bright sky and clouds and other things that cameras with a smaller dynamic range struggle to capture.
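
If you like to see where these numbers come from, here is a short Python check using Sony’s published S-Log3 formula. The formula produces full-range code values, while the camera’s zebras and waveform read IRE over the legal 64-940 range, hence the conversion; treat this as a sketch and verify against Sony’s own documentation.

```python
import math

# Sony's published S-Log3 formula: scene reflectance -> 10-bit code value.
def slog3_cv(reflectance):
    if reflectance >= 0.01125:
        return 420.0 + math.log10((reflectance + 0.01) / 0.19) * 261.5
    return reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0

def ire(cv):
    # percentage over the legal video range (code 64 = 0%, code 940 = 100%)
    return (cv - 64.0) / (940.0 - 64.0) * 100.0

print(round(ire(slog3_cv(0.18)), 1))  # middle grey -> 40.6, i.e. the "41%"
print(round(ire(slog3_cv(0.90)), 1))  # 90% white   -> 60.9, i.e. the "61%"
print(round(ire(slog3_cv(0.00)), 1))  # black       -> 3.5

# "Same data per stop": above middle grey, each doubling of light adds a
# constant ~78.7 code values (261.5 * log10(2)).
cvs = [slog3_cv(0.18 * 2 ** n) for n in range(5)]
print([round(b - a, 1) for a, b in zip(cvs, cvs[1:])])  # [78.7, 78.7, 78.7, 78.7]
```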

SETTING THE CORRECT EXPOSURE.

Let’s now take a look at how to set the correct starting point exposure for S-Log3. You can use a light meter if you wish, but if you do, I would first suggest checking the light meter’s calibration by using the grey card method below and comparing what the light meter tells you with the results you get from a grey or white card.

The most accurate method is to use a good quality grey card and a waveform display. For the screen shots seen here I used a Lastolite EzyBalance Calibration Card. This is a pop up grey card/white card that fits in a pocket but expands to about 30cm/1ft across, giving a decent sized target. It has middle grey on one side and 90% reflectance white on the other. With the MLUT’s off, set the exposure so that the grey card is exposed at the appropriate level (see table above). If the firmware in your camera is up to date (at least version 3.0) you can set the zebras to 32% or 41% to do this, or use an external monitor with a waveform display. The FS7’s built in waveform display is very hard to use, as it is so small and has no scale. I also recommend the use of a DSC Labs “One Shot” chart. The front of the chart has a series of color references that can be used in post production to set up your base color correction, while the rear of the chart has both a large middle grey and a 90% white square.

USING THE FS7’s WAVEFORM MONITOR OR ZEBRAS TO SET THE CORRECT BASE S-LOG3 EXPOSURE.

IMPORTANT NOTE: The zebras measure the viewfinder image, so if a LUT is on for the viewfinder, then the zebras measure the LUT. If there is no viewfinder LUT then the zebras measure the S-Log.

The Waveform Monitor and Histogram measure the SDI2 levels. So if you have a LUT on for SDI2 then the LUT levels are measured. If there is no LUT on SDI2 then the S-Log levels are measured.

See this video for more information on the Waveform, Histogram and Zebras:

The internal waveform display settings are found in the menu under:

VF: Display On/Off: Video Signal Monitor.

Setting the correct exposure for S-Log3 using a grey card. Middle grey should be 41%.

If you don’t have access to a better waveform display you can use a white card or grey card and zebras. When using zebras I prefer to use white as the reference, as it is easier to see the zebras on a white target than on a grey one. By setting up the zebras with a narrow aperture window of around 3% you can get a very accurate exposure assessment for white. For S-Log3 set the zebras to 3% aperture and the level to 61%. For S-Log2 set the zebra level to 59%. To be honest, if you set the zebras to 60% this will work for both S-Log2 and S-Log3; a 1% error is too small to make any difference, and variations in lighting or the white target will be greater than 1% anyway.

Setting up the zebras to measure S-Log3 exposure of a 90% reflectance white card.

Correct exposure for S-Log3 when using a 90% reflectance white target.

The image above shows the use of both the zebras and the waveform to establish the correct exposure level for S-Log3 when using a 90% reflectance white card or similar target. Please note that very often a piece of white paper or a white card will be a little brighter than a calibrated 90% white card. If using typical bleached white printer paper I suggest you add around 4% to the white values in the chart above to prevent under exposure.

This will get you to the base exposure recommended by Sony, without using a LUT. But very often we want to expose brighter than this to improve the signal to noise ratio.

See also the video below for information on how to setup and use S-Log2 and S-Log3 in the CineEI mode:

 

USING LUT’S AND CINE EI:

SO HOW DOES CINE-EI WORK?

Selecting Cine EI in base settings on the PXW-FS7.

Cine EI is selected in the Base Settings page. It works in YPbPr, RGB and Raw main operation modes.

Cine-EI (Exposure Index) works differently to conventional camera gain. Its operation is similar to that of other cameras that use Cine-EI or EI gain, such as the F5, F55, F3, F65, Red or Alexa. You enable Cine-EI mode in the camera menu’s Base Settings page. On the F5 and F55 it works in YPbPr, RGB and RAW modes.

IMPORTANT: In the Cine-EI mode the ISO of the recordings remains fixed at the camera’s native ISO (unless baking in a LUT, more on that later). By always recording at the camera’s native ISO you will always have 14 stops of dynamic range.

YOU NEED TO USE A LUT FOR CINE EI TO WORK:

You can only use LUT’s in the CineEI mode. In addition, in order to have LUT’s on for the viewfinder and HDMI/SDI2 but NOT on SDI1 & Internal Rec, you cannot set the HDMI to output 4K; you can only use HD or 2K.

PXW-FS7 output options.

So for most applications you will want to set your SDI and HDMI outputs to HD/2K in order to be able to use the LUT system as designed for CineEI. For reference, (2-3PD) means 2-3 pulldown is added for 24p footage, so the output will be 60i with the 24p footage shown using pulldown. PsF means progressive segmented frame, which is the normal HDSDI standard for progressive output. Any of the HD or 2K output modes will allow the use of LUT’s.

Important: For Cine-EI mode to work as expected you MUST monitor your pictures in the viewfinder or via the SDI/HDMI output through one of the camera’s built-in MLUT’s (Look Up Tables), LOOK’s or User 3D LUT’s. So make sure you have the MLUT’s turned on. If you don’t use a LUT it won’t work as expected, because the EI gain is applied to the camera’s LUT’s.

At this stage just set the MLUT’s to on for the Sub&HDMI output and the Viewfinder out.

PXW-FS7 LUT selection settings.

The LUT’s are turned on in the VIDEO: Monitor LUT settings page of the menu. You will normally want to turn LUT’s ON for SDI2, HDMI and the VIEWFINDER (not seen in the image above; simply scroll down to the bottom of the page to find the VIEWFINDER option). For normal CineEI use you should leave SDI1 & Internal Rec OFF, as we don’t want to record the LUT, just monitor via the LUT.

EXPOSING VIA THE LUT/LOOK.

When viewing or monitoring via a LUT you should adjust your exposure so that the picture in the viewfinder looks correctly exposed. If the LUT is correctly exposed then the S-Log recording will also be correctly exposed. As a point of reference, middle grey for Rec-709 and the 709(800) LUT should be at, or close to, 44% and white will be 90%. Skin tones and faces will be at the normal TV level of around 65-70%. As these are the levels we are used to seeing with a conventional video camera, this makes judging exposure easy.

This is really quite simple: generally speaking, when using a Rec-709 LUT, if it looks right in the viewfinder it probably is right. However, it is important to note that different LUT’s have slightly different optimum exposure levels. The 709(800) LUT is designed to be a very close match to the 709 gamma curve used in the majority of monitors, so this particular LUT is really simple to use: if the picture looks normal on the monitor then your exposure will also be normal. This makes the included 709(800) LUT the most accurate LUT for exposure. It produces a nice contrasty image that is easy to focus. It is not meant to be pretty! It is a tool to help you get accurate exposure simply and easily.

Correct exposure of Middle Grey for the 709(800) MLUT. Middle Grey should be 44%. 90% white (a white piece of paper) will be 90% and skin tones will be around 65-70%.

Correct exposure of the 709(800) LUT using a 90% white card, white will be 90%. You can use zebras at 90% to check this level (remember zebras etc measure the LUT exposure level when LUT’s are turned on).


The above images show the correct exposure levels for the 709(800) LUT. Middle grey should be 44% and 90% white is… well, 90%. Very simple, and you can easily check the white level by setting the zebras to 90%. As middle grey is where it normally sits on a TV or monitor and white is also where you would expect it, when using the 709(800) LUT, if the picture looks right in the viewfinder then it generally is right. This means the 709(800) LUT is particularly well suited to setting exposure, as a correctly exposed scene will look “normal” on a 709 TV or monitor. SIMPLE!

I don’t recommend using any of the other LUT’s to set exposure, because all of the other LUT’s have brightness ranges that are different to Rec-709. As a result, the LUT has to be exposed at non-standard levels to ensure the S-Log is exposed correctly. You can use any of the other LUT’s or LOOK’s if you really wish, but you will need to figure out the correct exposure levels for each one.

The LC709-TypeA Look is very popular as a LUT for the FS7 as it closely mimics the images you get from an Arri Alexa (“type A” = type Arri).

The “LC” part of the Look’s name means Low Contrast, and that also means big dynamic range. Whenever you take a big dynamic range (lots of shades) and show it on a display with a limited dynamic range (limited shades), all the shades in the image get squeezed together to fit into the monitor’s limited range, so the contrast is reduced. This also means that middle grey and white get squeezed closer together. With conventional 709, middle grey would be 42% and white around 80-90%, but with a high dynamic range/low contrast gamma curve, white gets squeezed closer to grey to make room for the extra dynamic range. Middle grey remains close to 42% but white drops to around 72%. So for the LC709 Looks in the FS7 the optimum exposure has middle grey at 42% and white at 72%. Don’t worry too much if you don’t hit those exact numbers; a little bit either way does little harm.

Correct white level for the LC709 LOOK’s. White should be around 72%.

Top Tip: I’m not sure how many people are aware of this function and how it works, but it’s a great way to get around the inability to easily turn the LUT’s on and off in the CineEI mode.

Assign the Hi/Low Key option to one of your assignable buttons. When using the 709(800) LUT (or any other LUT for that matter), the first press of the button darkens the VF image so you can see what highlights beyond the range of the LUT are doing, exposure wise. This allows you to check for clipping that may be present in the much wider range S-Log recording. Press it again and the image brightens, letting you see further into the shadows, so you can see the darkest things being captured by the S-Log recording. The Hi/Low Key function is a great way of seeing your full available exposure range without needing to turn the LUT on and off.

LUT EXPOSURE LEVELS FOR THE OTHER LUTS.

Here are white levels for some of the built-in LUT’s. The G40 or G33 part of the HG LUT names is the recommended value for middle grey. Use these levels for the zebras if you want to check the correct exposure of a 90% reflectance white card. I have also included an approximate zebra value for a piece of typical white printer paper.

709(800) = Middle Grey 42%. 90% Reflectance white 90%, white paper 92%.

HG8009(G40) = Middle Grey 40%. 90% Reflectance white 83%, white paper 86%.

HG8009(G33) = Middle Grey 33%. 90% Reflectance white 75%, white paper 80%.

The “LC709” LOOK’s = Middle Grey 42%. 90% Reflectance white 72%, white paper 77%.

DON’T PANIC if you don’t hit these precise levels! I’m giving them to you here so you have a good starting point. A little bit either way will not hurt. Again, generally speaking, if it looks right in the viewfinder or on your monitor screen, it is probably close enough not to worry about.

BUT, again, I suggest sticking to the 709(800) LUT for setting exposure. It’s not the prettiest LUT, but it is the only one of the included LUT’s that gives the correct, normal brightness and contrast range on a conventional monitor, viewfinder or TV. If you want to keep things simple and accurate, use 709(800).

USING EI OR EXPOSURE INDEX.

What is EI? EI stands for Exposure Index. This is NOT the same thing as ISO.

ISO is the sensitivity of the camera: the sensitivity at which the camera records.

EI is the sensitivity of the LUT: the brightness at which the LUT displays the scene.

The FS7 has a native ISO of 2000 and the camera always records at 2000 ISO in the Cine EI mode.

But the EI of the LUT can be varied to make the LUT brighter or darker; the only thing EI changes is the brightness of the LUT. When exposing via the LUT, if the LUT is made darker, you open the aperture to compensate for the dark looking image. This makes the LUT look correct again, but it also results in a recording that is brighter than normal, because we have opened the aperture.

CHANGING THE EI.

Latitude Indication.

At the native 2000 EI you have 6 stops of over exposure latitude and 8 stops of under exposure latitude (6 stops above middle grey and 8 stops below). This is how much headroom your shot has. The over exposure latitude is indicated whenever you change the EI level. In the image below you can see the EI, 2000EI, followed by “6.0E”; the 6.0E is the over exposure latitude.

The EI and latitude indication on the FS7.

The EI gain is altered with the camera’s gain switch, and the EI levels assigned to each of the Hi/Mid/Low switch positions can be changed in the camera menu. I recommend setting the EI steps to H 2000, M 1000 and L 500, as this gives you the native EI plus settings 1 stop and 2 stops down (each time you halve the ISO you shift the exposure one stop down).

The PXW-FS7 EI settings for the gain switch.

REDUCING THE EI.

So what happens when you halve the EI gain to 1000EI? One stop of gain is subtracted from the LUT. As a result, the picture you see via the LUT becomes one stop darker (a good thing to know: 1 stop of exposure is the same as 6dB of gain, or a doubling or halving of the ISO). So the picture in the viewfinder gets darker. But remember, the camera is still recording at the native ISO (unless you are baking in the LUT).
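
The arithmetic behind this is just powers of two. A small Python sketch (the 6.0E at 2000 EI is the figure from the camera; the rest simply follows from halving and doubling):

```python
import math

NATIVE_EI = 2000
NATIVE_OVER_LATITUDE = 6.0   # the "6.0E" shown at 2000 EI

def lut_offset_stops(ei):
    # how much darker (negative) or brighter (positive) the LUT is made
    return math.log2(ei / NATIVE_EI)

def over_latitude(ei):
    # opening the iris to compensate for a darker LUT eats headroom stop for stop
    return NATIVE_OVER_LATITUDE + lut_offset_stops(ei)

for ei in (4000, 2000, 1000, 800, 500):
    print(ei, round(lut_offset_stops(ei), 1), f"{over_latitude(ei):.1f}E")
# 4000 -> +1.0 stop, 7.0E; 1000 -> -1.0, 5.0E; 800 -> -1.3, 4.7E; 500 -> -2.0, 4.0E
```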

 

 

Why does this happen and what’s happening to my pictures?

First of all, let’s take a look at the scene as seen in the camera’s viewfinder, first at the native 2000 EI and then with the EI set one stop down at 1000EI. The native EI is on the left, the one stop lower EI on the right.

2000EI and 1000EI as seen in the viewfinder with NO exposure change.

So, in the viewfinder, when you lower the EI by one stop (halving the EI) the picture becomes darker by 1 stop. If you are using an external monitor with a waveform display connected to SDI2 or the HDMI output, this too will get darker, and the waveform levels will decrease by one stop.

As a camera operator, what do you do when you have a dark picture? Most people would compensate for a dark looking image by opening the iris. As we have gone one stop darker with the EI gain, making the LUT 1 stop darker, to return the viewfinder image to the same brightness as at the native EI you would open the iris by one stop.

So now, after reducing the EI by one stop and then compensating by opening the iris by 1 stop, the viewfinder image is the same brightness as it was when we started.

But what’s happening to my recordings?

Remember that the recordings on the XQD card (assuming the SDI1 & Internal Rec LUT is OFF) are always made at the camera’s native 2000 ISO, no matter what the EI is set to. As a result, because you opened the iris by 1 stop to compensate for the dark viewfinder image, the recording will be 1 stop brighter. Look at the image below to see what we see in the viewfinder alongside what is actually being recorded. The EI offset exposure with aperture correction, as seen in the viewfinder (left hand side), looks normal, while the actual native ISO recording (right hand side) is 1 stop brighter.

At 1000EI the Viewfinder image on the left is 1 stop darker than the actual recorded image (on the right) which is recorded at the native 2000 ISO.


How does this help us, what are the benefits?

When you take this brighter recorded image into post production, the colorist will have to bring the levels back down to normal as part of the grading process. As he/she will be reducing the levels by around 1 stop (6dB), any noise in the picture will also be reduced by 6dB. The end result is a picture with 6dB less noise than if it had been shot at the native EI. Another benefit is that, as the scene was exposed brighter, you will be able to see more shadow information.

Is there a down side to using a low EI?

Because the actual recorded exposure is brighter by one stop, you have one stop less headroom. However, the PXW-FS7 has an abundance of headroom, so the loss of one stop is often not going to cause a problem. I find that going between 1 and 1.5 stops down on the EI rarely results in any highlight issues. But when shooting very high contrast scenes at a low EI it is worth toggling the LUT on and off to check for clipping in the S-Log image.

It’s also worth noting that S-Log does not have a highlight roll-off. Each stop above middle grey is recorded with the same amount of data, so exposing brighter by a stop or two does not alter the contrast as it would with a standard gamma. So over exposing log is NOT a bad thing; in most cases it is in fact highly beneficial.

Log gamma curves have very little picture information in the shadows, so if we can expose brighter, our shadows will look much better.

What is happening to my exposure range?

What you are doing is moving the mid point of your exposure range: up, in the case of a lower EI (up because you are opening the aperture, thus making the recordings brighter). This allows the camera to see deeper into the shadows, increasing the under exposure latitude, but it reduces the over exposure latitude. The reverse is also possible: if you use a higher EI you shift your mid point down. This gives you more headroom for dealing with very bright highlights, but you won’t see as far into the shadows and the final pictures will be a little noisier, as in post production the overall levels will have to be brought up to compensate for the darker recordings.

Cine-EI allows us to shift our exposure mid point up and down. Lowering the EI gain gives you a darker VF image, so you compensate by opening the aperture, which results in brighter exposed footage. This reduces over exposure headroom but increases the under exposure range (and improves the signal to noise ratio). Raising the EI gain gives a brighter viewfinder image, which makes you expose the recordings darker, giving you more headroom but less under exposure range (and a worse signal to noise ratio).

When shooting with almost any CineEI camera I use an EI between 1 and 2 stops darker than the base setting, so on the FS7 I normally set the EI to 800. It’s very rare to get any highlight problems at 800 EI, and the improvement this low EI brings to the noise levels in the image is very nice.


Post Production.

When shooting raw, information about the EI gain is stored in the clip’s metadata. The idea is that this metadata can be used by the grading or editing software to adjust the clip’s exposure so that it looks correctly exposed (or at least exposed as you saw it in the viewfinder via the LUT). The metadata is also recorded alongside the XAVC footage when shooting S-Log2/3. However, currently few edit or grading applications use this metadata to offset the exposure, so S-Log2/3 material may look bright or dark when imported into your edit application and you may need to add a correction to return the exposure to a “normal” level. You can use a correction LUT to move the exposure, and when I provide LUT sets on this website I will always try to include LUT’s for over and under exposure. Another way to deal with brightly exposed log footage in post production is to first apply an “S” curve to the log using the luma curve tool; then a simple gain adjustment will shift the exposure.

See this video for detailed information on how to expose using CineEI:

 

WHAT IF YOU ARE SHOOTING USING HFR (High Frame Rate) AND LUT’S CAN’T BE USED?

In HFR you can either have LUT’s on for everything, including the internal recording, or all off, no LUT’s at all. This is not helpful if your primary recordings are internal S-Log.

So for HFR, in many cases you will just have to work viewing the native S-Log. If you set zebras to 70% and expose a white card at 70%, this will result in S-Log footage that is 1.2 to 1.5 stops over exposed. This is the same as shooting at 800 EI, and I highly recommend this approach for HFR (slow motion) shooting, as it will help clean up the additional noise that you see when shooting HFR.

BAKING IN THE LUT/LOOK.

When shooting using a high or low EI, the EI gain is added to or subtracted from the LUT or LOOK. This makes the picture in the viewfinder or on a monitor fed via the LUT brighter or darker, depending on the EI used. In Cine-EI mode you want the camera to always record the S-Log at the camera’s native 2000 ISO, so normally you want to leave the LUT’s OFF for the internal recording. Just in case you missed that very important point: normally you want to leave the LUT’s OFF for the internal recording!

You need to turn ON the SDI1 & Internal Rec LUT to “Bake In” a LUT. Normally leave this OFF.

Just about the only exceptions to this might be when shooting raw, or when you want to deliberately record with the LUT/LOOK baked in to your XQD recordings. By “baked in” I mean with the gamma, contrast and color of the LUT/LOOK permanently recorded as part of the recording. You can’t remove the LUT/LOOK later if it’s baked in.

No matter what the LUT/LOOK settings, if you’re recording raw on an external raw recorder, the raw is always recorded at 2000 ISO. But the internal XQD recordings are different. It is possible, if you choose, to apply a LUT/LOOK to the XQD recordings by setting the “SDI1 & Internal Rec” LUT to ON. The gain of the recorded LUT/LOOK will be altered according to the CineEI gain settings. This might be useful to provide an easy to work with proxy file for editing, with the LUT/LOOK baked in, while shooting raw. Or it can be a way to create an in-camera look or style for material that won’t be graded. Using a baked-in LUT/LOOK for a production that won’t be graded, or will only have minimal grading, is an interesting alternative to Custom Mode that should be considered for fast turn-around productions.

In most cases, however, you will probably not want a LUT applied to your primary recordings. If shooting S-Log you must set the LUT to OFF for “SDI1 & Internal Rec”; see the image above. With “SDI1 & Internal Rec” off, the internal recordings will be S-Log2 or S-Log3, without a LUT, at 2000 ISO.

You can tell what the camera is actually recording by looking in the viewfinder. At the center right of the display there is an indication of what is being recorded on the cards. Normally for Cine-EI this should say either SLog2 or SLog3. If it indicates something else, then you are baking a LUT in to the internal recordings.

The internal recording gamma is shown on the right of the VF. This camera is recording S-Log3.
The indication here shows that the 709(800) LUT is being baked in to the internal recordings.

CINE-EI SUMMARY:

CineEI allows you to “rate” the camera at different ISOs.

You MUST use a LUT for CineEI to work as designed.

A low EI number will result in a brighter exposure, which will improve the signal to noise ratio, giving a cleaner picture, or allow you to see more shadow detail. However, you will lose some over exposure headroom.

A high EI number will result in a darker exposure, which will increase the over exposure headroom but decrease the under exposure range. The signal to noise ratio is worse, so the final picture may end up with more noise.

A 1D LUT will not clip and appear to overexpose as readily as a 3D LOOK when using a low EI, so a 1D LUT may be preferable.

When viewing via a 709 LUT you expose using normal 709 exposure levels. Basically if it looks right in the viewfinder or on the monitor (via the 709 LUT) it almost certainly is right.

When I shoot with my FS7 I normally rate the camera at between 800 and 1000 EI. I find that 5 stops of over exposure range is plenty for most situations, and I prefer the decrease in noise in the final pictures. But please, test and experiment for yourself.

 

 

QUICK GUIDE TO CREATING YOUR OWN LOOK’s (Using DaVinci Resolve).

It’s very easy to create your own 3D LUT for the FS7 using DaVinci Resolve or just about any grading software with LUT export capability. The LUT should be a 17x17x17 or 33x33x33 .cube LUT. This is what Resolve creates by default, and .cube LUT’s are the most common type of LUT in use today.
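
If you’re curious what the exported file actually contains, a .cube file is just a size declaration followed by rows of RGB values, with the red axis varying fastest. A minimal Python sketch that writes an identity 17x17x17 LUT, assuming the common Resolve/IRIDAS .cube conventions:

```python
# Write an identity 17x17x17 3D LUT in the .cube format.
N = 17

with open("identity_17.cube", "w") as f:
    f.write("LUT_3D_SIZE %d\n" % N)
    for b in range(N):
        for g in range(N):
            for r in range(N):   # red varies fastest in .cube files
                f.write("%.6f %.6f %.6f\n"
                        % (r / (N - 1), g / (N - 1), b / (N - 1)))
```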

First, simply shoot some test S-Log3 clips at 2000 EI. You should also use the same color space (S-Gamut or S-Gamut3.cine) for the test shots as you will when you use the LUT. I recommend shooting a variety of clips so that you can assess how the LUT will work in different lighting situations.

Import and grade the clips from the test shoot in Resolve, creating the look you are after for your production, or the way you wish your footage to appear in the camera’s viewfinder. Once you’re happy with the look of a graded clip, right click on the clip in the timeline and choose “Export LUT”. Resolve will then create and save a .cube LUT.

Then place the .cube LUT file created by the grading software on an SD card in the PMWF55_F5 folder. You may need to create the following folder structure on the SD card: first a PRIVATE folder, inside that a SONY folder, and so on.

PRIVATE / SONY / PRO / CAMERA / PMWF55_F5
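If you'd rather script this than make the folders by hand, here's a minimal Python sketch. The mount point and LUT file name are just examples, substitute wherever your SD card appears on your system and the name of the LUT you exported:

import os
import shutil

SD_CARD = "/Volumes/UNTITLED"  # hypothetical mount point for the SD card
LUT_DIR = os.path.join(SD_CARD, "PRIVATE", "SONY", "PRO", "CAMERA", "PMWF55_F5")

os.makedirs(LUT_DIR, exist_ok=True)  # creates the whole folder chain if missing
shutil.copy("MyLook.cube", LUT_DIR)  # the .cube LUT exported from Resolve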

Put the SD card in the camera, then go to the "File" menu, then "Monitor 3D LUT" and select "Load SD Card". The camera will offer you a 1 to 4 destination memory selection. Choose 1, 2, 3 or 4; this is the memory location where the LUT will be saved. You should then be presented with a list of all the LUTs on the SD card. Select your chosen LUT to save it from the SD card to the camera.

Once loaded into the camera, when you choose 3D User LUTs you can select between user LUT memories 1, 2, 3 and 4. Your LUT will be in the memory you selected when you copied the LUT from the SD card to the camera.

Do you really want or need 4K?

For the past 18 months almost everything I have shot has been shot at 4K. I have to say that I am addicted to the extra resolution and the quality of the images I am getting these days from my PMW-F5 and R5 raw recorder. In addition, the flexibility I get in post from shooting in 4K to crop and re-frame my shots is fantastic.

BUT: I have a Sony A7s on order. We European buyers won't get them until late July as the European model is different to the US model: the US cameras are based on the NTSC system, so they do 24, 30 and 60fps, while the European models are based on PAL, so they do 25 and 50fps, with the addition of 24fps as well. Right now there are no realistic portable 4K recording options for the A7s; these will come later. So this means that for now, if I want to shoot with the A7s it will have to be HD.

Is that really such a bad thing? Well, no, not really. It's a sideways step, not a backwards one, as I'm getting the A7s for a very specific role.

Image quality is a combination of factors. Resolution is just one part of the image quality equation. Dynamic range, contrast, noise, colour etc all contribute in more or less equal parts to getting a great image. The A7s delivers all of these very well. If I am delivering in HD then most of the time I don’t NEED 4K. 4K is nice to have and if I can have 4K then I will take advantage of it, but for an HD production it is definitely not essential in most cases.

The reason for getting the A7s is that I want a pocket sized camera that I can use for grab and go shooting. It offers amazing low light performance and great dynamic range thanks to its use of S-Log2. I'm really excited about the prospect of having a camera as sensitive as the A7s for next year's Northern Lights trips. I should be able to get shots that have not been possible before, so even at "only" HD the A7s will get used alongside my 4K F5/R5.

In the future there will of course be external 4K recording options for the A7s making it even more versatile. I probably won’t always use them with the A7s but the option will be there when I NEED 4K.

Given the choice, if I can shoot in 4K I almost always will. It really does give me much greater post production flexibility, for example I can shoot a wide shot of an interview in 4K and then crop in for a mid shot or close up if I'm delivering in HD. So 4K will always be very high on my priority list when choosing a camera. But if you can't afford 4K and are still delivering in HD then worry not. It's probably better to have a well optimised HD camera than a cheap, poor quality 4K camera. Don't let 4K trick you into buying a lesser camera just because the lesser camera has 4K.

Well shot HD still looks fantastic, even on a big screen. Most movies are shown at 2K and few people complain about the quality of most blockbusters. So, HD is still good enough, 4K has not made HD obsolete or degraded the quality of existing HD cameras.

But is good enough really good enough? Good enough for you and your clients? I am passionate about getting great images, so I don't just want good enough, I want the best I can get, so I'm a 4K convert, as are some of my clients. I'm actually delivering content in 4K for many of my customers. But sometimes 4K isn't practical, so in these cases I'll just get the very best HD I can (hence the A7s for very portable and ultra low light shooting).

The bottom line is that right now, maybe you don't need 4K, but it's OK to want 4K. You may need 4K very soon as it becomes more mainstream (some nice Samsung and LG 4K TVs are now available in the $1.5K/£1K price range). 4K might bring you many benefits in post production, but that doesn't mean you need it, not yet at least. But once you do start to shoot in 4K there is no going back, and while you might still not need 4K, you'll probably find that you really do want it. 🙂

What is a Gamut or Color Space and why do I need to know about it?

Well I have set myself quite a challenge here as this is a tough one to describe and explain. Not so much perhaps because it’s difficult, but just because it’s hard to visualise, as you will see.

First of all the dictionary definition of Gamut is “The complete range or scope of something”.

In video terms what it means is normally the full range of colours and brightness that can be either captured or displayed.

I’m sure you have probably heard of the specification REC-709 before. Well REC-709, short for ITU-R Recommendation, Broadcast Television, number 709. This recommendation sets out the display of colours and brightness that a television set or monitor should be able to display. Note that it is a recommendation for display devices, not for cameras, it is a “display reference” and you might hear me talking about when things are “display referenced” ie meeting these display standards or “scene referenced” which would me shooting the light and colours in a scene as they really are, rather than what they will look like on a display.

Anyway…. Perhaps you have seen a chart or diagram that looks like the one below before.

Sony colour gamuts.

Now this shows several things. The big outer shape represents the full range of colours that we can see with our own eyes. Within that range are triangles that represent the boundaries of different colour gamuts or colour ranges. The grey coloured triangle for example is REC-709.

Something useful to know is that the 3 corners of each of the triangles are what are referred to as the "primaries". You will hear this term a lot when people talk about colour spaces, because if you know where the primaries (corners) are, by joining them together you can find the size of the colour space or gamut and what the colour response will be.

Look closely at the chart. Look at the shades of red, green or blue shown at the primaries for the REC-709 triangle. Now compare these with the shades shown at the primaries for the much larger F65 and F55 primaries. Is there much difference? Well no, not really. Can you figure out why there is so little difference?

Think about it for a moment: what type of display device are you looking at this chart on? It's most likely a computer display of some kind, and the gamut of most computer displays is about the same size as that of REC-709. So given that the display device you're looking at the chart on can't actually show any of the extended colours outside of the grey triangle anyway, is it any surprise that you can't see much of a difference between the 709 primaries and the F65 and F55 primaries? That's the problem with charts like this, they don't really tell you everything that's going on. It does however tell us some things. Let's have a look at another chart:

SGamuts Compared.

This chart is similar to the first one we looked at, but without the pretty colours. Blue is bottom left, Red is to the right and green top left.

What we are interested in here is the relationship between the different colour space triangles. Using the REC-709 triangle as our reference (as that's the type of display most TV and video productions will be shown on), look at how S-Gamut and S-Gamut3 are much larger than 709. So S-Gamut will be able to record deeper, richer colours than 709 can ever hope to show. In addition, also note how S-Gamut isn't just a bigger triangle, it's also twisted and distorted relative to 709. This is really important.

You may also want to refer to the top diagram as I do my best to explain this. The center of the overall gamut is white. As you draw a line out from the center towards a colour space's primary, the colour becomes more saturated (vivid). The position of the primary determines the exact hue or tone represented. Let's just consider green for the moment, and let's pretend we are shooting a shot with 3 green apples. These apples have different amounts of green. The most vivid of the 3 apples has 8/10ths of the deepest green we can possibly see, the middle one 6/10ths and the least colourful one 4/10ths. The image below represents what the apples would look like to us if we saw them with our eyes.

The apples as we would see them with our own eyes.

If we were shooting with a camera designed to match the 709 display specification, which is often a good idea as we want the colours to look right on the TV, then the deepest green we can capture is the 709 green primary. Let's consider the 709 green primary to be 6/10ths, with 10/10ths being the greenest thing a human being can see. 6/10ths green will be recorded at our peak green recording level, so that when we play back on a 709 TV it will display the greenest, most intense green that the display panel is capable of. So if we shoot the apples with a 709 compatible camera, 6/10ths green will be recorded at 100% as this is the richest green we can record (these are not real levels, I'm just using them to illustrate the principles involved), and this below is what the apples would look like on the TV screen.

6/10ths Green and above recorded at 100% (our imaginary rec-709)

So that’s rec-709, our 6/10ths green apple recorded at 100%. Everything above 6/10 will also be 100% so the 8/10th and 6/10ths green apples will look more or less the same.

What happens then if we record with a bigger gamut? Let's say that the green primary for S-Gamut is 8/10ths of visible green. Now when recording this more vibrant 8/10ths green in S-Gamut it will be recorded at 100%, because this is the most vibrant green that S-Gamut can record, and everything less than 8/10ths will be recorded at a lower percentage.

But what happens if we play back S-Gamut on a 709 display? Well, when the 709 display sees that 100% signal it will show 6/10ths green, a paler, less vibrant shade of green than the 8/10ths shade the camera captured, because 6/10ths is the most vibrant green the display is capable of. All of our colours will be paler and less rich than they should be.

The apples recorded using a big gamut but displayed using 709 gamut.

So that’s the first issue when shooting with a larger colour Gamut than the Gamut of the display device, the saturation will be incorrect, a dark green apple will be pale green. OK, that doesn’t sound like too big a problem, why don’t we just boost the saturation of the image in post production? Well if the display is already showing our 100% green S-Gamut signal at the maximum it can show (6/10ths for Rec-709) then boosting the saturation won’t help colours that are already at the limit of what the display can show simply because it isn’t capable of showing them any greener than they already look. Boosting the saturation will make those colours not at the limit of the display technology richer, but those already at the limit won’t get any more colourful. So as we boost the saturation any pale green apples become greener while the deep green apples stay the same so we loose colour contrast between the pale and deep green apples. The end result is an image that doesn’t really look any different that it would have done if shot in Rec-709.

Saturation boosted S-Gamut looks little different to 709 original.
Sony colour gamuts.
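Here's a tiny Python sketch of the saturation boost problem, using the made-up 10ths scale from the apples example. These are illustrative values only, not real colourimetry:

# The display can show at most 6/10ths green; the wide gamut records
# up to 8/10ths at its 100% level.
DISPLAY_MAX = 6.0
GAMUT_MAX = 8.0

apples = [8.0, 6.0, 4.0]  # scene saturation of the three apples

# Shown on a 709 display with no remapping, the recorded fraction of the
# wide gamut simply scales into the display's range, so everything is paler.
shown = [a / GAMUT_MAX * DISPLAY_MAX for a in apples]
print(shown)  # [6.0, 4.5, 3.0]

# Boost saturation in post to compensate and values clip at the display limit.
boost = GAMUT_MAX / DISPLAY_MAX
boosted = [min(s * boost, DISPLAY_MAX) for s in shown]
print(boosted)  # [6.0, 6.0, 4.0] -- the 8/10ths and 6/10ths apples now match

The two most saturated apples end up identical, which is exactly the loss of colour contrast described above.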

But, it’s even worse that just a difference to the saturation. Look at the triangles again  and compare 709 with S-Gamut. Look at how much more green there is within the S-Gamut colour soace than the 709 colour space compared to red or blue.  So what do you think will happen if we try to take that S-Gamut range and squeeze it in to the 709 range? Well there will be a distinct colour shift towards green as we have a greater percentage of green in S-Gamut than we should have in Rec-709 and that will generate a noticeable colour shift and the skewing of colours.

Squeezing S-Gamut into 709 will result in a colour shift.

This is where Sony have been very clever with S-Gamut3. If you take S-Gamut and squeeze it into 709 then you will see a colour shift (as well as the saturation shift discussed earlier). But with S-Gamut3 Sony have altered the colour sampling within the colour space so that there is a better match between 709 and S-Gamut3. This means that when you squeeze S-Gamut3 into 709 there is virtually no colour shift. However S-Gamut3 is still a very big colour space, so to correctly use it in a 709 environment you really need to use a Look Up Table (LUT) to re-map it into the smaller space without an appreciable saturation loss, mapping the colours in such a way that a dark green apple will still look darker green than a light green apple, while keeping within the boundaries of what a 709 display can show.

Taking this one step further, realising that there are very few, if any, display devices that can actually show a gamut as large as S-Gamut or S-Gamut3, Sony have developed a smaller gamut known as S-Gamut3.cine that is a subset of S-Gamut3.

The benefit of this smaller gamut is that the red, green and blue ratios are very close to 709. If you look at the triangles you can see that S-Gamut3.cine is more or less just a larger version of the 709 triangle. This means that colour shifts are almost totally eliminated, making this gamut much easier to work with in post production. It's still a large gamut, bigger than the DCI-P3 specification for digital cinema, so it still has a bigger colour range than we can ever normally hope to see, but as it is better aligned to both P3 and Rec-709, colourists will find it much easier to work with. For productions that will end up as DCI-P3, a slight saturation boost is all that will be needed in many cases.

So as you can see, having a huge Gamut may not always be beneficial as often we don’t have any way to show it and simply adding more saturation to a seemingly de-saturated big gamut image may actually reduce the colour contrast as our already fully saturated objects, limited by what a 709 display can show, can’t get any more saturated. In addition a gamut such as S-Gamut that has a very different ratio of R, G and B to that of 709 will introduce colour shifts if it isn’t correctly re-mapped. This is why Sony developed S-Gamut3.cine, a big but not excessively large colour space that lines up well with both DCI-P3 and Rec-709 and is thus easier to handle in post production.

Understanding Sony’s SLog3. It isn’t really noisy.

It’s been brought to my attention that there is a lot of concern about the apparent noise levels when using Sony’s new Slog3 gamma curve. The problem being that when you view the ungraded Slog3 it appears to have more noise in the shadows than Slog2. Many are concerned that this “extra” noise will end up making the final pictures nosier. The reality is that this is not the case, you won’t get any extra noise using Slog3 over Slog2. Because S-Log3 is closer to the log gamma curves used in other cameras many people find that Slog3 is generally easier to grade and work with in post production.

So what’s going on?

S-Log3 mimics the Cineon log curve, a curve that was originally designed back in the 1980s to match the density of film stocks. As a result the shadow and low key parts of the scene are shown and recorded at a brighter level than with S-Log2, which was designed from the outset to work with electronic sensors and is optimised for the way an electronic sensor works rather than film. Because the S-Log3 shadow range has more gain than S-Log2, the shadows end up a bit brighter than perhaps they really need to be, and because of the extra gain the noise in the shadows appears to be worse. The noise level might be a bit higher, but the important thing, the ratio between wanted picture information and unwanted noise, is exactly the same whether in S-Log2 or S-Log3.

Let me explain:

The signal to noise ratio of a camera is determined predominantly by the sensor itself and how the sensor is read. This is NOT changing between gamma curves.

The other thing that affects the signal to noise ratio is the exposure level, or to be more precise the aperture and how much light falls on the sensor. This should be the same for S-Log2 and S-Log3. So again no change there.

As these two key factors do not change when you switch between Slog2 and slog3, there is no change in the signal to noise ratio between Slog2 and Slog3. It is the ratio between wanted picture information and noise that is important. Not the noise level, but the ratio. What people see when they look at ungraded SLog3 is a higher noise level simply because ALL the signal levels are also higher, both noise and desirable image information. So the ratio between the wanted signal and the noise is actually no different for both Slog2 and Slog3.

Gamma is just gain, nothing more, nothing less, just applied by variable amounts at different levels. In the case of log, the amount of gain decreases as you go further up the curve.

Increasing or decreasing gain does NOT significantly change the signal to noise ratio of a digital camera (or any other digital system). It might make noise more visible if you are amplifying the image more than normal in an underexposure situation where you are using that extra gain to compensate for not enough light. But the ratio between the dark object and the noise does not change, it’s just that as you have made the dark object brighter by adding gain, you have also made the noise brighter by the same amount, so the noise also becomes brighter and thus more obvious.

Let's take a look at some math. I'll keep it very simple, I promise!

Just for a moment, to keep things simple, let's say some camera has a signal to noise ratio of 3:1 (SNR is normally measured in dB, but I'm going to keep things really simple here).

So, from the sensor if my picture signal is 3 then my noise will be 1.

If I apply Gamma Curve “A” which has 2x gain then my picture becomes 6 and my noise becomes 2. The SNR is 6:2 = 3:1

If I apply Gamma Curve "B" which has 3x gain then my picture becomes 9 and my noise becomes 3. The SNR is 9:3 = 3:1, so no change to the ratio. But the noise is now 3 with gamma B compared to gamma A where it is 2, so the gamma B image will appear at first glance to be noisier.

Now we take those imaginary clips in to post production:

In post we want to grade the shots so that we end up with the same brightness of image, so let's say our target level after grading is 18.

For the gamma “A” signal we need to add 3x gain to take 6 to 18. As a result the noise now becomes 6 (3 x 2 = 6).

For the gamma "B" signal (our noisy looking one) we need to use less gain in post, only 2x gain, to take 9 to 18. When we apply 2x gain our noise for gamma B becomes 6 (2 x 3 = 6).

Notice anything? In both cases the noise in the final image is exactly the same, in both cases the final image level is 18 and the final noise level is 6, even though the two recordings started at different levels with one appearing noisier than the other.
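If it helps, here is the same arithmetic as a few lines of Python, simply verifying the example above:

signal, noise = 3.0, 1.0  # sensor signal to noise ratio of 3:1

# Gamma A applies 2x gain in camera, gamma B applies 3x gain
a_sig, a_noise = signal * 2, noise * 2   # 6 and 2
b_sig, b_noise = signal * 3, noise * 3   # 9 and 3

TARGET = 18.0  # the level we grade both clips to in post
print(a_noise * (TARGET / a_sig))  # 6.0 -- gamma A noise after grading
print(b_noise * (TARGET / b_sig))  # 6.0 -- gamma B noise after grading, identical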

OK, so that’s the theory, what about in practice?

Take a look at the images below. These are 400% crops from larger frames, with identical exposure, workflow and processing for each. You will see the original S-Log2 and S-Log3, plus the S-Log2 and S-Log3 after applying the LC-709 LUT to each in Sony's raw viewer. Nothing else has been done to the clips. You can "see" more noise in the raised shadows in the untouched S-Log3, but after applying the LUTs the noise levels are the same. This is because the signal to noise ratio of both curves is the same, and after adding the LUTs the total gain applied (camera gain + LUT gain) to get the same output levels is the same.

Frame grab crops: Slog2-400, Slog3-400, Slog2-to-709-400, Slog3-to-709-400.

It’s interesting to note in these frame grabs that you can actually see that in fact the S-Log3 final image looks if anything a touch less noisy. The bobbles and the edge of the picture frame look better in the Slog3 in my opinion. This is probably because the S-Log3 recording uses very slightly higher levels in the shadow areas and this helps reduce compression artefacts.

The best way to alter the SNR of a typical video system (other than through electronic noise reduction) is by changing the exposure, which is why EI (Exposure Index) and exposure offsets are so important and so effective.

S-Log3 has a near straight line curve above middle grey. This means that in post production it's easier to grade, as adjustments to one part of the image will have a similar effect on other parts of the image. It's also very, very close to Cineon and to Arri Log C, and in many cases LUTs and grades designed for these gammas will also work pretty well with S-Log3.

The down side to Slog3?

Very few really. Fewer data points are recorded for each stop in the brighter parts of the picture and the highlight range compared to S-Log2. This doesn't change the dynamic range, but if you are using a less than ideal 8 bit codec you may find S-Log2 less prone to banding in the sky or other gradients than S-Log3. With a 10 bit recording, in a decent workflow, it makes very little difference.
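To get a feel for why bit depth matters so much here, this very rough Python illustration assumes a log curve that spreads its code values evenly across the stops it records; the 14 stop figure is just illustrative, not an S-Log specification:

STOPS = 14  # illustrative dynamic range, not an S-Log spec

for bits in (8, 10):
    codes = 2 ** bits
    print(bits, "bit:", codes // STOPS, "code values per stop")

# 8 bit: 18 code values per stop -- little room before banding appears
# 10 bit: 73 code values per stop -- far more headroom for smooth gradients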


What causes CA or Purple and Blue fringes in my videos?

Blue and purple fringes around edges in photos and videos are nothing new. It's a problem we have always had; telescopes and binoculars can also suffer. It's normally called chromatic aberration or CA. When we were all shooting in standard definition it wasn't something that created too many issues, but with HD and 4K cameras it's a much bigger issue, because as you increase the resolution of the system (camera + lens), generally speaking, CA becomes much worse.

As light passes through a glass lens, the different wavelengths that result in the different colours we see are refracted and bent by different amounts. So the point behind the lens where the light comes into sharp focus will be different for red light and blue light.

A simple glass lens will bend red, green and blue wavelengths by different amounts, so the focus point will be slightly different for each.

The larger the pixels on your sensor, the less of an issue this will be. Let's say for example that on an SD sensor with big pixels, when the blue light is brought to best focus the red light is out of focus by half a pixel width. All you will see is the very slightest red tint to edges as a small bit of out of focus red spills on to the adjacent pixel. Now consider what happens if you increase the resolution of the sensor. If you go from SD to HD the pixels need to be made much smaller to fit them all on to the same size sensor. HD pixels are around half the size of SD pixels (for the same size sensor). So now that out of focus red light, which was only half the width of an SD pixel, will completely fill the adjacent pixel, and the CA becomes much more noticeable.
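As a back-of-envelope sketch: the red/blue focus error is fixed by the lens, measured in microns at the sensor, so halving the pixel pitch doubles the error measured in pixels. The numbers here are invented purely for illustration:

focus_error_um = 5.0  # hypothetical red vs blue blur at the sensor, in microns

sd_pixel_um = 10.0              # imaginary SD pixel pitch
hd_pixel_um = sd_pixel_um / 2   # HD pixels roughly half the size, same sensor

print(focus_error_um / sd_pixel_um)  # 0.5 -- half a pixel: a faint tint
print(focus_error_um / hd_pixel_um)  # 1.0 -- a whole pixel: visible fringing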

In addition, as you increase the resolution of the lens you need to make the focus of the light "tighter" and less blurred to increase the lens's resolving power. This makes the focus point for each wavelength more distinct, with less blurring of each colour and so less bleed of one colour into the other. And when each focus point is more distinct, the difference between the in focus and out of focus light becomes more obvious, so the colour fringing becomes more obvious too.

This is why SD lenses very often show less CA than HD lenses, a softer more blurry SD lens will have less distinct CA. Lens manufacturers will use exotic types of glass to try to combat CA. Some types of glass have a negative index so blue may focus closer than red and then other types of glass may have a positive index so red may focus closer than blue. By mixing positive and negative glass elements within the lens you can cancel out some of the colour shift. But this is very difficult to get right across all focal lengths in zoom lenses so some CA almost always remains. The exotic glass used in some of the lens elements can be incredibly expensive to produce and is one of the reasons why good lenses don’t come cheap.

Rather than trying to eliminate every last bit of CA optically the other approach is to electronically reduce the CA by either shifting the R G B channels in the camera electronically or reducing the saturation around high contrast edges. This is what ALAC or CAC does. It’s easier to get a better result from these systems when the lens is precisely matched to the camera and I think this is why the CA correction on the Sony kit lenses tends to be more effective than that of the 3rd party lenses.

Sony recently released firmware updates for the PMW200 and PMW300 cameras that improves the performance of the electronic CA reduction of these cameras when using the supplied kit lenses.

Understanding the difference between Display Referenced and Scene Referenced.

This is really useful! Understand this and it will help you understand a lot more about gamma curves, log curves and raw. Even if you don't shoot raw, understanding this can be very helpful in working out the differences between how we see the world, the way the world really is, and how a video camera sees the world.

So first of all what is "Display Referenced"? As the name of the term implies, this is all about how an image is displayed. The vast majority of gamma curves are display referenced. Most cameras are set up based on what the pictures look like on a monitor or TV; this is display referenced. It's all about producing a picture that looks nice when it is displayed. Most cameras and monitors produce pictures that look nice by mimicking the way our own visual system works, and that's why the pictures look good.

Kodak Grey Card Plus.

If you’ve never used a grey card it really is worth getting one as well as a black and white card. One of the most commonly available grey cards is the Kodak 18% grey card. Look at the image of the Kodak Grey Card Plus shown here. You can see a white bar at the top, a grey middle and a black bar at the bottom.

What do you see? If your monitor is correctly calibrated the grey patch should look like it’s half way between white and black. But this “middle” grey is also known as 18% grey because it only actually reflects 18% of the light falling on it. A white card will reflect 90% of the light falling on it. If we assume black is black then you would think that a card reflecting only 18% of the light falling on it would look closer to black than white, but it doesn’t, it looks half way between the two. This is because our own visual system is tuned to shadows and the mid range and tends to ignore highlights and brighter parts of the scenes we are looking at. As a result we perceive shadows and dark objects as brighter than they actually are. Maybe this is because in the past the things that used to want to eat us lurked in the shadows, or simply because faces are more important to us than the sky and clouds.

To compensate for this, right now your monitor is only using 18% of its brightness range to show shades and hues that appear to be half way between black and white. This is part of the gamma process that makes images on screens look natural, and this is "display referenced".

When we expose a video camera using a display referenced gamma curve (Rec-709 is display referenced) and a grey card, we would normally set the exposure level of the grey card at around 40-45%. It’s not normally 50% because a white card will reflect 90% of the light falling on it and half way between black and the white card will be about 45%.

We do this for a couple of reasons. In older analog recording and broadcasting systems the signal is noisier when closer to black; if we recorded 18% grey at 18% it would possibly be very noisy. Most scenes contain lots of shadows and objects less bright than white, so recording these at a higher level provides a less noisy picture and allows us to use more bandwidth for those all important shadow areas. When the recording is then displayed on a TV or monitor, the levels are adjusted by the monitor's gamma curve so that mid-tones appear as just that, mid tones.

So that middle grey recorded at 45% is getting reduced back down so that the display outputs 18% of its available brightness range and thus to us humans it appears to be half way between black and white.
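Here's that round trip as a couple of lines of Python, using the classic simplified power-law exponent rather than the exact Rec-709 formula, so the numbers are approximate:

GAMMA = 2.2  # simplified display gamma, not the exact Rec-709 curve

scene = 0.18                     # the grey card reflects 18% of the light
recorded = scene ** (1 / GAMMA)  # camera encode: ~0.46, i.e. the mid 40s %
displayed = recorded ** GAMMA    # display decode: back down to ~0.18
print(recorded, displayed)       # ~0.46 ~0.18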

So are you still with me? All the above is “Display Referenced”, it’s all about how it looks.

So what is “Scene Referenced”?

Think about our middle grey card again. It reflects only 18% of the light that falls on it, yet appears to be half way between black and white. How do we know this? Well, because someone has used a light meter to measure it. A light meter is a device that captures photons of light and from that produces an electrical signal to drive a meter. What is a video camera? Every pixel in a video camera is a microscopic light meter that turns photons of light into an electrical signal. So a video camera is in effect a very sophisticated light meter.

Ungraded raw shot of a bike in Singapore, this is scene referred as it shows the scene as it actually is.

If we remove the cameras gamma curve and just record the data coming off the sensor we are recording a measurement of the true light coming from the scene just as it is. Sony’s F5, F55 and F65 cameras record the raw sensor data with no gamma curve, this is linear raw data, so it’s a true representation of the actual light levels in the scene. This is “Scene Referred”. It’s not about how the picture looks, but recording the actual light levels in the scene. So a camera shooting “Scene Referred” will record the light coming off an 18% grey card at 18%.

If we do nothing else to that scene referred image and then show it on a monitor with a conventional gamma curve, that 18% grey level would be taken down in level by the gamma curve and as a result look almost totally black (remember in Display referenced we record middle grey at 45% and then the gamma curve corrects the monitor output down to provide correct brightness so that we perceive it to be half way between black and white).
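Again in numbers: feed the 18% scene referred level straight through a 2.2 display gamma with no encode step and it comes out nearly black (same simplified power-law assumption as before):

print(0.18 ** 2.2)  # ~0.023 -- about 2% of the display's output, near black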

This means that we cannot simply take a scene referenced shot and show it on a display referenced monitor. To get from scene referenced to display referenced we have to add a gamma curve to the scene referenced footage. When you're working with linear raw this is normally done on the fly in the editing or grading software, so it's very rare to actually see the scene referenced footage as it really is. The big advantage of using scene referenced material is that because we have recorded the scene as it actually is, any grading we do will not have to deal with the distortions that a gamma curve adds. Grading corrections behave in a much more natural and realistic manner. The down side is that as we don't have a gamma curve to help shift our recording levels into a more manageable range, we need to use a lot more data to record the scene accurately.

The Academy ACES workflow is based around using scene referenced material rather than display referenced. One of the ideas behind this is that scene referenced cameras from different manufacturers should all look the same. There is no artistic interpretation of the scene via a gamma curve. A scene referenced camera should be “measuring” and recording the scene how it actually is so it shouldn’t matter who makes it, they should all be recording the same thing. Of course in reality life is not that simple. Differences in the color filters, pixel design etc means that there are differences, but by using scene referred you eliminate the gamma curve and as a result a grade you apply to one camera will look very similar when applied to another, making it easier to mix multiple cameras within your workflow.


Aliasing when shooting 2K with a 4K sensor (FS700, PMW-F5, PMW-F55 and others).

There is a lot of confusion and misunderstanding about aliasing and moiré. So I've thrown together this article to try and explain what's going on and what you can (or can't) do about it.

Sony’s FS700, F5 and F55 cameras all have 4K sensors. They also have the ability to shoot 4K raw as well as 2K raw when using Sony’s R5 raw recorder. The FS700 will also be able to shoot 2K raw to the Convergent Design Odyssey. At 4K these cameras have near zero aliasing, at 2K there is the risk of seeing noticeable amounts of aliasing.

One key concept to understand from the outset is that when you are working with raw the signal out of the camera comes more or less directly from the sensor. When shooting non raw then the output is derived from the full sensor plus a lot of extra very complex signal processing.

First of all let's look at what aliasing is and what causes it.

Aliasing shows up in images in different ways. One common effect is a rainbow of colours across a fine repeating pattern, this is called moiré. Another artefact could be lines and edges that are just a little off horizontal or vertical appearing to have stepped or jagged edges, sometimes referred to as “jaggies”.

But what causes this and why is there an issue at 2K but not at 4K with these cameras?

Let's imagine we are going to shoot a test pattern that looks like this:

Test pattern, checked shirt or other similar repeating pattern.

And let's assume we are using a bayer sensor such as the one in the FS700, F5 or F55, which has a pixel arrangement like this, although it's worth noting that aliasing can occur with any type of sensor pattern or even a 3 chip design:

Sensor with bayer pattern.

Now let's see what happens if we shoot our test pattern so that the stripes of the pattern line up with the pixels on the sensor. The top of the graphic below represents the pattern or image being filmed, the middle is the sensor pixels and the bottom is the output from the green pixels of the sensor:

Test pattern aligned with the sensor pixels.

As we can see, each green pixel "sees" either a white line or a black line, and so the output is a series of black and white lines. Everything looks just fine… or does it? What happens if we move the test pattern or move the camera just a little bit? What if the camera is just a tiny bit wobbly on the tripod? What if this isn't a test pattern but a striped or checked shirt, and the person we are shooting is moving around? In the image below the pattern we are filming has been shifted to the left by a small amount.

Test pattern misaligned with the pixels.

Now look at the output: it's nothing but grey, the black and white pattern has gone. Why? Simply because the green pixels are now seeing half of a white bar and half of a black bar. Half white plus half black equals grey, so every pixel sees grey. If we were to slowly pan the camera across this pattern then the output would alternate between black and white lines when the bars and pixels line up, and grey when they don't. This is aliasing at work. Imagine the shot is of a person with a checked shirt; as the person moves about, the shirt will alternate between being patterned and being grey. As the shirt will not be perfectly flat and even, different parts of the shirt will go in and out of sync with the pixels, so some parts will be grey, some patterned, and it will look blotchy. A similar thing will be happening with any colours, as the red and blue pixels will sometimes see the pattern and at other times not, so the colours will flicker and produce strange patterns. This is the moiré that can look like a rainbow of colours.
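You can simulate exactly this with a few lines of Python. A black/white stripe pattern is averaged over pixels the same width as one stripe; aligned, each pixel sees pure black or pure white, but shift the pattern by half a stripe and every pixel averages out to grey:

def sample(pattern, pixel_width, phase):
    # average the pattern over each pixel, starting at `phase`
    out = []
    for start in range(phase, len(pattern) - pixel_width + 1, pixel_width):
        px = pattern[start:start + pixel_width]
        out.append(sum(px) / len(px))
    return out

stripes = ([0] * 4 + [255] * 4) * 8  # black and white bars, 4 units wide

print(sample(stripes, 4, 0))  # aligned: [0.0, 255.0, 0.0, 255.0, ...]
print(sample(stripes, 4, 2))  # shifted half a bar: [127.5, 127.5, ...] all grey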

So what can be done to stop this?

Well what’s done in the majority of professional level cameras is to add a special filter in front of the sensor called an optical low pass filter (OPLF). This filter works by significantly reducing the contrast of the image falling on the sensor at resolutions approaching the pixel resolution so that the scenario above cannot occur. Basically the image falling on the sensor is blurred a little so that the pixels can never see only black or only white. This way we won’t get flickering between black & white and then grey if there is any movement. The downside to this is that it does mean that some contrast and resolution will be lost, but this is better than having flickery jaggies and rainbow moiré. In effect the OLPF is a type of de-focusing filter (for the techies out there it is usually something called a birefringent filtre). The design of the OLPF is a trade off between how much aliasing is acceptable and how much resolution loss is acceptable. The OLPF cut-off isn’t instant, it’s a sharp but gradual cut-off that starts somewhere lower than the sensor resolution, so there is some undesirable but unavoidable contrast loss on fine details. The OLPF will be optimised for a specific pixel size and thus image resolution, but it’s a compromise. In a 4K camera the OLPF will start reducing the resolution/contrast before it gets to 4K.

(As an aside, this is one of the reasons why shooting with a 4K camera can result in better HD, because the OLPF in an HD camera cuts contrast as we approach HD, so the HD is never as sharp and contrasty as perhaps it could be. But shoot at 4K and down-convert and you can get sharper, higher contrast HD).

So that’s how we prevent aliasing, but what’s that got to do with 2K on the FS700, F5 and F55?

Well the problem is this, when shooting 2K raw or in the high speed raw modes Sony are reading out the sensor in a way that creates a larger “virtual” pixel. This almost certainly has to be done for the high speed modes to reduce the amount of data that needs to be transferred from the sensor and into the cameras processing and recording circuits when using high frame rates.  I don’t know exactly how Sony are doing this but it might be something like my sketch below:

Using adjacent pixels to create larger virtual pixels.

So instead of reading individual pixels, Sony are probably reading groups of pixels together to create a 2K bayer image. This creates larger virtual pixels and in effect turns the sensor into a 2k sensor. It is probably done on the sensor during the read out process (possibly simply by addressing 4 pixels at the same time instead of just one) and this makes high speed continuous shooting possible without overheating or overload as there is far less data to read out.
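Here's one possible reading of that sketch in code, averaging each 2x2 group of same-colour samples into one larger "virtual" pixel. Whether Sony actually bin exactly like this is my guess, not something Sony have published; the point is simply that the effective pixel pitch doubles:

def bin_2x2(plane):
    # average each 2x2 block of a single colour plane into one virtual pixel
    return [[(plane[y][x] + plane[y][x + 1] + plane[y + 1][x] + plane[y + 1][x + 1]) / 4
             for x in range(0, len(plane[0]), 2)]
            for y in range(0, len(plane), 2)]

green = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

print(bin_2x2(green))  # [[3.5, 5.5], [11.5, 13.5]] -- a 4x4 plane becomes 2x2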

But now the standard OLPF, which is designed around the small 4K pixels, isn't really doing anything, because the new "virtual" pixels are much larger than the original 4K pixels the OLPF was designed around. The standard OLPF cuts off at 4K, so it isn't having any effect at 2K, and a 2K resolution pattern can fall directly on our 2K virtual bayer pixels and you will get aliasing. (There's a clue in the filter name: optical LOW PASS filter, so it will PASS any signals that are LOWer than the cut off. If the cut off is 4K, then 2K will be passed as this is lower than 4K, but as the sensor is now in effect a 2K sensor we now need a 2K cut off.)

On the FS700 there isn’t (at the moment at least) a great deal you can do about this. But on the F5 and F55 cameras Sony have made the OLPF replaceable. By loosening one screw the 4K OLPF can be swapped out with a 2K OLPF in just a few seconds. The 2K OLPF will control aliasing at 2K and high speed and in addition it can be used if you want a softer look at 4K. The contrast/resolution reduction the filter introduces at 2K will give you a softer “creamier” look at 4K which might be nice for cosmetic, fashion, period drama or other similar shoots.

Replacing the OLPF on a Sony PMW-F5 or PMW-F55. Very simple.

FS700 owners wanting to shoot 2K raw will have to look at adding a little bit of diffusion to their lenses. Perhaps a low contrast filter, or a net or stocking over the front of the lens to add some diffusion, will work to slightly soften the image and prevent aliasing. Maybe someone will bring out an OLPF that can be fitted between the lens and the camera body, but for the time being, to prevent aliasing on the FS700 you need to soften, defocus or blur the image a little to prevent the camera resolving detail above 2K. Maybe using a soft lens will work, or just very slightly defocusing the image.

But why don’t I get aliasing when I shoot HD?

Well all these cameras use the full 4K sensor when shooting HD. All the pixels are used and read out as individual pixels. This 4K signal is then de-bayered into a conventional 4K video signal (NOT bayer). This 4K (non raw) video will not have any significant aliasing as the OLPF is 4K and the derived video is 4K. Then this conventional video signal is electronically down-converted to HD. During the down conversion process an electronic low pass filter is then used to prevent aliasing and moiré in the newly created HD video. You can’t do this with raw sensor data as raw is BEFORE processing and derived directly from the sensor pixels, but you can do this with conventional video as the HD is derived from a fully processed 4K video signal.

I hope you understand this. It’s not an easy concept to understand and even harder to explain. I’d love your comments, especially anything that can add clarity to my explanation.

UPDATE: It has been pointed out that it should be possible to take the 4K bayer and from that use an image processor to produce a 2K anti-aliased raw signal.

The problem is that, yes, in theory you can take a 4K signal from a bayer sensor into an image processor and from that create an anti-aliased 2K bayer signal. But the processing power needed to do this is considerable, as we are looking at taking 16 bit linear sensor data and converting that to new 16 bit linear data. That means using DSP with a massive bit depth, with a big enough overhead to handle 16 bit in and 16 bit out. So as a minimum an extremely fast 24 bit DSP, or possibly a 32 bit DSP, working with 4K data in real time. This would have a big heat and power penalty and I suspect is completely impractical in a compact video camera like the F5/F55. This is rack-mount, high-power workstation territory at the moment. Anyone who's edited 4K raw will know how processor intensive it is trying to manipulate 16 bit 4K data.

When shooting HD you're taking 4K 16 bit linear sensor data, but you're only generating 10 bit HD log or gamma encoded data, so the processing overhead is tiny in comparison and a 16 bit DSP can quite happily handle this. In fact you could use a 14 bit DSP by discarding the two LSBs, as these would have little effect on the final 10 bit HD.

For the high speed modes I doubt there is any way to read out every pixel from the sensor continuously without something overheating. It's probably not the sensor that would overheat but the DSP. The FS700 can actually do 120fps at 4K for 4 seconds, so clearly the sensor can be read in its entirety at 4K and 120fps, but my guess is that something gets too hot for extended shooting times.

So this means that somehow the data throughput has to be reduced. The easiest way to do this is to read adjacent pixel groups. This can be done during the readout by addressing multiple pixels together (binning). If done as I suggest in my article, this adds a small degree of anti-aliasing. Sony have stated many times that they do not pixel or line skip and that every pixel is used at 2K, but either way, whether you line skip or pixel bin, the effective pixel pitch increases and so you must also change the OLPF to match.