
Notes on Timecode and Timecode Sync for cinematographers.

This is part 1 of two articles. In this article I will look at what timecode is and some common causes of timecode drift problems. In part 2 I will look at the correct way to synchronise timecode across multiple devices.

This is a subject that keeps cropping up from time to time. A lot of us camera operators don’t always understand the intricacies of timecode. If you live in a PAL/50Hz area and shoot at 25fps all the time you will have few problems. But start shooting at 24fps, 23.98 fps or start trying to sync different cameras or audio recorders and it can all get very complicated and very confusing very quickly.

So I’ve written these notes to try to help you out.

WHAT IS TIMECODE?

The timecode we normally encounter in the film and video world is simply a way to give every frame we record a unique ID number based on either the total number of frames recorded or the time of day. It is a counter of whole frames; it cannot count fractions of a frame, so its best possible accuracy is one frame. Timecode is normally displayed as Hour:Minute:Second:Frame in the following format:

HH:MM:SS:FF

RECORD RUN AND FREE RUN

The two most common types of timecode used are “Record Run” and “Free Run”. Record Run, as the name suggests, only runs or counts up when the camera is recording. It is a cumulative count of the total number of frames recorded. So if the first clip you record starts with the timecode clock at 00:00:00:00 and runs for 10 seconds and 5 frames, the TC at the end of the clip will be 00:00:10:05. The first frame of the next clip you record continues the count, so it will be 00:00:10:06 and so on. When you are not recording the timecode stops counting and does not increase.
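
To make the counting concrete, here is a minimal sketch (my own illustration, not any camera's firmware) of how a whole-frame count maps to the HH:MM:SS:FF display for simple integer frame rates:

    def frames_to_timecode(frame_count, fps):
        # Convert a whole-frame count to HH:MM:SS:FF (non-drop-frame, integer fps)
        ff = frame_count % fps
        ss = (frame_count // fps) % 60
        mm = (frame_count // (fps * 60)) % 60
        hh = frame_count // (fps * 3600)
        return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

    # The Record Run example above: 10 seconds and 5 frames at 25fps
    print(frames_to_timecode(10 * 25 + 5, 25))  # 00:00:10:05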

With “Free Run” the timecode clock in the camera is always counting according to the frame rate the camera is set to. It is common to set the free run clock so that it matches the time of day. Once you set the time in the timecode clock and enable “Free Run” the clock will start counting up whether you are recording or not.

HERE COMES A REALLY IMPORTANT BIT!

In “Free Run”, once you have set the timecode clock it will always count frames at the frame rate the camera is set to, and in some cases this will actually cause the clock to drift away from the actual time of day.

SOME OF THE PROBLEMS.

An old problem is that in the USA and other NTSC areas the frame rate is a rather odd one: 29.97fps (this came about to prevent problems with the color signal when color TV was introduced). Timecode can only count actual whole frames, so there is no way to account for the missing 0.03 frames in every second. As a result timecode running at 29.97fps runs slightly slower than a real time clock.

If the frame rate were actually 30fps there would be 108,000 frames in 1 hour. But at 29.97fps, after one real time hour you will have recorded only 107,892 frames, so the frame-counting TC won't reach one hour for another 3.6 seconds.
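
You can check that arithmetic yourself; a quick worked example (my numbers, matching the paragraph above):

    # NDF timecode lag at 29.97fps over one real-time hour
    frames_recorded = round(29.97 * 3600)   # 107,892 frames actually captured
    frames_needed = 30 * 3600               # 108,000 frames before TC reads 01:00:00:00
    shortfall = frames_needed - frames_recorded
    print(shortfall)                        # 108 frames
    print(shortfall / 29.97)                # ≈ 3.6 seconds of lag per hour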

DROP FRAME TIMECODE.

To eliminate this 3.6 seconds per hour (relative to real time) timecode discrepancy in footage filmed at 29.97fps a special type of timecode was developed called “Drop Frame Timecode”. Drop Frame Timecode (DF) works like this: at the start of every minute, except each tenth minute, two numbers are dropped from the timecode count. So there are some missing numbers in the timecode count, but after exactly 1 real time hour the timecode value will have incremented by 1 hour. No frames themselves are dropped, only numbers in the frame count.
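
For anyone who wants to see the renumbering in action, here is a sketch of the widely used drop-frame conversion (a standard, public algorithm, not specific to any one camera):

    def frames_to_df_timecode(frame_number):
        # Convert a frame count at 29.97fps to drop-frame timecode.
        # Frame numbers 0 and 1 are skipped at the start of every minute
        # except every tenth minute.
        frames_per_10min = 17982              # 30*600 minus 18 dropped numbers
        frames_per_min = 1798                 # 30*60 minus 2 dropped numbers
        tens, rem = divmod(frame_number, frames_per_10min)
        frame_number += 18 * tens             # 18 numbers dropped per 10 minutes
        if rem > 2:
            frame_number += 2 * ((rem - 2) // frames_per_min)
        ff = frame_number % 30
        ss = (frame_number // 30) % 60
        mm = (frame_number // 1800) % 60
        hh = frame_number // 108000
        return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"  # ';' flags drop frame

    # One real hour at 29.97fps is 107,892 whole frames; 108,000 minus the 108
    # dropped numbers means DF lands exactly on the hour:
    print(frames_to_df_timecode(107892))      # 01:00:00;00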

WHEN TO USE DROP FRAME (DF) OR NON DROP FRAME (NDF).

Drop Frame Timecode is only ever used for material shot at 29.97fps, which includes 59.94i. (We will often incorrectly refer to this as 60i or 30fps – virtually all 30fps video these days is actually 29.97fps). If you are using “Rec Run” timecode you will almost never need to use Drop Frame, as generally you will not be syncing with anything else.

If you are using 29.97fps  “Free Run” you should use Drop Frame (DF) when you want your timecode to stay in sync with a real time clock. An example would be shooting a long event or over several days where you want the timecode clock to match the time on your watch or the watch of an assistant that might be logging what you are shooting.

If you use 29.97fps Non Drop Frame (NDF) your camera's timecode will drift relative to the actual time of day by about a minute and a half each day. If you are timecode syncing multiple cameras or devices it is vital that they are all using the same type of timecode; mixing DF and NDF will cause all kinds of problems.

It's worth noting that many lower cost portable audio recorders that record a “timecode” don't actually record true timecode. Instead they record a timestamp based on a real time clock. So if you record on the portable recorder for let's say 2 hours and then try to sync the 1 hour point (01:00:00:00 Clock Time) with a camera recording 29.97fps NDF timecode using the 1 hour timecode number (01:00:00:00 NDF Timecode), they will be out of sync by 3.6 seconds. So this would be a situation where it would be preferable to use DF timecode in the camera, as the camera's timecode will then match the real time clock of the external recorder.

WHAT ABOUT 23.98fps?

Now you are entering a whole world of timecode pain!!

23.98fps is a bit of an oddball standard that came about from fitting 24fps films into the NTSC 29.97fps frame rate. It doesn't have anything to do with pull up; it's just that as NTSC TV runs at 29.97fps rather than true 30fps, movies are slowed down by 0.1% to fit into 29.97fps.

Now 23.98fps exists as a standalone format. In theory there is still a case for something like Drop Frame timecode, as you can't have fractions of a frame in a timecode count; each frame must have a whole number, and after a given number of frames you move on to the next second in the count. With 23.98fps we count 24 whole frames and then increment the timecode count by one second, so once again there is a discrepancy between real time and the timecode count of 3.6 seconds per hour. Because the camera only captures 23.976 frames each real second while the counter needs 24 frames to advance one timecode second, the timecode on a camera running at 23.98fps actually runs slow compared to a real time clock. Unlike 29.97fps there is no Drop Frame (DF) standard for 23.98; it's always treated as a 24fps count (TC counts 24 frames, then adds 1 to the seconds count). This is because there is no nice way to adjust the count and make it fit real time as there is with 29.97fps: an hour of real time at 23.976fps contains 86,313.6 frames, so no matter how you do the math or how many numbers you drop there would always be a fraction of a frame left over.

So 23.98fps does not have a DF mode. This means that after 1 hour of real time the timecode count on a camera shooting at 23.98fps will still read approximately 00:59:56; it won't reach 01:00:00:00 for another 3.6 seconds. If you set the camera to “Free Run” the timecode will inevitably drift relative to real time; over the course of a day the camera will fall behind by almost one and a half minutes compared to a real time clock or any other device using drop frame timecode, true 24fps or 25fps.
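
A quick check of those numbers (a worked example of my own, using 23.976 as the precise rate):

    # TC reading after one real-time hour at 23.976fps
    frames = 23.976 * 3600              # 86,313.6 - not a whole number, hence no
                                        # drop-frame scheme can ever make it fit
    tc_seconds = int(frames) // 24      # the counter adds a second every 24 frames
    print(divmod(tc_seconds, 60))       # (59, 56): TC reads about 00:59:56, 3.6s behind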

So, as I said earlier 23.98fps timecode can be painful to deal with.

24fps timecode does not have this problem as there are exactly 24 frames in every second, so a video camera shooting at 24fps should not see any significant timecode drift or loss of timecode sync compared to a real time clock.

It's worth considering here the problem of shooting sync sound (where sound is recorded externally on a remote sound recorder). If your sound recorder does not have 23.98fps timecode, its timecode will drift relative to a camera shooting at 23.98fps. If your sound recorder only has a real time timecode clock you might need to consider shooting at 24fps instead of 23.98fps to help keep the audio and picture timecodes in sync. Many older audio recorders designed for use alongside film cameras can only do 24fps timecode.

In part 2 I will look at the correct way to synchronise timecode across multiple devices.


How much technology does a modern cinematographer need to know?

This post might be a little controversial. I am often told “you don't need to know the technical stuff to be a cinematographer” or “I don't need to know about log and gamma, I just want to shoot”.

I would argue that unless you are working closely with a good DIT a modern DP/Cinematographer really does need to understand many of the technical aspects of the equipment being used, in particular the settings that alter the way the camera captures the images. Not just things like “set it to gamma x for bright scenes” but why you would want to do that.

Now I'm not saying that you have to be a full blown electronics engineer, but if you really want to capture the best possible images it is very important that you truly understand what the camera is doing. It's also a huge help to understand how your footage will behave in post production. Any craftsman should know not only how to use his tools but also how they work.

Part of the understanding of how your chosen camera behaves comes from testing and experimentation. Shooting test clips across a range of exposures, trying different gamma or log curves and then taking the footage into post production and seeing how it behaves.

Film cinematographers will shoot tests with different film stocks before a large production under the kinds of lighting conditions that will be encountered during the film. Then the film would be processed in different ways to find the best match to the look the cinematographer is trying to achieve. Digital cinematographers should be doing the same and importantly understanding what the end results are telling them.

Most of the great painters didn't just pick up a paint brush and slap paint on a canvas. Many artists, from Da Vinci to Turner, studied chemistry so they could develop new paints and painting techniques. Da Vinci was a pioneer of oil painting; Turner made his own paints from base pigments and chemicals and patented some of the unique colors he created.

This doesn't take anything away from the traditional skills of lighting and composition etc; those are just as important as ever and always will be. But modern electronic cameras are sophisticated devices that need to be used correctly to get the best out of them. I believe that you need to understand the way your camera responds to light. Understand its limitations, understand its strengths, and learn how to use those strengths and avoid the weaknesses.

And that's a really important consideration. Today the majority of the cameras on the market are capable of making great images, provided you know how to get the best from them. One may be stronger in low light, one may be better in bright light. It may be that one camera will suit one job or one scene better than another. You need to learn about these differences, and understanding the underlying technologies will help you figure out which cameras may be candidates for your next project.

It's not just the camera tech that's important to understand but also how to manage the footage all the way from the camera to delivery. While you don't need to be an expert colorist, it certainly helps if you know the process, just as film cameramen know about color timing and film processing. A trend that is growing in the US is high end cinematographers who also grade.

This has come about because in the days of film the cinematographer could determine the look of the finished production through a combination of lighting, the choice of film stock and how it was to be processed. Today a cinematographer may have much less control  over the final image as it passes through the post production and grading process. Often the final look is determined by the colorist as much as the cinematographer. By also becoming colorists and staying with their material all the way through post production, cinematographers can retain control of the final look of the production.

As HDR (High Dynamic Range) delivery becomes more important along with the need to deliver SDR content at the same time, a good understanding of the differences between and limitations of both systems will be needed as you may need to alter the way you expose to suit one or the other.

So, there is lots that you need to know about the technology used in today's world of digital cinematography. Where there is a big enough budget, DITs (Digital Imaging Technicians) can help cinematographers with guidance on camera setups, gamma, color science, LUTs and workflows. But at the low budget end of the market, as a cinematographer you need at the very least a firm grasp of how a modern camera works and how to correctly manage the data it produces (you would be amazed how many people get this wrong). Finally, you need to know how the material handles in post production if you really want to get the best from it.

It isn’t simple, it isn’t always easy, it takes time and effort. But it’s incredibly rewarding when it all comes together and results in beautiful images.

If you disagree or have your own take on this please post a comment. I’d love to hear other views.

What is “Exposure”?

This comes up in many of my workshops. It seems like a very simple question and the correct answer is really very simple, but many cameramen, especially those from a TV and video background, actually get this a little wrong.

The word “expose” means to lay open, reveal or un-mask. In film terms it’s obvious what it means, it is opening the shutter and aperture/iris to let the correct amount of light fall on the film stock. In the video world it means exactly the same thing. It is how much light we allow to fall on the sensor.

Exposure is controlled by the speed of the shutter (how long we let the light in) and the aperture of the lens (the volume of light we let in).

So why do video people get a bit confused about exposure? Well, it's down to the way we measure it with video cameras.

In the film world you would use a light meter to measure the intensity of the light in a scene and then perform a calculation to determine the correct amount of light to allow to fall on the film based on the sensitivity (ISO) of the film stock. But in the video world it is common practice to look at a monitor and assess the exposure by looking at, or measuring, how bright the picture is using a waveform meter, zebras or histogram etc.

What are we measuring when we look at a video picture or measure a video signal? We are not measuring how much light is falling on the sensor, we are measuring how bright the picture is on the screen or what the recording levels of the video signal are. Most of the time there is a direct relationship between on screen brightness and exposure, but it is important to make a clear distinction between the two as variations in brightness are not always due to changes in exposure.

It's important because something like changing a camera's gamma curve will alter the brightness of the on screen image. This isn't an exposure change; this is a change in the recording levels used by that particular gamma curve, which in turn results in a change in the brightness levels you see on the screen. This is why if you take a camera such as the FS7 or F5/F55 and correctly expose the camera using Rec709 as the gamma curve you will find middle grey at 42% and white at 90%. Then switch to a Cinegamma or Hypergamma without adjusting the shutter speed or aperture and you will find middle grey and white at much lower levels, the very same white target perhaps as low as 70%.

In both cases the exposure is correct, but the on screen brightness is greatly different. The difference in on screen brightness comes from the different recording levels used by 709 and the Hypergammas/Cinegammas. In order to record a greater dynamic range than the 6 stops offered by 709, we need to compress the original 6 stop 709 range into a much smaller recording range to make room for the extra stops of dynamic range that the Hypergammas or Cinegammas can record.

So as you can see, exposure should really be the absolute measurement of the amount of light falling on the sensor. Brightness is related to exposure, but just how bright the picture should be depends on many factors of which exposure is just one. Once you realise that brightness and exposure are not always the same thing it becomes easier to understand how Cinegamma, Hypergamma, log and raw recording works. Levels are just levels and it doesn’t really matter whether something is recorded at 90%, 70% or 61%. Provided you have enough data (and this is where 10bit or better recording really helps) you have the same amount of picture information at both levels and you can easily shift from one level to the other without degrading the image in any way in post production.
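
To make that distinction concrete, here is a small sketch using the published Rec709 OETF and Sony's published S-Log3 formula. The same scene reflectances, at the same exposure, land at different signal levels (real cameras add knees and legal-range scaling, which is why a camera's 709 reads nearer 42%/90%, and S-Log2 would put middle grey lower still at around 32%, so treat the exact percentages as illustrative):

    import math

    def rec709(x):
        # Rec709 OETF: scene reflectance -> normalised signal level
        return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

    def slog3(x):
        # Sony S-Log3 OETF (published formula), normalised 0-1
        if x >= 0.01125:
            return (420.0 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023.0
        return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

    for name, x in (("middle grey (18%)", 0.18), ("white card (90%)", 0.90)):
        print(f"{name}: Rec709 {rec709(x):.0%}  S-Log3 {slog3(x):.0%}")
    # Same light on the sensor, very different white levels - neither is "wrong"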

Of course we do want to have our video levels in the finished production at the right levels to match the levels that the TV, monitor or display device is expecting. But when shooting, especially with non standard gammas such as Hypergamma or log it’s perfectly normal to have levels that are different to what we would see with plain vanilla 709 and these typically lower levels should not be considered too dark or under exposed, because they are not. Dark does not necessarily mean under exposed, nor does it mean a noisy image. How much noise there is depends on the signal to noise ratio which is dependant on the amount of light that we let on to the sensor. I’ll be explaining that in my next article.

Log is often a poor choice for low light.

UPDATE: Following much debate and discussion in the comments section and on my Facebook feed, I think one thing that has become clear is that an important factor in this subject is the required end contrast. If you take S-Log3, which has a raised shadow range, and shoot with it in low light you will get a low contrast image. If you choose to keep the image low contrast then there is no accentuation of the recorded noise in post and this can bring an acceptable and useable result. However if you need to grade the S-Log3 to gain the same contrast as a dedicated high contrast gamma such as 709, then the lack of recorded data can make the image become coarser than it would be if recorded with a narrow range gamma. Furthermore many other factors come into play, such as how noisy the camera is, the codec used, bit depth etc. So at the end of the day my recommendation is to not assume log will be better, but to test both log and standard gammas in conditions similar to those you will be shooting in.

Log gamma curves are designed for one thing and one thing only, to extend the dynamic range that can be recorded. In order to be able to record that greater dynamic range all kinds of compromises are being made.

Let's look at a few facts.

Amount of picture information:
The amount of picture information that you can record, i.e. the number of image samples, shades or data points, is not determined by the gamma curve. It is determined by the recording format or codec. For example a 10 bit codec can record up to 1024 shades or code values while an 8 bit codec can record up to 256 (in practice somewhat fewer, as some values at the top and bottom of the range are reserved). It doesn't matter which gamma curve you use, the 10 bit codec will contain more usable picture information than the 8 bit codec; the 10 bit picture will have around four times as many shades as the 8 bit one. For low light more “bits” is always going to be better than fewer, as noise can be recorded more faithfully. If noise is recorded with only a few shades or code values it will look coarse and ugly compared to noise recorded with many more levels, which will look smoother.
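
Here's a little sketch (my own toy model, standard library only) of why bit depth changes how noise looks: the same dim, noisy patch is recorded with roughly four times as many distinct levels in 10 bit as in 8 bit:

    import random

    random.seed(0)
    # A dim, slightly noisy patch: signal at 5% of full scale, 1% noise
    patch = [min(max(random.gauss(0.05, 0.01), 0.0), 1.0) for _ in range(10000)]

    for bits in (8, 10):
        codes = [round(v * (2 ** bits - 1)) for v in patch]
        print(f"{bits} bit: patch recorded using {len(set(codes))} distinct code values")
    # 8 bit squeezes the noise into a handful of steps, so it looks coarse;
    # 10 bit records it with ~4x more levels and it looks smoother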

The bottom line though is that no matter what gamma curve you use, the maximum amount of picture information is determined by the codec or recording format. It's a bit of a myth that log gives you more data for post; it does not, it gives you a broader range.

Log extends the dynamic range: This is the one thing that log is best known for. Extending the dynamic range does not mean we have more picture information; all it means is we have a broader range. So instead of say a 6 or 7 stop range we have a 14 stop range. That range increase is not just an increase in highlight range but also a corresponding increase in shadow range. A typical rec-709 camera can “see” about 3 or 4 stops below middle grey before the image is deemed too noisy and any shades or tones blend into one. An S-Log2 or S-Log3 camera can see about 8 stops below middle grey before there is nothing left to see but noise. However the lower 2 or 3 stops of this extended range really are very noisy and it's questionable how useful they really are.

Imagine you are shooting a row of buildings (each building representing a few stops of dynamic range). Think of standard gammas as a standard 50mm lens. It will give you a great image but it won’t be very wide, you might only get one or two buildings into the shot, but you will have a ton of detail of those buildings.

Shot of buildings taken with a standard lens. Think “standard gamma”.

Think of a wide dynamic range gamma such as S-log as a wide angle lens. It will give you a much wider image taking in several buildings and, assuming the lens is of similar quality to the 50mm lens, the captured pictures will appear to be of similar quality. But although you have a wider view the level of detail for each building will be reduced. You have a wider range, but each individual building has less detail.

Buildings shot with a 20mm wide lens. Think “wide gamma” or log gamma.

But what if in your final scene you are only going to show one or two buildings and they need to fill the frame? If you shoot with the wide lens you will need to blow the image up in post to show just the buildings you want. Blowing an image up like this results in a lower quality image. The standard lens image however won't need to be blown up, so it will look better. Log is just the same. While you do start off with a wider range (which may indeed be highly beneficial), each element or range of shades within that range has less data than if we had shot with a narrower gamma.

Wide lens (think wide gamma) cropped to match the standard lens (think standard gamma). Note the loss of quality compared to starting with the standard lens.

Using log in low light is the equivalent of using a wide angle lens to shoot a row of buildings where you can actually only see a few of the buildings, the others being invisible, and then blowing up that image to fill the frame. The reality is you would be better off using the standard lens and filling the frame with the few visible buildings, thus saving the need to blow up the image.

Shooting a scene where most of it is dark with a wide lens (wide gamma/log) wastes a lot of data.
Using a narrower lens (narrow or standard gamma) wastes less data and the information that is captured is of higher quality.

S-Log2/3 has a higher base ISO: On a Sony camera this higher ISO value is actually very misleading, because the camera isn't any more sensitive in log. The camera is still at 0dB gain, even though it is being rated at a higher ISO. The higher ISO rating is there to offset an external light meter to give you the darker recording levels normally used for log. Remember a white card is recorded at 90% with standard gammas, but only around 60% with log. When you raise the ISO setting on a light meter it will tell you to close down the aperture on the camera, and that then results in the correct, darker log exposure.

S-Log3 may appear at first brighter than standard gammas when you switch to it. This is because it raises the very bottom of the log curve and puts more data into the shadows. But the brighter parts of the image will be no brighter than with a camera with standard gammas at 0db gain. This extra shadow data may be beneficial for some low light situations, so if  you are going to use log in low light S-Log3 is superior to S-Log2.

If you can't get the correct exposure with log, don't use it! Basically if you can't get the correct exposure without adding gain or increasing the ISO, don't use log. If you can't get your midrange up where it's supposed to be then you are wasting data. You are not filling your codec or recording format, so a lot of data available for picture information is being wasted. Also consider that because each stop is recorded with less data with log, not only is the picture information a bit coarser but so too is any noise. If you really are struggling for light your image is likely to be a bit dark and thus have a lot of noise, and coarse noise is not nice. Log has very little data allocated to the shadows in order to free up data for the highlights, because one of the key features of log is the excellent way it handles highlights. As a result an under exposed log image is going to lack even more data. So never under expose log.

Chart showing S-Log2 and S-Log3 plotted against f-stops and code values. Note how little data there is for each of the darker stops; the best data is above middle grey.

Think of log as the opposite of standard gammas. With standard gammas you always try never to over expose and often being very slightly under exposed is good. But log must never be under exposed, there is not enough data in the shadows to cope with under exposure. Meanwhile log has more data in the highlights, so is very happy to be a little over exposed.

My rule of thumb is quite simple. If I can’t fully expose log at the base sensitivity I don’t use it. I will drop down to a cinegamma or hypergamma. If I can’t correctly expose the hypergamma or cinegamma then I drop down to standard gamma, rec-709.

Video Camera Noise, ISO and Sensitivity.

It's amazing how poorly this is understood. I'm also rather surprised at some people's expectations when it comes to noise in the shadow areas of video images.

First of all, all video camera sensors produce noise. There will always be noise to some degree and this is typically most visible in the darker parts of the image because if your actual image brightness is 5% and your noise is 5% the noise is as big as the desired signal. In the highlights the same noise is still there, but when the brightness is 80% and the noise is 5% the noise is much, much less obvious.

ISO: What is ISO with a video camera? On its own it's actually a fairly meaningless term. Why? Well, because a camera manufacturer can declare more or less any ISO they choose as the camera's sensitivity. There is no set standard. It's up to the camera manufacturer to pick an ISO number that gives a reasonably bright image with an acceptable amount of noise. But what is acceptable noise? Again there is no standard, so ISO ratings should be ignored unless you also know what the signal to noise ratio is at that ISO. For decades video camera sensitivity was rated in dB. The sensitivity is measured at 0dB gain in terms of the aperture needed to correctly expose a 90% white card at 2000 lux. This is very precise and easily repeatable. The signal to noise ratio is then also measured at the unity (0dB) gain point, and from these two figures you can get a really good understanding of how a camera will perform: not just its sensitivity, but more importantly how much noise it produces at its nominal native sensitivity.

But now, because it's fashionable and makes us sound like film camera operators, it's all about ISO. But ISO on its own doesn't really tell us anything useful. Take a Sony FS7 or F5. In standard gamma at 0dB the ISO rating is 800 ISO. But when you switch to S-Log it becomes 2000 ISO (although you are still at 0dB). Have you ever noticed that the image doesn't get brighter even though you are increasing the ISO? The ISO is increased because what actually happens is that you gain the ability to record a little over 1 stop further into the shadows, as you are now using more of the sensor's low range (which is normally well below the black level chosen for 709), with the side effect of also seeing a little more than twice as much noise (1 stop = 6dB = double). The camera isn't actually becoming any noisier, but because you're using a lower sensor range you will see more noise in the shadows, noise that with normal gammas goes unseen. It's really important that you understand this as it explains why S-Log looks very noisy in the deepest shadows compared to standard gammas.
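
The arithmetic behind that claim is straightforward; a quick sketch (800 and 2000 are the FS7/F5 figures quoted above, the conversions are standard):

    import math

    def stops_between(iso_a, iso_b):
        # Each doubling of ISO is one stop
        return math.log2(iso_b / iso_a)

    def stops_to_db(stops):
        # One stop = a doubling of signal = 20*log10(2) ≈ 6.02 dB
        return stops * 20 * math.log10(2)

    extra = stops_between(800, 2000)
    print(f"{extra:.2f} stops further into the shadows")     # ~1.32 stops
    print(f"{stops_to_db(extra):.1f} dB deeper into the noise")  # ~8 dB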

Native sensitivity… again this is open to a bit of wriggle room from the camera manufacturer. With a camera shooting log, generally it is a case of mapping the entire sensor capture range from black to white to the zero to 100% recording range. Normally this is done using as little gain as possible, as gain adds noise. But as noise reduction processes get better, including on-sensor noise reduction, camera manufacturers have some room to move the mapping of the sensor to the recording up and down a bit. Sadly for us, high ISOs sell cameras. So camera manufacturers like to have cameras with high ISOs because people look at the ISO rating but ignore the signal to noise figure. The end result is cameras with high ISOs (because it sounds cool) but with less than optimum signal to noise ratios. It would probably be better for all of us if we started paying much more attention to the signal to noise ratios of cameras, not just the ISO. That might help prevent manufacturers from bringing out cameras with ridiculously high native ISOs that are noisy and frankly far from what we need, which is a good low noise base sensitivity.

The next issue is that people appear to expect to be able to magically pull something out of nothing. If you have areas of deep shadow in your image you can't magically pull out details and textures from those areas without significantly increasing the noise in those parts of the picture. You can't do it and you shouldn't be trying to do it. If you have an 8 bit camera the noise in the shadows will be really coarse; try to stretch those levels, even by a tiny bit, and it's going to get ugly fast (the same with 12 bit linear raw too). What's the answer…. LIGHT IT PROPERLY OR EXPOSE IT BRIGHTER.

We appear to have lost the ability to light or expose properly. If you want detail in your shadows either expose them brighter or throw some light in there, then take the levels down in post. Remember it's all about contrast ratios. Faces are normally 1.5 stops above middle grey and 3.5 stops above our dark shadow range. So if you want a lot of texture in your deep shadows expose the entire scene brighter, not just the foreground but the background and shadows too. If you expose faces at 4.5 stops above black (1 stop brighter than usual), mid grey will still be 1.5 stops below those skin tones and your shadows will still be 3.5 stops below your faces. The contrast ratio remains the same if you increase the overall light level, so now everything will be 1 stop brighter. Then take the levels down by 1 stop in post and bingo, your noise levels are cut in half and your shadows look so much better and might actually now contain some useable picture information.
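
Here's a toy simulation of that trick (my own sketch: a constant noise floor, the light doubled at the sensor, then pulled back down in post):

    import random
    import statistics

    random.seed(1)
    NOISE = 5.0   # constant sensor noise floor, in arbitrary code values

    def capture(light):
        # Toy sensor: output = light + fixed read noise (shot noise ignored)
        return light + random.gauss(0.0, NOISE)

    normal = [capture(100) for _ in range(10000)]
    brighter = [capture(200) / 2 for _ in range(10000)]  # +1 stop, then -1 stop in post

    for name, shots in (("normal exposure", normal), ("+1 stop, pulled down", brighter)):
        snr = 100 / statistics.stdev(shots)
        print(f"{name}: SNR ≈ {snr:.0f}:1")
    # Doubling the light and halving in post halves the visible noise (+6dB SNR)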

To follow on from this I recommend reading this: https://www.xdcam-user.com/2015/03/what-is-exposure/

Deeper Understanding Of Log Gamma. Experiments with a Waveform Display.

I started writing this as an explanation of why I often choose not to use log for low light. But instead it’s ended up as an experiment you can try for yourself if you have a waveform monitor that will hopefully allow you to better understand the differences between log and standard gamma. Get a waveform display hooked up to your log camera and try this for yourself.

S-Log and other log gammas are wonderful things, but they are not the be-all and end-all of video gammas. They are designed for one specific purpose: to give cameras using conventional YCbCr or RGB recording methods the ability to record the greatest possible dynamic range with a limited amount of data. As a result there are some compromises made when using log. Unlike conventional gammas with a knee, or gammas such as hypergammas and cinegammas, log gammas do not normally have any highlight roll off, but they do have a shadow roll off. Once you get above middle grey, log gammas normally record every stop with almost exactly the same amount of data, right up to the clipping point where they hard clip. Below middle grey there is a roll off of data per stop as you go down towards the black clip point (as there is naturally less information in the shadows this is expected). So in many respects log gammas are almost the reverse of standard gammas. The highlight roll off that you may believe you see with log is often just the natural way that real world highlights roll off anyway; after all there isn't an infinite amount of light floating around (thank goodness). Or that apparent roll off is simply a display or LUT limitation.

An experiment for you to try.

Grey scale test chart. Click on the chart to go to larger versions that you can download. Display it full screen on your computer and use it as a test chart. You may need to de-focus the camera slightly to avoid aliasing from the screen's pixels.

If you have a waveform display and a grey scale chart you can actually see this behaviour. If you don't have a chart, display the grey scale posted here full screen on your computer monitor. Start with a conventional gamma, preferably REC-709. Point the camera at the chart and gradually open up the aperture. With normal gammas, as you open the aperture you will see the steps between each grey bar spread apart until you reach the knee point, typically at 90% (assuming the knee is ON, which is the default for most cameras). Once you hit the knee all those steps rapidly squash back together again.

What you are seeing on the waveform is conventional gamma behaviour, where each stop you go up in exposure is recorded with a progressively larger chunk of the recording range, thus capturing the real world very accurately (although only within a limited range). Once you hit the knee everything is compressed together to increase the dynamic range using only a very small recording range, leaving the shadows and all important mid range well recorded. It's this highlight compression that gives video the “video look”: washed out highlights with no contrast that look electronic.
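
If you can't get to a camera you can model the experiment; here's a sketch using the standard Rec709 OETF plus a simplified knee at 90% (the knee point and slope are made-up illustrative values, real knee circuits vary):

    def rec709_oetf(x):
        # Rec709 OETF: scene reflectance -> normalised signal level
        return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

    def camera_709(x, knee_point=0.90, knee_slope=0.15):
        # Toy camera gamma: Rec709 with a crude knee compressing everything above 90%
        v = rec709_oetf(x)
        return v if v <= knee_point else knee_point + (v - knee_point) * knee_slope

    prev = None
    for stop in range(-3, 5):
        level = camera_709(0.18 * 2 ** stop) * 100
        step = "" if prev is None else f"  step: {level - prev:+5.1f}"
        print(f"middle grey {stop:+d} stops: {level:5.1f}%{step}")
        prev = level
    # The steps widen as you open up, then squash together once past the knee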

If you repeat the same exercise with a hypergamma or cinegamma, once again in the lower and mid range you will see the steps stretch apart on the waveform as you increase the exposure. But once you get to about 65-70% they stop stretching apart and start to squeeze together. This is the highlight roll off of the hypergamma/cinegamma doing its thing, once again compressing the highlights to get a greater dynamic range, but doing it in a progressive, gradual manner that tends to look much nicer than the hard knee. Even though this does look better than 709 + knee in the vast majority of cases, we are still compressing the highlights, still throwing away a lot of data or highlight picture information that can never be recovered in post production no matter what you do.

Conventional video = Protect Your Highlights.

So in the conventional video world we are taught as cameramen to “protect the highlights”. Never overexpose because it looks bad and even grading won’t help a lot. If anything we will often err on the side of caution and expose a little low to avoid highlight issues. If you are using a Hypergamma or Cinegamma you really need to be careful with skin tones to keep them below that 65-70% beginning of the highlight roll off.

Now repeat the same experiment with Slog2 or S-log3. S-log2 is best for the experiment as it shows what is going on most clearly. Before you do it though mark middle grey on your waveform display with a piece of tape or similar. Middle grey for S-log2 is 32% (41% for S-log3).

Now open up the aperture and watch those steps between the grey scale bars. Below middle grey, as with the standard gammas you will see the gap between each bar open up. But take careful note of what happens above middle grey. Once you get above middle grey and all the way to the clip point the gap between each step remains the same.

So what’s happening now?

Well, this is the S-Log curve recording each stop above middle grey with the same amount of data. In addition there is NO highlight roll off. Even the very brightest step just below clipping will be the same size as the one just above middle grey. In practice what this means is that it doesn't make a great deal of difference where you expose, for example, skin tones, provided they are above middle grey and below clipping. After grading they will look more or less the same. In addition it means that the very brightest stop contains a lot of great, useable picture information. Compare that to Rec-709 or the Cinegammas/Hypergammas where the brightest stops are all squashed together and contain almost no contrast or picture information.
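
You can verify this numerically too. Here's a sketch using Sony's published S-Log3 formula (S-Log2 shows the same constant spacing above middle grey; its formula is just messier):

    import math

    def slog3_cv(x):
        # Sony S-Log3 OETF (published formula): reflectance -> 10 bit code value
        if x >= 0.01125:
            return 420.0 + math.log10((x + 0.01) / 0.19) * 261.5
        return x * (171.2102946929 - 95.0) / 0.01125 + 95.0

    prev = None
    for stop in range(-5, 6):
        cv = slog3_cv(0.18 * 2 ** stop)
        gap = "" if prev is None else f"  gap: {cv - prev:5.1f} CV"
        print(f"middle grey {stop:+d} stops: CV {cv:6.1f}{gap}")
        prev = cv
    # Above middle grey every stop gets ~79 code values; below, the gaps shrink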

Now add in to the equation what is going on in the shadows. Log has less data in the shadows than standard gammas because you are recording a greater overall dynamic range, so each stop is recorded with overall less data.

Standard Gammas = More shadow data per stop, much less highlight data = Need to protect highlights.

Log= Less shadow data per stop, much more highlight data = Need to protect shadows.

Hopefully now you can see that with S-Log we need to flip the way we shoot from protecting highlights to protecting shadows. When shooting with conventional gammas most people expose so the mid range is OK, then take a look at the highlights to make sure they are not too bright, and largely ignore what's going on in the shadows. With log you need to do the opposite. Expose the mid range and then check the shadows to make sure they are not too dark. You can ignore the highlights.

Yes, that's right, when shooting log: IGNORE the highlights!

Cinegamma highlight roll off. Note how the tree branches in the highlights look strangled and ugly due to the lack of highlight data, hence “protect your highlights”.
Graded S-Log2. Note how nice the same tree branches look because there is a lot of data in the highlights, but the shadows are a little crunchy. Hence: protect your shadows.

For a start, your monitor or viewfinder isn't going to be able to reproduce the highlights as bright as they really are, so typically they will look a lot more over exposed than they actually are. In addition there is a ton of data in those highlights that you will be able to extract in the grade. But most importantly, if you do under expose, your mid range will suffer, it will get noisy, and your shadows will look terrible because there will be no data to work with.

When I shoot with log I always over expose by at least 1 stop above the manufacturer's recommended levels. If you are using S-Log2 or S-Log3 that can be achieved by setting zebras to 70% and then checking that you are JUST starting to see zebras on something white in your shot such as a white shirt or piece of paper. If your camera has CineEI, use an EI that is half of the camera's native ISO (I use 1000 or 800 EI for my FS7 or F5).

I hope these experiments with a grey scale and waveform help you understand what is going on with your gamma curves. One thing I will add is that while controlled over exposure is beneficial, it can lead to some issues with grading. That's because most LUTs are designed for “correct” exposure, so over exposed footage viewed through them will typically look over exposed. Another issue is that if you simply reduce the gain level in post to compensate, the graded footage looks flat and washed out. This is because you are applying a linear correction to log footage. For a long time I struggled to get pleasing results from over exposed log footage. The secret is to either use LUTs that are offset to compensate for the over exposure or to de-log the footage prior to grading using an S-curve. I'll cover both of these in a later article.

Chart showing S-Log2 and S-Log3 plotted against f-stops and code values.


What about shooting in low light?

OK, now let's imagine we are shooting a dark or low light scene. It's dark enough that even if we open the aperture all the way, the brightest parts of the scene (ignoring things like street lights) do not reach clipping (92% with S-Log3 or 109% with S-Log2). This means two things: 1: the scene has a dynamic range of less than 14 stops, and 2: we are not utilising all of the recording data available to us. We are wasting data.

Log exposed so that the scene fills the entire curve puts around 100 code values (or luma shades) per stop above middle grey for S-log2 and 75 code values for S-Log3 with a 10 bit codec. If your codec is only 8 bit then that becomes 25 for S-log2 and 19 code values for S-Log3. And that’s ONLY if you are recording a signal that fills the full range from black clip to white clip.

3 stops below middle grey there is very little data: about thirty 10 bit code values per stop for S-Log2 and about 45 for S-Log3. Once again if the codec is 8 bit you have much less, about 7 for S-Log2 and about 11 for S-Log3. As a result the darker parts of your recorded scene will be recorded with very little data and very few shades. This impacts how much you can grade the image in post, as there is very little picture information in the darker parts of the shot, and noise tends to look quite coarse as it is only recorded with a limited number of steps or levels (this is particularly true of 8 bit codecs and an area where 8 bit recordings can be problematic).

So what happens if we use a standard gamma curve?

Let's say we now shoot the same scene with a standard gamma curve, perhaps REC-709. One point to note with Sony cameras like the FS5, FS7, F5/F55 etc is that the standard gammas normally have a native ISO one to two stops lower than S-Log. That's because the standard gammas ignore the darkest couple of stops that are recorded when in log. After all, there is very little really useable picture information down there in all the noise.

Now our limited dynamic range scene will be filling much more of our recording range. So straight away we have more data per stop because we are utilising a bigger portion of the recording range. In addition, because our recorded levels will sit higher in the recording range there will be more data per stop, typically double, especially in the darker parts of the recorded image. This means that any noise is recorded more accurately, which results in smoother looking noise. It also means there is more data available for any post production manipulation.

But what about those dark scenes with problem highlights such as street lights?

This is an area where Cinegammas or Hypergammas are very useful. Problem highlights like street lights normally make up only a very small part of your overall scene. So unless you are shooting for HDR display it's a huge waste to use S-Log just to bring some highlights into range, as you make big compromises to the rest of the image, and you'll never be able to show them accurately in the finished image anyway as they will exceed the dynamic range of the TV display. Instead, for these situations a Hypergamma or Cinegamma works well, because below about 70% Hypergammas and Cinegammas are very similar to Rec-709, so you will have lots of data in the shadows and mid range where you really need it. The highlights will be up in the highlight roll off area where the data levels, or number of recorded shades, are rolled off. So the highlights still get recorded, perhaps without clipping, but you are only giving away a small amount of data to do this. The highlights possibly won't look quite as nice as if recorded with log, but they are typically only a small part of the scene, and the rest of the scene, especially the shadows and mid tones, will end up looking much better as the noise will be smoother and there will be more data in that all important mid-range.


Treat it like a film camera!

If you have a modern camera that can record log or raw and has 13 stops or more of dynamic range you need to stop thinking “video” and think “film”.

A big mistake most traditional video camera operators make with these big DR cameras is to treat them as they would a typical limited dynamic range video camera and constantly worry and obsess about protecting the highlights. Why do we do this? Well, probably because that's what you do with cameras with a very limited range and that's probably what you have had drummed into you for years. But with modern large sensor cameras everything changes. When you get to a 14 stop camera, even if you choose to shoot 2 stops over exposed (perhaps by using 500 EI on an FS7 or F5) you still have as much or more over exposure range as a conventional video camera, and the highlight range that you do have is not subject to a knee or other similar acute highlight compression. So any highlights will contain a ton of high quality, usable picture information. By shooting over exposed by a controlled amount (1 to 2 stops), perhaps by using a low EI, you gain very big improvements in the signal to noise ratio and get better saturated colors (opening the aperture lets more light onto the sensor, so your colors will be better recorded). This allows you to pull a lot more information out of the data-thin shadows and mid range. Most cameras that use log have very little data in the shadows. With a 10 bit codec, cameras that use variations of the Cineon log curve (Arri LogC, Sony S-Log3, Panasonic V-Log) only have about 80 luma shades covering the first 4 stops of exposure in total. Above the 4th stop the amount of data per stop increases rapidly, so a little bit of deliberate over exposure really helps lift your darkest shadows up out of the noise and mire. Up in the highlights each stop has exactly the same amount of data, so over exposing a bit doesn't compress the highlights as it would with a conventional camera, and a bit of mild over exposure is normally not noticeable.

Really, with a 14 stop log camera you want to treat it like film, not video. Just like film, a 14 stop log camera will almost always benefit from a controlled amount of over exposure; highlights will rarely suffer or look bad just because you're one stop hot, but the shadows and midtones will be significantly improved. And just like film, if you under expose log you will take a big hit. You will lose a lot of shadow information very quickly, have less color, it will be noisy, and the highlight benefit will be marginal.

More info on CMOS sensor grid artefacts.

Cameras with Bayer CMOS sensors can in certain circumstances suffer from an image artefact that appears as a grid pattern across the image. The actual artefact is normally the result of red and blue pixels that are brighter than they should be, which gives a magenta type flare effect. However, sometimes re-scaling an image containing this artefact can result in what looks like a grid type pattern, as some pixels may be dropped or added together during the re-scaling, and this makes the artefact show up as a grid superimposed over the image.

Grid type artefact.

The cause of this artefact is most likely off-axis light somehow falling on the sensor. This off axis light could come from an internal reflection within the camera or the lens. It's known that with the F5/F55 and FS7 cameras a very strong light source that is just out of shot, just above or below the image frame, can in some circumstances with some lenses result in this artefact. But this problem can occur with almost any CMOS Bayer camera; it's not just a Sony problem.

The cure is actually very simple, use a flag or lens hood to prevent off axis light from entering the lens. This is best practice anyway.

So what's going on, why does it happen?

When white light falls on a Bayer sensor it passes through color filters before hitting the pixels that measure the light level. The color filters are slightly above the pixels. For white light the amount of light that passes through each color filter is different. I don't know the actual ratios of the different colors, and they will vary from sensor to sensor, but green is the predominant color with red and blue being considerably lower. I've used some made up values to illustrate what is going on; these are not the true values, but they should illustrate the point.

In the illustration above, when the blue pixel sees 10%, green sees 70% and red 20%, after processing the output would be white. If the light falling on the sensor is on axis, i.e. coming directly, straight through the lens, then everything is fine.

But if somehow the light falls on the sensor off axis at an oblique angle then it is possible that the light that passes through the blue filter may fall on the green pixel, or the light from the green filter may fall on the red pixel, etc. So instead of nice white light the sensor pixels would think they are seeing light with an unusually high red and blue component. If you viewed the image pixel for pixel it would have very bright red pixels, bright blue pixels and dark green pixels. When combined together, instead of white you would get pink or blue. This is the kind of pattern that can result in the grid type artefact seen on many CMOS Bayer sensors when there are problems with off axis light.

This is a very rare problem and only occurs in certain circumstances. But when it does occur it can spoil an otherwise good shot. It happens more with full frame lenses than with lenses designed for Super 35mm or APS-C, and wide angles tend to be the biggest offenders as their wide Field of View (FoV) allows light to enter the optical path at acute angles. It's a particular problem with DSLR lenses designed for stills sensors that are much taller than the widescreen formats we shoot video in today. All that extra light above and below the desired widescreen frame, if it isn't prevented from entering the lens, has to go somewhere. Unfortunately once it enters the camera's optical path it can be reflected off things like the very edge of the optical low pass filter, the ND filters or the face of the sensor itself.

The cure is very simple and should be standard practice anyway. Use a sun shade, matte box or other flag to prevent light from out of the frame entering the lens. This will prevent this problem from happening and it will also reduce flare and maximise contrast. Those expensive matte boxes that we all like to dress up our cameras with really can help when used and adjusted correctly.

I have found that adding a simple mask in front of the lens or using a matte box such as any of the Vocas matte boxes with eyebrows will eliminate the issue. Many matte boxes can be fitted with a 16:9 or 2.40:1 mask (also known as a Matte, hence the name Matte Box) ahead of the filter trays. It's one of the key reasons why Matte Boxes were developed.

Note the clamp inside the hood for holding a mask in front of the filters on this Vocas MB216 Matte Box. Note also how the Matte Box's aperture is 16:9 rather than square to help cut out-of-frame light.
Arri Matte Box with Matte selection.

You should also try to make sure the size of the matte box you use is appropriate to the FOV of the lenses that you are using. An excessively large Matte Box isn’t going to cut as much light as a correctly sized one.  I made a number of screw on masks for my lenses by taking a clear glass or UV filter and adding a couple of strips of black electrical tape to the rear of the filter to produce a mask for the top and bottom of the lens. With zoom lenses if you make this mask such that it can’t be seen in the shot at the wide end the mask is effective throughout the entire zoom range.


Many cinema lenses include a mask for 17:9 or a similar wide screen aperture inside the lens.


Tales of exposure from the grading suite.

I had the pleasure of listening to Pablo Garcia Soriano, the resident DIT/colorist at the Sony Digital Motion Picture Center at Pinewood Studios, talk about grading modern digital cinema cameras during the WTS event last week.

The thrust of his talk was about exposure and how getting the exposure right during the shoot makes a huge difference in how much you can grade the footage in post. His main observation was that many people are under exposing the camera and this leads to excessive noise which makes the pictures hard to grade.

There isn't really any way to reduce the noise in a video camera, because nothing you normally do can change the sensitivity of the sensor or the amount of noise it produces. Sure, noise reduction can mask noise, but it doesn't really get rid of it and it often introduces other artefacts. So the only way to change the all important signal to noise ratio, if you can't change the noise, is to change the signal.

In a video camera that means opening the aperture and letting in more light. More light means a bigger video signal and as the noise remains more or less constant that means a better signal to noise ratio.

If you are shooting log or raw then you do have a fair amount of leeway with your exposure. You can't go completely crazy with log, but you can often over expose by a stop or two with no major issues. You know, I really don't like using the term “over-expose” in these situations. But that's what you might want to do: let in up to 2 stops more light than you would normally.

In photography, photographers shooting raw have long used a technique called exposing to the right (ETTR). The term comes from the use of a histogram to gauge exposure, exposing so that the signal goes as far to the right on the histogram as possible (the right being the “bright” side of the scale). If you really wanted the best possible signal to noise ratio you could use this method for video too. But ETTR means setting your exposure based on your brightest highlights, and as highlights will be different from shot to shot, the mid range of your shots will go up and down in exposure depending on how bright the highlights are. This is a nightmare for the colorist, as it's the mid-tones and mid range that are the most important; this is what the viewer notices more than anything else. If these are all over the place the colorist has to work very hard to normalise the levels and it can lead to a lot of variability in the footage. So while ETTR might be the best way to get the very best signal to noise ratio (SNR), you still need to be consistent from shot to shot. So really you need to expose for mid range consistency, but shift that mid range a little brighter to get a better SNR.

Pablo told his audience that just about any modern digital cinema camera will happily tolerate at least 3/4 of a stop of over exposure and he would always prefer footage with very slightly clipped highlights rather than deep shadows lost in the noise. He showed a lovely example of a dark red car that was “correctly” exposed. The deep red body panels of the car were full of noise and this made grading the shot really tough even though it had been exposed by the book.

When I shoot with my F5 or FS7 I always rate them a stop slower than the native ISO of 2000. So I set my EI to 1000 or even 800 and this gives me great results. With the F55 I rate it at 800 or even 640 EI, and the F65 at 400 EI.

If you ever get offered a chance to see one of Pablo’s demos at the DMPCE go and have a listen. He’s very good.