
Choosing the right gamma curve.

One of the most common questions I get asked is “which gamma curve should I use?”.

Well, it’s not an easy one to answer because it depends on many things. There is no one-size-fits-all gamma curve. Different gamma curves offer different contrast and dynamic ranges.

So why not just use the gamma curve with the greatest dynamic range, maybe log? Log and S-Log are also gamma curves, but even if you have Log or S-Log it’s not always going to be the best gamma to use. You see, the problem is this: you have a limited-size recording bucket into which you must fit all your data. Your data bucket, codec or recording medium will also affect your gamma choice.

If you’re shooting and recording with an 8 bit camera, anything that uses AVCHD or MPEG-2 (including XDCAM), then you have about 235 levels of data with which to record your signal. A 10 bit camera or 10 bit external recorder does a bit better with around 940 levels, but even so, it’s a limited-size data bucket. The more dynamic range you try to record, the less data you will be using to record each stop. Let’s take an 8 bit camera for example: try to record 8 stops and that’s about 30 levels per stop. Try to extend that dynamic range out to 11 stops and now you only have about 21 levels per stop. It’s not quite as simple as this, as the more advanced gamma curves like hypergammas, cinegammas and S-Log all allocate more data to the mid range and less to highlights, but the greater the dynamic range you try to capture, the less recorded information there will be for each stop.
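
As a rough back-of-the-envelope check of those figures, here is a quick sketch that simply divides the available levels evenly across the stops (real gamma curves deliberately don’t do this, so treat the numbers as ballpark only):

```python
# Approximate recording levels divided evenly across the captured stops.
# Real gamma curves weight the mid range more heavily, so these are
# ballpark figures only.
LEVELS_8BIT = 235  # roughly the usable 8 bit levels mentioned above

for stops in (8, 11):
    print(f"8 bit, {stops} stops: ~{LEVELS_8BIT / stops:.0f} levels per stop")
```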

In a perfect world you would choose the gamma you use to match each scene you shoot. If you’re shooting in a studio where you can control the lighting, then it makes a lot of sense to use a standard gamma (no knee, or knee off) with a range of up to 7 stops and then light your scene to suit. That way you are maximising the data per stop. Not only will this look good straight out of the camera, but it will also grade well provided you’re not over exposed.

However, the real world is not always contained in a 7 stop range, so you often need to use a gamma with a greater dynamic range. If you’re going direct to air or will not be grading, then the first consideration should be a standard gamma (Rec709 for HD) with a knee. The knee adds compression to just the highlights and extends the over-exposure range by up to 2 or 3 stops, depending on the dynamic range of the camera. The problem with the knee is that because it’s either on or off, compressed or not compressed, it can look quite electronic, and it’s one of the dead giveaways of video over film.

If you don’t like the look of the knee yet still need a greater dynamic range, then there are the various extended range gammas like Cinegamma, Hypergamma or Cinestyle. These extend the dynamic range by compressing highlights, but unlike the knee, the compression starts gradually and gets progressively greater. This tends to look more film-like than the on/off knee as it rolls off highlights much more gently. But to get this gentle roll-off the compression starts lower in the exposure range, so you have to be very careful not to over expose your mid range, as this can push faces, skin tones and the like into the compressed part of the curve, and things won’t look good. Another consideration is that as you are now moving away from the gamma used for display in most TVs and monitors, the pictures will be a little flat, so a slight grade often helps with these extended gammas.
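
To make the difference a little more concrete, here is a toy comparison of the hard knee described above against a gradual highlight roll-off. The knee point, slope and curve shape are made-up values purely to show the idea, not any camera’s actual settings:

```python
# Toy transfer curves: scene exposure in, recorded level out (both roughly 0-1).
def hard_knee(x, knee_point=0.8, slope=0.15):
    # Linear up to the knee point, then abruptly compressed above it.
    if x <= knee_point:
        return x
    return knee_point + (x - knee_point) * slope

def gradual_rolloff(x, start=0.6):
    # Compression begins lower down and increases progressively,
    # giving a smoother, more film-like shoulder.
    if x <= start:
        return x
    t = x - start
    return start + t / (1.0 + 2.5 * t)

for exposure in (0.5, 0.7, 0.9, 1.2):
    print(f"in {exposure:.1f} -> knee {hard_knee(exposure):.2f}, "
          f"roll-off {gradual_rolloff(exposure):.2f}")
```

Notice that the roll-off starts bending the curve well below where the knee kicks in, which is exactly why mid range exposure becomes more critical with these gammas.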

Finally we come to log gammas like S-Log, C-Log etc. These are a long way from display gamma, so they will need to be graded to look right. In addition they apply a lot of compression (log compression) to the image, so exposure becomes super critical. Normally you’ll find the specified recording levels for middle grey and white to be much lower with log gammas than with conventional gammas. White with S-Log, for example, should only be exposed at 68%. The reason for this is the extreme amount of mid-to-highlight compression, so your mid range needs to be recorded lower to keep it out of the heavily compressed part of the log gamma curve. Skin tones with log are often in the 40–50% range, compared to the 60–70% range commonly used with standard gammas. Log curves normally provide the very best dynamic range (apart from raw), but they will need grading, and ideally you want to grade log footage in a dedicated grading package that supports log corrections. If you grade log in your edit suite using linear (normal gamma) effects, your end results won’t be as good as they could be. The other thing with log is that you are now recording anything up to 13 or 14 stops of dynamic range. With an 8 bit codec that’s only 17–18 levels per stop, which really isn’t a lot, so for log you really want to be recording with a very high quality 10 bit codec and possibly an external recorder. Remember, with a standard gamma you have over 30 levels per stop; now we’re looking at almost half that with log!
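
Repeating the earlier levels-per-stop sketch for a 14 stop log curve shows why a 10 bit recording makes such a difference (again this just splits the levels evenly across the stops as a simplification):

```python
# Rough comparison of 8 bit and 10 bit recording for ~14 stops of log
# against a 7 stop standard gamma, using the approximate level counts
# from earlier. Ballpark figures only.
for label, levels in (("8 bit", 235), ("10 bit", 940)):
    for name, stops in (("14-stop log", 14), ("7-stop standard gamma", 7)):
        print(f"{label}, {name}: ~{levels / stops:.0f} levels per stop")
```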

Shooting flat: There is a lot of talk about shooting flat. Some of this comes from people who have seen high dynamic range images from cameras with S-Log or similar, which do look very flat. You see, the bigger the captured dynamic range, the flatter the images will look.

Consider this: on a TV, with a camera with a 6 stop range, the brightest thing the camera can capture will appear as white and the darkest as black, with the whole capture range spread between the two. Now shoot the same scene with a camera with a 12 stop range and show it on the same TV. Again the brightest is white and black is black, but the 6 stops that the first camera was able to capture are now shown using only half of the available brightness range of the TV, because the new camera is capturing 12 stops in total. So the first 6 stops now have only half the maximum display contrast, and the pictures look flatter.

If a camera truly has greater dynamic range then in general you will get a flatter looking image, but it’s also possible to get a flat looking picture by raising the black level or reducing the white level. In this case the picture looks flat, but in reality it has no more dynamic range than the original. Be very careful of modified gammas said to give a flat look and greater dynamic range from cameras that otherwise don’t have great DR. Often these flat gammas don’t increase the true dynamic range; they just make a flat picture with raised blacks, which results in less data being assigned to the mid range and, as a result, less pleasing finished images.
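
A crude way to see the numbers behind this, assuming the display’s brightness range is simply shared out evenly between the captured stops:

```python
# How much of the display range the same 6-stop scene occupies,
# shown for a 6-stop camera and a 12-stop camera.
for camera_stops in (6, 12):
    share = 6 / camera_stops
    print(f"{camera_stops}-stop camera: the 6-stop scene fills "
          f"{share:.0%} of the display range")
```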

So the key points to consider are:

Where you can control your lighting, consider using standard gamma.

The bigger the dynamic range you try to capture, the less information per stop you will be recording.

The further you deviate from standard gamma, the more likely the need to grade the footage.

The bigger the dynamic range and the more compressed the gamma curve, the more critical accurate mid range exposure becomes.

Flat isn’t always better.


Sensitivity and sensor size – governed by the laws of physics.

Sensor technology has not really changed for quite a few years. The materials used in sensor pixels and photosites to convert photons of light into electrons are pretty efficient. Most manufacturers are using the same materials and similar tricks, such as micro lenses, to maximise the sensor’s performance. As a result, low light performance largely comes down to the laws of physics and the size of the pixels on the sensor, rather than who makes it. If you have cameras with the same number of pixels per sensor chip but different sized sensors, the camera with the larger sensor will almost always be more sensitive, and this is not something that’s likely to change in the near future.
Both on the sensor and after the sensor, camera manufacturers use various noise reduction methods to minimise noise. Noise reduction almost always has a negative effect on image quality: picture smear, posterisation and a smoothed, plastic-like look can all be symptoms of excessive noise reduction. There are probably more differences between the way different manufacturers implement noise reduction than there are between the sensors themselves.
The less noise there is from the sensor, the less aggressive you need to be with the noise reduction, and this is where you really start to see differences in camera performance. At low gain levels there may be little difference between a 1/3″ and a 1/2″ camera, as the NR circuits cope fairly well in both cases. But when you start boosting the sensitivity by adding gain, the NR on the small sensor camera has to work much harder than on the larger sensor camera. This results in either more undesirable image artefacts or more visible noise on the smaller sensor camera. So when faced with challenging low light situations, bigger will almost always be better when it comes to sensors. In addition, dynamic range is linked to noise, as picture noise limits how far the camera can see into the shadows, so generally speaking a bigger sensor will have better dynamic range. Overall, real camera sensitivity has not changed greatly in recent years; cameras with a given sensor size made today are not really any more sensitive than similar ones made five years ago. Of course the current trend for large sensor cameras means that many more cameras now have bigger sensors with bigger pixels, and these are more sensitive than smaller sensors, but like for like there has been little change.

What is PsF, or why does my camera output interlace when it’s set to progressive?

This one keeps coming around again and again and it’s not well understood by many.

When the standards for HDSDI and connecting HD devices were originally set down, almost everyone was using interlace. The only real exception was people producing movies and films in 24p. As a result the early standards for HDSDI did not include a specification for 25 or 30 frames per second progressive video. However, over time 25p and 30p became popular shooting formats, so a way was needed to send these progressive signals over HDSDI.

The solution was really rather simple: split the progressive frame into odd and even lines, send the odd numbered lines in what would be the upper field of an interlace stream, and then send the even numbered lines in what would be the lower field. So in effect the progressive frame gets split into two fields, a bit like an interlaced video stream, but there is no time difference (temporal difference) between when the odd and even lines were captured.
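
As a toy illustration of the idea (frame size and variable names here are just for demonstration, not any particular device’s implementation):

```python
import numpy as np

# One progressive frame, 1080 lines of 1920 samples.
frame = np.arange(1080 * 1920).reshape(1080, 1920)

# Split it into two "fields" for transport over HDSDI.
upper_field = frame[0::2, :]  # odd numbered lines (1st, 3rd, 5th, ...)
lower_field = frame[1::2, :]  # even numbered lines (2nd, 4th, 6th, ...)

# On the receiving end the two fields are woven back together. Because both
# came from the same instant in time, the rebuilt frame is identical to the
# original progressive frame.
rebuilt = np.empty_like(frame)
rebuilt[0::2, :] = upper_field
rebuilt[1::2, :] = lower_field
assert np.array_equal(rebuilt, frame)
```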

This system has the added benefit that even if the monitor at the end of the HDSDI chain is interlace only, it will still display the progressive material more or less correctly.

But here’s the catch. Because a progressive frame split into odd and even lines and stuffed into an interlace signal looks so much like an interlace signal, many devices attached to the PsF source cannot distinguish PsF from real interlace. So more often than not the recorder/monitor/edit system will report that what it is receiving is interlace, even if it is progressive PsF. In most cases this doesn’t cause any problems, as what’s contained within the stream has no temporal difference between the odd and even lines. The only time it can cause problems is when you apply slow motion effects, scaling effects or standards conversion processes to the footage, as fields/lines from adjacent frames may get interleaved in the wrong order. Cases of this are, however, quite rare.

Some external recorders offer the option to force them to mark any files recorded as PsF instead of interlace. If you are sure what you are sending to the recorder is progressive, then this is a good idea. However, you do need to be careful, because what will screw you up is marking real interlace footage as PsF by mistake. If you do this, the interlaced frames will be treated as progressive. If there is any motion in the frame then the two true interlace fields will contain objects in different positions; they will have temporal differences. Combine those two temporally different fields into a progressive frame and you will see an artifact that looks like a comb has been run through the frame horizontally. It’s not pretty and it can be hard to fix.

So, if you are shooting progressive and yet your external recorder says it’s seeing interlace from your HDSDI, don’t panic. This is quite normal.

If you are importing footage that is flagged as interlace but you know is really progressive PsF, most edit packages will let you select the clips and use “interpret footage” or a similar function to change the clip headers from interlace to progressive.

Raw is raw, but not all raw is created equal.

I was looking at some test footage from several raw video cameras the other day and it became very obvious that some of the cameras were better than others, and one or two had real problems with skin tones. You would think that once you bypass all the camera’s internal image processing you should be able to get whatever colorimetry you want from a raw camera. After all, you’re dealing with raw camera data. With traditional video cameras a lot of the “look” is created by the camera’s color matrix, gamma curves and other internal processing, but a raw camera bypasses all of this, outputting the raw sensor data. With an almost infinite amount of adjustment available in post production, why is it that not all raw cameras are created equal?

For a start there are differences in sensor sensitivity and noise. This will depend on the size of the sensor, the number of pixels and the effectiveness of the on-sensor noise reduction. Many new sensors employ noise reduction at both analog and digital levels and this can be very effective at producing a cleaner image. So, clearly there are differences in the underlying electronics of different sensors but in addition there is also the quality of the color filters applied over the top of the pixels.

On a single chip camera a color filter array (CFA) is applied to the surface of the sensor. The properties of this filter array are crucial to the performance of the camera. If the filters are not selective enough there will be leakage of red light onto the green sensor pixels, green into blue and so on. Designing and manufacturing such a microscopically small filter array is not easy. The filters need to be effective at blocking undesired wavelengths while still passing enough light so as not to compromise the sensitivity of the sensor. The dyes and materials used must not age or fade and must be resistant to the heat generated by the sensor. One of the reasons why Sony’s new PMW-F55 camera is so much more expensive than the F5 is that the F55’s sensor has a higher quality color filter array that gives a larger color gamut (range) than the F5’s more conventional filter array.
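
To illustrate what that leakage does, here is a hypothetical crosstalk matrix; the numbers are invented purely to show the effect, not measured values from any real sensor:

```python
import numpy as np

# Each row says how much of the scene's R, G and B light ends up in that
# sensor channel. "ideal" has perfect separation, "leaky" lets channels
# bleed into one another.
ideal = np.eye(3)
leaky = np.array([[0.80, 0.15, 0.05],   # red channel picks up some green/blue
                  [0.10, 0.80, 0.10],   # green channel picks up red and blue
                  [0.05, 0.15, 0.80]])  # blue channel picks up red and green

saturated_red = np.array([1.0, 0.1, 0.1])  # a strongly saturated scene colour

print("ideal CFA:", ideal @ saturated_red)  # colour separation survives intact
print("leaky CFA:", leaky @ saturated_red)  # channels pulled towards each other
```

The leaky result has its channels pulled towards one another, i.e. the subtle colour separation is reduced before the data is ever recorded, which is why post production can’t fully restore it.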

The quality of the color filter array will affect the quality of the final image. If there is too much leakage between the red, green and blue channels there will be a loss of subtle color textures. Faces, skin tones and those mid range nuances that make a great image great will suffer, and no amount of post production processing will make up for the loss of verisimilitude. This is what I believe I was seeing in the comparison raw footage, where a couple of the cameras just didn’t have good looking skin tones. So, just because a camera can output raw data from its sensor, this is not a guarantee of a superior image. It might well be raw, but because of sensor differences not all raw cameras are created equal.

Calibrating your viewfinder or LCD.

One of the most important things to do before you shoot anything is to make sure that any monitors, viewfinders or LCD panels are accurately calibrated. The majority of modern HD cameras have built-in colour bars and these are ideal for checking your monitor. On most Sony cameras you have SMPTE ARIB colour bars like the ones in the image here. Note that I have raised the black level in the image so that you can see some of the key features more clearly. If you’re using an LCD or OLED monitor connected via HDSDI or HDMI then the main adjustments you will have are Contrast, Brightness and Saturation.

First, set up the monitor or viewfinder so that the 100% white square is shown as peak white on the monitor. This is done by increasing the contrast control until the white box stops getting brighter on the screen. Once it reaches maximum brightness, back the contrast down until you can just perceive the tiniest reduction in brightness on the screen.

Once this is set, use the pluge bars to set the black level. The pluge bars are the narrow, near-black bars that I’ve marked as -2%, +2% and +4% in the picture; they are each separated by black. The -2% bar is blacker than black, so we should not be able to see it. Using the brightness control, adjust the screen so that you can’t see the -2% bar but can just see the +2% bar. The +4% bar should also be visible, separated from the +2% bar by black.
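
For reference, here is roughly where those pluge levels sit as 8 bit code values, assuming the usual video-range mapping of black at code 16 and 100% white at 235 (a quick sketch, not part of the calibration procedure itself):

```python
# Convert the pluge percentages into approximate 8 bit code values,
# assuming 0% black = 16 and 100% white = 235 (219 steps per 100%).
BLACK, WHITE = 16, 235
step = (WHITE - BLACK) / 100  # code values per 1% of signal

for percent in (-2, 0, 2, 4):
    print(f"{percent:+d}% -> code value {round(BLACK + percent * step)}")
```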

Colour is harder to set accurately. Looking at the bars, the main upper bars are 75% bars, so they are fully saturated but only at 75% luma. The four coloured boxes, two on each side, two thirds of the way down the pattern, are 100% fully saturated boxes. Using the outer 100% boxes, increase the saturation or colour level until the colour vibrance of the outer boxes stops increasing, then back the level down again until you just perceive the colour decreasing. I find this easiest to see with the blue box.

Now you should have good, well saturated looking bars on your monitor or LCD, and provided it is of reasonable quality it should be calibrated well enough for judging exposure.

I find that on most EX or F3 cameras the LCD panel ends up with contrast at zero, colour at zero and brightness at about +28.