Category Archives: Technology

What shutter speed to use if shooting 50p or 60p for 50i/60i conversion.

An interesting question got raised on Facebook today.

What shutter speed should I use if I am shooting at 50p so that my client can later convert the 50p to 50i? Of course this would also apply to shooting at 60p for 60i conversion.

Let's first make sure we all understand what's being asked for here: shooting at 50(60) progressive frames per second so that the footage can later be converted to 25(30) frames per second interlaced, which has 50(60) fields per second.

If we just consider normal 50p or 60p shooting, then the shutter speed you choose depends on many factors, including what you are shooting, how much light you have and personal preference.

1/48th or 1/50th of a second is normally considered the slowest shutter speed at which motion blur in a typical frame no longer significantly softens the image. This is why old point-and-shoot film cameras almost always had a 1/50th shutter; it was the slowest you could get away with.

Shooting with a shutter speed that is half the duration of the camera's frame interval is also known as using a 180 degree shutter, a necessary practice with a film movie camera because the mechanical shutter must be closed while the film is physically advanced to the next frame. But a closed shutter period isn't essential with an electronic camera as there is no film to move, so you don't have to use a 180 degree shutter if you don't want to.

There is no reason why you can't use a 1/50th or 1/60th shutter when shooting at 50fps or 60fps, especially if you don't have a lot of light to work with. 1/50th (1/60th) at 50fps (60fps) will give you the smoothest motion as there are no breaks in the motion between each frame. But many people like to sharpen up the image further by using 1/100th (1/120th) to reduce motion blur, or they prefer the slightly steppy cadence this brings as it introduces a small jump in motion between each frame. Of course 1/100th needs twice as much light. So there is no hard and fast rule: some shots will work better at 1/50th while others may work better at 1/100th.
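To make the arithmetic concrete, here is a minimal Python sketch (the function name is just illustrative) of the relationship between frame rate, shutter angle and shutter speed: a 180 degree shutter exposes for half the frame interval, while a 360 degree shutter exposes for the whole interval and gives the smoothest motion.

```python
# Minimal sketch: exposure time = shutter_angle / (360 * frame_rate)
def shutter_time(frame_rate, shutter_angle=180.0):
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return shutter_angle / (360.0 * frame_rate)

for fps in (25, 30, 50, 60):
    t180 = shutter_time(fps, 180)  # classic 180 degree shutter
    t360 = shutter_time(fps, 360)  # shutter open for the whole frame period
    print(f"{fps}fps: 180deg = 1/{round(1 / t180)}s, 360deg = 1/{round(1 / t360)}s")
```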

However, if you are shooting at 50fps or 60fps so that it can be converted to 50i or 60i, with each frame becoming a field, then the "normal" shutter speed to use is 1/50th or 1/60th, to mimic a 25fps-50i or 30fps-60i camera, which would typically have its shutter running at 1/50th or 1/60th. 1/100th (1/120th) at 50i (60i) can look a little over sharp because of increased aliasing, since an interlaced video field only has half the vertical resolution of the full frame. This is particularly true for 50p converted to 50i, as there is no in-camera anti-aliasing and each frame simply has its vertical resolution divided by 2 to produce the equivalent of a single field. When you shoot with a "real" 50i camera, line pairs on the sensor are combined and read out together as a single field line, and this slightly softens and anti-aliases each of the fields (50i has lower vertical resolution than 25p). With a simple software conversion from 50p to 50i this anti-aliasing does not occur. Combine that with a faster than typical shutter speed and the interlaced image can start to look over sharp and may have jaggies or color moire not present in the original 50/60p footage.
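As a rough illustration of why a simple software conversion can alias where a real interlaced camera does not, here is a hedged numpy sketch (the function names are mine, not those of any real converter) contrasting straight line decimation with line-pair averaging:

```python
import numpy as np

def field_by_decimation(frame, parity):
    """Naive field extraction: just take every other line (parity 0 or 1)."""
    return frame[parity::2, :]

def field_by_line_averaging(frame, parity):
    """Softer field: average each line with the next one before decimating,
    loosely mimicking the line-pair readout of a true interlaced camera
    (edge wrap-around ignored for clarity)."""
    blurred = (frame.astype(np.float32) + np.roll(frame, -1, axis=0)) / 2.0
    return blurred[parity::2, :].astype(frame.dtype)

# 50p to 50i: frame 0 supplies the upper field, frame 1 the lower field, and so on.
frames = [np.random.randint(0, 256, (1080, 1920), dtype=np.uint8) for _ in range(4)]
fields = [field_by_decimation(f, i % 2) for i, f in enumerate(frames)]
```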


More on frame rate choices for today's video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Lets look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/PAL area these are going to be frame rates you are familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone these are not good frame rates to use.

Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.

The odd 25p cadence is most noticeable on smooth pans and tilts where the pan speed will appear to jump slightly as the cadence flips between the 10 frame x3 and 15 frame x 2 segments. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn’t exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
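To see where those numbers come from, here is a small hedged sketch (one possible as-even-as-possible distribution; a real graphics driver may interleave the repeats differently) of how a 60Hz display has to pad out lower frame rates:

```python
def repeats_per_frame(source_fps, display_hz=60):
    """Spread display_hz refreshes across source_fps frames as evenly as possible."""
    base, extra = divmod(display_hz, source_fps)
    return [base + 1 if i < extra else base for i in range(source_fps)]

print(repeats_per_frame(25))  # ten frames shown 3 times, fifteen shown twice (10*3 + 15*2 = 60)
print(repeats_per_frame(50))  # ten frames shown twice, forty shown once (10*2 + 40*1 = 60)
print(repeats_per_frame(30))  # every frame shown exactly twice - a perfectly even cadence
```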
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p  to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p when shown on a 60Hz screen are shown by using 2:3 cadence where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way any other movie or feature film is shown on TV and it doesn’t look too bad.
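As a small hedged sketch of that 2:3 pattern: each 24p frame is alternately held for 2 then 3 display refreshes, so 24 frames fill exactly 60 refreshes per second.

```python
def pulldown_23(num_frames):
    """How many 60Hz refreshes each successive 24p frame is held for."""
    return [2 if i % 2 == 0 else 3 for i in range(num_frames)]

pattern = pulldown_23(24)
print(pattern[:6])   # [2, 3, 2, 3, 2, 3]
print(sum(pattern))  # 60 refreshes, i.e. one second of display time
```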
30p or 29.97p footage will look smoother than 24p: all you need to do is show each frame twice to get to 60Hz, so there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality, which means larger files and possibly slower downloads, and this must be considered. 30p is a reasonable middle ground choice for a lot of productions, not as juddery as 24p but not as smooth as 60p.
24p or 23.98p for “The Film Look”.
Generally if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of modern TVs feature motion compensation processes designed to eliminate judder. You might see things in the TV's literature such as "100Hz smooth motion" or similar. If this function is enabled in the TV it will take any low frame rate footage such as 24p or 25p and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film-like motion as the TV will do its best to smooth it out! Meanwhile someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it's viewed.
Most TV’s that have this feature will disable it it when the footage is 60p as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then for the export file create a 60p file as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

Why hasn’t anyone brought out a super sensitive 4K camera?

Our current video cameras are operating at the limits of current sensor technology. As a result there isn’t much a camera manufacturer can do to improve sensitivity without compromising other aspects of the image quality.
Every sensor is made out of silicon and silicon is around 70% efficient at converting photons of light into electrons of electricity. So the only things you can do to alter the sensitivity is change the pixel size, reduce losses in the colour and low pass filters, use better micro lenses and use various methods to prevent the wires and other electronics on the face of the sensor from obstructing the light. But all of these will only ever make very small changes to the sensor performance as the key limiting factor is the silicon used to make the sensor.
 
This is why even though we have many different sensor manufacturers, if you take a similar sized sensor with a similar pixel count from different manufacturers the performance difference will only ever be small.
 
Better image processing with more advanced noise reduction can help reduce noise which can be used to mimic greater sensitivity. But NR rarely comes without introducing other artefacts such as smear, banding or a loss of subtle details. So there are limits as to how much noise reduction you want to apply. 
 

So, unless there is a new sensor technology breakthrough we are unlikely to see any new camera come out with a large, real improvement in sensitivity. We are also unlikely to see a sudden jump in resolution without a sensitivity or dynamic range penalty for a like for like sensor size. This is why Sony's Venice and the Red cameras are moving to larger sensors, as this is the only realistic way to increase resolution without compromising other aspects of the image. It's also why the current crop of S35mm 4K cameras all have very similar sensitivity, similar dynamic range and similar noise levels.

 

A great example of this is the Sony A7s. It is more sensitive than most 4K S35 video cameras simply because it has a larger full frame sensor, so the pixels can be bigger, so each pixel can capture more light. It’s also why cameras with smaller 4K sensors will tend to be less sensitive and in addition have lower dynamic range (because the pixel size determines how many electrons it can store before it overloads).

FS5 Eclipse and 3D Northern Lights by Jean Mouette and Thierry Legault.

Here is something a little different.

A few years ago I was privileged to have Jean Mouette and Thierry Legault join me on one of my Northern Lights tours. They were along to shoot the Aurora on an FS100 (it might have been an FS700) in real time. Sadly we didn't have the best of Auroras on that particular trip. Thierry is famous for his amazing images of the Sun with the International Space Station passing in front of it.

Amazing image by Thierry Legault of the ISS passing in front of the Sun.

Well the two of them have been very busy. Working with some special dual A7s camera rigs recording on to a pair of Atomos Shoguns, they have been up in Norway shooting the Northern Lights in 3D. You can read more about their exploits and find out how they did it here: https://www.swsc-journal.org/articles/swsc/abs/2017/01/swsc170015/swsc170015.html

To be able to "see" the Aurora in 3D they needed to place the camera rigs over 6km apart. I did try to take some 3D time-lapse of the Aurora a few years back with cameras 3km apart, but that was time-lapse and I was thwarted by low cloud. Jean and Thierry have gone one better and filmed the Aurora not only in 3D but also in real time. That's no mean feat!

One of the two A7s camera rigs used for the real time 3D Aurora project. The next stage will use 4 cameras in each rig for whole sky coverage.

If you want to see the 3D movies take a look at this page: http://www.iap.fr/science/diffusion/aurora3d/aurora3d.html

I’d love to see these projected in a planetarium or other dome venue in 3D. It would be quite an experience.

Jean was also in the US for the total eclipse in August. He shot the eclipse using an FS5 recording 12 bit raw on an Atomos Shogun. He's put together a short film of his experience and it really captures the excitement of the event as well as some really spectacular images of the Moon moving across the face of the Sun. It really shows what a versatile camera the FS5 is.

If you want a chance to see the Northern Lights for yourself why not join me next year for one of my rather special trips to Norway. I still have some spaces. http://www.xdcam-user.com/northern-lights-expeditions-to-norway/

SD Cards – how long do they last?

This came up on Facebook the other day: how long do SD cards last?

First of all, I have found SD cards to be pretty reliable overall. Not as reliable as SxS cards or XQD cards, but pretty good generally. The physical construction of SD cards has let me down a few times, with the little plastic fins between the contacts breaking off. I've had a couple of cards that have just died, but I didn't lose any content as the camera wouldn't let me record to them. I have also had SD cards that have given me a lot of trouble getting content and files off them. But compared to tape, I've had far fewer problems with solid state media.

But something that I don’t think most people realise is that a  lot of solid state media ages the more you use it. In effect it wears out.

There are a couple of different types of memory cell that can be used in solid state media. High end professional media will often use single level memory cells that are either on or off. These cells can only store a single value, but they tend to be fast and extremely reliable due to their simplicity. But you need a lot of them in a big memory card.  The other type of cell found in most lower cost media is a multi-level cell. Each multi-level cell stores a voltage and the level of the voltage in that cell represents many different values. As a result each cell can store more than one single value. The memory cells are insulated to prevent the voltage charge leaking away. However each time you write to the cell the insulation can be eroded. Over time this can result in the cell becoming leaky and this allows the voltage in the cell to change slightly resulting in a change to the data that it holds. This can lead to data corruption.

So multi-level cards that get used a lot may develop leaky cells. But if the card is read reasonably soon after it was written to (days, weeks, a month perhaps) then it is unlikely that the user will experience any problems. The cards include circuitry designed to detect problem cells and then avoid them. But over time the card can reach a point where it no longer has enough spare memory to keep mapping out damaged cells, or the cells lose their charge quickly, and as a result the data becomes corrupt.

Raspberry Pi computers that use SD cards as their main storage can kill SD cards in a matter of days because of the extremely high number of times the card may be written to.

With a video camera it will depend on how often you use the cards. If you only have one or two cards and you shoot a lot, I would recommend replacing the cards yearly. If you have lots of cards, either use one or two and replace them regularly, or cycle through all the cards you have to extend their life and avoid excessive use of any one card, which might make it less reliable than the rest.

One thing regular SD cards are not good for is long term storage (more than a year, and never more than 5 years) as the charge in the cells will leak away over time. There are special write-once SD cards designed for archival purposes where each cell is permanently fused to either on or off. Most standard SD cards, no matter how many times they have been used, won't hold data reliably beyond 5 years.

What does ISO mean with today's cameras?

Once upon a time the meaning of ISO was quite clear. It was a standardised sensitivity rating of the film stock you were using. If you wanted more sensitivity, you used film with a higher ISO rating. But today the meaning of ISO is less clear and we can’t swap our sensor out for more or less sensitive ones. So what does it mean?

ISO refers to the International Organization for Standardization, which specifies many, many different standards for many different things. For example ISO 3166 is for country codes and ISO 50001 is for energy management.

But in our world of film and TV there are two ISO standards that we have blended into one and we just call it “ISO”.

ISO 5800:2001 is the standard used to determine the sensitivity of color negative film, found by plotting the density of the film against its exposure to light.

ISO 12232:2006 specifies the method for assigning and reporting ISO speed ratings, ISO speed latitude ratings, standard output sensitivity values, and recommended exposure index values, for digital still cameras.

Note a key difference: ISO 5800 is the measurement of the actual sensitivity to light of film.  ISO 12232 is a standardised way to report the speed rating, it is not a direct sensitivity measurement.

Within the digital camera ISO rating system there are five different methods a camera manufacturer can use when obtaining the ISO rating of a camera. The most commonly used is the Recommended Exposure Index (REI) method, which allows the manufacturer to specify a camera model's EI or base ISO arbitrarily, based on what the manufacturer believes produces a satisfactory image. So it's not really a measure of the camera's sensitivity, but a rating that, if used with a calibrated external light meter to set the exposure, will give a satisfactory looking image. This is very different to a sensitivity measurement, and opinions as to what constitutes a satisfactory image vary from person to person. So there is a lot of scope for movement in how an electronic camera might be rated.

As you cannot change the sensor in a digital camera, you cannot change the camera's efficiency at converting light into electrons (which is largely determined by the materials used and the physical construction). So you cannot change the actual sensitivity of the camera to light. But we have all seen how the ISO number of most digital cameras can normally be increased (and sometimes lowered) from the base ISO number.

Raising and lowering the ISO in an electronic camera is normally done by adjusting the amplification of the signal coming from the sensor, typically referred to as "gain" in the camera. It's not actually a physical change in the camera's sensitivity to light; it's like turning up the volume on a radio to make the music louder. Dual ISO cameras that claim not to add gain when switching between ISOs typically do this by adjusting the way the signal from the sensor is converted from an analog signal to a digital one. While it is true that this is different to a gain shift, it does typically alter the noise levels, because to make the picture brighter you need to sample the sensor's output lower down, closer to the noise floor. Once again though it is not an actual sensitivity change; it does not alter the sensor's sensitivity to light, you are just picking a different part of its output range.
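As a rough, hedged sketch of the usual relationship (the function names are mine, and real cameras quantise their gain steps), the displayed ISO typically doubles for every 6dB of added gain:

```python
import math

def iso_from_gain(base_iso, gain_db):
    """Equivalent ISO for a given amount of added gain, assuming 6dB per stop."""
    return base_iso * 2 ** (gain_db / 6.0)

def gain_from_iso(base_iso, iso):
    """Gain in dB needed to reach a given ISO rating from the base ISO."""
    return 6.0 * math.log2(iso / base_iso)

print(iso_from_gain(800, 12))    # 800 ISO base + 12dB of gain -> 3200 ISO
print(gain_from_iso(800, 1600))  # 1600 ISO is +6dB (one stop) over an 800 ISO base
```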

Noise and Signal To Noise Ratio.

Most of the noise in the pictures we shoot comes from the sensor and the level of this noise coming from the sensor is largely unchanged no matter what you do (some dual ISO cameras use variations in the way the sensor signal is sampled to shift the noise floor up and down a bit). So the biggest influence on the signal to noise ratio is the amount of light you put on the sensor. More light = More signal. The noise remains the same but the signal is bigger so you get a better signal to noise ratio, up to the point where the sensor overloads.

But what about low light?

To obtain a brighter image when the light levels are low and the picture coming from the sensor looks dark, the signal coming from the sensor is boosted or amplified (gain is added). This amplification makes the desirable signal bigger but also makes the noise bigger. If we make the desirable picture 2 times brighter we also make the noise 2 times bigger. As a result the picture will be noisier and grainier than one where we had enough light to get the brightness we want.

The signal to noise ratio deteriorates because the added amplification means the recording will clip more readily. Something that is close to the recording's clip point may be pushed above it by the added gain, so the range you can record reduces while the noise gets bigger. However the optimum exposure is now achieved with less light, so the equivalent ISO number increases. If you were using a light meter you would increase the ISO setting on the light meter to get the correct exposure. But the camera isn't more sensitive; it's just that the optimum amount of light for the "best" or "correct" exposure is reduced due to the added amplification.

So with an electronic camera, ISO is a rating that will give you the correct brightness of recording for the amount of light and the amount of gain that you have. This is different to sensitivity. Obviously the two are related, but they are not quite the same thing.

Getting rid of noise:

To combat the inevitable noise increase as you add gain/amplification, most modern cameras use electronic noise reduction, which is applied more and more aggressively as you increase the gain. At low levels this goes largely un-noticed. But as you start to add more gain and thus more noise reduction you will start to degrade the image. It may become softer, it may become smeary. You may start to see banding, ghosting or other artefacts.

Often as you increase the gain you may only see a very small increase in noise as the noise reduction does a very good job of hiding it. But for every bit of noise that's reduced there will be another artefact replacing it.

Technically the signal to noise ratio is improved by the use of noise reduction, but this typically comes at a price and NR can be very problematic if you later want to grade or adjust the footage as often you won’t see the artefacts until after the corrections or adjustments have been made. So be very careful when adding gain. It’s never good to have extra gain.

Why are Sony’s ISO’s different between standard gammas and log?

With Sony’s log capable cameras (and most other manufacturers) when you switch between the standard gamma curves and log gamma there is a change in the cameras ISO rating. For example the FS7 is rated at 800 ISO in rec709 but rated at 2000 ISO in log. Why does this change occur and how does it effect the pictures you shoot?

As 709 etc. has a limited dynamic range (between around 6 and 10 stops depending on the knee settings) while the sensor itself has a 14 stop range, you only need to take a small part of the sensor's full range to produce that smaller range 709 or hypergamma image. That gives the camera manufacturer some freedom to pick the sweetest part of the sensor's range. This also gives some leeway as to where you place the base ISO.

I suspect Sony chose 800 ISO for the FS7 and F5 etc. because that's the sensor's sweet spot; I certainly don't think it was an accidental choice.

What is ISO on an electronic camera? ISO is an equivalent sensitivity rating. It isn't a measure of the camera's actual sensitivity; it is the ISO rating you would need to enter into an external light meter to get the correct exposure settings. Remember we can't change the sensor in these cameras so we can't actually change the camera's real sensitivity; all we can do is use different amounts of gain or signal amplification to make the pictures brighter or darker.

When you switch the camera to log you have no choice other than to take everything the sensor offers. It's a 14 stop sensor and if you want to record 14 stops, then you have to take 100% of the sensor's output. The camera manufacturer then chooses what they believe is the best exposure mid point, where they feel there is an acceptable compromise between noise, highlight and lowlight response. From that the manufacturer arrives at an equivalent ISO exposure rating.

If you have an F5, FS7 or other Sony log camera, look at what happens when you switch from rec709 to S-Log2 but you keep your exposure constant.

Middle grey stays more or less where it is and the highlights come down. White will drop from 90% to around 73%. But the ISO rating given by the camera increases from 800 ISO to 2000 ISO. This increased ISO number implies that the sensor became more sensitive. This is not the case, and it is a little misleading. If you set the camera up to display gain in dB and switch between Rec-709 (standard gamma) and S-Log, the camera stays at 0dB; this should be telling you that there is no change to the camera's gain and no change to its sensitivity. Yet the ISO rating changes. Why?

The only reason the ISO number increases is to force us to underexpose the sensor by 1.3 stops (relative to standard gammas such as Rec-709 and almost every other gamma) so we can squeeze a bit more out of the highlights. If you were using an external light meter to set your exposure and you changed the ISO setting on the meter from 800 ISO to 2000 ISO, the meter would tell you to close the aperture by 1.3 stops. So that's what we do on the camera: we close the aperture down a bit to gain some extra highlight range.
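That 1.3 stop figure is just the ratio of the two ISO numbers; a quick hedged check:

```python
import math

# Ratio of the two ISO ratings expressed in stops: log2(2000 / 800) ~ 1.32
print(f"{math.log2(2000 / 800):.2f} stops")
```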

But all this comes at the expense of the shadows and mid range. Because you are putting less light on the sensor if you use 2000 ISO as your base setting, the shadows and mids are now not as good as they would be in 709 or with the other standard gammas.

This is part of the reason why I recommend that you shoot log between 1 and 2 stops brighter than the base levels given by Sony. Shooting 1 stop brighter is equivalent to shooting at 1000 ISO, which is closer to the 800 ISO that Sony rate the camera at in standard gamma. Shooting that bit brighter gives you a much better mid range that grades much better.

 

Sony RX0 – Is this the ultimate mini-cam (for now at least).

Sony have just released a rather exciting looking new type of mini-cam, the RX0.

I have not played with one yet, so I can only base my comments on the specs, but the specs are both impressive and exciting.

Most GoPro-type cameras use tiny sensors packed with pixels. This presents a problem as they tend not to be very light sensitive. However those small sensors, when combined with an ultra wide angle lens, eliminate the need to focus as the depth of field is vast. But that's not always what you want. Sometimes you don't want an ultra wide fisheye view of the world, sometimes you want to get in a bit closer. Sometimes you want a bit of selective focus. In addition it's hard to be creative when you have no focus or depth of field control. Talking of control, most mini-cams have very, very little in the way of manual control as they don't have adjustable apertures and as a result rely entirely on variable gain and shutter speeds to control the exposure.

Enter the RX0. The RX0 shares a lot of features with the well regarded RX series of compact stills cameras. It has a 1.0″ type sensor, huge compared to most other mini-cams. It has a 24mm f4 lens, so it's less wide and has a shallower depth of field. It can shoot in 4K, and it can even record using S-Log2 to capture a greater dynamic range, so it may turn out to be a great mini-cam for HDR productions (although how big that dynamic range is is not clear at this time). I wish I had had some of these for the HDR shoots I did at the beginning of the year.

It’s a camera you can control manually and it even has a special high speed shutter mode for all but eliminating rolling shutter artefacts.

Want to shoot slow-mo? No problem, the maximum frame rate is 960fps (although I suspect that the image quality drops at the higher frame rates).

It’s still very small and very compact, it’s also waterproof and has a high degree of shock proofing.

I can see myself using this as a time lapse camera or in a VR rig. So many applications for a camera like this. Can’t wait to get my hands on one.

Here’s the Sony product page: https://www.sony.co.uk/electronics/cyber-shot-compact-cameras/dsc-rx0#product_details_default

What is HLG and what is it supposed to be used for?

While we wait for Sony to re-release the version 4 firmware for the FS5 I thought I would briefly take a look at what HLG is and what it’s designed to do as there seems to be a lot of confusion.

HLG stands for Hybrid Log Gamma. It is one of the gamma curves used for DISTRIBUTION of HDR content to HDR TVs that support the HLG standard. It was never meant to be used for capture; it was specifically designed for delivery.

As the name suggests HLG is a hybrid gamma curve. It is a hybrid of Rec-709 and log. But before you get all excited by the log part, the log used by HLG is only a small part of the curve and it is very aggressive: it crams a very big dynamic range into a very small space. This means that if you take it into post production and start to fiddle around with it there is a very high probability that banding and other similar artefacts will become apparent.

The version of HLG in the FS5 firmware follows the BBC HLG standard (there is another NHK standard). From black to around 70% the curve is very similar to Rec-709, so from 0 to 70% you get quite reasonable contrast. Around 70% the curve transitions to a log type gamma, allowing a dynamic range much greater than 709 to be squeezed into a conventional codec. The benefit this brings is that on a conventional Rec-709 TV the picture doesn't look wrong. The mid range looks very slightly darker than normal and only slightly flat, but the highlights are quite flat and washed out. For the average home viewer watching on a 709 TV the picture looks OK, maybe not the best image ever seen, but certainly acceptable.
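For reference, this is the shape being described. The sketch below is the HLG OETF as published in ITU-R BT.2100, not Sony's in-camera implementation (which may be scaled or tweaked): a square-root section for the lower part of the range, which is roughly 709-like in contrast, and a logarithmic section above it for the highlights.

```python
import math

# ITU-R BT.2100 HLG OETF constants
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e):
    """Map normalised scene-linear light e (0..1) to an HLG signal value (0..1)."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)          # square-root (roughly 709-like) section
    return A * math.log(12.0 * e - B) + C  # logarithmic section for the highlights

for e in (0.01, 1.0 / 12.0, 0.25, 0.5, 1.0):
    print(f"linear {e:.3f} -> HLG signal {hlg_oetf(e):.3f}")
```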

However, feed this same signal to an HDR TV that supports HLG and the magic starts to happen. If the TV supports HLG (and currently only a fairly small proportion of HDR TVs support HLG; most use PQ/ST2084) then it will take the compressed log highlight range and stretch it out to give a greater dynamic range display. The fact that the signal gets stretched out means that the quality of the codec used is critical. HLG was designed for 10 bit distribution using HEVC; it was never meant to be used with 8 bit codecs, so be very, very careful if using it in UHD with the FS5 as this is only 8 bit.

So, HLG’s big party trick is that it produces an acceptable looking image on a Rec-709 TV, but also gives an HDR image on an HDR TV. So one signal can be used for both HDR and SDR giving what might be called backwards compatibility with regular SDR TV’s. But it is worth noting that on a 709 TV HLG images don’t look as good as images specifically shot or graded for 709. It is a bit of a compromise.

What about the dynamic range? High end HDR TVs can currently show about 10 stops. Lower cost HDR TVs may only be able to show 8 stops (compared to the 6 stops of a 709 TV). There is no point in feeding a 14 stop signal to a 10 stop TV; it won't look its best. From what I've seen of the HLG curves in the FS5 they allow for a maximum of around 10 to 11 stops, about the same as the cinegammas. HLG can be used for much greater ranges, but as yet there are no TVs that can take advantage of this and it will be a long time before there are. So for now the recorded range is deliberately limited so you don't see stuff in the viewfinder that will never be seen on today's HDR TVs. As a result the curves don't use the full recording range of the camera. This means they are not using the recording data in a particularly efficient way; a lot of data is unused and wasted. But this is necessary to make the curves directly compatible with an HLG display.

What about grading them? My advice: don't try to grade HLG footage. There are three problems. The first is that the gamma is very different in the low/mid range compared to the highlights. This means that in post the shadows and mid range will respond to corrections and adjustments very differently to the highlights. That makes grading tricky as you need to apply separate corrections to the midrange and highlights.

The second problem is that there is a very large highlight range squeezed into a very small recording range. It should look OK when viewed directly with no adjustment. But if you try stretching that out to make the highlights brighter (remember they never reach 100% as recorded) or to make them more contrasty, there is a higher probability of seeing banding artefacts than with any other gamma in the camera.

The third issue is simply that the limited recording range means you have fewer code values per stop than regular Rec-709, the cinegammas or S-Log2. HLG is the worst choice in the FS5 if you intend to grade.

The next problem is color. Most HDR TVs want Rec-2020 color. Most conventional monitors want Rec-709 color. Feed Rec-2020 into a 709 monitor and the colors look flat and the hues are all over the place, especially skin tones. Some highly saturated colors on the edge of the color gamut may pop out more than others and this looks odd.

Feed 709 into a 2020 TV and it will look super saturated and once again the color hues will be wrong. Also, don't fool yourself into thinking that by recording Rec-2020 you are actually capturing more. The FS5 sensor is designed for 709. The color filters on the sensor do work a little beyond 709, but nowhere near what's needed to actually "see" the full 2020 color space. So if you set the FS5 to 2020, what you are capturing is only marginally greater than 709. All you really have is 709 with the hues shifted and the saturation reduced so the color looks right on a 2020 monitor or TV.

So really, unless you are actually feeding a Rec-2100 (HLG + 2020) TV, there is no point in using 2020 color, as this requires you to grade the footage to get the colors to look right on most normal TVs and monitors. As already discussed, HLG is far from ideal for grading, so it is better to shoot 709 if that's what your audience will be using.

Don’t let the hype and fanfares that have surrounded this update cloud your vision. HLG is certainly very useful if you plan to directly feed HDR to a TV that supports HLG. But if you plan on creating HDR content that will be viewed on both HLG TV’s and the more common PQ/ST2084 TV’s then HLG is NOT what you want. You would be far – far better off shooting with S-Log and then grading your footage to these two very different HDR standards. If you try to convert HLG to PQ it is not going to look nearly as good as if you start with S-Log.

Exposure levels: If you want footage that works both with an HLG HDR TV and an SDR 709 TV then you need to expose carefully. A small amount of over exposure won't hurt the image when you view it on a 709 TV or monitor, so it will look OK in the viewfinder. But on an HDR TV any over exposure could result in skin tones that look much too bright and an image that is unpleasantly bright. As a guide you should expose diffuse 90% white (a white card or white piece of paper) at no more than 75%. Skin tones should be around 55 to 60%. You should not expose HLG as brightly as you do Rec-709.

Sure, you can shoot with HLG for non HDR applications. You will get some slightly flat looking footage with rolled off highlights. If that's the image you want then I'm not going to stop you shooting that way, but I suggest you consider the Cinegammas instead, as these capture a similar dynamic range, also have a nice highlight roll off (when exposed correctly) and do use the full recording range.

Whatever you do, make sure you understand what HLG was designed for. Make sure you understand the post production limitations and above all else understand that it absolutely is not a substitute for S-Log.

PXW-FS5, Version 4.0 and above base ISO – BEWARE if you use ISO!!

The new version 4.0 firmware for the PXW-FS5 brings a new lower base ISO range to the camera. This very slightly reduces noise levels in the pictures. If you use “gain” in dB to indicate your gain level, then you shouldn’t have any problems, +6dB is still +6dB and will be twice as noisy as 0dB. However if you use ISO to indicate your gain level then be aware that as the base sensitivity is now lower, if you use the same ISO with version 4 as you did with version 3 you will be adding more gain than before.

Version 3 ISO first, Version 4 ISO second:

Standard: 1000 ISO – 800 ISO
Still: 800 ISO – 640 ISO
Cinegamma 1: 800 ISO – 640 ISO
Cinegamma 2: 640 ISO – 500 ISO
Cinegamma 3: 1000 ISO – 800 ISO
Cinegamma 4: 1000 ISO – 800 ISO
ITU709: 1000 ISO – 800 ISO
ITU709(800): 3200 ISO – 2000 ISO
S-Log2: 3200 ISO – 3200/2000 ISO
S-Log3: 3200 ISO – 3200/2000 ISO

At 0dB or the base ISO these small changes (roughly 2dB for most gammas) won't make much difference because the noise levels are pretty low in either case. But at higher gain levels the difference is more noticeable.

For example, if you often used Cinegamma 1 at 3200 ISO with Version 3 you would be adding 12dB of gain and the pictures would be approx 4x noisier than at the base ISO.

With Version 4, 3200 ISO with Cinegamma 1 is around 14dB of extra gain, so the pictures will be roughly 5 times noisier than at the base ISO.

Having said that, because 0dB in version 4 is now a little less noisy than in version 3, 3200 ISO in V3 looks quite similar to 3200 ISO in version 4 even though you are adding a bit more gain.
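For anyone who wants to check the arithmetic, here is a minimal sketch of how those gain and noise figures fall out, assuming the usual 6dB-per-stop convention:

```python
import math

def extra_gain_db(base_iso, iso):
    """Extra gain in dB implied by running a given ISO over the base ISO."""
    return 6.0 * math.log2(iso / base_iso)

def noise_multiplier(gain_db):
    """Each 6dB of added gain doubles the amplitude of the noise."""
    return 2 ** (gain_db / 6.0)

for version, base_iso in (("Version 3", 800), ("Version 4", 640)):
    g = extra_gain_db(base_iso, 3200)
    print(f"Cinegamma 1, {version}: +{g:.1f}dB, noise x{noise_multiplier(g):.1f}")
# Version 3: +12.0dB, noise x4.0   Version 4: +13.9dB, noise x5.0
```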