Category Archives: Technology

More on frame rate choices for today's video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid-price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let's look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/PAL area these are going to be frame rates you will be familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone, these are not good frame rates to use.

Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.
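
If you want to see where those numbers come from, the arithmetic is easy to sketch in a few lines of Python. This is purely illustrative maths, not anything the computer's video subsystem actually runs:

```python
def repeat_pattern(source_fps, display_hz):
    """How many times each source frame must be repeated per second to
    pad source_fps up to display_hz (display assumed to run faster)."""
    base = display_hz // source_fps          # minimum repeats per frame
    extra = display_hz - base * source_fps   # frames needing one more repeat
    return base, extra

for fps in (25, 30, 50):
    base, extra = repeat_pattern(fps, 60)
    print(f"{fps}p on 60Hz: {fps - extra} frames x{base}, {extra} frames x{base + 1}")

# 25p on 60Hz: 15 frames x2, 10 frames x3  <- uneven cadence
# 30p on 60Hz: 30 frames x2, 0 frames x3   <- perfectly even
# 50p on 60Hz: 40 frames x1, 10 frames x2  <- uneven again, but more fluid
```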

The odd 25p cadence is most noticeable on smooth pans and tilts where the pan speed will appear to jump slightly as the cadence flips between the 10 frame x3 and 15 frame x2 segments. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn’t exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p footage shown on a 60Hz screen uses a 2:3 cadence, where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way movies and feature films are shown on TV and it doesn’t look too bad.
30p or 29.97p footage will look smoother than 24p: all you need to do is show each frame twice to get to 60Hz, so there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality. This means larger files and possibly slower downloads, which must be considered. 30p is a reasonable middle-ground choice for a lot of productions, not as juddery as 24p but not as smooth as 60p.
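
The cadences themselves are easy to visualise with a short, purely illustrative Python sketch that expands source frames into the displayed sequence:

```python
from itertools import cycle

def pulldown(frames, pattern):
    """Expand source frames into a display sequence by repeating each
    frame according to a repeating cadence, e.g. (2, 3) for 2:3 pulldown."""
    out = []
    for frame, repeats in zip(frames, cycle(pattern)):
        out.extend(frame * repeats)
    return "".join(out)

print(pulldown("ABCD", (2, 3)))  # 24p -> 60Hz: AABBBCCDDD (2:3 cadence)
print(pulldown("ABCD", (2, 2)))  # 30p -> 60Hz: AABBCCDD (every frame twice)
```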
24p or 23.98p for “The Film Look”.
Generally if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of modern TVs feature motion compensation processes designed to eliminate judder. You might see things in the TV’s literature such as “100Hz smooth motion” or similar. If this function is enabled in the TV it will take any low frame rate footage such as 24p or 25p and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film-like motion as the TV will do its best to smooth it out! Meanwhile someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it’s viewed.
Most TV’s that have this feature will disable it it when the footage is 60p as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then for the export file create a 60p file as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

FS5 Eclipse and 3D Northern Lights by Jean Mouette and Thierry Legault.

Here is something a little different.

A few years ago I was privileged to have Jean Mouette and Thierry Legault join me on one of my Northern Lights tours. They were along to shoot the Aurora on an FS100 (it might have been an FS700) in real time. Sadly we didn’t have the best of Auroras on that particular trip. Thierry is famous for his amazing images of the Sun with the International Space Station passing in front of it.

Amazing image by Thierry Legault of the ISS passing in front of the Sun.

Well the two of them have been very busy. Working with some special dual A7s camera rigs recording on to a pair of Atomos Shoguns, they have been up in Norway shooting the Northern Lights in 3D. You can read more about their exploits and find out how they did it here: https://www.swsc-journal.org/articles/swsc/abs/2017/01/swsc170015/swsc170015.html

To be able to “see” the Aurora in 3D they needed to place the camera rigs over 6km apart. I did try to take some 3D time-lapse of the Aurora a few years back with cameras 3km apart, but that was time-lapse and I was thwarted by low cloud. Jean and Thierry have gone one better and filmed the Aurora not only in 3D but also in real time. That’s no mean feat!

One of the two A7s camera rigs used for the real time 3D Aurora project. The next stage will use 4 cameras in each rig for whole sky coverage.

If you want to see the 3D movies take a look at this page: http://www.iap.fr/science/diffusion/aurora3d/aurora3d.html

I’d love to see these projected in a planetarium or other dome venue in 3D. It would be quite an experience.

Jean was also in the US for the total Eclipse in August. He shot the eclipse using an FS5 recording 12 bit raw on an Atomos Shogun. He’s put together a short film of his experience and it really captures the excitement of the event as well as some really spectacular images of the moon moving across the face of the sun. It really shows what a versatile camera the FS5 is.

If you want a chance to see the Northern Lights for yourself why not join me next year for one of my rather special trips to Norway. I still have some spaces. http://www.xdcam-user.com/northern-lights-expeditions-to-norway/

SD Cards – how long do they last?

This came up on Facebook the other day: how long do SD cards last?

First of all – I have found SD cards to be pretty reliable overall. Not as reliable as SxS cards or XQD cards, but pretty good generally. The physical construction of SD cards has let me down a few times, the little plastic fins between the contacts breaking off. I’ve had a couple of cards that have just died, but I didn’t lose any content as the camera wouldn’t let me record to them. I have also had SD cards that have given me a lot of trouble getting content and files off them. But compared to tape, I’ve had far fewer problems with solid state media.

But something that I don’t think most people realise is that a lot of solid state media ages the more you use it. In effect it wears out.

There are a couple of different types of memory cell that can be used in solid state media. High-end professional media will often use single-level memory cells that are either on or off. These cells can each store only a single value, but they tend to be fast and extremely reliable due to their simplicity; you do, however, need a lot of them to make a big memory card. The other type of cell, found in most lower cost media, is the multi-level cell. Each multi-level cell stores a voltage, and the level of the voltage in that cell represents one of many different values, so each cell can store more than a single value. The memory cells are insulated to prevent the voltage charge leaking away. However, each time you write to the cell the insulation is eroded slightly. Over time this can result in the cell becoming leaky, allowing the voltage in the cell to change slightly and with it the data that it holds. This can lead to data corruption.

So multi-level cards that get used a lot may develop leaky cells. But if the card is read reasonably soon after it was written to (days, weeks, a month perhaps) then it is unlikely that the user will experience any problems. The cards include circuitry designed to detect problem cells and then avoid them. But over time the card can reach a point where it no longer has enough spare memory to keep mapping out damaged cells, or the cells lose their charge quickly and as a result the data becomes corrupt.

Raspberry Pi computers that use SD cards as their main storage can kill SD cards in a matter of days because of the extremely high number of times that the card may be written to.

With a video camera it will depend on how often you use the cards. If you only have one or two cards and you shoot a lot I would recommend replacing the cards yearly. If you have lots of cards, either use one or two and replace them regularly, or cycle through all the cards you have to spread the wear and avoid any one card seeing so much use that it becomes less reliable than the rest.

One thing regular SD cards are not good for is long term storage (more than a year, and never more than 5 years) as the charge in the cells will leak away over time. There are special write-once SD cards designed for archival purposes where each cell is permanently fused to either on or off. Most standard SD cards, no matter how many times they have been used, won’t hold data reliably beyond 5 years.

Sony RX0 – Is this the ultimate mini-cam (for now at least).

Sony have just released a rather exciting looking new type of mini-cam, the RX0.

I have not played with one yet, so I can only base my comments on the specs, but the specs are both impressive and exciting.

Most GoPro-type cameras use tiny sensors packed with pixels. This presents a problem as they tend not to be very light sensitive. However, a small sensor combined with an ultra wide angle lens eliminates the need to focus as the depth of field is vast. But that’s not always what you want. Sometimes you don’t want an ultra wide fisheye view of the world, sometimes you want to get in a bit closer. Sometimes you want a bit of selective focus. In addition it’s hard to be creative when you have no focus or depth of field control. Talking of control, most mini-cams have very, very little in the way of manual control as they don’t have adjustable apertures, and as a result rely entirely on variable gain and shutter speeds to control the exposure.

Enter the RX0. The RX0 shares a lot of features with the well regarded RX series of compact stills cameras. It has a 1.0″ type sensor, huge compared to most other mini-cams. It has a 24mm f4 lens, so it’s less wide and has a shallower depth of field. It can shoot in 4K, and it can even record using S-Log2 to capture a greater dynamic range, so it may turn out to be a great mini-cam for HDR productions (although how big that dynamic range is is not clear at this time). I wish I had some of these for the HDR shoots I did at the beginning of the year.

It’s a camera you can control manually and it even has a special high speed shutter mode for all but eliminating rolling shutter artefacts.

Want to shoot slow-mo? No problem, the maximum frame rate is 960fps (although I suspect that the image quality drops at the higher frame rates).

It’s still very small and very compact, it’s also waterproof and has a high degree of shock proofing.

I can see myself using this as a time lapse camera or in a VR rig. So many applications for a camera like this. Can’t wait to get my hands on one.

Here’s the Sony product page: https://www.sony.co.uk/electronics/cyber-shot-compact-cameras/dsc-rx0#product_details_default

What is HLG and what is it supposed to be used for?

While we wait for Sony to re-release the version 4 firmware for the FS5 I thought I would briefly take a look at what HLG is and what it’s designed to do as there seems to be a lot of confusion.

HLG stands for Hybrid Log Gamma. It is one of the gamma curves used for DISTRIBUTION of HDR content to HDR TVs that support the HLG standard. It was never meant to be used for capture; it was specifically designed for delivery.

As the name suggests HLG is a hybrid gamma curve. It is a hybrid of Rec-709 and log. But before you get all excited by the log part, the log used by HLG is only a small part of the curve and it is very aggressive – it crams a very big dynamic range into a very small space. This means that if you take it into post production and start to fiddle around with it there is a very high probability of banding and other similar artefacts becoming apparent.

The version of HLG in the FS5 firmware follows the BBC HLG standard (there is another, NHK standard). From black to around 70% the curve is very similar to Rec-709, so from 0 to 70% you get quite reasonable contrast. Around 70% the curve transitions to a log type gamma, allowing a dynamic range much greater than 709 to be squeezed into a conventional codec. The benefit this brings is that on a conventional Rec-709 TV the picture doesn’t look wrong. The mid range looks very slightly darker than normal and only slightly flat, but the highlights are quite flat and washed out. For the average home viewer watching on a 709 TV the picture looks OK, maybe not the best image ever seen, but certainly acceptable.
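
For reference, the HLG curve itself is published in ITU-R BT.2100 and is simple enough to sketch in a few lines of Python. This is the reference OETF only, ignoring display-side processing and system gamma:

```python
import math

# Constants from the ITU-R BT.2100 HLG reference OETF
A = 0.17883277
B = 1 - 4 * A                    # 0.28466892
C = 0.5 - A * math.log(4 * A)    # 0.55991073

def hlg_oetf(e):
    """Map normalized scene light e (0..1) to an HLG signal value (0..1).
    Below the knee the curve is a square root; above it, a compressed log."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

for e in (0.01, 1 / 12, 0.25, 0.5, 1.0):
    print(f"scene {e:0.4f} -> signal {hlg_oetf(e):0.3f}")

# The square root segment ends at a signal level of 0.5; everything above
# that is the heavily compressed log portion an HLG TV stretches back out.
```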

However, feed this same signal to an HDR TV that supports HLG and the magic starts to happen. If the TV supports HLG (currently only a fairly small proportion of HDR TVs do; most use PQ/ST2084) then it will take the compressed log highlight range and stretch it out to give a greater dynamic range display. The fact that the signal gets stretched out means that the quality of the codec used is critical. HLG was designed for 10 bit distribution using HEVC; it was never meant to be used with 8 bit codecs, so be very, very careful if using it in UHD with the FS5 as this is only 8 bit.

So, HLG’s big party trick is that it produces an acceptable looking image on a Rec-709 TV, but also gives an HDR image on an HDR TV. So one signal can be used for both HDR and SDR giving what might be called backwards compatibility with regular SDR TV’s. But it is worth noting that on a 709 TV HLG images don’t look as good as images specifically shot or graded for 709. It is a bit of a compromise.

What about the dynamic range? High-end HDR TVs can currently show about 10 stops. Lower cost HDR TVs may only be able to show 8 stops (compared to the 6 stops of a 709 TV). There is no point in feeding a 14 stop signal to a 10 stop TV; it won’t look the best. From what I’ve seen of the HLG curves in the FS5 they allow for a maximum of around 10 to 11 stops, about the same as the cinegammas. HLG can be used for much greater ranges, but as yet there are no TVs that can take advantage of this and it will be a long time before there are. So for now the recorded range is deliberately limited so you don’t see stuff in the viewfinder that will never be seen on today’s HDR TVs. As a result the curves don’t use the full recording range of the camera. This means they are not using the recording data in a particularly efficient way; a lot of data is unused and wasted. But this is necessary to make the curves directly compatible with an HLG display.

What about grading them? My advice – don’t try to grade HLG footage. There are three problems. The first is that the gamma is very different in the low/mid range compared to the highlights. This means that in post the shadows and mid range will respond to corrections and adjustments very differently to the highlights. That makes grading tricky, as you need to apply separate corrections to the mid range and the highlights.

The second problem is that there is a very large highlight range squeezed into a very small recording range. It should look OK when viewed directly with no adjustment. But if you try stretching that out to make the highlights brighter (remember they never reach 100% as recorded) or to make them more contrasty, there is a higher probability of seeing banding artefacts than with any other gamma in the camera.

The third issue is simply that the limited recording range means you have fewer code values per stop than regular Rec-709, the cinegammas or S-Log2. HLG is the worst choice in the FS5 for grading.

The next problem is color. Most HDR TVs want Rec-2020 color. Most conventional monitors want Rec-709 color. Feed Rec-2020 into a 709 monitor and the colors look flat and the hues are all over the place, especially skin tones. Some highly saturated colors on the edge of the color gamut may pop out more than others and this looks odd.

Feed 709 into a 2020 TV and it will look super saturated and once again the color hues will be wrong. Also don’t fool yourself into thinking that by recording Rec-2020 you are actually capturing more. The FS5 sensor is designed for 709. The color filters on the sensor do work a little beyond 709, but nowhere near what’s needed to actually “see” the full 2020 color space. So if you set the FS5 to 2020, what you are capturing is only marginally greater than 709. All you really have is 709 with the hues shifted and the saturation reduced so color looks right on a 2020 monitor or TV.
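
To see why the hues and saturation shift, here is a small sketch using the standard BT.709-to-BT.2020 primary conversion matrix from ITU-R BT.2087. Showing a signal on the wrong display is effectively skipping (or wrongly applying) this conversion:

```python
import numpy as np

# Linear-light RGB conversion, BT.709 primaries -> BT.2020 primaries
# (coefficients from ITU-R BT.2087)
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

pure_709_red = np.array([1.0, 0.0, 0.0])
print(M_709_TO_2020 @ pure_709_red)   # ~[0.63, 0.07, 0.02]

# A fully saturated 709 red only reaches ~63% red in 2020 terms: 709 colors
# sit well inside the 2020 gamut. Display 2020 values on a 709 screen
# without converting back (via the matrix inverse) and everything looks
# desaturated with shifted hues; do the reverse and it looks oversaturated.
```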

So really, unless you are actually feeding a Rec-2100 (HLG + 2020) TV, there is no point in using 2020 color, as this requires you to grade the footage to get the colors to look right on most normal TVs and monitors. As already discussed, HLG is far from ideal for grading, so it is better to shoot 709 if that’s what your audience will be using.

Don’t let the hype and fanfares that have surrounded this update cloud your vision. HLG is certainly very useful if you plan to directly feed HDR to a TV that supports HLG. But if you plan on creating HDR content that will be viewed on both HLG TV’s and the more common PQ/ST2084 TV’s then HLG is NOT what you want. You would be far – far better off shooting with S-Log and then grading your footage to these two very different HDR standards. If you try to convert HLG to PQ it is not going to look nearly as good as if you start with S-Log.

Exposure levels: If you want footage that works both on an HLG HDR TV and an SDR 709 TV then you need to expose carefully. A small bit of over exposure won’t hurt the image when you view it on a 709 TV or monitor, so it will look OK in the viewfinder. But on an HDR TV any over exposure could result in skin tones that look much too bright and an image that is unpleasantly bright. As a guide you should expose diffuse 90% white (a white card or white piece of paper) at no more than 75%. Skin tones should be around 55 to 60%. You should not expose HLG as brightly as you do Rec-709.

Sure, you can shoot with HLG for non-HDR applications. You will get some slightly flat looking footage with rolled-off highlights. If that’s the image you want then I’m not going to stop you shooting that way. But I suggest you consider the cinegammas instead, as these capture a similar dynamic range, also have a nice highlight roll-off (when exposed correctly) and do use the full recording range.

Whatever you do, make sure you understand what HLG was designed for. Make sure you understand the post production limitations and above all else understand that it absolutely is not a substitute for S-Log.

PXW-FS5, Version 4.0 and base ISO – BEWARE if you use ISO!!

The new version 4.0 firmware for the PXW-FS5 brings a new lower base ISO range to the camera. This very slightly reduces noise levels in the pictures. If you use “gain” in dB to indicate your gain level, then you shouldn’t have any problems: +6dB is still +6dB and will be twice as noisy as 0dB. However if you use ISO to indicate your gain level then be aware that as the base sensitivity is now lower, if you use the same ISO with version 4 as you did with version 3 you will be adding more gain than before.

Version 3 ISO first, version 4 ISO second:

Standard: 1000 ISO – 800 ISO
Still: 800 ISO – 640 ISO
Cinegamma 1: 800 ISO – 640 ISO
Cinegamma 2: 640 ISO – 500 ISO
Cinegamma 3: 1000 ISO – 800 ISO
Cinegamma 4: 1000 ISO – 800 ISO
ITU709: 1000 ISO – 800 ISO
ITU709(800): 3200 ISO – 2000 ISO
S-Log2: 3200 ISO – 3200/2000 ISO
S-Log3: 3200 ISO – 3200/2000 ISO

At 0dB or the base ISO these small changes (around 2dB for most modes) won’t make much difference because the noise levels are pretty low in either case. But at higher gain levels the difference is more noticeable.

For example, if you often used Cinegamma 1 at 3200 ISO with version 3 you would be adding 12dB of gain and the pictures would be approx 4x noisier than at the base ISO.

With version 4, 3200 ISO with Cinegamma 1 is an extra 14dB of gain and you will have pictures approx 5 times noisier than at the base ISO.

Having said that, because 0dB in version 4 is now a little less noisy than in version 3, 3200 ISO in V3 looks quite similar to 3200 ISO in version 4 even though you are adding a bit more gain.
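
For anyone who wants to check the maths, the added gain is just the ratio of the ISO to the base ISO expressed in dB. A quick illustrative sketch using the Cinegamma 1 base ISOs from the table above:

```python
from math import log10

def added_gain_db(iso, base_iso):
    """Gain added above base sensitivity: +6dB = one stop = roughly 2x noise."""
    return 20 * log10(iso / base_iso)

# Cinegamma 1 at 3200 ISO, version 3 vs version 4 base ISOs
for firmware, base in (("v3", 800), ("v4", 640)):
    db = added_gain_db(3200, base)
    print(f"{firmware}: base {base} ISO -> +{db:.1f}dB, ~{3200 / base:.0f}x noisier")

# v3: base 800 ISO -> +12.0dB, ~4x noisier
# v4: base 640 ISO -> +14.0dB, ~5x noisier
```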

Want to shoot direct to HDR with the PXW-FS7, PMW-F5 and F55?

Sony will be releasing an update for the firmware in the Sony PXW-FS5 in the next few days. This update, amongst other things, will allow users of the FS5 to shoot HDR directly using the Hybrid Log Gamma HDR gamma curve and Rec-2020 color. By doing this you eliminate the need to grade your footage and could plug the camera directly into a compatible HDR TV (the TV must support HLG) and see an HDR image directly on the screen.

But what about FS7 and F5/F55 owners? Well, for most HDR productions I still believe the best workflow is to shoot in S-Log3 and then to grade the footage to HDR. However there may be times when you need that direct HDR output. So for the FS7, F5 and F55 I have created a set of Hybrid Log Gamma LUTs that you can use to bake in HLG and Rec-2020 while you shoot. This gives you the same capabilities as the FS5 (with the exception of the ability to add HLG metadata to the HDMI output).

For a video explanation of the process please follow the link to my new Patreon page where you will find the video and the downloadable LUTs.

Thinking about frame rates.

Once upon a time it was really simple. We made TV programmes and videos that would only ever be seen on TV screens. If you lived and worked in a PAL area you would produce programmes at 25fps. If you lived in an NTSC area, most likely 30fps. But today it’s not that simple. For a start the internet allows us to distribute our content globally, across borders. In addition, PAL and NTSC only really apply to standard definition television, as they describe the way the SD signal is broadcast, with a PAL frame being larger than an NTSC one and both using non-square pixels. With HD, PAL and NTSC do not exist: both 50Hz and 60Hz regions use 1280×720 or 1920×1080 with square pixels, and the only difference between HD in a 50Hz country and a 60Hz country is the frame rate.

Today with HD we have many different frame rates to choose from. For film like motion we can use 23.98fps or 24fps. For fluid smooth motion we can use 50fps or 60fps. In between sits the familiar 25fps and 30fps (29.97fps) frame rates. Then there is also the option of using interlace or progressive scan. Which do you choose?

If you are producing a show for a broadcaster then normally the broadcaster will tell you which frame rate they need. But what about the rest of us?

There is no single “right” frame rate to use. A lot will depend on your particular application, but there are some things worth considering.

If you are producing content that will be viewed via the internet then you probably want to steer clear of interlace. Most modern TVs and all computer monitors use progressive scan, and the motion in interlaced content does not look good on progressive TVs and monitors. In addition most computer monitors run by default at 60Hz. If you show content shot at 25fps or 50fps on a 60Hz monitor it will stutter slightly as the computer repeats an uneven number of frames to make 25fps fit into 60Hz. So you might want to think about shooting at 30fps or 60fps for smoother, less stuttery motion.

24fps or 23.98fps will also stutter slightly on a 60Hz computer screen, but the stutter is very even as every second frame gets shown one extra time. This is very similar to the 2:3 pulldown that gets added to 24fps movies when shown on 30fps television, so it’s a kind of motion that many viewers are used to seeing anyway. Because it’s a regular stutter pattern it tends to be less noticeable than the irregular conversion from 25fps to 60Hz; 25 just doesn’t fit into 60 in a nice even manner. Which brings me to another consideration – if you are looking for a one-size-fits-all standard then 24 or 23.98fps might be a wise choice. It works reasonably well via the internet on 60Hz monitors. It can easily be converted to 30fps (29.97fps) using pulldown for television, and it’s not too difficult to convert to 25fps simply by speeding it up by 4% (many feature films are shown in 25fps countries simply by being sped up, with a pitch shift applied to the audio).
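
The 4% figure is easy to verify with a quick illustrative calculation:

```python
from math import log2

speed_factor = 25 / 24                 # ~1.042, i.e. roughly a 4% speed-up
pitch_shift = 12 * log2(speed_factor)  # semitones the audio rises if left unshifted

print(f"speed-up: {100 * (speed_factor - 1):.1f}%")  # 4.2%
print(f"pitch rise: {pitch_shift:.2f} semitones")    # 0.71
```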

So, even if you live and work in a 25fps (PAL) area, depending on how your content will be distributed you might actually want to consider 24, 30 or 60fps for your productions. 25fps or 50fps looks great on a 50Hz TV, but with the majority of non-broadcast content being viewed on computers, laptops and tablets, 24/30/60fps may be a better choice.

What about the “film look”? Well I think it’s obvious to say that 24p or 23.98p will be as close as you can get to the typical cadence and motion seen in most movies. But 25p also looks more or less the same. Even 30p has a hint of the judder that we see in a 24p movie, though 30p is a little smoother. 50p and 60p will give very smooth motion, so if you shoot sports or fast action and you want it to be smooth you may need to use 50/60p. But 50/60p files will be twice the size of 24/25/30p files in most cases, so storage and streaming bandwidth have to be considered. It’s much easier to stream 24p than 60p.

For almost all of the things that I do I shoot at 23.98p, even though I live in a 50Hz country. I find this gives me the best overall compatibility. It also means I have the smallest file sizes and the clips will normally stream pretty well. One day I will probably need to consider shooting everything at 60fps, but that seems some way off for now; HDR and higher resolutions seem to be what people want right now rather than higher frame rates.

What’s the difference between raw and S-Log ProRes – Re: FS5 raw output.

This is a question that comes up a lot.

Raw is the unprocessed (or minimally processed) data direct from the sensor. It is just the brightness value for each of the pixels; it is not a color image, but we know which color filter sits above each pixel, so we can work out the color later. In the computer you take that raw data and convert it into a conventional color video signal, defining the gamma curve and colorspace at that point. This gives you the freedom to choose the gamma and colorspace after the shoot and retains as much of the original sensor information as possible. Of course the captured dynamic and color range is determined by the capabilities of the sensor and we can’t magically get more than the sensor can “see”. The quality of the final image is also dependent on the quality of the debayer process in the computer, but as you have the raw data you can always go back and re-encode the footage with a better quality encoder at a later date. Raw can be compressed or uncompressed. Sony’s 12 bit FS-raw when recorded on an Odyssey or Atomos recorder is normally uncompressed, so there are no additional artefacts from compression, but the files are large. The 16 bit raw from a Sony F5 or F55 when recorded on an R5 or R7 is made about 3x smaller through a proprietary algorithm.
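
To make the idea concrete, here is a toy bilinear debayer in Python. This is not any camera's or recorder's actual algorithm (real debayer engines are far more sophisticated), just the simplest possible illustration of recovering three color channels from single-channel raw data, assuming an RGGB filter pattern:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Toy bilinear debayer for an RGGB Bayer mosaic (2D array of sensor
    brightness values). Each output channel is filled in by averaging the
    nearest pixels that actually sit under that color of filter."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red filter sites
    masks[0::2, 1::2, 1] = True   # green filter sites (even rows)
    masks[1::2, 0::2, 1] = True   # green filter sites (odd rows)
    masks[1::2, 1::2, 2] = True   # blue filter sites
    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for c in range(3):
        samples = np.where(masks[..., c], raw, 0.0)
        counts = convolve2d(masks[..., c].astype(float), kernel, mode="same")
        rgb[..., c] = convolve2d(samples, kernel, mode="same") / np.maximum(counts, 1)
    return rgb
```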

ProRes is a conventional compressed color video format. So a ProRes file will already have a pre-determined gamma curve and color space; these are set in the camera through a picture profile, scene file or other similar settings at the time of shooting. The quality of the ProRes file is dependent on the quality of the encoder in the camera or recorder at the time of recording, so there is no way to go back and improve on this or change the gamma/colorspace later. In addition ProRes, like most commonly used codecs, is a lossy compressed format, so some (minimal) picture information may be lost in the encoding process and artefacts (again minimal) are added to the image. These cannot easily be removed later, although they should not normally present any serious problems.

It’s important to understand that there are many different types of raw and many different types of ProRes and not all are equal. The FS-raw from the FS5/FS7 is 12 bit linear and 12 bit’s are not really enough for the best possible quality from a 14 stop camera (there are not enough code values so floating point math and/or data rounding has to take place and this effects the shadows and low key areas of the image). You really need 16 bit data for 14 stops of dynamic range with linear raw, so if you are really serious about raw you may want to consider a Sony F5 or F55. ProRes is a pretty decent codec, especially if you use ProResHQ and 10 bit log approaches the quality of 12 bit linear raw but without the huge file sizes.  Incidentally there is very little to be gained by going to ProRes 444 when recording the 12 bit raw from an FS5/FS7, you’ll just have bigger files and less record time.

Taking the 12 bit raw from an FS5 and converting it to ProRes in an external recorder has potential problems of its own. The quality of the final file will be dependent on the quality of the debayer and encoding process in the recorder, so there may be differences in the end result from different recorders. In addition you have to add a gamma curve at this point, so you must be careful to choose the correct gamma curve to minimise concatenation, where you add the imperfections of 12 bit linear to the imperfections of the 10 bit encoded file (S-Log2 appears to be the best fit to Sony’s 12 bit linear raw).

Despite the limitations of 12 bit linear, it is normally a noticeable improvement over the FS5’s 8 bit internal UHD recordings, but less of a step up from the 10 bit XAVC that an FS7 can record internally. What it won’t do is allow you to capture anything extra. It won’t improve the dynamic range, won’t give you more color and won’t enhance the low light performance (if anything there will be a slight increase in shadow noise and it may be slightly inferior on under exposed shots). You will have the same dynamic and color range, but recorded with more “bits” (code values to be precise). Linear raw excels at capturing highlight information, and what you will find is that compared to log there will be more texture in the highlights and brighter parts of your captured scenes. This will become more and more important as HDR screens become better able to show highlights correctly. Current standard dynamic range displays don’t show highlights well, so often the extra highlight data in raw is of little benefit over log. But that’s going to change in the next few years, so linear recording with its extra highlight information will become more and more important.

Will a bigger recording Gamut give me more picture information?

The short answer is that it all depends on the camera you are using. With the F55 or F65, S-Log2/S-Gamut and S-Log3/S-Gamut3 will give you a larger range of colours in your final image than S-Log3/S-Gamut3.cine. But if you have a PMW-F5, PXW-FS7 or PXW-FS5 this is not going to be the case.

What is Gamut?

The word gamut means the complete range or scale of something. So when we talk about gamut in a video camera we are talking about dynamic range and color range (colorspace) taken together. Within the gamut we can break that down into the dynamic range or brightness range, which is determined by the gamma curve, and the color range, which is determined by the colorspace.

Looking at the current Sony digital cinema cameras you have a choice of 3 different gamuts when the camera is in log mode, plus a number of conventional gamuts you get when shooting Rec-709, Rec-2020 or any other combination of Rec-709 color with cinegammas or hypergammas.

Log gamma and gamuts.

But it’s in the log mode where there is much confusion. When shooting with log with the current cameras you have 3 recommended combinations.

S-Gamut (S-Gamut colorspace + S-log2 gamma).

S-Gamut3 (S-Gamut3 colorspace + S-Log3 gamma).

S-Gamut3.cine (S-Gamut3.cine colorspace + S-Log3 gamma).

The S-Log2 and S-Log3 gamma curves both capture the same dynamic range – 14 stops. There is no difference in the dynamic range captured.

In terms of the range of colors that can be recorded, S-Gamut and S-Gamut3 are the same size and are the largest recording colorspaces the cameras have. S-Gamut3.cine is a smaller colorspace but still larger than P3 (digital cinema projection) or Rec-709.

But those gamuts were all designed for the F55 and F65, cameras that have extremely high quality (expensive) colour filters on their sensors. The reality is that the F5/FS7/FS5 sensor cannot see the full range of any of the S-Gamut colorspaces, so in reality you gain very little by using the larger versions. Don’t expect to see a noticeably greater range of colours than any of the other colour modes if you have the F5/FS7/FS5. But all the LUTs designed for these cameras are based on the S-Gamuts, and if you want to mix an FS5 with an F55 in one production it helps to use the same settings so that grading will be easier. It is worth noting at this point that most natural colors do fall within Rec-709, so while it is always nicer to have a bigger color range, it isn’t the end of the world for most of what we shoot.

S-Log3 is a great example of what it means to have a bigger recording range than the camera can “see”. S-Log3 is based on the Cineon film transfer log gamma curve developed back in the late 1980s. Cineon was carefully tailored to match film response and designed around 10 bit data (as that was state of the art back then). It allows for around 16 stops of dynamic range. Much later, Arri and many others adapted Cineon for use in video cameras – the “C” in Arri’s LogC stands for Cineon.

When Sony started making wide dynamic range cameras they developed their own log gammas, starting with S-Log, then S-Log2. These curves are matched very precisely to the way a video sensor captures a scene, rather than film. In addition they are matched to the sensor’s actual capture range: S-Log records 13 stops as that’s what the sensors in the cameras with S-Log can see, and S-Log2 records 14 stops as the second generation cameras can all see 14 stops. Because it was purpose designed for a video sensor, S-Log2 makes use of the entire recording range: the sensor’s range is matched to the log curve, which is matched to the recording range.

But these curves drew much criticism from early adopters and colorists because they were very different from the Cineon curve and all the other log curves based on this old school film curve. Colorists didn’t like them because none of their old Cineon LUTs would work as expected and they were “different”.

Chart showing S-Log2 and S-Log3 plotted against f-stops and code values. Note how little data there is for each of the darker stops; the best data is above middle grey. Note that current sensors only go to +6 stops over middle grey, so S-Log2 and S-Log3 record to different peak levels.

In response to this Sony developed S-Log3 and, surprise, surprise – S-Log3 is based on Cineon. So S-Log3 is based on a 16 stop film transfer curve, but the current cameras can only see 14 stops. What this means is that the top 14% of the gamma curve is never used (that’s where stops 15 and 16 would reside), and as a result S-Log3 tops out at around 92% and never gets to the 107% that S-Log2 can reach. If Sony were to release a 16 stop camera then S-Log3 could still be used, and then it would reach 107%.
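
Sony publishes the S-Log3 formula, so the numbers are easy to check with a quick sketch (the exact percentages vary by a point or two depending on how the levels are measured):

```python
from math import log10

def slog3(reflectance):
    """Sony's published S-Log3 curve: scene reflectance (0.18 = middle grey)
    to a normalized 10 bit code value (0..1)."""
    if reflectance >= 0.01125:
        return (420.0 + log10((reflectance + 0.01) / 0.19) * 261.5) / 1023.0
    return (reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

def video_percent(norm_cv):
    """Normalized 10 bit code value to video level (64 = 0%, 940 = 100%)."""
    return 100 * (norm_cv * 1023 - 64) / (940 - 64)

for stops in (0, 6):
    level = video_percent(slog3(0.18 * 2 ** stops))
    print(f"middle grey {stops:+d} stops -> {level:.0f}%")

# Middle grey lands around 41%; +6 stops lands in the low 90s, far below
# the level the full 16 stop Cineon-style curve would reach.
```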

Coming back to colorspace: if you understand that the sensor in the F5/FS7/FS5 cannot see the full colour range that S-Gamut or S-Gamut3 can record, then you will appreciate that, just as with S-Log3 (which is larger than the camera can see and therefore partly empty), many of the possible code values available in S-Gamut are left empty. This is a waste of data. So from a colorspace point of view the best match when shooting log with these cameras is the slightly smaller colorspace, S-Gamut3.cine. But S-Gamut3.cine is meant to be matched with S-Log3, which as we have seen wastes data anyway. If the camera is shooting with a 10 bit codec such as XAVC-I, or XAVC-L in HD, there are plenty of code values to play with, so a small loss of data has little impact on the final image. But if you are recording with only 8 bit data, for example XAVC-L in UHD, this becomes much more of a problem, and this is when you will find that S-Gamut with S-Log2 gives a better result, as S-Log2 was designed for use with a video sensor from day one and it maximises the use of what little data you have.