Category Archives: PMW-F55

Aliasing when shooting 2K with a 4K sensor (FS700, PMW-F5, PMW-F55 and others).

There is a lot of confusion and misunderstanding about aliasing and moiré, so I’ve thrown together this article to try and explain what’s going on and what you can (or can’t) do about it.

Sony’s FS700, F5 and F55 cameras all have 4K sensors. They also have the ability to shoot 4K raw as well as 2K raw when using Sony’s R5 raw recorder. The FS700 will also be able to shoot 2K raw to the Convergent Design Odyssey. At 4K these cameras have near zero aliasing, at 2K there is the risk of seeing noticeable amounts of aliasing.

One key concept to understand from the outset is that when you are working with raw, the signal out of the camera comes more or less directly from the sensor. When shooting non-raw, the output is derived from the full sensor plus a lot of extra, very complex signal processing.

First of all let’s look at what aliasing is and what causes it.

Aliasing shows up in images in different ways. One common effect is a rainbow of colours across a fine repeating pattern; this is called moiré. Another artefact is lines and edges that are just a little off horizontal or vertical appearing to have stepped or jagged edges, sometimes referred to as “jaggies”.

But what causes this and why is there an issue at 2K but not at 4K with these cameras?

Let’s imagine we are going to shoot a test pattern that looks like this:

[Image: Test pattern, checked shirt or other similar repeating pattern.]

And let’s assume we are using a Bayer sensor such as the one in the FS700, F5 or F55, which has a pixel arrangement like this (although it’s worth noting that aliasing can occur with any type of sensor pattern, or even a 3-chip design):

[Image: Sensor with Bayer pattern.]

Now let’s see what happens if we shoot our test pattern so that the stripes of the pattern line up with the pixels on the sensor. The top of the graphic below represents the pattern or image being filmed, the middle is the sensor pixels and the bottom is the output from the green pixels of the sensor:

[Image: Test pattern aligned with the sensor pixels.]

As we can see, each green pixel “sees” either a white line or a black line, so the output is a series of black and white lines. Everything looks just fine… or does it? What happens if we move the test pattern, or move the camera just a little bit? What if the camera is just a tiny bit wobbly on the tripod? What if this isn’t a test pattern but a striped or checked shirt, and the person we are shooting is moving around? In the image below the pattern we are filming has been shifted to the left by a small amount.

[Image: Test pattern misaligned with the pixels.]

Now look at the output: it’s nothing but grey, the black and white pattern has gone. Why? Simply because each green pixel is now seeing half of a white bar and half of a black bar. Half white plus half black equals grey, so every pixel sees grey. If we were to slowly pan the camera across this pattern, the output would alternate between black and white lines when the bars and pixels line up, and grey when they don’t. This is aliasing at work. Imagine the shot is of a person in a checked shirt: as the person moves about, the shirt will alternate between being patterned and being grey. As the shirt will not be perfectly flat and even, different parts of it will go in and out of sync with the pixels, so some parts will be grey and some patterned; it will look blotchy. A similar thing happens with colours: the red and blue pixels will sometimes see the pattern and sometimes not, so the colours will flicker and produce strange patterns. This is the moiré that can look like a rainbow of colours.
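
For anyone who likes to experiment, here is a minimal Python sketch of exactly this effect (toy numbers, nothing camera-specific): each simulated pixel just averages the light falling across its width, and sliding the pattern by half a pixel turns full-contrast bars into flat grey.

```python
# Toy 1D sensor: a black/white bar pattern sampled by pixels that average
# the light falling on them. All values are illustrative assumptions.
import numpy as np

SUB = 100                                  # fine scene samples per sensor pixel
n_pixels = 8

def sensor_read(scene, n):
    # Each pixel averages the scene samples inside its footprint
    return scene.reshape(n, -1).mean(axis=1)

x = np.arange(n_pixels * SUB)
stripes = ((x // SUB) % 2).astype(float)   # one black or white bar per pixel

print(sensor_read(stripes, n_pixels))                     # [0. 1. 0. 1. ...] full contrast
print(sensor_read(np.roll(stripes, SUB // 2), n_pixels))  # [0.5 0.5 ...] all grey
```

Pan the toy pattern slowly and the output cycles between those two states, which is exactly the flickering described above.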

So what can be done to stop this?

Well, what’s done in the majority of professional level cameras is to add a special filter in front of the sensor called an optical low pass filter (OLPF). This filter works by significantly reducing the contrast of the image falling on the sensor at resolutions approaching the pixel resolution, so that the scenario above cannot occur. Basically, the image falling on the sensor is blurred a little so that the pixels can never see only black or only white. This way we won’t get flickering between black & white and grey whenever there is any movement. The downside is that some contrast and resolution is lost, but this is better than having flickery jaggies and rainbow moiré. In effect the OLPF is a type of de-focusing filter (for the techies out there, it is usually something called a birefringent filter). The design of the OLPF is a trade-off between how much aliasing is acceptable and how much resolution loss is acceptable. The OLPF cut-off isn’t instant; it’s a sharp but gradual cut-off that starts somewhere lower than the sensor resolution, so there is some undesirable but unavoidable contrast loss on fine details. The OLPF will be optimised for a specific pixel size and thus image resolution, but it’s a compromise. In a 4K camera the OLPF will start reducing the resolution/contrast before it gets to 4K.
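
Here is the same toy sensor with a crude model of that birefringent behaviour bolted on (an illustrative approximation of the principle, not the actual filter design): the scene is split into two copies offset by one pixel pitch and averaged before the pixels see it.

```python
# Toy OLPF: average two copies of the scene offset by one pixel pitch in total,
# which is roughly what a two-point birefringent plate does. Contrast falls to
# zero for detail at a 2-pixel period, so that detail can no longer alias.
import numpy as np

SUB = 100                                       # fine scene samples per pixel
n_pixels = 8
x = np.arange(n_pixels * SUB)
stripes = ((x // SUB) % 2).astype(float)        # 2-pixel-period bars (the danger case)
coarse = ((x // (2 * SUB)) % 2).astype(float)   # 4-pixel-period bars (coarser detail)

def sensor_read(scene, n):
    return scene.reshape(n, -1).mean(axis=1)

def olpf(scene, pitch):
    return 0.5 * (np.roll(scene, pitch // 2) + np.roll(scene, -(pitch // 2)))

fine = olpf(stripes, SUB)
print(sensor_read(fine, n_pixels))                     # [0.5 0.5 ...] wiped to grey...
print(sensor_read(np.roll(fine, SUB // 2), n_pixels))  # ...and stays grey: no flicker
print(sensor_read(olpf(coarse, SUB), n_pixels))        # [0.25 0.25 0.75 0.75 ...] survives
```

The dangerous fine pattern is reduced to stable grey whatever its alignment, while the coarser pattern keeps usable (if reduced) contrast: that is the resolution-for-stability trade-off described above.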

(As an aside, this is one of the reasons why shooting with a 4K camera can result in better HD: because the OLPF in an HD camera cuts contrast as we approach HD resolution, the HD is never quite as sharp and contrasty as it could be. Shoot at 4K and down-convert, and you can get sharper, higher contrast HD.)

So that’s how we prevent aliasing, but what’s that got to do with 2K on the FS700, F5 and F55?

Well, the problem is this: when shooting 2K raw, or in the high speed raw modes, Sony are reading out the sensor in a way that creates a larger “virtual” pixel. This almost certainly has to be done in the high speed modes to reduce the amount of data that needs to be transferred from the sensor into the camera’s processing and recording circuits at high frame rates. I don’t know exactly how Sony are doing this, but it might be something like my sketch below:

[Image: Using adjacent pixels to create larger virtual pixels.]

So instead of reading individual pixels, Sony are probably reading groups of pixels together to create a 2K Bayer image. This creates larger virtual pixels and in effect turns the sensor into a 2K sensor. It is probably done on the sensor during the read-out process (possibly simply by addressing 4 pixels at the same time instead of just one), and this makes high speed continuous shooting possible without overheating or overload, as there is far less data to read out.
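
As a rough illustration (my guess at the sort of scheme involved, not anything Sony have published), here is a Python sketch that bins a Bayer mosaic down to half resolution by averaging each photosite with its three nearest same-colour neighbours, preserving the Bayer layout:

```python
# Bin a Bayer mosaic to half resolution while keeping the RGGB layout.
# Same-colour photosites sit on a 2-pixel grid, so each output site is the
# average of 4 same-colour sites from a 4x4 block of the original mosaic.
import numpy as np

def bin_bayer_2x(mosaic):
    h, w = mosaic.shape
    out = np.empty((h // 2, w // 2))
    for dy in (0, 1):                        # position within the 2x2 Bayer cell
        for dx in (0, 1):
            sites = mosaic[dy::2, dx::2]     # every site of one colour
            out[dy::2, dx::2] = (sites[0::2, 0::2] + sites[0::2, 1::2] +
                                 sites[1::2, 0::2] + sites[1::2, 1::2]) / 4.0
    return out

mosaic_4k = np.random.rand(8, 8)          # stand-in for a tiny patch of sensor data
print(bin_bayer_2x(mosaic_4k).shape)      # (4, 4): same Bayer pattern, half resolution
```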

But now the standard OLPF, which is designed around the small 4K pixels, isn’t really doing anything, because the new “virtual” pixels are much larger than the original 4K pixels the OLPF was designed around. The standard OLPF cuts off at 4K, so it has no effect at 2K, and a 2K resolution pattern can fall directly on our 2K virtual Bayer pixels and produce aliasing. (There’s a clue in the filter name: optical LOW PASS filter, so it will PASS any signals that are LOWer than the cut-off. If the cut-off is 4K then 2K detail is passed, as this is lower than 4K, but as the sensor is now in effect a 2K sensor, we need a 2K cut-off.)

On the FS700 there isn’t (at the moment at least) a great deal you can do about this. But on the F5 and F55 cameras Sony have made the OLPF replaceable. By loosening one screw the 4K OLPF can be swapped for a 2K OLPF in just a few seconds. The 2K OLPF will control aliasing at 2K and high speed, and in addition it can be used if you want a softer look at 4K: the contrast/resolution reduction the filter introduces will give you a softer, “creamier” look which might be nice for cosmetic, fashion, period drama or other similar shoots.

[Image: Replacing the OLPF on a Sony PMW-F5 or PMW-F55. Very simple.]

FS700 owners wanting to shoot 2K raw will have to look at adding a little diffusion to their lenses. Perhaps a low contrast filter will help, or a net or stocking over the front of the lens to add some diffusion and slightly soften the image. Maybe someone will bring out an OLPF that can be fitted between the lens and camera body, but for the time being, to prevent aliasing on the FS700 you need to soften, defocus or blur the image a little so the camera can’t resolve detail above 2K. Using a soft lens, or very slightly de-focussing the image, may also work.

But why don’t I get aliasing when I shoot HD?

Well, all these cameras use the full 4K sensor when shooting HD. All the pixels are used and read out as individual pixels. This 4K signal is then de-bayered into a conventional 4K video signal (NOT Bayer). This 4K (non-raw) video will not have any significant aliasing, as the OLPF is 4K and the derived video is 4K. Then this conventional video signal is electronically down-converted to HD, and during the down-conversion an electronic low pass filter is used to prevent aliasing and moiré in the newly created HD video. You can’t do this with raw sensor data, as raw comes BEFORE processing, directly from the sensor pixels, but you can do it with conventional video, as the HD is derived from a fully processed 4K video signal.
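
Here is a quick Python sketch of the principle (assuming a simple divide-by-two downscale and a toy filter kernel rather than Sony’s actual down-converter): decimating without filtering lets fine 4K detail fold back as a bogus coarse pattern, while filtering first removes it.

```python
# 1D down-conversion demo: detail near the 4K Nyquist limit must be filtered
# out before decimation or it aliases into the HD image.
import numpy as np

line_4k = np.sin(2 * np.pi * 0.45 * np.arange(64))   # fine detail, 0.45 cycles/sample

naive = line_4k[::2]   # decimate only: 0.45 becomes 0.9 cycles/sample, which
                       # folds back to a bogus 0.1 cycles/sample pattern

kernel = np.array([0.25, 0.5, 0.25])                 # crude electronic low-pass
filtered = np.convolve(line_4k, kernel, mode="same")[::2]

print(np.abs(naive).max())      # ~1.0: strong bogus pattern in the "HD"
print(np.abs(filtered).max())   # close to zero: detail removed before decimation
```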

I hope you understand this. It’s not an easy concept to understand and even harder to explain. I’d love your comments, especially anything that can add clarity to my explanation.

UPDATE: It has been pointed out that it should be possible to take the 4K Bayer data and, using an image processor, produce a 2K anti-aliased raw signal.

The problem is that, yes, in theory you can take the 4K signal from a Bayer sensor into an image processor and from that create an anti-aliased 2K Bayer signal. But the processing power needed to do this is considerable, as we are looking at taking 16 bit linear sensor data and converting it to new 16 bit linear data. That means using a DSP with a massive bit depth and a big enough overhead to handle 16 bit in and 16 bit out; so as a minimum an extremely fast 24 bit DSP, or possibly a 32 bit DSP, working with 4K data in real time. This would have a big heat and power penalty and I suspect is completely impractical in a compact video camera like the F5/F55. This is rack-mount, high power workstation territory at the moment. Anyone that’s edited 4K raw will know how processor intensive it is to manipulate 16 bit 4K data.

When shooting HD you’re taking 4K 16 bit linear sensor data but only generating 10 bit HD log or gamma encoded data, so the processing overhead is tiny in comparison and a 16 bit DSP can quite happily handle this. In fact you could use a 14 bit DSP by discarding the two LSBs, as these would have little effect on the final 10 bit HD.

For the high speed modes I doubt there is any way to read out every pixel from the sensor continuously without something overheating. It’s probably not the sensor that would overheat but the DSP. The FS700 can actually do 120fps at 4K for 4 seconds, so clearly the sensor can be read in its entirety at 4K and 120fps, but my guess is that something gets too hot for extended shooting times.
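
Some back-of-envelope arithmetic in Python (assumed figures: a 4096x2160 photosite read-out with 16 bit samples) shows the scale of the problem and how much a 2K read-out helps:

```python
# Raw sensor read-out rates in gigabits per second (illustrative numbers only).
def sensor_rate_gbps(width, height, bits, fps):
    return width * height * bits * fps / 1e9

print(sensor_rate_gbps(4096, 2160, 16, 120))  # ~17 Gbit/s: full 4K at 120fps
print(sensor_rate_gbps(2048, 1080, 16, 240))  # ~8.5 Gbit/s: 2K even at 240fps
```

Binning to 2K quarters the pixel count, so even doubling the frame rate still halves the data the DSP has to swallow.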

So this means that somehow the data throughput has to be reduced. The easiest way to do this is to read adjacent pixel groups, which can be done during the readout by addressing multiple pixels together (binning). If done as I suggest in my article this adds a small degree of anti-aliasing. Sony have stated many times that they do not pixel or line skip and that every pixel is used at 2K. Either way, whether you line skip or pixel bin, the effective pixel pitch increases, so you must also change the OLPF to match.


Correct exposure levels with Sony Hypergammas and Cinegammas.

When an engineer designs a gamma curve for a camera, he/she will be looking to achieve certain things. With Sony’s Hypergammas and Cinegammas one of the key aims is to capture a greater dynamic range than is possible with normal gamma curves, as well as providing a pleasing highlight roll-off that looks less electronic and more natural or film-like.

[Image: Recording a greater dynamic range into the same sized bucket.]

To achieve these things, though, sometimes compromises have to be made. The problem is that our recording “bucket”, where we store our picture information, is the same size whether we are using a standard gamma or an advanced gamma curve. If you want to squeeze more range into that same sized bucket then you need to use some form of compression. Compression almost always requires that you throw away some of your picture information, and Hypergammas and Cinegammas are no different: to get the extra dynamic range, the highlights are compressed.

[Image: Compression point with Hypergamma/Cinegamma.]

To get a greater dynamic range than standard gammas normally provide, the compression has to be more aggressive and start earlier. The earlier (less bright) point at which the highlight compression starts means you really need to watch your exposure. It’s ironic, but although you have a greater dynamic range, i.e. the range between the darkest shadows and the brightest highlights that the camera can record is greater, your exposure latitude is actually smaller. Getting your exposure just right with Hypergammas and Cinegammas is very important, especially with faces and skin tones. If you overexpose a face when using these advanced gammas (and S-Log and S-Log2 are the same), you start to place those all-important skin tones in the compressed part of the gamma curve. It might not be obvious in your footage, it might look OK, but it won’t look as good as it should and it might be hard to grade. It’s often not until you compare a correctly exposed shot with a slightly over-exposed shot that you see how the skin tones are being flattened out by the gamma compression.
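
Here’s a toy illustration in Python of that flattening (the curve numbers are invented for illustration, this is not Sony’s actual Hypergamma math): once a tone crosses the compression point, a large change in scene brightness produces only a tiny change in the recorded level.

```python
# Toy "hypergamma-like" curve: linear below an invented compression point,
# heavily compressed above it.
def toy_hypergamma(level, start=0.65, slope=0.25):
    if level <= start:
        return level
    return start + (level - start) * slope

for scene in (0.50, 0.60, 0.70, 0.80, 0.90):
    print(f"scene {scene:.2f} -> recorded {toy_hypergamma(scene):.3f}")
# 0.50 -> 0.500 and 0.60 -> 0.600: full tonal separation in the mids
# 0.70 -> 0.663 and 0.80 -> 0.688: overexposed tones squashed together
```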

But what exactly is the correct exposure level? Well, I have always exposed Hypergammas and Cinegammas about a half to one stop under where I would expose with a conventional gamma curve. So if faces are sitting around 70% with a standard gamma, then with HG/CG I expose that same face at 60%. This has worked well for me, although sometimes the footage might need a slight brightness or contrast tweak in post to get the very best results. On the F5 and F55 cameras Sony present some extra information about the gamma curves: Hypergamma 3 is described as HG3 3259G40 and Hypergamma 4 as HG4 4609G33.
What do these numbers mean? Let’s look at HG3 3259G40.
The first three numbers, 325, give the dynamic range as a percentage of a standard gamma curve, so in this case we can capture 325% of the range of a standard gamma, roughly 1.7 stops more dynamic range. The 4th number, which is either a 0 or a 9, is the maximum recording level: 0 being 100% and 9 being 109%. By the way, 109% is fine for digital broadcasting and equates to code value 255 in an 8 bit codec; 100% may be necessary for some analogue broadcasters. Finally, the last part, G40, is where middle grey is supposed to sit. With a standard gamma, if you point the camera at a grey card and expose correctly, middle grey will be around 45%. So as you can see, these Hypergammas are designed to be exposed a little darker. Why? Simple: to keep skin tones away from the compressed part of the curve.

Here are the numbers for the 4 primary Sony Hypergammas:

HG1 3250G36, HG2 4600G30, HG3 3259G40, HG4 4609G33.

Cinegamma 1 is the same as Hypergamma 4 and Cinegamma 2 is the same as Hypergamma 2.
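
If it helps, here is a small Python snippet that decodes these designations using the scheme described above:

```python
# Decode a Sony hypergamma designation, e.g. "HG3 3259G40":
# first three digits = dynamic range %, fourth digit = peak level (0 = 100%,
# 9 = 109%), and the number after G = middle grey level %.
def parse_hypergamma(code):
    name, spec = code.split()
    digits, grey = spec.split("G")
    return {
        "name": name,
        "dynamic_range_pct": int(digits[:3]),
        "peak_level_pct": 109 if digits[3] == "9" else 100,
        "middle_grey_pct": int(grey),
    }

for code in ("HG1 3250G36", "HG2 4600G30", "HG3 3259G40", "HG4 4609G33"):
    print(parse_hypergamma(code))
```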

All of the Hypergammas and Cinegammas are designed to be exposed a little lower than with a standard gamma.

Exposure levels using EI ISO and zebras with the PMW-F5 and raw.

The PMW-F5 and F55 are fantastic cameras. If you have the AXS-R5 raw recorder the dynamic range is amazing. In addition, because there is no gamma applied to the raw material, you can be very free with where you set middle grey. Really, the key to getting good raw is simply not to overexpose the highlights: provided nothing is clipped, it should grade well. One issue though is that there is no way to show 14 stops of dynamic range in a pleasing way with current display or viewfinder technologies, and at the moment the only exposure tool built in to the F5/F55 cameras is zebras.

My experience over many shoots with the camera is that if you set zebras to 100%, don’t use a LUT (so you’re monitoring using S-Log2) and expose so that you’re just starting to see zebra 2 (100%) on your highlights, you will in most cases have 2 stops or more of overexposure headroom in the raw material. That’s fine and quite usable, but shoot like this and the viewfinder images will look very flat and, in most cases, overexposed. The problem is that S-Log2’s designed white point is only 59% and middle grey is 32%. If you’re exposing so your highlights are at 100%, then white is likely to be much higher than the designed level, which also means middle grey and your entire mid range will be excessively high. This pushes those mids into the more compressed part of the curve, squashing them all together and making the scene look extremely flat. It also has an impact on the ability to focus correctly, as best focus is less obvious with a low contrast image. As a result of the overexposed look it’s often tempting to stop down a little, but that wastes a lot of the available raw data.

So, what can you do? Well, you can add a LUT. The F5 and F55 have 3 LUTs available, based either on REC709 (P1) or Hypergamma (P2 and P3). These add more contrast to the VF image, but they show considerably less dynamic range than S-Log2. My experience with these LUTs is that on every shoot I have done so far, most of my raw material has typically had at least 3 stops of unused headroom. Now, I could simply overexpose a little to make better use of that headroom, but I hate looking into the viewfinder and seeing an overexposed image.

Why is it so important to use that extra range? Because if you record at a higher level the signal to noise ratio is better, and after grading you will have less noise in the finished production.
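
The arithmetic behind that claim is simple, as this Python snippet shows (simplified: it assumes the noise floor stays roughly constant while the signal scales with exposure):

```python
import math

# Raising exposure by N stops multiplies the signal by 2**N; with a roughly
# constant noise floor, the signal to noise ratio improves by the same factor.
def snr_gain_db(stops):
    return 20 * math.log10(2 ** stops)

print(snr_gain_db(1.5))   # ~9 dB cleaner from recording 1.5 stops brighter
```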

Firmware release 1.13 added a new feature to the F5 and F55: EI Gain. EI or Exposure Index gain allows you to change the ISO of the LUT output. It has NO effect on the raw recordings, it ONLY affects the Look Up Tables. So if you have the LUTs turned on, you can now reduce the gain on the viewfinder and HDSDI outputs as well as the SxS recordings (see this post for more on the EI gain). By using EI gain and an ISO lower than the camera’s native ISO I can reduce the brightness of the view in the viewfinder. In addition, the zebras measure the signal AFTER the application of the LUT and EI gain. So if you expose using a LUT with zebra 2 just showing on your highlights, then turn on the EI gain, set it to 800 on an F5 (native 2000 ISO) or 640 on an F55 (native 1250 ISO) and adjust your exposure so that zebra 2 is once again just showing, you will be opening your aperture by about 1.5 stops (F5) or 1 stop (F55). As a result the raw recordings will be about 1.5/1 stops brighter.
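
As a sanity check on those numbers, the exposure shift you get from metering through the LUT at a lower EI should be log2(native ISO / EI) stops. A quick Python calculation (which gives about 1.3 stops for the F5, in the same ballpark as the roughly 1.5 stops I see in practice):

```python
import math

# Extra exposure dialled in when zebras are re-set at a lower EI.
def extra_exposure_stops(native_iso, ei):
    return math.log2(native_iso / ei)

print(extra_exposure_stops(2000, 800))   # ~1.3 stops on the PMW-F5
print(extra_exposure_stops(1250, 640))   # ~1.0 stop on the PMW-F55
```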

In order to establish, for my own benefit, which was the best EI gain setting to use, I spent a morning trying different settings. What I wanted to find was a reliable way to expose at a good high level to minimise noise, but still have a little headroom in reserve. I wanted to use a LUT so that I would have a nice high contrast image to help with focus. I chose to concentrate on the P3 LUT, as this uses a Hypergamma with a grey point at 40%, so the mid range should not look underexposed and contrast would be quite normal looking.

When using EI ISO 800 and exposing the clouds in the scene so that zebras were just showing on the very brightest parts of the clouds the image below is what the scene looked like when viewed both in the viewfinder and when opened up in Resolve. Also below is the same frame from the raw footage both before and after grading. You can click on any of the images to see a larger view.

[Image: P3 LUT, XDCAM recording, 800 EI ISO (PMW-F5).]
[Image: Raw footage, EI 800 ISO, pre-grade.]
[Image: Raw, 800 EI ISO after grade. NO clipped highlights.]

As you can see, using LUT P3 and 800 EI ISO (PMW-F5), with zebra 2 just showing on the brightest parts of the clouds, my raw footage is recorded at a level roughly 1.5 stops brighter than it would have been if I had not used EI gain. But even at this level there is no clipping anywhere in the scene, so I still have some extra headroom. So what happens if I expose one more stop brighter?

[Image: The XDCAM recording, LUT P3, 800 EI, +1 stop, zebras showing over almost all clouds.]
[Image: Raw clip at +1 stop, prior to grade.]
[Image: Raw at +1 stop after grade, no sign of any clipping.]

So, as you can see above, even with zebras over all of the brighter clouds and the exposure at +1 stop over the point where zebras were just appearing on the brightest clouds, there was no clipping. I still had some headroom left, so I went one stop brighter again. The image in the viewfinder is now seriously overexposed.

[Image: The XDCAM recording at +2 stops, the sky and clouds look very overexposed.]
[Image: Raw clip at +2 stops, pre-grade (LUT P3, EI 800). Looking scarily overexposed.]
[Image: After the grade the raw is looking much better, but there is a bit of clipping on the very brightest clouds.]

The lower of the 3 images above is very telling. Now there is some clipping; you can see it on the waveform. It’s only on the very brightest clouds, but I have now reached the limit of my exposure headroom.

Based on these tests I feel very comfortable exposing my F5 in raw using LUT P3 with EI gain at 800 and zebra 2 just starting to appear on my highlights. That leaves about 1.5 stops of headroom. If you are shooting a flat scene you could even go to 640 ISO, which would give you one safe stop over the first appearance of zebra 2. On the F55 this equates to using EI 640 with LUT P3, giving a little over 1.5 stops of headroom over the onset of zebras, or EI 400, giving about 1 stop of headroom.

My recommendation, having carried out these tests, would be to make use of the lower EI gain settings to brighten your recorded image. This will result in cleaner, lower noise footage and also allow you to “see” a little deeper into the shadows in the grade. How low you go will depend on how much headroom you want, but even if you use 640 on the F5 or 400 on the F55 you should still have enough headroom above the onset of zebra 2 to stay out of clipping.


When should I use a Cinegamma or Hypergamma?

Cinegammas are designed to be graded. The shape of the curve, with steadily increasing compression from around 65-70% upwards, tends to lead to a flat looking image but maximises the camera’s latitude (although something similar can be achieved with a standard gamma and a careful knee setting). The beauty of the Cinegammas is that the gentle onset of the highlight compression means grading can extract a more natural image from the highlights. Note that Cinegamma 2 is broadcast safe and has a slightly reduced recording range compared to CG 1, 3 and 4.

Standard gammas will give a more natural looking picture right up to the point where the knee kicks in. From there up the signal is heavily compressed, so trying to extract subtle textures from highlights in post is difficult. The issue with standard gammas and the knee is that the image is either heavily compressed or not compressed at all; there’s no middle ground.
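
To make the difference concrete, here is a toy Python comparison (both curves are invented for illustration): a hard knee stays linear then abruptly flattens, while a Cinegamma-style curve eases into compression gradually.

```python
# Hard broadcast-style knee: linear, then suddenly almost flat.
def hard_knee(level, point=0.80, slope=0.10):
    return level if level <= point else point + (level - point) * slope

# Cinegamma-style roll-off: compression that starts gently and tightens
# progressively as the level rises.
def gradual_rolloff(level, start=0.65):
    if level <= start:
        return level
    t = (level - start) / (1 - start)
    return start + (1 - start) * 0.5 * (1 - (1 - t) ** 2)

for scene in (0.70, 0.85, 1.00):
    print(f"{scene:.2f}: knee {hard_knee(scene):.3f}, roll-off {gradual_rolloff(scene):.3f}")
# The knee's slope jumps straight from 1.0 to 0.1 at 80%; the roll-off's slope
# eases smoothly from 1.0 down to 0.0, which is much kinder to grade.
```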

In a perfect world you would control your lighting (turning down the sun if necessary ;-o) so that you could use standard gamma 3 (ITU 709 standard HD gamma) with no knee. Everything would be linear and nothing blown out. This would equate to roughly a 7 stop range. This nice linear signal would grade very well and give you a fantastic result. Careful use of graduated filters or studio lighting might still allow you to do this, but the real world is rarely restricted to a 7 stop brightness range, so we must use the knee or a Cinegamma to prevent our highlights from looking ugly.

If you are committed to a workflow that will include grading, then Cinegammas are best. If you use them, be very careful with your exposure: you don’t want to overexpose, especially where faces are involved. Getting the exposure just right with Cinegammas is harder than with standard gammas. If anything, err on the side of caution and come down half a stop.

If your workflow might not include grading then stick to the standard gammas. They are a little more tolerant of slight overexposure, because skin and foliage won’t get compressed until they get up to the 80% mark (depending on your knee setting). Plus the image looks nicer straight out of the camera, as the camera’s gamma should be a close match to the monitor’s gamma.