Measuring Resolution, Nyquist and Aliasing.

When measuring the resolution of a well-designed video camera, you never want to see resolution that is significantly higher than HALF of the sensor's resolution. Why is this? Why don't I get 1920 x 1080 resolution from an EX1, which we know has 1920 x 1080 pixels? Why is the measured resolution often only around half to three quarters of what you would expect?
In a well-designed video camera there should be an optical low pass filter (OLPF) in front of the sensor that prevents frequencies above approximately half of the sensor's native resolution from reaching the sensor. This filter does not have an instantaneous cut off; instead it attenuates fine detail by ever increasing amounts, centered somewhere around the Nyquist limit for the sensor. The Nyquist limit is normally half of the pixel count for a 3-chip camera, or somewhat less than this for a Bayer sensor. As a result, measured resolution gradually tails off somewhere a little above Nyquist, or half of the expected pixel resolution. But why is this?
It is theoretically possible for a sensor to resolve an image at its full pixel resolution. If you could line up the black and white lines on a test chart perfectly with the pixels on a 1920 x 1080 sensor, then you could resolve 1920 x 1080 lines. But what happens when those lines no longer line up absolutely perfectly with the pixels? Let's imagine that each line is offset by exactly half a pixel. What would you see? Each pixel would see half of a black line and half of a white line, so each pixel would see 50% white and 50% black, and the output from that pixel would be mid grey. With the adjacent pixels all seeing the same thing, they would all output mid grey. So by shifting the image by half a pixel, instead of seeing 1920 x 1080 black and white lines all we see is a totally grey frame. As you continued to shift the chart relative to the pixels, say by panning across it, the image would flicker between pin-sharp lines and plain grey. If the camera was not perfectly aligned with the chart, some of the image would appear grey, or different shades of grey depending on the exact pixel-to-chart alignment, while other parts might show distinct black and white lines.

This is aliasing. It's not nice to look at and can in effect reduce the resolution of the final image to zero. So to counter this you deliberately reduce the system resolution (lens + sensor) to around half the pixel count so that it is impossible for any one pixel to see only one object. By blurring the image across two pixels you ensure that aliasing won't occur. It should also be noted that the same thing can happen with a display or monitor, so trying to show a 1920 x 1080 image on a 1920 x 1080 monitor can have the same effect.
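If it helps to see this numerically, here is a toy one-dimensional sketch in Python (NumPy). The pixel count, the 16x oversampling and the simple box blur standing in for an OLPF are all illustrative assumptions, not a model of any real camera. It samples black and white lines exactly one pixel wide, first aligned with the pixel grid, then shifted by half a pixel, and then repeats the exercise with the pattern blurred across two pixels before it is sampled.

```python
import numpy as np

# Toy 1-D sketch of the paragraph above. Each sensor pixel integrates the
# light falling across its width (modelled here with 16x oversampling of
# the scene). The test pattern is black/white lines exactly one pixel
# wide, i.e. detail right at the full pixel pitch. All of the numbers are
# illustrative assumptions, not a model of any particular camera.
OVERSAMPLE = 16
PIXELS = 8

def line_pattern(offset_pixels=0.0):
    """Alternating black (0) and white (1) lines, one pixel wide each,
    shifted by offset_pixels relative to the pixel grid."""
    x = np.arange(PIXELS * OVERSAMPLE) / OVERSAMPLE + offset_pixels
    return (np.floor(x) % 2).astype(float)

def sample(scene):
    """Each sensor pixel averages the light across its own width."""
    return scene.reshape(PIXELS, OVERSAMPLE).mean(axis=1)

def olpf(scene, blur_pixels=2):
    """Very crude optical low pass filter: blur the scene across
    roughly two pixels before it ever reaches the sensor."""
    k = blur_pixels * OVERSAMPLE
    return np.convolve(scene, np.ones(k) / k, mode="same")

print(sample(line_pattern(0.0)))              # lines aligned: 0, 1, 0, 1... full contrast
print(sample(line_pattern(0.5)))              # shifted half a pixel: all 0.5, plain grey
print(sample(olpf(line_pattern(0.0)))[1:-1])  # with the blur: ~0.5 at either offset,
print(sample(olpf(line_pattern(0.5)))[1:-1])  # so no flicker as the chart moves
```

The aligned case gives full contrast, the half-pixel shift collapses to uniform grey, and with the pre-sample blur the result no longer depends on alignment. That is exactly the trade the OLPF makes: detail right at the pixel pitch is sacrificed so that the picture stays stable.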
When I did my recent F3 resolution tests I used a term called MTF, or modulation transfer function, which is a measure of the contrast between adjacent details in the image. MTF50 is where there is 50% of the maximum contrast difference between the black and white lines on the test chart.
When visually observing a resolution chart you can see where the lines on the chart can no longer be distinguished from one another. This is the resolution vanishing point and is typically somewhere around MTF15 to MTF5, i.e. the contrast between the black and white lines becomes so low that you can no longer distinguish one from the other. The problem with this is that, as you are looking for the point where you can no longer see any difference, you are attempting to measure the invisible, so it is prone to gross inaccuracies. In addition, the contrast between black and white at MTF10, the vanishing point, will be very, very low, so in a real world image you would often struggle to ever see fine detail at MTF10 unless it was made of strong black and white edges.
So for resolution tests a more consistent result can be obtained by measuring the point at which the contrast between the black and white lines on the chart reduces to 50% of maximum, or MTF50 (as resolution decreases, so too does contrast). While MTF50 does not determine the ultimate resolution of the system, it gives a very reliable performance indicator that is repeatable and consistent from test to test. What it will tell you is how sharp one camera will appear to be compared to the next.
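As a rough sketch of how an MTF50 figure can be read off a measurement, the following Python snippet sweeps a sine-wave test pattern through increasing frequencies, blurs it with a Gaussian standing in for the combined lens-plus-OLPF response (the 1080-sample picture height and the blur width are assumed, illustrative values, not taken from any real camera), and reports where the surviving contrast first drops below 50%.

```python
import numpy as np

# Sketch of how an MTF50 figure can be read off: sweep a sine-wave "chart"
# through increasing frequencies, blur it with a Gaussian standing in for
# the combined lens + OLPF response, and measure how much modulation
# (contrast) survives at each frequency. The picture height and the blur
# width are assumed, illustrative values only.
PIXELS = 1080           # samples per picture height
SIGMA = 0.6             # assumed system blur, in pixels

def gaussian_blur(scene, sigma):
    radius = int(4 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(scene, kernel / kernel.sum(), mode="same")

def modulation(freq_lp_ph):
    """Michelson contrast, (max - min) / (max + min), of a sine pattern
    at freq line pairs per picture height after the assumed blur."""
    x = np.arange(PIXELS)
    scene = 0.5 + 0.5 * np.sin(2 * np.pi * freq_lp_ph * x / PIXELS)
    b = gaussian_blur(scene, SIGMA)[20:-20]   # ignore edge effects
    return (b.max() - b.min()) / (b.max() + b.min())

freqs = np.arange(50, 540, 10)            # up to the sensor's Nyquist limit
mtf = np.array([modulation(f) for f in freqs])
first_below = freqs[np.argmax(mtf < 0.5)]
print(f"Contrast first drops below 50% around {first_below} LP/ph for this blur")
```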
As the Nyquist frequency is half the sampling frequency of the system, for a 1920 x 1080 sensor anything over 540 LP/ph will potentially alias, so we don't want lots of detail above this. As optical low pass filters cannot instantly cut off unwanted frequencies, there will be a gradual resolution tail-off that spans the Nyquist frequency, and there is a fine balance between getting a sharp image and excessive aliasing. In addition, as real world images are rarely black and white lines (square waves) or fixed high contrast patterns, you can afford to push things a little above Nyquist to gain some extra sharpness. A well designed 1920 x 1080 HD video camera should resolve around 1000 TVL. This is where seeing the MTF curve helps, as it's important to see how quickly the resolution is attenuated past MTF50.
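For reference, the arithmetic behind those figures for a 1920 x 1080 sensor looks like this. TVL counts line widths over a horizontal distance equal to the picture height, which is why the horizontal figure is compared against 1080 rather than 1920.

```python
# Quick check of the figures above for a 1920 x 1080, 16:9 sensor. One
# line pair (LP) is one black plus one white line, i.e. two line widths
# (LW). TVL counts line widths over a horizontal distance equal to the
# picture height, so the horizontal figure is based on 1080, not 1920.
h_pixels, v_pixels = 1920, 1080
aspect = 16 / 9

vertical_nyquist_lp_ph = v_pixels / 2                  # 540 LP/ph
vertical_nyquist_lw_ph = v_pixels                      # 1080 LW/ph
h_pixels_per_picture_height = h_pixels / aspect        # 1080 pixels
horizontal_nyquist_tvl = h_pixels_per_picture_height   # roughly 1080 TVL

print(vertical_nyquist_lp_ph, vertical_nyquist_lw_ph, horizontal_nyquist_tvl)
```

So a camera that measures around 1000 TVL is sitting just below the theoretical ceiling of roughly 1080 TVL, which is exactly why pushing much beyond that figure starts to invite aliasing.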
With Bayer pattern sensors it’s even more problematic due to the reduced pixel count for the R and B samples compared to G.
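As a rough, hedged sketch of why (assuming the usual RGGB layout, which is not specified above): each 2 x 2 block of photosites holds only one red and one blue sample, so those channels are sampled at twice the pitch of the full pixel grid and their Nyquist limits sit correspondingly lower.

```python
# Rough illustration, assuming the common RGGB Bayer layout: each 2 x 2
# block of photosites holds only one red and one blue sample, so those
# channels are sampled at twice the pitch of the full pixel grid and
# their Nyquist limits sit at roughly half that of the grid as a whole.
h_pixels = 1920
full_grid_nyquist_lp = h_pixels / 2          # 960 LP across the picture width
red_blue_nyquist_lp = (h_pixels / 2) / 2     # 480 LP across the picture width
print(full_grid_nyquist_lp, red_blue_nyquist_lp)
```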
The resolution of the EX1 and F3 is excellent for a 1080 camera. Cameras that boast resolutions significantly higher than 1000 TVL will have aliasing issues; indeed the EX1/EX3 can alias in some situations, as does the F3. These cameras are right at the limit of what will allow for a good, sharp image at 1920 x 1080.

3 thoughts on “Measuring Resolution, Nyquist and Aliasing.”

  1. Thanks Alister for this cogent explanation on an always confusing topic. And you explained it without relying on figures or diagrams, only words, quite impressive.

    I just wonder if when you say ‘960 LP/ph’ in the last two paragraphs it should say instead ‘480 LP/ph’.

    You wrote: “As Nyquist is half the pixel resolution of the system, for a 1920 sensor anything over 960 LP/ph will potentially aliase. […] The resolution of the EX1 and F3 is excellent for a 1080 camera, cameras that boast higher than 960 LP/ph will have aliasing “[end of quote]

    I understand LP means "Line Pair" (or a pair of lines: one black, one white). If you have 1920 pixels you can theoretically sample an input signal that contains a maximum resolution of 1920 pixels, or what is equivalent, 960 line pairs (LP), each "pair" made of a black line followed by a white one. But, as you said, all of this at the risk of aliasing if the pixels and the input image are not perfectly aligned. So to prevent aliasing, Nyquist tells us that our system shouldn't be exposed to frequencies higher than half the sample frequency. So if you have a sensor with 1920 pixels you can sample, free of aliasing, an input signal that contains half of that, i.e. no more than 960 pixels. But based on our definition, 960 pixels is equivalent to 480 LP, and not 960 LP (as it is written in the last two paragraphs).

    So I wonder if when you wrote “960LP/ph” you meant “960LW/ph” where LW means “Line Width”. This, since there are two line widths on each pair.

    1. Yes, you are correct, it should be 960 LW/ph horizontal, or to be completely correct 1080 LW/ph (540 LP/ph), which is 960 x 540 in pixel terms.

  2. Hello, I own a PMW EX3. I wanted to use a 7-inch external monitor. The camera has no HDMI output. Would you use the camera's own monitor, or an external monitor connected via SDI? What do you use? Greetings and thanks, Mark
