
Thinking about frame rates.

Once upon a time it was really simple. We made TV programmes and videos that would only ever be seen on TV screens. If you lived and worked in a PAL area you would produce programmes at 25fps; if you lived in an NTSC area, most likely 30fps. But today it's not that simple. For a start, the internet allows us to distribute our content globally, across borders. In addition, PAL and NTSC only really apply to standard definition television, as they describe the way the SD signal is broadcast, with a PAL frame being larger than an NTSC one and both using non-square pixels. With HD, PAL and NTSC do not exist: both are 1280×720 or 1920×1080 and both use square pixels, and the only difference between HD in a 50Hz country and a 60Hz country is the frame rate.

Today with HD we have many different frame rates to choose from. For film-like motion we can use 23.98fps or 24fps. For fluid, smooth motion we can use 50fps or 60fps. In between sit the familiar 25fps and 30fps (29.97fps) frame rates. Then there is also the choice between interlace and progressive scan. Which do you choose?

If you are producing a show for a broadcaster then normally the broadcaster will tell you which frame rate they need. But what about the rest of us?

There is no single “right” frame rate to use. A lot will depend on your particular application, but there are some things worth considering.

If you are producing content that will be viewed via the internet then you probably want to steer clear of interlace. Most modern TVs and all computer monitors use progressive scan, and motion in interlaced content does not look good on progressive TVs and monitors. In addition, most computer monitors run by default at 60Hz. If you show content shot at 25fps or 50fps on a 60Hz monitor it will stutter slightly, as the computer has to repeat frames in an irregular pattern to make 25fps fit into 60Hz. So you might want to think about shooting at 30fps or 60fps for smoother, less stuttery motion.

24fps or 23.98fps will also stutter slightly on a 60Hz computer screen, but the stutter is very even, as one extra frame gets shown for every four source frames. This is very similar to the pulldown that gets added to 24fps movies when shown on 30fps television, so it's a kind of motion that many viewers are used to seeing anyway. Because it's a regular stutter pattern it tends to be less noticeable than the irregular conversion from 25fps to 60Hz; 25 just doesn't fit into 60 in a nice even manner. Which brings me to another consideration: if you are looking for a one-size-fits-all standard then 24 or 23.98fps might be a wise choice. It works reasonably well via the internet on 60Hz monitors, it can easily be converted to 30fps (29.97fps) using pulldown for television, and it's not too difficult to convert to 25fps simply by speeding it up by 4% (many feature films are shown in 25fps countries simply by being sped up, with a pitch shift added to the audio).
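To see why 24fps sits more comfortably on a 60Hz display than 25fps, here is a rough sketch in Python (my own illustration, not from the original article, with a hypothetical helper name): each source frame is held on screen for a whole number of refreshes, and the list of hold counts shows the stutter cadence.

    def repeat_pattern(fps, refresh_hz=60.0, frames=10):
        """Refresh counts for each of the first `frames` source frames, assuming
        the display can only switch frames on a refresh boundary."""
        boundaries = [int(n * refresh_hz / fps) for n in range(frames + 1)]
        return [b - a for a, b in zip(boundaries, boundaries[1:])]

    print(repeat_pattern(24))  # [2, 3, 2, 3, 2, 3, 2, 3, 2, 3] - the even 2:3 cadence
    print(repeat_pattern(25))  # [2, 2, 3, 2, 3, 2, 2, 3, 2, 3] - irregular stutter
    print(repeat_pattern(30))  # [2, 2, 2, 2, 2, 2, 2, 2, 2, 2] - perfectly smooth

The 24fps pattern alternates evenly between two and three refreshes per frame, while the 25fps pattern drifts, which is exactly the uneven judder described above.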

So, even if you live and work in a 25fps (PAL) area, depending on how your content will be distributed you might actually want to consider 24, 30 or 60fps for your productions. 25fps or 50fps looks great on a 50Hz TV, but with the majority of non-broadcast content being viewed on computers, laptops and tablets, 24/30/60fps may be a better choice.

What about the “film look”? Well I think it’s obvious to say that 24p or 23.98p will be as close as you can get to the typical cadence and motion seen in most movies. But 25p also looks more or less the same. Even 30p has a hint of the judder that we see in a 24p movie, but 30p is a little smoother. 50p and 60p will give very smooth motion, so if you shoot sports or fast action and you want it to be smooth you may need to use 50/60p. But 50/60p files will be twice the size of 24/25 and 30p files in most cases, so then storage and streaming bandwidth have to be considered. It’s much easier to stream 24p than 60p.

For almost all of the things that I do I shoot at 23.98p, even though I live in a 50Hz country. I find this gives me the best overall compatibility. It also means I have the smallest file sizes and the clips will normally stream pretty well. One day I will probably need to consider shooting everything at 60fps, but that seems to be some way off for now; HDR and higher resolutions seem to be what people want right now rather than higher frame rates.

Measuring Resolution, Nyquist and Aliasing.

When measuring the resolution of a well designed video camera, you never want to see resolution that is significantly higher than HALF of the sensor's pixel count. Why is this? Why don't I get 1920×1080 resolution from an EX1, which we know has 1920×1080 pixels? Why is the measured resolution often around half to three quarters of what you would expect?
There should be an optical low pass filter in front of the sensor in a well designed video camera that prevents frequencies above approximately half of the sensor's native resolution from reaching the sensor. This filter does not have an instantaneous cut-off; instead it attenuates fine detail by ever increasing amounts, centred somewhere around the Nyquist limit for the sensor. The Nyquist limit is normally half of the pixel count with a 3-chip camera, or somewhat less than this for a Bayer sensor. As a result measured resolution gradually tails off somewhere a little above Nyquist, or half of the expected pixel resolution. But why is this?
It is theoretically possible for a sensor to resolve an image at its full pixel resolution. If you could line up the black and white lines on a test chart perfectly with the pixels on a 1920×1080 sensor then you could resolve 1920×1080 lines. But what happens when those lines no longer line up absolutely perfectly with the pixels? Let's imagine that each line is offset by exactly half a pixel. What would you see? Each pixel would see half of a black line and half of a white line, so each pixel would see 50% white and 50% black, and its output would be mid grey. With the adjacent pixels all seeing the same thing, they would all output mid grey. So by panning the image by half a pixel, instead of 1920×1080 black and white lines all we see is a totally grey frame.

As you continued to shift the chart relative to the pixels, say by panning across it, it would flicker between pin-sharp lines and grey. If the camera was not perfectly aligned with the chart, some of the image would appear grey or different shades of grey depending on the exact pixel-to-chart alignment, while other parts might show distinct black and white lines. This is aliasing. It's not nice to look at and can in effect reduce the resolution of the final image to zero. So to counter this you deliberately reduce the system resolution (lens + sensor) to around half the pixel count, so that it is impossible for any one pixel to see only one object. By blurring the image across two pixels you ensure that aliasing won't occur. It should also be noted that the same thing can happen with a display or monitor, so trying to show a 1920×1080 image on a 1920×1080 monitor can have the same effect.
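To make the half-pixel thought experiment concrete, here is a toy simulation in Python (my own sketch, not from the original text): a chart of one-pixel-wide black and white lines is sampled by a row of pixels, each averaging the light falling across its width.

    def chart(x):
        """Test chart: alternating white (1.0) and black (0.0) lines, one pixel wide."""
        return 1.0 if int(x) % 2 == 0 else 0.0

    def pixel(left, width=1.0, steps=200):
        """One pixel's output: the chart averaged across the pixel's width."""
        return sum(chart(left + width * (i + 0.5) / steps) for i in range(steps)) / steps

    # Chart perfectly aligned with the pixel grid: full contrast lines.
    print([round(pixel(p), 2) for p in range(6)])        # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

    # Chart shifted by half a pixel: every pixel reads mid grey.
    print([round(pixel(p + 0.5), 2) for p in range(6)])  # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]

    # Blurring across two pixels (the OLPF's job): a stable result at any alignment,
    # so detail above Nyquist is suppressed rather than turned into aliasing.
    print([round(pixel(p + 0.3, width=2.0), 2) for p in range(6)])  # all 0.5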
When I did my recent F3 resolution tests I used a term called MTF, or modulation transfer function, which is a measure of the contrast between adjacent pixels. MTF50 is the point where the contrast between the black and white lines on the test chart falls to 50% of maximum.
When visually observing a resolution chart you can see where the lines on the chart can no longer be distinguished from one another. This is the resolution vanishing point and is typically somewhere around MTF15 to MTF5, i.e. the contrast between the black and white lines becomes so low that you can no longer distinguish one from the other. But the problem with this is that as you are looking for the point where you can no longer see any difference, you are attempting to measure the invisible, so it is prone to gross inaccuracies. In addition, the contrast between black and white at MTF10, around the vanishing point, will be very, very low, so in a real world image you would often struggle to ever see fine detail at MTF10 unless it was strong black and white edges.
So for resolution tests a more consistent result can be obtained by measuring the point at which the contrast between the black and white lines on the chart reduces to 50% of maximum, or MTF50 (as resolution decreases, so too does contrast). So while MTF50 does not determine the ultimate resolution of the system, it gives a very reliable performance indicator that is repeatable and consistent from test to test. What it will tell you is how sharp one camera will appear compared to the next.
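For reference, modulation is normally calculated as the contrast between the brightest and darkest parts of the reproduced pattern (this is the standard definition I am assuming here; the article does not spell it out):

    def modulation(i_max, i_min):
        """Contrast of a reproduced line pattern: 1.0 = full contrast, 0.0 = uniform grey."""
        return (i_max - i_min) / (i_max + i_min)

    print(modulation(1.0, 0.0))    # 1.0 - perfectly reproduced chart
    print(modulation(0.75, 0.25))  # 0.5 - the MTF50 point
    print(modulation(0.55, 0.45))  # 0.1 - near the visual vanishing point (~MTF10)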
As the Nyquist frequency is half the sampling frequency of the system, for a 1920×1080 sensor anything over 540 LP/ph will potentially alias, so we don't want lots of detail above this. As optical low pass filters cannot instantly cut off unwanted frequencies, there will be a gradual resolution tail-off that spans the Nyquist frequency, and there is a fine balance between getting a sharp image and excessive aliasing. In addition, as real world images are rarely black and white lines (square waves) or fixed high contrast patterns, you can afford to push things a little above Nyquist to gain some extra sharpness. A well designed 1920×1080 HD video camera should resolve around 1000TVL. This is where seeing the MTF curve helps, as it's important to see how quickly the resolution is attenuated past MTF50.
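The arithmetic behind those numbers, as a quick sketch (my own illustration, not from the article): one line pair needs two pixel rows, and one line pair counts as two TV lines.

    def nyquist_limits(v_pixels):
        lp_ph = v_pixels // 2  # one line pair needs two pixel rows: 540 LP/ph for 1080
        tvl = v_pixels         # two TV lines per line pair: 1080 TVL for 1080
        return lp_ph, tvl

    print(nyquist_limits(1080))  # (540, 1080) - why ~1000TVL sits just below the limit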
With Bayer pattern sensors it’s even more problematic due to the reduced pixel count for the R and B samples compared to G.
The resolution of the EX1 and F3 is excellent for a 1080 camera. Cameras that boast resolutions significantly higher than 1000TVL will have aliasing issues; indeed the EX1/EX3 can alias in some situations, as can the F3. These cameras are right at the limits of what will allow for a good, sharp image at 1920×1080.

Shutter, shutter speed and shutter angle.

So you have your nice camcorder, an EX1, EX3, AF100, F3 or whatever, and it has a function called the shutter. This function may have different ways of being set: fractions of a second or angle. What does the shutter do, and what's the difference between angle and fractions? Also, why is it important to know the frequency of the local mains electricity?

[Image: 180 degree film shutter]

Let's start by looking at the difference between shutter speed expressed in fractions of a second and shutter angle. Shutter angle comes from film camera days, when the film camera's shutter was a simple spinning disc with one half of the disc cut away to allow light to pass from the lens to the film. The other half of the disc would rotate around, blanking off the film so it could be advanced to the next frame. If you consider that a full circle is 360 degrees, then half of a full circle is 180 degrees. So for each frame cycle with a 180 degree shutter, light is allowed to pass from the lens to the film for half of the frame's duration (180 being half of 360).

Taking a frame rate of 25 frames per second, each frame lasts 1/25th of a second. Half of that is 1/50th, so with a 180 degree shutter the exposure at 25P is 1/50th of a second. There is no difference in the way the shutter works; it is just a different way of expressing the shutter timing.

[Image: Film camera 90 degree shutter]

If we take that and look at a different angle, this time 90 degrees, we can see from the picture that this is now one quarter of a full circle (90 is one quarter of 360 degrees). So at 25 frames per second the exposure is one quarter of 1/25th, which is 1/100th, and so on.
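The relationship is easy to express as a formula: the exposure time is the open fraction of the circle divided by the frame rate. A minimal sketch (my own, with a hypothetical function name):

    def exposure_time(angle_deg, fps):
        """Exposure in seconds for a given shutter angle and frame rate."""
        return (angle_deg / 360.0) / fps

    print(round(1 / exposure_time(180, 25)))  # 50  -> 1/50th at 25P
    print(round(1 / exposure_time(90, 25)))   # 100 -> 1/100th at 25P
    print(round(1 / exposure_time(180, 30)))  # 60  -> 1/60th at 30P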

So why use angle instead of a fraction of a second? Well, here's the thing. If you set your shutter speed to 1/50th, then no matter what your frame rate, the shutter speed will be 1/50th. The Sony EX and XDCAM cameras can shoot at various frame rates (as can many other cameras). When shooting progressive and trying to create a filmic look, it is traditional to mimic the way a film camera behaves, so for this you would use a shutter that is open for half of each frame's duration, i.e. 180 degrees. When you set the shutter speed using an angle, changing the frame rate changes the shutter speed too. Set to 180 degrees it will always be half of the frame duration, so going from 25P to 30P will change the shutter speed from 1/50th to 1/60th. Which neatly brings me on to the next bit….

Why it’s important to know the local mains frequency.

We normally take it for granted in our home countries, shooting at our home frame rates, that the pictures will be OK. But if you travel to a country where the mains frequency no longer matches the camera's base frequency then you may experience problems with flickering or strobing pictures when shooting under artificial lighting. Sometimes you will see light and dark bands slowly rolling up and down the picture. This happens because, if you take your camera set to PAL (50i/25P) to the USA, the US mains frequency of 60Hz will drift in and out of sync with the camera from one frame to the next. As many artificial lights brighten and dim in sync with the mains electricity, you can appreciate that for one frame the lights may have one brightness and for the next frame the brightness may be different. You will likely experience problems whenever the camera's frame rate does not divide evenly into the mains frequency. For example shooting 30P will give problems when the mains is 50Hz, as 30 will not divide evenly into 50.

So how do you counter this? Well, you need to change your shutter speed to match the mains frequency or an even multiple of it. So shooting 30P in a 50Hz country you can use 1/50th, 1/100th, 1/200th etc. (the mains frequency, then even multiples of it). Note that when shooting 60i you can't normally have a 1/50th shutter, so you're limited to 1/100th or higher. When shooting 25P (or 50i) in a 60Hz country you should use 1/60th, 1/120th, 1/240th etc. For 24P (23.98) you will often have to use the shutter when working under consumer or industrial lighting, using the same shutter speeds as given above, depending on the local mains frequency.
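As a quick sketch of that rule of thumb (my own illustration, with a hypothetical helper name), the safe shutter denominators are just the mains frequency and even multiples of it, matching the lists above:

    def flicker_safe_speeds(mains_hz, count=3):
        """Shutter denominators matched to the mains frequency: each doubling stays in step."""
        return [mains_hz * 2 ** n for n in range(count)]

    print(flicker_safe_speeds(50))  # [50, 100, 200] -> 1/50th, 1/100th, 1/200th under 50Hz mains
    print(flicker_safe_speeds(60))  # [60, 120, 240] -> 1/60th, 1/120th, 1/240th under 60Hz mains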