Tag Archives: resolution

What’s wrong with my viewfinder, my old camera had a much better viewfinder!

This is something that keeps popping up all over the place and it’s not just one camera that attracts this comment. Many do, from the FS5 to the FS7 to the F55, plus cameras from other manufacturers too.
One common factor is that this very often relates to the newer Super 35mm cameras: cameras designed to give a more rounded, film-like look, often with 4K or higher resolution sensors.
I think many people perceive an issue with their viewfinder because they come to these new high resolution, more rounded and film-like cameras from traditional television-centric camcorders that use detail correction, coring and aperture correction to boost image sharpness.
SD and even HD television broadcasting relies heavily on image sharpening so that viewers perceive a crisp, sharp image at any viewing distance and with any screen size (although on really big screens this can really ruin the image).
This works by enhancing and boosting the contrast around edges. It is standard practice on all normal HD and SD broadcast cameras, especially cameras that use a 3-chip design with a prism, as the prism will often reduce the image’s edge contrast.
As most people will prefer a very slightly sharpened HD image or a heavily sharpened SD image over an unsharpened one, it’s sharpened by default. This means that the images those cameras produce will tend to look sharp even on screens that have a lower resolution than that of the camera because the edges remain high contrast even when the viewing resolution is reduced and as a result look sharp.
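That default sharpening can be sketched in a few lines of code. The following is an illustrative unsharp mask applied to a 1-D row of pixel values, not any camera’s actual detail-correction circuit; the 3-tap kernel and the gain value are assumptions.

```python
def sharpen_row(row, gain=0.5):
    """Unsharp mask: push each pixel away from its local average."""
    out = []
    for i in range(len(row)):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        local_avg = (left + row[i] + right) / 3.0
        out.append(row[i] + gain * (row[i] - local_avg))
    return out

# A soft edge ramping from dark (0.2) to bright (0.8):
edge = [0.2, 0.2, 0.2, 0.4, 0.6, 0.8, 0.8, 0.8]
sharpened = sharpen_row(edge)

# The sharpened edge undershoots below 0.2 and overshoots above 0.8,
# boosting the contrast around the edge - which is why detail-corrected
# pictures look crisp even on lower resolution screens.
print(min(sharpened) < min(edge), max(sharpened) > max(edge))  # True True
```

The undershoot and overshoot either side of the edge are exactly the halos you see if detail correction is turned up too far.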
Most current manufacturer-supplied LCD EVFs run at quarter HD, 960 x 540 pixels (each pixel made up of an RGB 3-dot matrix). In addition, many of the 3rd party VFs such as the very popular Alphatron are the same, because they all use the same mass produced, relatively low cost panels – panels that are also used for mobile phones and many other devices.
 
The problem then is that when you move to a camera that doesn’t add any image sharpening, if you view the camera’s image on a lower resolution screen the image looks soft because – it is. There is no detail correction to compensate. Incidentally, this is why these same cameras can often look a bit soft in HD and very soft in SD compared to traditional, detail-corrected cameras. But that slightly softer, less processed look helps contribute to their more film-like look. This softness and lack of sharpening is particularly noticeable if you use the focus mag function, as you are then looking at an enlarged but completely un-sharpened image.
 
It could be argued that the viewfinder should sharpen the image to compensate. Some of the more expensive viewfinders can do this using their own sharpening processes. But the image you are then seeing is not the picture being recorded, and this isn’t always ideal. If it is overdone it can make the entire image look sharp even when it isn’t fully in focus. Really you want to be looking at exactly the image the camera is recording so that you can spot any potential problems. But that then makes focussing tricky.
 
There are a few 3rd party viewfinders, such as the Gratical, that have higher resolutions. The Gratical and the Gratical Eye have screens that are 1280×1024, but in normal use only 1280×720 is used for the image area. This certainly helps, but even the 1:1 pixel zoom on these can look soft and blurry, as you lose the viewfinder’s peaking function when you crop in.
 
Sony’s Venice and the F55/F5 can use Sony’s new DVF-EL200 OLED viewfinder. This costs around £4.5K ($6K) and has a 1920×1080 screen. It’s a beautiful image, but even this needs a fairly good dose of peaking to artificially sharpen the image to be able to see that last critical bit of focus. Again, when you zoom in the image looks soft and a bit blurry (even on a Venice), as the camera itself is not adding any sharpening. The peaking function on the DVF-EL200 is quite sophisticated as it only enhances the highest frequency parts of the image, so only sharp edges and fine details are boosted.
 
Go back to the days of black and white tube viewfinders and these used tons of peaking to make them useable. Traditional SD and HD cameras add sharpening to their pictures, but most of our modern large sensor 4K cameras do not, and as a result the viewfinder images often appear soft compared to what we used to see on older cameras, or still see today on cameras that do sharpen their pictures.
 
All of this makes it hard to nail your focus, especially if shooting 4K. Even with a DVF-EL200 on a Venice I struggle at times and rely heavily on image mag (which is still difficult) or, better still, a much larger monitor with a good sun shade and, if necessary, some reading glasses so I can focus on it up close.

So before you get too critical of your viewfinder’s performance, do consider all of the above. Try to see how another similar viewfinder looks on your camera (for example an Alphatron on an FS7). Perhaps try a higher resolution viewfinder such as a Gratical, but don’t expect miracles from a small, relatively low resolution screen on a modern digital cinema camera. This really is one of those areas where you can’t beat a big, high resolution screen.


4K – It’s not the be-all and end-all.

I often hear people talking about future proofing content or providing the best they can for their clients when talking about 4K. Comments such as “You’d be crazy not to shoot 4K for a professional production”. While on the whole I am a believer in shooting in 4K, I think you also need to qualify this by saying you need to shoot good 4K.

As always you must remember that bigger isn’t always better. Resolution is only one part of the image quality equation. Just take a look at how Arri’s cameras, the Alexa etc, continue to be incredibly popular for high end production even though these are in effect only HD/2K cameras.

Great images are a combination of many factors and frankly resolution comes some way down the list in my opinion. Just look at how DVD has managed to hang on for so long, feature films on DVD still look OK even though the resolution is very low. Contrast and dynamic range are more important, good color is vital and low noise and artefact levels are also essential.

A nice contrasty image with great color, low noise and minimal artefacts up scaled from HD to 4K may well look a lot better than a 4K originated image that lacks contrast or has other artefacts such as compression noise or poor color.

So it’s not just about the number of pixels that you have but also about the quality of those pixels. If you really want to future proof your content it has to be the best quality you can get today, not just the largest you can get today.

Contrast and Resolution, intricately linked.

This is one of those topics that keeps coming back around time and time again. The link between contrast and resolution. So I thought I would take a few minutes to create some simple illustrations to demonstrate the point.

Best Contrast.

This first image represents a nice high contrast picture. The white background and dark lines have high contrast and as a result you can “see” resolution a long way to the right of the image as indicated by the arrow.

Lower contrast.

Now look at what happens as you slowly reduce the contrast in the image. As the contrast reduces the amount of resolution that you can see reduces. Keep reducing the contrast and the resolution continues to decrease.

Low Contrast.

Eventually if you keep reducing the contrast enough you end up with no resolution as you can no longer differentiate between light and dark.

Now look at what happens when you reduce the resolution by blurring the image, the equivalent of using a less “sharp”, lower resolution lens for example. What happens to the black lines? They become less dark and start to look grey; the contrast is reducing.

Reduced resolution.

Hopefully these simple images show that contrast and resolution are intrinsically linked. You can’t have one without the other. So when choosing lenses in particular you need to look at not just resolution but also contrast. Contrast in a lens is affected by many things including flare, where brighter parts of the scene bleed into darker parts. Flare also comes from light sources that may not be in your shot but whose light is still entering the lens, bouncing around inside and reducing contrast as a result. These things often don’t show up if you use just a simple resolution chart. A good lens hood or matte box with flags can be a big help in reducing stray light and flare, so in fact a matte box could actually make your pictures sharper. They are not just for pimping up your rig; they really can improve the quality of your images.
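The link shown in the illustrations above can also be demonstrated numerically: blur a black and white line pattern (standing in for a lower resolution lens) and watch its measured contrast fall. This is a rough pure-Python sketch; the 3-tap box blur and the contrast measure are illustrative choices, not a real optical model.

```python
def modulation(signal):
    """Michelson contrast, ignoring the two pixels at each end."""
    core = signal[2:-2]
    return (max(core) - min(core)) / (max(core) + min(core))

def box_blur(signal):
    """A 3-tap box blur standing in for a softer, lower resolution lens."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

# One-pixel-wide dark (0.05) and bright (0.95) lines:
lines = [0.05, 0.95] * 8

blurred = box_blur(lines)
blurred_more = box_blur(box_blur(blurred))

print(round(modulation(lines), 2))         # 0.9  - crisp pattern
print(round(modulation(blurred), 2))       # 0.3  - one pass of blur
print(round(modulation(blurred_more), 2))  # 0.03 - contrast nearly gone
```

Each pass of blur leaves the lines in the same places but with less and less contrast between them, which is exactly what the figures above show.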

The measurement for resolution and contrast is called the MTF or modulation transfer function. This is normally used to measure lens performance and the ability of a lens to pass the light from a scene or test chart to the film or sensor. It takes into account both resolution and contrast, so it tells you a lot about the lens or imaging system’s performance, and is normally presented as a graph of contrast levels over a scale of ever increasing resolution.

Measuring Resolution, Nyquist and Aliasing.

When measuring the resolution of a well designed video camera, you never want to see resolution that is significantly higher than HALF of the sensor’s resolution. Why is this? Why don’t I get 1920 x 1080 resolution from an EX1, which we know has 1920 x 1080 pixels? Why is the measured resolution often only around half to three quarters of what you would expect?
There should be an optical low pass filter in front of the sensor in a well designed video camera that prevents frequencies above approximately half of the sensor’s native resolution from reaching the sensor. This filter does not have an instantaneous cut-off, instead attenuating fine detail by ever increasing amounts centred somewhere around the Nyquist limit for the sensor. The Nyquist limit is normally half the pixel count with a 3-chip camera, or somewhat less than this for a Bayer sensor. As a result, measured resolution gradually tails off somewhere a little above Nyquist, or half of the expected pixel resolution. But why is this?
It is theoretically possible for a sensor to resolve an image at its full pixel resolution. If you could line up the black and white lines on a test chart perfectly with the pixels on a 1920 x 1080 sensor then you could resolve 1920 x 1080 lines. But what happens when those lines no longer line up absolutely perfectly with the pixels? Let’s imagine that each line is offset by exactly half a pixel: what would you see? Each pixel would see half of a black line and half of a white line, so each pixel would see 50% white and 50% black, and the output from that pixel would be mid grey. With the adjacent pixels all seeing the same thing, they would all output mid grey. So by panning the image by half a pixel, instead of seeing 1920×1080 black and white lines all we see is a totally grey frame.

As you continued to shift the chart relative to the pixels, say by panning across it, it would flicker between pin sharp lines and grey. If the camera was not perfectly aligned with the chart, some of the image would appear grey, or different shades of grey depending on the exact pixel-to-chart alignment, while other parts might show distinct black and white lines. This is aliasing; it’s not nice to look at and can in effect reduce the resolution of the final image to zero.

So to counter this you deliberately reduce the system resolution (lens + sensor) to around half the pixel count, so that it is impossible for any one pixel to see only one object. By blurring the image across two pixels you ensure that aliasing won’t occur. It should also be noted that the same thing can happen with a display or monitor, so trying to show a 1920×1080 image on a 1920×1080 monitor can have the same effect.
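The half-pixel-offset thought experiment above can be simulated directly. This sketch assumes idealised square pixels that simply average the light falling on them; `chart` is a hypothetical test pattern with black and white lines exactly one pixel wide.

```python
def sample(chart, offset, n_pixels, samples_per_pixel=100):
    """Average the chart brightness over each pixel's width."""
    out = []
    for p in range(n_pixels):
        total = 0.0
        for s in range(samples_per_pixel):
            x = offset + p + (s + 0.5) / samples_per_pixel
            total += chart(x)
        out.append(total / samples_per_pixel)
    return out

# Black/white lines exactly one pixel wide (the sensor's full resolution):
chart = lambda x: 1.0 if int(x) % 2 == 0 else 0.0

aligned = sample(chart, offset=0.0, n_pixels=8)  # lines line up with pixels
shifted = sample(chart, offset=0.5, n_pixels=8)  # chart panned half a pixel

print(aligned)  # alternating 1.0 / 0.0 - full contrast
print(shifted)  # every pixel reads 0.5 - a uniformly grey frame
```

With the chart perfectly aligned every pixel sees a whole line and the pattern resolves; pan it half a pixel and every pixel averages half a black line and half a white line, and the whole frame collapses to mid grey, just as described above.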
When I did my recent F3 resolution tests I used a term called the MTF or modulation transfer function, which is a measure of the contrast between adjacent pixels. MTF50 is the point where there is 50% of the maximum contrast difference between the black and white lines on the test chart.
When visually observing a resolution chart, you can see the point where the lines on the chart can no longer be distinguished from one another. This is the resolution vanishing point and is typically somewhere around MTF15 to MTF5, i.e. the contrast between the black and white lines becomes so low that you can no longer distinguish one from the other. The problem is that as you are looking for the point where you can no longer see any difference, you are attempting to measure the invisible, so it is prone to gross inaccuracies. In addition, the contrast at MTF10, near the vanishing point between black and white, will be very, very low, so in a real world image you would often struggle to ever see fine detail at MTF10 unless it was made of strong black and white edges.
So for resolution tests a more consistent result can be obtained by measuring the point at which the contrast between the black and white lines on the chart reduces to 50% of maximum, or MTF50 (as resolution decreases so too does contrast). So while MTF50 does not determine the ultimate resolution of the system, it gives a very reliable performance indicator that is repeatable and consistent from test to test. What it will tell you is how sharp one camera will appear to be compared to the next.
As the Nyquist frequency is half the sampling frequency of the system, for a 1920 x 1080 sensor anything over 540 LP/ph will potentially alias, so we don’t want lots of detail above this. As optical low pass filters cannot instantly cut off unwanted frequencies, there will be a gradual resolution tail-off that spans the Nyquist frequency, and there is a fine balance between getting a sharp image and excessive aliasing. In addition, as real world images are rarely black and white lines (square waves) or fixed high contrast patterns, you can afford to push things a little above Nyquist to gain some extra sharpness. A well designed 1920 x 1080 HD video camera should resolve around 1000TVL. This is where seeing the MTF curve helps, as it’s important to see how quickly the resolution is attenuated past MTF50.
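To make the MTF idea concrete, here is a toy measurement: pass sine patterns of increasing spatial frequency through a simple 3-tap blur and record how much modulation survives. The blur is a stand-in assumption for the real lens + low pass filter + sensor chain; the point is only to show contrast falling with frequency and an MTF50 crossing point emerging.

```python
import math

def blur3(sig):
    """3-tap blur with wraparound, standing in for lens + low pass filter."""
    n = len(sig)
    return [(sig[(i - 1) % n] + sig[i] + sig[(i + 1) % n]) / 3.0
            for i in range(n)]

def mtf(cycles, n=360):
    """Fraction of the input modulation surviving for a sine of `cycles` per n pixels."""
    sine = [0.5 + 0.5 * math.sin(2 * math.pi * cycles * i / n) for i in range(n)]
    out = blur3(sine)
    return (max(out) - min(out)) / (max(sine) - min(sine))

for cycles in (10, 40, 80, 120):
    print(cycles, round(mtf(cycles), 2))
# Modulation falls as frequency rises; for this toy system MTF50 sits
# between 40 and 80 cycles, and by 120 cycles the contrast is
# effectively zero - resolution has vanished even though the sensor
# still has pixels to spare.
```

Plotting `mtf()` against frequency would give exactly the kind of MTF curve described above, and the MTF50 point is simply where that curve crosses 0.5.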
With Bayer pattern sensors it’s even more problematic due to the reduced pixel count for the R and B samples compared to G.
The resolution of the EX1 and F3 is excellent for a 1080 camera. Cameras that boast resolutions significantly higher than 1000TVL will have aliasing issues; indeed the EX1/EX3 can alias in some situations, as does the F3. These cameras are right at the limits of what will allow for a good, sharp image at 1920×1080.

Download and print your own test charts.

Clearly these will never be as good as, or as accurate as properly produced charts. Most home printers just don’t have the ability to produce true blacks with razor sharp edges and the paper you use is unlikely to be optimum. But, the link below takes you to a nice collection of zone plates and resolution charts that are useful for A/B comparisons. I split them up into quarters and then print each quarter on a sheet of A4 paper, joining them all back together to produce a nice large chart.
http://www.bealecorner.org/red/test-patterns/

When is 4k really 4k, Bayer Sensors and resolution.

First let’s clarify a couple of terms. Resolution can be expressed in two ways. It can be expressed as pixel resolution, i.e. how many individual pixels there are on the sensor, or as TV lines (TVL/ph), i.e. how many individual lines can actually be seen. If you point a camera at a resolution chart, what you are talking about is the point at which you can no longer discern one black line from the next. TVL/ph is the resolution normalised for the picture height, so aspect ratio does not confuse the equation. TVL/ph is a measure of the actual resolution of the camera system. With video cameras TVL/ph is the normally quoted term, while pixel resolution or pixel count is often quoted for film replacement cameras. I believe the TVL/ph term is preferable as it is a true measure of the visible resolution of the camera.
The term 4K started in film with the use of 4K digital intermediate files for post production and compositing. The exposed film is scanned using a single row scanner that is 4,096 pixels wide. Each line of the film is scanned 3 times, once each through a red, green and blue filter, so each line is made up of three 4K pixel scans, a total of just under 12K samples per line. Then the next line is scanned in the same manner, all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio film frame (4x3) that equates to roughly 4K x 3K. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4K x 3K, or roughly 36 million samples. That is what 4K originally meant: a 4K x 3K x 3 intermediate file.
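The scan arithmetic above can be checked in a couple of lines. The 3,072-line vertical count is an assumed round figure for a 4x3 frame scanned at 4,096 pixels wide; it lands close to the roughly 36 million samples quoted above.

```python
width = 4096                  # one scan line, per colour filter
height = width * 3 // 4       # assumed 3,072 lines for a 4x3 (1.33:1) frame
samples = width * height * 3  # each line scanned three times: R, G and B

print(f"{samples / 1e6:.1f} million samples per frame")  # 37.7 million samples per frame
```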
Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample size would be 8 million samples; Red Epic, 13.8 million. But it doesn’t stop there, because Red (like the F3) uses a Bayer sensor where the pixels have to sample the 3 primary colours. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green as for red and blue. So you have an array made up of blocks of 4 pixels, BG above GR.
Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing you have to reduce the resolution of the image falling on the sensor below that of the pixel sample rate. You don’t want fine details that the sensor cannot resolve falling on to the sensor, because the missing picture information will create strange patterns called moire and aliasing.
It is impossible to produce an optical low pass filter that has an instant cut-off point, and we don’t want any picture detail that cannot be resolved falling on the sensor, so the filter cut-off must start below the sensor resolution. Next we have to consider that a 4K Bayer sensor is in effect a 2K horizontal pixel green sensor combined with a 1K red and 1K blue sensor, so where do you put the low pass cut-off? As information from the four pixels in the Bayer pattern is interpolated left/right/up/down, there is some room to have the low pass cut-off above the 2K pixels of the green channel, but this can lead to problems when shooting objects that contain lots of primary colours. If you set the low pass filter to satisfy the green channel you will get strong aliasing in the R and B channels. If you set it so there would be no aliasing in the R and B channels, the image would be very soft indeed. So camera manufacturers will put the low pass cut-off somewhere between the two, leading to trade-offs between resolution and aliasing. This is why with Bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image: it’s aliasing in the R and B channels. This problem is governed by the laws of physics and optics and there is very little that the camera manufacturers can do about it.
In the real world this means that a 4K Bayer sensor cannot resolve more than about 1.5K to 1.8K TVL/ph without serious aliasing issues. Compare this with a 3-chip design with separate RGB sensors. With three 1920×1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels, you should still get around 1K TVL/ph. That’s one reason why Bayer sensors, despite being around since the 70s and being cheaper to manufacture than 3-chip designs (which have their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add ever more pixels to get higher resolution, like the F35 with its (non-Bayer) 14.4 million pixels.
This is a simplified look at what’s going on with these sensors, but it highlights the fact that 4K does not mean 4K; in fact it doesn’t even mean 2K TVL/ph, as the laws of physics prevent that. In reality even the very best 4K pixel Bayer sensor should NOT be resolving more than 1.8K TVL/ph. If it is, it will have serious aliasing issues.
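As a quick sanity check of the pixel budget described above (the 4096 x 2160 sensor dimensions are an illustrative assumption; real sensors vary):

```python
h, v = 4096, 2160            # assumed photosite grid for a "4K" sensor
total = h * v

green = total // 2           # half the sites in a Bayer mosaic are green
red = blue = total // 4      # a quarter each for red and blue

print(f"total {total / 1e6:.1f}M | green {green / 1e6:.1f}M | "
      f"red {red / 1e6:.1f}M | blue {blue / 1e6:.1f}M")
# Only half the sites sample green and a quarter each sample red and
# blue, which is why the delivered resolution ends up well below what
# the headline pixel count suggests.
```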
After all that, those of you I have not lost yet are probably thinking: well hang on a minute, what about that film scan? Why doesn’t that alias when there is no low pass filter there? Well, two things are going on. One is the dynamic structure of all those particles used to create a film image: because it is different from frame to frame it reduces the fixed pattern effects of the sampling, which makes any aliasing totally different from frame to frame and so far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4K scanner.

Why is Sensor Size Important: Part 2, Diffraction Limiting

Another thing that you must consider when looking at sensor size is something called “diffraction limiting”. For standard definition this is not as big a problem as it is for HD; with HD it is a big issue.

Basically, the problem is that light doesn’t always travel in straight lines. When a beam of light passes over a sharp edge it gets bent; this is called diffraction. So when light passes through the lens of a camera, the light around the edge of the iris gets bent, and this means that some of the light hitting the sensor is slightly de-focussed. The smaller you make the iris, the greater the percentage of diffracted light with respect to non-diffracted light. Eventually the amount of diffracted, and thus de-focussed, light becomes large enough to start to soften the image.

With a very small sensor, even a tiny amount of diffraction will bend the light enough for it to fall on the pixel adjacent to the one it’s supposed to be focussed on. With a bigger sensor and bigger pixels, the amount of diffraction required to bend the light to the next pixel is greater. In addition, the small lenses on cameras with small sensors mean the iris will be smaller.

In practice, this means that an HD camera with 1/3″ sensors will noticeably soften if it is stopped down (closed) more than f5.6, 1/2″ cameras more than f8 and 2/3″ cameras more than f11. This is one of the reasons why most pro level cameras have adjustable ND filters. The ND filter acts like a pair of sunglasses, cutting down the amount of light entering the lens and as a result allowing you to use a wider iris setting. This softening happens with both HD and SD cameras; the difference is that with the low resolution of SD it was much less noticeable.
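As a rough back-of-envelope check of those f-stops: the diameter of the diffraction blur (the Airy disk) is approximately 2.44 × wavelength × f-number. The sensor widths and the “blur spans about three pixels” softening criterion below are simplifying assumptions, not a rigorous optical model.

```python
WAVELENGTH_UM = 0.55                      # green light, in microns

def airy_diameter_um(f_number):
    """Approximate Airy disk diameter: 2.44 * wavelength * f-number."""
    return 2.44 * WAVELENGTH_UM * f_number

h_pixels = 1920                           # HD sensor
sensors_um = {'1/3"': 4800, '1/2"': 6400, '2/3"': 8800}  # assumed active widths

for name, width_um in sensors_um.items():
    pitch = width_um / h_pixels           # size of one pixel, in microns
    for f in (2.8, 4, 5.6, 8, 11, 16):
        if airy_diameter_um(f) > 3 * pitch:   # blur spans roughly 3 pixels
            print(f'{name} sensor: softening likely beyond f/{f}')
            break
```

With these assumed widths the script flags f/5.6, f/8 and f/11 for the 1/3″, 1/2″ and 2/3″ sensors respectively, in line with the stops quoted above; the bigger the pixels, the further you can stop down before diffraction bites.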