The FX3’s larger brothers, the FX6 and FX9, have a function called “APR” that periodically inspects every pixel on the sensor and normalises or maps out any out-of-spec pixels. With modern 4K cameras having at least 8.8 million pixels, the chances of a few going out of spec or being damaged by cosmic rays from time to time are quite high. So on the FX6 and FX9 you will get a reminder to perform the APR process around once a week.
From what I understand, the Alpha series cameras and the FX3 also perform a similar process periodically and automatically. Because these cameras have a mechanical shutter that can shut out any external light, there is no need for any user intervention, so you will not be aware that it’s happening. On the FX6 and FX9 the user has to place a cap over the lens or sensor, which is why the camera asks you before it can happen.
But what if you find you have some bright or hot pixels on the FX3? Perhaps you have just travelled on a plane, where the high altitude reduces the atmosphere’s damping effect on the high-energy particles from space that can damage pixels. Well, you can go into the camera’s menu system and force it to run its pixel mapping process, which does the same thing as APR on the other cameras.
You need to go to:
MENU: (Setup) → [Setup Option] → select [Pixel Mapping] and then select OK. It doesn’t take long, and I would recommend that you do this after flying on a plane or prior to any shoot where you will use large amounts of gain, as this is when hot pixels are most likely to show up.
Before the large-sensor revolution, most professional video cameras used three sensors, one each for red, green and blue. Each of those sensors normally had as many pixels as the resolution of the recording format, so you had enough pixels in each colour for full resolution in each colour.
Then along came large sensor cameras, where the only way to make it work was by using a single sensor (the optical prism needed for three sensors would be too big to accommodate any existing lens system). So now all of your pixels have to be on one sensor, divided up between red, green and blue.
Almost all of the camera manufacturers ignored the inconvenient truth that a colour sensor with 4K of pixels won’t deliver 4K of resolution. We were sold these new 4K cameras, but the 4K doesn’t mean 4K resolution, it means 4K of pixels. To be fair to the manufacturers, they didn’t claim 4K resolution, but they were also quite happy to let end users think that that’s what the 4K meant.
My reason for writing about this topic again is that I just had someone on my Facebook feed discussing how wonderful it was to be shooting at 6K with a new camera, as this would give lots of space for reframing for 4K.
The nature of what he wrote – “shooting at 6K” – implies shooting at 6K resolution. But he isn’t; his 6K sensor is probably delivering around 4K resolution, and he won’t have any room for reframing if he wants to end up with a 4K resolution final image. Again, in the name of fairness, shooting with 6K of pixels is going to be better than shooting with 4K of pixels if you do choose to reframe. But we really, really need to be careful about how we use terms like 4K or 6K. What do we really mean? What are we really talking about? Because the more we muddle pixels with resolution, the less clear it will be what we are actually recording. Eventually no one will really understand that the two are different, and the differences really do matter.
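The pixels-versus-resolution point can be put into rough numbers. As a sketch, assume a single-chip Bayer sensor resolves roughly 70% of its photosite count as luma detail – a commonly quoted ballpark figure, not something from this post – and that “6K” means 6144 photosites across:

```python
# Rough sketch: effective luma resolution of a single-chip Bayer sensor.
# The 0.7 "Bayer factor" is a commonly quoted approximation, not an exact figure.

BAYER_FACTOR = 0.7  # fraction of the photosite count resolved as luma detail

def effective_resolution(photosites_h: int) -> float:
    """Approximate horizontal luma resolution of a Bayer sensor."""
    return photosites_h * BAYER_FACTOR

six_k = effective_resolution(6144)   # a "6K" sensor
four_k = effective_resolution(4096)  # a "4K" sensor

print(f"6K sensor resolves roughly {six_k:.0f} horizontal lines")   # ~4301
print(f"4K sensor resolves roughly {four_k:.0f} horizontal lines")  # ~2867

# A 4096-wide delivery wants ~4096 lines of true resolution, so a 6K
# Bayer sensor leaves very little headroom for reframing at full quality.
print(f"Reframe headroom at full 4K resolution: {six_k / 4096:.2f}x")  # ~1.05x
```

On these assumptions a “6K” sensor lands only just above true 4K resolution, which is exactly why the reframing headroom is far smaller than the pixel counts suggest.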
So you have just taken delivery of a brand new PXW-FX9, turned it on and plugged it into a 4K TV or monitor – and, shock horror, there are little bright dots in the image – hot pixels.
First of all, don’t be alarmed, this is not unusual, in fact I’d actually be surprised if there weren’t any, especially if the camera has travelled in any airfreight.
Video sensors have millions of pixels and they are prone to disturbance from cosmic rays. It’s not unusual for some to become out of spec. So all modern cameras incorporate various methods of recalibrating or re-mapping those pesky problem pixels. On the Sony professional cameras this is called APR. Owners of the Sony F5, F55, Venice and FX9 will see a “Perform APR” message every couple of weeks as this is a function that needs to be performed regularly to ensure you don’t get any problems.
You should always run the APR function after flying with the camera, especially on routes over the poles, as cosmic ray exposure is greater in these areas. Also, if you intend to shoot at high gain levels it is worth performing an APR run before the shoot.
If your camera doesn’t have a dedicated APR function, typically found in the maintenance section of the camera menu system, then often the black balance function will have a very similar effect. On some Sony cameras repeatedly performing a black balance will activate the APR function.
If there are a lot of problem pixels then it can take several runs of the APR routine to sort them all out. But don’t worry, it is normal and it is expected. All cameras suffer from it. Even if you have 1000 dead pixels that’s still only a teeny tiny fraction of the 19 million pixels on the sensor.
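To put that “teeny tiny fraction” into numbers, here is a one-line check using the roughly 19 million photosites quoted above for the sensor:

```python
# How big a fraction is 1000 bad pixels on a ~19-megapixel sensor?
bad_pixels = 1000
total_pixels = 19_000_000  # approximate photosite count quoted in the post

fraction = bad_pixels / total_pixels
print(f"{fraction:.4%} of all pixels")  # about 0.0053%
```

Even a thousand problem pixels is around five thousandths of one percent of the sensor.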
APR takes just 30 seconds or so to complete. It’s also good practice to black balance at the beginning of each day to help minimise fixed pattern noise and set the camera’s black level correctly. Just remember to ensure there is a cap on the lens or camera body to exclude all outside light when you do it!
An F3 user was given access to the service manual to remove a stuck pixel on their F3. The service manual shows that you can address pixels manually to mask them. There are pixel positions 1 to 2468 horizontally and 1 to 1398 vertically. This ties in nicely with the published specification of the F3 at 3.45 million pixels.
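The arithmetic behind that claim is easy to verify – multiplying the two address ranges from the service manual gives the published figure:

```python
# Sanity check: does the addressable pixel grid in the F3 service manual
# match the published 3.45 megapixel specification?
h_positions = 2468  # horizontal addresses 1..2468 (from the service manual)
v_positions = 1398  # vertical addresses 1..1398

total = h_positions * v_positions
print(f"{total:,} pixels, i.e. about {total / 1e6:.2f} million")  # 3,450,264 ≈ 3.45 million
```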
At the LLB (Sound, Light and Vision) trade fair in Stockholm this week we had both an SRW9000PL and a PMW-F3 side by side on the stand, both connected to matching monitors. After changing a couple of basic Picture Profile settings on the F3 (Cinegamma 1, Cinema Matrix), just looking at the monitors it was impossible to tell which was which.
Over the next few posts I’m going to look at why sensor size is important. In most situations larger camera sensors will outperform smaller sensors. That is an over-simplified statement, as there are many things that affect sensor performance, including continuing improvements in the technologies used. But if you take two current-day sensors of similar resolution and one is larger than the other, the larger one will usually outperform the smaller one. Not only will the sensors themselves perform differently, but other factors come into play, such as lens design and resolution, diffraction limiting and depth of field. I’ll look at those in subsequent posts; for today I’m just going to look at the sensor itself.
Pixel size is everything. If you have two sensors with 1920×1080 pixels and one is a 1/3″ sensor and the other is a 1/2″ sensor, then the pixels themselves on the larger 1/2″ sensor will be bigger. Bigger pixels will almost always perform better than smaller pixels. Why? Think of a pixel as a bucket that captures photons of light. If you relate that to a bucket that captures water, consider what happens if you put two buckets out in the rain. A large bucket with a large opening will capture more rain than a small bucket.
Bigger pixels each capture more light.
It’s the same with the pixels on a CMOS or CCD sensor: the larger the pixel, the more light it will capture, so the more sensitive it will be. Taking the analogy a step further, if the buckets are both the same depth, the large bucket will be able to hold more water before it overflows. It’s the same with pixels: a big pixel can store a larger charge of electrons before it overflows (photons of light get converted into electrical charge within the pixel). This increases the dynamic range of the sensor, as a large pixel will be able to hold a bigger charge before overflowing than a small pixel.
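The bucket analogy can be given rough numbers. As a sketch, take typical published active-area dimensions for 16:9 sensors – about 4.8×2.7mm for a 1/3″ type and 6.4×3.6mm for a 1/2″ type (my assumed figures, not from this post) – and work out the area of each pixel:

```python
# Rough sketch: pixel area for two 1920x1080 sensors of different sizes.
# Active-area dimensions are typical published values for 16:9 sensor types
# (assumed here, not taken from the post); gaps between pixels are ignored.

def pixel_area_um2(sensor_width_mm: float, sensor_height_mm: float,
                   h_pixels: int, v_pixels: int) -> float:
    """Area of one pixel in square micrometres."""
    pitch_h = sensor_width_mm * 1000 / h_pixels   # pixel pitch in micrometres
    pitch_v = sensor_height_mm * 1000 / v_pixels
    return pitch_h * pitch_v

third_inch = pixel_area_um2(4.8, 2.7, 1920, 1080)  # ~1/3" type: 2.5um pitch
half_inch = pixel_area_um2(6.4, 3.6, 1920, 1080)   # ~1/2" type: ~3.33um pitch

print(f'1/3" pixel area: {third_inch:.2f} square micrometres')   # 6.25
print(f'1/2" pixel area: {half_inch:.2f} square micrometres')    # ~11.11
print(f"Light-gathering ratio: {half_inch / third_inch:.2f}x")   # ~1.78x
```

On these assumed dimensions, each pixel on the 1/2″ sensor has about 1.78 times the area of its 1/3″ counterpart, so each bucket catches nearly twice as much light.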
All the electronics within a sensor generate electrical noise. In a sensor with big pixels, which captures more photons of light per pixel than a smaller sensor, the ratio of light captured to electrical noise is better, so the noise is less visible in the final image. In addition, the heat generated in a sensor increases the amount of unwanted noise. A big sensor will dissipate heat better than a small sensor, so once again the big sensor will normally have a further noise advantage.
So as you can see, in most cases a large sensor has several electronic advantages over a smaller one. In the next post I will look at some of the optical advantages.