Tag Archives: sensor

Hot Pixels or bright pixels on the FX3, FX30, A7S3 etc

When you have millions of pixels on a video sensor it isn't surprising to find that every now and then one or two might go out of spec and show up in your footage as a white dot. These "hot" pixels are most commonly seen when using high ISOs or the upper of the camera's two base ISOs. Hot pixels are not uncommon and they are nothing to worry about.

The Fix:

Thankfully the issue is easily resolved by going to the camera's main menu: Setup Menu – Setup Option – Pixel Mapping. Then cap the lens or body cap the camera and run the pixel mapping. It only takes around 30 seconds and it should eliminate any white, black or coloured sensor pixel issues. The camera will ask you to do this periodically anyway, and you should do it regularly, especially after flying anywhere with the camera.

Sensor pixels can be damaged by energetic particles that come from cosmic events, so a hot pixel can appear at any time and without warning. They are not something to worry about; it is normal to get some bad pixels from time to time over the life of a camera. When you travel by air there is less of the atmosphere to protect your camera from these particles, so there is a higher than normal likelihood of a pixel going out of spec. Polar air routes are the worst, as the earth's magnetic field tends to funnel these particles towards the north and south poles. So, whenever you fly with your camera it is a good idea to run Pixel Mapping (or APR if you have an FX6, FX9 etc) before you start shooting.

FX6 Fan Noise and Fan Modes.

Cooling fans (or perhaps more accurately temperature regulating fans) are an unfortunate necessity on modern high resolution cameras. As we try to read more and more pixels, process them and then encode them at ever greater resolutions, more and more heat is generated. Throw in higher frame rates and the need to do that processing even faster and heat becomes an issue, especially in smaller camera bodies. So forced air cooling becomes necessary if you wish to shoot uninterrupted for extended periods.

Many camcorder users complain about fan noise, not just with the FX6 but with many modern cameras. But fans are something we need, so we have to learn to live with the noise they make. And the fan isn't just cooling the electronics; it is carefully regulating the temperature of the camera, trying to keep it within a narrow temperature range.

The fan regulates the temperature of the sensor by taking warm air from the processing electronics and passing it over fins attached to the back of the sensor. I am led to believe that at start up the fan runs for around 30 seconds to quickly warm up the sensor. From there the camera tries to hold the sensor and electronics at a constant warm temperature, not too cold, not too hot, so that the sensor noise levels and black levels remain constant. The sensor is calibrated for this slightly warm temperature.

As well as running in the default auto mode there are "minimum" and "off in record" modes for the fan in the technical section of the FX6's main menu. Minimum forces the fan to run all the time at a low level so it doesn't cycle on and off, possibly at higher levels. Off in record turns the fan off when recording – however the fan will still come on if there is a risk of damage due to overheating. Off in record can result in minor changes to noise and black levels during longer takes as the camera's internal temperature rises, but you'll likely only see this if you look carefully for it.

The Sony FX6 is Full Frame – Sometimes!

Perhaps I'm splitting hairs here a little bit – and I still think the FX6 is an amazing camera. But the more I look at its different scan modes and recording modes, the more I've realised that it's only actually "Full Frame" in a few specific settings.

When the FX6 is set to UHD and operating from 1 to 60 fps then it’s full frame and the whole width of the sensor is used. Put a Full Frame lens on the camera and you get the same FoV as an FX9 or any other camera with a similar sized sensor.

But if you want to shoot 4K DCI then something strange happens. Switch the FX6 to 4K DCI and the sensor is cropped/windowed by 5%, and instead of the field of view becoming 5% wider as happens on most cameras, it becomes 5% narrower. In 4K DCI the FX6 is very slightly less than Full Frame.

To shoot at more than 60fps the camera has to be in UHD. At 60fps and below it's full frame, but when you go above 60fps the image is cropped even more, this time by 10%, so the FoV gets 10% narrower.

If you want to record UHD raw at any frame rate the image is also cropped by 10%, so UHD with raw out at 30fps results in a 10% narrower FoV than when you are not outputting raw. When you enable raw at 4K DCI it's a 5% crop.

So while none of these crops are huge it is worth noting that the FX6 is actually a little less than full frame more often than not!

Just to put all this into some perspective the FX9’s Full Frame Crop 5K mode involves a 17% crop. The FX6 outputting UHD raw or recording UHD at more than 60fps is a 10% crop. That’s not a vast difference. In these modes the FX6 is closer to the FX9 5K mode than to Full Frame.

Why is this? Well the FX6’s sensor is 4.2K pixels across. In the “normal” UHD frame rates (up to 60fps) the full 4.2K is read and downscaled on the fly to 3840 x 2160 UHD.  When you shoot 4K DCI there is no downscale and instead the sensor is read out at 4K and the extra 0.2K of pixels at the edges of the frame are not used – 0.2K being 5% of 4.2K and thus you have a 5% crop and the FoV becomes 5% narrower in DCI 4K than in UHD.

When you shoot above 60 fps then the sensor is read directly at 3840 pixels rather than 4.2K to make the readout simpler and faster. So now we are reading 0.4K fewer pixels from the sides of the sensor which is 10% of the total pixels and we get a 10% narrower FoV above 60fps as a result.
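To put numbers on all of this, here is a quick back-of-envelope sketch in Python. It is my own arithmetic based on the rounded figures above (4.2K full width, 4K DCI readout, roughly 3.8K for the faster UHD readout), not anything from Sony's documentation, but it shows where the 5% and 10% figures come from.

FULL_WIDTH_K = 4.2   # approximate full sensor width in kilopixels

def crop_report(readout_width_k, label):
    # Compare the active readout width to the full sensor width.
    crop = 1 - readout_width_k / FULL_WIDTH_K
    multiplier = FULL_WIDTH_K / readout_width_k
    print(f"{label}: ~{crop:.0%} crop, FoV multiplier ~{multiplier:.2f}x")

crop_report(4.2, "UHD up to 60fps")              # full width, no crop
crop_report(4.0, "4K DCI")                       # roughly a 5% crop
crop_report(3.8, "UHD above 60fps / UHD raw")    # roughly a 10% crop (the 0.4K of unused pixels above)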

As I said at the start, perhaps I’m splitting hairs. I certainly don’t think this detracts from the FX6 in any significant way. But if it’s a camera you are thinking of getting, you should be aware of this.


Why hasn’t anyone brought out a super sensitive 4K camera?

Our current video cameras are operating at the limits of current sensor technology. As a result there isn’t much a camera manufacturer can do to improve sensitivity without compromising other aspects of the image quality.
Every sensor is made out of silicon, and silicon is around 70% efficient at converting photons of light into electrons of electricity. So the only things you can do to alter the sensitivity are to change the pixel size, reduce losses in the colour and low pass filters, use better micro lenses and use various methods to prevent the wires and other electronics on the face of the sensor from obstructing the light. But all of these will only ever make very small changes to the sensor performance, as the key limiting factor is the silicon used to make the sensor.
 
This is why even though we have many different sensor manufacturers, if you take a similar sized sensor with a similar pixel count from different manufacturers the performance difference will only ever be small.
 
Better image processing with more advanced noise reduction can help reduce noise, which can be used to mimic greater sensitivity. But NR rarely comes without introducing other artefacts such as smear, banding or a loss of subtle detail, so there are limits to how much noise reduction you want to apply.
 

So, unless there is a new sensor technology breakthrough we are unlikely to see any new camera come out with a large, genuine improvement in sensitivity. We are also unlikely to see a sudden jump in resolution without a sensitivity or dynamic range penalty for a like for like sensor size. This is why Sony's Venice and the Red cameras are moving to larger sensors, as this is the only realistic way to increase resolution without compromising other aspects of the image. It's why the current crop of S35mm 4K cameras all have very similar sensitivity, similar dynamic range and similar noise levels.

 

A great example of this is the Sony A7S. It is more sensitive than most 4K S35 video cameras simply because it has a larger full frame sensor, so the pixels can be bigger, so each pixel can capture more light. It's also why cameras with smaller 4K sensors will tend to be less sensitive and in addition have lower dynamic range (because the pixel size determines how many electrons each pixel can store before it overloads).
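As a rough illustration of the pixel size argument, here is a small sketch comparing the light collecting area per pixel of a 16:9 full frame sensor and a Super35 sensor at the same pixel count. The dimensions are idealised figures of my own; real sensors vary in layout, pixel count and fill factor.

def pixel_area_mm2(sensor_width_mm, sensor_height_mm, h_pixels, v_pixels):
    # Area of a single pixel, ignoring the gaps and wiring between pixels.
    return (sensor_width_mm / h_pixels) * (sensor_height_mm / v_pixels)

full_frame = pixel_area_mm2(36.0, 20.25, 3840, 2160)   # approx 16:9 active area of a full frame sensor
super35    = pixel_area_mm2(24.0, 13.5, 3840, 2160)    # approx 16:9 Super35 active area

print(full_frame / super35)   # ~2.25 - each full frame pixel collects over twice the light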

Watch your viewfinder in bright sunshine (viewfinders with magnifiers or loupes).

Just a reminder to anyone using a viewfinder fitted with an eyepiece, magnifier or loupe not to leave it pointing up at the sun. Every year I see dozens of examples of burnt and damaged LCD screens and OLED displays caused by sunlight entering the viewfinder eyepiece and getting focussed onto the screen, burning or melting it.

It can only take a few seconds for the damage to occur and it’s normally irreversible. Even walking from shot to shot with the camera viewfinder pointed towards the sky can be enough to do damage if the sun is out.

So be careful, cover or cap the viewfinder when you are not using it. Tilt it down when carrying the camera between locations or shots. Don’t turn to chat to someone else on set and leave the VF pointing at the sun. If you are shooting outside on a bright sunny day consider using a comfort shade such as an umbrella or large flag above your shooting position to keep both you and the camera out of the sun.

Damage to the viewfinder can appear as a smudge or dark patch on the screen that does not wipe off. If the camera was left for a long period it may appear as a dark line across the image. You can also sometimes melt the surround of the LCD or OLED screen.

As well as the viewfinder, don't point your camera directly into the sun. Even an ND filter may not protect the sensor from damage, as most regular ND filters allow the infra red wavelengths that do much of the damage straight through. Shutter speed makes no difference to the amount of light hitting the sensor in a video camera, so even at a high shutter speed damage to the camera's sensor or internal NDs can occur. So be careful when shooting into the sun. Use an IR ND filter and avoid shooting with the aperture wide open, especially with static shots such as time-lapse.

 

Sensor sizes, where do the imperial sizes like 2/3″ or 4/3″ come from?

Video sensor size measurement originates from the first tube cameras, where the size designation related to the outside diameter of the glass tube. The area of the face of the tube used to create the actual image would have been much smaller, typically about 2/3rds of the tube's outside diameter. So a 1″ tube would give a 2/3″ diameter active area, within which you would have a 4:3 frame with a 16mm diagonal.

An old 2/3″ tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm, giving an 11mm diagonal. This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor. A 1/2″ sensor has an 8mm diagonal and a 1″ sensor a 16mm diagonal.

Yes, it's confusing, but the same 2/3″ lenses designed for tube cameras in the 1950s can still be used today on a modern 2/3″ video camera and will give the same field of view today as they did back then. So the sizes have stuck, even though they have little relationship with the physical size of a modern sensor. A modern 2/3″ sensor is nowhere near 2/3 of an inch across the diagonal.

This is why some manufacturers are now using the term "1 inch type", as the active area is the equivalent of the active area of an old 1″ diameter Vidicon/Saticon/Plumbicon tube from the 1950s.

For comparison:

1/3″ = 6mm diag.
1/2″ = 8mm
2/3″ = 11mm
1″ = 16mm
4/3″ = 22mm

A camera with a Super35mm sensor would be the equivalent of approx 35-40mm
APS-C would be approx 30mm

PMW-F3 and FS100 Pixel Count Revealed.

This came up over on DVInfo.

An F3 user was given access to the service manual to remove a stuck pixel on their F3. The service manual shows that you can address pixels manually to mask them, with pixel positions 1 to 2468 horizontally and 1 to 1398 vertically. This ties in nicely with the published specification of the F3 at 3.45 million pixels.
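The arithmetic is simply the two figures from the service manual multiplied together:

h_positions, v_positions = 2468, 1398
print(h_positions * v_positions)   # 3450264 - i.e. roughly the published 3.45 million pixels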

At the LLB (Sound, Light and Vision) trade fair in Stockholm this week we had both an SRW9000PL and a PMW-F3 side by side on the stand, both connected to matching monitors. After changing a couple of basic Picture Profile settings on the F3 (Cinegamma 1, Cinema Matrix), just looking at the monitors it was impossible to tell which was which.

Measuring Resolution, Nyquist and Aliasing.

When measuring the resolution of a well designed video camera, you never want to see resolution that is significantly higher than HALF of the sensor's resolution. Why is this? Why don't I get 1920 x 1080 resolution from an EX1, which we know has 1920 x 1080 pixels, and why is the measured resolution often around half to three quarters of what you would expect?
There should be an optical low pass filter in front of the sensor in a well designed video camera that prevents frequencies above approximately half of the sensor's native resolution from getting to the sensor. This filter will not have an instantaneous cut off, instead attenuating fine detail by ever increasing amounts centred somewhere around the Nyquist limit for the sensor. The Nyquist limit is normally half of the pixel count with a 3 chip camera, or somewhat less than this for a Bayer sensor. As a result measured resolution gradually tails off somewhere a little above Nyquist, or half of the expected pixel resolution, but why is this?
It is theoretically possible for a sensor to resolve an image at its full pixel resolution. If you could line up the black and white lines on a test chart perfectly with the pixels on a 1920 x 1080 sensor then you could resolve 1920 x 1080 lines. But what happens when those lines no longer line up absolutely perfectly with the pixels? Let's imagine that each line is offset by exactly half a pixel. What would you see? Each pixel would see half of a black line and half of a white line, so each pixel would see 50% white, 50% black and the output from that pixel would be mid grey. With the adjacent pixels all seeing the same thing, they would all output mid grey. So by panning the image by half a pixel, instead of seeing 1920 x 1080 black and white lines all we see is a totally grey frame.

As you continued to shift the chart relative to the pixels, say by panning across it, it would flicker between pin sharp lines and grey. If the camera was not perfectly aligned with the chart, some of the image would appear grey or different shades of grey depending on the exact pixel to chart alignment, while other parts may show distinct black and white lines. This is aliasing. It's not nice to look at and can in effect reduce the resolution of the final image to zero.

So to counter this you deliberately reduce the system resolution (lens + sensor) to around half the pixel count so that it is impossible for any one pixel to only see one object. By blurring the image across two pixels you ensure that aliasing won't occur. It should also be noted that the same thing can happen with a display or monitor, so trying to show a 1920 x 1080 image on a 1920 x 1080 monitor can have the same effect.
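If you want to see this effect numerically, here is a small Python sketch (purely illustrative, not anything a camera actually does) that box-samples a one-pixel-wide black and white stripe pattern with a row of pixels. Line the stripes up with the pixels and you get full contrast lines; shift the pattern by half a pixel and every pixel integrates to the same mid grey.

import numpy as np

def sample_stripes(num_pixels, phase, oversample=1000):
    # Box-sample a stripe pattern (1 pixel wide black and white lines) shifted by `phase` pixels.
    x = (np.arange(num_pixels * oversample) / oversample) + phase
    stripes = np.floor(x) % 2                      # 0 = black stripe, 1 = white stripe
    return stripes.reshape(num_pixels, oversample).mean(axis=1)

print(sample_stripes(8, phase=0.0))   # [0. 1. 0. 1. ...]  full contrast lines
print(sample_stripes(8, phase=0.5))   # [0.5 0.5 0.5 ...]  uniform grey - the detail has vanished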
When I did my recent F3 resolution tests I used the MTF or modulation transfer function, which is a measure of the contrast between adjacent pixels, so MTF50 is where there is 50% of the maximum contrast difference between the black and white lines on the test chart.
When visually observing a resolution chart you can see where the lines on the chart can no longer be distinguished from one another. This is the resolution vanishing point and it is typically somewhere around MTF15 to MTF5, i.e. the contrast between the black and white lines becomes so low that you can no longer distinguish one from the other. The problem with this is that as you are looking for the point where you can no longer see any difference, you are attempting to measure the invisible, so it is prone to gross inaccuracies. In addition, the contrast between black and white at MTF10, the vanishing point, will be very, very low, so in a real world image you would often struggle to ever see fine detail at MTF10 unless it was strong black and white edges.
So for resolution tests a more consistent result can be obtained by measuring the point at which the contrast between the black and white lines on the chart reduces to 50% of maximum, or MTF50 (as resolution decreases so too does contrast). So while MTF50 does not determine the ultimate resolution of the system, it gives a very reliable performance indicator that is repeatable and consistent from test to test. What it will tell you is how sharp one camera will appear to be compared to the next.
As the Nyquist frequency is half the sampling frequency of the system, for a 1920 x 1080 sensor anything over 540 LP/ph will potentially alias, so we don't want lots of detail above this. As optical low pass filters cannot instantly cut off unwanted frequencies there will be a gradual resolution tail off that spans the Nyquist frequency, and there is a fine balance between getting a sharp image and excessive aliasing. In addition, as real world images are rarely black and white lines (square waves) and fixed high contrast patterns, you can afford to push things a little above Nyquist to gain some extra sharpness. A well designed 1920 x 1080 HD video camera should resolve around 1000TVL. This is where seeing the MTF curve helps, as it's important to see how quickly the resolution is attenuated past MTF50.
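For the sake of the arithmetic, here is how those figures relate (one line pair is one black plus one white line, i.e. two TV lines):

sensor_lines  = 1080
nyquist_lp_ph = sensor_lines / 2      # 540 line pairs per picture height
nyquist_tvl   = nyquist_lp_ph * 2     # 1080 TVL per picture height
print(nyquist_lp_ph, nyquist_tvl)     # 540.0 1080.0

measured_tvl = 1000                   # a typical figure for a good 1080 camera
print(measured_tvl / 2)               # 500.0 lp/ph - sitting just below the Nyquist limit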
With Bayer pattern sensors it’s even more problematic due to the reduced pixel count for the R and B samples compared to G.
The resolution of the EX1 and F3 is excellent for a 1080 camera. Cameras that boast resolutions significantly higher than 1000TVL will have aliasing issues; indeed the EX1/EX3 can alias in some situations, as does the F3. These cameras are right at the limits of what will allow for a good, sharp image at 1920 x 1080.

Are Cosmic Rays Damaging my camera and flash memory?

Earth is being constantly bombarded by charged particles from outer space. Many of these cosmic rays come from exploding stars in distant galaxies. Despite being incredibly small, some of these particles are travelling very fast and contain a lot of energy for their size. Every now and then one of these particles will pass through your camcorder. What happens to both CMOS and CCD sensors, as well as flash memory, is that the energetic particle punches a small hole through the insulator of the pixel or memory cell. In practice what then happens is that charge can leak from the pixel to the substrate or from the substrate to the pixel. In the dark parts of an image the number of photons hitting the sensor is extremely small, and each photon (in a perfect sensor) gets turned into an electron. It doesn't take much of a leak for enough additional electrons to seep through the hole in the insulation to the pixel and give a false, bright readout. With a very small leak the pixel may still be usable, simply by adding an offset to the readout to account for the elevated black level. In more severe cases the pixel will be flooded with leaked electrons and appear white; in this case the masking circuits should read out the adjacent pixel instead.

For a computer running with big voltage/charge swings between 1's and 0's this small leakage current is largely inconsequential, but it does not take much to upset the readout of a sensor when you're only talking about a handful of electrons. CMOS sensors are easier to mask as each pixel is addressed individually, and during camera start up it is normal to scan the sensor looking for excessively "hot" pixels. In addition many CMOS sensors incorporate pixel level noise reduction that takes a snapshot of the pixel's dark voltage and subtracts it from the exposed voltage to reduce noise. A side effect of this is that it masks hot pixels quite effectively. Due to the way a CCD's output is pulled down through the entire sensor, masking is harder to do, so you often have to run a special masking routine to detect and mask hot pixels.

A single hot pixel may not sound like much, but if it's right in the middle of the frame you will see it winking away at you every time that part of your scene is not brightly illuminated, and on dark scenes it will stick out like a sore thumb. Thankfully masking circuits are very effective at either zeroing out the raised signal level or reading out an adjacent pixel.

Flash memory can also experience these same insulation holes. There are two common types of flash memory, SLC and MLC. Single Level Cells have two states, on or off: any charge means on and no charge means off. A small amount of leakage, in the short term, would have minimal impact as it could take months or years for the cell to fully discharge, and even then there is a 50/50 chance that the empty cell will still be giving an accurate output as it may have been empty to start with. Even so, long term you could lose data, and a big insulation leak could discharge a cell quite quickly. MLC or Multi Level Cells are much more problematic. As the name suggests these cells can have several states, each state defined by a specific charge range, so one cell can store several bits of data. A small leak in an MLC cell can quickly alter the state of the cell from one level to the next, corrupting the data by changing the voltage.
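A toy example shows the principle (this is nothing like a real NAND controller, just an illustration of the thresholds): decode a cell's stored value from its charge level and the same small, hypothetical leak that an SLC cell shrugs off pushes an MLC cell into the next state.

def decode(charge, levels):
    # Map a normalised cell charge (0 to 1) to one of `levels` discrete states.
    step = 1.0 / levels
    return min(int(charge / step), levels - 1)

charge = 0.80
leak = 0.10   # hypothetical charge lost through a radiation damaged insulator

print(decode(charge, 2), decode(charge - leak, 2))   # SLC: 1 -> 1, the bit survives
print(decode(charge, 4), decode(charge - leak, 4))   # MLC: 3 -> 2, the stored value has changed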

The earth's magnetic field concentrates these cosmic rays towards the north and south poles. Our atmosphere does provide some protection from them, but some of these particles can actually pass right through the earth, so lead shielding etc has no significant effect unless it is several feet thick. Your camera is at most risk when flying on polar routes. On an HD camera you can expect to have 3 or 4 pixels damaged during a year at sea level; with a CMOS camera you may never see them, while with a CCD camera you may only see them with gain switched in.

SxS Pro cards (the blue ones) are SLC, while SxS-1 cards (the orange ones) use MLC, as MLC works out cheaper because fewer cells are required to store the same amount of data. Most consumer flash memory is MLC. So be warned, storing data long term on flash memory may not be as safe as you might think!

When is 4:4:4 not really 4:4:4.

The new Sony F3 will be landing in end users' hands very soon. One of the camera's upgrade options is a 4:4:4 RGB output, but is it really 4:4:4 or is it something else?

4:4:4 should mean no chroma sub-sampling, so the same number of samples for the R, G and B channels. This would be quite easy to get with a 3 chip camera as each of the 3 chips has the same number of pixels, but what about a Bayer sensor as used on the F3 (and other Bayer cameras too, for that matter)?

If the sensor is subsampling the aerial image's B and R compared to G (Bayer matrix, 2x G samples for each R and B) then no matter how you interpolate those samples, the B and R are still sub sampled and data is missing. Potentially, depending on the resolution of the sensor, even the G may be sub sampled compared to the frame size. In my mind a true 4:4:4 system means one pixel sample for each colour at every point within the image. So for 2K that's 2K R, 2K G and 2K B. For a Bayer sensor that would imply a sensor with twice as many horizontal and vertical pixels as the desired resolution, or a 3 chip design with a pixel for each sample on each of the R, G and B sensors. It appears that the F3's sensor has nowhere near this number of pixels; rumour has it at around 2.5k x 1.5k.
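To put some rough numbers on that (using the rumoured 2.5k x 1.5k sensor size purely as an illustration, not a confirmed specification), count the photosites available for each colour on a Bayer sensor against the samples a true 4:4:4 2K frame would need:

def bayer_samples(width, height):
    # On a Bayer sensor half the photosites are green, a quarter each are red and blue.
    total = width * height
    return {"G": total // 2, "R": total // 4, "B": total // 4}

sensor = bayer_samples(2500, 1500)    # rumoured F3 sensor size, for illustration only
needed = 2048 * 1080                  # samples needed per channel for true 4:4:4 at 2K

print(sensor)   # {'G': 1875000, 'R': 937500, 'B': 937500}
print(needed)   # 2211840 - more than even the green photosites, let alone the red or blue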

If it's anything less than one pixel per colour sample, then while the signal coming down the cable may have equal numbers of R, G and B data samples, the data streams won't contain equal amounts of picture information for each colour; the resolution of the B and R channels will be lower than the green. So while the signal might be 4:4:4, the system is not truly 4:4:4. Up-converting the 4:2:2 output from a camera to 4:4:4 does not make it a 4:4:4 camera. This is no different to the situation seen with some cameras with 10 bit HDSDI outputs that only contain 8 bits of data: it might be a 10 bit stream, but the data is only 8 bit. It's like a TV station transmitting an SD TV show on an HD channel. The channel might call itself an HD channel, but the content is still SD even if it has been upscaled to fill in all the missing bits.

Now don't get me wrong, I'm not saying that there won't be advantages to getting the 4:4:4 output option. By reading as much information as possible from the sensor prior to compression there should be an improvement over the 4:2:2 HDSDI output, but it won't be the same as the 4:4:4 output from an F35 where there is a pixel for every colour sample, but then the price of the F3 isn't the same as the F35 either!