There is something a little curious about the specs for the sensor in the FS700:
| Imager | Exmor Super35 CMOS sensor |
| Total pixels | approx. 11.6M |
| Effective pixels, movie shooting (16:9) | approx. 8.3M |
| Effective pixels, still picture shooting (16:9) | approx. 8.4M |
| Effective pixels, still picture shooting (3:2) | approx. 7.1M |
Why create a sensor with 11.6 million pixels and then use only 8.3M? It is normal to have some extra pixels that are used for setting black levels etc., but this is a massive difference between the number of actual pixels on the sensor and the number used to create the pictures. Where are all the unused pixels?

Given that this is a Super 35mm sensor, the active area used for video is roughly APS-C sized, so it's quite a big sensor already. What would you put a near full-frame 35mm 11.6MP sensor in these days? That's a low pixel count for a modern large-sensor DSLR or stills camera; even the compact NEX-7 stills camera has 24MP. If (and this is just random speculation) the FS700 is taking an 8.3MP window out of the middle of the sensor, that makes it a pretty big chip.

Another thought is that the FS700 does read the full height and width of the sensor but then uses pixel skipping, only actually reading 8.3MP. But why do that? In stills mode the camera only uses 8.4MP, yet with so many extra pixels you could get higher-resolution stills. So… why 11.6MP, and what else was this sensor designed for?
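To put the gap in perspective, here is some back-of-envelope arithmetic. This is purely illustrative; the 100-pixel border is a deliberately generous assumption, and the 3840 x 2160 window is my guess at what an 8.3M 16:9 area looks like, not anything from Sony's spec:

```python
# Rough numbers behind the puzzle.
total, movie = 11.6e6, 8.3e6
print((total - movie) / 1e6)                 # 3.3 -> 3.3M pixels spare
print(round((total - movie) / total * 100))  # 28 -> ~28% of the sensor

# Even an implausibly wide 100-pixel masked border all the way around a
# 3840 x 2160 active window only gets you to ~9.5M pixels:
print((3840 + 200) * (2160 + 200) / 1e6)     # 9.5344
```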
hmmm…
Well, maybe a 4:3 sensor? Just like Arri's Alexa?
But why design a 4:3 sensor for a camera that doesn't use it? It takes up valuable space on the silicon wafer to make sensors larger than needed, so it follows that this sensor probably has some other application. The question is: what?
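For what it's worth, the 4:3 idea is at least in the right ballpark. A quick sketch, again assuming the 8.3M 16:9 window is a 3840 x 2160 grid (the spec doesn't actually state this):

```python
w, h_169 = 3840, 2160
h_43 = w * 3 // 4            # 2880 rows for a 4:3 frame of the same width
print(w * h_169 / 1e6)       # 8.2944  -> matches the quoted 8.3M
print(w * h_43 / 1e6)        # 11.0592 -> close to the quoted 11.6M total
```

11.1M is not an exact match for 11.6M, but a 4:3 grid lands far closer to the quoted total than the 16:9 window does.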
Will the number of used pixels change when the sensor is used to pick up 4K? How does this compare with the FS100 sensor?
The FS100 has 3.4MP, so a lot fewer than the FS700. I don't know exactly how the camera will work in 4K, but as it has 8.3MP it would make sense if the same number of pixels were used and the readout were then quad HD.
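The arithmetic supports that guess, assuming a 1:1 pixel readout (my assumption, not a confirmed spec):

```python
print(3840 * 2160)   # 8294400 -> quad HD needs ~8.3M pixels: a match
print(4096 * 2160)   # 8847360 -> DCI 4K would need ~8.8M: too many
```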
Does the FS700 have some sort of digital stabilisation? I think this is the reason the video crop on the Sony 24MP sensors is so high.
It does have an electronic stabiliser, and that probably does use some extra look-around area, maybe 5 or 10%, but that still doesn't account for the large discrepancy. You can't just use a load of extra pixels beyond the Super 35mm frame area, as many lenses would no longer be suitable. It's also not known whether this stabiliser will work at 4K; it may not, as the camera will be bypassing the image processing in order to deliver a 4K raw image.
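A quick check of how far a look-around margin actually gets you. Illustrative only, again assuming a 3840 x 2160 active window and treating the margin as extra width and height per axis:

```python
for margin in (1.05, 1.10):                  # 5% and 10% extra per axis
    w, h = 3840 * margin, 2160 * margin
    print(round(w * h / 1e6, 1))             # 9.1 then 10.0 (million pixels)
```

Even a generous 10% per axis reaches only about 10.0M, still well short of 11.6M.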
The actual physical size of the FS700 sensor is 24.2mm x 14.8mm. However, it uses only 23.6 x 13.3mm of effective area. I believe the reason is that Sony is using a sensor-shift stabilisation system similar to the first systems developed by Minolta (AS, Anti-Shake), where the sensor itself is moved (as opposed to DIS, Digital Image Stabilisation, which stabilises the image in software).
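Those dimensions let you estimate the pixel pitch. A sketch under the same unconfirmed assumption as above, that the 16:9 effective area holds a 3840 x 2160 grid:

```python
print(round(23.6 / 3840 * 1000, 2))    # 6.15 um pitch horizontally
print(round(13.3 / 2160 * 1000, 2))    # 6.16 um vertically (consistent)
# Spare silicon at that pitch:
print(round((24.2 - 23.6) / 0.00615))  # ~98 extra columns in total
print(round((14.8 - 13.3) / 0.00615))  # ~244 extra rows in total
```

Interestingly, the spare rows far outnumber the spare columns at that pitch, which could fit either a taller-than-16:9 native grid or room for vertical movement, though that is reading a lot into two spec numbers.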
Sony bought the camera division of Konica Minolta and then changed the system's name to simply SteadyShot. Sony now works with different stabilisation systems: SSI (SteadyShot Inside, sensor shifting), OSS (Optical SteadyShot) and SSS (Super SteadyShot).
No one could give a proper explanation of what SSS was or why it was better. However, it seems that SSS used to be a combination of Digital Image Stabilisation and Optical Stabilisation.
Nowadays, it seems that Sony will start using the SSS nomenclature for their new system, the Balanced Optical SteadyShot launched in 2012. This new system is nothing more than the combination of the SSI (sensor shifting) and OSS (optical stabilisation) technologies. Sony released the FS700 in 2012.
PS: the SteadyShot Active Mode is nothing more than an aggressive, panning-aware optical stabilisation, similar to the ones you can find on Canon lenses.
It’s actually very simple:
Sony don't actually physically shift the sensor. That's old-school, inferior technology that can lead to all kinds of issues.
SteadyShot = optical image stabilisation, usually via a servo-driven floating lens element within the lens.
Active SteadyShot = pixel-shift IS (see the sketch after this list).
Super SteadyShot = optical IS + pixel-shift IS.
Balanced Optical SteadyShot is known as BOSS, where the lens and sensor are mounted together in a single gimballed assembly.
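To make the pixel-shift idea concrete, here is a minimal sketch of that general technique: deliver a crop window from an over-scanned readout and slide it around to cancel measured shake. This is purely illustrative, not Sony's actual implementation; the function, the 5% over-scan, and the motion numbers are all invented for the example:

```python
import numpy as np

def stabilised_crop(frame, motion_xy, out_w=3840, out_h=2160):
    """Slide the delivered crop window around inside an over-scanned frame
    to cancel measured camera shake (given here in whole pixels)."""
    full_h, full_w = frame.shape[:2]
    # Centre the window, then offset it opposite to the measured motion.
    x0 = (full_w - out_w) // 2 - motion_xy[0]
    y0 = (full_h - out_h) // 2 - motion_xy[1]
    # Clamp so the window never leaves the sensor's look-around margin.
    x0 = max(0, min(full_w - out_w, x0))
    y0 = max(0, min(full_h - out_h, y0))
    return frame[y0:y0 + out_h, x0:x0 + out_w]

# A frame over-scanned by ~5% per axis relative to the delivered image.
frame = np.zeros((2268, 4032, 3), dtype=np.uint8)
print(stabilised_crop(frame, motion_xy=(12, -7)).shape)  # (2160, 3840, 3)
```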
CMOS sensors almost always have a pixel count and surface area greater than the active image area. The extra pixels around the edge of the active area are masked (optically black) and used for pixel noise sampling and signal biasing control.
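As an illustration of what those masked border pixels are for, here is a minimal black-level clamp sketch. This is the generic technique, not the FS700's actual pipeline; the 16-pixel margin and the helper name are made up for the example, and real pipelines do this per channel/row in hardware:

```python
import numpy as np

def black_level_clamp(raw, margin=16):
    """Estimate the dark offset from a masked (optically black) border
    `margin` pixels wide, then subtract it from the active area."""
    border = np.concatenate([
        raw[:margin, :].ravel(),   # top rows
        raw[-margin:, :].ravel(),  # bottom rows
        raw[:, :margin].ravel(),   # left columns (corners counted twice,
        raw[:, -margin:].ravel(),  # right columns  which is fine for a mean)
    ])
    offset = int(round(border.mean()))
    active = raw[margin:-margin, margin:-margin]
    return np.clip(active - offset, 0, None), offset

# Fake raw frame: a uniform dark level of ~64 counts everywhere.
raw = np.random.randint(60, 69, size=(2192, 3872), dtype=np.int32)
corrected, offset = black_level_clamp(raw)
print(corrected.shape, offset)  # (2160, 3840) and an offset near 64
```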