
What’s So Magical About Full Frame – Or Is It all Just ANOTHER INTERNET MYTH?

FIRST THINGS FIRST:
The only way to change the perspective of a shot is to change the position of the camera relative to the subject or scene. Just put a 1.5x wider lens on a s35 camera and you have exactly the same angle of view as a Full Frame camera. It is an internet myth that Full Frame changes the perspective or the appearance of the image in a way that cannot be exactly replicated with other sensor or frame sizes. The only thing that changes perspective is how far you are from the subject. It's one of those laws of physics and optics that can't be broken. The only way to see more or less around an object is by changing your physical position.

The only thing that changing the focal length or sensor size changes is magnification, and you can change the magnification either by changing sensor size or focal length; the effect is exactly the same either way. So in terms of perspective, angle of view or field of view, an 18mm s35 setup will produce an identical image to a 27mm FF setup. The only difference may be in DoF, depending on the aperture: f4 on FF will provide the same DoF as f2.8 on s35, so if both lenses are set to f4 the FF image will have a shallower DoF.
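
To put rough numbers on this, here is a minimal sketch (assuming a 1.5x crop factor between s35 and Full Frame; the exact figure varies a little from camera to camera) of how the equivalent focal length, aperture and entrance pupil work out:

```python
# Sketch of the s35 <-> Full Frame equivalence described above.
# Assumes a 1.5x crop factor (the exact figure varies slightly by camera).
CROP = 1.5

def ff_equivalent(focal_s35, f_s35):
    """FF focal length and f-number giving the same angle of view and
    (approximately) the same depth of field as the s35 combination."""
    return focal_s35 * CROP, f_s35 * CROP

def entrance_pupil(focal, f_number):
    """Entrance pupil diameter in mm, a rough proxy for how much glass a lens needs."""
    return focal / f_number

focal_ff, f_ff = ff_equivalent(18, 2.8)
print(f"18mm f/2.8 on s35 ~= {focal_ff:.0f}mm f/{f_ff:.1f} on FF (roughly f/4)")
print(f"s35 pupil at f/2.8: {entrance_pupil(18, 2.8):.1f}mm, "
      f"FF pupil at the same f/2.8: {entrance_pupil(27, 2.8):.1f}mm")
# The 27mm FF lens needs a ~1.5x wider entrance pupil (2.25x the area)
# to offer the same f/2.8 as the 18mm s35 lens.
```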

Again though, physics plays a part here: if you want to get that shallower DoF from a FF camera then the FF lens will normally need to have the same aperture as the s35 lens. To do that, the elements in the FF lens need to be bigger, gathering twice as much light so that it can put the same intensity of light as the s35 lens across the twice as large surface area of the FF sensor. So generally you will pay more for a FF lens with a like for like aperture than for a comparable s35 lens. Or you simply won't be able to get an equivalent in FF because the optical design becomes too complex, too big, too heavy or too costly.
This in particular is a big issue for parfocal zooms. At FF and larger imager sizes they can be fast or have a big zoom range, but to do both is very, very hard and typically requires some very exotic glass. You won't see anything like the affordable super 35mm Fujinon MK zooms in full frame, certainly not at anywhere near the same price. This is why, for decades, 2/3″ sensors (and 16mm film before that) ruled the roost for TV news: lenses with big zoom ranges and large, fast apertures were relatively affordable.
Perhaps one of the commonest complaints I see today with larger sensors is "why can't I find an affordable, fast, parfocal zoom with more than a 4x zoom range?". Such lenses do exist. For s35 you have lenses like the $22K Canon CN7 17-120mm T2.9, which is pretty big and pretty heavy. For Full Frame the nearest equivalent is the more expensive $40K Fujinon Premista 28-100 T2.9, which is a really big lens weighing in at almost 4kg. But look at the numbers: both will give a very similar AoV on their respective sensors at the wide end, but the much cheaper Canon has a greatly extended zoom range and will get a tighter shot than the Premista at the long end. Yes, the DoF will be shallower with the Premista, but you are paying almost double, it is a significantly heavier lens and it has a much reduced zoom ratio. So you may need both the $40K Premista 28-100 and the $40K Premista 80-250 to cover everything the Canon does (and a bit more). So as you can see, getting that extra shallow DoF may be very costly. And it's not so much about the sensor, but more about the lens.
The History of large formats:
It is worth considering that back in the 50's and 60's we had VistaVision, a horizontal 35mm format that is the equivalent of 35mm FF, plus 65mm and a number of other larger than s35 formats, all in an effort to get better image quality.
VistaVision (the closest equivalent to 35mm Full Frame).
VistaVision didn't last long, about 7 or 8 years, because better quality film stocks meant that similar image quality could be obtained from regular s35mm film, shooting VistaVision was difficult due to the very shallow DoF and focus challenges, and it was twice the cost of regular 35mm film. It did make a brief comeback in the 70's for shooting special effects sequences where very high resolutions were needed. VistaVision was superseded by Cinemascope, which uses 2x anamorphic lenses and conventional, vertically running 35mm film, and Cinemascope was subsequently largely replaced by 35mm Panavision (the two being virtually the same thing and often used interchangeably).
65mm formats.
At around the same time there were various 65mm (with 70mm projection) formats including Super Panavision, Ultra Panavision and Todd-AO. These too struggled and very few films were made using 65mm film after the end of the 60's. There was a brief resurgence in the 80's and again recently there have been a few films, but production difficulties and cost have meant they tend to be niche productions.
Historically there have been many attempts to establish mainstream larger than s35 formats. But by and large audiences couldn't tell the difference, and even if they did they wouldn't pay extra for it. Obviously today the cost implication is tiny compared to the extra cost of 65mm film or VistaVision. But the bottom line remains that normally the audience won't actually be able to see any difference, because in reality there isn't one, other than perhaps a marginal resolution increase. But it is harder to shoot FF than s35. Comparable lenses are more expensive, lens choices are more limited, and focus is more challenging at longer focal lengths or large apertures. If you get carried away with too large an aperture you can get miniaturisation and cardboarding effects if you are not careful (these can occur with s35 too).
Can The Audience Tell – Does The Audience Care?
Cinema audiences have not been complaining that the DoF isn’t shallow enough, or that the resolution isn’t high enough (Arri’s success has proved that resolution is a minor image quality factor). But they are noticing focus issues, especially in 4K theaters.
So while FF and the other larger formats are here to stay, Full Frame is not the be-all and end-all. Many, many people believe that FF has some kind of magic that makes the images different to smaller formats because they "read it on the internet so it must be true". I think the internet sometimes creates a placebo effect: read something enough times and you become convinced the images are different, even when in fact they are not. Once they realise that actually it isn't different, I'm quite sure many will return to s35, because that does seem to be the sweet spot where DoF and focus are manageable and IQ is plenty good enough. Only time will tell, but history suggests s35 isn't going anywhere any time soon.

Today's modern cameras give us the choice to shoot either FF or s35. Either can result in an identical image; it's only a matter of aperture and focal length. So pick the one that you feel most comfortable with for your production. FF is nice, but it isn't magic.

Really it’s all about the lens.

The really important thing is your lens choice. I believe that what most people put down as "the full frame effect" has nothing to do with the sensor size but is down to the qualities of the lenses they are using. Full frame stills cameras have been around for a long time and as a result there is a huge range of very high quality glass to choose from (as well as cheaper budget lenses). In the photography world APS-C, which is similar in size to super 35mm movie film, has always been considered a lower cost or budget option, and many of the lenses designed for APS-C have been built down to a price rather than up to a quality. This makes a difference to the way the images may look. So often Full Frame lenses may offer better quality or a more pleasing look, just because the glass is better.

I recently shot a project using Sony's Venice camera over 2 different shoots. For the first shoot we used Full Frame and the Sigma Cine Primes. The images we got looked amazing. For the second shoot, where we needed at times to use higher frame rates, we shot using super 35 with a mix of the Fujinon MK zooms and Sony G Master lenses. Again the images looked amazing, and the client and the end audience really can't tell the footage from the first shoot apart from the footage from the second.

Downsampling from 6K.

One very real benefit that shooting 6K full frame does bring, with both the FX9 and Sony Venice (or any other 6K FF camera), is that when you shoot at 6K and downsample to 4K you will have a higher resolution image with better colour and in most cases lower noise than if you started at 4K. This is because the bayer sensors that all the current large sensor cameras use don't resolve 4K when shooting at 4K. To get 4K you need to start with 6K.
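
As a rough illustration of why that is: a bayer sensor's real-world luma resolution is commonly estimated at around 0.7-0.8x its photosite count. The exact factor depends on the optical low pass filter and the debayer algorithm, so treat the sketch below as a rule of thumb rather than a measurement, and the 4096/6000 widths as generic examples.

```python
# Rule-of-thumb sketch: a bayer sensor delivers roughly 0.7-0.8x its
# photosite count as real luma resolution after debayering.
DEBAYER_FACTOR = 0.75  # assumed mid-point of the usual 0.7-0.8 range

def effective_width(photosites_across):
    return photosites_across * DEBAYER_FACTOR

for width in (4096, 6000):
    print(f"~{width} photosites across -> roughly {effective_width(width):.0f} "
          "pixels of usable luma resolution")
# ~4096 -> ~3072, which falls short of a 4096-wide 4K deliverable.
# ~6000 -> ~4500, which comfortably exceeds it, so the 4K downsample looks sharper.
```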

How can 16 bit X-OCN deliver smaller files than 10 bit XAVC-I?

Sony's X-OCN (X Original Camera Negative) is a new type of codec from Sony. Currently it is only available via the R7 recorder, which can be attached to a Sony PMW-F5, F55 or the new Venice cinema camera.

It is a truly remarkable codec that brings the kind of flexibility normally only available with 16 bit linear raw files, but with a file size that is smaller than many conventional high end video formats.

Currently there are two variations of X-OCN.

X-OCN ST is the standard version and X-OCN LT is the "light" version. Both are 16 bit and both contain 16 bit data based directly on what comes off the camera's sensor. The LT version is barely distinguishable from a 16 bit linear raw recording and the ST version is considered "visually lossless". Having that sensor data in post production allows you to manipulate the footage over a far greater range than is possible with traditional video files. Traditional video files will already have some form of gamma curve as well as a colour space and white balance baked in. This limits the scope of how far the material can be adjusted and reduces the amount of picture information you have (relative to what comes directly off the sensor).

Furthermore, most traditional video files are 10 bit, with a maximum of 1024 code values or levels within the recording. There are some 12 bit codecs, but these are still quite rare in video cameras. X-OCN is 16 bit, which means that you can have up to 65,536 code values or levels within the recording. That's a colossal increase in tonal values over traditional recording codecs.

But the thing is that X-OCN LT files are a similar size to Sony’s own XAVC-I (class 480) codec, which is already highly efficient. X-OCN LT is around half the size of the popular 10 bit Apple ProRes HQ codec but offers comparable quality. Even the high quality ST version of X-OCN is smaller than ProRes HQ. So you can have image quality and data levels comparable to Sony’s 16 bit linear raw but in a lightweight, easy to handle 16 bit file that’s smaller than the most commonly used 10 bit version of ProRes.

But how is this even possible? Surely such an amazing 16 bit file should be bigger!

The key to all of this is that the data contained within an X-OCN file is based on the sensor's output rather than a traditional video signal. The cameras that produce the X-OCN material all use bayer sensors. In a traditional video workflow the data from a bayer sensor is first converted from the luminance values that the sensor produces into a YCbCr or RGB signal.

So if the camera has a 4096×2160 bayer sensor, in a traditional workflow this pixel level data gets converted to 4096×2160 of Green plus 4096×2160 of Red plus 4096×2160 of Blue (or the same of Y, Cb and Cr). In total you end up with around 26.5 million data points which then need to be compressed using a video codec.

However, if we bypass the conversion to a video signal and just store the data that comes directly from the sensor, we only need to record a single set of 4096×2160 data points – 8.8 million. This means we only need to store 1/3rd as much data as in a traditional video workflow, and it is this huge data saving that is the main reason why it is possible for X-OCN to be smaller than traditional video files while retaining amazing image quality. It's simply a far more efficient way of recording the data from a bayer camera.
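
A quick back-of-the-envelope check of those data point counts, using the 4096×2160 example above:

```python
# Data points needed per frame for the 4096 x 2160 example above.
width, height = 4096, 2160
sensor_points = width * height       # one value per photosite, as stored by X-OCN
rgb_points = 3 * sensor_points       # debayered R, G and B (or Y, Cb, Cr) planes

print(f"Sensor data points per frame:    {sensor_points:,}")   # 8,847,360
print(f"RGB/YCbCr data points per frame: {rgb_points:,}")      # 26,542,080
print(f"Saving: {rgb_points // sensor_points}x fewer values to compress")  # 3x
```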

Of course this does mean that the edit or playback computer has to do some extra work, because as well as decoding the X-OCN file it has to be converted to a video signal, but Sony developed X-OCN to be easy to work with – which it is. Even a modest modern workstation will have no problem working with X-OCN. But the fact that you have that sensor data in the grading suite means you have an amazing degree of flexibility. You can even adjust the way the file is decoded to tailor whether you want more highlight or shadow information in the video file that will be created after the X-OCN is decoded.

Why isn't 16 bit much bigger than 10 bit? Normally a 16 bit file will be bigger than a 10 bit file. But within a video image there are often areas of very similar information. Video compression algorithms take advantage of this and, instead of recording a value for every pixel, will record a single value that represents all of the similar pixels. When you go from 10 bit to 16 bit, while you do have more bits of data to record, a greater percentage of the code values will be the same or similar, and as a result the codec becomes more efficient. So the file size does increase a bit, but not as much as you might expect.

So, X-OCN, out of the gate, only needs to store 1/3rd of the data points of a similar traditional RGB or YCbCr codec. Increasing the bit depth from the typical 10 bits of a regular codec to the 16 bits of X-OCN does then increase the amount of data needed to record it. But the use of a clever algorithm to minimise the data needed for those 16 bits means that the end result is a 16 bit file only a bit bigger than XAVC-I, but still smaller than ProRes HQ, even at its highest quality level.
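
Ignoring each codec's compression entirely and just counting raw bits per frame, a minimal sketch shows why the 16 bit sensor data still starts from a smaller budget than 10 bit RGB:

```python
# Raw bits per frame, before any compression, for the 4096 x 2160 example.
photosites = 4096 * 2160

sensor_16bit = photosites * 16        # one 16 bit value per photosite (X-OCN style)
rgb_10bit = photosites * 3 * 10       # three 10 bit values per pixel (RGB/YCbCr style)

print(f"16 bit sensor data: {sensor_16bit / 8e6:.1f} MB per frame")  # ~17.7 MB
print(f"10 bit RGB/YCbCr:   {rgb_10bit / 8e6:.1f} MB per frame")     # ~33.2 MB
print(f"Ratio: {sensor_16bit / rgb_10bit:.2f}")                      # ~0.53
# Even at 16 bit, the sensor-data route starts from roughly half the raw bits,
# which is how the compressed X-OCN file can end up smaller than ProRes HQ.
```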

Sensor sizes, where do the imperial sizes like 2/3″ or 4/3″ come from?

Video sensor size measurement originates from the first tube cameras, where the size designation related to the outside diameter of the glass tube. The area of the face of the tube used to create the actual image would have been much smaller, typically about 2/3rds of the tube's outside diameter. So a 1″ tube would give a roughly 2/3″ diameter active area, within which you would have a 4:3 frame with a 16mm diagonal.

An old 2/3″ tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm, giving an 11mm diagonal. This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor. A 1/2″ sensor has an 8mm diagonal and a 1″ sensor a 16mm diagonal.

Yes, it's confusing, but the same 2/3″ lenses as designed for tube cameras in the 1950's can still be used today on a modern 2/3″ video camera and will give the same field of view today as they did back then. So the sizes have stuck, even though they have little relationship to the physical size of a modern sensor. A modern 2/3″ sensor is nowhere near 2/3 of an inch across the diagonal.

This is why some manufacturers are now using the term “1 inch type”, as this is the active area that would be the equivalent to the active area of an old 1″ diameter Vidicon/Saticon/Plumbicon Tube from the 1950’s.

For comparison:

1/3″ = 6mm diag.
1/2″ = 8mm
2/3″ = 11mm
1″ = 16mm
4/3″ = 22mm

A camera with a Super35mm sensor would be the equivalent of approx 35-40mm
APS-C would be approx 30mm
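
As a quick sketch, those figures can be dropped into a small lookup, together with a check of the 8.8mm x 6.6mm tube active area mentioned above:

```python
import math

# Nominal "inch type" designations and the actual active-area diagonals
# they refer to (figures from the list above, in mm).
TYPE_DIAGONAL_MM = {'1/3"': 6, '1/2"': 8, '2/3"': 11, '1"': 16, '4/3"': 22}

# Check: a 2/3" tube's 4:3 active area of roughly 8.8mm x 6.6mm...
calculated = math.hypot(8.8, 6.6)          # sqrt(8.8^2 + 6.6^2)
listed = TYPE_DIAGONAL_MM['2/3"']
print(f'Calculated 2/3" diagonal: {calculated:.1f}mm, listed value: {listed}mm')
# Calculated 2/3" diagonal: 11.0mm, listed value: 11mm
```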

Micro 4/3, Super 35, DSLR and the impact on traditional Pro Camcorders.


I was asked by one of this blog's readers about my thoughts on this. It's certainly a subject that I have spent a lot of time thinking about. Traditionally, broadcast and television professionals have used large and bulky cameras that use 3 sensors arranged around a prism to capture the 3 primary colours. The 3 chip design gives excellent colour reproduction and full resolution images of the very highest quality. It's not, however, without its problems. First, it's expensive to match 3 sensors and accurately place them on a high precision prism made from very exotic glass. That prism also introduces image artefacts that have to be dealt with by careful electronic processing. The lenses that have to work with these thick prisms also require very careful design.
Single sensor colour cameras are not something new. I had a couple of old tube cameras that produced colour pictures from a single tube. Until recently, single chip designs were always regarded as inferior to multi-chip designs. However the rise of digital stills photography forced manufacturers to really improve the technologies used to generate a colour image from a single sensor. Sony's F35 camera, used to shoot movies and high end productions, is a single chip design with a special RGB pixel matrix. The most common method used by a single sensor is a bayer mask, which places a colour filter array in front of the individual pixels on the sensor. Bayer sensors now rival 3 chip designs in most respects. There is still some leakage of colours between adjacent pixels and the colour separation is not as precise as with a prism, but in most applications these issues are extremely hard to spot and the stills pictures coming from DSLRs speak for themselves.
A couple of years ago Canon really shook things up by adding video capabilities to some of their DSLRs. Even now (at the time of writing at least) these are far from perfect as they are, at the end of the day, high resolution stills cameras, so there are some serious compromises in the way video is done. But the Canons do show what can be done with a low cost single chip camera using interchangeable lenses. The shallow depth of field offered by the large, near 35mm size sensors (video cams normally use 2/3″, 1/2″ or smaller) can be very pleasing, and the lack of a prism makes it easier to use a wide range of lenses. So far I have not seen a DSLR or other stills camera with video that I would swap for a current pro 3 chip camera, but I can see the appeal and the possible benefits. Indeed I have used a Canon DSLR on a couple of shoots as a B camera to get very shallow DoF footage.
Sony's new NEX-VG10 consumer camcorder was launched a couple of weeks ago. It has the shape and ergonomics of a camcorder but the sensor and interchangeable lenses of a stills camera. I liked it a lot, but there is no zoom rocker and for day to day pro use it's not what I'm looking for. Panasonic and Sony both have professional large sensor cameras in the pipeline and it's these that could really shake things up.
While shallow DoF is often desirable in narrative work, for TV news and fast action it's not so desirable. When you are shooting the unexpected or something that's moving about a lot you need to have some leeway in focus. So for many applications a big sensor is not suitable. I dread to think what TV news would look like if it was all shot with DSLRs!
Having said that, a good video camera using a big sensor would be a nice piece of kit to have for those projects where controlling the DoF is beneficial.
What I am hoping is that someone will be clever enough to bring out a camera with a 35mm (or thereabouts) sized sensor that has enough resolution to allow it to be used with DSLR (or 4/3) stills camera lenses, but that can also be windowed down and provided with an adapter to take 2/3″ broadcast lenses without adding a focal length increase. This means that the sensor needs to be around 8 to 10 megapixels so that, when windowed down to use just the centre 2/3″, it still has around 3 million active pixels to give 1920×1080 resolution (you need more pixels than resolution with a bayer mask). This creates a problem when you use the full sensor though, as the readout will have to be very clever to avoid the aliasing issues that plague the current DSLRs, because the full sensor will have too much resolution. Maybe it will come with lens adapters that incorporate optical low pass filters to give the correct response for each type of lens.
A camera like this would, if designed right, dramatically change the industry. It would have a considerable impact on the sales of traditional pro video cameras, as one camera could be used for everything from movie production to TV news. By using a single sensor (possibly a DSLR sensor) the cost of the camera should be lower than a 3 chip design. If it has a 10 MP sensor then it could also be made 3D capable through the use of a 3D lens like the 4/3″ ones announced by Panasonic. These are exciting times we live in. I think the revolution is just around the corner. Having said all of this, I think it's also fair to point out that while you and I are clearly interested in the cutting edge (or bleeding edge), there are an awful lot of producers and production companies that are not, preferring traditional, tried and tested methods. It takes them years to change and adapt, just look at how long tape is hanging on! So the days of the full size 2/3″ camera are not over yet, but those of us that like to ride the latest technology wave have great things to look forward to.

Why is Sensor Size Important: Part 2, Diffraction Limiting

Another thing that you must consider when looking at sensor size is something called "Diffraction Limiting". For Standard Definition this was not such a big problem, but for HD it is a big issue.

Basically the problem is that light doesn't always travel in straight lines. When a beam of light passes over a sharp edge it gets bent; this is called diffraction. So when light passes through the lens of a camera, the light around the edge of the iris gets bent, and this means that some of the light hitting the sensor is slightly de-focussed. The smaller you make the iris, the greater the percentage of diffracted light with respect to non-diffracted light. Eventually the amount of diffracted, and thus de-focussed, light becomes large enough to start to soften the image.

With a very small sensor even a tiny amount of diffraction will bend the light enough for it to fall on the pixel adjacent to the one it's supposed to be focussed on. With a bigger sensor and bigger pixels, the amount of diffraction required to bend the light to the next pixel is greater. In addition, the small lenses on cameras with small sensors mean the iris will be physically smaller to start with.

In practice, this means that an HD camera with 1/3″ sensors will noticeably soften if it is stopped down (closed) more than f5.6, 1/2″ cameras more than f8 and 2/3″ cameras more than f11. This is one of the reasons why most pro level cameras have adjustable ND filters. The ND filter acts like a pair of sunglasses, cutting down the amount of light entering the lens and as a result allowing you to use a wider iris setting. This softening happens with both HD and SD cameras; the difference is that with the low resolution of SD it was much less noticeable.
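
As a rough sketch of where those f-numbers come from: assume green light at 0.55µm, 16:9 HD sensors with 1920 photosites across, approximate active widths of 4.8mm, 6.4mm and 9.6mm, and take softening as becoming visible once the Airy disk diameter (2.44 × wavelength × f-number) spans about three pixel pitches. All of those figures are assumptions for illustration, not a formal derivation, but they land close to the values above.

```python
# Rough estimate of the f-number beyond which diffraction softening shows.
# Assumptions: green light (0.55 um), 1920 photosites across a 16:9 sensor,
# softening judged visible once the Airy disk diameter (2.44 * wavelength * N)
# spans about three pixel pitches.
WAVELENGTH_UM = 0.55
AIRY_SPAN_PITCHES = 3.0

def diffraction_limit(sensor_width_mm, photosites_across=1920):
    pitch_um = sensor_width_mm * 1000 / photosites_across
    return AIRY_SPAN_PITCHES * pitch_um / (2.44 * WAVELENGTH_UM)

for name, width_mm in [('1/3"', 4.8), ('1/2"', 6.4), ('2/3"', 9.6)]:
    print(f"{name} HD sensor: softening from around f/{diffraction_limit(width_mm):.1f}")
# Roughly f/5.6, f/7.5 and f/11, close to the figures quoted above.
```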

Why Is Sensor Size Important: Part 1.


Over the next few posts I'm going to look at why sensor size is important. In most situations larger camera sensors will outperform smaller sensors. Now that is an over-simplified statement, as there are many things that affect sensor performance, including continuing improvements in the technologies used, but if you take two current day sensors of similar resolution and one is larger than the other, the larger one will usually outperform the smaller one. Not only will the sensors themselves perform differently, but other factors come into play such as lens design and resolution, diffraction limiting and depth of field. I'll look at those in subsequent posts; for today I'm just going to look at the actual sensor itself.

Pixel Size:

Pixel size is everything. If you have two sensors with 1920×1080 pixels and one is a 1/3″ sensor and the other is a 1/2″ sensor, then the pixels on the larger 1/2″ sensor will be bigger. Bigger pixels will almost always perform better than smaller pixels. Why? Think of a pixel as a bucket that captures photons of light. If you relate that to a bucket that captures water, consider what happens if you put two buckets out in the rain. A large bucket with a large opening will capture more rain than a small bucket.

(Illustration: small pixels each capture less light; bigger pixels each capture more.)

It's the same with the pixels on a CMOS or CCD sensor: the larger the pixel, the more light it will capture, so the more sensitive it will be. Taking the analogy a step further, if the buckets are both of the same depth, the large bucket will be able to hold more water before it overflows. It's the same with pixels: a big pixel can store a larger charge (photons of light get converted into electrical charge within the pixel) before it overflows. This increases the dynamic range of the sensor, as a large pixel will be able to hold a bigger charge before overflowing than a small pixel.
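
To put some rough numbers on the bucket analogy, here is a minimal sketch comparing two 1920×1080 sensors of different sizes (the 16:9 active widths of roughly 4.8mm and 6.4mm are assumed approximations):

```python
# Rough pixel size comparison for two 1920x1080 sensors of different sizes.
# Approximate 16:9 active widths assumed: ~4.8mm for 1/3", ~6.4mm for 1/2".
def pixel_pitch_um(sensor_width_mm, photosites_across=1920):
    return sensor_width_mm * 1000 / photosites_across

small = pixel_pitch_um(4.8)   # 1/3" sensor
large = pixel_pitch_um(6.4)   # 1/2" sensor
area_ratio = (large / small) ** 2

print(f'1/3" pixel: ~{small:.1f} um, 1/2" pixel: ~{large:.1f} um')
print(f'Each 1/2" pixel catches roughly {area_ratio:.1f}x as much light')  # ~1.8x
```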

Noise:

All the electronics within a sensor generate electrical noise. In a sensor with big pixels, which is capturing more photons of light per pixel than a sensor with smaller pixels, the ratio of light captured to electrical noise is better, so the noise is less visible in the final image. In addition, the heat generated in a sensor increases the amount of unwanted noise, and a big sensor will dissipate heat better than a small sensor, so once again the big sensor will normally have a further noise advantage.
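
As a rough illustration of why that ratio improves with bigger pixels, here is a minimal sketch assuming the sensor is limited by photon shot noise (which grows as the square root of the signal); the photon counts are purely illustrative:

```python
import math

# If photon shot noise dominates, SNR ~= signal / sqrt(signal) = sqrt(signal).
def shot_noise_snr(photons_captured):
    return math.sqrt(photons_captured)

# Illustrative numbers: a pixel with ~1.8x the area (the 1/2" vs 1/3" example
# above) captures ~1.8x the photons for the same exposure.
for photons in (10_000, 18_000):
    print(f"{photons:,} photons captured -> SNR of roughly {shot_noise_snr(photons):.0f}:1")
# 10,000 -> ~100:1 and 18,000 -> ~134:1, so the bigger pixel gives a cleaner image.
```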

So as you can see, in most cases a large sensor has several electronic advantages over a smaller one. In the next post I will look at some of the optical advantages.

Just how big is the 3D market?

As you walked around the BVE show in London last week you could not help but notice that just about every stand had some kind of reference to 3D production. The impression given was that 3D is here, it is going to be huge and everyone needs to be able to shoot in 3D. But is this really the case? Certainly there has been a big increase in the number of 3D movies produced in the last 18 months and Avatar is now the highest grossing movie ever made, with much of the profit coming from 3D screenings. Sky TV in the UK is starting a 3D TV channel later in the year and Discovery, ESPN and others have announced their intentions to launch stereoscopic channels. To add to all this, the TV manufacturers are also bringing large ranges of 3D TVs to market.

But before you all rush out and spend large sums of money on expensive 3D camera rigs you need to look more closely at what's going on and consider who will actually want to watch 3D. Now I am a fan of 3D, don't get me wrong, and I do believe that 3D is here to stay, but as I see it, until display technology finds a way to eliminate the need to wear special glasses, 3D is going to be reserved for special events, movies and spectaculars. Let's face it, who's going to want to put on a pair of glasses after a hard day's work just to watch the news or a soap? This is supported by Sky's recent 3D seminar where they stated that they are only looking at showing 4 hours of new 3D programming every week, and the only things they are looking for are movies, major sporting events, special events and one-off mega specials, "Planet Earth" type big budget docs. The rest of the week will be repeats and re-runs. So, in the UK it's likely that there will be a couple of OB trucks kitted out for 3D sports and other events filling a couple of hours a week, leaving just two hours which will be split between docs, specials and movies.

Now Sky 3D won't be the only outlet for 3D in the UK. There will be some corporate productions with budgets big enough for 3D and there will be a market for a few on-demand specialist channels and 3D BluRays and DVDs, but the really big market will be the 3D games market. Even so, for most production companies 3D could be an expensive mistake. Investing in expensive 3D rigs, pairs of cameras, 3D production monitors and edit systems won't be cheap. In addition there is a whole new set of skills to be learnt; shooting 3D is very different to 2D. Perhaps (sadly) the real future of 3D TV lies not with true 3D capture and filming but with clever boxes like the JVC IF-2D3D1 2D to 3D converter, which can take existing 2D material and convert it into pretty convincing 3D for the price of a single pro camcorder. It may even be that one day all home TVs will contain a similar converter and you will be able to watch whatever you want in pseudo 3D at the press of a button.

So back to the original question: how big is the 3D market? Well I think it's actually pretty small, probably best left to a few specialist production companies, 3D consultants and facilities companies. Certainly lots of 3D TVs will get sold to affluent techno geeks and home cinema enthusiasts, but let's face it, HD was a hard sell and you don't need glasses for that.