For many years I have said that it’s the quality of the lens, followed by the sensor, that ultimately determines the quality of the images a video camera will produce. Well, I’m starting to believe that is no longer the case. Why do I think this? We have reached a point in the development of sensor technology where most sensors made for mid to high end video are capable of resolving more than may be required, capturing greater dynamic range than can be sensibly displayed, and with noise levels low enough not to be a significant issue. I’m not for one minute suggesting that all sensors are created equal, but most professional video cameras now have sensors that perform to a very high standard, so much so that other factors start to play a more significant role in your choice of camera.
Let’s take a look for a moment at three cameras from three very different companies: the Red Epic, the Sony F3 and the Canon C300. Now, I have not used a C300 myself yet, so I’m basing my comments on the opinions of others and the clips I have been able to view online, so I may be wrong, but basically all three of these cameras have sensors that perform very similarly (with the obvious exception of the Epic’s resolution). When you view footage at 1920 x 1080 from any of these cameras it can be damn hard to tell which is which. There are small, subtle differences between the images, but many of these can be changed and tweaked in post production. I’d bet that if you could examine the raw data from the sensors you would see similar dynamic range and similar noise per pixel.
With sensors now performing at such a high level, it’s post production and workflow that is becoming the bigger differentiator between cameras, or to be more precise, how you get that wonderful sensor data from the camera to the edit suite. Red’s Epic uses a workflow that records the sensor data in a largely unprocessed state onto solid state drives. The 14 bit Bayer sensor data is compressed (default 5:1), but as this is a 4/5K camera you’re still looking at around 3.5GB per minute. This results in large files that must be processed before you can do anything with them. It’s a bit like developing a negative before you can edit. For those used to working with film this won’t be an issue, but those of us used to an instant, ready to edit workflow might find it a drag. Unless you have a Red Rocket card or a supercomputer the processing is slow and time consuming. You certainly don’t want to be doing it on a laptop. But, because you have the almost raw sensor data to work with, you get an amazing amount of flexibility, great dynamic range and the ability to choose your gain levels and white balance in post production.
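To put the Red workflow in perspective, here is a rough bit of arithmetic based on the roughly 3.5GB per minute figure above. The SSD capacity and copy speed are illustrative assumptions on my part, not Red specifications:

```python
# Rough workflow arithmetic for the Red Epic example above.
# The ~3.5 GB/min rate is the figure quoted in the post; the SSD
# size and sustained copy speed are illustrative assumptions.

GB_PER_MIN = 3.5          # quoted REDCODE rate at default compression
SSD_GB = 256              # hypothetical SSD capacity
OFFLOAD_MBPS = 100        # hypothetical sustained copy speed, MB/s

record_minutes = SSD_GB / GB_PER_MIN
offload_minutes = (SSD_GB * 1000) / OFFLOAD_MBPS / 60

print(f"Record time per SSD : {record_minutes:.0f} min")
print(f"Offload time per SSD: {offload_minutes:.0f} min")
```

So on these assumed numbers you get a bit over an hour of recording per drive, and then the best part of three quarters of an hour just copying it off, before any debayering or transcoding even starts.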
Now if we look at the F3 with S-Log, the approach is slightly different. The sensor’s output is processed to give a colour image (but that’s about it) and then just about the entire sensor dynamic range is mapped to a special gamma curve in such a way that each stop of exposure gets allocated roughly the same amount of data. This makes it possible to use the shallower bit depth of 10 bits, compared to Red’s 14 bits, to record a very similar dynamic range; in effect it is a type of signal compression. The F3 then feeds that high dynamic range, full R,G,B signal out of its dual link HDSDi connectors so that it can be recorded on an external device. So let’s say we take that output and record it as an uncompressed stream on a Convergent Design Gemini. The Gemini records the dual link RGB output as uncompressed DPX files onto SSDs. Now, I do love the image quality that’s possible with this workflow, but it comes at a price, and that price is file size. For 24p you’re looking at 750GB per hour, and that’s a lot of gigabytes to transfer from the SSDs to your edit suite. Even if you encode from the DPX files to the codec of your choice at this stage, it’s still a time consuming process. Not as slow as Red, but it still needs to be allowed for. Another point is that the DPX files don’t have audio, so you will need to sync up your audio and video tracks. So, OK, suppose we don’t use the Gemini and instead record with a Cinedeck to ProRes 444. Now we have smaller files and sync sound, but we have introduced compression artefacts. These artefacts might be incredibly small, but they will be there. We can push things further still with the F3 and take the reduced chroma resolution 4:2:2 S-Log output from the single HDSDi monitor output (or SDi A after the firmware update scheduled for early 2012) and record it on something like the Atomos Samurai using ProRes 422.
Now the files are getting quite manageable in size, the quality is still good enough for the grading that goes hand in hand with S-Log, but we have thrown away a bit of our colour resolution.
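For a sense of scale, the uncompressed DPX figure above can be sanity-checked with a little arithmetic. DPX packs 10-bit RGB into one 32-bit word per pixel; the ProRes 422 bitrate used here is an approximate published target rate for 1080p24 and should be treated as an assumption:

```python
# Back-of-envelope data rates for the F3 workflows described above.
# DPX stores 10-10-10 bit RGB padded into a 32-bit word per pixel,
# so each 1920x1080 frame is a little over 8 MB before headers.
# The ProRes 422 figure is an approximate target bitrate (assumed).

WIDTH, HEIGHT, FPS = 1920, 1080, 24
BYTES_PER_PIXEL = 4                  # 10-bit RGB padded to 32 bits

frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e6
dpx_gb_per_hour = frame_mb * FPS * 3600 / 1000

PRORES_422_MBPS = 117                # approx. target for 1080p24, assumed
prores_gb_per_hour = PRORES_422_MBPS / 8 * 3600 / 1000

print(f"DPX frame size      : {frame_mb:.1f} MB")
print(f"Uncompressed DPX    : {dpx_gb_per_hour:.0f} GB/hour")
print(f"ProRes 422 (approx.): {prores_gb_per_hour:.0f} GB/hour")
```

That works out at a bit over 700GB per hour for the raw pixel data alone, which lines up with the roughly 750GB per hour quoted above once file headers and overhead are allowed for, versus something in the region of 50GB per hour for ProRes 422.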
So, lots of options with the F3 and S-Log. Of course it shouldn’t be forgotten that you don’t have to use S-Log: you can take the 10 bit output with a standard gamma or cinegamma applied and record that on an external recorder. The downside is that some of the camera’s look, i.e. the chosen gamma, is now baked into the image, and it may require a little bit of un-picking in post to achieve your desired look.
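The S-Log idea described above, giving each stop of exposure roughly the same amount of data, can be sketched with a generic log curve. To be clear, this is not Sony’s actual S-Log formula, and the 12 stop range is an assumption; it’s just an illustration of why a log gamma fits a wide dynamic range into 10 bits:

```python
import math

# Illustrative only: a generic log2 mapping, NOT Sony's actual
# S-Log transfer function. It shows how a log curve hands each
# stop of exposure roughly the same number of 10-bit code values.

STOPS = 12               # assumed dynamic range to map
MAX_CODE = 1023          # 10-bit maximum code value

def log_encode(linear):
    """Map linear light (1.0 = clip) onto 0..MAX_CODE logarithmically."""
    stops_below_clip = -math.log2(linear)    # 0 at clip, STOPS at the floor
    return round(MAX_CODE * (1 - stops_below_clip / STOPS))

# Halving the light (one stop down) each time gives near-equal
# steps in code values:
for ev in range(0, 5):
    print(f"{ev} stops down: code {log_encode(0.5 ** ev)}")
```

With a conventional gamma those same 1024 code values would be spread very unevenly, with the shadows getting far fewer of them, which is exactly why heavy grading of standard gamma material falls apart sooner.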
Now let’s look at the Canon C300. The C300 does not have raw sensor data recording like the Red Epic, nor does it have dual HDSDi outputs that can take the full sensor latitude and send it out to an external device as 444 RGB like the F3 with S-Log. In fact, and I’ve already criticised Canon for this, it only has an 8 bit output, so it’s not ideal for grading. But, and it’s a big BUT, what the C300 does have is internal recording using 50Mb/s MPEG-2 at 4:2:2. Now, this isn’t the greatest codec in the world and it is only 8 bit, but it is a good solid codec that is very widely supported and well understood. The files are small and easily recorded onto low cost compact flash cards. You see, not everyone wants to heavily grade everything they shoot. Nor do they want to hang external recorders on the camera just to meet the standards laid down by some broadcasters such as the BBC. The Sony F3, while an excellent camera, cannot be used on a BBC HD production without an external recorder, but the C300 can. This makes the C300 simpler and easier to use. If you are on the road or a long way from base, being able to use cheap, readily available compact flash cards is a major advantage, and the small file sizes mean that you don’t have to lug around big raid arrays or supercomputers. The C300 produces a nice clean image, so even though it’s only 8 bit and 50Mb/s it will stand some moderate grading. Let’s face it, TV production has been using 8 bit for many years, and many, many 8 bit shows go through the grading suite and come out looking great. Perhaps you won’t be able to push and pull the footage from the C300 as much as the Red Epic or the F3 with S-Log, but with careful shooting and a good camera setup the C300 should still be able to deliver a great image.
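The storage argument for the C300 is easy to put into numbers. The card capacity here is an illustrative assumption, and audio overhead is ignored:

```python
# Storage arithmetic for the C300's internal 50 Mb/s MPEG-2
# recording. The CF card size is an assumed example, and audio
# overhead is ignored for simplicity.

BITRATE_MBPS = 50
CF_CARD_GB = 32                                  # hypothetical card

gb_per_hour = BITRATE_MBPS / 8 * 3600 / 1000
hours_per_card = CF_CARD_GB / gb_per_hour

print(f"{gb_per_hour:.1f} GB per hour of video")
print(f"{hours_per_card:.1f} hours on a {CF_CARD_GB} GB card")
```

At roughly 22.5GB per hour, a whole day’s shoot fits on a handful of cheap cards and copies to a laptop in minutes, which is the point: no raid arrays, no external recorders, no overnight transcodes.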
Going back to the title of this post, as I see it, and as I hope my examples above show, it’s not all about the sensor anymore. Perhaps now it’s as much about the workflow as anything else. You need to figure out what it is you need to achieve and how you will get there. Do you need to squeeze every last ounce of performance out of the sensor at the expense of a slower or more convoluted workflow, or will a simple workflow allow you to spend more time actually shooting or editing your piece, helping you achieve the result you desire?
5 thoughts on “It’s no longer just about the sensor.”
Great summary of the state of cameras, Alister! I appreciate the fact that there are different needs for different projects. One question, how do you think the CD nanoflash recording off the F3 compares with the C300 footage? Both are 8 bit, both use compact flash, but the nano can record much higher bit rates that presumably store more info of some type.
The NanoFlash on the F3 works well. I still use mine from time to time, shooting at 50Mb/s for XDCAM optical disc compatibility. On paper the F3 with a NanoFlash at 50Mb/s should perform very similarly to the C300. Increasing the bit rate on the NanoFlash will decrease compression artefacts and mosquito noise, so you should be able to push the higher bit rate NanoFlash footage further in post than you can the 50Mb/s C300. Of course, you could always stick the NanoFlash on the C300 too, and then the C300 would get a similar boost.
I was actually going to send you a PM about this when I logged on, but you basically answered everything I was looking for! I LOVE the F3 look. Have not seen much bad footage out of the camera from looking online – maybe badly shot, but the F3 shines through. I’m really torn about which way to go… I have a pre-order in for a C300. I was looking for someone to talk me out of it, but that’s not going to happen here. Thanks for making this decision a little easier.
I couldn’t agree more Alister.
Workflow is what one always has to consider.
Gone are the days when one could decide on what camera to buy only by comparing specifications.
I would also wish that photographers would more often trust their eyes, rather than counting pixels.
The camera is just a tool for acquiring the images. It will not make a poorly lit scene magically look good, nor will one camera tell the story better than another. These are down to the skill of the cinematographer and crew. Take a well lit scene and it may be that one camera will produce a marginally more pleasing image than the other, but ultimately it’s the way the scene is lit (or not lit) that makes the biggest difference. A good cinematographer can make almost any camera look good, a good camera does not automatically make a good cinematographer.
One thing I would add though is that when you do have a camera that you know is capable of a great image, I think people are more inclined to put more effort into getting that good image. Give someone a poor camera and often they become lazy and just blame it on the camera.