NEX-FS700 8 Bit video but 12 Bit RAW in 4K, 2/3″ lenses using center crop?

Sony NEX FS700

The more I think about this camera the more exciting it becomes. At release the FS700 will be limited to HD and in many respects will be similar to the FS100. This means that although the FS700 has a 3G-capable HDSDi output, in video mode this output is still restricted to 8 bits by the internal video processing. However, from what I have been able to gather, the proposed 4K mode bypasses this processing altogether and outputs the sensor data directly as 12 bit RAW. How is this possible? Any video camera outputting video has to output three values for every point within the image. So for a 1920 x 1080 camera there are in effect three outputs: one luminance value for each point plus two chroma (colour) values, one Cb and one Cr. In a 422 system the resolution of each of the Cb and Cr channels is half the full resolution, so that's 960 x 1080 for Cb and 960 x 1080 for Cr. However you look at it that's a lot of data, even at only 8 bits, but 8 bit and even 10 bit 422 at 1920×1080 will fit into a standard 1.5G HDSDi signal. With a 3G HDSDi connection the available data bandwidth is doubled. This in itself gives the ability to transfer 444 HD data, with full-resolution R, G and B (or Y, Cb, Cr) channels at 8 or 10 bits (the FS700 is restricted to 8 bit processing).
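As a sanity check on those bandwidth figures, here is a rough back-of-envelope calculation. It is a sketch only: it counts the active-picture payload and ignores blanking, audio and ancillary data, so real SMPTE link budgets differ slightly; 1.485 Gbit/s is used as a nominal 1.5G capacity.

```python
# Approximate active-picture data rates for the HD formats discussed above.
# Sketch only: ignores blanking, audio and ancillary overhead.

def video_rate_gbps(width, height, fps, bits, sampling):
    """Approximate video payload in Gbit/s for a given chroma sampling."""
    samples_per_pixel = {"422": 2.0, "444": 3.0}[sampling]  # Y plus shared Cb/Cr
    return width * height * fps * bits * samples_per_pixel / 1e9

for fps, bits, sampling in [(30, 8, "422"), (30, 10, "422"), (30, 10, "444")]:
    rate = video_rate_gbps(1920, 1080, fps, bits, sampling)
    link = "1.5G" if rate <= 1.485 else "3G"
    print(f"1080/{fps}p {sampling} {bits}-bit: {rate:.2f} Gbit/s -> needs {link}")
```

As the printout shows, 8 and 10 bit 422 fit a 1.5G link while full-resolution 444 needs the doubled bandwidth of 3G, which matches the reasoning above.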
With 12 bit data however, at 4K there would not be enough bandwidth, even with 3G, to transfer a 444 or even a 422 video signal; the extra 2 bits of data need a lot of extra bandwidth. But Sony are not talking about video data, they are talking about RAW sensor data. The sensor in the FS700 is a bayer sensor. A bayer sensor has an array of pixels with colour filters over the top, passing only green light to every other pixel and red or blue light to the remaining pixels. The pixels themselves don't see different colours; all they see is a brightness or luminance value. It's not until the luminance data from the sensor is processed (de-bayered) that the colour information is created by extrapolating the luminance (brightness) values from the R, G and B filtered pixels. The de-bayer process creates an R, G and B value for each point in the 4K image, so three values for each point. However if we just take the RAW values, all we have is a single luminance value for each pixel on the sensor. De-bayering greatly increases the amount of data that needs to be processed; keeping the data as luminance only minimises how much data there is and makes it possible to pass 12 bits of 4K data over a 3G HDSDi cable.
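To put numbers on why raw bayer data is so much more compact than debayered video, here is a rough comparison. The 4096 x 2160 at 24fps figures are illustrative assumptions for a 4K frame, not Sony's published spec.

```python
# One reason raw fits where video doesn't: a bayer sensor yields ONE value
# per photosite, while debayered 444 RGB needs THREE values per pixel.
# Frame size and frame rate below are illustrative assumptions.

def sensor_rate_gbps(samples_per_pixel, width=4096, height=2160, fps=24, bits=12):
    """Approximate data rate in Gbit/s for the given samples per pixel."""
    return width * height * fps * bits * samples_per_pixel / 1e9

raw_rate = sensor_rate_gbps(1)  # 12-bit raw bayer: one sample per photosite
rgb_rate = sensor_rate_gbps(3)  # 12-bit 444 RGB after de-bayering
print(f"4K 12-bit raw bayer: {raw_rate:.2f} Gbit/s")
print(f"4K 12-bit 444 RGB  : {rgb_rate:.2f} Gbit/s (three times the raw data)")
```

Under these assumptions the raw stream comes in around 2.5 Gbit/s, within reach of a 3G link, while the debayered 444 version is three times larger and clearly would not fit.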
Now, this signal from the FS700 will not comply with any standard that I know of, so it will need a dedicated recorder or at least a recorder programmed to accept it, but it promises a lot of exciting possibilities.

For a start, you will get the full sensor dynamic range, so we should expect at least 12 stops of DR, maybe a bit more. In addition, having a 4K image means that when shooting for HD you can crop in to the image with no resolution loss. Here's a thought: you should be able to use a 2/3″ ENG broadcast zoom. Yes, the image on the sensor will vignette, but you should be able to extract a full 1920×1080 resolution image from the centre part of the 4K image. As this uses a smaller part of the sensor, your DoF for a given field of view will be similar to what you would have with a 2/3″ camcorder. So could the FS700 be that jack-of-all-trades camera many of us are looking for? 4K RAW and S35mm when you are making a filmic piece, and then, with a simple lens mount adapter (no optical elements needed), stick an ENG zoom on it and use it for news-style shooting. At the moment it looks like you will have to extract the HD 2/3″ centre crop from the RAW 4K yourself, but perhaps Sony will be able to add centre crop to the camera firmware at a later date.
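For anyone planning to pull that centre crop in post, the window arithmetic is trivial. This little sketch (the 4K frame size is an illustrative assumption) computes the centred HD region of a larger frame:

```python
# Sketch: compute the centred 1920x1080 window inside a larger frame, as you
# would when extracting a native-resolution HD crop from 4K raw in post.

def centre_crop_window(src_w, src_h, out_w=1920, out_h=1080):
    """Return (left, top, right, bottom) of the centred out_w x out_h window."""
    left = (src_w - out_w) // 2
    top = (src_h - out_h) // 2
    return left, top, left + out_w, top + out_h

# Illustrative 4K frame size; the FS700's exact raw dimensions may differ.
print(centre_crop_window(4096, 2160))
```

Any NLE or image library can then extract that window from each frame with no scaling, so the crop retains full 1920×1080 resolution.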
However you look at it, the FS700 is a very exciting proposition. I placed my order for one the day it was officially announced.


29 thoughts on “NEX-FS700 8 Bit video but 12 Bit RAW in 4K, 2/3″ lenses using center crop?”

  1. A very interesting analysis indeed! I am curious what the F3 equivalent will look like the day it comes, sharing the same 4K sensor but maybe offering onboard 4K RAW recording in a new format. It’s fun dreaming away :-)

  2. Hi Alister,
    First of all, thank you so much for all of the invaluable knowledge you share so generously with all of us! It is truly enlightening.
    Couple of questions:
    As a previous owner of the FS100 (which I enjoyed and shot with for nearly a year on a series of ethics spots for internal use by the Navy) I was very impressed with most of the images I got from the camera, but was at times disappointed by how it handled highlights and the roll-off in the highlights. I recently sold my FS100 and upgraded to the F3, partly for the log option and partly for the 10 bit output to my Samurai. My biggest concern shooting with the F3 as I look to the next phase of my project is the lack of 1080p 60fps. I used that to great effect on the previous spots. I know it will shoot at 720p, but those who I have spoken to are not impressed when this is resized to 1080p. I have not yet had a chance to test this myself.
    Along comes the Sony FS700, which has great options for slo mo, etc. Hmmm…
    You just placed an order for a FS700 and I know you really like the F3 with log. How would you characterize your intended use of the two cameras? Would the F3 still be your main camera for shots where s log latitude and 10 bit color depth would have the greatest impact and the FS700 mainly for smaller form factor and overcranking, and a good B camera when you could afford to employ both? If you had to take only one camera which would you think you would grab in which circumstances. I understand that all of this is theoretical until you have actually had a chance to test out the FS700 but this kind of thinking happens for all of us as we invest in new options.
    On a technical side, since both the FS100 and F3 supposedly share the same sensor, why is it that spec-wise the FS100 has better low light sensitivity, 16000 ISO as opposed to 6400 ISO??? This has really confused me… perhaps there is some difference in the optical low-pass filters or ???

    thank you for any input you might have.
    All the best
    Patrick

    1. FS100 and F3 have the same sensor. Gain is applied after the sensor. It does not make the sensor itself more sensitive, it is just like turning up the volume on an amplifier. On the F3 Sony decided on one maximum volume level, on the FS100 another. The actual sensitivity of the sensor is the same, but you can turn the volume up more on the FS100. Of course you can also turn the volume up later by adding gain in post.

      F3 can output 1080p60 to an external recorder via the dual link HDSDi.

      New FS100 firmware adds cinegammas. These will give you much improved highlight roll-off, much more like the F3, but the 8 bit internal processing of the FS100 is the primary reason highlights are not handled so well.

      F3 likely to remain my primary camera until we get 4k raw on the FS700, then all bets are off! My primary reason for getting the FS700 is slo-mo.

      1. thank you very much, Alister
        I was aware the F3 would output 60fps over dual link. I suppose I was just lamenting the lack of it over single link, as I record to a Samurai and a Gemini is out of my price range for my needs. It seems strange they can push the FS700 to 240 fps and above, both on board and output through a single SDI connector, and we only get 60 fps at 720p output on HD SDI on the F3. I suppose the 10 bit data stream could be that much more information that it can’t handle it. Like you, I am considering purchasing the FS700 for slow motion but realize the absurdity of adding a second camera to my package just for a single function. Frustrating.
        thanks again
        Patrick

        1. You can’t get 1080 50/60P from single link SDI; there is not enough bandwidth, it has to be dual link. The FS700’s high speed buffer – record – buffer process is only suited to specific high speed shots and you could not use it for general shooting; plus it’s only internal, you cannot record 240fps to an external device.

        2. Patrick, don’t blame the F3, blame the Samurai or the Society of Motion Picture and Television Engineers. There are standards and whenever a company makes equipment they choose from what’s available at the time and conform to it:
          1. SMPTE 292M – single-link – which could only go up to 1080/30p.
          2. SMPTE 372M – allowing 1080/60p over two connectors.
          3. SMPTE 424M – an upgrade to the spec allowing 1080/60p over a single link. 372M and 424M carry the same payload, with just the added convenience of one cable instead of two.

          If you want an HD-SDI spigot to work, you’ve got to pick from the above standards. On the Samurai end, they chose not to go 372M or 424M because it’ll double the recording speeds and processing power required, and would’ve made it as expensive as the Gemini.

          Pushing the raw 4K data over the interface will probably utilize HD-SDTI or 424M Ancillary Data in some way, but we’re yet to see. The point is, all the devices in the chain need to support a certain spec and that’s what determines what’s possible and what’s not.

          1. In regards to the amplifier: think of the F3 as going up to 10 and the FS100 as being able to go to 11. Spinal Tap.

          2. But it’s not the same. An amplifier makes things louder, but it will not increase the quality of the signal, it just makes it bigger, that’s all.

  3. Thanks for a very interesting article. I’ll be ordering an FS700 if it turns out that it can do all of that. My one doubt is this: if the FS700 can output 4k raw, who would buy an F3, especially when you take into account the price difference?

    1. That’s a good question indeed. Time will tell. In theory the FS700 with 4K raw may outperform an F3, but you do have to be very careful with your workflow if going from 4K to HD to avoid aliasing. There’s no genlock on the FS700, and the F3 workflow will require less processing and, depending on the codec you use, a lot less storage.

  4. Hi there,

    I have a question about the slowmo. How does it work? How many seconds do you get when you trigger at 240fps and how many at 120fps?
    Do you have a buffer where you can trigger and get, for example, the last 4 seconds as well? That would be very valuable for wildlife.
    And what is output via SDI while shooting slowmo? Could you record to a nano flash regular 25p while capturing slowmo internally?
    Would be great if you could provide some info on that.
    Thanks

  5. Alister,
    I’m wondering how you feel about the AVCHD codec on the FS700?
    I’ve never used it but have friends who don’t seem to like it very much. Do you think that is a big liability on the FS700?

    1. AVCHD is very efficient and for most things it does a pretty good job. It does not fare well when there are large changes in the image from frame to frame. The biggest everyday issue with AVCHD is that it is still very processor intensive when it comes to editing, so in most cases it is desirable to transcode it to something like ProRes or Cineform prior to editing. My initial primary use for the FS700 is as a slow-mo and effects camera and for this I am quite sure that the AVCHD recordings will be of acceptable quality. Once the 4K option gets released it will bypass the AVCHD codec and use RAW, and that’s when things will get very interesting, as in theory the FS700 should outperform the F3 (but with a more complex workflow). However this may not come until much later in the year or maybe even next year, so there’s a lot of life left in the F3.

      1. Alister, here is where my technical knowledge lets me down. Are you saying that only once 4K arrives will the SDI output not ‘suffer’ from the AVCHD limitations? Is the current 1080p output over SDI not bypassing the AVCHD compression?

        1. The HD output over the HDSDI bypasses the compression, it is uncompressed, but the signal does pass through the 8 bit digital signal processor (DSP) where the colour HD image is decoded from the bayer image that the sensor produces. This processing (not to be confused with codec compression) limits the maximum possible image quality that can be passed to the HDSDI.

          When the 4K option comes it will bypass the 8 bit in camera processing stage. The camera will pass 12 bit data from the sensor to the 3G output. The image processing will be done on a computer or other external device.
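To put the 8 bit versus 12 bit difference in concrete terms, here is a quick calculation of the number of code values per channel at each bit depth mentioned in this thread:

```python
# Code values per channel at each bit depth discussed: an n-bit channel can
# distinguish 2**n tonal levels.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2 ** bits:5d} levels per channel")

# The 12-bit raw path carries 16x the tonal precision of the 8-bit DSP path.
print(f"12-bit vs 8-bit: {2 ** 12 // 2 ** 8}x the precision")
```

Those extra levels matter most in grading, where stretching an 8 bit signal quickly exposes banding that a 12 bit source would hide.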

  6. Alister,
    This is a bit off topic…

    I’ve just shot a few days recording AVCHD.
    I was considering creating a ProRes file by recording the HDMI playback from my FS100 with a PIX240… or is it better to transcode with software?

    Thank you,
    Bill

    1. Normally it is better to do a direct transcode, as you go directly from the AVCHD to ProRes, whereas if you play back you’re going from AVCHD to baseband and then baseband to ProRes. It’s also quite possible that the codecs in the computer, which has a lot more horsepower to call on, will be of better quality.

  7. Should not a single sensor camera be 4K to output 1080, 8K to output 4K, and so on? The bayer filter is RGBG 2×2, so to get 1920×1080 from a single sensor that would be 1920×1080 x (2×2), i.e. 4K, right?

    So the FS100, F3, FS700, C300, C500 all have a single 8K sensor, right? Because we are told they all have the same sensor and some now will output 4K.

    I am sure I am confused on this and am still searching for the answer.

    1. A bayer sensor is laid out like this:

      rgrgrgrgrg
      gbgbgbgbgb
      rgrgrgrgrg
      gbgbgbgbgb

      So if the sensor is 4K wide there are 4K green pixels across the sensor, although staggered up and down from one line to the next.
      However for the blue and red pixels there are only half as many in both the horizontal and vertical directions.

      By using clever processing the camera can easily calculate the in-between green values pretty accurately. Blue and red are less accurate and as a result lower resolution. It is generally accepted that a bayer sensor will deliver resolution somewhere around 0.7 to 0.8x the pixel count, so a 4K pixel bayer camera will have a resolution of around 2.9 to 3.1K.

      The Sony F3 and FS100 both share the same sensor, a 3.36 megapixel sensor which is 2400 pixels wide, so this means we should expect a resolution of about 1.8 to 1.9K, and resolution tests reveal this to be the case. This is about right for an HD camera. The Arri Alexa has a similar pixel count.

      The FS700 uses a new 11.6 megapixel sensor that is 4352 x 2662 pixels, so once the 4K option is released it should resolve about 3.2K. At HD it should resolve full HD, but if you use a 4K sensor like this for HD you are likely to run into some aliasing issues.

      The Canon C300 uses a new type of sensor with 3840 (1920 x 2) pixels horizontally, arranged like a bayer sensor, but it is not read like a bayer sensor, instead they read each green pixel as a separate sample. This yields full 1.9K resolution.
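The 0.7 to 0.8x rule of thumb from this reply is easy to apply to the horizontal pixel counts quoted in the thread:

```python
# Applying the commonly accepted 0.7-0.8x bayer resolution rule of thumb to
# the horizontal pixel counts quoted above.
sensors = {"F3/FS100": 2400, "FS700": 4352}
for name, px in sensors.items():
    lo, hi = 0.7 * px, 0.8 * px
    print(f"{name} ({px} px wide): resolves roughly {lo / 1000:.1f}K-{hi / 1000:.1f}K")
```

The upper end of each range lines up with the figures in the reply: close to full HD for the 2400-pixel sensor and a bit over 3K for the FS700's 4K sensor.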

  8. The idea to mount an ENG lens on a large-sensor camera is nothing new, it has been discussed and tested over the past few years. There are already cameras that feature a more convenient center-crop like the GH2 or the RED, ENG lenses with a 2X extender help get much better coverage, and there are some adapters available too.

    The main problem is that ENG lenses are designed for a 3CCD configuration with a prism, which can and does introduce chromatic aberrations to the image. Furthermore, while you can find $2,000 ENG lenses on eBay they’re usually specified as SD lenses. Then there’s powering the lens’ servos and getting a lens data interface… it gets fairly adventurous.
    Proper HD ENG lenses run between $20k and $80k, which makes the whole experiment much less appealing.
    Really the right piece of equipment is the exciting new Fujinon Cabrio 19-90 T2.9 – designed for Super35 sensors with a real ENG form factor and build. MSRP is $38k. Think that’s high? Angenieux also showed at NAB a compact 28-76mm T2.6 lens that’ll have an optional ENG-like servo unit, hand grip and all. Expect it to cost about $75k.
    These might be out of reach for most FS700 customers, but they are still exciting news since such lenses simply didn’t exist until this year.

    1. I agree that using a sensor crop is nothing new. The fact that ENG lenses are designed for a deep dichroic prism makes only minimal difference to CA or LCA; there will be a small offset to the focus of the blue channel. However in most cases this is so small that it would take a lot of analysis to show it in practice (the blue channel light path is normally slightly shorter than R and G to make the lens correction simpler in a prism design). On a 3-chip camera a lot of the CA and flare is generated in the prism block itself, so a 2/3″ ENG lens may perform comparatively better on a single chip camera than it would on a 3-chip camera anyway.

      You don’t need any kind of data interface, just provide 12 volts to the lens Hirose connector and you’ll have servo zoom control. While you can spend $80k on an ENG zoom, most are significantly cheaper and good HD zooms now start at around $10k, certainly not $20k. But as well as buying lenses, being able to use 2/3″ B4 lenses opens up an incredible range of rental options, and many owner-operators and production companies will already own legacy B4 glass. I’ve used many B4 lenses via adapter on S35 camcorders and they work well, although generally they do have a different look to dedicated PL lenses. The extra glass in a high ratio zoom does tend to reduce contrast and increase flare, but if the difference is between getting the shots you want and not getting the shots, I’ll take the compromises any day.

      The Cabrio is without doubt a nice lens, but it is only a 4.7x zoom, which is nothing compared to the 20x or more zoom range commonly available in B4. For news, docs etc a 19-90mm lens is still going to be very restrictive, especially if you are used to something the equivalent of a 20-400mm or 20x zoom.

      Very few FS700 users are going to fork out $38k for a Cabrio, this is more likely to be used on an F3, Epic or Alexa. On the FS700 you will still have the problem of powering the lens, so in many respects it is just as challenging to use as a B4 ENG zoom.

  9. Cool idea but you can’t really successfully use ENG lenses on cameras with a single chip. ENG lenses are optically corrected for the beam splitters in 3-chip cameras and the properties of their glass. The image plane of each of the 3 chips is different (back focus offset); they are offset against each other by up to 10 microns (I am not talking about their physical arrangement, where they are obviously not only on different planes but also at different angles because of the beam splitter). ENG lenses are designed to correct that optically and project red, green and blue wavelengths at different image planes. Such a lens used on a single chip camera (or a film camera) renders images that show a lot of chromatic aberration / fringing. This is theory; in practice you can stop it down so that the DoF is increased and the “error” almost corrected, but at this point you have a slow lens that still does not perform at its full potential, not to mention that when you stop it down (on a now effectively 2/3″ sensor camera) you can forget about any shallow depth of field control. Here is a publication that describes the offset in 3 CCD cameras:
    http://tech.ebu.ch/docs/tech/tech3294.pdf

    1. Sorry Mateusz, you are wrong, you are completely forgetting that the focal plane shift is simply there to correct for the extra glass in the prism that alters the focussing point of the different wavelengths of light, but adding any similar amount of glass between the lens and a sensor will also introduce exactly the same shift. The adapter by design has a similar in-glass light path to a 3 chip prism and this compensates for the pixel offset so there is no issue as R,G and B are brought into focus on the same plane.

      The other thing everyone ignores is all the other aberrations and artefacts that are introduced by the dichroic prism. Take these out of the equation and the image projected on the sensor is better anyway. The end result is typically less CA using this adapter than using the same lens on a 2/3″ camera.

      The issue you are talking about only occurs when using uncorrected lens adapters, typically simple lens mount adapters with no glass or other optics in the adapter. Even then the effect is very minor.

  10. Dear Alister,

    A quick look at your website discouraged me from arguing with you – it looks like your background is in engineering and on top of it you are a successful DP – I’ll take your word that the ENG lens will work!

    However, there was no mention of any glass adapter in your post but instead a quite contrary statement: “with a simple lens mount adapter (no optical elements needed) stick an ENG zoom on it and use it for news style shooting”.

    This is why I wrote my initial comment. Now it turns out that you actually agree, because in your reply above you said : “The issue you are talking about only occurs when using uncorrected lens adapters, typically simple lens mount adapters with no glass or other optics in the adapter.” Whether it is minor or not, the problem exists.

    Best,

    Mateusz

    1. Sorry, my apologies. I hadn’t realised which post the comment was in reference to; I just receive the comments as emails in my in-box and thought it was in reference to the many posts about the B4 to E-Mount optical adapter that I designed for MTF Services. This is an optical converter that allows B4 lenses to be used on large sensor cameras and includes the optics to increase the lens’s image circle to cover the full sensor.

      In the case of a mount-only adapter (no optics) on the FS700, the pixels are so much larger than the pixels in a 2/3″ HD camera that the very slight defocussing of the R and B would be extremely difficult to see; plus the removal of a thick prism from the light path eliminates so many other aberrations that the lens performance will still be acceptable.
