Alphatron EVF-035W Getting closer!

The Alphatron EVF-035W is getting closer to launch. I was able to play with one in Las Vegas at NAB and the image in the finder really was very good. I’ve used Cineroid and Zacuto finders and they are not nearly as clear or as sharp as the Alphatron. With the Alphatron I found that I could see the image come in and out of focus quite clearly without resorting to large amounts of peaking, nor could I easily make out the pixel structure of the screen as I can with my Cineroid. With peaking activated, the fine resolution of the screen means that the peaking edges themselves are much finer, so they don’t obscure the image, making the peaking more precise and less obtrusive. The Alphatron EVF also has a manually operated shutter in the monocular eyepiece which allows you to prevent sunlight from accidentally burning the LCD screen. That’s a nice touch; I’ve seen many viewfinders destroyed or damaged by the sun. There were still some bugs to be ironed out in the prototype that I used, but TV-Logic make good kit and I’m sure that once the firmware is sorted the Alphatron EVF-035W will become the gold standard by which other aftermarket EVFs are compared. Looking into the Alphatron reminded me of looking into a Sony C35W EVF. The Sony costs in excess of £4k while the Alphatron is going to be around the £1k mark. I’m looking forward to testing a full production unit.

 

“A Diamond in the Mind” Available for pre-order on Amazon

The incredible Duran Duran concert that I was involved with shooting at the end of last year is available for pre-order on Amazon. “A Diamond In The Mind” is due to be released on July 2nd. I was responsible for the camera setups and picture profiles used on the shoot and, having seen the end result, I’m really pleased. The film has an incredible look that really captured the magic of the moment. Eleven PMW-F3s, FS100s, mini-cams, jibs, Alura and Optimo lenses all contribute to an incredible looking concert video.

What is a “Slow Shutter” or SLS

“Slow shutter” is a video term for an electronic shutter that is open for longer than the duration of a single recorded frame. It’s not actually a shutter in the sense of a stills camera’s physical shutter; most video cameras don’t have a physical shutter. Normally a video camera operates at 25 or 30 frames per second, so in progressive the camera sensor captures light for 1/25th or 1/30th of a second before writing the data to one frame of the video. For interlace it’s half of this, as the shutter is open for the duration of each field, and one field is half the duration of one frame.

The sensor is then reset, captures for the next 1/25th or 1/30th and then writes the next frame, and so on, creating a video sequence. With a slow shutter the sensor is allowed to capture light for (the slow shutter setting in frames) x (the duration of one frame) before that data is written as a single image. So with a 16 frame slow shutter the sensor is allowed to capture light for 16 frame periods, i.e. 16 x 1/30th (or 1/25th) of a second, before creating an image.

So at 30fps, one frame lasts 1/30th of a second. Therefore, 16 x 1/30th = 0.53 seconds. The sensor is being allowed to capture light for 0.53 seconds before getting reset. If you do not use an interval recording or time-lapse mode, each of the 16 video frames the camera records while the sensor is capturing light gets written with the same image data (from the previous shutter cycle). So with a conventional video recording the image only refreshes once every 16 frames.
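The timing arithmetic above can be sketched in a few lines of Python (the function name is my own, purely for illustration):

```python
# A minimal sketch of the slow shutter maths, assuming a progressive
# camera where one frame period lasts 1/frame_rate seconds.
def slow_shutter_exposure_s(frame_rate_fps, shutter_frames):
    """Time the sensor accumulates light before one image is produced:
    shutter_frames frame periods of 1/frame_rate_fps seconds each."""
    return shutter_frames / frame_rate_fps

# A 16 frame slow shutter at 30fps gives 16 x 1/30th = ~0.53 seconds.
print(round(slow_shutter_exposure_s(30, 16), 2))  # 0.53
```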

Camera Gain: It doesn’t make the camera more sensitive! (also relevant to EI S-Log).

This is something that’s not well understood by many people. It helps explain why the EI S-Log function on the PMW-F3 (and other cameras) is so useful.

You see, camera gain does not normally change the camera’s ability to capture photons of light. A CCD or CMOS sensor has a number of photosites that capture photons of light and convert those photons into electrons, or electrical charge. The efficiency of that capture and conversion process is fixed; it’s known as the QE or quantum efficiency. There are a lot of factors that affect this efficiency, such as the use of micro lenses, whether the sensor is back or front illuminated etc. But all of these are physical design factors that do not change when you add extra camera gain. The sensitivity of the sensor itself remains constant, no matter what the camera gain is set to.

Camera gain is applied to the signal coming out of the sensor. It’s a bit like turning up the volume on a stereo amplifier. If you have a quiet piece of music, turning up the volume makes it louder, but the original piece of music is still a quiet piece of music. Turning up the volume on your stereo, as well as making the music louder, will also make any hiss or background noise in the music louder, and it’s exactly the same with a video camera. As you increase the gain, as well as the wanted video signal getting bigger (brighter), all the unwanted noise also gets bigger. So adding gain on your video camera doesn’t actually make the camera more sensitive, but it does make what light the camera has captured brighter in the recordings and output, giving the impression that the camera has become more sensitive. However, this comes at the penalty of increased background noise.
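The amplifier analogy is easy to check numerically. This toy Python snippet (not camera code, just an illustration) multiplies a fixed signal level and some random noise by the same gain and shows that the signal to noise ratio is unchanged:

```python
import random

random.seed(0)
signal = 100.0                                       # the wanted level off the sensor
noise = [random.gauss(0, 2) for _ in range(10_000)]  # unwanted background noise

def rms(values):
    """Root mean square, a simple measure of noise level."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

gain = 4.0  # roughly +12dB of gain
snr_before = signal / rms(noise)
snr_after = (signal * gain) / rms([n * gain for n in noise])

# The gain cancels out: brightness goes up, but so does the noise.
print(round(snr_before, 3) == round(snr_after, 3))  # True
```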

As well as adding gain to the image in the camera, we can also add gain in post production. Traditionally gain has been added in camera because the gain is added before the recording process. In the uncompressed analog days the recording process itself added a lot of noise; in the digital age the process of compressing the image adds noise. 8 bit recordings also have quite a small number of grey shades. So any gain added in post production amplifies not only the camera signal but also the added recording or compression noise, and so generally gives an inferior result to adding gain in camera. With an 8 bit signal, the stretching of the relatively few grey shades results in banding.

Now, however, the use of lower noise sensors and much improved 10 bit or higher recording codecs, or even uncompressed recording, means that adding gain in post as opposed to in camera is not such a bad thing. In some cases you can use post production noise reduction prior to adding post gain, and by leveraging the processing and rendering power of a computer, which will normally exceed the in camera processing, you can get a cleaner, lower noise output than you would using in camera gain. So before you flick on the gain switch of your camera, if you’re using only very light 10 bit or higher compression (HDCAM SR, Cineform, ProRes HQ) or uncompressed recording, do consider that you may actually be better off waiting until you get into post before you add gain.

Some modern cameras, like Red or the Sony F3 can use something called EI gain. EI gain does not actually add any gain to the recorded signal (or signal output in the case of the F3). Instead it adds gain to the monitor output only and adds metadata to the recording to tell the post facility or conversion software to add gain. This way you see on the monitor what the image should look like when the gain has been added, but the recording itself has no gain added giving the post production team the ability to fine tune exactly how much gain is applied.
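Conceptually, EI gain works something like this sketch (the dictionary layout is entirely my own invention, not Sony’s or Red’s actual metadata format):

```python
# The recording path is untouched; only the monitor feed and the metadata
# carry the EI gain, leaving the final decision to post production.
recorded = [0.10, 0.20, 0.40]   # clean, gain-free S-Log sample values
ei_gain = 2.0                   # the gain chosen on set

monitor_feed = [v * ei_gain for v in recorded]  # what the operator sees
clip = {"essence": recorded, "meta": {"suggested_gain": ei_gain}}

# Post can follow the metadata exactly, or fine tune it:
graded = [v * clip["meta"]["suggested_gain"] for v in clip["essence"]]
print(clip["essence"] == recorded)  # True: no gain is baked into the recording
```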

Can I use 8 bit to record S-Log?

My opinion is that while 8 bit, 422 can be used for S-Log, it is not something I would recommend. I’d rather use a cinegamma with 8 bit recording where possible. 10 bit 422 S-Log is another matter altogether; this is well worth using and works very well indeed. It’s not so much whether you use 444, 422 or maybe even 420 that matters, but the number of bits that you use to record your output.

What you have to consider is this. With 8 bit, you have 240 shades of grey from black to super white. Of the 256 code values available, the first 16 are reserved for sync and data, 100% white is at 235 and super white at 255, so black to 100% white spans only 219 code values. With Rec-709, standard gamma, on an F3 and most other cameras you get about an 8 stop range, so each stop of exposure has about 30 shades of grey. The stops above middle grey, where faces and skin tones sit, have the most shades, often around 50 or more. Then you hit the knee at 90% and each stop above that only has a handful of shades (which is why over exposure looks bad).

When you go to S-Log, you now have around 13 stops of DR (14 with S-Log2 and S-Log3), so now each stop above middle grey only has approximately 18 shades. Using 8 bit for S-Log, before you even start to grade, your image can be seriously degraded if you have any flat or near flat surfaces such as walls or the sky in your scene.

Now think about how you expose S-Log. Middle grey sits at 38% when you shoot. If you then grade this to Rec-709 for display on a normal TV, you are going to stretch the lower end of your image by approximately 30%, so when you stretch your 18 shades of S-Log grey to get to Rec-709 you end up with the equivalent of only around 12 shades of grey for each stop. That’s less than half of what you would have if you had originally shot using Rec-709. I’m sure most of us have at some point seen banding on walls or the sky with standard gammas and 8 bit; just imagine what might happen if you effectively halve the number of grey shades you have.

By way of contrast, just consider that 10 bit has 956 grey shades from black to super white. The first 64 code values are reserved for sync and other data, 100% white is at code value 940 and super white at 1019. So when shooting S-Log using 10 bit you have about 73 grey shades per stop, a four fold improvement over 8 bit S-Log, so even after shooting S-Log and grading to Rec-709 there are still almost twice as many grey shades as if you had originally shot at 8 bit Rec-709.

This is a bit of an oversimplification, as during the grading process, if your workflow is fully optimised, you would be grading from 8 bit to 10 bit, and there are ways of taking your original 8 bit master and extrapolating additional grey shades from that signal through smoothing or other calculations. But the reality is that 8 bits for a 13 stop dynamic range is really not enough.
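The shades per stop figures quoted above are simple division, which this Python sketch reproduces (itself a simplification, since a real log curve does not spread its code values evenly across the stops):

```python
def shades_per_stop(code_values, stops):
    """Average code values available per stop of dynamic range."""
    return code_values // stops

bits8_range = 255 - 16 + 1    # 240 code values, black to super white
bits10_range = 1019 - 64 + 1  # 956 code values, black to super white

print(shades_per_stop(bits8_range, 13))   # 18: 8 bit S-Log
print(shades_per_stop(bits10_range, 13))  # 73: 10 bit S-Log
print(shades_per_stop(219, 8))            # 27: 8 bit Rec-709, black to 100% white
```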

The whole reason for S-Log is to give us a way to take the 14 stop range of a high end camera sensor and squeeze as much of that signal as possible into a signal that remains useable and will pass through existing editing and post production workflows without the need for extensive processing such as de-Bayering or RAW conversion. This isn’t too much of a problem if you have a 10 bit recording, but with an 8 bit recording making it work well is challenging. It can be done, but it is not ideal.

Adobe CS6 new pricing model.

I think I have got this right! Instead of buying Adobe’s production suite software outright, you can now licence the entire suite on a monthly basis, choosing either a year long renewable contract or a month by month contract. If you already have a CS3 or higher product there is a reduced subscription rate. So now for just $49 a month ($29 if you already have a CS3 or higher licence) I can have access to all the latest Creative Suite applications. Compared to paying thousands of dollars for the full suite of Adobe software applications, this new subscription model looks to be much more affordable. I do note, however, that you can still buy various Creative Suite bundles, and the new prices appear lower than before. CS6 now includes SpeedGrade, which is a fantastic grading tool that even includes correction and alignment tools for 3D material. Another addition is Prelude, an ingest and logging tool that allows you to create log sheets, plus add markers and notes to clips which will pass through the complete CS6 workflow. The licence allows you to install the apps on two machines, so this is great for those of us with a laptop on the road and a workstation in the office.

Sonnet EchoExpress Thunderbolt Adapter SxS Speed Tests.

Sonnet EchoExpress

I have had this little box for a couple of months now, but until the recent release of SxS drivers by Sonnet you couldn’t use it as an SxS card reader. There are two versions of the EchoExpress: the standard one, which is the one I have, and a “Pro” version that offers higher speed transfers when using PCIe 2.0 adapters. When Apple removed the ExpressCard slot from their MacBook Pro laptops, they severely restricted the ability to connect high speed external hard drives. I have a Convergent Design Gemini which records on to SSDs, and the fastest way to offload these on location (for me at least) was to plug an eSATA ExpressCard into the slot on my older MacBook Pro, then connect the Gemini docking station to one port and an external eSATA drive to the other. However, the processing power of my older MacBook was falling somewhat behind the modern machines, and transcoding the uncompressed Gemini DPX files to ProRes or DNxHD was taking ages. So I decided to upgrade to a new MacBook Pro, but this meant the loss of the ExpressCard slot. This is where the Sonnet EchoExpress became a “must have” add on, as it provides an external ExpressCard slot connected to the computer using Thunderbolt.

By using the EchoExpress box along with a Sonnet eSATA ExpressCard adapter I can connect eSATA devices to my MacBook Pro. The transfer speeds with my original version EchoExpress are not as fast as when I had a built in ExpressCard slot, but it’s still a massive improvement over USB, about 4 times faster. Initially SxS cards didn’t work with the EchoExpress, but Sonnet recently released a dedicated SxS driver that allows the EchoExpress to work as an SxS card reader.

So how fast is it? One thing to consider is that when using the EchoExpress as a card reader on a MacBook Pro or 21″ iMac you only have a single Thunderbolt port, so there is no way to connect a second EchoExpress to add an eSATA port. That restricts you to using either the computer’s internal drive or an external FireWire 800 drive. For my tests I made copies of a full 16GB blue SxS card to both the internal drive and an external Seagate GoFlex FreeAgent drive fitted with a FireWire 800 interface. There was very little difference between the transfer speeds to the laptop’s internal drive and the FireWire drive, so I suspect that the transfer speed is limited by the Sonnet EchoExpress itself.

Copying 16GB from the SxS card via the EchoExpress took just a shade over 4 minutes. That’s pretty good performance, and only marginally slower than when I had an ExpressCard slot built in to the computer; typically with a built in slot it would take about 3 1/2 minutes. Compare that to copying the exact same data from the camera using USB, which took 11 minutes! So, as an SxS card reader the Sonnet EchoExpress works really well, offering transfers around 3 times faster than USB, which is a big time saver. Imagine you have been shooting all day and have 5 hours of footage. With USB it would take you at least an hour to transfer your data, with the EchoExpress just over 20 minutes.
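For anyone who wants to run the numbers, the timings above work out roughly as follows (a quick Python sketch using my measured times; real throughput will vary with file sizes and drives):

```python
def throughput_mb_s(gigabytes, minutes):
    """Average transfer rate in MB/s for a copy of the given size and duration."""
    return (gigabytes * 1024) / (minutes * 60)

print(round(throughput_mb_s(16, 4)))   # ~68 MB/s via the EchoExpress
print(round(throughput_mb_s(16, 11)))  # ~25 MB/s via USB from the camera

# Scaling a one hour USB offload by the same 4:11 ratio:
print(round(60 * 4 / 11))  # ~22 minutes via the EchoExpress
```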

I give the Sonnet EchoExpress a big thumbs up. Now all I need is a Thunderbolt hub.

NEX-FS700 8 Bit video but 12 Bit RAW in 4K, 2/3″ lenses using center crop?

Sony NEX FS700

The more I think about this camera the more exciting it becomes. At release the FS700 will be limited to HD and in many respects will be similar to the FS100. This means that although the FS700 has a 3G capable HDSDi output, when in video mode this output is still restricted to 8 bits due to the internal video processing. However, from what I have been able to gather, the proposed 4K mode bypasses this processing altogether and outputs the direct sensor data as 12 bit RAW sensor data. How is this possible? Well, any video camera outputting video has to output 3 values for every point within the image. So for a 1920 x 1080 camera there are in effect 3 outputs: one luminance value for each point plus two chroma or colour values, one for Cb and one for Cr. In a 422 system the horizontal resolution of each of the Cb and Cr channels is half the full resolution, so that’s 960 x 1080 for Cb and 960 x 1080 for Cr. However you look at it, that’s a lot of data, even at only 8 bits, but 422 1920×1080 at 8 bit and even 10 bit will fit into a standard 1.5G HDSDi signal. With a 3G HDSDi connection the amount of available data bandwidth is doubled. This in itself gives the ability to transfer 444 HD data with full R, G and B data, or Y, Cb, Cr at full resolution for each channel, at 8 or 10 bits (the FS700 being restricted to 8 bit processing).
With 12 bit data however, at 4K there would not be enough bandwidth, even with 3G, to transfer a 444 or even a 422 video signal; the extra bits of data need a lot of extra bandwidth. But Sony are not talking about video data, they are talking about RAW sensor data. The sensor in the FS700 is a Bayer sensor. A Bayer sensor has an array of pixels with colour filters over the top that pass only green light to every other pixel and red or blue light to the remaining pixels. The pixels themselves don’t see different colours; all they see is a brightness or luminance value. It’s not until the luminance data from the sensor is processed (de-Bayered) that the colour information is created by extrapolating values from the R, G and B filtered pixels. The de-Bayer process creates an R, G and B value for each point in the 4K image, so 3 values for each point. However, if we just take the RAW luminance values, all we have is a single luminance value for each pixel on the sensor. The de-Bayer process greatly increases the amount of data that needs to be processed; keeping the data as luminance only minimises how much data there is and makes it possible to pass 12 bits of 4K data over a 3G HDSDi cable.
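A rough sanity check of the bandwidth argument can be done in a few lines. I’m assuming a 4096 x 2160 sensor at 24fps and ignoring SDI blanking and overheads, so treat these as ballpark numbers only:

```python
def gbits_per_s(width, height, bits, samples_per_point, fps):
    """Raw payload data rate in Gb/s for an uncompressed image stream."""
    return width * height * bits * samples_per_point * fps / 1e9

raw_bayer = gbits_per_s(4096, 2160, 12, 1, 24)      # one luminance value per photosite
debayered_444 = gbits_per_s(4096, 2160, 12, 3, 24)  # R, G and B for every point

print(round(raw_bayer, 2))      # ~2.55 Gb/s: squeezes into a ~3Gb/s 3G HDSDi link
print(round(debayered_444, 2))  # ~7.64 Gb/s: far too much for 3G
```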
Now, this signal from the FS700 will not comply with any standard that I know of, so it will need a dedicated recorder or at least a recorder programmed to accept it, but it promises a lot of exciting possibilities.

For a start, you will get the full sensor dynamic range, so we should expect at least 12 stops of DR, maybe a bit more. In addition, having a 4K image means that when shooting for HD you can crop in to the image with no resolution loss. Here’s a thought: you should be able to use a 2/3″ ENG broadcast zoom. Yes, the image on the sensor will vignette, but you should be able to extract a full 1920×1080 resolution image from the centre part of the 4K image. As this will be using a smaller part of the sensor, your DoF for a given field of view will be similar to what you would have with a 2/3″ camcorder. So could the FS700 be that jack of all trades camera many of us are looking for? 4K RAW and Super 35mm when you are making a filmic piece, and then, with a simple lens mount adapter (no optical elements needed), stick an ENG zoom on it and use it for news style shooting. At the moment it looks like you will have to extract the HD 2/3″ centre crop from the RAW 4K yourself, but perhaps Sony will be able to add centre crop to the camera firmware at a later date.
However you look at it, the FS700 is a very exciting proposition. I placed my order for one the day it was officially announced.

How the FS-700 shoots slow motion.

Using Sony’s new NEX-FS700 to shoot slow motion is simplicity itself. To enter the slow motion modes you simply press a button marked S&Q on the left side of the camera. The first press of the button puts the camera into S&Q motion, where it will shoot full resolution HD at up to 60fps. In this mode you just shoot as you would normally, only now at a higher speed than normal. Press the S&Q button again and the camera enters super slow motion mode.
In super slow mo the FS-700 will shoot at 120fps or 240fps at full 1920×1080 resolution. You can also shoot at 480fps and 960fps at reduced, but still very useable, resolutions. When shooting at 120fps you are limited to a recording burst of 16 seconds, and at 240fps the burst period is 8 seconds.
There are two ways to trigger the recording burst. You can trigger recording immediately on the press of the record button, or you can set the camera to record the burst period prior to pressing the record button. If using the trigger at start mode, on pressing the record button a message saying “buffering” appears in the viewfinder. After 8 (or 16) seconds the camera starts to write the recording to the SD card (or FMU) and you see a slightly slowed preview (roughly half speed) of what you have just shot. Writing the file therefore takes about 2x the record time, roughly 16 seconds, and during this period you cannot shoot anything else. Pressing the record button during the write process stops it at that point, keeping the file written up to that point, and the camera goes back to standby ready to record another shot.
In trigger at end mode, you point the camera at the scene you want to capture and, shortly after the thing you want to record happens, you press the record button. The camera then writes the previous 8 (or 16) seconds to the SD card, and again you see approximately half speed playback of the clip as it is written to the card.

The fact that you can’t shoot anything else during the write process is a little frustrating, but it’s a small price to pay for the ability to shoot at 240fps, although it does mean you can’t really use the FS-700 to shoot long duration events without gaps in super slow mo. The great thing is that, as all the processing is done in the camera, playback of the clips is no different to playing any other AVCHD clip. 8 seconds at 240fps results in an 80 second clip at 24fps. I could have really done with the trigger at end mode on a recent shoot I was doing with Red Epics, where we were shooting pyrotechnic and special effects events that often took some time to trigger but only lasted a second or two. With the Epics we often ended up with several minutes of footage prior to the action we wanted, wasting storage space and making more footage for the editor to go through.
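The burst arithmetic above is easy to generalise (a trivial Python sketch; the function name is my own):

```python
def playback_seconds(burst_s, capture_fps, playback_fps):
    """How long a high speed burst lasts when conformed to a normal frame rate."""
    frames = burst_s * capture_fps
    return frames / playback_fps

print(playback_seconds(8, 240, 24))   # 80.0: an 8 second 240fps burst played at 24fps
print(playback_seconds(16, 120, 24))  # 80.0: a 16 second 120fps burst played at 24fps
```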

I really enjoyed using the FS-700, my guess is that slow motion is about to become the new time-lapse.