This was released a couple of weeks ago, but I was away at the time. Sony have released a firmware update package for both the F5/F55 cameras and the R5 recorder. You’ll find all the details and the firmware here: http://community.sony.com/t5/F5-F55/NEW-Firmware-v1-10-with-release-notes/td-p/102111
What is raw and why is everyone talking about it?
OK, OK, many of you will know this already, but for those that don’t understand what raw is all about I’m going to try to explain.
First, let’s consider how conventional video is recorded. When TV was first invented back in the late 1930s a way was needed to squeeze a scene with a large dynamic range into a sensibly sized signal. One important thing to consider and remember (if this article is going to make any sense) is that each additional stop of exposure has double the brightness of the previous stop. This doubling of brightness translates into a doubling of the bandwidth or data required to transmit or store it. With a limited bandwidth system like TV broadcasting, if nothing was done to reduce the bandwidth required by ever brighter stops then you would only be able to broadcast a very narrow brightness range or dynamic range.
Our own visual system is tuned to pay most attention to shadows and mid tones. After all, if anything was going to eat our ancient ancestors it was most likely going to come out of the shadows. In addition, the things most important to us tend to be faces, plants and other things that are visually in the mid range. As a result we tend not to notice highlights and the brighter parts of the world. So, if you take a picture or a video and reduce the amount of bandwidth or data used for the highlights, we don’t tend to notice it in the same way that we would notice a reduction of data in the mid range. In order to keep video transmission and storage bandwidths under control, something called a gamma curve is applied to recordings and broadcasts. This gamma curve gradually reduces the amount of bandwidth/data used as the brightness of the image increases. Gamma is a form of video compression, and as with almost all types of video compression you are throwing away picture information. For the darker parts of the picture there is almost no compression, while the brighter parts, especially the highlights, are highly compressed. For more info on gamma take a look at Wikipedia.
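If you want to see the effect of a gamma curve in numbers, here is a minimal Python sketch. It uses a plain power-law curve (exponent 0.45, loosely Rec.709-like) purely for illustration rather than any particular camera’s gamma, and compares how much of the recorded range each stop would need if stored linearly with how much it actually gets after encoding: the brightest stop, which would need half of a linear recording, ends up with roughly a quarter of the encoded range, while the shadow stops get far more than their linear share.

```python
# A minimal sketch of how a gamma curve squeezes the highlights.
# It uses a simple power-law curve (exponent 0.45, roughly Rec.709-like)
# purely for illustration - it is not any particular camera's gamma.

def encode_gamma(linear, exponent=0.45):
    """Map a linear light value (0.0 to 1.0) to a gamma-encoded value."""
    return linear ** exponent

# Each stop is a doubling of light. Compare how much of the range each stop
# needs when stored linearly with how much it gets after gamma encoding.
stop_boundaries = [1 / 32, 1 / 16, 1 / 8, 1 / 4, 1 / 2, 1.0]
prev_lin, prev_enc = 0.0, 0.0
for boundary in stop_boundaries:
    enc = encode_gamma(boundary)
    print(f"up to linear {boundary:6.4f}: "
          f"linear share {boundary - prev_lin:5.1%}, "
          f"encoded share {enc - prev_enc:5.1%}")
    prev_lin, prev_enc = boundary, enc
```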
So that’s gamma, and gamma is used by all conventional video cameras. The problem with gamma is that if you overexpose an image, let’s say a face, you push that face up into the more compressed part of the exposure range and this starts to get quite noticeable. Even though in post production you can reduce the brightness of the overexposed face, it will still often not look right because of the extra compression applied to the face and the subsequent loss of data due to the overexposure.
To make matters worse, when you’re working with conventional video you have a very limited amount of bandwidth (think of it as a fixed size bucket) within which you must store all your picture information. Try to put too much information into that bucket and it will overflow. As the dynamic range of modern cameras increases we end up trying to squeeze ever greater amounts of picture information into that same sized bucket. The only way we can fit more stops into our fixed size bucket is by compressing the highlights even more. This means that the recording system becomes even less forgiving of overexposure. It’s a bit of a catch-22 situation: a camera with a greater dynamic range will often be less tolerant of incorrect exposure than a camera with a smaller dynamic range (and thus less highlight compression).
But what if we could do away with gamma curves altogether? Well, if we could do away with gamma curves then our exposure would be less critical. We could overexpose a face and, provided it wasn’t actually clipped (exceeding peak white), it could be corrected down to the right brightness in post production and it would look just fine. This would be fantastic, but the amount of data you would need to record without gamma would be massive.

Enter the Bayer pattern sensor! Raw can work with any type of sensor, but it’s Bayer type sensors that we normally associate with raw. A Bayer sensor is a single sensor with a special array of coloured filters above the pixels that allows it to reproduce a colour image. It’s important to remember that the pixels themselves are just light sensitive devices. They do not care what colour light falls on them. They just output a brightness value depending on how much light falls on them. If we take a pixel with a green filter above it, only green light will fall on the pixel and the pixel will output a brightness value. But the signal is still just a brightness value, it is not a colour signal. It does not become a colour signal until the output from the sensor is “De-Bayered”. De-Bayering is the process of taking all those brightness values from the pixels and converting them into a colour video signal. So again taking green as an example, we read out the first pixel (top left) and as this was behind a green filter we know that it was seeing green light. For the next pixel we know it was under a blue filter, but we still need a green value for our final picture. So we use the green pixels adjacent to the blue one to calculate an estimated green value for that location. This process is repeated for all 3 primary colours for every pixel location on the sensor.

This gives us a nice colour image, but also creates a lot of data. If we started off with 4096×2160 pixels (a 4K sensor) we would initially have 8.8 million data samples to record or store. However when we convert this brightness-only information to RGB colour we get 4096×2160 of green, 4096×2160 of blue and 4096×2160 of red: a whopping 26.5 million data samples. A traditional video camera does all this De-Bayering prior to recording, but what if we skipped this process and just recorded the original sensor brightness samples? We could save ourselves a huge amount of data.
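The arithmetic behind that saving, and the averaging idea behind De-Bayering, are simple enough to show in a few lines of Python (frame size only, ignoring bit depth and container overheads; the neighbour-averaging function at the end is just to illustrate the principle, real De-Bayer algorithms are far more sophisticated):

```python
# Rough arithmetic behind the data saving described above, assuming a
# 4096 x 2160 Bayer sensor and ignoring bit depth and container overhead.
width, height = 4096, 2160

bayer_samples = width * height       # one brightness sample per photosite
rgb_samples = width * height * 3     # full R, G and B for every location after De-Bayering

print(f"Raw Bayer samples per frame: {bayer_samples / 1e6:.1f} million")
print(f"RGB samples per frame:       {rgb_samples / 1e6:.1f} million")

# The De-Bayer estimate itself can be as crude as averaging neighbours.
# For example, the green value at a photosite sitting under a blue filter
# can be estimated from its four green neighbours (a very naive approach -
# real cameras and grading software use much smarter interpolation).
def green_at_blue_site(up, down, left, right):
    return (up + down + left + right) / 4
```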
The other thing that normally happens when we do the De-Bayering etc. is that we make adjustments to the De-Bayered signal levels to allow for things like white balance and camera gain. Adjusting the gain or white balance of a camera does not change the way the sensor works. The same amount of light falls on the same pixels and the output of the sensor does not change. What we change are the proportions of red, green and blue that we mix together to get the correct white balance, or we add additional amplification to the signal (like turning up the volume on an audio amplifier), adding gain to make the picture brighter.
Raw just records the unprocessed sensor output.
So if we just record the raw data coming off the sensor we dramatically reduce the amount of data we need to record. As the recorded signal won’t be immediately viewable anyway (it will need to be De-Bayered first), we don’t need to use a gamma curve. As the amount of data is lower than it would be for a fully processed colour image, we can actually record the data linearly without image compression. The downside is that to view the recorded image we must process and De-Bayer the image while it’s playing back. The plus side is that at the same time as De-Bayering we can add our colour balance adjustment and any gain we need; all of this can be done in the edit suite, giving much finer control and the ability to correct it and re-do it if you want. What we are doing is moving the image processing from the camera to the edit suite. In addition there is the fact that the picture is linear, without gamma compression, which makes it incredibly forgiving of overexposure.
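As a rough sketch of the kind of processing that moves into the edit suite: because the recorded data is linear, white balance and an exposure trim are just per-channel multipliers applied to the De-Bayered values. The gain figures below are made-up, illustrative numbers, not real camera presets.

```python
# A sketch of the adjustments that move from the camera into the edit suite
# when shooting raw. The data is linear, so white balance and exposure are
# simple multipliers. These gain values are illustrative only, not real presets.

def correct_linear_rgb(r, g, b, wb_gains=(1.9, 1.0, 1.4), exposure_stops=-1.0):
    """Apply white balance gains and an exposure trim (in stops) to linear RGB."""
    exposure_gain = 2 ** exposure_stops      # -1 stop halves the values
    return (r * wb_gains[0] * exposure_gain,
            g * wb_gains[1] * exposure_gain,
            b * wb_gains[2] * exposure_gain)

# An overexposed but unclipped linear sample pulled back down by one stop:
print(correct_linear_rgb(0.42, 0.80, 0.55))
```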
If you have never worked with raw then I suggest you give it a try. Many stills cameras can shoot in raw and it’s essentially exactly the same process with a stills camera as with a video camera. If you have a camera that will do both JPEG and raw at the same time, have a go at shooting with both modes and then adjusting both in a paint package like Photoshop. The difference in post production flexibility is astounding.
Of course, as with all these things there is no free lunch. You’re still recording a lot of data with linear raw, so your recorded files will be much larger than traditional compressed video. In addition, De-Bayering and processing the images takes time. Modern computers are getting faster and storage is getting cheaper, so working with raw is easier now than it’s ever been. I can work with the 4K raw files from my laptop (Retina MacBook Pro) in real time by using 1/4 resolution playback. The final renders from Resolve do take a little bit of time, but once you’ve taken a bite from the raw apple it will keep tempting you back for more!
Storm Chasing Trip, April 13th to April 20th.

Going to NAB? Got a week spare afterwards and fancy something different? Why not join me on a storm chasing adventure. I will be taking a very small group tornado chasing between April 13th and April 20th. This is the start of the tornado season in the USA and in recent years this time of year has seen some very big tornado outbreaks. Now, unlike the typical thrill-seeker tornado trips that try to get as close as possible to the tornadoes, my aim is to get into the best positions for beautiful and awe inspiring shots of the storms and tornadoes. This may mean hanging back just a little bit to give ourselves time to get tripods out and get stable, properly exposed, high quality video and stills. I have absolutely no desire to actually get into a tornado, or to be so close that we never get the chance to stop and shoot.

In addition I will be looking for opportunities to capture time-lapse of developing storms, beautiful storm structures and spectacular lightning. It should be noted however that sometimes, in order to get a decent view of a tornado we may need to get quite close, but I will not deliberately enter into poor visibility or any other high risk situation.
As part of the trip I will provide tuition and assistance for anyone that needs it. We can look at camera set-ups, picture profiles, time-lapse techniques and any other aspect of video production. The cost of the trip is $1,900 USD, which includes accommodation. We normally stay in mid budget motels. Food and drink are not included and you will need to make your own way to/from Dallas, Texas, arriving in Dallas on the 13th of April and departing Dallas on the 21st of April.

What can you expect to see? Well, there are no guarantees. We are at the mercy of the weather, but it is tornado season. I would expect to see impressive “Supercell” thunderstorms that twist and turn, towering from a base at 1,500ft all the way up to 70,000ft. I would expect to see spectacular lightning from these storms, both from cloud to ground as well as across the vast spreading anvils of these storms. I would not be at all surprised to encounter large hail, maybe golf ball sized or bigger (although I try to avoid any direct encounters with hail bigger than golf balls). There may be haboob dust storms, damaging straight line winds and, if we are lucky, tornadoes.
Where will we go? Who knows? The only thing I do know is that we will start and finish in Dallas. I will make a daily weather forecast and we will go to the area that offers the best prospect of seeing storms. In April this usually means travelling around the states of New Mexico, Texas and Oklahoma, but it would not surprise me if we end up in Kansas, Colorado, Nebraska or Iowa. There is often a lot of driving, but that is part of the adventure: a road trip across the mid-west seeing small western towns, vast prairies and cattle ranches.
If you’re interested please use the contact form to send me a message.
What does the Auto FB adjust routine on an EX1, EX3 or PMW-200 do?
For a zoom lens to be parfocal, that is to stay in focus as you zoom in or out, the distance between the sensor and the rear element of the lens has to be set very accurately. If it is not, then the focus will shift as you zoom in or out. This is why on most pro video cameras or lenses there is a back focus or flange back adjustment that alters this distance over a very small range, often only around +/- 0.5mm.
With lenses that are electronically controlled, like the one on the PMW-200/EX1, it is more complex. The lens itself is not parfocal; the lens’s natural focus changes as you zoom. This makes the design of the lens simpler and thus cheaper, as well as compact and lightweight. But because of this the camera/lens must use a look-up table of focal length against desired focus distance to dynamically alter the focus as you zoom, making the non-parfocal lens (called vari-focal) behave like a parfocal one. This table needs to be calibrated from time to time, especially if the lens has been bumped or knocked (even when not in use), and in the case of the PMW-200, EX1 and EX3 (plus other similar cameras) this is what the Auto FB adjust routine does.
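To illustrate the idea (and only the idea: the zoom positions, correction values and units below are entirely made up, the real table is internal to the camera), a focus-compensation look-up table is just a set of calibrated points that the camera interpolates between as you zoom:

```python
# A hypothetical focus-compensation look-up table for a vari-focal lens.
# The numbers are invented for illustration; the real table, its units and
# its resolution are internal to the camera and are rebuilt by Auto FB adjust.
import bisect

# (normalised zoom position, focus motor correction) pairs from a calibration run
focus_lut = [(0.0, 0.0), (0.25, 12.0), (0.5, 30.0), (0.75, 55.0), (1.0, 90.0)]

def focus_correction(zoom):
    """Linearly interpolate the focus correction to apply at a given zoom position."""
    positions = [p for p, _ in focus_lut]
    i = bisect.bisect_left(positions, zoom)
    if i == 0:
        return focus_lut[0][1]
    if i >= len(focus_lut):
        return focus_lut[-1][1]
    (z0, c0), (z1, c1) = focus_lut[i - 1], focus_lut[i]
    return c0 + (c1 - c0) * (zoom - z0) / (z1 - z0)

print(focus_correction(0.6))   # correction applied when the lens is zoomed to 60%
```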
If you find that when zooming in and out your focus is not tracking accurately, you may need to run the Auto FB routine to calibrate your lens. Sometimes rough handling of the camera, for example in transit, can throw out the lens’s calibration.
Alphatron EVF upgrades… Including waveform and vectorscope!

Hot off the show floor from CabSat is a great new upgrade for the Alphatron 035W viewfinder. The firmware for the viewfinder has been updated to include a waveform monitor and vectorscope. The size of these can be adjusted, so you can have a small inset waveform in the bottom left of the screen or a much larger waveform across the bottom of the screen. This is a great upgrade (especially for anyone thinking of using it with an F5/F55) and best of all it can be applied to any Alphatron EVF. Even better, I believe it is available free of charge to anyone that has an Alphatron EVF.
There are also some hardware changes, which include a new optic in the monocular that, combined with a new filter and protection layer on the LCD screen, means that sun damage is now extremely unlikely even if you don’t close the shutter. Lots of good news coming from Alphatron!
Sony responds to Red’s patent infringement claims.
In response to Red’s attempt to sue Sony over claimed patent infringements, Sony have made the following statement:
On February 12, 2013, Red Digital Cinema (“Red”) sued Sony Corporation of America and Sony Electronics Inc. and alleged that the Sony PMW-F5, PMW-F55, and F65 digital cinema cameras infringe two Red patents. The F65 has been commercially available for over a year and the F5 and F55 were announced in October, 2012.
Sony has now had an opportunity to study Red’s complaint and the asserted patents, and categorically denies Red’s allegations. Sony intends to defend itself vigorously in the Red lawsuit. Sony looks forward to prevailing in court, thus vindicating the Sony engineers who developed Sony’s quality digital cinema cameras.
Taken from http://pro.sony.com/bbsc/ssr/show-highend/
Sensitivity and sensor size - governed by the laws of physics.
Sensor technology has not really changed for quite a few years. The materials in sensor pixels and photo-sites that convert photons of light into electrons are pretty efficient. Most manufacturers are using the same materials and are using similar tricks such as micro lenses to maximise the sensor’s performance. As a result, low light performance largely comes down to the laws of physics and the size of the pixels on the sensor rather than who makes it. If you have cameras with the same number of pixels per sensor chip but different sized sensors, the larger sensors will almost always be more sensitive, and this is not something that’s likely to change in the near future.
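A quick, simplified bit of arithmetic shows why pixel size matters. Assuming two sensors with the same pixel count, and taking a 1/2″ type chip as roughly 1.33x wider than a 1/3″ type (an approximate figure, ignoring fill factor, micro-lens and processing differences), each pixel on the larger chip collects noticeably more light:

```python
# Simplified arithmetic behind "bigger sensor, bigger pixels, more light".
# For the same pixel count, pixel area scales with the square of the sensor's
# linear size. The 1.33x figure (1/2" type vs 1/3" type width) is approximate
# and this ignores fill factor, micro-lens and noise-reduction differences.
import math

width_ratio = 1.33               # 1/2" type is roughly 1.33x wider than 1/3" type
area_ratio = width_ratio ** 2    # each pixel has ~1.8x the light-gathering area
stops_gained = math.log2(area_ratio)

print(f"Each pixel gathers about {area_ratio:.1f}x the light, "
      f"roughly {stops_gained:.1f} stops more signal before any noise reduction.")
```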
Both on the sensor and after the sensor, the camera manufacturers use various noise reduction methods to minimise and reduce noise. Noise reduction almost always has a negative effect on image quality. Picture smear, posterisation and a smoothed, plastic-like look can all be symptoms of excessive noise reduction. There are probably more differences between the way different manufacturers implement noise reduction than there are differences between sensors.
The less noise there is from the sensor, the less aggressive you need to be with the noise reduction, and this is where you really start to see differences in camera performance. At low gain levels there may be little difference between a 1/3″ and 1/2″ camera as the NR circuits cope fairly well in both cases. But when you start boosting the sensitivity by adding gain, the NR on the small sensor camera has to work much harder than on the larger sensor camera. This results in either more undesirable image artefacts or more visible noise on the smaller sensor camera. So when faced with challenging low light situations, bigger will almost always be better when it comes to sensors. In addition, dynamic range is linked to noise, as picture noise limits how far the camera can see into the shadows, so generally speaking a bigger sensor will have better dynamic range. Overall, real camera sensitivity has not changed greatly in recent years. Cameras with a given size of sensor made today are not really any more sensitive than similar ones made five years ago. Of course the current trend for large sensor cameras has meant that many more cameras now have bigger sensors with bigger pixels, and these are more sensitive than smaller sensors, but like for like there has been little change.
Alphatron 035W EVF on the Sony PMW-F5 or F55. Zebras at 34%.

I have a few shoots and projects coming up that require a very portable setup with little to no time to use a light meter etc. (tornado chasing next month – anyone want to join me??). Currently the metering and measurement options on the PMW-F5 and F55 are limited to zebras, and the zebras don’t go down below 50%. I’m going to be shooting 4K raw, so the camera will be in S-Log2. I can use a LUT to display an S-Log to 709 image in the viewfinder, but this makes it hard to appreciate the full range of what the camera is capturing. When shooting a dark storm against a bright sky the dynamic range of the scene can be massive, so I like to see the native image rather than a LUT to help judge overexposure a bit more accurately. When I’ve done this before, as an exposure tool I’ve taped a grey card to the car so that if I need a quick exposure reference I can point the camera at the card and, in the case of the PMW-F3, use the centre spot meter to get a quick exposure guide. The issue is that for S-Log2 middle grey should be approx 34%, so zebras that only go down to 50% are not much use. I can use white as an alternative, which should fall around 68%, but it’s not ideal. Anyway, I was exploring various options when I remembered that my Alphatron EVF had zebras that could easily go down to 34%. So I decided to check out the Alphatron on my F5 as an alternative to my Sony L350. Both LCD panels have similar resolution, so it was interesting to compare them anyway.

The Sony L350 EVF is a very nice viewfinder, but it’s not cheap, running at around £2K/$3K (although that does include the mount). It has very good contrast and resolution that is high enough that you can’t see the pixels (just) when you look through the monocular. It’s also very versatile as the monocular flips up, both towards the rear and side.
The Alphatron EVF-035W-3G is also a very nice viewfinder, and at around half the price of the Sony it is considerably cheaper. It only opens up to the rear, but it does incorporate a very handy shutter in the loupe that, when closed, will prevent sun damage to the LCD screen. Interestingly, both viewfinders specify the same 960×540 half-HD resolution and contrast ratios of 1000:1. One side note: if you want a rubber eye cup with a set of rubber blades that open as you put your eye against the eyepiece to prevent the sun from damaging your expensive viewfinder, BandPro sell them for about $160 each.

Back to the viewfinders… So how different are they? Well, to be honest, not very different. My Alphatron is an old pre-production one, so may be very slightly different to a production unit. Looking into the viewfinder loupe, the image in the Alphatron is considerably larger than in the Sony; you can just see the pixels in the Alphatron, but not in the Sony. This is simply due to the greater magnification from the optics in the Alphatron. The screen sizes and resolutions are the same. I think the Sony optics are a little better, with fewer aberrations and less distortion, but the viewed image is much smaller. When focussing I found both to provide similar performance; I could focus equally well with both viewfinders. If anything the Alphatron has a slight edge due to the larger image, but it’s a close call.

You can zoom in pixel-to-pixel on both viewfinders, and both have peaking, possibly marginally better on the Sony, but again there is really not a great deal of difference. Interestingly the Sony peaking system works on vertical edges while the Alphatron appears to favour horizontal ones.
Contrast, brightness, colour and smear wise, both EVFs are again very similar; maybe the Sony is just a little better on contrast. I think I might need to calibrate the colours on my Alphatron slightly, which is easy enough in the menus. I do suspect that they are both using the same LCD panel. Powering and feeding the Alphatron is simple enough: I used a D-Tap to TV-Logic power adapter cable for this test and then took an SDI feed from the Sub SDI bus. But you could also use one of the Aux power outputs on the V-Mount adapter or R5 to power the Alphatron only when the camera is on.

There you have it – the Alphatron 035W EVF is a legitimate option for use with the PMW-F5 and F55. The ability to use the zebras to measure S-Log2 middle grey is a nice bonus, and in addition you have other exposure tools such as false colour. Oh, if only I had these with the Sony EVF! I’m going to have to think long and hard about this. If I had thought about it sooner I could have saved myself £2K by not getting the Sony EVF and using the Alphatron that I already owned. Where possible I will use my TV-Logic 056W monitor (see my review of this great monitor here) with its built-in waveform display for accurate exposure assessment, but sometimes it’s not practical to have a 5.6″ monitor hanging off the side of the camera, and in this situation the extra exposure tools of the Alphatron will be very handy. One last thing: if you are thinking of going down the Alphatron EVF route, do remember you will need a bracket of some kind. The F5/F55’s handle has plenty of 3/8″ and 1/4″ threads, plus there are a few on the top of the camera body, so there are lots of options. I have the Element Technica Micron top plate and handle and I used a bracket from this. ET do make a dedicated mount for the Alphatron finder that is very nice.
What is PsF, or why does my camera output interlace in progressive?
This one keeps coming around again and again and it’s not well understood by many.
When the standards for SDI and connecting devices via SDI were originally set down, everyone was using interlace. The only real exception was people producing movies and films in 24p. In the 1990s there was a need to transfer film scans to digital tape and to connect monitors to film scanners. This led to the adoption of a method of splitting a progressive frame into two halves, by splitting out the odd and the even numbered lines and then passing these two halves of the progressive frame within a conventional interlaced signal.
In effect the odd numbered lines from the progressive frame are sent in what would be the upper field of an interlace stream, and then the even numbered lines in what would be the lower field. So the progressive frame gets split into two fields, just like an interlaced video stream, but as the original source is progressive there is no time difference (temporal difference) between when the odd and even lines were captured. So despite the split, what is passed down the SDI cable is still a progressive frame. This is PsF (Progressive Segmented Frame).
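A few lines of Python make the segmentation and reassembly easy to picture (a frame is represented here simply as a list of line strings; this only illustrates the line split, not any SDI signalling):

```python
# A minimal sketch of Progressive Segmented Frame: the odd and even numbered
# lines of one progressive frame travel as two "fields" and then interleave
# back into the identical frame. A frame here is just a list of line strings.

def segment_frame(frame_lines):
    """Split a progressive frame into its two PsF segments."""
    first_segment = frame_lines[0::2]    # lines 1, 3, 5, ... (odd-numbered lines)
    second_segment = frame_lines[1::2]   # lines 2, 4, 6, ... (even-numbered lines)
    return first_segment, second_segment

def reassemble_frame(first_segment, second_segment):
    """Interleave the two segments back into the original progressive frame."""
    frame = []
    for odd_line, even_line in zip(first_segment, second_segment):
        frame.extend([odd_line, even_line])
    return frame

frame = [f"line {n}" for n in range(1, 9)]
seg_a, seg_b = segment_frame(frame)
# Both segments come from the same instant in time, so reassembly is lossless,
# unlike true interlace where the two fields are captured at different moments.
assert reassemble_frame(seg_a, seg_b) == frame
```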
This system has the added benefit that even if the monitor at the end of the SDI chain is interlace only, it will still display the progressive material more or less correctly.
But here’s the catch. Because the progressive frame, split into odd and even lines and then stuffed into an interlace signal looks so much like an interlace signal, many devices attached to the PsF source cannot distinguish PsF from real interlace. So, more often than not the recorder/monitor/edit system will report that what it is receiving is interlace, even if it is progressive PsF. In most cases this doesn’t cause any problems as what’s contained within the stream does not have any temporal difference between the odd and even lines. The only time it can cause problems is when you apply slow motion effects, scaling effects or standards conversion processes to the footage as fields/lines from adjacent frames may get interleaved in the wrong order. Cases of this kind of thing are however quite rare and unusual.
Some external recorders offer you the option to force them to mark any files recorded as PsF instead of interlace. If you are sure that what you are sending to the recorder is progressive, then this is a good idea. However you do need to be careful, because what will screw you up is marking real interlace footage as PsF by mistake. If you do this the interlaced frames will be treated as progressive. If there is any motion in the frame then the two true interlace fields will contain objects in different positions; they will have temporal differences. Combine those two temporally different fields together into a progressive frame and you will see an artefact that looks like a comb has been run through the frame horizontally. It’s not pretty and it can be hard to fix.
So, if you are shooting progressive and your external recorder or other device says it’s seeing interlace from your HDSDI, don’t panic. This is quite normal and you can continue to record with it.
If you are importing footage that is flagged as interlace but you know is progressive PsF, in most edit packages you can normally select the clips and use “interpret footage” or similar to change the clip header files to progressive instead of interlace, and again all will be fine.
UPDATE: Since first writing this the use of a true 24/25/30p progressive output has become far more common. PsF still remains a perfectly valid ITU/SMPTE standard for Progressive, but not every monitor supports it. Early implementations of 24/25/30p over SDI were often created using non standard methods and as a result there are many cameras, monitors and recorders that support a 24/25/30p input or output, but may not be compatible with devices from other manufacturers. The situation is improving now, but issues remain due to the multitude of different standards and non standard devices. If you are having compatibility issues sometimes going up to 50p/60p will resolve it as the standards for 50/60p are much better defined. Or perhaps you may need to use a device such as a Decimator between the output and input to convert or standardise the signal.
PMW-F5 and PMW-F55 Gotchas, issues and workarounds for early firmware cameras.
So, I’ve just spent 3 days demoing the F5 and F55 at the BVE show in London. It’s actually been a great learning experience for me as, even though I have been lucky enough to have shot with the cameras several times already, at BVE I was asked to show all kinds of different modes and setups, many of which I have not used myself. In doing so I came across a few anomalies in the way the menus work, a few of what Sony might call “features”. Anyway, I thought I would start to list them here in case anyone else gets stuck, or perhaps more importantly as a reminder to myself of how to get around these features. As I come across more I’ll add them here. The cameras are shipping with firmware that is still in development. There are some bug fix firmware releases coming very soon (maybe even in the next few days) and we can expect many small updates over the coming months as more feedback makes its way back to Sony. Overall everything works, but there are a few peculiar things that might trip you up. These notes are to the best of my knowledge correct at the time of writing, but may become out of date as new firmware is released.
Remember that if you’re shooting 4K 4096 x 2160 the aspect ratio is 17:9, so you might want to add a 16:9 framing marker from the viewfinder marker menu.
Understand the “Base Setting” shooting modes in the “System” menu. Make sure you read and understand page 32 of the manual. There are two key modes, Cine EI mode and Custom mode. Cine EI mode locks the camera into Exposure Index, S-Log2 mode. You need to be in this mode if you want to shoot raw on the R5; in fact you can only select this mode if you have an R5 attached. You cannot change the ISO in this mode. When you’re in this mode your gain is indexed, that is, the camera always records at the base ISO (1250 on the F55 and 2000 on the F5). I assume that in later firmware you will be able to index the ISO, i.e. change the ISO of the LUTs and metadata to fine tune your dynamic range. The colour gamut is S-Gamut. Only the colour temperature can be changed.
In Custom mode there are two colour gamut sub-modes, Normal and S-Gamut. S-Gamut offers a wider colour gamut than the normal colour gamut. In either mode the colour temperature can be adjusted, as can the gain/ISO. However in S-Gamut you can only choose between the 3 preset white balance settings of 3200, 4300 and 5500. If you want to dial in your own white balance or set a manual white balance (done in the camera menu) you have to be in “Normal” mode. A common reason for not being able to change the gamma curve (gamma options greyed out) is having the camera set to S-Gamut, as this locks the camera to S-Log2.
In order to record raw with the R5, set “Shooting Mode” in “Base Setting” of the System menu to “Cine EI” and “Main Operation” in “Base Setting” to “RAW”.
To output 4K using HDMI you must have the camera set to 4K 4096 x 2160. Then you have to go to the video output menu, output page and first turn off the 4K SDI output. Once you turn off the 4K SDI output you will then see the option to turn on the 4K HDMI output. You won’t see the 4K HDMI option until you have turned off the 4K SDI.
MLUTs are only available when in Cine EI mode. In Custom mode, even if you select S-Log as your gamma curve, you won’t get any MLUTs. You have to be in Cine EI mode and have MLUTs activated for the viewfinder or one of the other outputs to get the MLUT options; they only appear in the menu when the camera is set to Cine EI.
Exposure disparity between XAVC S-Log2 material and raw. I need to try to get to the bottom of this one. Some of my raw material ends up looking hugely overexposed compared to the S-Log2. Anyone else seeing this? It all looks fine in the viewfinder when I’m shooting, but the raw looks overexposed while the S-Log2 is OK. So far it has always graded back to sensible levels.