Here is a short selection of a few clips that I shot around the Cape Peninsula while waiting for my flight home after some workshops. Shot on an F55 with 20mm and 85mm Sony PL lenses. Most of it is 2K raw at 240fps, but there are some normal speed shots in there too, as well as some S&Q motion time-lapse. A big thank you to Charles Maxwell for taking the time out to drive me around the Peninsula.
PMW-300 and PMW-200 Firmware Update. Reduces Chromatic Aberration.
Quite a few PMW-300 users have been having issues with Chromatic Aberration (CA), typically seen as pink and blue halos around areas of high contrast. CA is caused by the fact that different wavelengths of light are bent and focussed at different points by the lens, so blue light will be out of focus when red is in focus and vice versa. There are two ways to combat this. The first is the use of combinations of special (and often very expensive) glass that compensates for it; the other is electronic correction performed in the camera. One issue with the optical approach is that the sharper you make the lens, the worse the problem becomes: if you bring red into very precise focus, the slight defocus of the blue becomes more noticeable. So as we raise the resolution of the cameras we shoot with, and thus need ever sharper and higher resolution lenses, the problem becomes harder to deal with purely optically. As a result, modern video cameras rely more and more on electronic CA reduction, sometimes called ALAC (Automatic Lens Aberration Correction).
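To illustrate the electronic approach (this is just my own illustrative sketch in Python, not Sony's ALAC algorithm, which has not been published): lateral CA behaves as if the red and blue channels were rendered at very slightly different magnifications to the green channel, so re-scaling those channels about the image centre pulls the coloured fringes back into line. The scale factors used here are purely hypothetical; a real camera looks up corrections from lens data for the current focal length and focus distance.

import numpy as np
from scipy.ndimage import zoom

def reduce_lateral_ca(rgb, r_scale=1.0008, b_scale=0.9992):
    """rgb: float image of shape (H, W, 3). The scale factors are hypothetical examples."""
    h, w, _ = rgb.shape
    out = rgb.copy()
    for ch, s in ((0, r_scale), (2, b_scale)):   # red and blue channels only
        scaled = zoom(rgb[..., ch], s, order=1)  # resample the channel
        sh, sw = scaled.shape
        if s >= 1.0:                             # enlarged: crop back to the original frame
            t, l = (sh - h) // 2, (sw - w) // 2
            out[..., ch] = scaled[t:t + h, l:l + w]
        else:                                    # shrunk: paste the result back, centred
            t, l = (h - sh) // 2, (w - sw) // 2
            out[t:t + sh, l:l + sw, ch] = scaled
    return out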
Sony’s EX cameras include ALAC and it does a really good job of masking the CA. The PMW-200 and PMW-300, even though they both use the same lens as the EX cameras, show a lot more CA. Sony have now addressed this and released firmware updates for both the PMW-200 and PMW-300. It appears that with older firmware versions the ALAC only compensated for horizontal aberrations. The new firmware improves the horizontal correction and adds vertical correction. The difference this update makes is in most cases quite dramatic, almost totally eliminating the CA.
You can download the updates from here: http://support.sonybiz.ca/esupport/Navigation.do?filetype=Firmware
PMW-300 V1.12 http://support.sonybiz.ca/esupport/DownloadView.do?id=10193&eulaId=20001
PMW-200 V1.30 http://support.sonybiz.ca/esupport/DownloadView.do?id=10190&eulaId=20001
XAVC write back from Adobe Premiere CC.
FINALLY!!!! I have got this working!
It’s been possible to create XAVC files from Adobe Premiere for a little while, but until today I had never managed to create a file that will actually play back in a Sony camera. Today, however, I finally have it working.
So what do you need to do to make it work? First of all make sure you have the correct versions of the software. You will need Adobe Premiere CC version 7.2.1 and Adobe Media Encoder CC version 7.2.0.43 or later. In addition you will want Sony Content Browser version 2.2 or later.
Complete your edit as normal in Premiere, then go to “Export” – “Media” to open the export dialog. Under “Format” choose “MXF-OP1a”.
Make sure the “Export Audio” check box is ticked.
Under the “Video” tab for the encoding properties use the “video codec” selector to choose the type of XAVC you want. Currently you can select between HD, 2K, 3840×2160 and 4096×2160 (remember an F5 can only play back HD or 2K). Then select the frame rate you desire.
Now go to the Audio tab. The audio codec should be “uncompressed”. Under the “Basic Audio Settings” you need to select the following:
Channels: 8 Channel
Sample Size: 24 bit
At this point it is probably a good idea to save your settings to create a preset for XAVC to save time next time you want to export an XAVC file.
Now either use the direct export button to render your XAVC MXF file or use the queue button to add it to the render queue in Media Encoder. I find Media Encoder faster, so I normally use the queue function.
Once the clip has rendered, you are done with Premiere and Media Encoder. Now you need to open Sony’s Content Browser.
Format an SxS card in the camera and either insert it in your card reader or connect the camera to the computer via USB. From within Content Browser select the root folder of your SxS card (the card itself, not any of the folders on the card). Then either right click on the card or go “File” – “Import”. Now navigate to the folder where you saved your freshly rendered XAVC file and click “Start”. The clip will be copied to your SxS card and the appropriate XML files and other data added. Once done, eject the SxS card and put the card in the camera or disconnect the USB cable from the camera (use the proper “Eject” function first) and you should then be able to play back the clips in camera (make sure the camera frame rate matches that of the clips).
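One extra check I like to do before the Content Browser import (my own habit, not part of Sony's or Adobe's documented workflow) is to confirm that the rendered MXF really does have the resolution and frame rate I intended, since the camera frame rate needs to match the clip. If you have FFmpeg installed, a few lines of Python around ffprobe will report this; the file name below is just a hypothetical example.

import subprocess

def probe_mxf(path):
    """Print the video stream's codec, frame size and frame rate for an MXF file."""
    cmd = [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,r_frame_rate",
        "-of", "default=noprint_wrappers=1", path,
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

probe_mxf("MYCLIP.MXF")  # hypothetical file name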
New AXS cards for Sony raw.

Well, I’ve been away the last couple of weeks shooting the Northern Lights in real-time 4K. A full write up will follow.
While I was away, Sony Japan made an announcement about some new AXS cards for recording raw in the R5 recorder. The new cards are the same size as an SxS card and come in two capacities, 512GB and 1TB. To use the new cards in the R5 you use an adapter that adapts the smaller, SxS-sized card to the R5’s original, larger card slot.
What this will allow in the future is the use of a single card reader for both conventional SxS and the new smaller AXS cards. Who knows what else might be possible in the future? Maybe recording raw internally in the F5/F55, higher speed recording or higher data rate for XAVC. The 1TB card is projected to cost around $4K and the 512GB card around $2K. They will be available some time around the beginning of March.
Sony launches 4K Handycam and new Action Cam.

CES, the biggest consumer electronics show in the world, is underway in the USA.
If anyone has any doubts that 4K is real and that it’s coming down on us like a steam roller, then CES is where you need to take a look, as 4K is one of the big features of the show, with everything from 4K TVs to 4K computers and 4K cameras.
From Sony we have one new 4K camera and a major update to the ActionCam (as well as an interesting baby Sony Alpha camera that shoots HD). NOTE I INCORRECTLY STATED EARLIER THAT THE ACTION CAM WAS 4K. IT IS NOT, IT IS HD.

The FDR-AX100 is a compact handheld camcorder that boasts 4K performance from a 1″ 20MP sensor (14MP in video mode) that sits behind a 12x f2.8-f4.5 lens. It even has built-in ND filters. It records at up to 30fps using Sony’s XAVC-S codec, so that means it’s limited to 3840×2160 Quad HD.
One thing that surprises me is the recording media. My first thought was that it would use XQD, but it does not. For XAVC-S recording it uses cheap SDXC Class 10 cards!
There is a single large ring on the lens that looks like it can be switched between zoom control and focus control and then a small wheel for iris under the lens barrel.
Of course until I get my hands on one I can’t comment on the image quality, but if this sensor is similar to the one in the RX100 stills camera then it might be surprisingly good. I will be considering this camera as a grab and go companion for my PMW-F5. The price? Well, Sony are marketing this as 4K for 2K, with the price set to be around $2K USD and availability at the end of March.

Another interesting camera is the HDR-AS100V. No, it’s not 4K, it’s only HD. This is a new addition to Sony’s Action Cam range. It appears to be exactly the same size as the AS-15 and AS-10, but it’s now white and shoots HD at up to 60fps using Sony’s XAVC-S codec. The camera’s body is now splash proof, so you can use it for many applications without the waterproof housing.
The sensor is a new EXMOR-R back illuminated sensor, up from 16MP to 18MP, boasting better low light performance and less noise. Like the previous models the camera has image stabilisation, which has always been one of the key points of the Sony cameras for me. Wi-Fi remote control and NFC are included, and coming in the summer is the ability to stream live from the camera. The price… well, a very competitive $299 USD and available in March (just in time for storm chasing season).
New Log curve and colour space on the PMW-F5 and F55, SLog3 and SGamut3.cine
So, Sony have added a second log curve and a new colour space to the PMW-F5 and F55 cameras. This new curve and colour space does not replace the existing SGamut colour space or SLog2 gamma curve. It gives a new option, but what does this new option bring to the party?
Well, first off, I’m still waiting on the official white paper from Sony, so I’m making a lot of assumptions here based on what I have been able to work out for myself by testing the camera and plotting the gamma curve and gamut. The new gamma curve is very close to the Cineon gamma curve and the colour space is very close to the DCI P3 standard.
Looking at the colour space, SGamut3.cine is very close to SGamut. So if you want the biggest possible colour gamut you should probably continue to use SGamut. But SGamut is such a large colour space that it really needs special and careful handling in post production to get the most from it. I think we all know that when you shoot with a gamma curve with a very big dynamic range and look at it on a conventional monitor it looks very flat and lacks contrast. Well, the same happens with a colour gamut. Look at SGamut material on a conventional monitor and it lacks colour contrast and looks washed out. The temptation is to simply crank up the saturation to compensate, but this is far from ideal and can result in some undesirable colour shifts. What you really need to do is use a LUT to convert the SGamut colour space into the display range you are working to, but this requires careful fine tuning to match the colour space you are delivering in. For example, you will want different LUTs when outputting for Rec-709 video than when outputting for a P3-compliant cinema DCP.
So it appears that to try to make things a little simpler for colourists, Sony have added the SGamut3.cine colour space. This is still a very big colour space, but now very close to the industry standard DCI P3 colour space used for digital cinema post production and presentation. Looking at the SGamut3.cine images in Resolve, the colours look washed out, as expected, but appear to be very true to life. So any LUTs used will have to do less work to bring your colours accurately into your chosen output colour space. By starting off with this new colour space it looks like it will be easier for colourists to quickly manipulate the image without needing complex LUTs. It also appears closer to Rec-709, so again it should be a little easier and quicker to work with in video post production. Do remember though, if you really want the very best from the camera, SGamut does offer the biggest range, but unless you handle it correctly you may not be getting the most out of it.
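To show what I mean by letting a LUT or matrix do the work rather than the saturation control, here is a very rough sketch of the standard structure such a conversion takes: linearise, remap the primaries with a 3x3 matrix, then re-encode for the display. The matrix values below are placeholders purely for illustration; the real SGamut3.cine to Rec-709 coefficients will come from Sony or from your grading software, not from me.

import numpy as np

PLACEHOLDER_MATRIX = np.array([   # hypothetical coefficients, for illustration only
    [ 1.5, -0.4, -0.1],
    [-0.1,  1.3, -0.2],
    [ 0.0, -0.3,  1.3],
])

def camera_to_display(rgb_linear, matrix=PLACEHOLDER_MATRIX, display_gamma=2.4):
    """rgb_linear: linear-light pixels, shape (..., 3). Returns display-referred RGB."""
    converted = rgb_linear @ matrix.T            # remap the camera primaries
    converted = np.clip(converted, 0.0, 1.0)     # clip anything the display cannot show
    return converted ** (1.0 / display_gamma)    # simple display gamma encode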
So what about the new SLog3 gamma curve? Being very close to the Cineon gamma curve, SLog3 mimics film a little more closely than SLog2 (Cineon was designed to mimic film, while SLog2 is designed to maximise the data recorded from a video sensor). This means that SLog3 has a density response similar to film, which in turn means that your exposure will be brighter than SLog2. Nominal middle grey with SLog3 is somewhere around 40% and 90% white is somewhere around 61% (I’m awaiting the official white paper for confirmation, but my own plots should be within a couple of percent). The peak recording level with SLog3 is 94%; it does not go above that, unlike SLog2, which can go all the way up to 108%. This lower peak recording level probably comes from the fact that Cineon allows for an even greater dynamic range than 14 stops, but 94% represents a film stock with a 14 stop range. A lower maximum recording value does not mean less dynamic range, just less data being used to record the total range.
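For anyone who wants to check the numbers, this is how I arrived at those figures. The sketch below uses the log encoding that my plots of the camera’s output appear to follow; treat the constants as provisional until Sony’s white paper confirms them. It converts scene reflectance to a 10 bit code value and then expresses it as a percentage of the legal 64 to 940 range, which is how the levels above are quoted.

import math

def slog3_code_value(reflectance):
    """Provisional SLog3 encoding: scene reflectance (0.18 = middle grey) to a 10 bit code value."""
    if reflectance >= 0.01125:
        return 420.0 + math.log10((reflectance + 0.01) / (0.18 + 0.01)) * 261.5
    return reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0

def code_value_to_percent(cv):
    """Express a 10 bit code value as a percentage of the legal 64 to 940 range."""
    return (cv - 64.0) / (940.0 - 64.0) * 100.0

for label, refl in (("middle grey (18%)", 0.18), ("90% white", 0.90)):
    cv = slog3_code_value(refl)
    print(f"{label}: code value {cv:.0f}, roughly {code_value_to_percent(cv):.0f}%")
# Prints roughly 41% for middle grey and 61% for 90% white, in line with the figures above.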
What this means is that SLog3 allocates more data to the mid to low range than SLog2, so shadows and darker mid tones will have slightly more levels per stop than SLog2. Mid tones will also look brighter, as the exposure is slightly brighter. But this comes at the expense of the range from 90% white and above being recorded with less data. Brighter SLog3 skin tones will have very slightly fewer data levels than SLog2, while darker skin tones will have a little more. SLog3 will probably be easier for most people to expose correctly as it is a bit brighter overall, but it will be a little less forgiving of any over exposure compared to SLog2.
In post production, as the curve is so close to Cineon, you will be able to use almost any Cineon compatible LUTs or looks. It’s also very, very close to Arri’s Log-C curve, so LUTs designed for Log-C will work very well with SLog3 and SGamut3.cine, making it much easier for many colourists to transition from cameras like the Alexa or a film based workflow to material from the F5/F55.
In summary, the new SLog3/SGamut3.cine combination will be a little easier to expose by eye or with a light meter as the density profile is similar to film. However, it will be a little less forgiving of any over exposure. In post it may be a little easier to work with as it is so similar to Cineon, Log-C and P3, the new colour space possibly providing a better and easier to manipulate “out of the box” image. However, SGamut still appears to offer the largest colour gamut, and SLog2 offers a little better over exposure latitude while SLog3 has a little better under exposure latitude.
PXW-Z100 Frame Grabs
I got a chance to go and shoot with Omega Broadcast’s demo PXW-Z100 today. I had really forgotten how nice it is to have a 20x zoom lens on a camera.
Anyway attached are a couple of 4K frame grabs for you to take a look at. Click on the thumbnails to go to the full size image. But do remember these are 8 bit jpegs, so not quite as good as the original image. I’ll be writing a full review of the camera very soon. Overall I really like it. Solid build, good pictures, 20x zoom and 4K.



Want to know more?
Have you read something here that you don’t fully understand? Are you looking at getting into the tough world of professional video production? Need to improve your green screen or chroma key skills? I’m running a range of workshops for all skill levels in Austin, Texas next week at Omega Broadcast. Click here for more details.
If you can’t get to Austin, how about Toronto, Canada? I’m working with Vistek Toronto and will be holding workshops at SIRT Pinewood Studios on the 12th and 13th of December. Click here for full details.
Come and join me. I’m really good at teaching, in a straightforward way, stuff that can be difficult to get your head around.
Understanding Log and Exposure Levels (also other gammas). PLEASE READ and understand.
Please, please read this and try to understand how shooting with a high range gamma curve, such as a cinegamma, hypergamma or log recording, works. The principles are not well understood; even highly experienced DPs and DITs get this horribly wrong.
Why do so many get it all wrong? Because we are brought up looking at a monitor or viewfinder and seeing a picture that looks correct.
Why doesn’t the picture look right when we shoot log (or another extended range gamma)? It’s simply because the monitor does not have the right gamma curve (unless you have a log monitor), so there is a mismatch between the camera and the monitor.
So what does this mean? DO NOT USE THE MONITOR TO JUDGE YOUR EXPOSURE unless you have a well calibrated Look Up Table between the camera and monitor!
For many people this takes a huge leap of faith. To shoot with a picture that looks wrong goes against everything most camera people are taught. Directors and producers will look at the monitor and not like what they see, perhaps encouraging you to adjust your exposure, because it looks wrong. In the end many give in and instead of exposing the Log or other gamma correctly they will adjust the exposure to something they are more comfortable with, something that is a bit brighter. But this is a mistake, an easy one to make but one that may mean your pictures just won’t look as good as they should. Please see this article on exposure with extended range gamma curves.
Some more things to consider before I go further:
Most TV and film production monitors are based on the REC-709 standard. The input into these monitors will normally be digital, either HDSDI or HDMI.
A digital signal contains a range of data values. For 10 bit video we have a total range of code values from 0 to 1023. Our monitor will show value 64 as black (the values below this are used for super blacks and sync) and value 1019 will make the monitor show the brightest level that it can. Normally value 940 is considered “white” and anything above this is brighter than white. The recording may be 8 bit rather than 10 bit; 8 bit uses values from 0 to 255. For this article I will use 10 bit values, but the principles are exactly the same whether 10 bit or 8 bit. Also, I’m only considering brightness here, not colour.
A typical LCD monitor or TV set has a very limited contrast range and can only display about a 6 or 7 stop dynamic range. OLEDs are a bit better.
Thanks to the Rec-709 gamma curve in the monitor, when we send value 940 to the monitor we see what appears to be white. Send value 64 and we see black; send value 440 (approx) and we see a shade of grey that appears to be half way between black and white, also known as middle grey.
Middle grey is approx 2.5 stops darker than white (as in a piece of white paper or similar), and if we go around 2.5 stops darker than middle grey we will see something very close to black. So we can see that using values 64 to 940 we will get around a 5 stop dynamic range on the monitor, with a bit of extra range from value 940 to 1019, so overall there’s our typical 6 stop monitor range.
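If you want to check that stop arithmetic for yourself, the sums are simple enough to do in a couple of lines (using a 90% reflectance white card and 18% middle grey as the reference points):

import math

white, grey = 0.90, 0.18                 # typical white card and middle grey reflectances
print(math.log2(white / grey))           # ~2.3 stops: white sits roughly 2.5 stops above grey
print(grey / 2 ** 2.5)                   # ~0.03: another 2.5 stops below grey is near black
# So value 64 (black) to value 940 (white) covers roughly 5 stops, and the values from
# 940 to 1019 add a little more: the typical 6 stop monitor range described above.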
Now, what happens if we have a camera with a much greater dynamic range than 6 stops? Well, the monitor can never show the camera’s bigger range accurately as it can only ever show 6 stops; if we feed, say, 14 stops into the monitor, the brightness range on the monitor will still only be 6 stops. So now the contrast of the picture is reduced as we are squeezing the camera’s large contrast range into the monitor’s much smaller contrast range.
Now let’s consider the camera.
Let’s consider a Rec-709 camera. If I shoot a white card I record it using value 940, and if I shoot a grey card I record it using value 440. That way the white card looks white and the grey card looks grey on my monitor, which uses those same levels for those same shades, and I have a little bit of extra space above 940 for a little extra dynamic range. Remember, near black to white is approx 5 stops of dynamic range.
But what if I want to extend my range beyond 5 stops? If white is value 940 and my top limit is value 1019, I really don’t have a lot of data space to record a load of extra range, so I have to do something else.
What do the camera manufacturers do to record a bigger dynamic range? They shift the data values used down. Taking SLog2 as an example, instead of using value 940 to represent white, they now use value 600 (approx), and for middle grey, instead of value 440 we now use value 347. This now gives us a large amount of spare data from value 600 to 1019 to record a greatly extended range beyond our original 5 stops.
This shift downwards of our data levels does not just happen with log recording, it also happens when you use almost any non-standard gamma curve. For example, Sony’s Hypergammas and Cinegammas also lower the value for white to somewhere between 700 and 800, and middle grey can go as low as value 320 (depending on the curve used). Again, this frees up the extra data above white to extend the dynamic range beyond our Rec-709 6 stops.
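Pulling those approximate levels together in one place (these are the rough figures quoted above, from my own measurements, not official specifications):

levels = {
    #                   white, middle grey (approximate 10 bit values)
    "Rec-709":          (940, 440),
    "Hypergamma/Cine":  (700, 320),   # white lands anywhere from ~700 to ~800 depending on the curve
    "SLog2":            (600, 347),
}
for gamma, (white, grey) in levels.items():
    print(f"{gamma:>16}: white {white}, grey {grey}, values free above white: {1019 - white}")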
But this now gives us a problem. If I am using SLog2 and expose CORRECTLY, and as a result record middle grey at value 347 (32%), when I send value 347 to my Rec-709 monitor it will look dark, because a Rec-709 monitor will show value 347 as darker than value 440, and so darker than the normal middle grey displayed by the monitor.
It’s very, very important to understand that just because the picture looks dark, you are NOT under exposed in any way. It is just the mismatch between the camera and monitor that is making the picture LOOK dark. IT IS THE MONITOR THAT IS WRONG, NOT YOUR EXPOSURE.
Now the next common mistake is the thought that: “OK, my picture looks dark, so when I take it into post production and raise the levels, it’s going to get noisy”. Well, this is to a small degree true, but it is not nearly as bad as many assume. The reason is that you must remember that YOU WERE CORRECTLY EXPOSED. You are not trying to lift an under-exposed image. Remember what I said at the beginning: “The noise in a digital camera comes almost entirely from the sensor”.
So, with the same camera, if we expose any given gamma correctly, then as the amount of light falling on the sensor is the same, the ratio of sensor noise to signal coming from the sensor does not change. Taking a face as an example, exposed correctly (i.e. with middle grey at the correct level for the gamma curve in use), the amount of noise on that face will remain constant across all the different gamma curves. Do note however that some cameras may have different ISO ratings for different gammas and this might have a small impact on noise levels (but that’s the subject for a different article).
Now consider what happens when we go into the edit suite. If the gamma you are shooting with is quite close to the gamma curve of your target display device, which in most cases will be Rec-709 for TV or the web, then a small level change in post will bring middle grey and your whites up to the levels the monitor is expecting and won’t add any significant noise. After all, we are working with digital images and digital processing, and don’t forget: you were not underexposed, just using different data values to represent different brightness levels.
But what about a more aggressive gamma curve like SLog2 or any other log gamma? This is going to need some big level changes, so surely this is going to get noisy? Again, no, not if you handle it correctly. You really should be using a dedicated grading tool for any log material, as this will apply corrections that are designed for log and will minimise any added noise. The other thing to consider is that this is where you should be using a LUT, or Look Up Table, on your output to convert your data values from log values to Rec-709 values.
By placing a LUT on the output of your project you shift your data levels from one range to another. Your grading is done to the original material in its original range, so that you retain that full range, and then the LUT is used at the end of your grade (on the last node) to convert your data values from log values to 709 values. When you do this you are simply moving your data values. So if the original input value for a part of the image is 347, SLog2 middle grey for example, on your output you just use value 440 (709 middle grey) instead. You’re just transposing data from one range to another, and this does not add noise in the same way that adding gain does.
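To make “transposing data rather than adding gain” a bit more concrete, a 1D LUT is really just a remapping table: each input code value is looked up and replaced with an output code value. The toy example below uses only three anchor points built from the approximate levels discussed above; a real log to 709 LUT has hundreds or thousands of entries generated from the actual curve maths, but the principle is the same.

import numpy as np

slog2_points = np.array([64.0, 347.0, 600.0])   # black, middle grey, white (approx SLog2 values)
rec709_points = np.array([64.0, 440.0, 940.0])  # the levels a Rec-709 monitor expects

def apply_toy_lut(code_values):
    """Remap SLog2-ish code values to 709-ish code values by interpolation."""
    return np.interp(code_values, slog2_points, rec709_points)

print(apply_toy_lut(np.array([347.0, 600.0])))  # middle grey becomes ~440, white becomes ~940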
Now, looking at log and the way it works: note that in order to squeeze 14 stops of dynamic range into a normal recording codec, a lot of compression is used in the brighter stops. Remember, every time you add a stop of exposure, to record everything in that additional stop you should be recording it with twice as much data as the previous one. But that’s impossible with conventional recording; the amount of data required is simply too big. So log records every stop using roughly the same amount of data. This means that the brighter stops are very highly compressed, so it’s very important not to over expose log if you want the best results.
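To put a rough number on that: if you tried to record 14 stops with a linear allocation, each brighter stop would need twice the code values of the stop below it, so the top stop alone would swallow around half of all the values available, while log style encoding gives every stop roughly the same modest share. The sketch below is idealised arithmetic only, not the exact allocation of any real curve.

usable_values = 940 - 64          # the nominal legal 10 bit range used in this article
stops = 14

# Linear: each stop needs twice the values of the stop below it.
shares = [2 ** s for s in range(stops)]
top_stop = usable_values * shares[-1] / sum(shares)
bottom_stop = usable_values / sum(shares)
print(f"linear: top stop ~{top_stop:.0f} values, bottom stop ~{bottom_stop:.2f} values")

# Log style: every stop gets roughly the same share.
print(f"log style: each stop gets ~{usable_values / stops:.0f} values")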
So in summary: when you shoot and expose correctly with a gamma curve with a large dynamic range (cinegamma, hypergamma, log etc.) it will look darker on your conventional monitor or viewfinder. That is how it should be, that is correct exposure; you are not underexposed, so the picture will not be noisy. The dark looking picture is because your monitor gamma does not match the camera’s; it is the monitor that is wrong, not your exposure. The picture will not be noisier than any other correctly exposed picture, even though it looks dark because of the monitor mismatch. So have the confidence to shoot with these slightly dark looking images, especially if you’re shooting log, where over exposure can seriously compromise your end results.
Convergent Design Odyssey 7Q has landed.

This is not a review, just my first impressions. Let me start by saying I have a very good relationship with the team at Convergent Design, so maybe I’m biased. But then I’ve always liked their products, and that’s really just because they make really good, innovative gear at prices that shake up the competition.
I was going to hold off on writing about the 7Q until I could put together a more in depth review and a video to explain the key features and camera setup. I’m still going to do that, but I’m just so impressed by the 7Q that I wanted to share my first impressions.
First of all, it is light for its size and it’s also low power. I’ve been running it off a single NP-F970 battery and I get at least a couple of hours from that small battery.
There are a few very small things that are not perhaps obvious in the setup and workflow, but they are very minor. For example, when you want to switch from normal recording modes to recording raw from the FS700 you must first load the software from a storage memory area in the unit into the operating area, and that takes around 3 minutes. Also, before you can view your rushes on a computer you have to run a routine on the Odyssey that closes any open files and makes the clips visible to the computer’s software. This is part of the safe eject process and takes a few moments; you can’t just pull the SSDs out and play the footage back, you must eject them correctly.
That screen, oh what a screen. Forget the recording capability for a moment, this is one of the best (if not the best) monitors I’ve ever had. Being able to turn all the key monitor functions like focus assist, zebras and LUTs on and off without having to go into a menu is wonderful, and the display is crystal clear even outside on a sunny day (although a hood will be needed for the very best results). I can see the Odyssey becoming a “go to” monitor for many people; it’s very impressive.
The raw workflow with the FS700 is straightforward once you have your settings correct. It is VERY important to set the FS700 to SLog2 in a picture profile (the 7Q will flash a message on the screen if you don’t) and even more important to make sure you are at 0dB gain, as changing the gain on the camera affects the raw recording level and if you’re not at 0dB you will have reduced dynamic range. The 2K raw pictures look stunning, 95% of what I get with my F5/R5. There are some differences, which I’ll cover in the longer review, and they are more to do with the camera than the 7Q. This is so much better ergonomically than an IFR5/R5, and I think that for FS700 owners in the future 4K compressed will make more sense than 4K raw. Way to go Convergent Design!