Are You Screwing Up Your Footage In Resolve?

First of all let me say that DaVinci Resolve is a great piece of software. Very capable, very powerful and great quality. BUT there is a hidden “Gotcha” that not many are aware of and that confuses many more (including me for a time).

This has taken me days of research, fiddling, googling and messing around to finally be sure of exactly what is going on inside Resolve. I am NOT a Resolve expert, so if anyone thinks I have this wrong do please let me know, but here goes……

These are the important things to understand about Resolve.

Internally Resolve Always Works With Data Levels (code value 0 to code value 1023, or CV0-CV1023 – CV stands for Code Value).

Resolve’s Scopes Always Measure The Internal Data Levels – These are NOT necessarily the Output Levels.

There Are 3 Data Ranges Used For Video – Data: CV0 to CV1023; Legal Video: 0-100IRE = CV64 to CV940; and Extended Range Video: 0-109IRE = CV64 to CV1023 (CV1019 over HDSDI).

Most Modern Video Cameras Record Using Extended Range Video, 0-109IRE or CV64 to CV1019.

Resolve Only Has Input Options For Data Levels or Legal Range Video. There is no option for Extended Range video.

If Transcoding Footage You May Require Extended Range Video Export. For example converting SLog video or footage from the majority of modern cameras which record up to 109IRE.

Resolve Only Has Output Options For Data Levels or Legal Range Video. There is no simple option to output, monitor or export using just the 64 to 1019 range as a 64 to 1019 range.

So, clearly anyone wanting to work with Extended Range Video has a problem. Not so much for grading perhaps, but a big issue if you want to transcode anything. Do remember that almost every modern video camera makes use of the full extended video range. It’s actually quite rare to find a modern camera that does not go above 100IRE.

So why not just use data levels for everything? Well, that is an option. You can set your clip attributes (in the media pane) to Data Levels, set your monitor output to Data Levels and choose Data Levels when you render. In fact this is what YOU MUST DO if you want to convert files from one format to another without any scaling or level shifts. But be warned: never, ever grade like this unless you add a Soft Clip LUT (more on that in a bit), as you will end up with illegal super blacks – blacks that are blacker than black and will not display correctly on most devices.

There are probably an awful lot of people out there using Resolve to convert XAVC or other formats to ProRes and in the process unwittingly making a mess of their footage, especially SLog2 and  hypergammas.

On input you can set the clip attributes to Data 0-1023 or Video 64-940, as well as Auto (in most cases, if Resolve detects luma levels under CV64 the footage is treated as Data, otherwise as Video levels). Anything set to, or detected as, video levels gets scaled from the source's CV64-940 range to Resolve's internal CV0-1023 range.
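The scaling Resolve applies when a clip is flagged as video levels can be sketched with a little arithmetic. This is my own illustration of the mapping described above, not Resolve's actual code:

```python
def legal_to_internal(cv):
    """Map a 10-bit legal-range code value (CV64-940) to a
    CV0-1023 internal range, as described above."""
    return (cv - 64) * 1023.0 / (940 - 64)

print(legal_to_internal(64))    # 0.0    - legal black becomes internal 0
print(legal_to_internal(940))   # 1023.0 - legal white becomes internal 1023
print(legal_to_internal(1019))  # ~1115  - an extended-range super white
                                # lands above 1023, which is why anything
                                # between 100 and 109IRE goes out of range
```

This also shows why a camera clip that uses the extended CV64-1019 range cannot survive a video-levels round trip unscathed: the top of its range maps beyond CV1023.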

As Resolve's waveform/vector scopes etc. always measure the internal scaled range, there is no way to tell just by looking at the scopes what range your original material was in or whether it's been scaled. If you do want to check the range of the source clip, try reducing the video level in the colour panel. If your clip is extended range then you should be able to see the previously hidden high range by pulling the levels down. A legal range clip on the other hand will have nothing above Resolve's CV1023, so the peak level will just drop.

On output you can choose Data 0-1023 or Legal Video 64-960 for your output or monitoring range (Resolve uses 960, which is the CbCr maximum value; the Y maximum is 940). But for the majority of modern cameras, and the many modern workflows where outputting 64-1023 may be required, there is no option! So if you are working with video levels, anything in the extended range ends up either scaled on input, or clipped/range restricted (blacks crushed) on output.

For example:

Import Hypergamma or SLog, which is 64-1023, don't touch or grade the footage, then export using video levels: the range is clipped and will no longer have the highlights recorded above 100IRE in the original. The original input files will be CV64-1023 but the video range output files will be CV64-940; the range is clipped off at CV940 (100IRE). If you set the clip attributes to “video 64-940” then on input CV940 is mapped to CV1023 in Resolve, so anything you shot between 100 and 109IRE (CV940-1019) goes out of range and is not seen on the output (it's still there inside Resolve, but you can't get to it unless you grade the footage). There just isn't a correct option to pass Full Range video through 1:1, unless you use data in, data out, but then you run the risk of having illegal super blacks. If you leave the clip attributes as video and then export using Data Levels, your original CV64 black gets pulled down to CV0 so your blacks are crushed; however, you do then retain the stuff above 100IRE.

If you’re using Resolve to convert XAVC SLog2 or SLog3 to something else, ProRes perhaps, this means that any Look Up Tables used in the downstream application will not behave as expected because your output clip will have the wrong levels. So for file conversions you MUST use data levels on the input clip attributes and data levels on output to pass the video through as per the original, even though you are working with footage that complies with perfectly correct Extended Range video standards. But you must never edit or grade like this as you will get super blacks on your output… unless you generate a Soft Clip LUT.

If you import a full range video clip that goes from CV64 to CV1019(1023) (0 to 109IRE) and do nothing to it, then it will come out of Resolve as either data levels CV0 to CV1023 (-7IRE to 109IRE) or legal video CV64 to CV940 (0-100IRE), neither of which is ideal when transcoding footage.

So what can you do if you really need an Extended Range workflow? Well, you can generate a Soft Clip LUT in Resolve to manage your output range. For this to work correctly you need to work entirely with data levels: clip attributes must be set to Data Levels, monitor out to Data Levels and exports should be at Data Levels. (The LUT is NOT necessary for direct 1:1 transcoding, as there the assumption is that you want a direct 1:1 copy of the original data, just in a different format.)

You use Resolve's Soft Clip LUT generator (on the Look Up Tables settings page) to create a 1D LUT with a Black Clip of 64 and a White Clip of 1019. This LUT is then applied as a 1D Output LUT. If you are already using an output LUT (1D or 3D) then you can use the Soft Clip LUT generator to make a modified version of that existing LUT, adding the 64 and 1019 clip levels.
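For anyone curious what such a clipping LUT amounts to, here is a minimal sketch that writes a 1D LUT with the same hard black clip at CV64 and white clip at CV1019. It produces a plain .cube file with an assumed entry count of 1024; Resolve's own generator will differ in its exact output:

```python
# Normalised 0.0-1.0 equivalents of the 10-bit clip points
BLACK = 64 / 1023.0
WHITE = 1019 / 1023.0

def write_clip_lut(path, size=1024):
    """Write a 1D .cube LUT that clips below CV64 and above CV1019
    but otherwise passes levels through unchanged (no rescaling)."""
    with open(path, "w") as f:
        f.write("LUT_1D_SIZE %d\n" % size)
        for i in range(size):
            v = i / (size - 1.0)
            v = max(BLACK, min(v, WHITE))  # clip, don't shift the levels
            f.write("%.6f %.6f %.6f\n" % (v, v, v))

write_clip_lut("clip_64_1019.cube")
```

The key point, matching the description above, is that mid-range values pass through 1:1; only values below CV64 and above CV1019 are held at the clip points.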

 So what is it doing?

As you are working at Data Levels your clips and footage will come into Resolve 1:1. So a clip with a range of CV0-1023 will come in as CV0-1023, a CV64-940 clip will come in as CV64-940 and a CV64-1019 clip as CV64-1019. Most video clips from a modern camera will use CV64-1019. A clip using CV64-1019 will be imported and handled as CV64-1019 within the full 0-1023 range, but the levels are not shifted or altered, so if it's CV220 in the original it will be CV220 inside Resolve. One immediate benefit is that Resolve's scopes are now showing the actual original levels of the source clip, as shot. Phew – that's a lot of CVs in that paragraph, I hope you're following along OK.

You grade your footage as normal. The Soft Clip LUT will clip anything below CV64 (0IRE, video black) but allow the full extended video range up to CV1019(1023) to be used. It won't shift the levels, it just won't allow anything to go below CV64. If grading for output, do ensure that you really do want extended range (if you want to stay broadcast safe, use video range).

The output to your HDSDI monitor will be unscaled data, CV0-1019, but because of the LUT clipping at 64 there will be nothing below CV64 – no super blacks. This is how it should be; this is correct and what you want for an extended range workflow, perhaps for passing your footage on to another video editing application for finishing, or where it will be mixed with other full range footage. The majority of grading workflows however will probably be conventional Legal Video Range.

When you render a file using data levels, the file can go from CV0-1019, but again because of the Soft Clip LUT there will be nothing below CV64 (black), while you can still use the full range above CV940, so super whites etc. will be passed through correctly to the rendered file. This way you can make use of the complete extended video range.

 In Summary:

If you want to use Resolve to convert files from one codec to another, without changing your levels you must ensure the Clip Attributes are set to Data, your monitor out must be set to Data Levels and you must Render using Data Levels. If you don’t there is a very high likelihood that your levels will be incorrect or altered, almost certainly different to what you shot.

If you wish to grade and output anything above 100IRE (perhaps when mixing graded footage with full range camera footage) then again you must use data levels throughout the workflow, adding a Soft Clip LUT with CV1019 as the upper clip and CV64 as the lower clip to prevent illegal black levels while retaining the full video range to 109IRE.

It would be so much simpler if Resolve had an extended range video out option.

What causes CA or Purple and Blue fringes in my videos?

Blue and purple fringes around edges in photos and videos are nothing new. It's a problem we have always had; telescopes and binoculars can also suffer. It's normally called chromatic aberration or CA. When we were all shooting in standard definition it wasn't something that created too many issues, but with HD and 4K cameras it's a much bigger problem because, generally speaking, as you increase the resolution of the system (camera + lens), CA becomes much worse.

As light passes through a glass lens the different wavelengths that we see as different colours are refracted, or bent, by different amounts. So the point behind the lens where the light comes into sharp focus will be different for red light than for blue light.

A simple glass lens will bend red, green and blue wavelengths by different amounts, so the focus point will be slightly different for each.

The larger the pixels on your sensor, the less of an issue this will be. Let's say for example that on an SD sensor with big pixels, when the blue light is brought to best focus the red light is out of focus by half a pixel width. All you will see is the very slightest red tint to edges as a small amount of out of focus red spills onto the adjacent pixel. Now consider what happens if you increase the resolution of the sensor. If you go from SD to HD the pixels need to be made much smaller to fit them all onto the same size sensor. HD pixels are around half the size of SD pixels (for the same size sensor). So now that out of focus red light, which was only half the width of an SD pixel, will completely fill the adjacent pixels, and the CA becomes much more noticeable.
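The pixel-size argument above can be put into numbers. The pixel pitches here are purely illustrative (not the specs of any real sensor), chosen to match the "half the size" relationship described above:

```python
sd_pitch_um = 8.0   # assumed SD pixel width in microns (illustrative)
hd_pitch_um = 4.0   # "around half the size" for HD, per the text

# Red light defocused by half an SD pixel width
red_blur_um = 0.5 * sd_pitch_um

print(red_blur_um / sd_pitch_um)  # 0.5 -> just a faint tint spilling onto
                                  #        the adjacent SD pixel
print(red_blur_um / hd_pitch_um)  # 1.0 -> completely fills the adjacent
                                  #        HD pixel, a visible fringe
```

The same physical defocus blur goes from half a pixel to a whole pixel simply because the pixels shrank, which is why the fringing becomes so much more visible at higher resolutions.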

In addition, as you increase the resolution of the lens you need to make the focus of the light “tighter” and less blurred to increase the lens's resolving power. This has the effect of making the difference between the focus points of the red and blue light more distinct: there is less blurring of each colour, so less bleed of one colour into the other, and as a result more CA as the focus point for each wavelength becomes more distinct. When each focus point is more distinct, the difference between the in focus and out of focus light becomes more obvious, so the colour fringing becomes more obvious.

This is why SD lenses very often show less CA than HD lenses; a softer, more blurry SD lens will have less distinct CA. Lens manufacturers use exotic types of glass to try to combat CA. Some types of glass have a negative index, so blue may focus closer than red, while other types have a positive index, so red may focus closer than blue. By mixing positive and negative glass elements within the lens you can cancel out some of the colour shift. But this is very difficult to get right across all focal lengths in a zoom lens, so some CA almost always remains. The exotic glass used in some of the lens elements can be incredibly expensive to produce and is one of the reasons why good lenses don't come cheap.

Rather than trying to eliminate every last bit of CA optically the other approach is to electronically reduce the CA by either shifting the R G B channels in the camera electronically or reducing the saturation around high contrast edges. This is what ALAC or CAC does. It’s easier to get a better result from these systems when the lens is precisely matched to the camera and I think this is why the CA correction on the Sony kit lenses tends to be more effective than that of the 3rd party lenses.

Sony recently released firmware updates for the PMW200 and PMW300 cameras that improves the performance of the electronic CA reduction of these cameras when using the supplied kit lenses.

Sony PMW-F5 and F55 to get ProRes and DNxHD codecs.

In a very welcome announcement today, Sony have stated that, as part of their on-going commitment to making the F5 and F55 cameras as versatile and flexible as possible, and following customer feedback, there will be a hardware upgrade option for both cameras that will allow you to add the popular ProRes and DNxHD codecs to the internal recording options.

This is great news (although I have to say I really like the XAVC codec for acquisition), as it will really help those still using FCP7 and provide the cameras with a codec for just about every possible scenario.

No details of when, or how much, but great news. You can find a few more details here: http://community.sony.com/t5/F5-F55/Announcing-ProRes-and-DNxHD-support-for-both-F5-and-F55/m-p/293988#U293988

Premiere Pro CC now supports Sony Raw – WITHOUT the Sony Plug-In.

I was having “Media Pending” issues with Sony raw footage in Premiere on my Mac. I did some digging and it appears that in the last update to Premiere Pro CC (version 7.2) Adobe included native support for Sony raw at 4K, 2K and HFR. If you are running Premiere CC and still have the Sony Raw plug-in installed, it makes Premiere unresponsive and will result in a lot of “Media Pending” messages when you try to work with raw footage. After removing the Sony raw importer plug-in all my media pending issues went away and I can now use 2K HFR footage in Premiere CC.

To uninstall the plug-in on a Mac (it's called “ImporterSonyRawBundle”), go to “Applications”, then “Adobe Premiere Pro CC”, right click on the “Adobe Premiere Pro CC” app file and select “Show Package Contents”, then open “Contents” and then “Plug-ins”, where you should find the file you need to trash.
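The same steps can be done from the Terminal. A minimal sketch, assuming the default install location, which moves the plug-in to the Desktop rather than deleting it so it can be restored later if needed:

```shell
APP="/Applications/Adobe Premiere Pro CC/Adobe Premiere Pro CC.app"
PLUGIN="$APP/Contents/Plug-ins/ImporterSonyRawBundle"

if [ -e "$PLUGIN" ]; then
    # Move the plug-in aside so Premiere no longer loads it
    mv "$PLUGIN" ~/Desktop/
    echo "Plug-in moved to Desktop - restart Premiere Pro"
else
    echo "Plug-in not found - has it already been removed?"
fi
```

If your Premiere version or install path differs, adjust the `APP` path accordingly before running it.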

Storm Chasing Tour and Workshop, 2014.

Me shooting a tornado with the PMW-F5 and AXS-R5 on my Miller Solo tripod.

So there’s just a little over 3 months to go until my annual storm chasing expedition to the USA. Last year’s trip was an amazing success with some incredible storms and tornadoes captured on video and stills. This year’s trip is 11 days long to maximise the chances of seeing jaw dropping weather.

Please understand that I am not going to be trying to get as close as possible to a tornado or put my life or anyone else’s life in danger. This trip is about capturing dramatic and beautiful images of the Great Plains of the USA along with the severe storms that are common in spring. If you have seen any of my storm videos you will know that the structure of some of the storms I expect to witness is incredible, the lightning shows breathtaking and the scenery impressive. If you are a scenic or landscape photographer or videographer then this is a trip for you.

Dramatic Supercell Thunderstorm

During the trip I will be on hand to share my knowledge and help you improve your photo and video skills. We will have a motion control rig for time-lapse, 4K cameras and all kinds of other cool gadgets to play with.

More details of the trip can be found here: https://www.xdcam-user.com/tornado-chasing/ . Please note that spaces are extremely limited, so it’s first come, first served.

If you really want to make the most of your trip then why not join me a few days early in Austin, Texas where I will be running a music video production workshop at Omega Broadcast on the 22/23 of May, more details of this to follow soon. After the storm chasing tour it’s Cinegear in LA on the 6/7th of June.

The Cape Peninsula shot on an F55 with two lenses.

Here is a short selection of a few clips that I shot around the Cape Peninsula while waiting for my flight home after some workshops. Shot on an F55 with a 20mm and 85mm Sony PL lens. Most of it is 2K raw at 240fps, but there are some normal speed shots in there too, as well as some S&Q motion time-lapse. A big thank you to Charles Maxwell for taking the time out to drive me around the Peninsula.

ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It’s designed to act as a common standard that will work with any camera, so that colourists can use the same grades on any camera with the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of all the necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUT’s to convert your footage to the right output standard. Where you place these LUT’s in your workflow can have a big impact on your ability to grade your footage and the quality of your output. ACES takes care of most of this for you, so you don’t need to worry about making sure you are grading “under the LUT” etc.

ACES works on footage in Scene Referred Linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony’s Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, then there is still an IDT to go from Sony’s variation of scene referenced linear to the ACES variation, but this is a far simpler conversion with fewer losses and less image degradation as a result.

The IDT is a type of LUT that converts from the camera’s own recording space to ACES Linear space. The camera manufacturer has to provide detailed information about the way it records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the camera manufacturer’s colour science or matrix/gamma tables can create one. In theory, after converting to ACES, all cameras should look very similar, and the same grades and effects can be applied to any camera or gamma with the same end result. However variations in colour filters, dynamic range etc. mean that there will still be individual characteristics to each camera, but any such variation is minimised by using ACES.

“Scene Referred” means linear light as per the actual light coming from the scene. No gamma, no colour shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we make them as close as possible, as now the pictures should be a true to life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are “Display Referenced”, where the recordings or output are tailored through the use of gamma curves and looks etc. so that they look nice on a monitor that complies with a particular standard, for example 709. To some degree a display referenced camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These “enhancements” to the image can sometimes make grading harder as you may need to remove them or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony’s SLog2 and SLog3 will behave almost exactly the same. But there will still be differences in the data spread due to the different curves used in the camera and differences in the recording gamut etc. Despite this, the same grade or corrections can be used on any type of gamma/gamut and very, very similar end results achieved. (According to Sony’s white paper, SGamut3 should work better in ACES than SGamut. In general though, the same grades should work more or less the same whether the original is SLog2 or SLog3.)

In an ACES workflow the grade is performed in linear space, so exposure shifts etc. are much easier to do. You can still use LUT’s to apply a common “look” to a project, but you don’t need a LUT within ACES for the grade, as ACES takes care of the output transformation from the linear, scene referenced grading domain to your chosen display referenced output domain. The output process is a two stage conversion. First from ACES linear to the RRT or Reference Rendering Transform. This is a very computationally complex transformation that goes from linear to a “film like” intermediate stage with a very large range, in excess of most final output ranges. The idea is that the RRT is a fixed and well defined standard, and all the complicated maths is done getting to the RRT. From the RRT you then add a LUT called the ODT or Output Device Transform to convert to your final chosen output type: Rec709 for TV, DCI-XYZ for cinema DCP etc. This means you do just one grading pass and then select the type of output look you need for different types of master.
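A quick illustration of why exposure shifts are easier in linear. This is my own toy example using a simple 2.2 gamma as a stand-in for a real display transform, not ACES' actual maths: a one stop exposure increase is just a multiply by two on scene referred linear values, whereas applying the same multiply to already gamma encoded video gives a quite different result.

```python
def gamma_encode(v, g=2.2):
    # Simple display-referred encoding, standing in for a real transform
    return v ** (1.0 / g)

mid_grey = 0.18  # scene referred linear mid grey

# Correct: shift exposure in linear, then encode for display
correct = gamma_encode(2.0 * mid_grey)

# Naive: multiply the already gamma-encoded value instead
naive = 2.0 * gamma_encode(mid_grey)

print(round(correct, 2))  # ~0.63
print(round(naive, 2))    # ~0.92 - far brighter than a true one stop shift
```

The two results differ substantially, which is exactly why a colourist working on gamma encoded material has to fight the curve, while in an ACES linear grade an exposure change really is just a simple gain.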

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.

This all sounds very complicated and complex and to a degree what’s going on under the hood of your software is quite sophisticated. But for the colourist it’s often just as simple as choosing ACES as your grading mode and then just selecting your desired output standard, 709, DCI-P3 etc. The software then applies all the necessary LUT’s and transforms in all the right places so you don’t need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT, you don’t need different LUT’s or Looks for different cameras. I recommend that you give ACES a try.

TV Logic VFM-058W 1920 x 1080 compact monitor.

TV-Logic VFM-058W monitor on set.

I’m a big fan of TV Logic’s monitors. I’ve been using a TV Logic 056W as my primary on camera monitor for some time and it’s been a good, solid and reliable workhorse. One of the great things about it is its weight: it’s extremely light, which makes it very simple to mount on the camera.

TV Logic have now released a new monitor, very similar to the 056W. The new monitor is just a shade smaller at 5.5″ but now features a full 8 bit 1920 x 1080 resolution panel. At first I was sceptical that I would see any benefit from this high resolution on such a small screen, but I have been pleasantly surprised: I can see noticeably more detail in my images thanks to the screen’s higher resolution. This makes focus checking easier, especially when shooting in 4K.

TV Logic’s VFM-058W showing waveform display.

The 058W’s feature set is very similar to the 056W’s. Inputs are 3G HDSDI and HDMI. There is also an HDMI output, very handy for feeding another monitor if you only have a single HDMI or SDI out on the camera. There are the usual waveform and vectorscope displays for exposure control, alongside luma zone (a kind of false colour) and over range error checking. There is coloured peaking (focus assist) to help with critical focus, as well as an always handy zoom mode that allows you to zoom in to the picture for focus checking. There are various underscan/overscan viewing modes along with all the usual aspect ratio markers and safe area overlays.

DSLR shooters are also taken care of, thanks to the monitor’s ability to take the output from a DSLR and scale the image so that it fills the screen.

One feature I really like is the way the monitor’s 3 assignable function buttons work. Instead of having to go into the menus to assign your desired functions to the buttons, all you have to do is press and hold the button. After a couple of seconds a drop down menu appears with all the available options; you then simply scroll to the function you want with the scroll wheel and press the wheel to select it. It’s fast and simple to change the assigned function as your needs change.

As well as providing a sharp and clear image, the monitor’s colours are also nice and accurate. You can even use TV-Logic’s calibration utility and a measuring probe to fully calibrate the display and save the calibration settings as a LUT in the monitor. I’d really like to investigate the monitor’s LUT capabilities, as this could prove very useful when shooting with log.

There is a built in speaker for audio monitoring as well as a 3.5mm headphone jack. If you want you can monitor your audio levels via on screen audio meters.

The 058W is not quite as light as the 056W as it has a tough looking magnesium alloy housing, but it’s still nice and lightweight. It also lacks the analog input that the 056W has, although to be honest I’ve never used that feature on my 056W. The higher resolution screen is very nice, the new button layout (all along the top of the monitor) is a big improvement over the 056W, and overall this monitor feels a little more robust (although my 056W has been all over the world without anything breaking). With 1/4″ threads on all 4 sides, mounting is easy, so the VFM-058W will now replace my 056W as my on camera monitor.

Well done TV-Logic. Another really good monitor.

PMW-300 and PMW-200 Firmware Update. Reduces Chromatic Aberration.

Quite a few PMW-300 users have been having issues with chromatic aberration (CA), typically pink and blue halos around areas of high contrast. CA is caused by the fact that different wavelengths of light are bent and focussed at different points by the lens. So blue light will be out of focus when red is in focus, and vice versa. There are two ways to combat this. The first is the use of combinations of special (and often very expensive) glass that compensate for it; the other is electronic correction performed in the camera. One issue with the optical approach is that the sharper you make the lens the worse the problem becomes: if you bring red into very precise focus, the slight defocus of the blue becomes more noticeable. So as we raise the resolution of the cameras we shoot with, and thus need ever sharper and higher resolution lenses, the problem becomes harder to deal with purely optically. As a result modern video cameras rely more and more on electronic CA reduction, sometimes called ALAC (automatic lens aberration correction).

Sony’s EX cameras include ALAC and it does a really good job of masking the CA. The PMW-200 and PMW-300, even though they both use the same lens as the EX cameras, show a lot more CA. Sony have now addressed this and released firmware updates for both the PMW-200 and PMW-300. It appears that with older firmware versions the ALAC only compensated for horizontal aberrations. The new firmware improves the horizontal correction and adds vertical correction. The difference this update makes is in most cases quite dramatic, almost totally eliminating the CA.

You can download the updates from here: http://support.sonybiz.ca/esupport/Navigation.do?filetype=Firmware


PMW-300 V1.12   http://support.sonybiz.ca/esupport/DownloadView.do?id=10193&eulaId=20001


PMW-200 V1.30 http://support.sonybiz.ca/esupport/DownloadView.do?id=10190&eulaId=20001

Go big but go small! 4K in a compact package.

So, I’m happily shooting lots of 4K with my Sony F5/R5. I really love this camera and get beautiful results time and time again. It’s amazingly versatile thanks to its wide range of recording options and interchangeable lens mount, but… it’s quite a big camera, definitely more tripod/shoulder mount than handheld. For many of the documentary productions I’m involved in, a small handheld camera is required for pick-up shots, or for slinging over your shoulder while racing around on a snow scooter or diving out of a car to shoot a tornado. I start storm chasing in May, so I need to pick something up before then.

Sony PXW-Z100 4K camcorder

I’ve been keeping an eye out for a compact 4K camcorder. At first I started looking at the Sony PXW-Z100 or FDR-AX1. These are both very capable camcorders. They have nice 20x zoom lenses and use either XAVC or XAVC-S. The pictures from the Z100’s that I’ve played with have been very good… provided the light levels are good. These two cameras have very small sensors. There are pros and cons to this. The small sensor size makes it easy to add a good quality 20x zoom lens and gives deep DoF (something desirable for a 4K run and gun camera). But small sensors have small pixels, and this makes them less sensitive and restricts the dynamic range. So the Z100 is still an option and I’m still considering one, but now there are more cameras on the horizon that look very interesting.

Sony AX100 compact 4K camcorder.

The first I spotted was another camera from Sony. The FDR-AX100 should be available in April with a price tag of around $2K. So for a start it’s a lot cheaper than the Z100. It’s also a lot smaller, which is good (for me at least – remember this is a grab and go camera to work alongside my F5/R5) as it will save space and weight when travelling compared to the bulkier Z100. The AX100 has a 12x power zoom matched to a 1″ 20 megapixel sensor. Apparently this is the same sensor as the RX100 II, which produces lovely photos and HD video. The bigger sensor means bigger pixels, so it should be reasonably sensitive. It may even end up more sensitive than the Z100; time will tell, and I’d really like to get one to test and review. Ergonomically this is a handheld video camera, designed for exactly that, with both a flip out LCD screen and a small rear viewfinder. It records using XAVC-S on to SD cards, so the media is cheap and easy to work with, but I’m concerned about the quality of the UHD (3840×2160) video when the bit rate is only 50Mb/s. It should be good, but I want to see it for myself.

What about non Sony options? (I’m not a Sony employee, I’m a freelance DP). Well there are a couple.

Blackmagic 4K camera – I’m looking for something a bit smaller!

There is the new Blackmagic 4K production camera. This is a little more compact than the F5/R5, but not by much. At the new reduced price of $3K it’s a lot more “disposable” than the F5/R5, meaning I would be less worried about chucking it about or hanging it over my shoulder on a camera strap. It has some appealing features, including a global shutter (I wish I could have afforded an F55 with an R5), which would be great for shooting thunderstorms and lightning, as well as raw or ProRes recording, but I would be back to the same lens challenges. No nice lightweight servo zoom here. By the time I’ve added hand grips etc. I would be back to a large and bulky camera, so the BM 4K is not what I’m looking for right now, but it’s an interesting camera all the same.

Panasonic GH4 compact camera that shoots 4K video.

Then there is the Panasonic GH4. This is the dark horse right now. The GH3 shoots great HD video and the GH4, on paper at least, sounds like it will do a good job at 4K. Being a compact (micro 4/3rds, MFT) DSLR type camera means I will still have lens issues – again, no silky smooth, variable speed 20x servo zoom. But thanks to the Metabones MFT to Canon adapter I should be able to use all my Canon lenses, and Panasonic have a number of compact zoom lenses including a 14-140mm and a few power zooms, although most of these are in the f3.5 – f4 range so not very fast. The GH4 records 4K 4096×2160 at 24fps or UHD 3840×2160 at up to 30fps to SDHC cards at 100Mb/s (Long GoP). This should produce good looking pictures, and again SD cards are cheap and readily available. What really appeals to me about the GH4 is that it doesn’t look like a video camera, so you can shoot almost anywhere with it. In addition it is a stills camera, so I don’t need to include an additional stills camera in my shooting kit. It even has a built in time-lapse function. The sensor is “only” 16 megapixels. For video less is more: the lower pixel count will help compensate for the smaller than 35mm sized sensor and should help lessen any aliasing issues (remember, this is a stills camera; the OLPF will be designed for 16MP stills).

So right now I’m still sitting on the fence. It will be really interesting to see the first reviews of the Sony AX100 and GH4. Right now I’m leaning towards the Panasonic GH4 as it ticks many boxes, handy 4K video camera and useful stills camera, but at the end of the day much will depend on the quality of the 4K video from these cameras.