So Sony have just launched the A7S III. And very impressive it is. Amazing low light performance, great dynamic range and lots of nice 10 bit codecs. You can even get a 16 bit raw output if you want. I can’t wait to get one. But I really don’t see the A7S III as a threat to or replacement of my FX9 or any other 4K professional video camera.
All the same discussions took place when the original A7S was launched. Sony F5 owners looked at the A7S and said: how can that little camera shoot full frame 4K when my camera can’t even shoot s35 4K? Why does the A7S have AF when my F3/F5 doesn’t? How can a camera that produces such beautiful images cost only 1/5th of what my F5 costs? But here we are 6 years on, and the A7S and A7S II didn’t replace any of the bigger cameras. When the FS5 was launched people snapped it up, often to replace an A7.
I don’t ever want to go back to carrying and using a big box of different ND filters for different light levels. I find the small LCD screen on the back of a DSLR to be of very limited use, and while the A7S III does have a very good EVF, its placement makes it hard to use on a tripod or in anything other than a simple hand hold with the camera up against your face.
If you want to shoot log then you really want built in LUTs. There are the battery and power questions. How do you power the camera and accessories without needing two or more power systems or a rig to take a big external battery and a bunch of adapters? Then there’s having buttons and switches for all the frequently accessed functions. I could go on, but you only have to look at the many franken-rigs that end up built around DSLR type cameras just to make them usable to see the problems. Almost always the first purchase to go with a DSLR is a cage. Why do you need a cage? Because you know you’re going to have to bolt a ton of stuff to that once small, portable camera to turn it into a professional video making tool.
Sure, I will almost certainly get an A7S III and it will be a great camera to complement my FX9. And yes, there may even be some projects where I only take the A7S III, just as there have been shoots where I have used just my A7S. But it won’t ever replace my FX9; they are two very different tools, each with its own strengths and weaknesses.
The image quality gap between traditional large professional video cameras and handheld stills type cameras will continue to get smaller and smaller as electronics are further miniaturised. That is inevitable, but the camera’s form factor will still be important.
The small cars of today often have all the same bells and whistles as a large luxury car of 10 years ago. Let’s say you’ve gone on vacation (remember those?) and it’s a road trip. You get to the car rental office and you have a choice between a large, spacious, stable, less stressed car or a small car that has to work a bit harder to get you to the same place. Both will get you there, but which do you choose? There might be some instances where the small car is preferable, perhaps if you will be driving a lot of narrow city streets. But for most road trips I suspect most people will opt for the big comfy cruiser most of the time.
For me the A7S III will be that nippy little car, a camera that I can pop in a pocket to grab beautiful images where I can’t use a bigger camera. But for my main workhorse I don’t want fiddly, I don’t want a ton of accessories hanging off it just to make it workable. I want the luxury cruiser that will just take it all in its stride and get on with the job, and right now that’s my FX9.
This came up during a Facebook discussion. Can you use a light meter with the FX9 and will the exposure be correct?
When I first met the FX9 at Sony’s Pinewood Studios facility we tested and checked all sorts of different aspects of the camera’s performance against various light meters and test charts. I found that the camera matched our expectations perfectly.
But just to be sure I have just tested my own example against my trusty Sekonic light meter and once again I am happy to say that everything seems to match as expected.
In this simple setup I used a couple of different charts with middle grey and 90% white – I do find that there is some variation between charts in how reflective the 90% and 18% reflectivity areas are. So I’ve used a couple here and my main reference is the large DSC Labs white and middle grey chart.
I used the dimmers on my lights so that my metered exposure reading for 24fps 1/48th shutter came to exactly f5.6. Then I set the lens to f5.6 (Sony 24-70mm GMaster).
The result is pretty much as close to a perfect exposure as one can expect. So don’t be afraid to use a light meter with the FX9.
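If you want to sanity-check a meter reading against your camera settings yourself, the underlying maths is simple. This is just a sketch of the standard exposure value formula; the f5.6 and 1/48th figures are from the test above, and nothing here is specific to the FX9 or to any particular meter:

```python
import math

def exposure_value(aperture, shutter_s):
    """Exposure value (EV) at ISO 100 for a given f-number and
    shutter time in seconds: EV = log2(N^2 / t)."""
    return math.log2(aperture ** 2 / shutter_s)

# 24fps with a 180-degree shutter gives a 1/48th second exposure.
ev = exposure_value(5.6, 1 / 48)
print(round(ev, 2))  # roughly EV 10.6 at ISO 100

# Stopping down from f5.6 to f8 should add close to 1 EV
# (not exactly 1, because marked f-stops are rounded values).
print(round(exposure_value(8, 1 / 48) - ev, 2))
```

If the meter and the chart agree with the camera to within a fraction of a stop, as mine did, everything is working as it should.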
I’ve been experimenting a bit, trying to reduce the effect of Green/Cyan CA with the FX9. I have discovered a couple of things that can help reduce interactions between the camera’s processing and areas of high contrast that may be exhibiting bright green/cyan fringes.
First thing to note is that a lens with less or no CA will not have the same issue. But as no lens is totally CA free the below settings can help.
These changes are for Custom Mode Only.
1: Turn OFF the aperture correction in the paint menu. Turning off the aperture correction noticeably reduces the camera’s tendency to create black halos in areas of extreme contrast, and it also reduces the enhancement of areas of strong CA. This has a softening/smoothing effect on bright CA, making it much less noticeable. There is very, very little loss of sharpness in the rest of the image and I actually prefer the way the camera looks with this turned off.
2: Use the Multi-Matrix to reduce the brightness of Green/Cyan CA. The most common examples of CA causing an issue are with out of focus, high contrast areas in the background. In this case the CA is normally Green/Cyan. It’s possible to tune the camera’s multi-matrix to reduce the brightness of these green/cyan edges. If you turn ON the multi-matrix and then select CY+ and set this to -30 you will see a very useful reduction in the intensity of the CA. For a stronger reduction, additionally select CY and set this to -15. Changing these settings will have an impact on the reproduction of other cyan objects in your shots, but you should see this in the VF, and in testing various scenes these changes are typically not noticeable. In some cases I am finding I actually like this slightly modified look!
Use both of the above together for the strongest impact. But if you are only going to use one, turn off the aperture correction.
It’s amazing how often people will tell you how easy it is to change the white balance or adjust the ISO of raw footage in post. But can you? Is it really true, and is it somehow different to changing the ISO or white balance of log footage?
Let’s start with ISO. If ISO is sensitivity, or the equivalent of sensitivity, how on earth can you change the sensitivity of the camera once you get into post production? The answer is: you can’t.
But then we have to consider how ISO works on an electronic camera. You can’t change the sensor in a video camera, so in reality you can’t change how sensitive an electronic camera is (I’m ignoring cameras with dual ISO for a moment). All you can do is adjust the gain or amplification applied to the signal from the sensor. You can add gain in post production too. So, when you adjust the exposure or use the ISO slider for your raw footage in post, all you are doing is adjusting how much gain you are adding. But you can do the same with log or any other gamma.
One thing that makes a difference with raw is that the gain is applied in such a way that what you see looks like an actual sensitivity change no matter what gamma you are transforming the raw to. This makes it a little easier to make changes to the final brightness in a pleasing way. But you can do exactly the same thing with log footage. Anything you do in post is altering the recorded file; it can never actually change what you captured.
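To make that concrete, here is a minimal sketch of what an ISO or exposure slider is actually doing to linear signal values, whether in camera or in post. The 6dB-per-stop relationship is standard video practice; the values and clipping point are illustrative:

```python
def apply_gain_db(values, gain_db):
    """Multiply linear signal values by a gain expressed in dB.
    In video, +6 dB doubles the signal, i.e. one stop."""
    g = 10 ** (gain_db / 20)
    # Anything already at the maximum stays clipped - adding gain in
    # post cannot recover detail that was never recorded.
    return [min(v * g, 1.0) for v in values]

# A mid-tone roughly doubles; a clipped highlight stays clipped.
print(apply_gain_db([0.25, 1.0], 6.0))
```

The clipped value in the example is the whole point: gain, wherever it is applied, only rescales what was recorded.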
Changing the white balance in post: White Balance is no different to ISO, you can’t change in post what the camera captured. All you can do is modify it through the addition or subtraction of gain.
Think about it. A sensor must have a certain response to light and the colours it sees depending on the material it’s made from and the colour filters used. There has to be a natural fixed white balance or a colour temperature that it works best at.
The silicon that video sensors are made from is almost always more sensitive at the red end of the spectrum than the blue end. As a result, almost all sensors tend to produce the best results with light that has a lot of blue (to make up for the lack of blue sensitivity) and not too much red. So most cameras naturally perform best in daylight, and most sensors are considered daylight balanced.
If a camera produces a great image under daylight how can you possibly get a great image under tungsten light without adjusting something? Somehow you need to adjust the gain of the red and blue channels.
Do it in camera and what you record is optimised for your choice of colour temperature at the time of shooting. But you can always undo or change this in post by subtracting or adding to whatever was added in the camera.
If the camera does not move away from its native response, then you will be recording at the camera’s native white balance, and if you want anything other than that native response you will have to create it in post by adding or subtracting gain in the R & B channels.
Either way, what you record has a nominal white balance, and anything you do in post skews what you have recorded using gain. There is no such thing as a camera with no native white balance; all cameras favour one particular colour temperature. So when a manufacturer claims that the white balance isn’t baked in, what they really mean is that they don’t make any adjustments to the recorded signal. If you want the very best image quality, the best method is to adjust at the time of recording. This is why a lot of camera manufacturers skew the gain of the sensor’s red and blue channels in the camera when shooting raw, as this optimises what you are recording. You can then skew it again in post should you want a different balance.
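As a rough sketch of what that red/blue skew looks like in practice (the gain figures here are purely illustrative, not any real camera’s values):

```python
def apply_white_balance(rgb, r_gain, b_gain):
    """Skew the red and blue channels relative to green.
    Warming a daylight-balanced image for tungsten light means more
    red gain and less blue; a correction in post simply applies a
    further skew on top of whatever was recorded in camera."""
    r, g, b = rgb
    return (r * r_gain, g, b * b_gain)

# A neutral grey warmed with illustrative gains.
neutral = (0.5, 0.5, 0.5)
print(apply_white_balance(neutral, 1.2, 0.8))
```

Whether this multiplication happens in the camera or on a slider in a grading suite, it is the same operation: gain on the red and blue channels.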
With either method if you want to change the white balance from what was captured you are altering the gain of the red and blue channels. Raw doesn’t magically not have a white balance, so shooting with the wrong white balance and correcting it in post is not something you want to do. Often you can’t correct badly balanced raw any better than you can correct incorrectly balanced log.
How far you can adjust or correct raw depends on how it’s been compressed (or not), the bit depth, whether it’s log or linear and how noisy it is. Just like a log recording really, it all depends on the quality of the recording.
The big benefit raw can have is that the amount of data that needs to be recorded is considerably reduced compared to conventional component or RGB video recordings. As a result it’s often possible to record using a greater bit depth or with much less compression. It is the greater bit depth or reduced compression that really makes a difference. 16 bit data can have up to 65,536 luma gradations; compare that to the 4096 of 12 bit or the 1024 of 10 bit and you can see how a 16 bit recording can hold so much more information than a 10 bit one. And that makes a difference. But 10 bit log v 10 bit raw? Well, it depends on the compression, but well compressed 10 bit log will likely outperform 10 bit raw, as the all-important colour processing will have been done in the camera at a much higher bit depth than 10 bit.
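The gradation figures above come straight from powers of two; a one-liner makes the scaling obvious:

```python
def levels(bit_depth):
    """Number of distinct code values a given bit depth can represent."""
    return 2 ** bit_depth

# Each extra bit doubles the number of gradations available,
# so 16 bit has 64x the gradations of 10 bit.
for bits in (10, 12, 16):
    print(bits, levels(bits))
```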
As there is no Glastonbury Festival this year the organisers and production company have been releasing some videos from last year. This video was shot mostly with Venice using Cooke 1.8x anamorphics. The non Venice material is from an FS5. It’s a behind the scenes look at the activities and performances around the Glastonbury Big Top and the Theatre and Circus fields.
I see it so many times on various forums and user groups – “I didn’t see it until I looked at it at home and now I find the footage is unusable”.
We all want our footage to be perfect all of the time, but sometimes there might be something that trips up the technology that we are using, and that can introduce problems into a shot. The problem is that these things are not normal, so we don’t expect them to be there and we don’t necessarily look for them. But I also think a lot of it is because very often the only thing being used to view what is being shot is a tiny LCD screen.
For the first 15 years of my career the only viewfinders available were either a monocular viewfinder with a magnifier or a large studio style viewfinder (typically 7″). Frankly if all you are using is a 3.5″ LCD screen, then you will miss many things!
I see many forum posts about these missed image issues on my phone, which has a 6″ screen. When I view the small versions of the posted examples of the issue I can rarely see it. But view it full screen and it becomes obvious. So what hope do you have of picking up these issues on location with a tiny monitor screen, often viewed too closely to be in good focus?
A 20 year old will typically have a focus range of around 12 diopters, but by the time you get to 30 that decreases to about 8, by 40 to 5, and by 50 to just 1 or 2. What that means (for the average person) is that if you are young enough you might be able to focus sufficiently on that small LCD when it’s close enough to your eyes for you to be able to see it properly and be able to see potential problems. But by the time you get to 30 most people won’t be able to adequately focus on a 3.5″ LCD until it’s too far from their eyes to resolve everything it is capable of showing. If you are hand holding a camera with a 3.5″ screen such that the screen is 30cm or more from your eyes there is no way you can see critical focus or small image artefacts; the screen is just too small. Plus, most people that don’t have their eyesight tested regularly don’t even realise it is deteriorating until it gets really bad.
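For a rough sense of what those accommodation figures mean in practice: for an eye with normal distance vision, the closest distance it can focus at is approximately the reciprocal of the accommodation amplitude in diopters. This is a simplified optics rule of thumb, not a clinical model:

```python
def near_point_cm(accommodation_diopters):
    """Approximate near point (closest focus distance) in centimetres
    for an eye whose far point is at infinity: distance ~ 1 / amplitude."""
    return 100.0 / accommodation_diopters

# ~12 diopters at 20 years old, ~5 at 40, ~2 at 50.
for d in (12, 5, 2):
    print(d, round(near_point_cm(d), 1))
```

So a 20 year old can focus down to roughly 8cm, while a 50 year old may not be able to focus closer than about 50cm, which is exactly why that 3.5″ screen becomes useless for critical judgement as we age.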
There are very good reasons why viewfinders have diopters/magnifiers: they allow you to see everything your screen can show, they make the image appear larger, and they keep out unwanted light. When you stop using them you risk missing things that can ruin a shot, whether that’s focus that’s almost but not quite right, something in the background that shouldn’t be there, or some subtle technical issue.
It’s all too easy to remove the magnifier and just shoot with the LCD, trusting that the camera will do what you hope it will. Often it’s the easiest way to shoot, we’ve all been there I’m sure. BUT easy doesn’t mean best. When you remove the magnifier you are choosing easy shooting over the ability to see issues in your footage before it’s too late to do something about them.
The Sony PXW-Z90 is a real gem of a camcorder. It’s very small yet packs a 1″ sensor, has real built in ND filters, broadcast codecs and produces a great image. On top of all that it can also stream live directly to Facebook and other similar platforms. In this video I show you how to set up the Z90 to stream live to YouTube. Facebook is similar. The NX80 from Sony is very similar and can also live stream in the same way.
In case you missed the live stream I have uploaded the recording I made of my almost hour long video with hints, tips and ideas for rigging the PXW-FX9. In the video I cover things like base plates including VCT and Euro Plate. I look at hand grip options, rod rails and matte boxes as well as power options including V-mount adapters and the XDAC-FX9. Of course everything in the video is based on my own personal needs and requirements but I think there is some good information in there for anyone looking to accessorize their FX9, whether for working from a tripod or handheld.
Sony have released the PXW-FX9 user guide that I wrote for them. The guide is in the form of a searchable PDF designed for reading on a mobile device, the idea being that you can keep it on your phone in case you need to reference it on a shoot. It’s not meant to replace the manual but to complement it and answer questions such as: what is S-Cinetone?
To download the guide go to the main Sony PXW-FX9 landing page and scroll down towards the bottom. There you should find a link that will take you to the guide download page as well as other resources for the FX9.
Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continue to push Blackmagic Raw that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.