I’ve been experimenting a bit, trying to reduce the effect of green/cyan CA with the FX9. I have discovered a couple of things that can help reduce interactions between the camera’s processing and areas of high contrast that may be exhibiting bright green/cyan fringes.
The first thing to note is that a lens with less CA, or none at all, will not have the same issue. But as no lens is totally CA-free, the settings below can help.
These changes are for Custom Mode Only.
1: Turn OFF the aperture correction in the paint menu. Turning off the aperture correction noticeably reduces the camera’s tendency to create black halos in areas of extreme contrast, and it also reduces the enhancement of areas of strong CA. This has a softening/smoothing effect on bright CA, making it much less noticeable. There is very, very little loss of sharpness in the rest of the image and I actually prefer the way the camera looks with this turned off.
2: Use the Multi-Matrix to reduce the brightness of Green/Cyan CA. The most common examples of CA causing an issue are with out-of-focus, high-contrast areas in the background. In this case the CA is normally green/cyan. It’s possible to tune the camera’s multi-matrix to reduce the brightness of these green/cyan edges. If you turn ON the multi-matrix and then select CY+ and set this to -30 you will see a very useful reduction in the intensity of the CA. For a stronger reduction, in addition select CY and set this to -15. Changing these settings will have an impact on the reproduction of other cyan objects in your shots, but you should see this in the VF, and in testing various scenes these changes are typically not noticeable. In some cases I am finding I actually like this slightly modified look!
Use both of the above together for the strongest impact. But if you are only going to use one, turn off the aperture correction.
It’s amazing how often people will tell you how easy it is to change the white balance or adjust the ISO of raw footage in post. But can you? Is it really true, and is it somehow different from changing the ISO or white balance of log footage?
Let’s start with ISO. If ISO is sensitivity, or the equivalent of sensitivity, how on earth can you change the sensitivity of the camera once you get into post production? The answer is: you can’t.
But then we have to consider how ISO works on an electronic camera. You can’t change the sensor in a video camera, so in reality you can’t change how sensitive an electronic camera is (I’m ignoring cameras with dual ISO for a moment). All you can do is adjust the gain or amplification applied to the signal from the sensor. You can add gain in post production too. So, when you adjust the exposure or use the ISO slider for your raw footage in post, all you are doing is adjusting how much gain you are adding. But you can do the same with log or any other gamma.
One thing that makes a difference with raw is that the gain is applied in such a way that what you see looks like an actual sensitivity change, no matter what gamma you are transforming the raw to. This makes it a little easier to change the final brightness in a pleasing way. But you can do exactly the same thing with log footage. Anything you do in post alters the recorded file; it can never change what you actually captured.
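To illustrate the point that a post-production "ISO change" is really just gain, here is a minimal sketch. The function name and pixel values are hypothetical, and it assumes linear-light values normalised to the 0.0–1.0 range:

```python
# A sketch of "changing ISO" in post: it is nothing more than applying gain.
# Assumes linear-light pixel values normalised to 0.0-1.0; names are illustrative.

def apply_exposure_stops(pixels, stops):
    """Brighten or darken linear pixel data by a number of stops.
    One stop is a doubling (or halving) of the signal, i.e. gain = 2**stops."""
    gain = 2 ** stops
    # Clip to the legal range. Anything pushed past 1.0 is lost, just as
    # real highlights clip - post gain cannot recover what the sensor
    # never captured.
    return [min(value * gain, 1.0) for value in pixels]

frame = [0.10, 0.25, 0.60]
print(apply_exposure_stops(frame, 1))   # +1 stop: [0.2, 0.5, 1.0]
```

Note how the brightest value clips at 1.0: adding gain in post only rescales what was recorded, it never adds information.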
Changing the white balance in post: white balance is no different from ISO; you can’t change in post what the camera captured. All you can do is modify it through the addition or subtraction of gain.
Think about it. A sensor must have a certain response to light and the colours it sees, depending on the material it’s made from and the colour filters used. There has to be a natural, fixed white balance or a colour temperature at which it works best.
The silicon that video sensors are made from is almost always more sensitive at the red end of the spectrum than the blue end. As a result, almost all sensors tend to produce the best results with light that has a lot of blue (to make up for the lack of blue sensitivity) and not too much red. So most cameras naturally perform best in daylight, and most sensors are considered daylight balanced.
If a camera produces a great image under daylight how can you possibly get a great image under tungsten light without adjusting something? Somehow you need to adjust the gain of the red and blue channels.
Do it in camera and what you record is optimised for your choice of colour temperature at the time of shooting. But you can always undo or change this in post by subtracting from, or adding to, whatever gain was applied in the camera.
If the camera does not move away from its native response, then you will be recording at the camera’s native white balance, and if you want anything other than that you will have to do it in post, adding or subtracting gain in the R & B channels to alter the colour temperature.
Either way, what you record has a nominal white balance, and anything you do in post is skewing what you have recorded using gain. There is no such thing as a camera with no native white balance; all cameras will favour one particular colour temperature. So even if a manufacturer claims that the white balance isn’t baked in, what they mean is that they don’t offer the ability to make any adjustments to the recorded signal. If you want the very best image quality, the best method is to adjust at the time of recording. As a result, a lot of camera manufacturers will skew the gain of the red and blue channels of the sensor in the camera when shooting raw, as this optimises what you are recording. You can then skew it again in post should you want a different balance.
With either method if you want to change the white balance from what was captured you are altering the gain of the red and blue channels. Raw doesn’t magically not have a white balance, so shooting with the wrong white balance and correcting it in post is not something you want to do. Often you can’t correct badly balanced raw any better than you can correct incorrectly balanced log.
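The idea that a white balance change is just red and blue channel gain can be sketched in a few lines. The gain values here are purely illustrative, not taken from any real camera’s processing:

```python
# A sketch of white balance correction as per-channel gain.
# Gain values are illustrative only, not any real camera's matrix.

def white_balance(rgb, red_gain, blue_gain):
    """Skew the recorded balance by scaling the red and blue channels.
    Green is left untouched as the reference channel."""
    r, g, b = rgb
    # Clip at 1.0, as with any post gain: pushed channels can clip.
    return (min(r * red_gain, 1.0), g, min(b * blue_gain, 1.0))

# A warm (tungsten-lit) grey pixel recorded at a daylight balance:
warm_pixel = (0.8, 0.5, 0.25)
# Pull red down and push blue up to move it back towards neutral:
corrected = white_balance(warm_pixel, red_gain=0.625, blue_gain=2.0)
print(corrected)  # (0.5, 0.5, 0.5)
```

Notice that the blue channel had to be doubled: that is adding gain to a weakly recorded channel, which amplifies its noise too. This is why badly balanced material, raw or log, never corrects as cleanly as material balanced correctly at the time of shooting.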
How far you can adjust or correct raw depends on how it’s been compressed (or not), the bit depth, whether it’s log or linear and how noisy it is. Just like a log recording really, it all depends on the quality of the recording.
The big benefit raw can have is that the amount of data that needs to be recorded is considerably reduced compared to conventional component or RGB video recordings. As a result it’s often possible to record with a greater bit depth or with much less compression. It is the greater bit depth or reduced compression that really makes a difference. 16 bit data can have up to 65,536 luma gradations; compare that to the 4,096 of 12 bit or the 1,024 of 10 bit and you can see how a 16 bit recording can hold so much more information than a 10 bit one. And that makes a difference. But 10 bit log v 10 bit raw? Well, it depends on the compression, but well-compressed 10 bit log will likely outperform 10 bit raw, as the all-important colour processing will have been done in the camera at a much higher bit depth than 10 bit.
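Those gradation counts are simply powers of two, so the gap between bit depths is easy to sketch:

```python
# The number of distinct code values a recording can hold is 2**bits.

def code_values(bits):
    return 2 ** bits

for bits in (10, 12, 16):
    print(f"{bits} bit: {code_values(bits)} levels")
# 10 bit: 1024 levels
# 12 bit: 4096 levels
# 16 bit: 65536 levels

# 16 bit has 64x the gradations of 10 bit:
print(code_values(16) // code_values(10))  # 64
```

Every extra bit doubles the number of levels, which is why the jump from 10 bit to 16 bit is so much larger than the numbers first suggest.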
As there is no Glastonbury Festival this year the organisers and production company have been releasing some videos from last year. This video was shot mostly with Venice using Cooke 1.8x anamorphics. The non-Venice material is from an FS5. It’s a behind the scenes look at the activities and performances around the Glastonbury Big Top and the Theatre and Circus fields.
I see it so many times on various forums and user groups – “I didn’t see it until I looked at it at home and now I find the footage is unusable”.
We all want our footage to be perfect all of the time, but sometimes there might be something that trips up the technology we are using, and that can introduce problems into a shot. The problem is that these things are not normal; we don’t expect them to be there, so we don’t necessarily look for them. But I also think a lot of it is because very often the only thing being used to view what is being shot is a tiny LCD screen.
For the first 15 years of my career the only viewfinders available were either a monocular viewfinder with a magnifier or a large studio-style viewfinder (typically 7″). Frankly, if all you are using is a 3.5″ LCD screen, then you will miss many things!
I see many forum posts about these missed image issues on my phone, which has a 6″ screen. When I view the small versions of the posted examples of the issue I can rarely see it. But view it full screen and it becomes obvious. So what hope do you have of picking up these issues on location with a tiny monitor screen, often viewed too close to be in good focus?
A 20 year old will typically have a focus range of around 12 diopters, but by the time you get to 30 that decreases to about 8, by 40 to 5, and by 50 to just 1 or 2. What that means (for the average person) is that if you are young enough you might be able to focus on that small LCD when it’s close enough to your eyes for you to see it properly and be able to spot potential problems. But by the time you get to 30, most people won’t be able to adequately focus on a 3.5″ LCD until it’s too far from their eyes to resolve everything it is capable of showing. If you are hand holding a camera with a 3.5″ screen such that the screen is 30cm or more from your eyes, there is no way you can see critical focus or small image artefacts; the screen is just too small. Plus, most people who don’t have their eyesight tested regularly don’t even realise it is deteriorating until it gets really bad.
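Those diopter figures translate directly into the closest distance an eye can focus: as a rough rule, the near point is 1 divided by the accommodation range (assuming distance vision is otherwise corrected). A quick sketch using the approximate numbers above:

```python
# Near point (closest comfortable focus) is roughly 1 / accommodation
# range in diopters, assuming distance vision is corrected to infinity.
# Ages and diopter figures are the rough averages quoted above.

def near_point_cm(diopters):
    """Closest focus distance in centimetres for a given accommodation range."""
    return 100 / diopters

for age, diopters in [(20, 12), (30, 8), (40, 5), (50, 1.5)]:
    print(f"Age {age}: about {near_point_cm(diopters):.1f} cm closest focus")
```

By age 50 the near point is well beyond 50cm, so a 3.5″ screen held at a sharply focusable distance simply subtends too small an angle to show fine detail.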
There are very good reasons why viewfinders have diopters/magnifiers. They allow you to see everything your screen can show, they make the image appear larger, and they keep out unwanted light. When you stop using them you risk missing things that can ruin a shot, whether that’s focus that’s almost but not quite right, something in the background that shouldn’t be there, or some subtle technical issue.
It’s all too easy to remove the magnifier and just shoot with the LCD, trusting that the camera will do what you hope it will. Often it’s the easiest way to shoot; we’ve all been there, I’m sure. BUT easy doesn’t mean best. When you remove the magnifier you are choosing easy shooting over the ability to see issues in your footage before it’s too late to do something about them.
The Sony PXW-Z90 is a real gem of a camcorder. It’s very small yet packs a 1″ sensor, has real built-in ND filters and broadcast codecs, and produces a great image. On top of all that it can also stream live directly to Facebook and other similar platforms. In this video I show you how to set up the Z90 to stream live to YouTube. Facebook is similar. The NX80 from Sony is very similar and can also live stream in the same way.
In case you missed the live stream I have uploaded the recording I made of my almost hour long video with hints, tips and ideas for rigging the PXW-FX9. In the video I cover things like base plates including VCT and Euro Plate. I look at hand grip options, rod rails and matte boxes as well as power options including V-mount adapters and the XDAC-FX9. Of course everything in the video is based on my own personal needs and requirements but I think there is some good information in there for anyone looking to accessorize their FX9, whether for working from a tripod or handheld.
Sony have released the PXW-FX9 user guide that I wrote for them. The guide is in the form of a searchable PDF designed for reading on a mobile device, the idea being that you can keep it on your phone in case you need to reference it on a shoot. It’s not meant to replace the manual but to complement it and answer questions such as – what is S-Cinetone?
To download the guide go to the main Sony PXW-FX9 landing page and scroll down towards the bottom. There you should find a link that will take you to the guide download page as well as other resources for the FX9.
Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continue to push Blackmagic Raw that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.
I received this timely reminder from the guys at PAG Batteries, and it contains important information even if you don’t have one of PAG’s excellent batteries. The main point is that you should not store lithium batteries fully charged.
If you are currently unable to work as a result of the global pandemic, then you need to make sure that your Li-Ion camera batteries are in good health when it becomes possible to return to work.
Batteries naturally self-discharge over time. If their state-of-charge is less than 10% before an extended period of inactivity, they could become difficult for you to recover.
It is also undesirable for batteries to be 100% charged for storage as this can damage the cells and lead to a shorter overall life.
PAG recommends that you charge your Li-Ion batteries to 50% (anywhere between 20% and 80% is desirable) prior to long term storage of more than 2 weeks. PAGlink batteries should also be in an unlinked state during this period.
PAGlink Sleep Mode for Storage
PAGlink batteries can be put into Sleep Mode for long-term storage, using the battery display menu system. It shuts down the internal electronics and greatly reduces battery self-discharge. The battery can be woken up with 2 presses of the display button.
Please refer to Section 6 of the User Guide for PAGlink Batteries via the links below: