These are mainly stability releases that fix some minor bugs, but if your FX3 is still on the original version 1 firmware then this update also adds the CineEI mode and LUTs, making it a major update that is well worth having.
Before attempting to update the camera you should insert a fully charged battery.
The FX3 is updated via a computer application. While there is a Mac application, it is a complete pain in the rear end to get it to work, and I would urge you to find a Windows PC to do the update; it is far simpler and far more likely to be successful. The good news is that once you have updated to version 2.02, future updates can be done by loading the update file onto an SD card and initiating the update from the camera, like the FX30.
The FX30 is updated by placing the downloaded BODYDATA.DAT file onto an SD card that was previously formatted in the camera. Then place the card in the camera and go down to the SETUP – SETUP OPTION – VERSION page of the menu. Here you should see the camera’s current firmware version plus a “SOFTWARE UPDATE” button. Press (select) the software update button.
On the next page it will say “Update ?” and show the old firmware version and the new firmware version. Then just below this is a box where it says “Please follow these precautions until the very end”.
What isn’t clear at this step is that you need to scroll down inside that box and read the full list of precautions before the camera will allow you to do the update.
Scroll down until you get to “This update may take several minutes, device automatically reboots when complete”. If you don’t scroll down and just press the “Execute” button you get a large popup telling you to “Follow the precautions to the very end”, and pressing “OK” simply takes you back to the previous page. So do make sure you scroll down through the full list of precautions before you press execute. Once the update starts the screen will go blank; the only clue that the update is happening will be the slow flash of the media LED on the back of the camera. The update takes about 10 minutes to complete and the camera will reboot when it’s done.
Many people wish to bake in the camera’s Exposure Index settings when shooting using CineEI in order to avoid having to make an exposure correction in post production (given that the cameras are ISO invariant when shooting Log, in reality it makes very little difference whether you add gain in the camera or in post production – gain is gain). On cameras such as the FS7, FX6 or FX9 one way to do this is by baking in the built in S-Log3 LUT. To avoid confusion – that is using the CineEI mode with the “S-Log3” LUT enabled and, in the LUT settings, “Internal recording” set to ON so that you are recording the “S-Log3” LUT.
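To see why gain is gain, it helps to put numbers on the offset between a chosen EI and the base ISO. Here is a minimal sketch of that arithmetic, assuming a base ISO of 800 (the S-Log3 low base on the FX6/FX9; other bodies and the high base differ):

```python
from math import log2

BASE_ISO = 800  # assumed S-Log3 base ISO; other cameras/bases differ

def ei_offset_stops(ei, base_iso=BASE_ISO):
    """Exposure offset of the chosen EI relative to the base ISO.
    A negative value means you expose that many stops brighter than
    base, and the matching correction (whether a baked-in LUT in
    camera or gain in post) darkens by the same amount."""
    return log2(ei / base_iso)

for ei in (200, 400, 800, 1600, 3200):
    stops = ei_offset_stops(ei)
    print(f"EI {ei:>4}: {stops:+.0f} stops, correction gain x{2 ** stops:.2f}")
```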
While this will bake in the EI change, this technique comes with many issues. For a start, just as when you use S-Log3 in custom mode and alter the ISO, whenever you move away from the camera’s base ISO you lose dynamic range. When you bake in a LUT and change the EI you are in effect changing the ISO, and there will be a corresponding loss of dynamic range. When you bake in a LUT this loss of dynamic range is exacerbated by a reduced or altered recording range.
At lower EIs the available recording range shrinks as the LUT is made darker, and at the same time the upper recording level of the LUT is reduced. At 200 EI the recording range only gets to around 78%. At the bottom end the shadows are crushed and shadow information is lost by the range reduction. This then causes a post production issue: LUTs designed for the normal S-Log3 input range of 0-94% will now be applied to recordings with a much reduced range, so after application of a LUT in post the final output won’t get to 100% without further complex grading where the image will need to be stretched more than normal, and this degrades quality.
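To make that post production fix concrete, here is a minimal sketch of the kind of pre-LUT stretch a 200 EI baked-in recording needs, using the 78% ceiling and 0-94% LUT input range mentioned above. The ~3.5% S-Log3 black level is an assumption, and a real grade would do this non-destructively with curves or gain controls:

```python
SLOG3_BLACK = 0.035   # approximate S-Log3 black level (an assumption)
RECORDED_MAX = 0.78   # ceiling of a 200 EI baked-in recording (from the text)
EXPECTED_MAX = 0.94   # input range a normal S-Log3 LUT expects (from the text)

def prestretch(code_value):
    """Linearly remap [black, 78%] to [black, 94%] so a standard
    S-Log3 LUT sees the input range it was designed for."""
    scale = (EXPECTED_MAX - SLOG3_BLACK) / (RECORDED_MAX - SLOG3_BLACK)
    return SLOG3_BLACK + (code_value - SLOG3_BLACK) * scale

print(prestretch(0.78))   # -> 0.94: the peak is restored...
print(prestretch(0.41))   # -> ~0.49: ...but mid tones shift too, hence the regrade
```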
At high EIs the LUT becomes brighter but the clip point remains the same. So for each stop you go up, 1 stop of highlights simply disappears beyond the LUT’s hard clip point and can never be recorded. Again this can cause issues in post, because when you apply a normal S-Log3 LUT the heavy clipping in the recording causes the highlights to look very heavily clipped (because they are). Again, for the best results you will need to grade your footage to allow for this.
So, in practice the idea of baking in the S-Log3 LUT to eliminate the need for any post production corrections doesn’t work, because the addition of the S-Log3 LUT introduces new limitations that will need to be corrected if you want good looking images. Plus, adding the S-Log3 LUT in camera and then adding another LUT on top in post is never going to deliver the best results due to the way LUTs divide the image into brightness zones.
And – if you are baking in the S-Log3 LUT, then really this is no longer EI, as there is no longer an offset between the exposure and the recording; you are simply recording at a higher or lower ISO.
To be honest, not many of us have to charge extremely cold batteries, but on my Northern Lights trips my batteries often get very cold and often I will charge them while they are still very cold. I’ve always known that this isn’t healthy for the battery, but what I didn’t know was that it can actually be quite dangerous.
It turns out that if you charge a lithium battery that is very cold (below 0°C/32°F) it will appear to charge more or less as expected. But when the cells are very cold, metallic lithium gets deposited on the anode of the cell. If you repeatedly charge the battery at low temperatures the lithium will continue to build up on the anode, creating a safety risk. This metallic lithium can cause the cell to become less stable and more prone to bursting into flames if the battery is overcharged, gets hot, or is shocked such as through being dropped or crushed.
So – all you Aurora chasers and others that shoot in very cold conditions – let your batteries warm up before you charge them. It won’t be obvious that charging them is causing harm and the last thing you want is a battery suddenly bursting into flames on a job some time down the road because you’ve dropped or bumped it.
Sony’s Cinema Line cameras all have the ability to record metadata from inertial and gyroscope sensors about the way the camera moves while shooting. This metadata can then be used to stabilise your footage in post production. The stabilisation this provides is normally very good and tends to look a lot more natural than post production stabilisation that analyses the footage and tries to hold it steady. However, until recently the only way to make use of this metadata was via Sony’s Catalyst Browse software.
Now, however, an Open Source project known as GyroFlow has made it possible to use the Sony metadata in FCP-X and DaVinci Resolve via an OpenFX plugin and an FCP-X plug-in. In addition there is a standalone GyroFlow application that can stabilise the footage and then export a stabilised version of the clip.
GyroFlow is a collaborative Open Source project with different developers working on different aspects and plug-ins, so it is a bit more disjointed than a lot of commercial products. But it is free and it will get better, so why not give it a try. The main website for the project is here: http://gyroflow.xyz/
I’ve noticed some users concerned or confused by the R and B gain values that they see in the camera’s white balance settings after dialling in a custom white balance and tint, or after taking a white balance from a white card. The R and B gain values indicate the offset being applied to the Red and Blue channels relative to the Green channel, and in fact they are perfectly normal.
Typically the concern occurs when someone has used a white card to set their white balance and these seemingly random numbers then appear against the Red and Blue gain. But they are not random; they are expected, normal, and not normally something to ever worry about.
The FX6 and FX9 are set up such that the indicated Red and Blue gains will only ever both be 0 when the white balance of the camera is at exactly 3200K. At any other white balance there will be an offset to the R and B gain – and that is completely normal. It is these offsets that balance the Red and Blue levels so that the white balance appears correct. At a lower colour temperature you will see a positive blue value and a negative red value. Above 3200K there will be a positive Red value and a negative Blue value. A positive tint value will make both the Red and Blue more positive and a negative tint value will make both the Red and Blue values more negative.
All of this is perfectly normal and perfectly expected. If you have taken a white balance off a white card and then dial in a preset value, you might find that you can’t get the last 2 digits back to zero.
For example, after white balancing off a card you have 3653K but you then try to dial in 3200K, and the closest you can get is 3193K or 3213K. This is because the smallest step the colour temperature changes in is 20K (on the FX6, above 5640K the steps gradually get larger and larger). But this really isn’t something to worry about: 3193K and 3213K are both so close to 3200K that either will do, and calibration and temperature differences mean that the actual variation between different cameras, or between the camera and a colour meter, will be greater than this error anyway. No two cameras will ever be truly identical, and differences between lenses will add to this normal variation. There is no need to worry about the last 2 digits not being zeros.
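The arithmetic behind those numbers is simple enough to sketch. Assuming the dial moves in fixed 20K steps from the value you white balanced to, only colour temperatures that differ from 3653K by a whole number of steps are reachable:

```python
measured = 3653   # white balance taken from the card
target = 3200     # preset you try to dial in
step = 20         # smallest dial increment (per the text)

# target is only reachable if it differs from measured by a multiple of step
remainder = (measured - target) % step   # 453 % 20 = 13, so 3200 is unreachable
below = target - (step - remainder)      # 3193
above = target + remainder               # 3213
print(below, above)                      # -> 3193 3213
```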
At the end of the day, these tiny differences are nothing to be concerned about. But if you do want to return the last digits to zero you can do this by dialling the white balance all the way down to 2000K.
I’ve just returned from the arctic cabins that I use for my Northern Lights Aurora tours, following a great trip where the group got to see the Aurora on 3 nights. In this video there is footage from two nights, the 13th and 14th of January.
I recommend watching the video direct on YouTube and on a nice big screen in 4K if you can.
Most of it is real time video, not the time-lapse that is so often used to shoot the Aurora. The Sony FX3 (like the A7S3) is sensitive enough to video a bright Aurora with a fast lens without needing to use time lapse. On the FX3 I used a Sony 24mm f1.4 GM lens; this is a great lens for astro photography as stars are very sharp even in the corners of the frame. The Aurora isn’t something that is ever dazzlingly bright, so you do need to use a long shutter opening, and often I am shooting with a 1/15th or 1/12th shutter. I have been using the CineEI mode at 12,800 ISO and also using the S-Log3 flexible ISO mode to shoot at 25600 ISO. Adding gain while shooting S-Log3 isn’t something I would normally do, but in this particular case it works well as the Aurora will never exceed the dynamic range of the camera, although the footage does need extensive noise reduction in post production (I use the NR tools built into DaVinci Resolve).
I also shot time lapse with my FX30 using a DJI RS2 gimbal. On the FX30 I had a Sigma 20mm f1.4 with a Metabones Speed Booster. I shot using S&Q motion at 8 frames per second; this gives only a slight speed up and more natural motion than time lapse shot at longer intervals. Shooting at 8 frames per second lets me use a 1/4 of a second shutter, and this combined with the FX30’s high base ISO of 2500 (for S-Log3) produces a good result even with quite dim Auroras.
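For a sense of how gentle that speed up is, here is the arithmetic, assuming playback in a 25p timeline (a 24p project gives a slightly different factor):

```python
capture_fps = 8     # S&Q capture rate (from the text)
playback_fps = 25   # assumed playback timeline

speed_up = playback_fps / capture_fps
print(f"{speed_up:.2f}x faster than real time")   # ~3.13x, a gentle speed up

# compare a typical 1-frame-every-2-seconds interval time lapse:
print(f"{playback_fps / 0.5:.0f}x")               # 50x, far more abrupt motion
```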
By shooting with S-Log3 you can still grade the footage and this is a quick way to get a time-lapse sequence without having to process thousands of still frames. It also needs only a fraction of the storage space.
In a few days I will be heading off to the north of Norway for my annual trip to shoot the Northern Lights. This year I really do hope to stream the Aurora live.
Aurora captured by my FX3 in 2022.
I’ve tried to livestream the Aurora before, but without much success. We go to a very remote location to get away from city lights and light pollution, but that means the cellphone connection isn’t great. And then I have had issues with getting the streaming hardware to work correctly in the extreme cold; it’s often well below -20°C. I really want to stream the output of my FX3 rather than shooting the back of the camera with a phone as I have done before. Hopefully I will actually succeed this time. There have been some major updates to the software on my Xperia Pro phone, and the HDMI input app now includes RTMP streaming direct from the app, so I can stream from the FX3 via HDMI and the Xperia Pro more easily than before.
The next big unknown is when the Aurora will be visible. To see the Aurora I need clear skies, and then the Aurora has to actually be present. There is no guarantee that it will be visible and I certainly can’t predict exactly when. So – I can’t tell you when I will be live. Most likely it will be sometime between January 12th and January 22nd, after 16:00 GMT and before 02:00 GMT. I may be live many times on different nights.
I will also be on Facebook, and this would be a good way to keep updated as I will try to post on Facebook prior to going live on YouTube.
As well as the FX3 I’m taking an FX30, and it will be interesting to see how it performs shooting the Aurora. My main lenses for the Aurora will be the Sony 24mm f1.4 GM and 20mm f1.8 G, but I will also have a Sigma 20mm f1.4 with a Metabones Speed Booster for the FX30.
If you are using Zebras to measure the exposure of a log gamma curve you should consider using a narrower Zebra window.
Why?
From middle grey to white (50% to 90%) in the world of standard dynamic range Rec-709, each stop occupies approximately 16% of the recording range. Typically the default zebra window or zebra range used by most cameras is 10% (often +/- 5%). So, when zebras are set to 70% they will appear at 65% and go away at 75%. For Rec-709 and most conventional SDR gammas this window or range is around 2/3 of a stop, so less than 1 full stop and generally reasonably accurate.
But with most Cineon-based log curves, such as Sony’s S-Log3, between middle grey and white (41% to 61%) each stop only occupies around 8% of the recording range, half the range used by Rec-709. As a result, if you use a default 10% zebra window, zebras will appear over a 1.25 stop range, which is excessive and introduces a large margin of exposure error. Compared to Rec-709 the zebras will only be half as precise, especially if you are trying to measure the brightness of a grey card or white card.
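The conversion from zebra window to stops is just the window width divided by the percentage of recording range each stop occupies. A quick sketch using the figures above:

```python
def window_in_stops(window_pct, pct_per_stop):
    """How many stops of exposure a zebra window spans."""
    return window_pct / pct_per_stop

print(window_in_stops(10, 16))  # Rec-709, default 10% window -> ~0.63 stops
print(window_in_stops(10, 8))   # S-Log3, default 10% window -> 1.25 stops (too wide)
print(window_in_stops(6, 8))    # S-Log3, 6% window -> 0.75 stops, similar to 709
print(window_in_stops(2, 8))    # S-Log3, 2% window -> 0.25 stops for grey/white cards
```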
I recommend reducing the width of the Zebra window to 6% when using Zebras to measure skin tones within the S-Log3 image (if measuring a Rec-709 LUT there is no need to change the window). This will then give a similar range and accuracy to a 10% window in 709. If you are using zebras to measure a white card or grey card then consider bringing the zebra window down to 2% to gain a more accurate reading of the white/grey card.
FX6(left) and FX3 (right) zebras set to measure S-Log3 white card exposure.
The zebra window or range can normally be adjusted in the camera’s menu under the zebra settings. On the Sony Alphas and the FX3/FX30 you can adjust the range of the C1 and C2 custom zebras.
When you have millions of pixels on a video sensor it isn’t surprising to find that every now and then one or two might go out of spec and show up in your footage as a white dot. These “hot” pixels are most commonly seen when using high ISOs or the upper of the camera’s two base ISOs. Hot pixels are not uncommon and they are not something to worry about.
The Fix:
Thankfully the issue is easily resolved by going to the camera’s main menu and then Setup Menu – Setup Option – Pixel Mapping. Then cap the lens or cap the camera body and run the pixel mapping. It only takes around 30 seconds and it should eliminate any white, black or coloured sensor pixel issues. The camera will ask you to do this periodically anyway, and you should do it regularly, especially after flying anywhere with the camera.
Sensor pixels can be damaged by energetic particles that come from cosmic events, so a hot pixel can appear at any time and without warning. They are not something to worry about; it is normal to get some bad pixels from time to time over the life of a camera. When you travel by air there is less of the atmosphere to protect your camera from these particles, so there is a higher than normal likelihood of one going out of spec. Polar air routes are the worst as the earth’s magnetic field tends to funnel these particles towards the north and south poles. So, whenever you fly with your camera it is a good idea to run Pixel Mapping (or APR if you have an FX6, FX9 etc) before you start shooting.
The XAVC family of codecs was introduced by Sony back in 2014. Until recently all flavours of XAVC were based on H264 compression; the newer XAVC-HS versions use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. These have both been around for a while now and are well tried and well tested.
XAVC-I
XAVC-I is a very good intra frame codec where each frame is individually encoded. It is being used for Netflix shows, it has been used for broadcast TV for many years, and there are thousands and thousands of hours of great content that has been shot with XAVC-I without any issues. Most of the in flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you make a lot of effort to find them, but it is a high compression codec, so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ, and as you can see there is very little difference between the two, even after several encoding passes.
XAVC-L
XAVC-L is a long GOP version of XAVC-I. Long GOP (Group of Pictures) codecs fully encode a start frame, and then for the next group of frames (typically 12 or more) they only store the differences between that start frame and the next full frame at the start of the next group. They record the changes between frames using tools such as motion prediction and motion vectors which, rather than recording new pixels, move existing pixels from the first fully encoded frame through the subsequent frames when there is movement in the shot. Do note that on the F5/F55, FS5, FS7, FX6 and FX9, XAVC-L is 8 bit in UHD or 4K (while XAVC-I is 10 bit).
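As a conceptual illustration only (real H264 uses block-based motion compensation, not raw pixel diffs), here is a toy long GOP encoder showing why a static scene costs so much less to store than a moving one:

```python
def encode_gop(frames, gop_size=12):
    """Encode frames (lists of pixel values) as one full 'I' frame per
    group plus per-frame diffs ('P' frames) storing only changed pixels."""
    stream = []
    for i, frame in enumerate(frames):
        if i % gop_size == 0:
            stream.append(("I", frame))          # keyframe: every pixel stored
        else:
            prev = frames[i - 1]
            diff = [(j, p) for j, (p, q) in enumerate(zip(frame, prev)) if p != q]
            stream.append(("P", diff))           # only what changed
    return stream

static = [[10] * 100 for _ in range(12)]                           # nothing moves
moving = [[10] * k + [200] + [10] * (99 - k) for k in range(12)]   # a dot pans across

for name, frames in (("static", static), ("moving", moving)):
    size = sum(len(payload) for _, payload in encode_gop(frames))
    print(name, size)   # static: 100, moving: 122 - motion costs extra data
```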
Performance and Efficiency.
Long GOP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GOP is around 2.5x more efficient than the I frame version, and this is why the bit rate of XAVC-I is around 2.5x higher than XAVC-L: for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, then there will be situations where XAVC-L can perform better than XAVC-I.
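That 2.5x ratio shows up in the published bit rates. As a rough storage sketch, assuming the approximate UHD 25p figures of 250 Mbps for XAVC-I (Class 300) and 100 Mbps for XAVC-L (check Sony’s specs for your exact format and frame rate):

```python
def gb_per_hour(mbps):
    """Convert a bit rate in megabits/second to gigabytes/hour."""
    return mbps * 3600 / 8 / 1000

for name, mbps in (("XAVC-I ~250 Mbps", 250), ("XAVC-L ~100 Mbps", 100)):
    print(f"{name}: {gb_per_hour(mbps):.0f} GB/hour")
# XAVC-I ~250 Mbps: 112 GB/hour
# XAVC-L ~100 Mbps: 45 GB/hour - the space saving that long GOP buys you
```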
Motion Artefacts.
BUT as soon as you add a lot of motion, or a lot of extra noise (which looks like motion to a long GOP codec), long GOP codecs struggle, as they don’t typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let’s face it, the primary reason behind the use of long GOP encoding is to save space, and that’s done by decreasing the bit rate. So generally long GOP codecs have much lower bit rates so that they actually provide those space savings, but that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GOP codec to process, as almost everything is different from frame to frame, including entirely new background information hidden behind the cars in one frame that becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water and flocks of birds are all very challenging, and will often exhibit visible artefacts in a lower bit rate long GOP codec that you won’t ever get in the higher bit rate I frame version.
Concatenation.
A further issue is concatenation. The artefacts that occur in long GoP codecs often move in the opposite direction to the object that’s actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters or anyone where there may be multiple compression passes using long GoP codecs such as H264 or H265.
Quality depends on the motion.
So, when things are just right and the scene suits XAVC-L it will perform well, and it might show marginally fewer artefacts than XAVC-I, but the artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. But when there is complex motion XAVC-L can produce visible artefacts, and it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts; your footage will be consistently good no matter what you shoot.
Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, and it can be beneficial if you do need to save card or disk space. But be aware of its limitations and avoid it if you are shooting handheld or shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, FS5, FS7, FX6 and FX9, XAVC-L is 8 bit in UHD or 4K while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.