Handy Tips For Using The Sony Variable ND Filter Values.

Sony rate the ND filters in most of their cameras using fractional values such as 1/4, 1/16, 1/64 etc.

These values represent the amount of light that can pass through the filter, so a 1/4 ND lets 1/4 of the light through. 1/4 is equivalent to 2 stops (1 stop = 1/2, 2 stops = 1/4, 3 stops = 1/8, 4 stops = 1/16, 5 stops = 1/32, 6 stops = 1/64, 7 stops = 1/128).


These fractional values are actually quite easy to work with in conjunction with the camera’s ISO rating.

If you want to quickly figure out what ISO value to put into a light meter to discover the aperture/shutter needed when using the camera with the built in ND filters, simply take the camera’s ISO rating and multiply it by the ND value. So 800 ISO with 1/4 ND becomes 800 x 1/4 = 200 (or you can do the maths as 800 ÷ 4). Put 200 in the light meter and it will tell you what aperture to use for your chosen shutter speed.

If you want to figure out how much ND to use to get an equivalent overall ISO rating (camera ISO and ND combined), take the ISO of the camera and divide it by the ISO you want. This gives you a value “x” which is the fraction in 1/x. So if you want 3200 ISO, take the base ISO of 12800 and divide by 3200, which gives 4, so you want 1/4 ND at 12800.
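The three relationships above (stops to fraction, meter ISO, and the amount of ND needed for a target ISO) can be sketched in a few lines of Python. The function names are just for illustration:

```python
def nd_fraction_from_stops(stops: int) -> float:
    """Each stop halves the light, so N stops of ND passes 1/2**N of it."""
    return 1 / 2 ** stops

def meter_iso(camera_iso: float, nd_fraction: float) -> float:
    """ISO to enter into a light meter when the built-in ND is engaged."""
    return camera_iso * nd_fraction

def nd_for_target_iso(camera_iso: float, target_iso: float) -> float:
    """Returns x, where 1/x ND gives the desired combined ISO rating."""
    return camera_iso / target_iso

print(nd_fraction_from_stops(2))       # 0.25, i.e. a 1/4 ND
print(meter_iso(800, 1/4))             # 200.0 - the value to put in the meter
print(nd_for_target_iso(12800, 3200))  # 4.0 -> use 1/4 ND at 12800 ISO
```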

Premiere Pro 2022 and Issues With S-Log3 – It’s Not A Bug, It’s A Feature!

This keeps cropping up more and more as users of Adobe Premiere Pro upgrade to the 2022 version.

What people are finding is that when they place S-Log3 (or almost any other log format such as Panasonic V-Log or Canon C-Log) into a project, instead of looking flat and washed out as it would have done in previous versions of Premiere, the log footage looks more like Rec-709, with normal looking contrast and normal looking color. Then when they apply their favorite LUT to the S-Log3 it looks completely wrong, or at least very different to the way it looked in previous versions.

So, what’s going on?

This isn’t a bug, this is a deliberate change. Rec-709 is no longer the only colourspace that people need to work in, and more and more new computers and monitors support other colourspaces such as P3 or Rec2020. The MacBook Pro I am writing this on has a wonderful HDR screen that supports Rec2020 or DCI-P3 and it looks wonderful when working with HDR content!
 
Color Management and Colorspace Transforms.
 
Premiere 2022 isn’t adding a LUT to the log footage, it is doing a colorspace transform so that the footage you shot in one colorspace (S-Log3/SGamut3.cine/V-Log/C-Log/Log-C etc) gets displayed correctly in the colorspace you are working in.

S-Log3 is NOT flat.
 
A common misconception is that S-Log3 is flat or washed out. This is not true. S-Log3 has normal contrast and normal colour.
 
The only reason it appears flat is that, more often than not, people view it in a mismatched color space. The mismatch you get when you display material shot in the S-Log3/SGamut3 colorspace using the Rec-709 colorspace causes it to be displayed incorrectly, and the result is an image that appears flat and lacking in contrast and colour. In fact your S-Log3 footage isn’t flat; it has lots of contrast and lots of colour. You are just viewing it in the incorrect colorspace.

So, what is Premiere 2022 doing to my log footage?
 
What is now happening is that Premiere 2022 reads the clip’s metadata to determine its native colorspace, and it then adds a colorspace transform to convert it to the display colourspace determined by your project settings.
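Conceptually, the metadata-driven behaviour described above can be sketched like this. This is a purely hypothetical Python sketch; the transform names and lookup table are illustrative and are not Adobe’s actual internals or API:

```python
# Hypothetical mapping from a clip's native colorspace (read from its
# metadata) to the transform used to bring it into the working space.
SOURCE_TRANSFORMS = {
    ("S-Log3", "S-Gamut3.Cine"): "slog3_sgamut3cine",
    ("V-Log", "V-Gamut"): "vlog_vgamut",
    ("C-Log", "Cinema Gamut"): "clog_cinemagamut",
}

def pick_transform(clip_metadata: dict, project_colorspace: str) -> str:
    """Choose a colorspace transform from clip metadata instead of a LUT."""
    key = (clip_metadata["gamma"], clip_metadata["gamut"])
    # Unrecognised clips fall back to being treated as Rec-709.
    source = SOURCE_TRANSFORMS.get(key, "rec709")
    return f"{source} -> {project_colorspace}"

# A log clip is transformed to whatever the project is set to deliver in.
print(pick_transform({"gamma": "S-Log3", "gamut": "S-Gamut3.Cine"}, "Rec.709"))
print(pick_transform({"gamma": "S-Log3", "gamut": "S-Gamut3.Cine"}, "HDR10"))
```

The point of the sketch is that the same source clip gets a different transform depending on the project’s output colorspace, which is why no per-output LUT is needed.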
 
The footage is still S-Log3, but now you are seeing it as it is actually supposed to look, albeit within the limitations of the display gamma. S-Log3 isn’t flat, it’s just that previously you were viewing it incorrectly. Now, with the correct colorspace transform added to match the project settings and the type of monitor you are using, the S-Log3 is being displayed correctly, having been transformed from S-Log3/SGamut3 to Rec-709 or whatever your project is set to.
 
If your project is an HDR project, perhaps HDR10 to be compatible with most HDR TVs or for a Netflix commission, then the S-Log3 would be transformed to HDR10 and would be seen as HDR on an HDR screen without any grading being necessary. If you then changed your project settings to DCI-P3 then everything in your project would be transformed to P3 and would look correct without grading on a P3 screen. Then change to Rec709 and again it all looks correct without grading – the S-Log3 doesn’t look flat, because in fact it isn’t.

Color Managed Workflows will be the new “normal”.
 
Colour managed workflows such as this are now normal in most high end edit and grading applications and it is something we need to get used to because Rec709 is no longer the only colorspace that people need to deliver in. It won’t be long before delivery in HDR (which may mean one of several different gamma and gamut combinations) becomes normal. This isn’t a bug, this is Premiere catching up and getting ready for a future that won’t be stuck in SDR Rec-709. 

A color managed workflow means that you no longer need to use LUTs to convert your log footage to Rec-709, you simply grade your clips within the colorspace you will be delivering in. A big benefit of this comes when working with multiple sources. For example, S-Log3 and Rec-709 material in the same project will now look very similar. If you mix log footage from different cameras they will all look quite similar and you won’t need separate LUTs for each type of footage or for each final output colorspace.

The workaround if you don’t want to change.
 
If you don’t want to adapt to this new, more flexible way of working then you can force Premiere to ignore the clip’s metadata by right clicking on your clips, going to “Modify” and “Interpret Footage”, then selecting “Colorspace Override” and setting this to Rec-709. When you use the interpret footage function on an S-Log3 clip to set the colorspace to Rec-709 you are forcing Premiere to ignore the clip’s metadata and to treat the S-Log3 as though it is a standard dynamic range Rec-709 clip. In a Rec-709 project this re-introduces the colorspace mismatch that most are used to and results in the S-Log3 appearing flat and washed out. You can then apply your favourite LUTs to the S-Log3, the LUT transforms the S-Log3 to the project’s Rec-709 colorspace, and you are back to where you were previously.
 
This is fine, but you do need to consider that at some point you will likely need to learn how to work across multiple colorspaces, and using LUTs as colorspace transforms is very inefficient, as you will need separate LUTs and separate grades for every colorspace you deliver in and every different type of source material. Colour managed workflows such as this new one in Premiere, or ACES etc, are the way forwards, as LUTs are no longer needed for colorspace transforms; the edit and grading software looks after this for you. Arri Log-C will look like S-Log3, which will look like V-Log, and then the same grade can be applied no matter what camera or colorspace was used. It will greatly simplify workflows once you understand what is happening under the hood, and it allows you to output both SDR and HDR versions without having to completely re-grade everything.

Unfortunately I don’t think the way Adobe are implementing their version of a colour managed workflow is very clear. There are too many automatic assumptions about what you want to do and how you want to handle your footage. On top of this there are insufficient controls for the user to force everything into a known set of settings. Instead, different things are in different places and it’s not always obvious exactly what is going on under the hood. The color management tools are all small add-ons here and there, and there is no single place where you can go for an overview of the start to finish pipeline and settings, as there is in DaVinci Resolve for example. This makes it quite confusing at times and it’s easy to make mistakes or get an unexpected result. There is more information about what Premiere 2022 is doing here: https://community.adobe.com/t5/premiere-pro-discussions/faq-premiere-pro-2022-color-management-for-log-raw-media/

Should You Use In Camera Noise Reduction Or Not?

This is another common question on many user groups. It comes up time and time again. But really there is no one clear cut answer. In a perfect world we would never need to add any noise reduction, but we don’t live and shoot in a perfect world. Often a camera might be a little noisy or you may be shooting with a lot less light than you would really like, so in camera NR might need to be considered.

You need to consider carefully whether you should use in camera NR or not. There will be some cases where you want in camera NR and other times when you don’t.

Post Production NR.
An important consideration is that adding post production NR on top of in-camera NR is never the best route to go down. NR on top of NR will often produce ugly blocky artefacts. If you ever want to add NR in post production it is almost always better not to also add in camera NR. Post production NR has many advantages as you can more precisely control the type and amount you add depending on what the shot needs. When using proper grading software such as DaVinci Resolve you can use power windows or masks to only add NR to the parts of the image that need it.

Before someone else points it out I will add here that it is almost always impossible to turn off all in-camera NR. There will almost certainly be some NR applied at the sensor that you cannot turn off. In addition most recording codecs will apply some noise reduction to avoid wasting data recording the noise; again this can’t be turned off. Generally higher bit rate, less compressed codecs apply less NR. What I am talking about here is the additional NR that can be set to differing levels within the camera’s settings, in addition to the NR that occurs at the sensor or in the codec.

Almost every NR process, as well as reducing the visibility of noise, will introduce other image artefacts. Most NR processes work by taking an average value for groups of pixels, or an average value for the same pixel over a number of frames. This averaging tends not only to reduce the noise but also to reduce fine details and textures. Faces and skin tones may appear smoothed and unnatural if excessively noise reduced. Smooth surfaces such as walls or the sky may get broken up into subtle bands or steps. Sometimes these artefacts won’t be seen in the camera’s viewfinder or on a small screen and only become apparent on a bigger TV or monitor. Often the banding artefacts seen on walls etc are a result of excessive NR rather than a poor codec (although the two are often related, as a weak codec may have to add a lot of NR to a noisy shot to keep the bit rate down).
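The trade-off described above is easy to demonstrate. This small Python sketch simulates temporal NR on a flat grey patch: averaging the same pixel across 8 frames cuts the random noise by roughly the square root of 8, but the same averaging applied to anything that moves or has fine texture would smear it, which is exactly the detail loss described above. The noise figures are simulated, not from any real camera:

```python
import random
import statistics

random.seed(0)

# Simulate a flat grey patch: true level 50, plus sensor noise (sigma = 5),
# captured over 8 consecutive frames of 1000 pixels each.
frames = [[50 + random.gauss(0, 5) for _ in range(1000)] for _ in range(8)]

# Temporal NR: average the same pixel position across all 8 frames.
averaged = [statistics.mean(frame[i] for frame in frames) for i in range(1000)]

noisy_sigma = statistics.stdev(frames[0])  # noise in a single frame
nr_sigma = statistics.stdev(averaged)      # noise after temporal averaging

print(round(noisy_sigma, 2))  # roughly 5
print(round(nr_sigma, 2))     # roughly 5 / sqrt(8), i.e. about 1.8
```

On a static patch this looks like a free win; on a moving subject the 8 frames no longer agree, so the averaging blurs motion and texture instead of just removing noise.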

If you are shooting log then any minor artefacts in the log footage from in-camera noise reduction may be magnified when you start grading and boosting the contrast. So, generally speaking, when shooting log it is best to avoid adding in-camera NR. The easiest way to avoid noise when shooting log is to expose a bit brighter so that in the grade you are never adding gain. Take gain away in post production to compensate for a brighter exposure and you take away much of the noise – without giving up those fine textures and details that make skin tones look great. If shooting log, really the only reason an image will be noisy is because it hasn’t been exposed bright enough. Even scenes that are meant to look dark need to be exposed well. Scenes with large dark areas need good contrast between at least some brighter parts so that the dark areas appear very dark compared to the bright highlights. Without any highlights it’s always tempting to bring up the shadows to give some point of reference. Add a highlight such as a light fixture or a lit face or object and there is no need to bring up the shadows; they can remain dark. Contrast is king when it comes to dark and night scenes.

If, however, you are shooting for “direct to air”, or content that won’t be graded and needs to look as good as possible directly from the camera, then a small amount of in-camera NR can be beneficial. But you should test the camera’s different levels to see how much difference each level makes while also observing what happens to subtle textures and fine details. There is no free lunch here. The more NR you use the more fine details and textures you will lose, and generally the difference in the amount of noise that is removed between the mid and high settings is quite small. Personally I tend to avoid using high and stick to the low or medium levels. As always, good exposure is the best way to avoid noise. Keep your gain and ISO levels low, add light if necessary or use a faster lens; this is much more effective than cranking up the NR.

ILME-FX6 Version 2 Firmware Update

Coming soon, very soon, is the version 2 firmware update for the FX6. Like the recently released version 3 update for the FX9, this update is a significant upgrade for the FX6, adding lots of new and very useful features.

AF Touch Tracking:

The big feature that almost every FX6 user has been wanting since the day it was launched is touch tracking AF. This feature allows you to touch the LCD screen where you want the camera to focus. The touch tracking AF works in conjunction with the camera’s face detection AF to provide what Sony are calling “Advanced AI based AF”. If you touch on a face, for example, that face is then prioritised and tracked by the AF. If the person turns away from the camera so the face can no longer be seen then the AF will track the side or back of the person’s head. If they leave the shot and then come back into the shot, provided the AF can see their facial features the AF will pick up and focus on that face again. When touching on an object that isn’t a face the camera will focus on the touched object as it moves around within the frame. Touch AF makes it very easy to perform perfect focus pulls between different objects or characters within a shot or scene. It’s a very clever system and a welcome addition to the FX6.

Breathing Compensation:

Another new feature that will be of assistance when using the AF is the addition of the Breathing Compensation feature first seen in the Sony A7IV. This feature works by electronically adjusting the size of the recorded frame to minimise any lens breathing while changing the focus distance. This helps to mask and hide focus changes made during a shot. It is a nice feature, but I will say that sometimes when you pull focus, that slight change in image size can be nice, as it reinforces the focus change and gives the viewer a visual clue that something about the shot has changed. If the only thing that changes in a shot is the point of focus, sometimes it can look odd, or perhaps electronic, rather than the more natural focus changes we are used to seeing. Of course the feature can be turned on or off, so you are free to decide whether to use it or not depending on what you are shooting.

The breathing compensation only works with certain Sony lenses, mostly GM lenses and a few G series lenses. The lenses include: SEL14F18GM, SEL20F18G, SEL24F14GM, SEL35F14GM, SEL50F12GM, SEL85F14GM,  SEL135F18GM, SEL1224GM, SEL1224G, SEL1635GM, SEL2470GM, SEL24105G, SEL28135G, SEL70200GM (NOT with a teleconverter), SEL70200GM2, SEL100F28GM (NOT when the macro switching ring is set to “0.57m–1.0m.”).

Bokeh Control:

While I’m on the optics, another change is what Sony are calling “Bokeh Control”. You have already been able to do this on most of Sony’s cameras with a variable ND filter by turning on the auto gain/ISO function while using the auto ND filter. Set this way, when you change the aperture the ND filter and auto gain/ISO will maintain a constant image brightness, allowing you to use the aperture as a bokeh and DoF control. This is now all rolled into a dedicated new feature to make it easier to achieve the same result, so it’s not really new, but it is now easier to do. This works best with the Sony lenses that have a stepless aperture ring. A word of warning however: you will need to keep a close eye on what the ISO/gain is doing to avoid an excessively noisy image. If you don’t have sufficient light for the aperture you are using, the camera will add lots of gain and the images will become noisy very quickly if you are not careful. In practice, while I do like the concept behind this, it is only really useful when you have lots of light, as you want the ND filter to be doing the work rather than the auto gain/ISO. This tends to limit you to exteriors, or to the FX6’s high base ISO which is already a bit noisier than the low base.

Cache record in both normal modes and S&Q.
This is a great new feature. You now have a recording cache that can be used in both normal modes and S&Q motion. The recording cache allows you to capture things that have happened prior to the moment you press the camera’s record button. Of course the camera has to actually be pointing in the right direction, but this allows you to capture unexpected events such as lightning in a thunderstorm. I often find cache recording useful for interviews in case the interviewee suddenly starts talking when you are not expecting it. For many applications this will be a very useful function. Depending on the resolution and frame rate you get a cache period of up to 31 seconds, selectable via the short/medium/long and max settings.

4 Audio Meters:

This is something almost everyone asked for from day one. You can now monitor channels 1 & 2 as well as 3 & 4  on the LCD when shooting. Hooray!

Raw out via HDMI. 

As well as outputting raw via SDI you can now output the raw signal via HDMI. This will be very useful for those that already use the HDMI raw out from their FX3/A7S3 etc as now you won’t need to have the extra SDI adapter for the Ninja V. You will need to update the firmware for your Ninja V or Ninja V+ and the update from Atomos is already available for download.

SR-Live HDR workflow.

Like the FX9, the FX6 gains the ability to change the viewfinder monitoring mode when shooting with HLG. Using Viewfinder Display Gamma Assist and monitoring in SDR, the image in the viewfinder can have a dB offset applied. This offset allows you to expose the HLG so that it is fully optimised for HDR viewing while seeing a correctly exposed SDR image. The details of the offset are stored in the camera’s metadata, and then in post production, as well as your already optimised HDR stream, you apply the same dB offset to the HLG to obtain a stream that will look much better in SDR than it would without any offset. This way it becomes much easier to deliver great looking and better optimised content for both HDR and SDR audiences.
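For reference, camera gain in dB maps to stops and linear gain very simply (6dB equals one stop), which is all a dB exposure offset like this amounts to. A minimal sketch, with illustrative function names:

```python
def db_to_stops(db: float) -> float:
    """Camera gain: 6 dB equals one stop (a doubling or halving of level)."""
    return db / 6.0

def db_to_linear(db: float) -> float:
    """Convert a dB offset to a linear gain multiplier: 10^(dB/20)."""
    return 10 ** (db / 20)

# A -6 dB monitoring offset is one stop down, roughly a 0.5x linear gain.
print(db_to_stops(-6))             # -1.0
print(round(db_to_linear(-6), 3))  # 0.501
```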

Other new features:

The A and B recording slots can be individually assigned to the record buttons on the camera handle and body record button so that each button controls the recording of one slot. This allows you to record some shots to one slot and other shots to the other slot depending on which record button you press, or by pressing both buttons you can record to both slots.

The Multi-Function dial settings can be assigned to the hand grip dial so that the hand grip dial behaves in the same way as the multi-function dial.

FTP transfer speeds are improved.

More functions can be controlled via the touch screen.

Increased control functionality when used with Content Browser Mobile version 3.6 (you won’t be able to use earlier versions of CBM with the version 2 firmware). Content Browser Mobile version 3.6 is already available for download. 

This new version 2 firmware update is not out just yet, but it will be available by the end of January.

Shooting Raw With The FS5 And Ninja V+

This came up in one of the user groups today and I thought I would repeat the information here.

One issue when using the Atomos Ninja V+ rather than an older Atomos Shogun or Inferno is that the Ninja V+ doesn’t have an internal S-Log2 option. This seems to cause some users a bit of confusion, as most are aware that for the best results the FS5 MUST be set to PP7 and S-Log2, as this is the only setting that fully optimises the sensor setup.
 
When you shoot raw, you are recording linear raw; the recordings don’t actually have any gamma as such and they are not S-Log2 or S-Log3, they are raw. The S-Log2 setting in the FS5 just ensures the camera is optimised correctly. If you use the S-Log3 settings, what you record is exactly the same – linear raw, just with more noise because the camera isn’t as well optimised.
 
Any monitor or post production S-Log2 or S-Log3 settings are simply selecting the intermediate gamma that the raw will be converted to for convenience. So when the Ninja V+ states S-Log3, this is simply what the Ninja converts the raw to before applying any LUTs. It doesn’t matter that this is not S-Log2 because you didn’t record S-Log2, you recorded linear raw. This is simply what the Ninja V+ will use internally when processing the raw.
 
You have to convert the raw to some sort of gamma so that you can add LUTs to it, and as S-Log3 LUTs are commonly available, S-Log3 is a reasonable choice. With earlier recorders you had the option to choose S-Log2, so that when viewing the native S-Log2 output from the camera, what you saw on the monitor’s screen looked similar to what you saw on the FS5’s LCD screen when the FS5 was set to S-Log2. But S-Log2 is no longer included in the latest monitors, so now you only have the option to use S-Log3. From an image quality point of view this monitor setting makes no difference and has no effect on what is recorded (the FS5 should still be set to PP7 and S-Log2).
 
In post production in the past, for consistency, it would have been normal to decode the raw to S-Log2 so that everything matched throughout your production pipeline from camera to post. But again, it doesn’t really matter if you now decode the raw to S-Log3 instead if you wish. There will be no significant quality difference and there is a wider range of S-Log3 LUTs to choose from.
 
If the footage is too noisy then it is underexposed; it’s the only reason why the footage will be excessively noisy. It is true that raw bypasses the majority of the camera’s internal noise reduction processes, but this only makes a small difference to the overall noise levels.

Even with the latest Ninja V+ what is recorded when outputting raw from the FS5 is 12 bit linear raw.
 
12 bit linear raw is a somewhat restricted format. 12 bits is not a lot of code values for recording a high dynamic range linear signal. This is why most cameras use log recording for wide dynamic ranges; log is much more efficient and distributes the available recording data in a way very sympathetic to the way human vision works.
 
In practice what this means is that the 12 bit linear raw has LOTS of data and image information in the upper mid range and the very brightest highlights, but relatively little picture information in the lower mid range and shadows. So if it is even the slightest bit underexposed the image will degrade very quickly, as for each stop you go down in brightness you halve the amount of image information you have.
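You can see how lopsided this distribution is with a quick calculation. In a linear encoding, each stop down from clipping halves the number of available code values, so of the 4096 codes in a 12 bit file, half are spent on the single brightest stop:

```python
# 12-bit linear: 4096 code values in total. Each stop down from the clip
# point halves the number of codes available to describe that stop.
total_codes = 2 ** 12

codes_per_stop = []
upper = total_codes
for _ in range(8):  # look at the top 8 stops below clipping
    lower = upper // 2
    codes_per_stop.append(upper - lower)
    upper = lower

for stop, codes in enumerate(codes_per_stop, start=1):
    print(f"stop {stop} below clip: {codes} code values")
# stop 1: 2048, stop 2: 1024, ... stop 8: just 16 code values
```

Eight stops down there are only 16 values left to describe a whole stop of brightness, which is why shadows in underexposed linear raw fall apart so quickly.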
 
In an underexposed image the noise will be very coarse in appearance and the image will be difficult to grade. You really do need to expose the raw nice and bright, and because of the way the data is distributed, the brighter you can get away with the better. Never be afraid of exposing linear raw “just a little bit brighter”. It is unlikely to severely bite you if you are slightly overexposed but highly likely to bite you if it is even a fraction under.
 
The 12 bit linear raw from the FS5 is not going to be good in low light or when shooting scenes with large shadow areas unless you can expose brightly enough that you are bringing your levels down in post. If you have to bring any levels up in post, the image will not be good.
 
Raw is not a magic bullet that makes everything look great. Just as with S-Log it must be exposed carefully and 12 bit linear raw is particularly unforgiving – but when exposed well it is much better than the internal 8 bit log recordings of the FS5 and can be a fantastic format to work with, especially given the low cost of an FS5.
 
I recommend going back to basics and using a white card to measure the exposure. If monitoring the raw via S-Log3 the white card needs to be exposed around 70%. If using a light meter with the FS5 set the light meter to 640 ISO.
 
If you do want to use a LUT on the Ninja to judge your exposure use a LUT with a -1.5 stop offset. The darker LUT will encourage you to expose the raw brighter and you will find the footage much easier to grade. But it should also be considered that it is also quite normal to add a small amount of selective noise reduction in post production when shooting raw. 

Will My B4 Lens Cover The FX9’s 2K S16 Scan Mode?

This has cropped up a few times in the comments and in various user groups so I thought I would go through what you need to do to use a B4 2/3″ lens with the FX9’s S16 2K scan mode.

Not all B4 2/3″ lenses will directly cover the FX9’s Super 16mm sized 2K scan mode as 2/3″ is smaller than S16. 2/3″ lenses are designed to cover 8.8 x 6.6mm and S16 is 12.5 x 7mm. Some lenses might just about cover this as is, with minor vignetting, but most won’t.

The Sony LA-EB1 includes an optical expander that compensates for this (I think it’s about 1.35x). With the LA-EB1 all B4 2/3″ lenses should work without vignetting, and in addition, when an ALAC compatible lens is connected to the LA-EB1 the camera will support the ALAC function, which reduces many of the aberrations typically seen with B4 lenses. The LA-EB1 needs a power feed (14.4v) to work correctly and to power the lens. It is supplied with a 4-pin Hirose cable that is designed to be plugged into the 4-pin Hirose power socket on the XDCA-FX9. This also provides the record trigger signal to the camera. If you don’t have an XDCA-FX9 then you will need to source a 4-pin Hirose to D-Tap or similar power cable.
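You can estimate the required expansion yourself from the image circle diagonals of the sensor areas given above. This is a rough illustrative calculation; it lands close to, but not exactly at, the roughly 1.35x figure, as a real expander allows some margin against vignetting:

```python
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    """Image circle needed to cover a rectangular sensor area."""
    return math.hypot(width_mm, height_mm)

lens_circle = diagonal(8.8, 6.6)   # 2/3" lens image circle: 11.0 mm
s16_target = diagonal(12.5, 7.0)   # FX9 S16 scan area: about 14.33 mm

magnification = s16_target / lens_circle
print(round(lens_circle, 2))    # 11.0
print(round(s16_target, 2))     # 14.33
print(round(magnification, 2))  # 1.3 - close to the quoted ~1.35x
```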


If you have a mount adapter without any optical expansion, such as the cheaper MTF B4 to E-Mount adapter (MTB4SEM, approx $400), and the lens doesn’t cover the scan area on its own, you can use the lens’s 2x extender if it has one. The more expensive MTF MTB4SEMP (approx $1,200) includes an optical expander, and with this adapter all B4 lenses should cover the S16 area without needing to use the lens’s extender. To get the zoom servo working you will need an adapter that can provide 12v to the lens’s 12-pin connector.

Whatever lens or adapter you choose, the lens needs to be an HD lens. The better the lens the better the end result. I know that may seem obvious, but when you are using either an adapter with an included optical expander, or having to use the lens’s 2x extender to eliminate vignetting with a straight through adapter, any imperfections in the lens and any softness become quite obvious. Get a really good lens on a good adapter and the images are perfectly respectable, but a poor lens on an adapter will probably disappoint.

CineD Anamorphic Lens Raffle

With the FX9 recently gaining the ability to shoot anamorphic the timing of this couldn’t be better. My good friends over at CineD are giving away a Sirui 50mm 1.3x anamorphic lens in a Christmas raffle. I have the 35mm and the 75mm versions of this lens and they are nice little lenses that will of course work perfectly with the FX9 in its S35 4K scan mode with the anamorphic monitoring set to 1.3x. The lenses can also be used with the 5K Crop scan mode. On an FX6 you would need to use about 1.2x clear image zoom to remove the vignette, but I feel these are usable with the FX6 for 4K. I was looking to add the 50mm to my own set so maybe I need to enter the raffle too!
CLICK HERE to go to the raffle – good luck!

FX9 Version 3 Firmware Released.

Sony have today released the version 3 update for the FX9. This is a significant upgrade for the FX9 and I highly recommend all users update their cameras.

The update process is robust and provided you follow the instructions in the PDF guide that is included in the update download package you should not have any problems.



EDIT: I WILL SAY THIS AGAIN – YOU MUST FOLLOW THE INSTRUCTIONS INCLUDED IN THE DOWNLOAD PACKAGE. DO NOT SKIP ANY OF THE STEPS IN THE INSTRUCTIONS. I’m seeing lots of people with failed updates because they are formatting their SD cards on their computer or not turning off the camera’s network functions. It’s all in the instructions, the instructions are there to help you, please follow them to the letter. The “root” of a card is the very, very bottom of the card’s file structure. It is NOT a folder; anything on the “root” of a card will not be inside a folder of any sort. The root is the part of the card where the first folders will be; in the case of an SD card formatted in the FX9, the root is where you will find the “PRIVATE” folder. The “root” is NOT the XDROOT folder. And if something does go wrong, the instructions tell you how to recover.


Do be aware that when you start the update process the camera’s LCD screen will go blank for around 10 minutes and the only clue that all is good will be a flashing red tally light. Just leave the camera alone, go and do something else and come back an hour later. The upgrade will also appear to stall at around the 80% mark. Again, just be patient and wait for the update complete message before turning off the camera.

There are a lot of new features in version 3 but the three that I think most users are going to like best are the Real-Time Tracking AF (aka touch tracking), the anamorphic monitoring modes and the Super 16mm 2K scan mode.

REAL TIME TRACKING AF

The Real-Time Tracking allows you to use the viewfinder’s touch screen to touch where you want the camera to focus. A white box will appear where you touch and the camera will then track the touched object while it stays within the frame. To cancel the touch tracking, touch the grey cancel box that appears in the top left of the viewfinder.



In the image above the camera is in Face/Eye priority mode. As the real time tracking AF overrides the other AF modes, by touching on the book on the bookshelf (circled in red) the AF is now focussing on the book and will track it if the camera pans or if the book moves through the shot. If the book moves out of the shot the AF will revert to Face/Eye AF.

When used in conjunction with the Face/Eye AF, if you touch on a face the AF will prioritise that face and track it. If the person is facing the camera and can be identified as an individual face, the face gets saved and a “*” will appear next to the Face AF symbol at the top left of the VF.

If the person turns away from the camera the focus will then track the person’s head; if they turn back towards the camera it tracks their face again. It is even possible to start by touching on the back of a person’s head and then, as they turn towards the camera, the face/eye AF takes over.

The way the camera “registers” faces has changed from previous firmware versions. As above, to save a face simply touch on a face; touch a different face to save a different face. When a face has been saved a * will appear next to the Face/Eye AF symbol.

In the image below the face has been selected by touching on it and is now saved (note the * symbol circled in red). Note also the tracking symbol indicating that BOTH Face/Eye AF and Tracking are in use.

Whenever you stop the real time tracking the saved face is removed from the camera.

If you are using Face/Eye ONLY AF then normally the camera will only focus on faces/eyes, but if you touch on an object that isn't a face the camera will focus on that object. If you touch on a face the camera will only focus on that particular face, and if the camera can't see that face the AF will halt until you stop the real-time tracking AF.

As well as the tracking stop button (top left of the VF; it looks like a grey box), if you have AF assist enabled, turning the focus ring on the lens will stop the tracking AF. Real-time tracking AF will also stop when a touched object that is not a face leaves the frame, or when a button assigned to the Push AF/Push MF function is pressed.

Real-time tracking can also be used when the camera's AF switch is set to MF. If, in the menu under Shooting Settings, Focus, the “Touch Function in MF” setting is set to On, the real-time touch tracking autofocus will also work when the AF switch on the side of the camera is set to MF. This allows you to use an autofocus lens to focus manually but then instantly switch to real-time tracking AF simply by touching the LCD screen. For this to work, if the lens has an AF/MF switch the switch on the lens must be set to AF, and if using the 28-135mm lens the focus ring must be in the forward position.

Real time tracking works across the entire frame regardless of the chosen AF frame area. Any objects being tracked must be distinct from the background. Textured, coloured or detailed objects are tracked more easily than plain objects. When using Face/Eye AF not all faces can be saved. To successfully save a face it needs to be facing the camera directly and sufficiently distinct that the camera can identify the eyes, nose and mouth.

ANAMORPHIC MODE

The anamorphic mode is a basic anamorphic monitoring mode for the LCD viewfinder. It has no effect on the recorded files or the SDI/HDMI outputs.

It provides a 1.3x or 2x de-squeeze. The 2x de-squeeze function is tailored for 2x anamorphic lenses designed to be used with 35mm film. To use these lenses the camera must be set to the 6K Full Frame scan to gain the correct sensor height. This means that with a 2x 35mm film anamorphic lens the sensor scan is wider than it really needs to be, so there will be some vignetting at the sides of the frame. The 2x de-squeeze function takes this into account and not only de-squeezes the image but also crops the sides to emulate how the footage will look after post production.

If you are using 1.3x lenses then you can use either the Full Frame 6K scan, the FF 5K crop or the S35 4K scan modes, but for 2x anamorphic lenses you will need to use the FF 6K scan.
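The de-squeeze and side-crop arithmetic is easy to sketch. This is an illustration only: the pixel dimensions and the 2.39:1 delivery target below are assumed example values, not Sony's exact scan figures.

```python
# Sketch of the arithmetic behind anamorphic de-squeeze and side-cropping.
# The sensor dimensions and target aspect ratio are illustrative assumptions.

def desqueezed_aspect(width_px, height_px, squeeze):
    """Aspect ratio of the image after horizontally stretching by 'squeeze'."""
    return (width_px * squeeze) / height_px

def crop_width_for_target(height_px, target_aspect, squeeze):
    """Sensor width (pixels) actually needed so that, after a 'squeeze'-factor
    de-squeeze, the image lands on 'target_aspect'."""
    return height_px * target_aspect / squeeze

# Example: a hypothetical 17:9-ish full-frame scan with a 2x anamorphic lens.
w, h = 6008, 3168
print(round(desqueezed_aspect(w, h, 2.0), 2))       # ~3.79:1, far wider than 2.39:1
print(round(crop_width_for_target(h, 2.39, 2.0)))   # only ~3786 px of width is needed
```

This shows why the full scan width goes unused with a 2x lens: the de-squeezed image is much wider than the usual 2.39:1 delivery frame, so the monitoring mode crops the excess at the sides.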

S16 2K CROP

The S16 2K crop mode allows you to use just the centre, Super 16mm sized, 2K scan area. This mode is actually very useful for shooting at high frame rates above 60fps because, unlike the FF 2K or S35 2K scan modes, it uses every single pixel within the scan area. As a result there are none of the image artefacts common when shooting at high frame rates using FF 2K or S35 2K. The 2.5x crop (compared to FF) does mean that for wide shots you will need a very wide lens. But where you can use it, this mode is great for better quality slow motion.

The other use for this scan mode is with Super 16mm lenses, or with B4 2/3″ ENG lenses via suitable adapters. The image quality in this mode on the FX9 is a little better than the similar mode on the FS7, but you do need to remember that this is a 2K scan, so you will have a little under HD resolution. In addition the camera's noise appears worse in this mode because you are enlarging fewer pixels to fill the screen, so the noise isn't as fine or refined as in FF 6K or S35 4K. So where possible make sure you use a nice bright exposure for the best results.

The images below were shot with the same 16x ENG zoom lens at its widest and longest focal lengths.



Here is the full list of new features:

  • S700 Protocol over Ethernet
  • B4 lens support
    – S16 scanning mode (up to FHD 180fps)
    – B4 lens control using the LA-EB1
    – ALAC (Auto Lens Aberration Correction) function
  • Assignable Center Scan
  • Anamorphic lens support
  • Clip Naming (Cam ID + Reel#)
  • Real-time Tracking (touch tracking auto focus)
  • Additional items can be modified in the Status Screen
  • SR Live for HDR metadata support
  • Recording Proxy Clip Real-time Transfer
  • Camcorder Network Setup using Smartphone App
  • Remote control using smartphone over USB tethering
  • USB tethering activated using iOS14 iPhone and iPad

*The network features above also support C3 Portal (only available in specific countries)



You will find the version 3 download files here:  https://pro.sony/en_GB/support-resources/pxw-fx9/software

Sony Launches Venice II

Sony Venice II

 

Today Sony launched Venice II. Perhaps not one of the best-kept secrets, with many leaks in the last few weeks, but finally we officially know that it's called Venice II and that it has an 8K (8.6K maximum) sensor recording 16-bit linear X-OCN or ProRes to two built-in AXS card slots.

The full information about the new camera is here. https://pro.sony/en_GB/products/digital-cinema-cameras/venice2

Venice II is in essence the original Venice camera and the AXS-R7 built into a single unit. But to achieve this the ability to use SxS cards has been dropped; Venice II only works with AXS cards. The XAVC-I codec is also gone. The new camera is only marginally longer than the original Venice camera body.



As well as X-OCN (the equivalent of a compressed raw recording), Venice II can also record 4K ProRes HQ and 4K ProRes 444. Because the sensor is an 8.6K sensor, that 4K 444 will be “real” 444 with a real red, green and blue sample at every position in the image. This will be a great format for those not wishing to use X-OCN. But why not use X-OCN? The files are very compact and full of 16-bit goodness, and I find X-OCN just as easy to work with as ProRes.

One thing that Venice II can't do is record proxies. Apparently user feedback is that these are rarely used. I guess in a film-style workflow where you have an on-set DIT station it's easy for proxies to be created on set, or you can create proxies in most edit applications when you ingest the main files. But I do wonder whether proxies are something some people will miss if they only have X-OCN files to work from.

New Sensor:

There has been a lot of speculation that the sensor used in Venice II is the same as the sensor in the Sony A1 mirrorless camera; after all, the pixel count is exactly the same. We already know that the A1 sensor is a very nice and very capable sensor. So IF it were the same sensor, but paired with significantly more and better processing power and an appropriate feature set for digital cinema production, that would not be anything to complain about. But it is unlikely that it is the very same sensor. It might be based on the A1 sensor (and the original Venice sensor is widely speculated to be based on the A9 sensor), but one thing you don't want on these sensors is the phase detection sites used for autofocus.

When you expand these very high quality images onto very big screens, even the smallest of image imperfections can become an issue. The phase detection pixels and the wires that interconnect them can form a very, very faint fixed pattern within the image. In a still photograph you would probably never see this, and in a highly compressed image, compression artefacts might hide it (although both the FX6 and FX9 exhibit some fixed pattern noise that might in part be caused by the AF sites). But on a giant screen, with a moving image, this faint fixed pattern may be perceptible to audiences, and that just isn't acceptable for a flagship cinema camera. So I am led to believe that the sensors used in both the original Venice and Venice II do not have any AF phase detection pixels or wire interconnects, which means these cannot be the very same sensors as found in the A1 or A9. They are most likely made specifically for Venice.

Also, most stills-camera sensors can only be read at 12 bit when used for video. Perhaps another key difference is that, when paired with the cooling system in the Venice cameras, these sensors can be read at 16 bit at video frame rates rather than 12 or 14 bit.

The processing hardware in Venice II has been significantly upgraded from the original Venice. This was necessary to support the data throughput needed to shoot at 8.6K and 60fps, as well as the higher resolution SDI outputs and much improved LUT processing. Venice II can also be painted live on set via both WiFi and Ethernet. So the very similar exterior appearance hides the fact that this really is a completely new camera.



My Highlights:

I am not going to repeat all the information in the press releases or on the Sony website here. But what I will say is I like what I see. Integrating the R7 into the Venice II body makes the overall package smaller. There are no interconnections to go wrong. The increase in dynamic range to 16 stops, largely thanks to a lower noise floor is very welcome. There was nothing wrong with the original Venice, but this new sensor is just that bit better.

The default dynamic range split gives the same +6 stops as most of Sony's current cameras, but goes down to -10 stops. And with the very low noise floor this sensor has, rating the camera higher than the 800 base ISO to gain a bit of extra headroom shouldn't be an issue. Sample footage from Venice II shows that the highlights roll off very nicely as they reach their limits.
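The trade-off when rating the camera higher follows simple stop arithmetic: each doubling of the EI over the base ISO moves one stop from the shadow side of the split to the highlight side. A hedged sketch, assuming the +6/-10 split and 800 base quoted above; the EI values are examples:

```python
import math

# Assumed figures from the text: 800 base ISO with a +6/-10 stop split.
BASE_ISO = 800
HEADROOM = 6    # stops above middle grey at base ISO
SHADOWS = 10    # stops below middle grey at base ISO

def split_at_ei(ei):
    """Approximate highlight/shadow stop split when rating the camera at 'ei'."""
    shift = math.log2(ei / BASE_ISO)  # each doubling of EI = 1 extra stop of headroom
    return HEADROOM + shift, SHADOWS - shift

print(split_at_ei(1600))  # (7.0, 9.0): one extra stop of highlight headroom
```

Rating at 1600 EI in this sketch gives +7/-9, which is only workable because the low noise floor leaves the shadows usable even after giving up a stop.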

The LUT processing has been improved, and now you can have 3D LUTs in 4K on SDIs 1 & 2 (which are 12G) and in HD at the same time on SDIs 3 & 4 (which are 3G), as well as on the monitor out and in the VF. This is actually quite a significant upgrade; the original Venice is a little bit lacking in the way it handles LUTs. The ART look system is retained if you want even higher quality previews than are possible with 33x LUTs. There is also built-in ACES support with a new RRT. This makes the camera extremely easy to use for ACES workflows, and the 16-bit linear X-OCN is a great fit for ACES.



It retains the ability to remove the sensor head so it can be used on the end of an extension cable. Venice II can use either the original 6K Venice sensor or the new 8K sensor; however, a new extension cable, which won't be available until some time in 2023, is needed before the head can be separated, so Venice 1 will still have a place for some considerable time to come.

Venice only takes the original 6K sensor but Venice II can take either the original 6K sensor or the new 8K sensor.



Moving the dual base ISO from 500/2500 to 800/3200 brings Venice II's lower base ISO up to the same level as the majority of other cinema cameras. I know that some found 500 ISO slightly odd to work with, so this will just make it easier to work alongside other similarly rated cameras.

Another interesting consideration is that you can shoot with 5.8K of pixels using a Super 35mm sized scan. This means that 4K Super 35mm material will have greater resolution than from the original Venice or many other S35 cameras that only use 4K of pixels at S35. There is a lot of very beautiful Super 35mm cine glass available, and being able to shoot with classic cinema glass and get a nice uplift in image resolution is going to be really nice. Additionally there will be some productions where the shallower DoF of Full Frame may not be desirable, or where the 8.6K files are too big and unnecessary. I can see Venice II being a very nice option for those wishing to shoot Super 35.

But where does this leave existing Venice owners? 

For a start, the price of Venice 1 is not going to change; Sony are not dropping the cost. This new Venice is an upgrade over the original and more expensive (but the price does include the high frame rate options), although my suspicion is that Venice II will not be significantly more expensive than the cost of the current Venice + R7 + HFR licence. Sony want this camera to sell well, so they won't want to make it cost significantly more, as then many would just stick with Venice 1. The original remains a highly capable camera that produces beautiful images, and if you don't need 8.6K the reasons to upgrade are fewer. The basic colour science of both cameras remains the same, so there is no reason why both can't be used together on the same projects. Venice 1 can work with lower cost SxS cards and XAVC-I if you need very small files and a very simple workflow; Venice II pushes you to an AXS card based workflow, and AXS cards are very expensive.

If you have productions that need the Rialto system and the ability to un-dock the sensor, then this isn’t going to be available for Venice II for some time. So original Venice cameras will still be needed for Rialto applications (it will be 2023 before Rialto for Venice II becomes available).

Of course it always hurts when a new camera comes out, but I don't think existing Venice owners should be too concerned. If customers really felt they needed 8.6K then they would likely already have been lost to a Red camera and the Red ecosystem. And now that there is an 8K Venice option, that might help keep the original Venice viable for second unit, Rialto (for now at least) or secondary roles within productions shooting primarily in 8K.

I like everything I see about Venice II, but it doesn’t make Venice 1 any less of a camera.