PAG MPL Mini Pag Link Batteries

I’ve been using PAG batteries forever, or at least for as long as I have worked in film and TV, and that’s a very, very long time now. Pag batteries have always been known for their robustness, reliability and performance, all things that are vitally important to me as I often find myself shooting in some very remote and very tough environments.

Shooting with Venice deep in the Slot Canyon powered by a Pag Link PL150 battery.

 

For around 7 years I have been using the Pag Link battery system. Pag Link allows you to quickly link together multiple batteries, and this has many benefits. For a start, you can charge many batteries at once with a single channel battery charger. This is great for me when travelling as I can use the tiny Pag travel charger to charge several batteries overnight. Back at base with my 2 channel Pag charger I will often put 3 or 4 linked batteries on each charger channel so that all my batteries charge in one single session. And you are not limited to using a Pag charger; you can stack the Pag Link batteries on almost any charger.

A single Pag Link PL150 battery will run Venice for around 2 hours.



Another benefit is being able to link a couple of batteries together when you need a higher current output, perhaps to power a big video light or to run a higher powered digital cinema camera. If using more than one battery on a camera it is even possible to hot swap the rearmost battery without needing to turn off the camera or stop recording.

The Pag Link batteries have served me extremely well and even after 6 or more years of use are only showing very minimal capacity loss. But as modern cameras are getting smaller and smaller and need less and less power, even the already relatively compact Pag Link batteries sometimes seemed like overkill.

Enter the MPL series.

The Pag Link MPL batteries have taken what was already a great concept and miniaturised it. Using the latest battery cell technologies Pag have managed to produce new smaller and lighter stackable batteries with similar capacities to the original Pag Links. Pag have also listened to customer feedback, adding D-Tap ports to the tops of the batteries as well as an additional USB output. The USB output module can be swapped for other outputs if you need them, such as Hirose or Lemo. In addition, the MPL batteries are fitted with industry standard ¼” mounting points. These can be used either to mount accessories to the battery or to mount the battery on to something that doesn’t have a standard battery connection.

Pag Link Mini MPL99 powering my FX6 while shooting the Volcano in Iceland.



My first real test for the MPL batteries was a trip to Iceland to shoot the Fagradalsfjall volcano. When travelling by air you must take your Lithium batteries as carry on luggage. The MPLs are built to very high standards and are UN tested, so you can be confident that they are as safe and as flight friendly as possible. The smaller size and light weight make it nice and easy to travel with these batteries.

 

To get to the volcano you have to hike up a small mountain on rocky, slippery and sometimes very steep routes. It’s around 2.5 miles from the nearest road to the closest spots from which you can see the volcano crater, so a minimum of a 5 mile round trip. I was working on my own, so had to carry the camera, lenses, tripod and batteries in a backpack, plus spare clothing, food and drinks, as the weather in Iceland changes frequently and can often be quite nasty. So every gram of weight counted. I was shooting with a Sony FX6 and an Atomos Ninja V raw recorder and needed enough power to run everything for a full day of on and off shooting. The Pag MPLs had just become available and were perfect for the job. The built in D-Taps could be used to power the recorder, I used a V-Mount adapter plate for the camera, and the USB port on the MPL batteries was perfect for topping up my phone for the live streams I was doing.

I spent several days up at the volcano, often hiking even further from the road, seeking out different camera angles and different views. A single 100Wh MPL99 ran the whole setup for most of the day. By adding an additional 50Wh MPL50 on to the back of the MPL99 I had power in reserve. The diminutive size and light weight of these batteries made a big difference for this shoot. Then back at the hotel I could use the Pag travel charger to charge all of my MPL batteries overnight by connecting them together on the charger; no need to get up in the middle of the night to swap batteries over.

Since then, I’ve used the MPL batteries for many different applications. Their small size is deceptive; they don’t look like they would be able to power anything for a long time, but they can. On a shoot with a Venice 2 I used a stacked MPL99 and MPL50 to power the camera, to save weight while walking around London. The batteries ran the camera for close to 2 hours, and the capacity display on the battery as well as the run time indicator in the camera’s viewfinder were highly accurate.

Pag MPL99 and MPL50 being used to power a Sony Venice 2



I can’t recommend the Pag Link system highly enough. The only negative is that the original larger V-Mount Pag Link batteries and the new compact V-Mount Pag Link MPL batteries can’t be connected together; a new mating system was required for the new smaller V-Mount batteries. The Gold Mount versions, both old and new, can be stacked together. Despite their diminutive size, a pair of stacked MPL99’s can deliver up to 12 amps of power, enough for most video lights. The intelligent linking system means there is no issue connecting a fully charged battery to a flat battery. These are very clever, small, light and compact batteries.

Timecode doesn’t synchronise anything!!!

There seems to be a huge misunderstanding about what timecode is and what timecode can do. I lay most of the blame for this on manufacturers that make claims such as “Our Timecode Gadget Will Keep Your Cameras in Sync” or “by connecting our wireless timecode device to both your audio recorder and camera everything will remain in perfect sync”. These claims are almost never actually true.

What is “Sync”?

First we have to consider what we mean when we talk about “sync” or synchronisation. A dictionary definition would be something like “the operation or activity of two or more things at the same time or rate.” For film and video applications, two cameras would be said to be in sync when they start recording each frame at exactly the same moment in time, and over any period of time they record exactly the same number of frames, with each frame starting and ending at precisely the same moment.

What is “Timecode”?

Next we have to consider what timecode is. Timecode is a numerical value attached to each frame of a video recording, or laid alongside an audio recording, to give it a time value in hours, minutes, seconds and frames. It is used to identify individual frames and each frame must have a unique numerical value. Each successive frame’s timecode value MUST be “1” greater than the frame before (I’m ignoring drop frame for the sake of clarity here). A normal timecode stream does not contain any form of sync pulse or sync control, it is just a number value.
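To make this concrete, here is a minimal Python sketch (purely illustrative, using 25fps and invented function names) showing that non-drop-frame timecode is nothing more than a running frame count formatted as hours, minutes, seconds and frames:

```python
# Minimal sketch: non-drop-frame timecode is just a frame count dressed up
# as hours:minutes:seconds:frames. There is no sync information in it.

def frames_to_timecode(frame_count: int, fps: int = 25) -> str:
    """Format a running frame count as an HH:MM:SS:FF label."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def timecode_to_frames(tc: str, fps: int = 25) -> int:
    """Convert an HH:MM:SS:FF label back into a frame count."""
    hours, minutes, seconds, frames = (int(x) for x in tc.split(":"))
    return ((hours * 3600 + minutes * 60 + seconds) * fps) + frames

# Each successive frame's timecode is simply the previous value plus one frame:
print(frames_to_timecode(timecode_to_frames("01:00:00:24") + 1))  # 01:00:01:00
```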

Controlling the “Frame Rate”.


And now we have to consider what controls the frame rate that a camera or recorder records at. The frame rate the camera records at is governed by the camera’s internal sync or frame clock. This is normally a circuit controlled by a crystal oscillator. It’s worth noting that these circuits can be affected by heat, and at different temperatures there may be very slight variations in the frequency of the sync clock. Also, this clock starts when you turn the camera on, so the exact starting moment of the sync clock depends on the exact moment the camera is switched on. If you were to randomly turn on a bunch of cameras, their sync clocks would all be running out of sync with each other. Even if you could press the record button on each camera at exactly the same moment, each would start recording its first frame at a very slightly different moment in time, depending on where in the frame rate cycle each camera’s sync clock is. In higher end cameras there is often a way to externally control the sync clock via an input called “Genlock”. Applying a synchronisation signal to the camera’s Genlock input will pull the camera’s sync clock into precise sync with the sync signal and then hold it in sync.

And the issue is…

Timecode doesn’t perform a sync function. To SYNCHRONISE two cameras, or a camera and an audio recorder, you need a genlock sync signal, and timecode isn’t a sync signal; timecode is just a frame count number. So timecode cannot synchronise 2 devices. The camera’s sync/frame clock might be running at a very slightly different frame rate to the clock of the timecode source. When feeding timecode to a camera, the camera might already be part way through a frame when the timecode value for that frame arrives, making it too late to be added, so there will be an unavoidable offset. Across multiple cameras this offset will vary, so it is completely normal to have a +/- 2 frame (sometimes more) offset amongst several cameras at the start of each recording.

And once you start to record the problems can get even worse…

If the camera’s frame clock is running slightly faster than the clock of the TC source then the camera might record 500 frames but only receive 498 timecode values. So what happens for the 2 extra frames the camera records in this time? The answer is that the camera will give each frame in the sequence a unique numerical value that increments by 1, so the extra frames will get the necessary 2 additional TC values. As a result the TC in the camera at the end of the clip will be a further 2 frames different from that of the TC source. The TC from the source and the TC from the camera won’t exactly match; they won’t be in sync or “two or more things at the same time or rate”, they will be different.

The longer the clip that you record, the greater these errors become as the camera and TC source drift further apart.
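To put some rough numbers on that drift, here is a small Python sketch. The clock rates are made up purely for illustration; real cameras and TC generators will drift by different amounts, but the arithmetic is the same:

```python
# Rough sketch of clock drift between a camera and an external TC source.
# The rates below are invented examples, not measured values from any device.

camera_fps = 25.001            # camera's crystal runs fractionally fast
tc_source_fps = 24.999         # TC source runs fractionally slow

clip_length_seconds = 60 * 60  # a one hour recording

frames_recorded = camera_fps * clip_length_seconds
tc_values_received = tc_source_fps * clip_length_seconds

# The camera must still give every recorded frame a unique, incrementing TC
# value, so by the end of the clip its TC no longer matches the TC source.
offset_frames = frames_recorded - tc_values_received
print(f"End of clip offset: about {offset_frames:.1f} frames")  # roughly 7 frames
```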

Before you press record on the camera, the camera’s TC clock will follow the external TC input. But as soon as you press record, every recorded frame MUST have a unique new numerical value 1 greater than the previous frame, regardless of what value is on the external TC input. So the camera’s TC clock will count the frames recorded. And the number of frames recorded is governed by the camera’s sync/frame clock, NOT the external TC.

So in reality the ONLY way to truly synchronise the timecode across multiple cameras or audio devices is to use a sync clock connected to the GENLOCK input of each device.

Connecting an external TC source to a camera’s TC input is likely to result in much closer TC values for both the audio recorder and camera(s) than no connection at all. But don’t be surprised if you see small 1 or 2 frame errors at the start of clips, due to the exact timing of when the TC number arrives at the camera relative to when the camera starts to record the first frame, and then possibly much larger errors at the ends of clips; these errors are expected and normal. If you can’t genlock everything with a proper sync signal, a better way to do it is to use the camera as the TC source and feed the TC from the camera to the audio recorder. Audio recorders don’t record in frames, they just lay the TC values alongside the audio. As an audio recorder doesn’t need to count frames, the TC values will always be in the right place in the audio file to match the camera’s TC frame count.

CineEI is not the same as conventional shooting.

CineEI is different to conventional shooting and you will need to think differently.

Shooting using CineEI is a very different process to conventional shooting. The first thing to understand about CineEI and Log is that the number one objective is to get the best possible image quality with the greatest possible dynamic range, and this can only be achieved by recording at the camera’s base sensitivity. If you add in camera gain you add noise and reduce the dynamic range that can be recorded, so ideally you always need to record at the camera’s base sensitivity for the best possible captured image.

Sony call their system CineEI. On an Arri camera the only way to shoot log or raw is using Exposure Indexes, and it’s the same with Red, Canon and almost every other digital cinema camera when shooting log. You always record at the camera’s base sensitivity because this will deliver the greatest dynamic range.

Post Production.

A key part of any log workflow is the post production. Without a really good post production workflow you will never see the best possible results from shooting Log. An important part of the post production workflow will be correcting for any exposure offsets used when shooting. If something has been exposed very brightly, then in post you will bring that exposure down to a normal level. Bringing the levels down in post will decrease noise. The flip side to this is that if the exposure is very dark then you will need to raise the levels in post, and this will make them more noisy.

Exposure and Light Levels.

It is assumed that when using CineEI and shooting with log you will control the light levels in your shots and use levels suitable for the recording ISO (base ISO) of the camera, using combinations of aperture, ND and shutter speed; again it’s all about getting the best possible image quality. If lighting a scene you will light for the base ISO of the camera you are using.

Here’s the bit that’s different:

Changing the EI (Exposure Index) allows you to tailor where the middle of your exposure range sits. It allows you to alter the balance between more highlight range with less shadow information, or less highlight range with more shadow information, in the captured image. On a bright high contrast exterior you might want more highlight range, while for a dark moody night scene you might want more shadow range. Exposing brighter puts more light on to the sensor. More light on the sensor will extend the shadow range but decrease the highlight range. Exposing darker will decrease the shadow range but also allow brighter highlights to be captured without clipping.

IMPORTANT:   EI is NOT the same as ISO.

ISO is a measure of a film stock’s or camera sensor’s SENSITIVITY to light. It is the measure of how strongly the camera’s sensor responds to light.

Exposure Index is a camera setting that determines how bright the image will become for a given EXPOSURE. While it is related to sensitivity it is NOT the same thing and should always be kept distinct from sensitivity.

ISO= Sensitivity and a measure of the sensors response to light.

EI = Exposure Index – how bright the image seen in the viewfinder will be.

The important bit to understand is that EI is an exposure rating, not a sensitivity rating. The EI is the number you would put into a light meter for the optimum EXPOSURE for the type of scene you are shooting. The EI that you use depends on your desired shadow and highlight ranges as well as how much noise you feel is acceptable. 

What Actually happens when I change the EI value on a Sony camera?

On a Sony camera the only things that change when you alter the EI value are the brightness of any Look Up Tables (LUTs) being used, the EI value indicated in the viewfinder and the EI value recorded in the metadata that is attached to your clip.

Importantly, to actually see a change in the viewfinder image or the image on an external monitor you must be viewing your images via a LUT, as the EI changes the LUT brightness; changing the EI does not on its own change the way the S-Log3 is recorded or the sensitivity of the camera. If you are not viewing via a LUT you won’t see any changes when you change the EI values, so for CineEI to work you must be monitoring via a LUT.

Raising and Lowering the EI value:

When you raise the EI value the LUT will become brighter. When you lower the EI value the LUT will become darker.

If we were to take a camera with a base ISO of 800 then a nominal “normal” exposure would result from using 800 EI. When the base ISO value and the EI value are matched, then we can expect to get a “normal” exposure.

The S-Log3 levels that you will get when exposed correctly and the EI value matches the camera’s base ISO value. Note you will have 6 stops of range above middle grey and 8+ stops below middle grey.

 

Let’s now look at what happens when we use EI values higher or lower than the base ISO value.

(Note: One extra stop of exposure is the equivalent of doubling the ISO or EI. One less stop of exposure is the equivalent of halving the ISO or EI. So if you double 800 EI to get 1600 EI, this would be considered 1 stop higher. If you double 1600 EI to get 3200 EI, this is one further stop higher. So 800 EI to 3200 EI is 2 stops higher.)
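If it helps, that note can be written as a couple of lines of Python. The 800 base ISO is just an example value and the function name is invented for illustration:

```python
import math

# Every doubling of the EI relative to the base ISO is one stop,
# every halving is one stop the other way.

def stops_from_base(ei: float, base_iso: float = 800) -> float:
    """How many stops the chosen EI sits above (+) or below (-) the base ISO."""
    return math.log2(ei / base_iso)

print(stops_from_base(1600))  #  1.0 -> LUT 1 stop brighter, so you expose 1 stop darker
print(stops_from_base(3200))  #  2.0 -> LUT 2 stops brighter, so you expose 2 stops darker
print(stops_from_base(400))   # -1.0 -> LUT 1 stop darker, so you expose 1 stop brighter
```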

If you were to use a higher EI, let’s say 3200 EI, two stops higher than the base 800 EI, then the LUT will become 2 stops brighter.

If you were using a light meter you would enter 3200 into the light meter.

When looking at this now 2 stops brighter viewfinder image you would be inclined to close the aperture by 2 stops (or add ND/shorter shutter) to bring the brightness of the viewfinder image back to normal. The light meter would also recommend an exposure that is 2 stops darker.



Because the recording sensitivity or base ISO remains the same no matter what the EI, the fact that you have reduced your exposure by 2 stops means that the sensor is now receiving 2 stops less light, but the recording sensitivity has not changed.

Shooting like this, using a higher EI than the base ISO will result in less light hitting the sensor which will result in images with less shadow range and more noise but at the same time a greater highlight range.

The S-Log3 levels that you will get when the EI value is 2 stops higher than the camera’s base ISO value and you have exposed 2 stops darker to compensate for the brighter viewfinder image. Note how you now have 8 stops above middle grey and 6+ stops below. The final image will also have more noise.

 

A very important thing to consider here is that this is not what you normally want when shooting darker scenes; you normally want less noise and more shadow range. So with CineEI, you would normally try to shoot a darker, moody scene with an EI lower than the base ISO.

In this chart we can see how at 800 EI there are 6 stops of over exposure range and 9 stops of under. At 1600 EI there will be 7 stops of over range and 8 stops of under, and the image will also be twice as noisy. At 400 EI there are 5 stops over and 10 stops under, and the noise will be halved.
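As a quick sketch, the pattern in that chart can be reproduced with a few lines of Python, taking the 800 EI figures quoted above (6 stops over, 9 stops under) as the baseline; the numbers are the ones from the chart, not new measurements:

```python
import math

# Reproduce the highlight/shadow trade-off from the chart, using the quoted
# 800 EI split of 6 stops over and 9 stops under middle grey as the baseline.

base_ei, base_over, base_under = 800, 6, 9

for ei in (400, 800, 1600, 3200):
    shift = round(math.log2(ei / base_ei))   # +1 per doubling, -1 per halving
    over = base_over + shift                 # highlight range grows at higher EI
    under = base_under - shift               # shadow range shrinks at higher EI
    print(f"{ei:>4} EI: {over} stops over / {under} stops under middle grey")
```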



This goes completely against most people’s conventional exposure thinking.

For a darker scene or a scene with large shadow areas you actually want to use a low EI value. So if the base ISO is 800 then you might want to consider using 400 EI. 400 EI will make the LUT 1 stop darker. Enter 400 EI into a light meter and, compared to 800, the light meter will recommend an exposure that is 1 stop brighter. When seeing an image in the viewfinder that is 1 stop darker you will be inclined to open the aperture or reduce the ND to bring the brightness back to a normal level.

This now brighter exposure means you are putting more light on to the sensor; more light on the sensor means less noise in the final image and an increased shadow range. But that comes at the cost of some of the highlight range.

The S-Log3 levels that you will get when the EI value is 2 stops lower than the camera’s base ISO value and you have exposed more brightly to compensate for the darker viewfinder image. Note how you now have 4 stops above middle grey and 10+ stops below. The final image will have less noise.

 

Need to think differently.

The CineEI mode and log are not the same as conventional “what you see is what you get” shooting methods. CineEI requires a completely different approach if you really want to achieve the best possible results.

If you find the images are too dark when the EI value matches the recording base ISO, then you need to open the aperture, add light or use a faster lens. Raising the EI to compensate for a dark scene is likely to create more problems than it fixes. It might brighten the image in the viewfinder, making you think all is OK, but on your small viewfinder screen you won’t see the extra noise and grain that will be in the final images once you have raised your levels in post production. Using a higher EI and not paying attention could also result in you stopping down a touch to protect a blown out highlight or to tweak the exposure, when this is probably the last thing you actually want to do.

I’ve lost count of the number of times I have seen people cranking up the EI to a high value thinking this is how you should shoot a darker scene only to discover they can’t then make it look good in post production. The CineEI mode on these cameras is deliberately kept separate from the conventional “custom” or “SDR” mode to help people understand that this is something different. And it really does need to be treated differently and you really do need to re-learn how you think about exposure. 

For dark scenes you almost never want to use an EI value higher than the base ISO value, and often it is better to use a lower EI value as this will help ensure you expose any shadow areas sufficiently brightly.

The CineEI mode in some regards emulates how you would shoot with a film camera. You have a single film stock with a fixed sensitivity (the base ISO). Then you have the option to expose that stock brighter (using a lower EI) for less grain, more shadow detail and less highlight range, or to expose darker (using a higher EI) for more grain, less shadow detail and more highlight range. Just as you would do with a film camera.

Sony’s CineEI mode is not significantly different from the way you shoot log or raw with an Arri camera. Nor is it significantly different to how you shoot raw on a Red camera – the camera shoots at a fixed sensitivity and any changes to the ISO value you make in camera are only actually changing the monitoring brightness and the clips metadata.

Exposing more brightly on purpose to achieve a better end result is not “over exposure”. It is simply brighter exposure. Over exposure is generally considered to be a mistake or undesirable, but exposing more brightly on purpose is not a mistake.

FX9 Guide: Videos And PDF Updated For Version 3

The FX9 Guide series of videos and the downloadable and searchable PDF guide that I created for Sony’s PXW-FX9 camera have been updated to cover the new features in the version 3 firmware. 

There are 6 new videos including a short film called “I-Spy” that makes use of almost all of the new features.

The full set of FX9 guide videos can be found here on the Sony website: 

https://pro.sony/en_GB/filmmaking/filmmaking-tips/pxw-fx9-tutorial-videos-introduction

If you are unable to access the videos via the link above they can also be found on YouTube on the Sony Camera Channel:

https://www.youtube.com/c/ImagingbySony

Below is the “I-Spy” short film that I made to generate the sample material needed for the tutorial videos. Every shot in I-Spy uses at least 1 of the new features included in the version 3 update.

 

Real Time Northern Lights, Shot with The FX3

I’ve just got back from my latest Northern Lights expedition to Norway and thought I would share some real time footage of the Northern Lights shot with the Sony FX3 and a Sony 24mm f1.4 GM lens. The 24mm f1.4 is a lovely lens and brilliant for shooting star fields etc as it is pin sharp right into the corners. It also has near zero coma distortion so stars remain nice and round.
It was -27°C when this was shot and my tripod’s fluid head was starting to get very stiff, so that’s my excuse for the bumps on a couple of the camera moves.
What you see in this video is pretty much exactly as it appeared to my own eyes. This is not time-lapse and the colours while slightly boosted are as they really are. 
I shot S-Log3 at a range of ISOs, starting at 12,800 ISO but going all the way up to 128,000 ISO. I perhaps didn’t need to go that high as the Aurora was pretty bright, but when an Aurora like this may only last a few minutes you don’t want to stop and change your settings unless you have to, for fear of missing something. The low light performance of the FX3 really is quite phenomenal.

Anyway, I hope you enjoy the video.

 

Handy Tips For Using The Sony Variable ND Filter Values.

Sony rate the ND filters in most of their cameras using a fractional value such as 1/4, 1/16, 1/64 etc.

These values represent the amount of light that can pass through the filter, so a 1/4 ND lets 1/4 of the light through. 1/4 is the equivalent of 2 stops (1 stop = 1/2, 2 stops = 1/4, 3 stops = 1/8, 4 stops = 1/16, 5 stops = 1/32, 6 stops = 1/64, 7 stops = 1/128).


These fractional values are actually quite easy to work with in conjunction with the camera’s ISO rating.

If you want to quickly figure out what ISO value to put into a light meter to discover the aperture/shutter needed when using the camera with the built in ND filters, simply take the camera’s ISO rating and multiply it by the ND value. So 800 ISO with 1/4 ND becomes 800 x 1/4 = 200 (or you can do the maths as 800 ÷ 4). Put 200 in the light meter and it will tell you what aperture to use for your chosen shutter speed.

If you want to figure out how much ND to use to get an equivalent overall ISO rating (camera ISO and ND combined), you take the ISO of the camera and divide it by the ISO you want, and this gives you a value “x” which is the fraction in 1/x. So if you want 3200 ISO then take the base of 12800 and divide by 3200, which gives 4, so you want 1/4 ND at 12800.
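Here is a small Python sketch of both calculations; the ISO figures match the examples above and the function names are just invented for illustration:

```python
# The camera's ISO rating and the fractional ND values combine by simple
# multiplication and division.

def metered_iso(camera_iso: float, nd_fraction: float) -> float:
    """ISO to enter into a light meter for a given built-in ND setting."""
    return camera_iso * nd_fraction            # e.g. 800 * 1/4 = 200

def nd_for_target_iso(camera_iso: float, target_iso: float) -> str:
    """Fractional ND needed to give an equivalent overall ISO rating."""
    x = camera_iso / target_iso
    return f"1/{x:g}"

print(metered_iso(800, 1/4))           # 200.0
print(nd_for_target_iso(12800, 3200))  # 1/4
```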

Premiere Pro 2022 and Issues With S-Log3 – It’s Not A Bug, It’s A Feature!

This keeps cropping up more and more as users of Adobe Premiere Pro upgrade to the 2022 version.

What people are finding is that when they place S-Log3 (or almost any other log format such as Panasonic V-Log or Canon C-Log) into a project, instead of looking flat and washed out as it would have done in previous versions of Premiere, the log footage looks more like Rec-709, with normal looking contrast and normal looking color. Then when they apply their favorite LUT to the S-Log3 it looks completely wrong, or at least very different to the way it looked in previous versions.

So, what’s going on?

This isn’t a bug, this is a deliberate change. Rec-709 is no longer the only colourspace that people need to work in, and more and more new computers and monitors support other colourspaces such as P3 or Rec2020. The MacBook Pro I am writing this on has a wonderful HDR screen that supports Rec2020 or DCI P3 and it looks wonderful when working with HDR content!
 
Color Management and Colorspace Transforms.
 
Premiere 2022 isn’t adding a LUT to the log footage, it is doing a colorspace transform so that the footage you shot in one colorspace (S-Log3/SGamut3.cine/V-Log/C-Log/Log-C etc) gets displayed correctly in the colorspace you are working in.

S-Log3 is NOT flat.
 
A common misconception is that S-Log3 is flat or washed out. This is not true. S-Log3 has normal contrast and normal colour.
 
The only reason it appears flat is because more often than not people view it in a mismatched color space. The mismatch you get when you display material shot in the S-Log3/SGamut3 colorspace using the Rec-709 colorspace causes it to be displayed incorrectly, and the result is images that appear flat and lack contrast and colour, when in fact your S-Log3 footage isn’t flat, it has lots of contrast and lots of colour. You are just viewing it incorrectly, in the incorrect colorspace.
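If you want to see why the mismatch looks the way it does, here is a small Python sketch using Sony’s published 10 bit S-Log3 encoding formula. The reference levels chosen are just examples; the point is where they land as code values:

```python
import math

# Sony's published S-Log3 encoding curve (scene reflectance in, 10 bit code
# value out), used here only to show where familiar reference levels end up.

def slog3_code_value(reflectance: float) -> float:
    if reflectance >= 0.01125000:
        v = (420.0 + math.log10((reflectance + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    else:
        v = (reflectance * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0
    return v * 1023.0

levels = [("black (0%)", 0.0), ("middle grey (18%)", 0.18), ("white card (90%)", 0.90)]
for label, reflectance in levels:
    print(f"{label:>18}: code value {slog3_code_value(reflectance):.0f} / 1023")

# black ~95 (not 0), middle grey ~420, and a 90% white card only ~598 of 1023.
# Shown straight on a Rec-709 display the blacks look milky and a white card
# looks grey, i.e. "flat", even though all the contrast is still in the data.
```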

So, what is Premiere 2022 doing to my log footage?
 
What is now happening in Premiere is that Premiere 2022 reads the clip’s metadata to determine its native colorspace and it then adds a colorspace transform to convert it to the display colourspace determined by your project settings.
 
The footage is still S-Log3, but now you are seeing it as it is actually supposed to look, albeit within the limitations of the display gamma. S-Log3 isn’t flat, it’s just that previously you were viewing it incorrectly. Now, with the correct colorspace transform being added to match the project settings and the type of monitor you are using, the S-Log3 is being displayed correctly, having been transformed from S-Log3/SGamut3 to Rec-709 or whatever your project is set to.
 
If your project is an HDR project, perhaps HDR10 to be compatible with most HDR TVs or for a Netflix commission, then the S-Log3 would be transformed to HDR10 and would be seen as HDR on an HDR screen without any grading being necessary. If you then changed your project settings to DCI-P3 then everything in your project would be transformed to P3 and would look correct without grading on a P3 screen. Then change to Rec709 and again it all looks correct without grading. The S-Log3 doesn’t look flat, because in fact it isn’t.

Color Managed Workflows will be the new “normal”.
 
Colour managed workflows such as this are now normal in most high end edit and grading applications and it is something we need to get used to because Rec709 is no longer the only colorspace that people need to deliver in. It won’t be long before delivery in HDR (which may mean one of several different gamma and gamut combinations) becomes normal. This isn’t a bug, this is Premiere catching up and getting ready for a future that won’t be stuck in SDR Rec-709. 

A color managed workflow means that you no longer need to use LUTs to convert your Log footage to Rec-709; you simply grade your clips within the colorspace you will be delivering in. A big benefit of this comes when working with multiple sources. For example, S-Log3 and Rec-709 material in the same project will now look very similar. If you mix log footage from different cameras they will all look quite similar and you won’t need separate LUTs for each type of footage or for each final output colorspace.

The workaround if you don’t want to change.
 
If you don’t want to adapt to this new more flexible way of working then you can force Premiere to ignore the clip’s metadata by right clicking on your clips, going to “Modify” and “Interpret Footage”, then selecting “Colorspace Override” and setting this to Rec-709. When you use the interpret footage function on an S-Log3 clip to set the colorspace to Rec-709, what you are doing is forcing Premiere to ignore the clip’s metadata and to treat the S-Log3 as though it is a standard dynamic range Rec-709 clip. In a Rec-709 project this re-introduces the colorspace mismatch that most are used to and results in the S-Log3 appearing flat and washed out. You can then apply your favourite LUTs to the S-Log3, and the LUT then transforms the S-Log3 to the project’s Rec-709 colorspace and you are back to where you were previously.
 
This is fine, but you do need to consider that it is likely that at some point you will need to learn how to work across multiple colorspaces, and using LUTs as colorspace transforms is very inefficient as you will need separate LUTs and separate grades for every colorspace and every different type of source material that you wish to work in. Colour managed workflows such as this new one in Premiere, or ACES etc, are the way forwards as LUTs are no longer needed for colorspace transforms; the edit and grading software looks after this for you. Arri Log-C will look like S-Log3 which will look like V-Log, and then the same grade can be applied no matter what camera or colorspace was used. It will greatly simplify workflows once you understand what is happening under the hood, and allows you to output both SDR and HDR versions without having to completely re-grade everything.

Unfortunately I don’t think the way Adobe are implementing their version of a colour managed workflow is very clear. There are too many automatic assumptions about what you want to do and how you want to handle your footage. On top of this there are insufficient controls for the user to force everything into a known set of settings. Instead different things are in different places and it’s not always obvious exactly what is going on under the hood. The color management tools are all small addons here and there and there is no single place where you can go for an overview of the start to finish pipeline and settings as there is in DaVinci Resolve for example. This makes it quite confusing at times and it’s easy to make mistakes or get an unexpected result.  There is more information about what Premiere 2022 is doing here: https://community.adobe.com/t5/premiere-pro-discussions/faq-premiere-pro-2022-color-management-for-log-raw-media/

Should You Use In Camera Noise Reduction Or Not?

This is another common question on many user groups. It comes up time and time again. But really there is no one clear cut answer. In a perfect world we would never need to add any noise reduction, but we don’t live and shoot in a perfect world. Often a camera might be a little noisy or you may be shooting with a lot less light than you would really like, so in camera NR might need to be considered.

You need to consider carefully whether you should use in camera NR or not. There will be some cases where you want in camera NR and other times when you don’t.

Post Production NR.
An important consideration is that adding post production NR on top of in-camera NR is never the best route to go down. NR on top of NR will often produce ugly blocky artefacts. If you ever want to add NR in post production it is almost always better not to also add in camera NR. Post production NR has many advantages as you can more precisely control the type and amount you add depending on what the shot needs. When using proper grading software such as DaVinci Resolve you can use power windows or masks to only add NR to the parts of the image that need it.

Before someone else points it out, I will add here that it is almost always impossible to turn off all in camera NR. There will almost certainly be some NR applied at the sensor that you cannot turn off. In addition, most recording codecs will apply some noise reduction to avoid wasting data recording the noise; again this can’t be turned off. Generally, higher bit rate, less compressed codecs apply less NR. What I am talking about here is the additional NR that can be set to differing levels within the camera’s settings, on top of the NR that occurs at the sensor or in the codec.

Almost every NR process, as well as reducing the visibility of noise, will introduce other image artefacts. Most NR processes work by taking an average value for groups of pixels, or an average value for the same pixel over a number of frames. This averaging tends to not only reduce the noise but also reduce fine details and textures. Faces and skin tones may appear smoothed and unnatural if excessively noise reduced. Smooth surfaces such as walls or the sky may get broken up into subtle bands or steps. Sometimes these artefacts won’t be seen in the camera’s viewfinder or on a small screen and only become apparent on a bigger TV or monitor. Often the banding artefacts seen on walls etc are a result of excessive NR rather than a poor codec (although the two are often related, as a weak codec may have to add a lot of NR to a noisy shot to keep the bit rate down).
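As a purely illustrative sketch of that averaging trade-off (this is not how any camera actually implements its NR), a simple box average over neighbouring samples reduces random noise but also smears a hard edge:

```python
import numpy as np

# A clean step edge with random noise added, then a simple 5 sample box average.
rng = np.random.default_rng(0)
edge = np.concatenate([np.zeros(50), np.ones(50)])   # a crisp 0 -> 1 transition
noisy = edge + rng.normal(0.0, 0.1, edge.size)       # add random sensor-style noise

kernel = np.ones(5) / 5
smoothed = np.convolve(noisy, kernel, mode="same")

flat = slice(5, 45)                                  # a flat area away from the edge
print(f"flat area noise std before: {(noisy - edge)[flat].std():.3f}")     # about 0.10
print(f"flat area noise std after:  {(smoothed - edge)[flat].std():.3f}")  # about 0.05
print("edge after smoothing:", np.round(smoothed[47:53], 2))

# The noise drops, but the crisp step is now a ramp spread over several
# samples, which is the same mechanism that softens fine texture and skin detail.
```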

If you are shooting log then any minor artefacts in the log footage from in camera noise reduction may be magnified when you start grading and boosting the contrast. So, generally speaking when shooting log it is always best to avoid adding in camera NR. The easiest way to avoid noise when shooting with log is to expose a bit brighter so that in the grade you are never adding gain. Take gain away in post production to compensate for a brighter exposure and you take away much of the noise – without giving up those fine textures and details that make skin tones look great. If shooting log, really the only reason an image will be noisy is because it hasn’t been exposed bright enough. Even scenes that are meant to look dark need to be exposed well. Scenes with large dark areas need good contrast between at least some brighter parts so that the dark areas appear to be very dark compared to the bright highlights. Without any highlights it’s always tempting to bring up the shadows to give some point of reference. Add a highlight such as a light fixture or a lit face or object and there is no need to then bring up the shadows, they can remain dark, contrast is king when it comes to dark and night scenes.

If, however, you are shooting for “direct to air” or content that won’t be graded and needs to look as good as possible directly from the camera, then a small amount of in camera NR can be beneficial. But you should test the camera’s different NR levels to see how much difference each level makes, while also observing what happens to subtle textures and fine details. There is no free lunch here. The more NR you use, the more fine details and textures you will lose, and generally the difference in the amount of noise that is removed between the mid and high settings is quite small. Personally I tend to avoid using high and stick to low or medium levels. As always, good exposure is the best way to avoid noise. Keep your gain and ISO levels low, add light if necessary or use a faster lens; this is much more effective than cranking up the NR.

ILME-FX6 Version 2 Firmware Update

Coming soon, very soon is the version 2 firmware update for the FX6. Like the recently released version 3 update for the FX9 this update is a significant upgrade for the FX6 adding lots of new and very useful features.

AF Touch Tracking:

The big feature that almost every FX6 user has been wanting since the day it was launched is touch tracking AF. This feature allows you to touch the LCD screen where you want the camera to focus. The touch tracking AF works in conjunction with the camera’s face detection AF to provide what Sony are calling “Advanced AI based AF”. If you touch on a face, for example, that face is then prioritised and tracked by the AF. If the person turns away from the camera so the face can no longer be seen, then the AF will track the side or back of the person’s head. If they leave the shot and then come back into the shot, provided the AF can see their facial features the AF will pick up and focus on that face again. When touching on an object that isn’t a face, the camera will focus on the touched object as it moves around within the frame. Touch AF makes it very easy to perform perfect focus pulls between different objects or characters within a shot or scene. It’s a very clever system and a welcome addition to the FX6.

Breathing Compensation:

Another new feature that will be of assistance when using the AF is the addition of the Breathing Compensation feature first seen in the Sony A7IV. This feature works by electronically adjusting the size of the recorded frame to minimise any lens breathing while changing the focus distance. This helps to mask and hide focus changes made during a shot. It is a nice feature, but I will say that sometimes when you pull focus, that slight change in the image size can be nice as it reinforces the focus change and gives the viewer a visual clue that something about the shot has changed. If the only thing that changes in a shot is the point of focus, sometimes it can look odd, or perhaps electronic, rather than the more natural focus changes we are used to seeing. Of course the feature can be turned on or off, so you are free to decide whether to use it or not depending on what you are shooting.

The breathing compensation only works with certain Sony lenses, mostly GM lenses and a few G series lenses. The lenses include: SEL14F18GM, SEL20F18G, SEL24F14GM, SEL35F14GM, SEL50F12GM, SEL85F14GM,  SEL135F18GM, SEL1224GM, SEL1224G, SEL1635GM, SEL2470GM, SEL24105G, SEL28135G, SEL70200GM (NOT with a teleconverter), SEL70200GM2, SEL100F28GM (NOT when the macro switching ring is set to “0.57m–1.0m.”).

Bokeh Control:

While I’m on the optics, another change is what Sony are calling “Bokeh Control”. You have already been able to do this on most of Sony’s cameras with the variable ND filter by turning on the auto gain/ISO function while using the auto ND filter. Set this way, when you change the aperture, the ND filter and auto gain/ISO will maintain a constant image brightness, allowing you to use the aperture as a bokeh and DoF control. This is now all rolled into a dedicated new feature to make it easier to achieve the same result, so it’s not really new, but it is now easier to do. The brightness of the image is held constant as you change the aperture, so the aperture becomes a bokeh and DoF control. This works best with the Sony lenses that have a stepless aperture ring. A word of warning however: you will need to keep a close eye on what the ISO/Gain is doing to avoid an excessively noisy image if you don’t have sufficient light for the aperture you are using, as the camera will add lots of gain and the images will become noisy very quickly if you are not careful. In practice, while I do like the concept behind this, it is only really useful when you have lots of light, as you want the ND filter to be doing the work, not the auto gain/ISO, so this tends to limit you to exteriors or to the FX6’s high base ISO, which is already a bit noisier than the low base.

Cache record in both normal modes and S&Q.
This is a great new feature. You now have a recording cache that can be used in both normal modes and S&Q motion. The recording cache allows you to capture things that have happened prior to the moment you press the camera’s record button. Of course the camera has to actually be pointing in the right direction, but this allows you to capture unexpected events such as lightning in a thunderstorm. I often find cache recording useful for interviews in case the interviewee suddenly starts talking when you are not expecting it. For many applications this will be a very useful function. Depending on the resolution and frame rate you get a cache period of up to 31 seconds, selectable via short/medium/long and max.

4 Audio Meters:

This is something almost everyone asked for from day one. You can now monitor channels 1 & 2 as well as 3 & 4  on the LCD when shooting. Hooray!

Raw out via HDMI. 

As well as outputting raw via SDI you can now output the raw signal via HDMI. This will be very useful for those that already use the HDMI raw out from their FX3/A7S3 etc as now you won’t need to have the extra SDI adapter for the Ninja V. You will need to update the firmware for your Ninja V or Ninja V+ and the update from Atomos is already available for download.

SR-Live HDR workflow.

Like the FX9, the FX6 gains the ability to change the viewfinder monitoring mode when shooting with HLG. Using Viewfinder Display Gamma Assist and monitoring in SDR, the image in the viewfinder can have a dB offset applied. This offset allows you to expose the HLG such that it is fully optimised for HDR viewing while seeing a correctly exposed SDR image. The details of the offset are stored in the camera’s metadata, and then in post production, as well as your already optimised HDR stream, you apply the same dB offset to the HLG to produce a stream that will look much better in SDR than it would do without any offset. This way it becomes much easier to deliver great looking and better optimised content for both HDR and SDR audiences.

Other new features:

The A and B recording slots can be individually assigned to the record buttons on the camera handle and body record button so that each button controls the recording of one slot. This allows you to record some shots to one slot and other shots to the other slot depending on which record button you press, or by pressing both buttons you can record to both slots.

The Multi-Function dial settings can be assigned to the hand grip dial so that the hand grip dial behaves in the same way as the multi-function dial.

FTP transfer speeds are improved.

More functions can be controlled via the touch screen.

Increased control functionality when used with Content Browser Mobile version 3.6 (you won’t be able to use earlier versions of CBM with the version 2 firmware). Content Browser Mobile version 3.6 is already available for download. 

This new version 2 firmware update is not out just yet, but it will be available by the end of January.

 

 

Shooting Raw With The FS5 And Ninja V+

This came up in one of the user groups today and I thought I would repeat the information here.

One issue when using the Atomos Ninja V+ rather than an older Atomos Shogun or Inferno is that the Ninja V+ doesn’t have an internal S-Log2 option. This seems to cause some users a bit of confusion as most are aware that for the best results the FS5 MUST be set to PP7 and S-Log2, as this is the only setting that fully optimises the sensor setup.
 
When you shoot raw you are recording linear raw; the recordings don’t actually have any gamma as such and they are not S-Log2 or S-Log3, they are raw. The S-Log2 setting in the FS5 just ensures the camera is optimised correctly. If you use the S-Log3 settings, what you record is exactly the same, linear raw, just with more noise because the camera isn’t as well optimised.
 
Any monitor or post production S-Log2 or S-Log3 settings are simply selecting the intermediate gamma that the raw will be converted to for convenience. So when the Ninja V+ states S-Log3, this is simply what the Ninja converts the raw to before applying any LUTs. It doesn’t matter that this is not S-Log2 because you didn’t record S-Log2, you recorded linear raw. This is simply what the Ninja V+ will use internally when processing the raw.
 
You have to convert the raw to some sort of gamma so that you can add LUTs to it, and as S-Log3 LUTs are commonly available, S-Log3 is a reasonable choice. With earlier recorders you had the option to choose S-Log2 so that when viewing the native S-Log2 output from the camera, what you saw on the monitor’s screen looked similar to what you saw on the FS5’s LCD screen when the FS5 was set to S-Log2. But S-Log2 is no longer included in the latest monitors, so now you only have the option to use S-Log3. From an image quality point of view this monitor setting makes no difference and has no effect on what is recorded (the FS5 should still be set to PP7 and S-Log2).
 
In post production in the past, for consistency it would have been normal to decode the raw to S-Log2 so that everything matched throughout your production pipeline from camera to post. But again, it doesn’t really matter if you now decode the raw to S-Log3 instead if you wish. There will be no significant quality difference and there is a wider range of S-Log3 LUTs to choose from.
 
If the footage is too noisy then it is underexposed; that is the only reason why the footage will be excessively noisy. It is true that raw bypasses the majority of the camera’s internal noise reduction processes, but this only makes a small difference to the overall noise levels.

Even with the latest Ninja V+ what is recorded when outputting raw from the FS5 is 12 bit linear raw.
 
12 bit linear raw is a somewhat restricted format. 12 bits is not a lot of code values with which to record a high dynamic range linear signal. This is why most cameras use log recording for wide dynamic ranges; log is much more efficient and distributes the available recording data in a way very sympathetic to the way human vision works.
 
In practice what this means is that the 12 bit linear raw has LOTS of data and image information in the upper mid range and the very brightest highlights, but relatively little picture information in the lower mid range and shadows. So if it is even the slightest bit underexposed the image will degrade very quickly, as for each stop you go down in brightness you halve the amount of image information you have.
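As an illustrative sketch of why 12 bit linear is so top heavy, here is the code value budget per stop (the stop numbering is arbitrary, it simply counts down from the brightest recordable stop):

```python
# Each stop down from the brightest recordable stop has only half the code
# values of the stop above it in a linear encoding.

total_code_values = 2 ** 12   # 4096 values in a 12 bit recording

remaining = total_code_values
for stop in range(1, 9):      # look at the top 8 stops
    values_this_stop = remaining // 2
    remaining -= values_this_stop
    print(f"Stop {stop} below clipping: {values_this_stop} code values")
# Stop 1: 2048, Stop 2: 1024, Stop 3: 512 ... Stop 8: 16
```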
 
In an underexposed image the noise will be very coarse in appearance and the image will be difficult to grade. You really do need to expose the raw nice and bright and because of the way the data is distributed, the brighter you can get away with the better. Never be afraid of exposing linear raw “just a little bit brighter”. It is unlikely to severely bite you if you are over exposed but highly likely to bite you if it is even a fraction under.
 
The 12 bit linear raw from the FS5 is not going to be good in low light or when shooting scenes with large shadow areas unless you can expose brightly, so that you are bringing your levels down in post. If you have to bring any levels up in post the image will not be good.
 
Raw is not a magic bullet that makes everything look great. Just as with S-Log it must be exposed carefully and 12 bit linear raw is particularly unforgiving – but when exposed well it is much better than the internal 8 bit log recordings of the FS5 and can be a fantastic format to work with, especially given the low cost of an FS5.
 
I recommend going back to basics and using a white card to measure the exposure. If monitoring the raw via S-Log3 the white card needs to be exposed around 70%. If using a light meter with the FS5 set the light meter to 640 ISO.
 
If you do want to use a LUT on the Ninja to judge your exposure, use a LUT with a -1.5 stop offset. The darker LUT will encourage you to expose the raw brighter and you will find the footage much easier to grade. But it should also be considered that it is quite normal to add a small amount of selective noise reduction in post production when shooting raw.