This is another question that comes up time and time again on user groups: should you use in camera noise reduction (NR)? There is no clear-cut answer. In a perfect world we would never need to add any noise reduction, but we don’t live and shoot in a perfect world. Often a camera might be a little noisy, or you may be shooting with a lot less light than you would really like, so in camera NR might need to be considered.
You need to consider carefully whether you should use in camera NR or not. There will be some cases where you want in camera NR and other times when you don’t.
Post Production NR. An important consideration is that adding post production NR on top of in-camera NR is never the best route to go down. NR on top of NR will often produce ugly blocky artefacts. If you ever want to add NR in post production it is almost always better not to also add in camera NR. Post production NR has many advantages as you can more precisely control the type and amount you add depending on what the shot needs. When using proper grading software such as DaVinci Resolve you can use power windows or masks to only add NR to the parts of the image that need it.
Before someone else points it out, I will add here that it is almost never possible to turn off all in camera NR. There will almost certainly be some NR applied at the sensor that you cannot turn off. In addition, most recording codecs will apply some noise reduction to avoid wasting data recording the noise, and again this can’t be turned off. Generally, higher bit rate, less compressed codecs apply less NR. What I am talking about here is the additional NR that can be set to differing levels within the camera’s settings, on top of the NR that occurs at the sensor or in the codec.
Almost every NR process, as well as reducing the visibility of noise, will introduce other image artefacts. Most NR processes work by taking an average value for groups of pixels, or an average value for the same pixel over a number of frames. This averaging tends to reduce not only the noise but also fine details and textures. Faces and skin tones may appear smoothed and unnatural if excessively noise reduced. Smooth surfaces such as walls or the sky may get broken up into subtle bands or steps. Sometimes these artefacts won’t be seen in the camera’s viewfinder or on a small screen and only become apparent on a bigger TV or monitor. Often the banding artefacts seen on walls etc are a result of excessive NR rather than a poor codec (although the two are often related, as a weak codec may have to apply a lot of NR to a noisy shot to keep the bit rate down).
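The averaging trade-off can be sketched in a few lines of Python. This is a toy one-dimensional mean filter, not any camera's actual NR algorithm, but it shows both sides of the bargain: the noise gets smaller, and a once-hard edge gets smeared.

```python
import random
import statistics

def spatial_nr(row, k=3):
    """Toy 1-D NR: replace each sample with the mean of its k-sample window."""
    pad = k // 2
    padded = [row[0]] * pad + row + [row[-1]] * pad
    return [sum(padded[i:i + k]) / k for i in range(len(row))]

random.seed(0)
flat = [0.5 + random.gauss(0, 0.02) for _ in range(10_000)]   # noisy grey wall
edge = [0.0] * 32 + [1.0] * 32                                # hard texture boundary

denoised = spatial_nr(flat)
softened = spatial_nr(edge)

# The noise is reduced...
print("noise std before/after:",
      round(statistics.pstdev(flat), 4), round(statistics.pstdev(denoised), 4))
# ...but the once-hard edge now has in-between values smeared across it:
print("intermediate edge samples:", sum(1 for v in softened if 0.01 < v < 0.99))
```

Run it and the standard deviation of the flat patch drops, while the edge, which had no intermediate values at all, now has a soft ramp across it. That ramp is exactly the loss of fine detail and texture described above.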
If you are shooting log then any minor artefacts in the log footage from in camera noise reduction may be magnified when you start grading and boosting the contrast. So, generally speaking, when shooting log it is best to avoid adding in camera NR. The easiest way to avoid noise when shooting log is to expose a bit brighter so that in the grade you are never adding gain. Take gain away in post production to compensate for a brighter exposure and you take away much of the noise – without giving up those fine textures and details that make skin tones look great. If shooting log, really the only reason an image will be noisy is because it hasn’t been exposed brightly enough. Even scenes that are meant to look dark need to be exposed well. Scenes with large dark areas need good contrast with at least some brighter parts, so that the dark areas appear very dark compared to the bright highlights. Without any highlights it’s always tempting to bring up the shadows to give some point of reference. Add a highlight such as a light fixture or a lit face or object and there is no need to bring up the shadows; they can remain dark. Contrast is king when it comes to dark and night scenes.
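The "expose brighter, pull the gain back in post" idea can be put into a toy numeric model. The values here are made up for illustration (real sensor noise has several components), but the principle holds: if part of the noise floor is fixed, a brighter exposure pulled back down in post lands at the same brightness with less noise.

```python
import random
import statistics

random.seed(1)
scene = 0.18            # mid grey, linear light (hypothetical values throughout)
read_noise = 0.01       # fixed noise floor that doesn't scale with exposure

def shoot(extra_stops):
    """Simulate one exposure: scene scaled by the offset, plus fixed noise."""
    signal = scene * (2 ** extra_stops)
    return [signal + random.gauss(0, read_noise) for _ in range(50_000)]

normal = shoot(0)                   # exposed at the metered level
bright = [v / 2 for v in shoot(1)]  # exposed +1 stop, pulled down 1 stop in post

# Both end up at the same brightness, but the brighter exposure is cleaner:
print("noise, normal exposure   :", round(statistics.pstdev(normal), 4))
print("noise, +1 stop pulled down:", round(statistics.pstdev(bright), 4))
```

Halving the brighter recording halves its noise along with its signal, which is why the corrected clip measures roughly half the noise of the one shot at the metered level.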
If, however, you are shooting for “direct to air” or content that won’t be graded and needs to look as good as possible straight from the camera, then a small amount of in camera NR can be beneficial. But you should test the camera’s different NR levels to see how much difference each makes, while also observing what happens to subtle textures and fine details. There is no free lunch here. The more NR you use, the more fine details and textures you will lose, and generally the difference in the amount of noise removed between the mid and high settings is quite small. Personally I tend to avoid using high and stick to the low or medium levels. As always, good exposure is the best way to avoid noise. Keep your gain and ISO levels low, add light if necessary or use a faster lens; this is much more effective than cranking up the NR.
UPDATE – Some issues with the original version of the LUT were found by some users, so I have created a revised version and the revised version is now linked below.
Arri-look LUTs are clearly very popular with a lot of Sony users, so I have created an Arri-look LUT for the FX3/FX6/FX9/Venice that can be used to mimic the look of an Arri camera. It is not designed to pretend to be a real Arri camera, but to provide an image with the look and feel of an Arri camera, tailored to the Sony sensors.
As usual the LUT is free to download, but if you do find it useful I do ask that you buy me a coffee or other drink as a thank you. All contributions are always most welcome. Additionally, do let me know what you like or don’t like about this LUT, so I can consider what LUTs may be good to create in the future.
There is a bug in some versions of DaVinci Resolve 17 that can cause frames in some XAVC files to be rendered in the wrong order. This results in renders where the video appears to stutter or the motion may jump backwards for a frame or two. This has now been fixed in version 17.3.2, so all users of XAVC and DaVinci Resolve are urged to upgrade to at least version 17.3.2.
Sadly this is not an uncommon problem. Suddenly, and for no apparent reason, the SDI output on your camera stops working. And this isn’t a new problem either; SDI ports have been failing ever since they were first introduced. The issue affects all types of SDI ports, but it is more likely with higher speed ports such as 6G or 12G, as they operate at higher frequencies and the components used are more easily damaged – it is harder to protect them without degrading the high frequency performance.
Probably the most common cause of an SDI port failure is the use of the now near ubiquitous D-Tap cable to power accessories connected to the camera. The D-Tap connector is sadly shockingly crudely designed. Not only is it possible to plug many of the cheaper ones in the wrong way around, but with a standard D-Tap plug there is no mechanism to ensure that the negative or “ground” connection of the D-Tap cable makes or breaks before the live connection. There is, however, a special but much more expensive D-Tap connector available that includes electronic protection against this very issue – see: https://lentequip.com/products/safetap
Imagine for a moment you are using a monitor that’s connected to your camera’s SDI port. You are powering the monitor via the D-Tap on the camera’s battery as you always do and everything is working just fine. Then the battery has to be changed. To change the battery you have to unplug the D-Tap cable, and as you pull the D-Tap out, the ground connection disconnects fractionally before the live connection. During that moment there is still positive power going to the monitor, but because the ground on the D-Tap is now disconnected, the only ground route back to the battery is via the SDI cable through the camera. For a fraction of a second the SDI cable becomes the power cable, and that power surge blows the SDI driver chip.
After you have completed the battery swap, you turn everything back on and at first all appears good, but now you can’t get the SDI output to work. There’s no smoke, no burning smells, no obvious damage as it all happened in a tiny fraction of a second. The only symptom is a dead SDI.
And it’s not only D-Tap cables that can cause problems. A lot of the cheap DC barrel connectors have a center positive terminal that can connect before the outer barrel makes a good connection. There are many connectors where the positive can make before the negative.
It can also happen when powering the camera and monitor (or other SDI connected devices such as a video transmitter) via separate mains adapters. The power outputs of most of the small, modern, generally plastic bodied switch mode power adapters and chargers are not connected to ground. They have a positive and negative terminal that “floats” above ground at some unknown voltage. Each power supply’s negative rail may be at a completely different voltage relative to ground. So again an SDI cable connected between two devices powered by different power supplies will act as the ground between them, and power may briefly flow down the SDI cable as the SDI cable’s ground pulls both power supplies’ negative rails to the same common voltage. Failures this way are less common, but they do still occur.
For these reasons you should always connect all your power supplies, power cables and especially D-Tap or other DC power cables first. Then, while everything remains switched off, connect the SDI cables. Only when everything is connected should you turn anything on. If unplugging or re-plugging a monitor (or anything else for that matter), turn everything off first. Do not connect or disconnect anything while any of the equipment is on. Although to be honest the greatest risk is when you connect or disconnect any power cables, such as when swapping a battery where you are using the D-Tap to power accessories. So if changing batteries, switch EVERYTHING off first, then disconnect your SDI cables before disconnecting the D-Tap or other power cables.
(NOTE: It’s been brought to my attention that Red recommend that after connecting the power, but before connecting any SDI cables you should turn on any monitors etc. If the monitor comes on OK, this is evidence that the power is correctly connected. There is certainly some merit to this. However this only indicates that there is some power to the monitor, it does not ensure that the ground connection is 100% OK or that the ground voltages at the camera and monitor are the same. By all means power the monitor up to check it has power, then I still recommend that you turn it off again before connecting the SDI).
Arri recommend shielded power cables because most shielded power cables use connectors such as Lemo or Hirose where the body of the connector is grounded to the cable shield. This helps ensure that when plugging the power cable in, the ground connection is made first and the power connection after. Then when unplugging, the power breaks first and the ground after. When using properly constructed shielded power cables with Lemo or Hirose connectors these issues are much less likely to occur (but not impossible).
Is this an SDI fault? No, not really. The fault lies with power cables that allow the power to make before the ground, or the ground to break before the power. Or with power supplies that have a poor or absent ground connection. Additionally you can put it down to user error. I know I’m guilty of rushing to change a battery and pulling a D-Tap connector without first disconnecting the SDI on many occasions, but so far I’ve mostly gotten away with it (I have blown an SDI on one of my Convergent Design Odysseys).
If you are working with an assistant or as part of a larger crew, do make sure that everyone on set knows not to plug or unplug power cables or SDI cables without checking that it’s OK to do so. How many of us have set up a camera, powered it up, got a picture in the viewfinder and then plugged an SDI cable between the camera and a monitor that doesn’t yet have a power connection – or is already on and plugged into some other power supply? Don’t do it! Plug and unplug in the right order: connect ALL power cables and power supplies first, check power is going to the camera, check power is going to the monitor, then turn it all off, and finally plug in the SDI.
Scroll down to where it says “Stunning Cinematic Colour” and there you will find a video called “Orlaith” that shows both LUTs applied to the same footage.
Orlaith is a Gaelic name, pronounced “Orla”. It is the name of a mythical golden princess. The short film was shot on a teeny-tiny budget in a single evening with an FX3 and an FX6 using S-Log3 and S-Gamut3.Cine. The LUTs were then applied directly to the footage with no further grading.
This is a very good question that came up in one of the F5/F55/FX9 Facebook groups that I follow. The answers are also mostly relevant to users of the FX6, FX3 and the A7SIII.
There were two parts to it: is the FX9’s raw output as good as the raw from the F5/F55, and do I really need raw at all?
In terms of image quality I don’t think there is any appreciable difference, going between the raw from an FX9 and the raw from an F5/F55 is a sideways step.
The F5/F55 with either Sony Raw or X-OCN offer great 16 bit linear raw in a Sony MXF package. The files are reasonably compact, especially if you are using the R7 and X-OCN. There are some compatibility issues, however: you can’t use Sony Raw/X-OCN in FCP-X, and the implementation in Premiere Pro is poor.
The 16 bit out from the FX9/XDCA-FX9 gets converted to 12 bit log raw by the Atomos recorders, currently the only recording options – but in reality you would be extremely hard pushed to really see any difference between 16 bit linear raw and 12 bit log raw from this level of camera.
Recording the 12 bit log raw as ProRes Raw means that you are tied to just FCP-X, Premiere Pro (a poor implementation again) and Scratch. The quality of the images that can be stored in the two raw formats is little different: 16 bit linear has more code values, but they are distributed very inefficiently, while 12 bit log has significantly fewer code values but distributes them far more efficiently. AXS media is very expensive, SSDs are cheap. AXS card readers are expensive, SSD adapters are cheap. So there are cost implications.
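The "more code values, distributed inefficiently" point can be illustrated with some simple arithmetic. The 14 stop range and the perfectly even log distribution below are assumptions chosen purely for illustration, not any specific format's real allocation, but they show the shape of the problem: linear encoding lavishes half of all its codes on the single brightest stop.

```python
# 16 bit linear: each stop down from clipping uses half of the remaining codes.
linear_codes = 2 ** 16
for stop in range(1, 5):
    used = linear_codes // 2 ** stop
    print(f"linear, stop {stop} below clip: {used} code values")

# 12 bit log: code values are spread (roughly) evenly across the stops.
# Assuming a hypothetical 14-stop range purely for illustration:
log_codes = 2 ** 12
per_stop = log_codes // 14
print(f"log, every stop: ~{per_stop} code values")
```

So by the fourth stop down, the "inefficient" 16 bit linear file is already spending fewer codes per stop than it did at the top, while the log file keeps giving the mid tones and shadows a steady share – which is why 12 bits of well-distributed log can hold its own against 16 bits of linear.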
Personally I feel the reduced noise levels from the FX9 makes footage from the FX9 more malleable than footage from the F5/F55 and if you are shooting in FF6K there is more detail in the recordings, even though they are downsampled to 4K raw. But the FF6K will have more rolling shutter compared to an F55/F5.
Working with Sony Raw/X-OCN in Resolve is delightfully easy, especially if you use ACES, and it’s a proper grading package. If you want to work with ProResRaw in Resolve you will need to use Apple Compressor or FCP-X to create a demosaiced log file, which even if you use ProRes 4444 or XQ is not the same as working from the original raw file. For me that’s the biggest let down. If I could take ProResRaw direct into Resolve I’d be very happy. But it is still perfectly possible to get great footage from ProResRaw by transcoding if you need to.
As to whether you need raw, only you can answer that for yourself. There are many factors to consider. What’s your workflow? How are you delivering the content? Will the small benefit from shooting raw actually be visible to your clients?
Are you capturing great content – in which case raw may give you a little more, or are you capturing less than ideal material – in which case raw isn’t going to be a get out of jail card. Raw of any flavour works best when it’s properly exposed and captured well.
I would suggest anyone trying to figure out whether they need raw start by grading the XAVC-I from the FX9 to see how far it can be pushed, then compare it to the raw. I think you may be surprised by how little difference there is. XAVC-I S-Log3 is highly gradable, and if you can’t get the look you want from the XAVC-I, raw isn’t going to be significantly different. It’s not that there is anything wrong with raw, not at all. But it does get rather oversold as a miracle format that will transform what you can do. It won’t perform those miracles, but if everything else has been done to the highest possible standards then raw does offer the very best that you can get from these cameras.
As a middle ground also consider non raw ProRes. Again the difference between that and XAVC-I is small, but it may be that whoever is doing the post production finds it easier to work with. And the best bit is there are no compatibility issues, it works everywhere.
But really my best recommendation is to test each workflow for yourself and draw your own conclusions. I think you will find the differences between each much smaller than you might assume. So then you will need to decide which works for you based on cost/effort/end result.
Sometimes best isn’t always best! Especially if you can get to where you need to be more easily as an easy workflow gives you more time to spend on making it look the way you want rather than fussing with conversions or poor grading software.
Can you tell which is genuine and which is fake? It would appear that a number of fake BP-U batteries are starting to show up on ebay and other less reputable places. The battery on the left won’t charge on a genuine Sony charger, this tells me it is not a real Sony battery.
If you look at the labels on the batteries, the quality of the printing on the fake battery on the left is not as fine as on the genuine battery; in particular the ® symbol, as well as the box around the level indicator LEDs, is not as crisply and finely printed.
The sellers are clever. These are not so cheap as to raise suspicion, they just seem very competitively priced. These batteries might be a little bit cheaper, but how safe are they and how long will they last? I have to say this would have fooled me and I have a lot of sympathy for others that have been tricked into buying these. But if the manufacturer can’t sell these by legitimate means under their own brand name I really do have to question their quality and safety.
This is a question that comes up a lot. Especially from those migrating to a camera with a CineEI mode from a camera without one. It perhaps isn’t obvious why you would want to use a shooting mode that has no way of adding gain to the recordings.
If you are using the CineEI mode to shoot S-Log3 at the base ISO, with no offsets or anything else, then there is very little difference between what you record in Custom mode at the base ISO and in CineEI at the base EI.
But we have to think about what the CineEI mode is all about: image quality. You would normally choose to shoot S-Log3 when you want the highest possible quality image, and CineEI is all about quality.
The CineEI mode allows you to view your footage via a LUT so that you can get an appreciation of how the footage will look after grading. Also, when monitoring and exposing via the LUT, because the dynamic range of the LUT is narrower your exposure will be more accurate and consistent – bad exposure looks more obviously bad. This makes grading easier. One of the keys to easy grading is consistent footage; footage where the exposure shifts or the colours change (don’t use ATW with log!!) can be very hard to grade.
Then once you are comfortable exposing via a LUT you can start to think about using EI offsets to make the LUT brighter or darker. When the LUT is darker you open the aperture or reduce the ND to return the LUT to a normal looking image, and vice versa with a brighter LUT. This changes the brightness of the S-Log3 recordings, and you use this offsetting process to shift the highlight/shadow range as well as the noise levels to suit the types of scenes you are shooting. Using a low EI (which makes the LUT darker) plus correct LUT exposure (the darker LUT will make you open the aperture to compensate) results in a brighter recording, which improves the shadow details and textures that are recorded and can therefore be seen in the shadow areas. At the same time, that brighter exposure reduces the highlight range by the same amount as the increase in the shadow range. And no matter what the offset, you always record the camera’s full dynamic range.
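The offsetting trade can be written out as a toy calculation. The headroom figures below are hypothetical, not any particular camera's specification; the point is only that an EI offset slides a fixed window, it never grows or shrinks it.

```python
def exposure_range(ei_offset_stops, base_over=6.0, base_under=9.0):
    """
    Toy model of what an EI offset does to the recorded range.
    base_over / base_under are hypothetical stops above and below
    mid grey at the base EI - not any real camera's figures.
    A lower EI (negative offset) means you expose brighter: shadows
    gain a stop of usable detail, highlights lose a stop of headroom.
    """
    over = base_over + ei_offset_stops
    under = base_under - ei_offset_stops
    return over, under

# Rating the camera one stop below base (e.g. 400 EI on an 800 base):
over, under = exposure_range(-1)
print(f"{over} stops over mid grey, {under} under - total still {over + under}")
```

Whatever offset you feed in, the two numbers always sum to the same total, which is the "you always record the camera's full dynamic range" point in code form.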
I think what people misunderstand about CineEI is that it’s there to allow you to get the best possible, highly controlled images from the camera. Getting the best out of any camera requires appropriate and sufficient light levels. CineEI is not designed or intended to be a replacement for adding gain or shooting at high recording ISOs where the images will be already compromised by noise and lowered dynamic range.
CineEI exists so that when you have enough light to really make the camera perform well you can make those decisions over noise v highlights v shadows to get the absolute best “negative” with consistent and accurate exposure to take into post production. It is also the only possible way you can shoot when using raw as raw recordings are straight from the sensor and never have extra gain added in camera.
Getting that noise/shadow/highlight balance exactly right, along with good exposure is far more important than the use of external recorders or fatter codecs. You will only ever really benefit fully from higher quality codecs if what you are recording is as good as it can be to start with. The limits as to what you can do in post production are tied to image noise no matter what codec or recording format you use. So get that bit right and everything else gets much easier and the end result much better. And that’s what CineEI gives you great control over.
When using CineEI, or S-Log3 in general, you need to stop thinking “video camera – slap in a load of gain if it’s dark” and think “film camera – if it’s too dark I need more light”. The whole point of using log is to get the best possible image quality, not to shoot with insufficient light and a load of gain and noise. It requires a different approach and a completely different way of thinking, much more in line with the way someone shooting on film would work.
What surprises me is the eagerness to adopt shutter angles and ISO ratings for electronic video cameras because they sound cool but less desire to adopt a film style approach to exposure based on getting the very best from the sensor. In reality a video sensor is the equivalent of a single sensitivity film stock. When a camera has dual ISO then it is like having a camera that takes two different film stocks. Adding gain or raising the ISO away from the base sensitivity in custom mode is a big compromise that can never be undone. It adds noise and decreases the dynamic range. Sometimes it is necessary, but don’t confuse that necessity with getting the very best that you can from the camera.
I wish to update and present the facts that I have regarding potential issues with mainly older 3rd party BP-U batteries. This isn’t here as a scare story; I’m not trying to sensationalise this, just to present the facts that I have and hopefully clarify the current situation.
In 2019 I became aware that it was suddenly becoming very hard to buy 3rd party BP-U batteries. Dealers didn’t have any and you couldn’t find them anywhere. Talking to a couple of manufacturers I was informed that they had been told to stop making BP-U batteries.
Then I learnt from Sony that they had been getting an unusually large number of their more recent cameras in for repair, cameras that had suddenly and inexplicably stopped working. They traced this to design issues in some 3rd party batteries that resulted in power flowing through the battery’s data pins, damaging the camera’s motherboard beyond repair. It was not a case of a battery being inserted incorrectly; it was an issue with the circuitry in the battery.
As a result of this Sony took action in 2019 to prevent the manufacture of 3rd party BP-U batteries and that’s why you could no longer get them.
Since then however it would appear that the manufacture of 3rd party batteries is once again in full swing. In addition I’ve noticed that some older models have been discontinued, often with new versions replacing them, perhaps a “B” version or a model number numerically higher than before.
From this I must assume that whatever the issue was, it has now been resolved and that the 3rd party BP-U batteries on sale today should be perfectly safe to use with our cameras. I would have no hesitation in today buying a brand new BP-U battery from any of the reputable brands.
I have nothing to gain here. This is not a campaign to make you all buy Sony batteries. Even though Sony do make a very fine battery, I too use 3rd party batteries as I need the D-Tap port found only on 3rd party batteries.
But clearly there was a very real battery issue. I’m led to understand that the cost of repairing these damaged cameras was over $1K. While not every user of these batteries ends up with a dead camera, I think you have to ask yourself – is it worth using batteries made in 2019 or earlier? I won’t list the batteries that I know to have problems because the list may be incomplete, and just because a battery is not on the list would not guarantee that it’s safe. However, if any 3rd party battery manufacturer is reading this and has the confidence to provide me with a list of batteries that they will guarantee are safe, I will gladly publish it (as of January 2022 not one manufacturer has provided any information).
Clearly not everyone ends up with a dead camera, perhaps the majority have no issue, but enough did that Sony had to take action and it appears that the manufacturers responded by checking and adjusting their designs if necessary.
So my advice is: Don’t use 3rd party batteries made prior to 2020.
If you do, then make absolutely sure the camera is completely powered down when inserting or removing the battery.
I believe that any BP-U battery made in 2020 or later should be safe to use. So please think about replacing any old batteries with new ones, or perhaps contact your battery supplier and ask if what you have is safe. However you should be aware that since 2019 Sony’s own BP-U battery chargers will no longer charge 3rd party batteries.
The information I have presented here is correct to the best of my knowledge and I hope you will use it to make your own decision about which batteries to use.
I often hear people saying that XAVC-I isn’t good enough or that you MUST use ProRes or some other codec. My own experience is that XAVC-I is actually a really good codec and recording to ProRes only ever makes the very tiniest (if any) difference to the finished production.
I’ve been using XAVC-I for over 8 years and it really worked very well for me. I’ve also tested and compared it against ProRes many times and I know the differences are very small, so I am always confident that when using XAVC-I that I will get a great result. But I decided to make this video to show just how close they are.
It was shot with a Sony FX6 recording internal XAVC-I (Class 300) to an SD card alongside an external ProRes HQ recording on a Shogun 7. I deliberately chose to use CineEI and S-Log3 at the camera’s high base ISO of 12,800, as noise stresses any codec that little bit harder, and adding a LUT adds another layer of complexity that might show up any issues – all to make the test that little bit tougher. The slightly higher noise level of the high base ISO also makes it easier to see how each codec handles noise.
A sample clip of each codec was placed in the timeline (DaVinci Resolve) and a caption added. This was then rendered out, the ProRes HQ rendered to ProRes HQ and the XAVC-I rendered to XAVC-I. So for most of the examples seen, the XAVC-I files have been copied and re-encoded 5 times, plus the encoding of the file uploaded to YouTube, plus YouTube’s own encoding – a pretty tough test.
Because I don’t believe many people will use XAVC-I as an intermediate codec in post production, I also repeated the tests with the XAVC-I rendered to ProRes HQ 5 times over, as this is probably more representative of a typical real world workflow. These examples are shown at the end of the video. Of course the YouTube compression will restrict your ability to see some of the differences between the two codecs. But this is how many people will distribute their content – if not via YouTube, then via other highly compressed means – so it’s not an unfair test, and it reflects many real world applications.
Where the s709 LUT has been added, it was added AFTER each further copy of the clip, so this is really a “worst case scenario”. Overall, the ProRes HQ and XAVC-I are remarkably similar in performance. In the 300% blow-up you can see differences between the 6th generation XAVC-I and the 6th generation ProRes HQ if you look very carefully at the noise. But the differences are very, very hard to spot, and going 6 generations of XAVC-I is not realistic – it was designed as a camera codec. In the test where the XAVC was rendered to ProRes HQ for each post production generation, any difference is incredibly hard to find even when magnified 300%.

I am not claiming that XAVC-I Class 300 is as good as ProRes HQ. But I think it is worth considering what you need when shooting. Do you really want to have to use an external recorder? Do you really want to deal with files that are 3 to 4 times larger? Do you want to have to remember to switch recording methods between slow motion and normal speeds? For most productions I very much doubt that the end viewer would ever be able to tell the difference between material shot using XAVC-I Class 300 and ProRes HQ. And that audience certainly isn’t going to feel they are watching a substandard image, and that’s what counts.
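The generational re-encoding idea can be mimicked with a toy quantiser in Python. This models only bit-depth rounding, nothing like a real DCT based codec, but it shows why loss accumulates once you process (grade, add a LUT) between generations rather than simply copying.

```python
import random

def quantise(x, levels=256):
    """One lossy 'generation': snap the value to a fixed set of code values."""
    return round(x * (levels - 1)) / (levels - 1)

random.seed(2)
original = [random.random() for _ in range(10_000)]

# Re-encoding identical values is idempotent - a straight copy loses nothing more:
gen1 = [quantise(v) for v in original]
gen2 = [quantise(v) for v in gen1]
assert gen1 == gen2

# But apply any processing between generations (a grade, a LUT) and each
# pass picks up fresh quantisation error on top of the last:
x = original
for _ in range(6):
    x = [quantise(v ** 1.01) for v in x]   # a tiny gamma tweak per generation

clean = [v ** (1.01 ** 6) for v in original]   # same grade applied losslessly
print("max drift after 6 graded generations:",
      round(max(abs(a - b) for a, b in zip(x, clean)), 5))
```

That is why adding the LUT after each copy makes the test a worst case: every generation has to re-quantise freshly altered values instead of passing the previous codes straight through.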
There is so much emphasis placed on using “better” codecs that I think some people are starting to believe that XAVC-I is unusable or going to limit what they can do. This isn’t the case. It is a pretty good codec and frankly if you can’t get a great looking image when using XAVC then a better codec is unlikely to change that.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.