I often hear people saying that XAVC-I isn’t good enough or that you MUST use ProRes or some other codec. My own experience is that XAVC-I is actually a really good codec and recording to ProRes only ever makes the very tiniest (if any) difference to the finished production.
I’ve been using XAVC-I for over 8 years and it has worked very well for me. I’ve also tested and compared it against ProRes many times and I know the differences are very small, so I am always confident that when using XAVC-I I will get a great result. But I decided to make this video to show just how close they are.
It was shot with a Sony FX6 using internal XAVC-I (Class 300) on an SD card alongside an external ProRes HQ recording on a Shogun 7. I deliberately chose to use Cine EI and S-Log3 at the camera’s high base ISO of 12,800, as noise stresses any codec that little bit harder, and adding a LUT adds another layer of complexity that might show up any issues, all to make the test that little bit tougher. The slightly higher noise level of the high base ISO also makes it easier to see how each codec handles noise.
A sample clip of each codec was placed in the timeline (DaVinci Resolve) and a caption added. This was then rendered out, the ProRes HQ files to ProRes HQ and the XAVC-I files to XAVC-I. So for most of the examples seen, the XAVC-I files have been copied and re-encoded 5 times, plus the encoding of the file uploaded to YouTube, plus YouTube’s own encoding: a pretty tough test.
Because I don’t believe many people will use XAVC-I as an intermediate codec in post production, I also repeated the tests with the XAVC-I rendered to ProRes HQ 5 times over, as this is probably more representative of a typical real world workflow. These examples are shown at the end of the video. Of course the YouTube compression will restrict your ability to see some of the differences between the two codecs. But this is how many people will be distributing their content, if not via YouTube then via other highly compressed means, so it’s not an unfair test and it reflects many real world applications.
Where the s709 LUT has been added, it was added AFTER each further copy of the clip, so this really is a “worst case scenario”. Overall, in the end, the ProRes HQ and XAVC-I are remarkably similar in performance. In the 300% blow up you can see differences between the 6th generation XAVC-I and the 6th generation ProRes HQ if you look very carefully at the noise. But the differences are very, very hard to spot, and going 6 generations of XAVC-I is not realistic anyway: it was designed as a camera codec. In the same test where the XAVC-I was rendered to ProRes HQ for each post production generation, any difference is incredibly hard to find even when magnified 300%. I am not claiming that XAVC-I Class 300 is as good as ProRes HQ. But I think it is worth considering what you need when shooting. Do you really want to have to use an external recorder? Do you really want to have to deal with files that are 3 to 4 times larger? Do you want to have to remember to switch recording methods between slow motion and normal speeds? For most productions I very much doubt that the end viewer would ever be able to tell the difference between material shot using XAVC-I Class 300 and ProRes HQ. And that audience certainly isn’t going to feel they are watching a substandard image, and that’s what counts.
There is so much emphasis placed on using “better” codecs that I think some people are starting to believe that XAVC-I is unusable or going to limit what they can do. This isn’t the case. It is a pretty good codec and frankly if you can’t get a great looking image when using XAVC then a better codec is unlikely to change that.
Timed to coincide with the release of the ILME-FX6 camcorder, Sony have updated both Catalyst Browse and Catalyst Prepare. These new and long awaited versions add support for the FX6’s rotation metadata and clip flag metadata, as well as numerous bug fixes. It should be noted that a GPU that supports OpenGL is required for correct operation. Also, while the new versions support MacOS Catalina, there is no official support for Big Sur. Catalyst Browse is free while Catalyst Prepare is a paid application. Prepare can perform more complex batch processing of files, checksum and file verification, and per-clip adjustments, as well as offering other additional features.
Raw can be a brilliant tool, I use it a lot. High quality raw is my preferred way of shooting. But it isn’t magic, it’s just a different type of recording codec.
All too often – and I’m as guilty as anyone – people talk about raw as “raw sensor data”, a term that implies that raw really is something very different to a normal recording. In reality it’s not that different. When shooting raw, all that happens is that the video frames from the sensor are recorded before they are converted to a colour image. A raw frame is still a picture; it’s just that it’s a bitmap image made up of brightness values, each pixel represented by a single brightness code value, rather than a colour image where each location in the image is represented by 3 values, one for each of Red, Green and Blue or Luma, Cb and Cr.
As that raw frame is still nothing more than a normal bitmap, all the camera’s settings such as white balance, ISO etc are in fact baked in to the recording. Each pixel has only one single value, and that value will have been determined by the way the camera is set up. Nothing you do in post production can change what was actually recorded. Most CMOS sensors are daylight balanced, so unless the camera adjusts the white balance prior to recording – which is what Sony normally do – your raw recording will be daylight balanced.
Modern cameras when shooting log or raw also record metadata that describes how the camera was set when the image was captured.
So the recorded raw file already has a particular white balance and ISO. I know lots of people will be disappointed to hear this, or will simply refuse to believe it, but that’s the truth: a raw bitmap image has a single code value for each pixel, and that value is determined by the camera settings.
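The single-value-per-pixel idea can be sketched in a few lines of code. Everything below is illustrative: a hypothetical 4x4 RGGB Bayer mosaic with made-up code values, not data from any real camera.

```python
# A raw frame is a single-channel bitmap: one code value per photosite.
# The colour comes from knowing the Bayer filter layout (here RGGB),
# not from the stored data itself. Values are invented for illustration.
#
#   R G R G
#   G B G B
raw = [
    [520, 610, 498, 605],  # R G R G
    [598, 400, 601, 395],  # G B G B
    [530, 612, 510, 600],  # R G R G
    [600, 410, 603, 399],  # G B G B
]

def channel_means(frame):
    """Average the R, G and B photosites separately (RGGB layout)."""
    size = len(frame)
    r = [frame[y][x] for y in range(0, size, 2) for x in range(0, size, 2)]
    b = [frame[y][x] for y in range(1, size, 2) for x in range(1, size, 2)]
    g = [frame[y][x] for y in range(size) for x in range(size)
         if (y % 2) != (x % 2)]
    return sum(r) / len(r), sum(g) / len(g), sum(b) / len(b)

# "Changing the white balance in post" just scales the already recorded
# R and B values relative to G. It cannot recover anything the sensor
# never captured - a clipped channel stays clipped.
r_mean, g_mean, b_mean = channel_means(raw)
wb_gain_r = g_mean / r_mean  # gains a raw developer might apply
wb_gain_b = g_mean / b_mean
```

The point of the sketch is simply that the white balance adjustment happens to values the camera already decided on when it made the recording.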
This can be adjusted later in post production, but the adjustment range is not unlimited and it is not the same as making an adjustment in the camera. Plus there can be consequences to the image quality if you make large adjustments.
Log can also be adjusted extensively in post. For decades, feature films shot on film were scanned using 10 bit Cineon log (the log gamma curve that S-Log3 is based on), and 10 bit log was used for post production until 12 bit and then 16 bit linear intermediates such as OpenEXR came along. So this should tell you that log can in fact be graded very well and very extensively.
But then many people will tell you that you can’t grade log as well as raw. Often they will cite stills photography as an example, where there is a huge difference between what you can do with a raw photo and a normal image. But we have to remember that this typically compares a highly compressed 8 bit JPEG file with an often uncompressed 12 or 14 bit raw file. It’s not a fair comparison; of course you would expect the 14 bit file to be better.
The other argument often given is that it’s very hard to change the white balance of log in post, that it doesn’t look right or it falls apart. Often these issues have nothing to do with the log recording and everything to do with the tools being used.
When you work with raw in your editing or grading software you will almost always be using a dedicated raw tool or raw plugin designed for the flavour of raw you are using. As a result, everything you do to the file is optimised for the exact flavour of raw you are dealing with. It shouldn’t come as a surprise, then, that to get the best from log you should be using tools specifically designed for the type of log you are using. In the example below you can see how Sony’s Catalyst Browse can correctly change the white balance and exposure of S-Log material with simple sliders, just as effectively as most raw formats.
Applying the normal linear or power law corrections found in most edit software (Rec 709 is power law) to log won’t have the desired effect, and basic edit software rarely has proper log controls. You need to use a proper grading package like Resolve and its dedicated log controls. Better still, use some form of colour managed workflow like ACES, where your specific type of log is precisely converted on the fly to a special digital intermediate and the corrections are made to that intermediate. There is no transcoding; you just tell ACES what the footage was shot on and magic happens under the hood. Once you have done that you can change the white balance or ISO of log material in exactly the same way as raw. There is very, very little difference.
When people say you can’t push log, more often than not it isn’t a matter of can’t, it’s a case of can – but you need to use the right tools.
Less compression or a greater bit depth are where the biggest differences between a log or raw recording come from, not so much from whether the data is log or raw. Don’t forget raw is often recorded using log, which kind of makes the “you can’t grade log” argument a bit daft.
Camera manufacturers and raw recorder manufacturers are perfectly happy to let everyone believe raw is magic and, worse still, let people believe that ANY type of raw must be better than all other types of recordings. Read through any camera forum and you will see plenty of examples of “it’s raw so it must be better” or “I need raw because log isn’t as good”, without any comprehension of what raw is and how, in reality, it’s the way the raw is compressed and the bit depth that really matters.
If we take ProRes Raw as an example: For a 4K 24/25fps file the bit rate is around 900Mb/s. For a conventional ProRes HQ file the bit rate is around 800Mb/s. So the file size difference between the two is not at all big.
But the ProRes Raw file only has to store around 1/3 as many data points as the component ProRes HQ file. As a result, even though the ProRes Raw file often has a higher bit depth, which in itself usually means a better quality recording, it is also much, much less compressed and as a result will have fewer artefacts.
It’s the reduced compression and deeper bit depth possible with raw that can lead to higher quality recordings, and as a result may bring some grading advantages compared to a normal ProRes or other compressed file. The best bit is that there is no significant file size penalty. So you have the same amount of data, but your data should be of higher quality. Given that you won’t need more storage, which should you use? The higher bit depth, less compressed file, or the more compressed one?
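The back-of-envelope arithmetic behind this can be sketched. The frame size, frame rate and bit rates below are illustrative assumptions based on the approximate figures above, not measured values.

```python
# Rough sample-count and compression arithmetic for raw vs component video.
# Frame size, frame rate and bit rates are illustrative assumptions.
width, height, fps = 4096, 2160, 25

bayer_samples = width * height         # raw: one code value per photosite
ycbcr422_samples = 2 * width * height  # component 4:2:2: Y every pixel, Cb/Cr shared
rgb_samples = 3 * width * height       # a fully decoded RGB colour frame

raw_bps = 900e6  # ProRes Raw, roughly 900 Mb/s at 4K 24/25p
hq_bps = 800e6   # ProRes HQ, roughly 800 Mb/s

# How many bits each format can spend on every sample it stores
raw_bits_per_sample = raw_bps / fps / bayer_samples
hq_bits_per_sample = hq_bps / fps / ycbcr422_samples

print(round(bayer_samples / rgb_samples, 2))  # about a third of the data points
print(round(raw_bits_per_sample, 2))          # bits per stored raw sample
print(round(hq_bits_per_sample, 2))           # bits per stored 4:2:2 sample
```

Whether you compare the raw frame against 4:2:2 component (half the samples) or a fully decoded RGB frame (a third), the raw file ends up with roughly twice as many bits per stored sample at a near identical file size, which is where the quality headroom comes from.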
But not all raw files are the same. Some cameras feature highly compressed 10 bit raw, which frankly won’t be any better than most other 10 bit recordings, as you are having to do all the complex maths needed to create a colour image starting with just 10 bits. Most cameras do this internally at 12 bits or more. I believe raw needs to be at least 12 bit to be worth having.
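The gap between those bit depths is bigger than the numbers suggest, because each extra bit doubles the number of code values available before any of the de-Bayer maths begins:

```python
# Code values available per sample at each bit depth.
# Going from 10 bit to 12 bit quadruples the precision available
# to the colour processing that turns raw into an image.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits} bit: {2 ** bits:,} code values")
```

So a 12 bit raw sample has 4,096 possible values against 1,024 for 10 bit: four times the tonal precision for the same single-value-per-photosite layout.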
If you could record uncompressed 12 bit RGB or 12 bit component log from these cameras that would likely be just as good and just as flexible as any raw recordings. But the files would be huge. It’s not that raw is magic, it’s just that raw is generally much less compressed and depending on the camera may also have a greater bit depth. That’s where the benefits come from.
This is extremely cool! You can change the FX6’s base look in custom mode using LUTs. This is not the same as baking in a LUT in Cine EI as in custom mode you can change the gain or ISO just as you would with any other gamma. But there’s more than that – you can even adjust the look of the LUT by changing the detail settings, black level, matrix and multi-matrix. Watch the video to see how it’s done.
UPDATE 29th Sept 2020. The issues have now been resolved so it is now safe to update.
27th Aug 2020: If you are a Mac user, and especially if you use it to edit footage from a Sony camera, I recommend that you do not upgrade the operating system to OSX 10.15.6, Pro Video Codecs to 2.1.2, or FCP-X to version 10.4.9 at this time.
At the moment there is clearly an issue with footage from the FX9 after these updates. It is not clear whether this is due to the new Pro Video Codecs package 2.1.2 that comes as part of the update to OSX 10.15.6, or whether it is just related to the FCP-X 10.4.9 update. Some users are reporting that some FX9 MXF files cannot be previewed in Finder after updating, as well as not being visible in FCP-X.
So far I have only seen reports that footage from the FX9 is affected, but it wouldn’t surprise me if Venice material is also affected.
I would suggest waiting for a few weeks after the release of any update before updating and never do an update half way through an important project.
UPDATE: Sony know about the issue and are working with Apple to resolve it. It only seems to affect some FX9 footage and possibly some Venice footage. It appears that the culprit is the Pro Video Codecs update, but this is yet to be confirmed. I would still suggest waiting before upgrading even if you are using a different camera.
The Sony PXW-Z90 is a real gem of a camcorder. It’s very small yet packs a 1″ sensor, has real built in ND filters, broadcast codecs and produces a great image. On top of all that it can also stream live directly to Facebook and other similar platforms. In this video I show you how to set up the Z90 to stream live to YouTube; Facebook is similar. The NX80 from Sony is very similar and can also live stream in the same way.
Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continue to push Blackmagic Raw that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.
Do you have an FS5 and want to stream to Facebook or YouTube? It’s actually fairly straightforward and you don’t even need to buy anything extra! You can even connect a couple of FS5s to a single computer and switch between them.
How do you do it?
First you will need to download and install two pieces of free software on your computer. The first is VLC. VLC is an open source video player, but it also has the ability to act as a media server that can receive the UDP video streams that the FS5 sends and turn them into a live video feed on the computer. The computer and the camera will both need to be connected to the same wifi network, and you will need to enter the IP address of the computer into the streaming server settings in the FS5. By connecting the FS5 to your computer via the network you can use VLC to decode the UDP stream. Go to “File”, then “Open Network”, click on “Open RTP/UDP stream”, enter the computer’s IP address and the stream port, and then save the FS5 stream as a playlist in VLC.
OBS is a clever open source streaming application that can convert any video feed connected to a computer into a web stream. From within OBS you can set the signal source to VLC and then the stream from the FS5 will become one of the “scenes” or inputs that OBS can stream to Facebook, YouTube etc.
For multi-camera use, set a different port for each of the UDP streams and then in VLC save each stream as a different playlist. Each playlist can then be attached to a different scene in OBS so that you can switch, cut and mix between them.
With some difficult times ahead and the need for most of us to minimise contact with others, there has never been a greater need for streaming and online video services than now.
I’m setting up some streaming gear in my home office so that I can do some presentations and online workshops over the coming weeks.
I am not an expert on this and although I did recently buy a hardware RTMP streaming encoder, like many of us I didn’t have a good setup for live feeds and streaming.
So like so many people I tried to buy a Blackmagic Design ATEM, which is a low cost all in one switcher and streaming device. But guess what? They are out of stock everywhere with no word on when more will become available. So I have had to look at other options.
The good news is that there are many options. There is always your mobile phone, but I want to be able to feed several sources including camera feeds, the feed from my laptop and the video output from a video card.
OBS is a great piece of software that can convert almost any video source connected to a computer into a live stream that can be sent to most platforms, including Facebook and YouTube. If the computer is powerful enough it can switch between different camera sources and audio sources. If you follow the tutorials on the OBS website it’s pretty quick and easy to get it up and running.
So how am I getting video into the laptop that’s running OBS? I already had a Blackmagic Mini Recorder, which is an HDMI and SDI to Thunderbolt input adapter, and I shall be using this to feed the computer. There are many other options, but the BM Mini Recorders are really cheap and most dealers stock them, as does Amazon. It’s HD only, but for this I really don’t need 4K or UHD.
Taking things a step further, I also have both an Atomos Sumo and an Atomos Shogun 7. Both of these monitor/recorders have the ability to act as a 4 channel vision switcher. The great thing about these compared to the Blackmagic ATEM is that you can see all your sources on a single screen and you simply touch the source that you wish to go live. A red box appears around that source and it is output from the device.
So now I have the ability to stream a feed via OBS from the SDI or HDMI input on the Blackmagic Mini Recorder, fed from one of 4 sources switched by the Atomos Sumo or Shogun 7. A nice little micro studio setup. My sources will be my FS5 and FX9. I can use my Shogun as a video player. For workflow demos I will use another laptop or my main edit machine, feeding the video output from DaVinci Resolve via a Blackmagic Mini Monitor, which is similar to the Mini Recorder but is an output device with SDI and HDMI outputs. The final source will be the HDMI output of the edit computer, so you can see the desktop.
Don’t forget audio. You can probably get away with very low quality video to get many messages across. But if the audio is hard to hear or difficult to understand then people won’t want to watch your stream. I’m going to be feeding a lavalier (tie clip) mic directly into the computer and OBS.
I think my main reason for writing this was to show that many of us probably already have most of the tools needed to put together a small streaming package. Perhaps you can offer this as a service to clients that now need to think about online training or meetings. I was lucky enough to already have all the items listed in this article; the only extra I have had to buy is a Thunderbolt cable, as I only had one. But even if you don’t have a Sumo or Shogun 7 you can still use OBS to switch between the camera on your laptop and any other external inputs. The OBS software is free and very powerful, and it really is the keystone to making this all work.
I will be starting a number of online seminars and sessions in the coming weeks. I do have some tutorial videos that I need to finish editing first, but once that’s done expect to see lots of interesting online content from me. Do let me know what topics you would like to see covered and subject to a little bit of sponsorship I’ll see what I can do.
Stay well people. This will pass and then we can all get back on with life again.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.