
Why Does S-Log Recorded Internally Look Different To S-Log Recorded On An External Recorder?

I have written about this many times before, but I’ll try to be a bit more concise here. So: you have recorded S-Log2 or S-Log3 on your Sony camera and at the same time recorded on an external ProRes recorder such as an Atomos, Blackmagic or other ProRes recorder. But the pictures look different and they don’t grade in the same way. It’s a common problem. Often the external recording will look more contrasty, and when you add a LUT the blacks and shadow areas come out very differently.

Video signals can be recorded using several different data ranges. S-Log2 and S-Log3 signals are always Data Range. When you record in the camera, the camera adds information to the recording called metadata that tells your editing or grading software that the material is Data Range. This way the edit and grading software knows how to correctly handle the footage and how to apply any LUTs. However, an external recorder doesn’t add this extra metadata. It will record the Data Range signal that comes from the camera, but without the metadata. The ProRes codec is normally used for Legal Range video, and by default, unless there is metadata that says otherwise, edit and grading software will assume any ProRes recording to be Legal Range.

So what happens is that your edit software takes the file, assumes it’s Legal Range and handles it as a Legal Range file when in fact the data in the file is Data Range. This results in the recorded levels being transposed into incorrect levels for processing. So when you add a LUT it will look wrong, perhaps with very dark shadows or very bright, over exposed looking highlights. It can also limit how much you can grade the footage.

What Can We Do About It?

Premiere CC: You don’t need to do anything in Premiere for the internal .mp4 or MXF recordings. They are handled correctly, but Premiere isn’t handling the ProRes files correctly.
My approach for this has always been to use the legacy fast color corrector filter to transform the input range to the required output range. If you apply the fast color corrector filter to a clip you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is legal range black) and the output white to CV235 to match legal range white. If you do this you will see that the external recording has almost exactly the same values as the internal recording. However there is some non-linearity in the transform; it’s not quite perfect.
Using the legacy “fast color corrector” filter to transform the external recording to the correct range within Premiere.
Now when you apply a LUT the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift, and I don’t know where it comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift, but in Premiere they are not quite the same. My recommendation: use Resolve, it’s so much better for anything that needs any form of grading or colour correction.

DaVinci Resolve: It’s very easy to tell Resolve to treat the clips as Data Range recordings. In the media bin, right click on the clip and under “Clip Attributes” change the input range from “Auto” to “Full”. If you don’t do this DaVinci Resolve will assume the ProRes file to be legal range and will scale the clip incorrectly in the same way as Premiere does. But if you tell Resolve the clip is full range then it is handled correctly.
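To make the mismatch concrete, here is a small sketch in Python of what happens when full range data is misread as legal range, and why pre-compressing the clip into legal range cancels the error. The 8 bit numbers are illustrative assumptions, not taken from any Sony specification.

```python
# Legal (video) range uses code values 16-235 for luma; full/data range
# uses 0-255. These are the standard 8-bit scalings between the two.

def full_to_legal(cv):
    """Compress a full range code value (0-255) into legal range (16-235)."""
    return 16 + cv * (235 - 16) / 255

def legal_to_full(cv):
    """Expand a legal range code value (16-235) back to full range (0-255)."""
    return (cv - 16) * 255 / (235 - 16)

# Illustrative S-Log3 middle grey, roughly code value 105 in 8 bits.
grey = 105

# What the edit software does to the external ProRes clip: it assumes the
# data is legal range and expands it, shifting every level.
misread = legal_to_full(grey)

# The fast color corrector workaround pre-compresses the clip into legal
# range, so the software's expansion lands back on the original value.
corrected = legal_to_full(full_to_legal(grey))
print(round(misread), round(corrected))   # the corrected value is 105 again
```

The same scalings apply at 10 bit (64-940), which is why telling the software the clip is full range fixes the problem without any level adjustment.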

ACS Technical Panel Review The PXW-FX9

The ACS have produced a video report about some of the testing that they did with a pre-production FX9. It’s quite a long video but has some interesting side by side comparisons with the FS7 which we all already know very well. You’ve heard much of what’s in the video from me already, but I’m a Sony guy, so it’s good to hear the same things from the much more impartial ACS.

With my super geek hat on it was really interesting to see the colour response tests performed by Pawel Achtel ACS at 37:08. These tests use a very pure white light source that is split into the full spectrum, and the monochromatic light is then projected onto the sensor. It’s a very telling test. I was quite surprised to see how large the FS7’s response is; it’s not something I have ever had the tools to measure. The test also highlights a lack of far red response from the FS7. It’s not terrible, but it does help explain why warm skin tones perhaps don’t always look as nice as they could. I do wonder if this is down to the characteristics of the camera’s IR cut filter, as we also know the sensor to be quite sensitive to IR. The good news is that the PXW-FX9 has what Pawel claims to be the best colour accuracy of any camera he’s tested, and he’s tested pretty much all of the current cinema cameras. Take a look for yourself.

Sony FX9 ACS Roundtable from ACS on Vimeo.

More on the PXW-FX9’s Scan Modes.

Scan Modes

The PXW-FX9 features a 6K Full Frame sensor. With this sensor it is possible to select various scan modes and frame sizes. It is important to understand what these mean and which scan modes can be used with which frame rates and recording formats.

There are two selectable frame sizes, Full Frame (FF) and Super 35 (s35). Full Frame is the larger of the two sensor scan sizes. When Full Frame is selected the sensor area is similar to that of a Full Frame photo camera. In the Full Frame mode you will need to use lenses designed for Full Frame. The frame size in Full Frame scan mode is also similar to the VistaVision film format.

In the Super 35mm mode a reduced area of the sensor is used that is of a similar size to a frame of super 35mm movie film. In this mode you can use lenses designed for APS-C, Super 35mm movie film as well as lenses designed for Full Frame cameras. If you use a Full Frame lens in the Super35 scan mode the field of view will be narrower than it would be in the Full Frame mode by a factor of 1.5.

FF 6K Scan is the highest quality scan mode available in the FX9. The sensor operates in the Full Frame format and a full 6K scan is used, reading 19 million pixels from the sensor. The 6K image is then downsampled to UHD (or HD) for recording. By starting at 6K and downsampling the quality of the UHD recordings will be higher than possible from a 4K scan. Noise in the image is reduced and the resolution and colour sampling is maximised. However there are some frame rate limitations in FF 6K scan. The highest frame rate that can be selected when using FF 6K scan is 30 frames per second. You can record either UHD or HD from FF 6K scan.

FF 2K scan is optimised for speed; quality is reduced. It uses the same Full Frame sensor area as FF 6K scan, but the sensor is read at 2K instead of 6K. The reduced resolution allows the sensor to be read out much faster, currently up to 120fps. However, in this mode the camera’s optical filtering is less than optimum, which means the image quality is somewhat reduced compared to the FF 6K scan. This scan mode is best suited to high frame rate shooting where the ability to shoot at a high frame rate is the main priority. You can only record HD from FF 2K scan, and I recommend FF 2K is only used for 120fps recording.

S35 4K scan is a medium balance of quality and speed. In this mode a 4K region of the sensor is read out. This is similar to the scan area and pixel count of a PXW-FS7 or FS5, so the resolution of the recordings will be similar to that of other 4K s35 cameras. Because there is no downsampling in this mode the image quality is not quite as high as can be achieved from the FF 6K scan mode, but the reduced number of pixels that need to be read means that the S35 4K scan can be used at frame rates up to 60fps. You can record either UHD or HD from S35 4K scan.

S35 2K scan is optimised for speed with s35 or APS-C lenses; quality is reduced. As with FF 2K scan, the sensor is read at 2K, here using the smaller Super 35mm frame area. The reduced resolution allows the sensor to be read out much faster: the S35 2K scan mode can operate at up to 120fps. In this mode the camera’s optical filtering is less than optimum, which means the image quality is somewhat reduced compared to the FF 6K or S35 4K scans. This scan mode is best suited to high frame rate shooting where the ability to shoot at a high frame rate is the main priority and only Super 35mm or APS-C lenses are available. You can only record HD from S35 2K scan, and I recommend you only use this mode when you need to shoot 120fps with a s35 or APS-C lens.

Don’t Convert Raw to ProRes Before You Do Your Recording.

This comes up again and again, hence why I am writing about it once again.
Raw should never be converted to log before recording if you want any benefit from the raw. You may as well just record the 10 bit log that most cameras are capable of recording internally, or take log and output it via the camera’s 10 bit output (if it has one) and record that directly on the ProRes recorder. It doesn’t matter how you do it, but if you convert between different recording types you will always reduce the image quality, and this is as bad a way to do it as you can get. This mainly relates to cameras like the PXW-FS7. The FS5 is different because its internal UHD recordings are only 8 bit, so even though the raw is still compromised by converting it to ProRes log, this can still be better than the internal 8 bit log.
S-Log, like any other log format, is a compromise recording format. Log was developed to squash a big dynamic range into the same sized recording bucket as would normally be used for conventional low dynamic range gammas. It does this by discarding a lot of tonal and textural information from everything brighter than 1 stop above middle grey: instead of the amount of data doubling for each stop up you go in exposure, it’s held at a constant amount. Normally this is largely transparent, as human vision is less acute in the highlight range, but it is still a compromise.
The idea behind linear raw is that it should give nothing away; each stop SHOULD contain double the data of the one below. But if you only have 12 bit data that would only allow you to record around 11 stops of dynamic range, as you would quickly run out of code values. So Sony have to use floating point math, or something very similar, to reduce the size of each stop by dividing down the number of code values each stop has. This has almost no impact on highlights, where you start off with hundreds or thousands of values, but in the shadows, where a stop may only have 8 or 16 values, dividing by 4 means you now only have 2 or 4 tonal levels. So once again this is a compromise recording format. To record a big dynamic range using linear what you really need is 16 bit data.
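A quick back-of-the-envelope sketch shows how unevenly a pure doubling-per-stop scheme distributes 12 bit code values. This is a simplified model for illustration, not Sony’s actual encoding:

```python
# Simplified model: 4096 code values (12 bit), the brightest stop takes
# half of them, the next stop down half of the remainder, and so on.
total = 4096
values_per_stop = []
remaining = total
for stop in range(12):
    this_stop = remaining // 2   # each stop down has half the values
    values_per_stop.append(this_stop)
    remaining -= this_stop

for stop, n in enumerate(values_per_stop):
    print(f"{stop} stops below clipping: {n} code values")
# The top stop gets 2048 values; 8 stops down there are only 8 left.
```

This is why dividing the shadow stops down any further, as a 12 bit floating point style encoding must, leaves so few tonal steps at the bottom of the range.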
In summary so far:
S-Log reduces the number of highlight tonal values to fit a big DR in a normal sized bucket.
Sony’s FSRaw, 12 bit linear, reduces the number of tonal values across the entire range to fit it in a compact 12 bit recording bucket, but the assumption is that the recording will remain at least 12 bit. The greatest impact of the reduction is in the shadows.
Convert 12 bit linear to 10 bit S-Log and now you are compromising both the highlight range and the shadow range. You have the worst of both, you have 10 bit S-Log but with much less shadow data than the S-log straight from the camera. It’s really not a good thing to do and the internally generated S-Log won’t have shadows compromised in the same way.
If you have even the tiniest bit of under exposure, or you attempt to lift the shadows in any way, this will accentuate the reduced shadow data, and banding is highly likely as the values become stretched even further apart when you bring them up the output gamma range.
If you expose brightly and then bring the shadows down, this has the effect of compressing the values closer together, pushing them further down the output curve and closing them together as they go down the output gamma range, which reduces banding. This is one of the reasons why exposing more brightly can often help both log and raw recordings. So a bit of over exposure might help, but any under exposure is really, really going to hurt. Again, you would probably be better off using the internally generated S-Log.
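As a rough illustration of why one stop of exposure matters so much in the shadows, take a simplified halving-per-stop model of a 12 bit linear encoding (an assumption for illustration, not the real FX9/FS7 encoding):

```python
# Hypothetical 12 bit linear model: a stop sitting N stops below
# clipping gets roughly 4096 / 2**(N+1) code values.

def code_values(stops_below_clip):
    return 4096 // 2 ** (stops_below_clip + 1)

deep_shadow = code_values(9)      # detail sitting 9 stops below clip
one_stop_up = code_values(8)      # the same detail exposed 1 stop brighter
print(deep_shadow, one_stop_up)   # brighter exposure doubles the tonal steps
```

Each stop of brighter exposure moves shadow detail into a region with double the tonal steps, which is exactly the banding margin the text describes.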
To make matters worse there is also often an issue with S-Log in a ProRes file.
If all that is not enough, there is also a big problem in the way ProRes files record S-Log. S-Log should always be recorded as full range data. When you record an internal XAVC file, the metadata in the clip tells the edit or grading software that the file is full range. Then when you apply a LUT or do your grading, the correct transforms occur and all shadow textures are preserved. But ProRes files are by default treated as legal range files. So when you record full range S-Log inside a ProRes file there is a high likelihood that your edit or grading software will handle the data in the clip incorrectly, and this too can lead to problems in the shadows, including truncated data, clipping and banding, even though the actual recorded data may be OK. This is purely a metadata issue; grading software such as DaVinci Resolve can be forced to treat the ProRes files as full range.
 
 
More on S-Log and ProRes files here: http://www.xdcam-user.com/2019/03/sonys-internal-recording-levels-are-correct/

What’s So Magical About Full Frame – Or Is It all Just ANOTHER INTERNET MYTH?

FIRST THINGS FIRST:
The only way to change the perspective of a shot is to change the position of the camera relative to the subject or scene. Just put a 1.5x wider lens on a s35 camera and you have exactly the same angle of view as a Full Frame camera. It is an internet myth that Full Frame changes the perspective or the appearance of the image in a way that cannot be exactly replicated with other sensor or frame sizes. The only thing that changes perspective is how far you are from the subject. It’s one of those laws of physics and optics that can’t be broken. The only way to see more or less around an object is by changing your physical position.

The only thing changing the focal length or sensor size changes is magnification, and you can change the magnification either by changing the sensor size or the focal length; the effect is exactly the same either way. So in terms of perspective, angle of view or field of view, an 18mm s35 setup will produce an identical image to a 27mm FF setup. The only difference may be in DoF, depending on the aperture: f4 on FF will provide the same DoF as f2.8 on s35. If both lenses are at f4 then the FF image will have a shallower DoF.
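The equivalence is simple arithmetic using the ~1.5x crop factor mentioned above; the focal length and f-number below are illustrative:

```python
# Approximate Super 35 to Full Frame crop factor from the text.
CROP = 1.5

def ff_equivalent(s35_focal_mm, s35_f_number):
    """FF focal length and f-number giving the same AoV and DoF as s35."""
    return s35_focal_mm * CROP, s35_f_number * CROP

focal, f_stop = ff_equivalent(18, 2.8)
print(f"{focal:.0f}mm at f/{f_stop:.1f} on FF")  # matches 18mm f/2.8 on s35
```

Note that the f-number scales by the same crop factor as the focal length, which is why f2.8 on s35 behaves like roughly f4 (strictly f4.2) on Full Frame.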

Again though, physics plays a part here: if you want to get that shallower DoF from a FF camera then the FF lens will normally need to have the same f-number as the s35 lens. To do that the elements in the FF lens need to be bigger, gathering twice as much light so that the lens can put the same amount of light across the twice as large surface area of the FF sensor. So generally you will pay more for a FF lens than for a s35 lens with a like-for-like aperture. Or you simply won’t be able to get an equivalent in FF because the optical design becomes too complex, too big, too heavy or too costly.
This in particular is a big issue for parfocal zooms. At FF and larger imager sizes they can be fast or have a big zoom range, but to do both is very, very hard and typically requires some very exotic glass. You won’t see anything like the affordable super 35mm Fujinon MK’s in full frame, certainly not at anywhere near the same price. This is why for decades 2/3″ sensors and 16mm film before that, ruled the roost for TV news as lenses with big zoom ranges and large fast apertures were relatively affordable.
Perhaps one of the commonest complaints I see today with larger sensors is “why can’t I find an affordable, fast, parfocal zoom with more than a 4x zoom range?”. Such lenses do exist: for s35 you have lenses like the $22K Canon CN7 17-120mm T2.9, which is pretty big and pretty heavy. For Full Frame the nearest equivalent is the more expensive $40K Fujinon Premista 28-100mm T2.9, which is a really big lens weighing in at almost 4kg. But look at the numbers: both will give a very similar AoV on their respective sensors at the wide end, but the much cheaper Canon has a greatly extended zoom range and will get a tighter shot than the Premista at the long end. Yes, the DoF will be shallower with the Premista, but you are paying almost double for a significantly heavier lens with a much reduced zoom ratio. You may even need both the $40K Premista 28-100 and the $40K Premista 80-250 to cover everything the Canon does (and a bit more). So as you can see, getting that extra shallow DoF may be very costly. And it’s not so much about the sensor, but more about the lens.
The History of large formats:
It is worth considering that back in the 50’s and 60’s we had VistaVision, a horizontal 35mm format equivalent to 35mm FF, plus 65mm and a number of other larger than s35 formats, all in an effort to get better image quality.
VistaVision (the closest equivalent to 35mm Full Frame).
VistaVision didn’t last long, about 7 or 8 years, because better quality film stocks meant that similar image quality could be obtained from regular s35mm film, and shooting VistaVision was difficult due to the very shallow DoF and focus challenges, plus it was twice the cost of regular 35mm film. It did make a brief comeback in the 70’s for shooting special effects sequences where very high resolutions were needed. VistaVision was superseded by Cinemascope, which uses 2x anamorphic lenses and conventional vertical super 35mm film, and Cinemascope was subsequently largely replaced by 35mm Panavision (the two being virtually the same thing and often used interchangeably).
65mm formats.
At around the same time there were various 65mm (with 70mm projection) formats, including Super Panavision, Ultra Panavision and Todd-AO. These too struggled, and very few films were made using 65mm film after the end of the 60’s. There was a brief resurgence in the 80’s, and again recently there have been a few films, but production difficulties and cost have meant they tend to be niche productions.
Historically there have been many attempts to establish mainstream larger than s35 formats. But by and large audiences couldn’t tell the difference, and even if they could, they wouldn’t pay extra for it. Obviously today the cost implication is tiny compared to the extra cost of 65mm film or VistaVision. But the bottom line remains that normally the audience won’t actually be able to see any difference, because in reality there isn’t one, other than perhaps a marginal resolution increase. But it is harder to shoot FF than s35. Comparable lenses are more expensive, lens choices are more limited, and focus is more challenging at longer focal lengths or large apertures. If you get carried away with too large an aperture you can get miniaturisation and cardboarding effects if you are not careful (these can occur with s35 too).
Can The Audience Tell – Does The Audience Care?
Cinema audiences have not been complaining that the DoF isn’t shallow enough, or that the resolution isn’t high enough (Arri’s success has proved that resolution is a minor image quality factor). But they are noticing focus issues, especially in 4K theaters.
So while FF and the other larger formats are here to stay, Full Frame is not the be-all and end-all. Many, many people believe that FF has some kind of magic that makes the images different to smaller formats because they “read it on the internet so it must be true”. I think sometimes things read on the internet create a placebo effect: read it enough times and you will actually become convinced that the images are different, even when in fact they are not. Once they realise that actually it isn’t different, I’m quite sure many will return to s35, because that does seem to be the sweet spot where DoF and focus are manageable and IQ is plenty good enough. Only time will tell, but history suggests s35 isn’t going anywhere any time soon.

Today’s modern cameras give us the choice to shoot either FF or s35. Either can result in an identical image; it’s only a matter of aperture and focal length. So pick the one that you feel most comfortable with for your production. FF is nice, but it isn’t magic.

Really it’s all about the lens.

The really important thing is your lens choice. I believe that what most people put down as “the full frame effect” is nothing to do with the sensor size but the qualities of the lenses they are using. Full frame stills cameras have been around for a long time and as a result there is a huge range of very high quality glass to choose from (as well as cheaper budget lenses). In the photography world APS-C which is similar to super 35mm movie film has always been considered a lower cost or budget option and many of the lenses designed for APS-C have been built down to a price rather than up in quality. This makes a difference to the way the images may look. So often Full Frame lenses may offer better quality or a more pleasing look, just because the glass is better.

I recently shot a project using Sony’s Venice camera over two different shoots. For the first shoot we used Full Frame and the Sigma Cine Primes. The images we got looked amazing. For the second shoot, where we needed at times to use higher frame rates, we shot super 35 with a mix of the Fujinon MK zooms and Sony G Master lenses. Again the images looked amazing, and the client and the end audience really can’t tell the footage from the first shoot apart from the footage from the second.

Downsampling from 6K.

One very real benefit shooting 6K full frame does bring, with both the FX9 and Sony Venice (or any other 6K FF camera), is that when you shoot at 6K and downsample to 4K you will have a higher resolution image with better colour and, in most cases, lower noise than if you had started at 4K. This is because the bayer sensors that all the current large sensor cameras use don’t fully resolve 4K when scanning at 4K. To get true 4K you need to start with 6K.
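The arithmetic behind the oversampling is straightforward; the sensor pixel count below is the approximate 19 million figure quoted earlier for the FX9’s 6K scan:

```python
# 6K full-frame scan vs the UHD frame it is downsampled to.
sensor_pixels = 19_000_000      # approx. pixel count of the 6K FF scan
uhd_pixels = 3840 * 2160        # pixels in a UHD frame

print(uhd_pixels)                             # 8294400
print(round(sensor_pixels / uhd_pixels, 1))   # roughly 2.3x oversampling
```

So each recorded UHD pixel is built from more than two sensor pixels, which is where the resolution, colour and noise advantages come from.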

Hot Pixels and White Dots From My New Camcorder (FX9 and many others).

So you have just taken delivery of a brand new PXW-FX9. Turned it on and plugged it in to a 4K TV or monitor – and shock horror there are little bright dots in the image – hot pixels.

First of all, don’t be alarmed. This is not unusual; in fact I’d actually be surprised if there weren’t any, especially if the camera has travelled as air freight.

Video sensors have millions of pixels and they are prone to disturbance from cosmic rays. It’s not unusual for some pixels to drift out of spec. So all modern cameras incorporate various methods of recalibrating or re-mapping those pesky problem pixels. On Sony’s professional cameras this is called APR. Owners of the Sony F5, F55, Venice and FX9 will see a “Perform APR” message every couple of weeks, as this is a function that needs to be performed regularly to ensure you don’t get any problems.

You should always run the APR function after flying with the camera, especially on routes over the poles as cosmic rays are greater in these areas. Also if you intend to shoot at high gain levels it is worth performing an APR run before the shoot.

If your camera doesn’t have a dedicated APR function, typically found in the maintenance section of the camera’s menu system, then often the black balance function will have a very similar effect. On some Sony cameras, repeatedly performing a black balance will activate the APR function.

If there are a lot of problem pixels then it can take several runs of the APR routine to sort them all out. But don’t worry, it is normal and it is expected. All cameras suffer from it. Even if you have 1000 dead pixels that’s still only a teeny tiny fraction of the 19 million pixels on the sensor.

APR just takes 30 seconds or so to complete. It’s also good practice to black balance at the beginning of each day to help minimise fixed pattern noise and set the camera’s black level correctly. Just remember to ensure there is a cap on the lens or camera body to exclude all outside light when you do it!

SEE ALSO: http://www.xdcam-user.com/2011/02/are-cosmic-rays-damaging-my-camera-and-flash-memory/

Connecting To The PXW-FX9 Using Content Browser Mobile For Monitoring and Other Functions.

One of the great features of the PXW-FX9 is the ability to connect a phone or tablet to the camera via WiFi so that you can view a near live feed from the camera (there’s about a 5 to 6 frame delay).

To do this you need to install the latest version of the free Sony Content Browser Mobile application on your phone. Then you would normally connect the phone to the camera’s WiFi by placing the FX9 into Access Point mode and using either NFC to establish the connection, if your phone has it, or by manually connecting your phone’s WiFi to the camera.

However, for many people this does not always provide a stable connection, with frequent drop outs and disconnects. Fortunately there is another way to connect the camera and phone, and it seems much more stable.

First put the camera’s WiFi into “Station Mode” instead of “Access Point” mode. Then set up your phone to act as a WiFi hotspot. Now you can connect the camera to the phone by performing a network search on the camera. Once the camera finds the phone’s WiFi hotspot, you connect the camera to the phone.

Once the connection from the camera to the phone has been established, open Content Browser Mobile and it should find the FX9. If it doesn’t find it straight away, swipe down with your finger to refresh the connection list. Then select the camera to connect to it.

Once connected this way you will have all the same options that you would have if connected the other way around (using Access Point mode), but the connection tends to be much, much more stable. In addition, you can now use the camera’s FTP functions to upload files from the camera to remote servers via your phone’s cellular data connection.

If you want to create a bigger network then consider buying one of the many small battery powered WiFi routers or a dedicated 4G MiFi hotspot and connect everything to that. Content Browser Mobile should be able to find any camera connected to the same network. Plus, if you use a WiFi router you can connect several phones to the same camera.

Struggling With Blue LED Lighting? Try Turning On The Adaptive Matrix.

It’s a common problem. You are shooting a performance or event where LED lighting has been used to create dramatic coloured lighting effects. The intense blue from many types of LED stage lights can easily overload the sensor and instead of looking like a nice lighting effect the blue light becomes an ugly splodge of intense blue that spoils the footage.

Well there is a tool hidden away in the paint settings of many recent Sony cameras that can help. It’s called “adaptive matrix”.

When adaptive matrix is enabled and the camera sees intense blue light, such as the light from a blue LED stage light, the matrix adapts and reduces the saturation of the blue colour channel in the problem areas of the image. This can greatly improve the way such lighting looks. But be aware that if you are trying to shoot objects with very bright blue colours, perhaps even a bright blue sky, the adaptive matrix may desaturate them. Because of this the adaptive matrix is normally turned off by default.

If you want to turn it on, it’s normally found in the camera’s paint and matrix settings, and it’s simply a case of setting adaptive matrix to on. I recommend that when you don’t actually need it you turn it back off again.

Most of Sony’s broadcast quality cameras produced in the last 5 years have the adaptive matrix function, including the FS7, FX9, Z280, Z450, Z750, F5/F55 and many others.

Why Can’t I Get Third Party BP-U Batteries any more?

In the last month or so it has become increasingly hard to find dealers or stores with 3rd party BP-U style batteries in stock.

After a lot of digging around and talking to dealers and battery manufacturers, it became apparent that Sony were asking the manufacturers of BP-U style batteries to stop making and selling them or face legal action, the reason given being that the batteries infringe Sony’s intellectual property rights.

Why Is This Happening Now?

It appears that the reason for this clampdown is that the design of some of these 3rd party batteries was such that the battery could be inserted into the camera in a way that sent power through the data pins instead of the power pins. This will burn out the circuit boards in the camera, and the camera will no longer work.

Users of these damaged cameras, unaware that the problem was caused by the battery, were sending them back to Sony for repair under warranty. I can imagine that many arguments would have then followed over who was to pay for these potentially very expensive repairs or camera replacements.

So it appears that to prevent further issues Sony is trying to stop potentially damaging batteries from being manufactured and sold.

This is good and bad. Of course no one wants to use a battery that could result in the need to replace a very expensive camera with a new one (and if you were not aware it was the battery you could also damage the replacement camera). But many of us, myself included, have been using 3rd party batteries so that we can have a D-Tap power connection on the battery to power other devices such as monitors.

Only Option – BP-U60T?

Sony don’t produce batteries with D-Tap outlets. They do make a battery with a Hirose connector (the BP-U60T), but that’s not what we really want, and compared to the 3rd party batteries it’s very expensive and the capacity isn’t all that high.

Sony BP-U60T with 4 pin Hirose DC out.

So where do we go from here?

If you are going to continue to use 3rd party batteries, do be very careful about how you insert them and be warned that there is the potential for serious trouble. I don’t know how widespread the problem is.

We can hope that maybe Sony will either start to produce batteries with a D-Tap of their own, or perhaps work with a range of chosen 3rd party battery manufacturers to find a way to produce safe batteries with D-Tap outputs under licence.

DaVinci Resolve 16.1.2 Released.

Blackmagic Design have just released the latest update to DaVinci Resolve. If you have been experiencing crashes when using XAVC material from the PXW-FX9 I recommend you download and install this update.

If you are not a Resolve user and are struggling with grading or getting the very best from any log or raw camera, then I highly recommend you take a look at DaVinci Resolve. It’s also a very powerful edit package. The best bit is that the free version supports most cameras. If you need full MXF support you will need to buy the Studio version, but with a one-off cost of only $299 USD it really is a bargain and gets you away from any horrid subscription services.

https://www.blackmagicdesign.com/support/family/davinci-resolve-and-fusion