Tag Archives: grading

Can DaVinci Resolve steal the edit market from Adobe and Apple?

I have been editing with Adobe Premiere since around 1994, although I took a rather long break between 2001 and 2011 when I switched over to Apple and Final Cut Pro, which in many ways used to be very similar to Premiere (I believe some of the same developers worked on both). My FCP edit stations were always multi-core Mac towers: the old G5s first, then later the Intel towers. Then along came FCP-X. I just didn't get along with FCP-X when it first came out. I'm still not a huge fan of it now, but I will happily concede that FCP-X is a very capable, professional edit platform.

So in 2011 I switched back to Adobe Premiere as my edit platform of choice. Along the way I have also used various versions of Avid's software, which is another capable platform.

But right now I'm really not happy with Premiere. Over the last couple of years it has become less stable than it used to be. I run it on a MacBook Pro, which is a well defined hardware platform, yet I still get stability issues. I'm also experiencing problems with gamma and level shifts that just shouldn't be there. In addition, Premiere is not very good with many long-GOP codecs; FCP-X seems to make light work of XAVC-L compared to Premiere. Furthermore, Adobe's Media Encoder, which once used to be one of the first encoders to get new codecs and features, is now lagging behind: Apple's Compressor can now output the full range of HDR file types, while Media Encoder can only do HDR10. If you don't know, it is possible to buy Compressor on its own.

Meanwhile DaVinci Resolve has been my grading platform of choice for a few years now. I have always found it much easier to get the results and looks that I want from Resolve than from any edit software – this isn’t really a surprise as after all that’s what Resolve was originally designed for.

DaVinci Resolve is a great grading package and its edit capabilities are getting better and better.

The last few versions of Resolve have become much faster thanks to some major processing changes under the hood, and in addition there has been a huge amount of work on Resolve's edit capabilities. It can now be used as a fully featured edit platform. I recently used Resolve to edit some simpler projects that were going to be graded, as this way I could stay in the same software for both processes, and you know what, it's a pretty good editor. There are, however, a few things that I find a bit funky and frustrating in the edit section of Resolve at the moment. Some of that may simply be because I am less familiar with it for editing than I am with Premiere.

Anyway, on to my point. Resolve is getting to be a pretty good edit platform and it’s only going to get better. We all know that it’s a really good and very powerful grading platform and with the recent inclusion of the Fairlight audio suite within Resolve it’s pretty good at handling audio too. Given that the free version of Resolve can do all of the edit, sound and grading functions that most people need, why continue to subscribe to Adobe or pay for FCP-X?

The cost of the latest generations of Apple computers is expanding the price gap between them and similarly specced Windows machines, and the new MacBooks lack built-in ports like HDMI and USB3 that we all use every day (you now have to use adapters and dongles). The Apple ecosystem is just not as attractive as it used to be. Resolve is cross platform, so a Mac user can stay with Apple if they wish, or move over to Windows or Linux whenever they want. You can even switch platforms mid project if you want: I could start an edit on my MacBook and then do the grade on a PC workstation, staying with Resolve through the complete process.

Even if you need the extra features of the full version, like very good noise reduction, facial recognition, 4K DCI output or HDR scopes, it's still good value as it currently only costs $299/£229, which is less than a year's subscription to Premiere CC.

But what about the rest of the Adobe Creative Suite? Well, you don't have to subscribe to the whole suite. You can just get Photoshop or After Effects. But there are also many alternatives. Again, Blackmagic Design have Fusion 9, which is a very impressive VFX package used for many Hollywood movies, and like Resolve there is also a free version with a very comprehensive tool set, or again for just $299/£229 you get the full version with all its retiming tools etc.

Blackmagic Design's Fusion is a very impressive visual effects package for Mac and PC.

For a Photoshop replacement you have GIMP, which can do almost everything that Photoshop can do. You can even use Photoshop filters within GIMP. The best part is that GIMP is free and works on both Macs and PCs.

So there you have it. It looks like Blackmagic Design are really serious about taking a big chunk of Adobe Premiere's users. Resolve and Fusion are cross platform so, like Adobe's products, it doesn't matter whether you want to use a Mac or a PC. But for me the big thing is that you own the software. You are not going to be paying out rather a lot of money month after month for something that, right now, is in my opinion somewhat flakey.

I’m not quite ready to cut my Creative Cloud subscription yet, maybe on the next version of Resolve. But it won’t be long before I do.


Sony Venice: Dual ISOs, 1-stop NDs and Grading via Metadata.

With the first of the production Venice cameras now starting to find their way to some very lucky owners it’s time to take a look at some features that are not always well understood, or that perhaps no one has told you about yet.

Dual Native ISOs: What does this mean?

An electronic camera uses a piece of silicon to convert photons of light into electrons of electricity. The efficiency at doing this is determined by the material used. Then the amount of light that can be captured and thus the sensitivity is determined by the size of the pixels. So, unless you physically change the sensor for one with different sized pixels (which will in the future be possible with Venice) you can’t change the true sensitivity of the camera. All you can do is adjust the electronic parameters.

With most video cameras the ISO is changed by increasing the amount of amplification applied to the signal coming off the sensor. Adding more gain or increasing the amplification will result in a brighter picture. But if you add more amplification/gain then the noise from the sensor is also amplified by the same amount. Make the picture twice as bright and normally the noise doubles.

In addition there is normally an optimum amount of gain where the full range of the signal coming from the sensor will be matched perfectly with the full recording range of the chosen gamma curve. This optimum gain level is what we normally call the “Native ISO”. If you add too much gain the brightest signal from the sensor would be amplified too much and exceed the recording range of the gamma curve. Apply too little gain and your recordings will never reach the optimum level and darker parts of the image may be too dark to be seen.

As a result the Native ISO is where you have the best match of sensor output to gain. Not too much, not too little and hopefully low noise. This is typically also referred to as 0dB gain in an electronic camera and normally there is only 1 gain level where this perfect harmony between sensor, gain and recording range is achieved, this becoming the native ISO.
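
To make the gain-and-noise relationship above concrete, here is a minimal numeric sketch (the values are arbitrary and purely illustrative): amplification scales the signal and the noise by the same factor, so the signal-to-noise ratio is unchanged.

```python
# Illustrative only: gain scales the sensor signal and its noise equally,
# so turning up the gain makes the picture brighter but no cleaner.

def apply_gain(signal, noise, gain):
    """Return the amplified signal and the equally amplified noise."""
    return signal * gain, noise * gain

signal, noise = 100.0, 2.0            # arbitrary sensor output and noise floor
snr_before = signal / noise           # 50.0

bright_signal, bright_noise = apply_gain(signal, noise, 2.0)
snr_after = bright_signal / bright_noise

print(snr_before, snr_after)          # 50.0 50.0 - twice as bright, twice the noise
```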

Side Note: On an electronic camera, ISO is an exposure rating, not a sensitivity measurement. Enter the camera's ISO rating into a light meter and you will get the correct exposure. But it doesn't really tell you how sensitive the camera is, as ISO makes no allowance for increasing noise levels, which will limit the darkest thing a camera can see.

Tweaking the sensor.

However, there are some things we can tweak on the sensor that affect how big the signal coming from the sensor is. The sensor's pixels are analog devices. A photon of light hits the silicon photo receptor (pixel) and gets converted into an electron of electricity, which is then stored within the structure of the pixel as an analog signal until the pixel is read out by a circuit that converts the analog signal to a digital one, at the same time adding a degree of noise reduction. It's possible to shift the range that the A to D converter operates over and the amount of noise reduction applied, to obtain a different readout range from the sensor. By doing this (and/or other similar techniques; Venice may use some other method) it's possible to produce a single sensor with more than one native ISO.

A camera with dual ISOs will have two different operating ranges. One tuned for higher light levels and one tuned for lower light levels. Venice will have two exposure ratings: 500 ISO for brighter scenes and 2500 ISO for shooting when you have less light. With a conventional camera, to go from 500 ISO to 2500 ISO you would need to add around 14dB of gain, and this would increase the noise by a factor of about five. However with Venice and its dual ISOs, as we are not adding gain but instead altering the way the sensor is operating, the noise difference between 500 ISO and 2500 ISO will be very small.
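
The numbers behind that comparison are easy to check. A quick sketch, assuming the usual relationship of one stop per doubling and roughly 6dB of amplitude gain per stop:

```python
import math

# Going from ISO 500 to ISO 2500 on a conventional camera means adding
# log2(2500/500) = ~2.3 stops of gain, i.e. roughly 14dB of amplification.

def iso_gain(base_iso, target_iso):
    """Stops and dB of amplification needed to rate base_iso at target_iso."""
    ratio = target_iso / base_iso
    stops = math.log2(ratio)
    db = 20.0 * math.log10(ratio)     # amplitude gain in dB
    return stops, db

stops, db = iso_gain(500, 2500)
print(round(stops, 2), round(db, 1))  # 2.32 14.0
```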

You will have the same dynamic range at both ISO’s. But you can choose whether to shoot at 500 ISO for super clean images at a sensitivity not that dissimilar to traditional film stocks. This low ISO makes it easy to run lenses at wide apertures for the greatest control over the depth of field. Or you can choose to shoot at the equivalent of 2500 ISO without incurring a big noise penalty.

One of Venice's key features is that it's designed to work with anamorphic lenses. Anamorphic lenses are typically not as fast as their spherical counterparts. Furthermore, some anamorphic lenses (particularly vintage ones) need to be stopped down a little to prevent excessive softness at the edges. So having a second, higher ISO rating will make it easier to work with slower lenses or in lower light levels.


Next it's worth thinking about how you might want to use the camera's ND filters. Film cameras don't have built-in ND filters, and an Arri Alexa does not have built-in NDs either. So most cinematographers will work on the basis of a cinema camera having a single recording sensitivity.

The ND filters in Venice provide uniform, full spectrum light attenuation. Sony are incredibly fussy over the materials they use for their ND filters and you can be sure that the filters in Venice do not degrade the image. I was present for the pre-shoot tests for the European demo film and a lot of time was spent testing them; we couldn't find any issues. If you introduce 1 stop of ND, the camera becomes 1 stop less sensitive to light. In practice this is no different to having a camera with a sensor 1 stop less sensitive. So the built-in ND filters can, if you choose, be used to modify the base sensitivity of the camera in 1 stop increments, by up to 8 stops.

So with the dual ISOs and the NDs combined you have a camera that you can set up to operate at the equivalent of 2 ISO all the way up to 2500 ISO in 1 stop steps (by using the 2500 ISO and 500 ISO bases together you can have approximately half-stop steps between 10 ISO and 625 ISO). That's an impressive range, and at no stage are you adding extra gain. There is no other camera on the market that can do this.
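
A quick sketch of how those effective ratings fall out: the effective ISO is simply the native ISO halved for each stop of ND you dial in.

```python
# Combining each native ISO with 0 to 8 stops of built-in ND gives a
# ladder of effective exposure ratings, with no added gain at any point.

def effective_isos(native_iso, max_nd_stops=8):
    """Effective ISO for each ND setting from clear to max_nd_stops."""
    return [round(native_iso / 2 ** nd) for nd in range(max_nd_stops + 1)]

print(effective_isos(500))   # [500, 250, 125, 62, 31, 16, 8, 4, 2]
print(effective_isos(2500))  # [2500, 1250, 625, 312, 156, 78, 39, 20, 10]
```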

On top of all this we do, of course, still have the ability to alter the Exposure Index of the camera's LUTs to move the exposure mid-point up and down within the dynamic range. Talking of LUTs, I hope to have some very interesting news about the LUTs for Venice. I've seen a glimpse of the future and I have to say it looks really good!


The raw and X-OCN material from a Venice camera (and from a PMW-F55 or F5 with the R7 recorder) contains a lot of dynamic metadata. This metadata tells the decoder in your grading software exactly how to handle the linear sensor data stored in the files. It tells your software where in the recorded data range the shadows start and finish, where the mid range sits and where the highlights start and finish. It also informs the software how to decode the colors you have recorded.

I recently spent some time with Sony Europe's colour grading guru Pablo Garcia at the Digital Motion Picture Center in Pinewood. He showed me how you can manipulate this metadata to alter the way the X-OCN is decoded, changing the look of the images you bring into the grading suite. Using a beta version of Blackmagic's DaVinci Resolve software, Pablo was able to go into the clip's metadata in real time and, simply by scrubbing over the metadata settings, adjust the shadows, mids and highlights BEFORE the X-OCN was decoded. It was really incredible to see the amount of data that Venice captures in the highlights and shadows. By adjusting the metadata you are tailoring the way the file is decoded to suit your own needs and getting the very best video information for the grade. Need more highlight data? You've got it. Want to boost the shadows? You can, at the file data level, before it's converted to a traditional video signal.

It’s impressive stuff as you are manipulating the way the 16 bit linear sensor data is decoded rather than a traditional workflow which is to decode the footage to a generic intermediate file and then adjust that. This is just one of the many features that X-OCN from the Sony Venice offers. It’s even more incredible when you consider that a 16 bit linear  X-OCN LT file is similar in size to 10 bit XAVC-I(class 480) and around half the size of Apples 10 bit ProRes HQ.  X-OCN LT looks fantastic and in my opinion grades better than XAVC S-Log. Of course for a high end production you will probably use the regular X-OCN ST codec rather than the LT version, but ST is still smaller than ProRes HQ. What’s more X-OCN is not particularly processor intensive, it’s certainly much easier to work with X-OCN than cDNG. It’s a truly remarkable technology from Sony.

Next week I will be shooting some more tests with a Venice camera as we explore the limits of what it can do. I'll try and get some files for you to play with.

Why do we strive to mimic film? What is the film look anyway?


Please don’t take this post the wrong way. I DO understand why some people like to try and emulate film. I understand that film has a “look”. I also understand that for many people that look is the holy grail of film production. I’m simply looking at why we do this and am throwing the big question out there which is “is it the right thing to do”? I welcome your comments on this subject as it’s an interesting one worthy of discussion.

In recent years, with the explosion of large sensor cameras with great dynamic range, it has become very common practice to take the images these cameras capture and apply a grade or LUT that mimics the look of many of today's major movies. This is often simply referred to as the "film look".

This look seems to be becoming more and more extreme as creators attempt to make their film more film like than the one before, leading to a situation where the look becomes very distinct as opposed to just a trait of the capture medium. A common technique is the "teal and orange" look, where the overall image is tinted teal and then skin tones and other similar tones are made slightly orange. This is done to create colour contrast between the faces of the cast and the background, as teal and orange are on opposite sides of the colour wheel.

Another variation of the "film look" is the flat look. I don't really know where this look came from, as it's not really very film like at all. It probably comes from shooting with a log gamma curve, which results in a flat, washed out looking image when viewed on a conventional monitor. Then, because shooting on log is "cool", much of the flatness is left in the image in the grade because it looks different to regular TV (or it may simply be that it's easier to create a flat look than a good-looking, high-contrast one). Later in the article I have a nice comparison of these two types of "film look".

Not Like TV!

Not looking like TV or video may be one of the biggest drivers for the "film look". We watch TV day in, day out. Well produced TV will have accurate colours, natural contrast (over a limited range at least) and, if the TV is set up correctly, should be pretty true to life. Of course there are exceptions to this, like many daytime TV or game shows where the saturation and brightness is cranked up to make the programmes vibrant and vivid. But the aim of most TV shows is to look true to life. Perhaps this is one of the drivers to make films look different, so that they are not true to life, more like a slightly abstract painting or other work of art. Colour and contrast can help set up different moods, dull and grey for sadness, bright and colourful for happy scenes etc, but this should be separate from the overall look applied to a film.

Another aspect of the TV look comes from the fact that most TV viewing takes place in a normal room where light levels are not controlled. As a result bright pictures are normally needed, especially for daytime TV shows.

But What Does Film Look Like?

But what does film look like? As some of you will know I travel a lot and spend a lot of time on airplanes. I like to watch a film or 2 on longer flights and recently I’ve been watching some older films that were shot on film and probably didn’t have any of the grading or other extensive manipulation processes that most modern movies go through.

Let's look at a few frames from some of those movies shot on film and see what they look like.

Lawrence of Arabia.

The all-time classic Lawrence of Arabia. This film is surprisingly colourful. Reds, blues and yellows are all well saturated. The film is high contrast; that is, it has very dark blacks, not crushed, but deep and full of subtle textures. Skin tones are around 55 IRE and perhaps very slightly skewed towards brown/red, but then the cast are all rather sun tanned. But I wouldn't call the skin tones orange. Diffuse whites are typically around 80 IRE, and they are white, not tinted or coloured.

Braveheart.

When I watched Braveheart, one of the things that stood out to me was how green the foliage and grass was. The strong greens really stood out in this movie compared to more modern films. Overall it’s quite dark, skin tones are often around 45 IRE and rarely more than 55 IRE, very slightly warm/brown looking, but not orange. Again it’s well saturated and high contrast with deep blacks. Overall most scenes have a quite low peak and average brightness level. It’s quite hard to watch this film in a bright room on a conventional TV, but it looks fantastic in a darkened room.

Raiders Of The Lost Ark

Raiders of the Lost Ark does show some of the attributes often used for the modern film look. Skin tones are warm and have a slight orange tint, and overall the movie is very warm looking. A lot of the sets use warm colours, with browns and reds being prominent. Colours are well saturated. Again we have high contrast with deep blacks and those much lower-than-TV skin tones, typically 50-55 IRE in Raiders. Look at the foliage and plants though; they are close to what you might call TV greens, i.e. realistic shades of green.

A key thing I noticed in all of these (and other) older movies is that overall the images are darker than we would use for daytime TV. Skin tones in movies seem to sit around 55 IRE. Compare that to the typical use of 70% zebras for faces on TV. Also, whites are generally lower, with diffuse white often sitting at around 75-80%. One important consideration is that films are designed to be shown in dark cinema theatres, where white at 75% looks pretty bright. Compare that to watching TV in a bright living room, where to make white look bright you need it as bright as you can get. Having diffuse whites that bit lower in the display range leaves a little more room to separate highlights from whites, giving the impression of a greater dynamic range. It also brings the mid range down a bit, so the shadows also look darker without having to crush them.

Side Note: When using Sony's Hypergammas and Cinegammas they are supposed to be exposed so that white is around 70-75%, with skin tones around 55-60%. If used like this with a suitable colour matrix such as "cinema" they can look quite film like.

If we look at some recent movies the look can be very different.

The Revenant

The Revenant is a gritty film and it has a gritty look. But compare it to Braveheart and it's very different. We have the same much lower skin tone and diffuse white levels, but where has the green gone? The sky is very pale, and the sky and trees are all tinted slightly towards teal and de-saturated. Overall there is only a very small colour range in the movie. Nothing like the 70mm film of Lawrence of Arabia or the 35mm film of Braveheart.

Dead Men Tell No Tales.

In the latest instalment of the Pirates of the Caribbean franchise the images are very "brown". Notice how even the whites of the ladies' dresses or the soldiers' uniforms are slightly brown. The sky is slightly grey (I'm sure the real sky was much bluer than this). The palm fronds look browner than green, and Jack Sparrow looks like he's been using too much fake tan, as his face is borderline orange (and almost always quite dark).

Wonder Woman.

Wonder Woman is another very brown movie. In this frame we can see that the sky is quite brown, while the grass is pushed towards teal and de-saturated; it certainly isn't the colour of real grass. Overall, colours are subdued with the exception of skin tones.

These are fairly typical of most modern movies. Colours are generally quite subdued, especially greens and blues. The sky is rarely a vibrant blue; grass is rarely a grassy green. Skin tones tend to be very slightly orange and around 50-60 IRE. Blacks are almost always deep and the images contrasty. Whites are rarely actually white; they tend to be tinted either slightly brown or slightly teal. Steel blues and warm browns are the favoured hues. These are very different looking images to the movies shot on film that didn't go through extensive post production manipulation.

So the film look isn't really about making it look like it was shot on film; it's a stylised look that has become stronger and stronger in recent years, with most movies having elements of it. So in creating the "film look" we are not really mimicking film, but copying a now almost standard colour grading recipe that has some film-style traits.


In most cases these are not unpleasant looks, and for some productions the look can add to the film, although sometimes it can be taken to noticeable and objectionable extremes. However, we do now have cameras that can capture huge colour ranges, and we have the display technologies to show them. Yet we often choose to deliberately limit what we use, and very often distort the colours, in our quest for the "film look".

HDR TV’s with Rec2020 colour can show both a greater dynamic range and a greater colour range than we have ever seen before. Yet we are not making use of this range, in particular the colour range except in some special cases like some TV commercials as well as high end wild life films such as Planet Earth II.

This TV commercial for TUI has some wonderful vibrant colours that are not restricted to just browns and teal, yet it looks very film like. It does have an overall warm tint, but the other colours are allowed to punch through. It feels like the big budget production that it clearly was, without having to resort to the modern de facto restrictive "film look" colour palette. Why can't feature films look like this? Why do they need to be dull with a limited colour range? Why do we strive to deliberately restrict our colour palette in the name of fashion?

What’s even more interesting is what was done for the behind the scenes film for the TUI advert…..

The producers of the BTS film decided to go with an extremely flat, washed-out look, another form of modern "film look" that really couldn't be further from film. When a typical viewer watches this, do they get it in the same way as those of us who work in the industry do? Do they understand the significance of the washed-out, flat, low-contrast pictures, or do they just see weird-looking milky pictures that lack colour, with odd skin tones? The BTS film just looks wrong to me. It looks like it was shot with log and not graded. Personally, I don't think it looks cool or stylish; it just looks wrong and cheap compared to the lush imagery in the actual advert (perhaps that was the intention).

I often see people looking for a film look LUT. Often they want to mimic a particular film. That's fine; it's up to them. But if everyone starts to home in on one particular look or style then the films we watch will all look the same. That's not what I want. I want lush, rich colours where appropriate. Then I might want to see a subdued look in a period piece or a vivid look for a 70's film. Within the same movie, colour can be used to differentiate between different parts of the story. Take Woody Allen's Cafe Society, shot by Vittorio Storaro, for example. The New York scenes are grey and moody, while the scenes in LA that portray a fresh start are vibrant and vivid. This is, I believe, important: to use colour and contrast to help tell the story.

Our modern cameras give us an amazing palette to work with, and we have tools such as DaVinci Resolve to manipulate those colours with relative ease. I believe we should be more adventurous with our use of colour. Reducing exposure levels a little compared to the nominal TV and video levels (skin tones at 70%, diffuse whites at 85-90%) helps replicate the film look, and also leaves a bit more space in the highlight range to separate highlights from whites, which really helps give the impression of a more contrasty image. Blacks should be black, not washed out, but they shouldn't be crushed either.

Above all else learn to create different styles. Don’t be afraid of using colour to tell your story and remember that real film isn’t just brown and teal, it’s actually quite colourful. Great artists tend to stand out when their works are different, not when they are the same as everyone else.


Big Update for Sony Raw Viewer.

Sony’s Raw Viewer for raw and X-OCN file manipulation.

Sony’s raw viewer is an application that has just quietly rumbled away in the background. It’s never been a headline app, just one of those useful tools for viewing or transcoding Sony’s raw material. I’m quite sure that the majority of users of Sony’s raw material do their raw grading and processing in something other than raw viewer.

But this new version (2.3) really needs to be taken very seriously.

Better Quality Images.

For a start, Sony have always had the best de-bayer algorithms for their own raw content. If you de-bayer Sony raw in Resolve and compare it to the output from previous versions of Raw Viewer, the Raw Viewer output always looked just that little bit cleaner. The latest version of Raw Viewer is even better, as new and improved algorithms have been included. It might not render as fast, but it does look very nice and can certainly be worth using for any "problem" footage.

Class 480 XAVC and X-OCN.

Raw Viewer version 2.3 adds new export formats and support for Sony’s X-OCN files. You can now export to both XAVC class 480 and class 300, 10 or 12bit ProRes (HD only unfortunately), DPX and SStP.  XAVC Class 480 is a new higher quality version of XAVC-I that could be used as a ProResHQ replacement in many instances.

Improved Image Processing.

Colour grading is now easier than ever thanks to support for Tangent Wave tracker-ball control panels, along with new grading tools such as Tone Curve control. There is support for EDLs and batch processing, with all kinds of process queue options allowing you to prioritise your renders. Although Raw Viewer doesn't have the power of a full grading package, it is very useful for dealing with problem shots, as the higher quality de-bayer provides a cleaner image with fewer artefacts. You can always take advantage of this by transcoding from raw to 16 bit DPX or OpenEXR, so that the high quality de-bayer takes place in Raw Viewer, and then do the actual grading in your chosen grading software.

HDR and Rec.2100

If you are producing HDR content, version 2.3 also adds support for the PQ and HLG gamma curves and Rec.2100. It also now includes HDR waveform displays. You can use Raw Viewer to create HDR LUTs too.

So all-in-all, Raw Viewer has become a very powerful tool for Sony's raw and X-OCN content that can bring a noticeable improvement in image quality compared to de-bayering in many of the more commonly used grading packages.

Download Link for Sony Raw Viewer: http://www.sonycreativesoftware.com/download/rawviewer


ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It's designed to act as a common standard that will work with any camera, so that colourists can apply the same grades to footage from any camera and get the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of all the necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUTs to convert your footage to the right output standard. Where you place these LUTs in your workflow path can have a big impact on your ability to grade your footage and the quality of your output. ACES takes care of most of this for you, so you don't need to worry about making sure you are grading "under the LUT" etc.

ACES works on footage in scene referred linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony's Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, then there is still an IDT to go from Sony's variation of scene referred linear to the ACES variation, but this is a far simpler conversion with fewer losses or image degradation as a result.

The IDT is a type of LUT that converts from the camera's own recording space to ACES linear space. The camera manufacturer has to provide detailed information about the way it records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the camera manufacturer's colour science or matrix/gamma tables can create one. In theory, after converting to ACES, all cameras should look very similar, and the same grades and effects can be applied to any camera or gamma with the same end result. However, variations between colour filters, dynamic range etc mean that there will still be individual characteristics to each camera, but any such variation is minimised by using ACES.
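
As a concrete example of the kind of log-to-linear conversion that sits at the heart of an IDT, here is Sony's published S-Log3 to linear transfer function sketched in Python. The constants come from Sony's S-Log3 technical documentation; note that this is only the 1D tone curve, and a real IDT would also apply a matrix to move from the camera Gamut (e.g. S-Gamut3) to the ACES primaries.

```python
# Sony's published S-Log3 -> linear reflection transfer function.
# Input x is a normalised code value (0..1); output is scene-linear
# reflectance, where 0.18 is mid grey. A full IDT would also apply a
# 3x3 colour matrix (S-Gamut3 -> ACES primaries), omitted here.

def slog3_to_linear(x):
    if x >= 171.2102946929 / 1023.0:
        return (10.0 ** ((x * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    else:
        return (x * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

# Mid grey (18% reflectance) is recorded at code value 420/1023 in S-Log3:
print(round(slog3_to_linear(420.0 / 1023.0), 4))  # 0.18
```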

“Scene referred” means linear light, as per the actual light coming from the scene. No gamma, no colour shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we make them as close as possible, because the pictures are now a true-to-life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene-referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are “display referenced”: the recordings or output are tailored through the use of gamma curves and looks so that they look nice on a monitor that complies with a particular standard, for example Rec709. To some degree a display-referenced camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These “enhancements” to the image can sometimes make grading harder, as you may need to remove or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony's SLog2 and SLog3 will behave almost exactly the same. There will still be differences in the data spread due to the different curves used in the camera and differences in the recording gamut etc. Despite this, the same grade or corrections can be used on any gamma/gamut and very, very similar end results achieved. (According to Sony's white paper, SGamut3 should work better in ACES than SGamut. In general though, the same grades should work more or less the same whether the original is SLog2 or SLog3.)

In an ACES workflow the grade is performed in linear space, so exposure shifts etc are much easier to do. You can still use LUTs to apply a common “look” to a project, but you don't need a LUT within ACES for the grade itself, as ACES takes care of the output transformation from the linear, scene-referred grading domain to your chosen display-referenced output domain. The output process is a two-stage conversion. First the RRT (Reference Rendering Transform) converts from ACES linear to a “film like” intermediate stage with a very large range, in excess of most final output ranges. The idea is that the RRT is a fixed, well defined standard and all the complicated maths is done getting to it. After the RRT you apply a LUT called the ODT (Output Device Transform) to convert to your final chosen output type: Rec709 for TV, DCI-XYZ for cinema DCP etc. This means you do one grading pass and then simply select the type of output you need for each type of master.

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.
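As a tiny example of why grading in linear space is easier: in scene-referred linear, light behaves as it does in the real world, so a one-stop exposure change is nothing more than a multiplication by 2. No gamma curve has to be unpicked first.

```python
def exposure_shift(linear_value, stops):
    """Shift exposure of a scene-linear value by a number of stops.
    Each stop is a doubling (or halving) of light."""
    return linear_value * (2.0 ** stops)

print(exposure_shift(0.18, 1))   # 18% grey pushed up one stop → 0.36
print(exposure_shift(0.18, -1))  # pulled down one stop → 0.09
```

Doing the same thing on display-referenced (gamma-encoded) values would need the gamma removed, the multiply applied, and the gamma re-applied; ACES keeps everything in the domain where the simple maths is the correct maths.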

This all sounds very complicated, and to a degree what's going on under the hood of your software is quite sophisticated. But for the colourist it's often as simple as choosing ACES as your grading mode and then selecting your desired output standard: Rec709, DCI-P3 etc. The software then applies all the necessary LUTs and transforms in all the right places so you don't need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT; you don't need different LUTs or looks for different cameras. I recommend that you give ACES a try.

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes in the gamma or gain within the image. You are starting with 240 shades of grey (code values 16 to 255, assuming recording to 109%) and encoding to 240 shades, so the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let's say, level 128 should now be level 128.5, this can't be done; we can only record whole code values, so it's rounded up or down to the closest one. This rounding reduces the number of shades recorded overall and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth graduation:

128,    129,   130,   131,   132,   133.

Imagine you are doing some grading and your plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5
But we can't record half values, only whole ones, so for 8 bit these get rounded to the nearest whole value:

129,   129,   129,   132,   132,   134

You can see how easily banding will occur, our smooth gradation now has some marked steps.
If you render to 10 bit you get more in-between steps. When the plugin determines that level 128 should become 128.5, this can now actually be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So, translating (approximately) to 10 bit, level 128 would be 499 and 128.5 would be 501:
128.5 = 501

129 = 503

129.4 = 505

131.5 = 513

132 = 515

133.5 = 521

So you can see that we now retain in-between steps which are lost when we render to 8 bit, so our gradation remains much smoother.
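The worked example above can be reproduced in a few lines of Python. This uses simple round-half-up rounding and the approximate 3.9x scale factor between 8 bit and 10 bit steps used in the example; real encoders use their own rounding and range mapping, but the principle is the same.

```python
# The graded (fractional) values the plugin wants to write out
graded = [128.5, 129.0, 129.4, 131.5, 132.0, 133.5]

# Quantise to whole 8-bit code values (round half up)
eight_bit = [int(v + 0.5) for v in graded]

# Quantise to 10-bit: roughly 3.9 ten-bit steps per eight-bit step
ten_bit = [int(v * 3.9 + 0.5) for v in graded]

print(eight_bit)  # → [129, 129, 129, 132, 132, 134]: steps merge, banding
print(ten_bit)    # → [501, 503, 505, 513, 515, 521]: every step survives
```

In the 8 bit output the first three values collapse into one shade and the next two into another, which is exactly the stepping that shows up on screen as banding; the 10 bit output keeps all six values distinct.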