Category Archives: Workflow

ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It’s designed to act as a common standard that will work with any camera, so that colourists can use the same grades on any camera and get the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of any necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUTs to convert your footage to the right output standard. Where you place these LUTs in your workflow can have a big impact on your ability to grade your footage and the quality of your output. ACES takes care of most of this for you, so you don’t need to worry about making sure you are grading “under the LUT” etc.

ACES works on footage in Scene Referred Linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony’s Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, then there is still an IDT to go from Sony’s variation of scene-referred linear to the ACES variation, but this is a far simpler conversion with less loss or image degradation as a result.
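
To make the IDT step less abstract, here is a minimal Python sketch of the log-to-linear half of the job, using Sony’s published S-Log3 formula. A real IDT also applies a colour matrix (for example S-Gamut3 to ACES primaries), which is omitted here:

```python
import math

# Decode an S-Log3 code value (0.0-1.0, full range) back to
# scene-referred linear reflectance, per Sony's published formula.
def slog3_to_linear(code):
    if code >= 171.2102946929 / 1023.0:
        return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
    return (code * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

# The matching encode, for reference.
def linear_to_slog3(x):
    if x >= 0.01125000:
        return (420.0 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

# 18% grey sits at code 420/1023, roughly 41% of full range in S-Log3
print(round(linear_to_slog3(0.18), 4))  # → 0.4106
```

Running middle grey through the round trip confirms the conversion is lossless; the heavy lifting in a full IDT is the gamut matrix, not the curve.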

The IDT is a type of LUT that converts from the camera’s own recording space to ACES linear space. The camera manufacturer has to provide detailed information about the way it records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the camera manufacturer’s colour science or matrix/gamma tables can create one. In theory, after converting to ACES, all cameras should look very similar and the same grades and effects can be applied to any camera or gamma with the same end result. However, variations between colour filters, dynamic range etc mean that there will still be individual characteristics to each camera, but any such variation is minimised by using ACES.

“Scene Referred” means linear light, as per the actual light coming from the scene. No gamma, no colour shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we should be making them as close as possible, as the pictures should now be a true-to-life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene-referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are “Display Referenced”, where the recordings or output are tailored through the use of gamma curves and looks etc so that they look nice on a monitor that complies with a particular standard, for example 709. To some degree a display-referenced camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These “enhancements” to the image can sometimes make grading harder, as you may need to remove or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony’s S-Log2 and S-Log3 will behave almost exactly the same. But there will still be differences in the data spread due to the different curves used in the camera and due to differences in the recording gamut etc. Despite this, the same grade or corrections can be used on any type of gamma/gamut and very, very similar end results achieved. (According to Sony’s white paper, S-Gamut3 should work better in ACES than S-Gamut. In general though the same grades should work more or less the same whether the original is S-Log2 or S-Log3.)

In an ACES workflow the grade is performed in linear space, so exposure shifts etc are much easier to do. You can still use LUTs to apply a common “look” to a project, but you don’t need a LUT within ACES for the grade, as ACES takes care of the output transformation from the linear, scene-referred grading domain to your chosen display-referenced output domain. The output process is a two-stage conversion. First comes the RRT or Reference Rendering Transform: a very computationally complex transformation that goes from linear to a “film like” intermediate stage with a very large range, in excess of most final output ranges. The idea is that the RRT is a fixed and well defined standard, and all the complicated maths is done getting to the RRT. After the RRT you then apply a LUT called the ODT or Output Device Transform to convert to your final chosen output type: Rec-709 for TV, DCI-XYZ for cinema DCP etc. This means you do one grading pass and then just select the type of output look you need for each type of master.
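
To see why linear makes exposure shifts easy: in scene-referred linear, one stop up is simply a doubling of every value, whereas on display-referred (gamma-encoded) values the same change requires decoding first. A rough sketch, using a plain 2.4 power function as a stand-in for a display gamma:

```python
# In scene-referred linear, opening up one stop is a pure multiply.
def push_one_stop_linear(value):
    return value * 2.0

# On gamma-encoded (display-referred) values a plain multiply is NOT
# a one-stop change; the correct route is decode -> multiply -> re-encode.
def push_one_stop_gamma(code, gamma=2.4):
    linear = code ** gamma                   # decode to linear
    return (linear * 2.0) ** (1.0 / gamma)   # push one stop, re-encode

print(push_one_stop_linear(0.18))            # → 0.36
print(round(push_one_stop_gamma(0.5), 4))    # → 0.6674, not 1.0
```

Note how the encoded value moves by far less than a factor of two; this is exactly the bookkeeping that grading in linear lets you skip.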

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.

This all sounds very complicated, and to a degree what’s going on under the hood of your software is quite sophisticated. But for the colourist it’s often as simple as choosing ACES as your grading mode and then selecting your desired output standard, 709, DCI-P3 etc. The software then applies all the necessary LUTs and transforms in all the right places so you don’t need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT; you don’t need different LUTs or looks for different cameras. I recommend that you give ACES a try.


It’s all in the grade!

So I spent much of last week shooting a short film and commercial (more about the shoot in a separate article). It was shot in raw using my Sony F5, but could have been shot with a wide range of cameras. The “look” for this production is very specific: much of it is set late in the day or in the evening, requiring a gentle romantic look.

In the past much of this look would have been created in camera. Shooting with a soft filter for the romantic look, shifting the white balance with warming cards or a dialled in white balance for a warm golden hour evening look. Perhaps a custom picture profile or scene file to alter the look of the image coming from the camera. These methods are still very valid, but thanks to better recording codecs and lower cost grading and post production tools, these days it’s often easier to create the look in post production.

When you look around on YouTube or Vimeo, most of the showreels and demo reels from people like me will almost always have been graded. Grading is a huge part of the modern finishing process and it makes a huge difference to the final look of a production. So don’t automatically assume everything you see online looked like that when it was shot. It probably didn’t, and a very, very big part of the look tends to be created in post these days.

One further way to work is to go half way to your finished look in camera and then finish off the look in post. For some productions this is a valid approach, but it comes with some risks, and there are some things that, once burnt into the recording, can be hard to change in post. For example, any in-camera sharpening is difficult to remove in post, as are crushed blacks or a skewed or offset white balance.

Also understand that there is a big difference between trying to grade using the colour correction tools in an edit suite and using a dedicated grading package. For many, many years I used to grade using my editing software, simply because that was what I had. Plug-ins such as Magic Bullet Looks are great and offer a quick and effective way to get a range of looks, but while you can do a lot with a typical edit colour corrector, it pales into insignificance compared to what can be done with a dedicated grading tool, which lets you not only create a look but then adjust individual elements of the image.

When it comes to grading tools, DaVinci Resolve is probably the one most people have heard of. Resolve Lite is free, yet still incredibly capable (provided you have a computer that will run it). There are lots of other options too, like Adobe SpeedGrade, but the key thing is that if you change your workflow to include lots of grading, then you need to change the way you shoot too. If you have never used a proper grading tool then I urge you to learn how to use one. As processing power improves and these tools become more and more powerful, they will play an ever greater role in video production.

So how should you shoot for a production that will be graded? I’m sure you will have come across the term “shoot flat” and this is often said to be the way you should shoot when you’re going to grade. Well, yes and no. It depends on the camera you are using, the codec, noise levels and many other factors. If you are the DP, Cinematographer or DiT, then it’s your job to know how footage from your camera will behave in post production so that you can provide the best possible blank canvas for the colourist.

What is shooting flat exactly? Let’s say your monitor is a typical LCD monitor. It will be able to show 6 or 7 stops of dynamic range. Black at stop 0 will appear to be black and whites at stop 7 will appear bright white. If your camera has a 7 stop range then the blacks and whites from the camera will be mapped 1:1 with the monitor and the picture will have normal contrast. But what happens when you have a camera that can capture double that range, say 12 to 14 stops? The bright whites captured by the camera will be significantly brighter than before. If you then take that image and try to show it on the same LCD monitor you have an issue, because the LCD cannot go any brighter than before, so the much brighter whites from the high dynamic range shot are shown at the same brightness as the original low dynamic range shot. Not only that, but the larger tonal range is now squashed together into the monitor’s limited range. This reduces the contrast in the viewed image and as a result it looks flat.
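
The squashing can be put in numbers: map the captured range linearly (in stops) into the display’s fixed range and the contrast available per stop shrinks in proportion. A toy sketch, assuming a hypothetical 7-stop display:

```python
# Per-stop share of the display's range when a scene range is squeezed
# linearly (in stops) into a fixed display range. Illustrative numbers only.
DISPLAY_STOPS = 7.0

def per_stop_contrast(scene_stops):
    # fraction of the display's range given to each captured stop
    return DISPLAY_STOPS / scene_stops

print(per_stop_contrast(7.0))    # → 1.0  (1:1 mapping, normal contrast)
print(per_stop_contrast(14.0))   # → 0.5  (each stop gets half the range: flat)
```

Halving the per-stop contrast is exactly the washed-out look people describe as “flat”; the information is all still there, just compressed.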

That’s a real “shoot flat” image (a wide dynamic range shown on a typical dynamic range monitor), but you have to be careful, because you can also create a flat looking image by raising the camera’s black level or black gamma, or by reducing the white level. Doing this reduces the contrast in the shadows and mid tones and will make the pictures look low contrast and flat. But raising the black level or black gamma or reducing the white point rarely increases the dynamic range of a camera; most cameras’ dynamic range is limited by the way they handle highlights and over exposure, not by the shadows or black level. So just beware: not all flat looking images bring real post production advantages. I’ve seen many examples of special “flat” picture profiles or scene files that don’t actually add anything to the captured image. It’s all about dynamic range, not contrast range. See this article for more in-depth info on shooting flat.

If you’re shooting for grading, shooting flat with a camera with a genuinely large dynamic range is often beneficial, as you provide the colourist with a broader dynamic range image that he/she/you can then manipulate so that it looks good on typically small dynamic range TVs and monitors. But excessively raising the black level or black gamma rarely helps the colourist, as this just introduces an area that will need to be corrected to restore good contrast, rather than adding anything new or useful to the image. You also need to consider that it’s all very well shooting with a camera that can capture a massive dynamic range, but as there is no way to ever show that full range, compromises must be made in the grade so that the picture looks nice. An example of this would be a very bright sky. In order to show the clouds in the sky, the rest of the scene may need to be darkened, as the sky is always brighter than everything else in the real world. This might mean the mid tones have to be rather dark in order to preserve the sky. The other option would be to blow the sky out in the grade to get a brighter mid range. Either way, we don’t have a way of showing the 14 stop range available from cameras like the F5/F55 with current display technologies, so a compromise has to be made in post, and this should be in the back of your mind when shooting scenes with large dynamic ranges. With a low dynamic range camera, you, the camera operator, would choose whether to let the highlights over expose to preserve the mid range or whether to protect the highlights and put up with a darker mid range. With these high dynamic range cameras that decision is largely moved to post production, but you should still be looking at your mid tones and, if needed, adding a bit of extra illumination so that the mids are not fighting the highlights.

In addition to shooting flat there is a lot of talk about using log gamma curves: S-Log, S-Log2, LogC etc. Again, IF the camera and recording codec are optimised for log then this can be an extremely good approach. Remember that if you choose to use a log gamma curve then you will also need to adjust the way you expose, to place skin tones etc in the correct part of the log curve. It’s no longer about exposing for what looks good on the monitor or in the viewfinder, but about exposing the appropriate shades in the correct part of the log curve. I’ve written many articles on this so I’m not going to go into it here, other than to say log is not a magic fix for great results, and log needs a 10 bit codec if you’re going to use it properly. See these articles on log: S-Log and 8 bit or Correct Exposure with Log. Using log does allow you to capture the camera’s full range, it will give you a flat looking image, and when used correctly it will give the colourist a large blank canvas to play with. When using log it is vital that you use a proper grading tool that will apply log based corrections to your footage, as adding linear corrections to log footage in a typical edit application will not give the best results.
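
The 10 bit point is easy to demonstrate with some arithmetic on the published S-Log3 curve: count the distinct code values available across the one-stop interval around middle grey. Full-range quantisation is assumed for simplicity; broadcast legal range would give slightly fewer:

```python
import math

def linear_to_slog3(x):
    # Sony's published S-Log3 encode (upper segment is all we need here)
    return (420.0 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023.0

def codes_per_stop(bit_depth, lo=0.09, hi=0.18):
    # distinct code values across one stop (0.09 -> 0.18 linear reflectance)
    levels = 2 ** bit_depth - 1
    return round((linear_to_slog3(hi) - linear_to_slog3(lo)) * levels)

print(codes_per_stop(8))    # → 18 steps around middle grey
print(codes_per_stop(10))   # → 73, roughly four times as many
```

Eighteen levels per stop is where banding starts to show once a grade stretches the image, which is why log really wants 10 bit recording.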

So what if your camera doesn’t have log? What can you do to help improve the way the image looks after post production? First of all, get your exposure right. Don’t over expose. Anything that clips cannot be recovered in post. Something that’s a little too dark can easily be brightened a bit, but if it’s clipped it’s gone for good. So watch those highlights. Don’t under expose either; just expose correctly. If you’re having a problem with a bright sky, don’t be tempted to add a strong graduated filter to the camera to darken the sky. If the colourist tries to adjust the contrast of the image, the grad may become more extreme and objectionable. It’s better to use a reflector or some lights to raise the foreground rather than a graduated filter to lower the highlight.

One thing that can cause grading problems is knee compression. Most video cameras by default use something called the “knee” to compress highlights. This does give the camera the ability to capture a greater dynamic range, but it works by aggressively compressing the highlights together, and it’s essentially either on or off. If the light changes during the shot and the camera’s knee is set to auto (as most are by default), then the highlight compression will change mid shot, and this can be a nightmare to grade. So instead of using the camera’s default knee settings, use a scene file or picture profile to set the knee to manual, or use an extended range gamma curve like a Hypergamma or Cinegamma that does not have a knee and instead uses a progressive type of highlight compression.
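
In rough terms a knee works like this: below the knee point, levels pass through unchanged; above it, the slope drops sharply so highlights are squeezed into the remaining headroom. A simplified illustration (the knee point and slope here are made up, not any camera’s actual values):

```python
# Simplified knee: linear below the knee point, heavily compressed above.
# Knee point and slope are illustrative only.
def apply_knee(level, knee_point=0.85, slope=0.15):
    if level <= knee_point:
        return level
    return knee_point + (level - knee_point) * slope

print(apply_knee(0.5))   # → 0.5     below the knee: untouched
print(apply_knee(1.5))   # → 0.9475  a 150% highlight squeezed under 100%
```

When the knee is on auto, `knee_point` and `slope` effectively change shot to shot with the light, which is exactly what makes it so hard to grade consistently.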

Another thing that can become an issue in the grading suite is image sharpening. In-camera sharpening such as detail correction works by boosting contrast around edges. So if you take an already sharpened image into the grading suite and then boost the contrast in post, the sharpening will become more visible and the pictures may take on more of a video look or become over sharpened. It’s just about impossible to remove image sharpening in post, but adding a bit of sharpening is quite easy. So, if you’re shooting for post, consider either turning off the detail correction circuits altogether, or at the very least reducing the levels applied by a decent amount.

Colour and white balance: one thing that helps keep things simple in the grade is having a consistent image. The last thing you want is the white balance changing halfway through a shot, so as a minimum use a fixed or preset white balance. I find it better to shoot with preset white when shooting for a post heavy workflow, as even if the light changes a little from scene to scene or shot to shot, the RGB gain levels remain the same, so any corrections applied have a similar effect; the colourist then just tweaks the shots for any white balance differences. It’s also normally easier to swing the white balance in post if preset is used, as there won’t be any odd shifts added, as can sometimes happen if you have used a grey/white card to white balance.

Just as the brightness or luma of an image can clip if over exposed, so too can the colour. If you’re shooting colourful scenes, especially shows or events with coloured lights, it will help if you reduce the saturation of the colour matrix by around 20%, as this allows you to record stronger colours before they clip. Colour can then be added back in the grade if needed.
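
Reducing saturation can be sketched as pulling each channel toward luma. This is a generic illustration (the camera does it in its matrix, before encoding), but it shows the principle: with a hypothetical 20% reduction, a red that would clip at full saturation stays inside the recordable range:

```python
# Desaturate by blending each channel toward luma (Rec-709 luma weights).
def desaturate(r, g, b, sat=0.8):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(y + sat * (c - y) for c in (r, g, b))

# A hot red light that clips at full saturation (red channel above 1.0)...
hot_red = (1.1, 0.1, 0.1)
# ...fits within range at 80% saturation, so nothing is lost to clipping.
print(tuple(round(c, 3) for c in desaturate(*hot_red)))
```

The clipped version can never be restored, but the desaturated one can have its colour pushed back up in the grade.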

Noise and grain: this is very important. The one thing above all others that will limit how far you can push your image in post is noise and grain. There are two sources of this: camera noise and compression noise. Camera noise is dependent on the camera’s gain and chosen gamma curve. Always strive to use as little gain as possible; remember that if the image is just a little dark you can always add gain in post, so don’t go adding unnecessary gain in camera. A proper grading suite will have powerful noise reduction tools, and these normally work best if the original footage is noise free and gain is then added in post, rather than trying to de-noise grainy camera clips.

The other source of noise and grain is compression noise. Generally speaking, the more highly compressed the video stream, the greater the noise will be. Compression noise is often more problematic than camera noise, as in many cases it has a regular pattern or structure which makes it visually more distracting than random camera noise. More often than not, the banding seen across skies or flat surfaces is caused by compression artefacts rather than anything else, and during grading any such artefacts can become more visible. So try to use as little compression as possible; this may mean using an external recorder, but these can be purchased or hired quite cheaply these days. As always, before a big production, test your workflow. Shoot some sample footage, grade it and see what it looks like. If you have a banding problem, suspect the codec or compression ratio first, not whether it’s 8 bit or 10 bit; in practice it’s usually not 8 bit that causes banding, but too much or poor quality compression (so even a camera with only an 8 bit output like the FS700 will benefit from recording on a better quality external recorder).

RAW: of course the best way of providing the colourist (even if that’s yourself) with the best blank canvas is to shoot with a camera that can record the raw sensor data. By shooting raw you do not add any in-camera sharpening or gamma curves that may then need to be removed in post. In addition, raw normally means capturing the camera’s full dynamic range. But that’s not possible for everyone, and it generally involves working with very large amounts of data. If you follow my guidelines above you should at least have material that will allow you a good range of adjustment and fine tuning in post. This isn’t “fix it in post”; we are not putting right something that is wrong. We are shooting in a way that allows us to make use of the incredible processing power available in a modern computer to produce great looking images. You are making those last adjustments that make a picture look great using a nice big (hopefully calibrated) monitor, in a normally more relaxed environment than on most shoots.

The way videos are produced is changing. Heavy duty grading used to be reserved for high end productions, drama and movies. But now it is commonplace, faster and easier than ever. Of course there are still many applications where there isn’t the time for grading, such as TV news, but grading is going to play an ever greater part in more and more productions, so it’s worth learning how to do it properly and how to adjust your shooting setup and style to maximise the quality of the finished production.

ACES – What’s it all about, do I need to worry about it?

You may have heard the term “ACES” in presentations or workflow discussions for a while now. You may know that it is the Academy Color Encoding System and is a workflow for post producing high end material, but what does it mean in simple terms?

This isn’t a guide on how to use or work with ACES, it’s hopefully an easy to understand explanation of the basics of what it does, why it does it and what advantages it brings.

One of the biggest problems in the world of video and cinema production today is the huge number of different standards in use for acquisition and viewing. There are different color spaces, different gamma curves, different encoding standards, different camera setups and different output requirements. All in all it’s a bit of a confusing mess. ACES aims to mitigate many of these issues at the same time as increasing the image quality beyond that which is normally allowed with existing workflows.

There are 3 different conversion processes within the ACES workflow, plus the actual post production and grading process. These conversions are called the IDT, RRT and ODT. It all sounds very confusing, but actually when you break it down it’s fairly straightforward, on paper at least!

The first stage is the IDT or Input Device Transform. This process takes the footage from your camera and converts it to the ACES standard. The IDT must be matched specifically to the camera and codec you are using; you can’t use a Red IDT for a Sony F55 or Arri Alexa. You must use exactly the right IDT. Using the IDT (a bit like a look-up table) you convert your footage to an ACES OpenEXR file. OpenEXR is the file format, like .mov or DPX etc.

Unlike most conventional video formats, ACES files do not have any gamma or other similar curves to mimic the way our eyesight or film responds to light. ACES is a linear format. The idea is to record and store the light coming from the scene as accurately as technically possible. This is referred to as “Scene Referenced”, as you are capturing the light as it comes from the scene, not as you would show it on a monitor to make it look visually pleasing. Most traditional video systems are said to be “Display Referenced”, as they are based on what looks nice on a monitor or cinema screen. This normally involves gamma compression, which reduces the range of information captured in the highlights. We don’t want this if we are to maximise our grading and post production possibilities, so ACES is Scene Referenced, and this means a linear response to match the actual physical behaviour of light, which is very different to the way we see light or the way film responds to it. That linear response means lots and lots of data in the highlights and, as a result, large file sizes; there is no limit to the dynamic range ACES can handle.

The other thing ACES has is an unrestricted color space. Most traditional systems (including film) have a narrow or restricted color space in order to save space for transmission or distribution. If a TV screen can only show a certain range of colours, why capture more than this? This is “Display Referencing”. But ACES is designed to be able to store the full spectrum of the original scene; it is “Scene Referenced”.

In addition, by carefully matching the IDT to the camera, after converting to ACES all your source material should look the same, even if it was shot on different cameras. There will still be differences due to differing dynamic ranges, colour accuracy, noise etc, but the ACES material should be as close as technically possible to the original physical scene, so a grade applied to one camera make or model should work in exactly the same way on a different camera model.

Now this big color space may well currently be impossible to capture and display, but by deliberately not restricting the color space ACES has the ability to grow and output files using any existing color space.

So… using the IDT we have now converted our footage to ACES linear, saving it as an OpenEXR file. Or, as in the case of some grading packages like Resolve, we have told it to convert our material into ACES as part of the grading process. But how do we view it? ACES linear looks all wrong on conventional monitors, so we now need a way to convert back from ACES to conventional video so we can see what our finished production will look like. There are two stages to this. The first is called the RRT, the second the ODT; sometimes these are combined into a single process.

The RRT or Reference Rendering Transform is designed to convert the ACES linear to an ultra high quality but slightly less complicated standardised intermediate reference format. From this standardised format you can then apply the ODT or Output Device Transform to convert from that common RRT intermediate to whatever output standard you need. In practice no-one sees or works with the RRT; it is just there as a fixed starting point for the ODT, and in most cases the RRT and ODT operations are combined into a single process, a bit like adding a viewing LUT. The RRT transformation is incredibly complex while the ODT is a much simpler process. By doing the difficult maths with one single RRT and keeping the ODTs simpler, it’s easier to create a large range of ODTs for specific applications. So from one grading pass you can produce masters for broadcast TV, the web or cinema DCP just by changing the ODT part of the calculations used for the final output.
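
The shape of that pipeline can be sketched as simple function composition: one grade in linear, one fixed rendering step, then a swappable output transform per deliverable. The transforms below are crude stand-ins (plain gamma encodes), not the real RRT/ODT maths:

```python
# One grading pass, multiple deliverables, by swapping only the ODT.
# rrt() and the odt_* functions are simplistic stand-ins for illustration;
# the real ACES transforms are far more sophisticated.
def grade(linear):
    return linear * 1.2                       # the single creative grade, in linear

def rrt(linear):
    return max(0.0, linear)                   # stand-in for the fixed film-like render

def odt_rec709(x):
    return min(1.0, x) ** (1.0 / 2.4)         # stand-in 709-ish encode

def odt_srgb(x):
    return min(1.0, x) ** (1.0 / 2.2)         # stand-in sRGB-ish encode

shot = 0.18                                   # a scene-linear pixel
graded = rrt(grade(shot))                     # graded once...
tv_master = odt_rec709(graded)                # ...then rendered per output
web_master = odt_srgb(graded)
print(round(tv_master, 3), round(web_master, 3))
```

The key point is that `grade` and `rrt` run once; only the final, cheap `odt_*` step changes between the TV and web masters.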

If you’re using a conventional HD monitor then you will need to use an ODT for Rec-709 so that the ACES material gets converted to Rec-709 for accurate monitoring. It should be noted though that as you are monitoring in the restricted Rec-709 color space and gamma range, you are not seeing the full range of the ACES footage or RRT intermediate.

So, it all sounds very complicated. In practice what you have to do is convert your footage using the right IDT to an ACES OpenEXR file (or tell the grading application to convert to ACES on the fly). You set up your grading workspace to use ACES, set your output RRT/ODT to the standard you are viewing with (typically Rec-709), and do your grading as you would normally. One limitation of ACES is that, due to the large color space, many conventional look-up tables won’t work as expected within the ACES environment. They are simply too small; you need at least a 64x64x64 LUT, which is massive. At the end of the grade you then choose the ODT for your master render, which might be 709 for TV or sRGB for the web, and render your master. If you’re taking your project elsewhere for finishing then you can output your files without the RRT/ODT as ACES OpenEXR.
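
To put the 64x64x64 figure in context, some quick arithmetic on 3D LUT sizes (entry counts only; actual file sizes depend on the storage format):

```python
# A cubic 3D LUT has N^3 sample points, each holding 3 output values (RGB).
def lut_entries(n):
    return n ** 3 * 3

for n in (17, 33, 64):
    print(n, lut_entries(n))
# A common 17^3 grading LUT holds 14,739 values; a 64^3 LUT holds
# 786,432, which is why ACES-sized LUTs are described as massive.
```

A typical creative LUT at 17 or 33 points per axis simply doesn’t sample the huge ACES space finely enough, hence the requirement for much larger tables.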

The advantages of ACES are: Standardised workflow with standardised files, so any ACES OpenEXR file from any camera will look and behave just like an ACES OpenEXR file from any other camera (or at least as closely as technically possible).

Unlimited dynamic range and color space, so no matter what your final output, you are getting the very best possible image. This is of course limited by the capture capabilities of the camera or film stock, but the workflow and recording format itself is not a limiting factor.

Fast output to multiple standards by doing the difficult maths using a common high quality RRT (Reference Render transform) followed by a simpler ODT specific to the format required. Very often these two functions are combined into a single ODT process.

So is ACES for you? Maybe it is, maybe not. If you use a lot of LUTs in your grade then perhaps ACES is not going to work for you. If your camera already shoots linear raw then you’re already a long way towards ACES anyway, so you may not see any benefit from the extra stages. However, if you’re shooting with different cameras and there are IDTs available for all the cameras you’re using, then ACES should help make everything consistent and easier to manage. ACES OpenEXR files will be large compared to conventional video, so that needs to be taken into account.