
New Atomos Shogun 7 with Dolby Vision Out and 15 stop screen.

So this landed in my inbox today. Atomos are releasing what, on paper at least, is a truly remarkable new recorder and monitor, the Shogun 7.

For some time now the Atomos Shogun Inferno has been my go-to monitor. It’s just so flexible and the HDR screen is wonderful. But the new Shogun 7 looks to be quite a big upgrade.


The screen is claimed to be able to display an astounding 1,000,000:1 contrast ratio and 15+ stops of dynamic range. That means you will be able to shoot in log with almost any camera and see the log output 1:1. No need to artificially reduce the display range, no more flat looking log or raw, just a real look at what you are actually shooting.

I’m off to NAB at the weekend and I will be helping out on the Atomos booth, so I will be able to take a good look at the Shogun 7. If it comes anywhere near to the specs in the press release it will be a must-have piece of kit whether you shoot on an FS5 or Venice!

Here’s the press release:

Melbourne, Vic – 4 April, 2019:

The new Atomos Shogun 7 is the ultimate 7-inch HDR monitor, recorder and switcher. Precision-engineered for the film and video professional, it uses the very latest video technologies available. Shogun 7 features a truly ground-breaking HDR screen – the best of any production monitor in the world. See perfection on the all-new 1500nit daylight-viewable, 1920×1200 panel with an astounding 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. Shogun 7 will truly revolutionize the on-camera monitoring game.

Bringing the real world to your monitor

With Shogun 7 blacks and colors are rich and deep. Images appear to ‘pop’ with added dimensionality and detail. The incredible Atomos screen uses a unique combination of advanced LED and LCD technologies which together offer deeper, better blacks than rival OLED screens, but with the much higher brightness and vivid color performance of top-end LCDs. Objects appear more lifelike than ever, with complex textures and gradations beautifully revealed. In short, Shogun 7 offers the most detailed window into your image, truly changing the way you create visually.

The Best HDR just got better

A new 360 zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color, yielding jaw-dropping results. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered. For the first time you can enjoy on-screen the same dynamic range, palette of colors and shades that your camera sensor sees. 

On-set HDR redefined with real-time Dolby Vision HDR output

Atomos and Dolby have teamed up to create Dolby Vision HDR “live” – the ultimate tool to see HDR live on-set and carry your creative intent from the camera through into HDR post production. Dolby have optimised their amazing target display HDR processing algorithm, which Atomos now have running inside the Shogun 7. It brings real-time automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to the Dolby Vision TV and magically, automatically, AtomOS 10 analyses the image, queries the TV, and applies the right color and brightness profiles for the maximum HDR experience on the display. Enjoy complete confidence that your camera’s HDR image is optimally set up and looks just the way you wanted it. It is an invaluable HDR on-set reference check for the DP, director, creatives and clients – making it a completely flexible master recording and production station.

“We set out to design the most incredibly high contrast and detailed display possible, and when it came off the production line the Shogun 7 exceeded even our expectations. This is why we call it a screen with ‘Unbelievable HDR’. With multi-camera switching, we know that this will be the most powerful tool we’ve ever made for our customers to tell their stories,” said Jeromy Young, CEO of Atomos.


Ultimate recording

Shogun 7 records the best possible images up to 5.7kp30, 4kp120 or 2kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to reliable AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus an HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW in up to 5.7kp30, 4kp120 DCI/UHD and 2kp240 DCI/HD, depending on the camera’s capabilities. 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most Quad Link, Dual Link or Single Link SDI cinema cameras. With Shogun 7 every pixel is perfectly preserved with data rates of up to 1.8Gb/s.

Monitor and record professional XLR audio

Shogun 7 eliminates the need for a separate audio recorder. Add 48V stereo mics via an optional balanced XLR breakout cable. Select Mic or Line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. You can monitor the selected stereo track via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

AtomOS 10, touchscreen control and refined body

Atomos continues to refine the elegant and intuitive AtomOS operating system. Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the award-winning Ninja V. Icons and colors are designed to ensure that the operator can concentrate on the image when they need to. The completely new body of Shogun 7 has a sleek, Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools that users have come to expect from Atomos, including Waveform, Vectorscope, False Color, Zebras, RGB parade, Focus peaking, Pixel-to-pixel magnification, Audio level meters and Blue only for noise analysis. 

Portable multi-cam live switching and recording for Shogun 7 and Sumo 19

Shogun 7 is also the ultimate portable touch-screen controlled multi-camera switcher with asynchronous quad-ISO recording. Switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update. Sumo19 and Shogun 7 are the ideal devices to streamline your multi-camera live productions. 

Enjoy the freedom of asynchronous switching, plus use genlock in and out to connect to existing AV infrastructure. Once the recording is over, just import the xml file into your NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed as well as a second track that switches between the digital audio inputs to match the switched feed. This amazing functionality makes Shogun 7 and Sumo19 the most flexible in-the-field switcher-recorder-monitors available.

Shogun 7 will be available in June 2019 priced at $US 1499/ €1499 plus local taxes from authorized Atomos dealers.


Shooting Anamorphic with the Fujinon MK’s and SLR Magic 65 Anamorphot.

There is something very special about the way anamorphic images look, something that’s not easy to replicate in post production. Sure, you can shoot in 16:9 or 17:9 and crop down to the typical 2.35:1 aspect ratio, and sure, you can add some extra anamorphic-style flares in post. But what is much more difficult to replicate is all the other distortions and the oval bokeh that are typical of an anamorphic lens.

Anamorphic lenses work by optically distorting the captured image, squeezing it horizontally so that a wide scene fits onto a narrower frame. The amount of squeeze that you will want to use depends on the aspect ratio of the sensor or film frame. With full frame 35mm cameras or cameras with a 4:3 aspect ratio sensor or gate you would normally use an anamorphic lens that squeezes the image by 2 times. Most anamorphic cinema lenses are 2x anamorphic, that is, the image is squeezed 2x horizontally. You can use these on cameras with a 16:9 or 17:9 Super35mm sensor, but because a Super35 sensor already has a wide aspect ratio a 2x squeeze is much more than you need to reach the typical cinema-style final aspect ratio of 2.39:1.

For most Super35mm cameras it is normally better to use a lens with a 1.33x squeeze. 1.33x squeeze on Super35 results in a final aspect ratio close to the classic cinema aspect ratio of 2.39:1.
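
As a quick sanity check of that arithmetic, the final aspect ratio is simply the sensor’s aspect ratio multiplied by the squeeze factor. A rough sketch only (real delivery formats involve small additional crops):

```python
# Final aspect ratio = sensor aspect ratio x anamorphic squeeze factor (rough arithmetic only)
combos = [
    ("Super35 16:9", 16 / 9, 1.33),
    ("Super35 17:9", 17 / 9, 1.33),
    ("4:3 full frame gate", 4 / 3, 2.0),
]

for name, aspect, squeeze in combos:
    print(f"{name} with a {squeeze}x squeeze -> {aspect * squeeze:.2f}:1")

# Super35 16:9 with a 1.33x squeeze -> 2.36:1 (close to the 2.39:1 cinema standard)
# Super35 17:9 with a 1.33x squeeze -> 2.51:1
# 4:3 full frame gate with a 2.0x squeeze -> 2.67:1 (cropped slightly for a 2.39:1 release)
```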

Traditionally anamorphic lenses have been very expensive. The complex shape of the anamorphic elements makes them much harder to manufacture than normal spherical lenses. However another option is to use an anamorphic adapter on the front of an existing lens to turn it into an anamorphic lens. SLR Magic, who specialise in niche lenses and adapters, have had a 50mm diameter 1.33x anamorphic adapter available for some time. I’ve used this with the FS7 and other cameras in the past, but the 50mm diameter of the adapter limits the range of lenses it can be used with (there is also a 50mm 2x anamorphot for full frame 4:3 aspect ratio sensors from SLR Magic).

Now SLR Magic have a new, larger 65mm adapter. The 1.33-65 Anamorphot has a much larger lens element, so it can be used with a much wider range of lenses. In addition it has a calibrated focus scale on its focus ring. One thing to be aware of with adapters like these is that you have to focus both the adapter and the lens you are using it on. For simple shoots this isn’t too much of a problem. But if you are moving the camera a lot or the subject is moving around a lot, trying to focus both lenses together can be a challenge.

The SLR Magic 1.33-65 Anamorphot anamorphic adapter.

Enter the PD Movie Dual Channel follow focus.

The PD Movie Dual follow focus is a motorised follow focus system that can control 2 focus motors at the same time. You can get both wired and wireless versions depending on your needs and budget. For the anamorphic shoot I had the wired version (I do personally own a single channel PD Movie wireless follow focus). Setup is quick and easy, you simply attach the motors to your rods, position the gears so they engage with the gear rings on the lens and the anamorphot and press a button to calibrate each motor. It takes just a few moments and then you are ready to go. Now when you turn the PD Movie focus control wheel both the taking lens and the anamorphot focus together.

I used the anamorphot on both the Fujinon MK18-55mm and the MK50-135mm. It works well with both lenses but you can’t use focal lengths wider than around 35mm without the adapter causing some vignetting. So on the 18-55 you can only really use around 35 to 55mm. I would note that the adapter does act a little like a wide angle converter, so even at 35mm the field of view is pretty wide. I certainly didn’t feel that I was only ever shooting at long focal lengths.

The full rig. PMW-F5 with R5 raw recorder. Fujinon MK 18-55 lens, SLR Magic Anamorphot and PD Movie dual focus system.

Like a lot of lens adapters there are some things to consider. You are putting a lot of extra glass in front of your main lens, so it will need some support. SLR Magic do a nice support bracket for 15mm rods and this is actually essential as it stops the adapter from rotating and keeps it correctly oriented so that your anamorphic squeeze remains horizontal at all times. Also if you try to use too large an aperture the adapter will soften the image. I found that it worked best between f8 and f11, but it was possible to shoot at f5.6. If you go wider than this you get quite a lot of image softening away from the very center of the frame. This might work for some projects where you really want to draw the viewer to the center of the frame or if you want a very stylised look, but it didn’t suit this particular project.

The out of focus bokeh has a distinct anamorphic shape, look and feel. As you pull focus the shape of the bokeh changes horizontally; this is one of the key things that makes anamorphic content look different to spherical. As the adapter only squeezes by 1.33x the effect is not as pronounced as it would be if you shot with a 2x anamorphic. Of course the other thing most people notice about anamorphic images is lens flares that streak horizontally across the image. Intense light sources just off frame would produce blue/purple streaks across the image, and if you introduce very small point light sources into the shot you will get a similar horizontal flare. If flares are your thing it works best if you have a very dark background. Overall the lens didn’t flare excessively, so my shots are not full of flares like a JJ Abrams movie. But when it did flare the effect is very pleasing. Watch the video linked above and judge for yourself.

Monitoring and De-Squeeze.

When you shoot anamorphic you normally record the horizontally squashed image and then in post production you de-squeeze the image to restore the correct aspect ratio, which results in a letterbox, wide screen style image. You can shoot anamorphic without de-squeezing the image provided you don’t mind looking at images that are horizontally squashed in your viewfinder or on your monitor. But these days there are plenty of monitors and viewfinders that can de-squeeze the anamorphic image so that you can view it with the correct aspect ratio. The Glass Hub film was shot using a Sony PMW-F5 recording to the R5 raw recorder. The PMW-F5 has the ability to de-squeeze the image for the viewfinder built in. But I also used an Atomos Shogun Inferno to monitor as I was going to be producing HDR versions of the film. The Shogun Inferno has both 2x and 1.33x de-squeeze built in, so I was able to take the distorted S-Log3 output from the camera, convert it to an HDR PQ image and de-squeeze it all at the same time in the Inferno. This made monitoring really easy and effective.
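
If it helps, the de-squeeze arithmetic itself is trivial: you either stretch the width or squash the height by the squeeze factor. A minimal sketch, assuming a 4096×2160 (17:9) recording and the 1.33x adapter:

```python
def desqueeze(width, height, squeeze):
    """Return the two equivalent de-squeezed frame sizes for an anamorphic recording."""
    stretched = (round(width * squeeze), height)   # stretch horizontally, keep the height
    squashed = (width, round(height / squeeze))    # squash vertically, keep the width
    final_aspect = (width * squeeze) / height
    return stretched, squashed, final_aspect

stretched, squashed, aspect = desqueeze(4096, 2160, 1.33)
print(stretched, squashed, f"{aspect:.2f}:1")
# (5448, 2160) (4096, 1624) 2.52:1
```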

I used DaVinci Resolve for the post production. In the past I might have done my editing in Adobe Premiere and the grading in Resolve. But Resolve is now a very capable edit package, so I completed the project entirely in Resolve. I used the ACES colour managed workflow as ACES means I don’t need to worry about LUT’s and in addition ACES adds a really nice film like highlight roll off to the output. If you have never tried a colour managed workflow for log or raw material you really should!

The SLR Magic 65-1.33 paired with the Fujinon MK lenses provides a relatively low cost entry into the world of anamorphic shooting. You can shoot anywhere from around 30-35mm to 135mm. The PD Movie dual motor focus system means that there is no need to try to use both hands to focus both the anamorphot and the lens together. The anamorphot + lens behave much more like a quality dedicated anamorphic zoom lens, but at a fraction of the cost. While I wouldn’t use it to shoot everything the Anamorphot is a really useful tool for those times you want something different.

AMPAS snub Cinematography and Editing at the Oscars.

I don’t normally get involved in stuff like this, but this has me quite angry. While an Oscar will still be awarded for Cinematography as well as Editing, the presentations will take place during a commercial break (the same for Live Action Short, and Makeup and Hairstyling).

Cinematography and Editing are at the very heart of every movie. If we go back to the very beginnings of Cinema it was all about the Cinematography. The word Cinema is a shortened form of the word Cinematography, which means to write or record movement. There were often no actors, no special effects, no sound, just moving images, perhaps a shot of a train or people in a street. Since then Cinematography has continued to advance both artistically and technically. At the same time Editing has become as important as the script writing. The Cinematography and Editing determine the mood, look, pace and style of the film.

As Guillermo del Toro has commented: “Cinematography and Editing are at the very heart of our craft, they are not inherited from a theatrical tradition or literary tradition, they are cinema itself”. I completely agree, so the presentations for Cinematography and Editing deserve the same coverage and respect as every other category. Cinematographers and editors are often overlooked by film goers; they rarely make the mainstream headlines in the same way that leading actors do. So really it is only fair that AMPAS should try to address this and give both the credit and coverage deserved by those men and women that make cinema possible.

Do the images from my Sony camera have to look the way they do?

— And why do Sony cameras look the way they do?

It’s all about the color science.

“Color Science” is one of those phrases that is very much in fashion and gets thrown around all over the place today. First of all – what the heck is color science anyway? Simply put, it’s how the camera sees the colors in a scene, mixes them together and records them – and then how your editing or grading software interprets what is in the recording, and finally how the TV or other display device turns the digital values it receives back into a color image. It’s a combination of optical filters such as the low pass filter, the color filters, the sensor properties, how the sensor is read out and how the signals are electronically processed in the camera, by your edit/grading system and by the display device. It is no one single thing, and it’s important to understand that your edit process also contributes to the overall color science.

Color Science is something we have been doing since the very first color cameras; it’s not anything new. However, we end users now have a much greater ability to modify that color science thanks to better post production tools and in-camera adjustments such as picture profiles or scene files.

Recently, Sony cameras have sometimes been seen as having less advanced or poorer color science than cameras from some other manufacturers. Is this really the case? For Sony, part of the color science issue is that historically Sony have deliberately designed their newest cameras to match previous generations, so that a large organisation with multiple cameras can use new cameras without having them look radically different to their old ones. It has always been like this and all the manufacturers do it: Panasonic cameras have a certain look, as do Canon etc. New and old Panasonics tend to look the same, as do old and new Canons, but the Canons look different to the Panasonics, which look different to the Sonys.

Sony have a very long heritage in broadcast TV and that’s how their cameras look out of the box, like Rec-709 TV cameras with colors that are similar to the tube cameras they were producing 20 years ago. Sony’s broadcast color science is really very accurate – point one at a test chart such as a Chroma DuMonde and you’ll see highly repeatable, consistent and accurate color reproduction with all the vectors on a vector scope falling exactly where they should, including the skin tone line.

On the one hand this is great if you are that big multi-camera business wanting to add new cameras to old ones without problems, where you want your latest ENG or self-shooter cameras to have the same colors as your perhaps older studio cameras so that any video inserts into a studio show cut in and out smoothly with a consistent look.

But on the other hand it’s not so good if you are a one man band shooter that wants something that looks different. Plus, accurate is not always “pretty”, and you can’t get away from the fact that the pictures look like Rec-709 television pictures in a new world of digital cinematography where TV is perhaps seen as bad and the holy grail is now a very different kind of look, one that is more stylised and much less true to life.

So Sony have been a bit stuck. The standard look you get when you apply any of the standard off-the-shelf S-Log3 or S-Log2 LUT’s will by design be based on the Sony color science of old, so you get the Sony look. Most edit and grading applications use transforms for S-Log2/3 based on Sony’s old standard Rec-709 look to maintain this consistency of look. This isn’t a mistake. It’s by design: it’s a Sony camera so it’s supposed to look like other Sony cameras, not different.

But for many this isn’t what they want. They want a camera that looks different, perhaps the “film look” – whatever that is?

Recently we have seen two new cameras from Sony that out of the box look very different from all the others: Sony’s high end Venice camera and the lower cost FS5 MK II. The FS5 MK II in particular proves that it’s possible to have a very different look with Sony’s existing colour filters and sensors. The FS5 MK II has exactly the same sensor and exactly the same electronics as the MK I. The only difference is in the way the RGB data from the sensor is processed and mixed together (determined by the different firmware in the MK I and MK II) to create the final output.

The sensors Sony manufacture and use are very good at capturing color. Sony sensors are found in cameras from many different manufacturers. The recording systems in the Sony cameras do a fine job of recording those colors as data within the files the camera produces, with different code values representing what the sensor saw. Take that data into almost any half decent grading software and you can change the way it looks by modifying the data values. In post production I can turn almost any color I want into any other color. It’s really up to us how we translate the code values in the files into the colors we see on the screen, especially when recording using log or raw. A 3D LUT can change tones and hues very easily by shifting and modifying the code values. So really there is no reason why you have to have the Sony 709 look.
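
To make that concrete, here is a toy example of the kind of code value remapping a LUT performs: a five point 1D curve applied with simple interpolation. The numbers are purely illustrative; real grading LUTs are far denser and usually 3D, but the principle is the same: the recorded value is just an index into whatever output value we choose.

```python
import numpy as np

# A toy 1D "look": five input code values (normalised 0-1) and the values they map to.
lut_in = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
lut_out = np.array([0.02, 0.30, 0.58, 0.80, 0.92])   # lifted shadows, rolled-off highlights

def apply_lut(code_values):
    """Remap recorded code values through the curve using linear interpolation."""
    return np.interp(code_values, lut_in, lut_out)

print(apply_lut(np.array([0.18, 0.41, 0.90])))   # the same pixels, now with a different "look"
```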

My Venice emulation LUT’s will make S-Log3 from an FS5 or FS7 look quite different to the old Sony Broadcast look. I also have LUT’s for Sony cameras that emulate different Fuji and Kodak film stocks, apply one of these and it really looks nothing like a Sony broadcast camera. Another alternative is to use a color managed workflow such as ACES which will attempt to make just about every camera on the market look the same applying the ACES film style look and highlight roll-off.

We have seen it time and time again where Sony footage has been graded well and it then becomes all but impossible to identify what camera shot it. If you have Netflix take a look at “The Crown” shot on Sony’s F55 (which has the same default Sony look as the FS5 MK1, FS7 etc). Most people find it hard to believe the Crown was shot with a Sony because it has not even the slightest hint of the old Sony broadcast look.

If you use default settings, standard LUT’s etc it will look like a Sony, it’s supposed to! But you have the freedom to choose from a vast range of alternative looks or better still create your own looks and styles with your own grading choices.

But for many this can prove tricky, as often they will start with a standard Sony LUT or standard Sony transform, so the image they start with has the old Sony look. When you start to grade or adjust this it can sometimes look wrong, because you have perhaps become used to the original Sony image and then anything else just doesn’t seem right. In addition, if you add a LUT and then grade, elements of the LUT’s look may be hard to remove; things like the highlight roll off will be hard baked into the material, so you need to think carefully about how you use LUT’s. So try to break away from standard LUT’s. Try ACES or try some other starting point for your grade.

Going forward I think it is likely that we will see the new Venice look become standard across all of the cinema style cameras from Sony, but it will take time for this to trickle down into all the grading and editing software that currently uses transforms for S-Log2/3 that are based on the old Sony Rec-709 broadcast look. But if you grade your footage for yourself you can create just about any look you want.

Film Emulation LUT’s for S-Log3 and S-Log2.

I’ve uploaded these LUT’s before, but they are tucked away under a slightly obscure heading, so here they are again!

There are 4 different LUTs in this set. A basic 709 LUT which is really good for checking exposure etc. It won’t give the best image, but it’s really good for getting your exposure just right. Diffuse white should be 85%, middle grey 43% and skin tones 65-70%.

Then there are 3 film emulation LUT’s that mimic 3 different types of film stock from different manufacturers. These are primarily designed for post production or for use on a client monitor on set. My recommendation is to use the 709 LUT for your viewfinder and exposure and then add the film emulation LUT later in post.
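
If you like to script things, those 709 LUT exposure targets above are easy to jot down as a quick waveform sanity check. The 2% tolerance here is my own assumption, not part of the LUT:

```python
# Recommended waveform levels when exposing through the 709 LUT (from the text above).
TARGETS = {
    "diffuse_white": 85.0,          # %
    "middle_grey": 43.0,            # %
    "skin_tones": (65.0, 70.0),     # % range
}

def on_target(reading, target, tolerance=2.0):
    """Return True if a waveform reading (%) is close enough to its target."""
    if isinstance(target, tuple):
        low, high = target
        return low - tolerance <= reading <= high + tolerance
    return abs(reading - target) <= tolerance

print(on_target(44.0, TARGETS["middle_grey"]))    # True - close enough
print(on_target(92.0, TARGETS["diffuse_white"]))  # False - too bright for this LUT
```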

As always (to date at least) I offer these as a free download available by clicking on the links below. However a lot of work goes into creating and hosting these. I feel that this LUT set is worth $25.00 and would really appreciate that being paid if you find the LUT’s useful. Try them before you decide then pay what you feel is fair. All contributions are greatly appreciated and it really does help keep this website up and running. If you can’t afford to pay, then just download the LUT’s and enjoy using them, tell your friends and send them here. If in the future you should choose to use them on a paying project, please remember where you got them and come back and make a contribution. More contributions means more LUT offerings in the future.

Please feel free to share a link to this page if you wish to share these LUT’s with anyone else or anywhere else.

To make a contribution please use the drop down menu here, there are several contribution levels to choose from.






To download the Film Emulation LUT set please click the link: Film Emulation Luts

 

If you are looking for the Venice look LUTs:

Here are the links to my Venice Look Version 3 LUT’s. Including the minus green offset LUTs. Make sure you choose the right version and once you have downloaded them please read the README file included within the package.

Alister V-Look V3 LUT’s S-Log2/SGamut

Alister V-Look V3 LUT’s S-Log3/SGamut3.cine

 

ISO and EI – using the right terms makes what you are doing easier to understand.

ISO and EI are different things and have different meanings. I find that it really helps understand what you are doing if you use the terms correctly and understand exactly what each really means.

ISO is the measured sensitivity of film stock. There is no actual direct equivalent for electronic cameras as the camera manufacturer is free to determine what they believe is an acceptable noise level. So one camera with an ISO of 1000 may be a lot more or less sensitive than another camera rated at 1000 ISO; it all depends on how much noise the manufacturer thinks is acceptable for that particular camera.

Broadly speaking, on an electronic camera ISO is the number you would enter into a light meter to achieve a normally exposed image. It is the nearest equivalent to a sensitivity rating. It isn’t an actual sensitivity rating, but it’s what you need to enter into a light meter if you want to set the exposure that way.

EI is the Exposure Index. For film this is the manufacturer’s recommended setting for your light meter to get the best results following the standard developing process for the chosen film stock. It is often different from the film’s true sensitivity rating. For example Kodak 500T is a 500 ISO film stock that has an EI of 350 when shooting under tungsten light. In almost all situations you would use the EI and not the ISO.

On an electronic camera EI normally refers to an exposure rating that you have chosen to give the camera to get the optimum results for the type of scene you are shooting. ISO may give the median/average/typical exposure for the camera but often rating the camera at a different ISO can give better results depending on your preferences for noise or highlight/shadow range etc. If you find exposing a bit brighter helps your images then you are rating the camera slower (treating it as though it’s less sensitive) and you would enter your new lower sensitivity rating into your light meter and this would be the EI.

Keeping EI and ISO as two different things (because they are) helps you to understand what your camera is doing. ISO is the base or manufacturer sensitivity rating and in most (but not all) log or raw cameras you cannot change this.

EI is the equivalent sensitivity number that you may choose to use to offset the exposure away from the manufacturer’s rating.

If you freely interchange ISO and EI it’s very confusing for people as they don’t know whether you are referring to the base sensitivity rating or a sensitivity rating that is not the base sensitivity but actually some kind of offset.

If you have a camera with an ISO rating of 2000 and you say “I’m shooting at 800 EI” then it’s clear that you are using a 1.3 stop exposure offset. But if you just say “I’m shooting at 800 ISO” it is less clear exactly what you are doing. Have you somehow changed the camera’s base sensitivity or are you using an offset? While the numbers used by EI and ISO are the same, the meanings of the terms ISO and EI are importantly different.
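
As a quick check of that arithmetic, the offset is just the base-2 log of the ratio between the base ISO and the EI. A simple sketch, nothing camera specific:

```python
import math

def ei_offset_stops(base_iso, exposure_index):
    """Stops of extra light given to the sensor when rating the camera at a lower EI."""
    return math.log2(base_iso / exposure_index)

print(f"{ei_offset_stops(2000, 800):.1f} stops")   # 1.3 stops - the offset described above
print(f"{ei_offset_stops(2000, 2000):.1f} stops")  # 0.0 stops - shooting at the base rating
```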

Learning to be a film maker? Don’t shoot short pretty videos – shoot documentaries.

Over the years I’ve met many well known high end cinematographers. Most of them are really no different to you or I. Most of them are passionate about their craft and almost always willing to chat about ideas or techniques. But one thing that at first surprised me  was how many of these high end movie makers cut their teeth shooting documentaries. I guess I had imagined them to have all been film school graduates that shot nothing but drama, but no, a very large percentage started out in the world of documentary production.
Documentary production is full of challenges. Location shooting, interviews, beauty shots, challenging lighting etc. But one of the big things with documentary production is that you very often don’t have a choice about what, where or when you shoot. You are faced with a scene or location needed for the story and you have to make it look good. So you have to think about how best to approach it, how to frame it, light it or how to use the available light in the best way.
A lot of today’s aspiring film makers like to hone their skills by shooting beautiful people in beautiful locations when the light is at its best. These short beauty films very often do come out looking very nice. But if you have a pretty girl, a pretty scene and good light then really there is no excuse for it not to look good, and the problem is you don’t learn much through doing this.
When you are shooting a typical TV documentary you will be faced with all kinds of locations and all kinds of people, and the challenge is to make it all look good, no matter what you’re presented with. Having to take sometimes ugly scenes and make them look good teaches you how to be creative. You have to think about camera angles, perhaps to hide something or to emphasise something important to the story the programme tells. And, like a feature film, it is normally also story telling; most documentaries have a story with a beginning, middle and end.
If you can master the art of documentary production it will give you a great set of skills that will serve you well elsewhere. If you move on from documentaries to drama, then when the director asks you to shoot a scene that takes place in a specific location you may well have already done something similar in a documentary, so you may already have some ideas about how to light it and what focal lengths to use. Plus now you will probably have more time to light and much more control over the scene, so it should actually be easier.
 
In an ideal world I guess the best way to learn how to shoot movies is…. to shoot movies. But very often it’s hard to get the people, locations and good script that you need. So very often aspiring film makers will fall back on shooting 90 second vignettes of pretty people in pretty places as it’s easy and simple to do.
But instead I would suggest that you would be much better off shooting a documentary. Perhaps about something that happened near where you live or an issue you are interested in.
Go out and shoot documentaries in less than perfect locations with all the challenges they present. I bet the resulting videos won’t look nearly as perfectly pretty as all those slow-mo pretty girl on a beach/forest/field of flowers videos. But it will teach you how to deal with different types of locations, people, lighting, weather and many of the other things that can be challenges when shooting features. It will make you a better camera operator. Making something that’s already pretty look pretty – that’s easy. Making what would otherwise be an uninteresting living room, factory or city street look interesting, that’s a much tougher challenge that will help bring out the creativity in you.

Noise, ISO, Gain, S-Log2 v S-Log3 and exposure.

Even though I have written about these many times before the message still just doesn’t seem to be getting through to people.

Since the dawn of photography and video the only way to really change the signal to noise ratio and ultimately how noisy the pictures are is by changing how much light you put onto the sensor.

Gain, gamma, log, raw, etc etc only have a minimal effect on the signal to noise ratio. Modern cameras do admittedly employ a lot of noise reduction processes to help combat high noise levels, but these come at a price. Typically they soften the image or introduce artefacts such as banding, smear or edge tearing. So you always want to start off with the best possible image from the sensor with the least possible noise and the only way to achieve that is through good exposure – putting the optimum amount of light onto the sensor.

ISO is so confusing:

But just to confuse things, the use of ISO to rate an electronic camera’s sensitivity has become normal. The problem is that most people have no clue what this really means. On an electronic camera ISO is NOT a sensitivity measurement. It is nothing more than a number that you can put into an external light meter to allow you to use that light meter to obtain settings for the shutter speed and aperture that will give you the camera manufacturer’s suggested optimum exposure. That’s it – and that is very different to sensitivity.

Let’s take Sony’s FS7 as an example (most other cameras behave in a very similar way).

If you set the FS7 up at 0dB gain, Rec-709, it will have an exposure rating of 800 ISO. Use a light meter to expose with the meter’s ISO dial set to 800. Let’s say the light meter says set the aperture to f8. When you do this the image is correctly exposed, looks good (well, as good as 709 gets at least) and for most people has a perfectly acceptable amount of noise.

Now switch the camera to S-Log2 or S-Log3. With the camera still set to 0dB the ISO rating changes to 2000, which gives the impression that the camera may have become more sensitive. But did we change the sensor? No. Have we added any more gain? No, we have not, the camera is still at 0dB. But if you now expose at the recommended levels, after you have done your grading and you grade to levels similar to 709, the pictures will look quite a lot noisier than pictures shot using Rec-709.

So what’s going on?

If you now go back to the light meter to expose the very same scene, you turn the ISO dial on the light meter from 800 to 2000 ISO and the light meter will tell you to now set the aperture to f13 (approx). So starting at the f8 you had for 800 ISO, you close the aperture on the camera by 1.3 stops to f13 and you will have the “correct” exposure.
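
The light meter maths behind that aperture change looks something like this. It is only a rough sketch; a real meter rounds to the nearest third of a stop, which is why it reports f13 rather than f12.6:

```python
import math

def aperture_for_new_iso(old_f_stop, old_iso, new_iso):
    """Aperture that keeps the same metered exposure when the ISO dial setting changes."""
    return old_f_stop * math.sqrt(new_iso / old_iso)

f_number = aperture_for_new_iso(8.0, 800, 2000)
stops_less_light = math.log2(2000 / 800)
print(f"f/{f_number:.1f}, {stops_less_light:.1f} stops less light on the sensor")
# f/12.6, 1.3 stops less light on the sensor
```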

BUT: now you are putting 1.3 stops less light on to the sensor, so the signal coming from the sensor is reduced by 9dB, and as a result the sensor noise that is always there and never really changes is much more noticeable. As a result, compared to 709 the graded S-Log looks noisy, and it looks noisier by the equivalent of 9dB. This is not because you have changed the camera’s sensitivity or the amount of camera gain, but because compared to when you shoot in 709 the sensor is being under exposed and as a result it is outputting a signal 9dB lower. So in post production when you grade or add a LUT you have to add 9dB of gain to get the same brightness as the original direct Rec-709 recording, and as well as making the desirable image brighter it also makes the noise 9dB higher (unless you do some very fancy noise reduction work in post).

So what do you do?

It’s common simply to open the aperture back up again, typically by 1.5 stops, so that after post production grading the S-Log looks no more noisy than the 709 from the FS7 – because in reality the FS7’s sensor works best for most people when rated at the equivalent of 800 ISO rather than 2000 – probably because its real sensitivity is 800 ISO.

When you think about it, when you shoot with Rec-709 or some other gamma that won’t be graded it’s important that it looks good right out of the camera. So the camera manufacturer will ensure that the rec-709 noise and grain v sensitivity settings are optimum – so this is probably the optimum ISO rating for the camera in terms of noise, grain and sensitivity.

So don’t be fooled into thinking that the FS7 is more sensitive when shooting with log, because it isn’t. The only reason the ISO rating goes up as it does is so that if you were using a light meter it would make you put less light onto the sensor, which then allows the sensor to handle a brighter highlight range. But of course if you put less light onto the sensor the sensor won’t be able to see so far into the shadows and the picture may be noisy, which limits still further the use of any shadow information. So it’s a trade-off: more highlights but less shadows and more noise. But the sensitivity is actually the same. It’s an exposure change, not a sensitivity change.

So then we get into the S-Log2 or S-Log3 debate.

First of all, let’s just be absolutely clear that both have exactly the same highlight and shadow ranges. Both go to +6 stops and -8 stops, there is no difference in that regard. Period.

And let’s also be very clear that both have exactly the same signal to noise ratios. S-Log3 is NOT noisier than S-Log2. S-Log3 records some of the mid range using higher code values than S-Log2 and before you grade it that can sometimes make it appear like it’s noisier, but the reality is, it is not noisier. Just like the differing ISO ratings for different gamma curves, this isn’t a sensitivity change, it’s just different code values being used. See this article if you want the hard proof: http://www.xdcam-user.com/2014/03/understanding-sonys-slog3-it-isnt-really-noisy/
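
For anyone who wants to check the numbers themselves, here is Sony’s published S-Log3 formula as a small sketch. The S-Log2 figures in the comments are Sony’s published chart values rather than something computed here, so treat them as quoted reference points:

```python
import math

def slog3(reflectance):
    """Sony's published S-Log3 curve: scene reflectance (0.18 = middle grey) to a code value fraction."""
    if reflectance >= 0.01125:
        return (420.0 + math.log10((reflectance + 0.01) / 0.19) * 261.5) / 1023.0
    return (reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

def video_percent(cv_fraction):
    """Convert a 10-bit full range code value fraction to a legal range video percentage."""
    return (cv_fraction * 1023.0 - 64.0) / 876.0 * 100.0

print(f"middle grey: {video_percent(slog3(0.18)):.0f}%")              # ~41% (S-Log2 places it around 32%)
print(f"+6 stops over grey: {video_percent(slog3(0.18 * 64)):.0f}%")  # ~94% (S-Log2 reaches roughly 109% here)
```

That second line is also why, on an 8 bit recording, S-Log2 makes use of a little more of the available recording range than S-Log3, which is the point made a little further down.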

Don’t forget when you shoot with log you will be grading the image, so you will be adjusting the brightness of the image. If you grade S-Log2 and S-Log3 to the same brightness levels the cumulative gain (the gain added in camera and the gain added in post) ends up the same. So it doesn’t matter which you use in low light; the final image, assuming a like for like grade, will have the same amount of noise.

For 8 bit records S-Log2 has different benefits.

S-Log2 was designed from the outset for recording 14 stops with an electronic video camera, so it makes use of the camera’s full recording range. S-Log3 is based on an old film log curve (Cineon) designed to transfer 16 stops or more to a digital intermediate. So when the camera only has a 14 stop sensor you waste a large part of the available recording range. On a 10 bit camera this doesn’t make much difference. But on an 8 bit camera, where you are already very limited in the number of tonal values you can record, it isn’t ideal and as a result S-Log2 is often a better choice.

But if I shoot raw it’s all going to be so much better – isn’t it?

Yes, no, maybe…. For a start there are lots of different types of raw. There is linear raw, log raw, 10 bit log raw, 12 bit linear, 16 bit linear, and they are all quite different.

But they are all limited by what the sensor can see and how noisy the sensor is. So raw won’t give you less noise (it might give different looking noise). Raw won’t give you a bigger dynamic range so it won’t allow you to capture deeper or brighter highlights.

But what raw normally does is give you more data and less compression than the camera’s internal recordings. In the case of Sony’s FS5 the internal UHD recordings are 8 bit and highly compressed while the raw output is 12 bit – four extra bits, or sixteen times as many possible tonal values. You can record the 12 bit raw using uncompressed CinemaDNG or Apple’s new ProRes Raw codec, which doesn’t introduce any appreciable compression artefacts, and as a result the footage is much more flexible in post production. Go up to the Sony Venice, F5 or F55 cameras and you have 16 bit raw and X-OCN (which behaves exactly like raw) which has an absolutely incredible range of tonal values and is a real pleasure to work with in post production. But even with the Venice camera the raw does not have more dynamic range than the log. However because there are far more tonal values in the raw and X-OCN you can do more with it and it will hold up much better to aggressive grading.
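
The bit depth arithmetic is simple powers of two, but it is worth seeing the numbers side by side:

```python
for bits in (8, 10, 12, 16):
    print(f"{bits:>2}-bit: {2 ** bits:>6} possible values per channel")

# 8-bit: 256, 10-bit: 1024, 12-bit: 4096, 16-bit: 65536
```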

It’s all about how you expose.

At the end of the day with all of these camera and formats how you expose is the limiting factor. A badly exposed Sony Venice probably won’t end up looking anywhere near as good as a well exposed FS7. A badly exposed FS7 won’t look as good as a well exposed FS5. No camera looks good when it isn’t exposed well.

Exposure isn’t brightness. You can add gain to make a picture brighter, and you can also change the gamma curve to change how bright it is. But these are not exposure changes. Exposure is all about putting the optimum amount of light onto the sensor: enough light to produce a signal from the sensor that will overcome the sensor’s noise, but not so much light that the sensor overloads. That’s what good exposure is. Fiddling around with gamma curves and gain settings will only ever make a relatively small difference to noise levels compared to good exposure. There’s just no substitute for faster lenses, reflectors or actually adding light if you want clean images.

And don’t be fooled by ISO ratings. They don’t tell you how noisy the picture is going to be, and they don’t tell you what the sensitivity is or even if it’s actually changing. All they tell you is what to set a light meter to.

ProRes Raw Over Exposure Magic Tricks – It’s all smoke and mirrors!

There are a lot of videos circulating on the web right now showing what appears to be some kind of magic trick where someone has shot over exposed, recorded the over exposed images using ProRes Raw and then as if by magic made some adjustments to the footage and it goes from being almost nothing but a white out of over exposure to a perfectly exposed image.

This isn’t magic, this isn’t raw suddenly giving you more over exposure range than you have with log, this is nothing more than a quirk of the way FCP-X handles ProRes Raw material.

Before going any further – this isn’t a put-down of raw or ProRes raw. It’s really great to be able to take raw sensor data and record that with only minimal processing. There are a lot of benefits to shooting with raw (see my earlier post showing all the extra data that 12 bit raw can give). But a magic ability to let you over expose by seemingly crazy amounts isn’t something raw does any better than log.

Currently to work with ProRes Raw you have to go through FCP-X. FCP-X applies a default sequence of transforms to the raw footage to get it from raw data to a viewable image. These all expect the footage to be exposed exactly as per the camera manufacturer’s recommendations, with no leeway. Inside FCP-X it’s either exposed exactly right, or it isn’t.

The default decode settings include a heavy highlight roll-off. Apple call it “Tone Mapping”. Fancy words used to make it sound special, but it’s really no different to a LUT or the transforms and processes that take place in other raw decoders. Like a LUT it maps very specific values in the raw data to very specific output brightness values. So if you shoot just a bit bright – as you would often do with log to improve the signal to noise ratio – the ProRes Raw appears to be heavily over exposed. This is because anything bright ends up crushed into nothing but flat white by the highlight roll off that is applied by default.

In reality the material is probably only marginally over exposed, maybe just one to two stops, which is something we have become used to doing with log. When you view brightly exposed log, the log itself doesn’t look over exposed, but if you apply a narrow high contrast 709 LUT to it, then the footage looks over exposed until you grade it or add an exposure compensated LUT. This is what is happening by default inside FCP-X: a transform is being applied that makes brightly exposed footage look very bright and possibly over exposed – because that’s the way it was shot!

This is why in FCP-X it is typical to change the color library to WCG (Wide Color Gamut), as this changes the way FCP-X processes the raw, changing the Tone Mapping and most importantly getting rid of the highlight roll off. With no roll-off, highlights and even slight over exposure will still blow out, as you can’t show 14 stops on a conventional 6 stop TV or monitor. Anything beyond the first 6 stops will be lost and the image will look over exposed until you grade or adjust the material to control the brighter parts of the image and bring them back into a viewable range. When you are in WCG mode in FCP-X there is no longer a highlight roll off crushing the highlights, and because they are not crushed they can be recovered – but there isn’t any more highlight range than you would have if you shot with log on the same camera!

None of this is some kind of raw over exposure magic trick, as it is often portrayed. It’s simply a case of not really understanding how the workflow works and not appreciating that if you shoot bright – well, it’s going to look bright – until you normalise it in post. We do this all the time with log via LUT’s and grading too! It can be a little more straightforward to recover highlights from linear raw footage, as comes from an FS5 or FS7, compared to log. That’s because of the way log maintains a constant data level in each highlight stop, and often normal grading and colour correction tools don’t deal with this correctly. The highlight range is there, but it can be tricky to normalise the log without log grading tools such as the log controls in DaVinci Resolve.

Another problem is the common use of LUT’s on log footage. The vast majority of LUT’s add a highlight roll off, if you try to grade the highlights after adding a LUT with a highlight roll off it’s going to be next to impossible to recover the highlights. You must do the highlight recovery before the LUT is added or use a LUT that has compensation for any over exposure. All of these things can give the impression that log has less highlight range than the raw from the same camera. This is not normally the case, both will be the same as it’s the sensor that limits the range.

The difference in the highlight behaviour is in the workflows, and very often both log and raw workflows are misunderstood. This can lead to owners and users of these cameras thinking that one process has more range than the other, when in reality there is no difference; it just appears to be different because the workflows work in different ways.

Why I Choose To Shoot ProRes Raw with the FS5

This is a much discussed topic right now, so as I promised in my last article about this, I have put together a video. Unfortunately YouTube’s compression masks many of the differences between the UHD XAVC and the ProRes Raw, but you can still see them, especially on the waveform scopes.

To really appreciate the difference you should watch the video on a large screen at high quality, preferably 4K.