Tonight the BBC are running a series of programmes about legendary 80s pop band Duran Duran. At the same time Sky television in the UK have made one of the projects I am most proud to have been involved in available for free, in HD, and on demand – A Diamond In The Mind. In 2011 I was involved in the planning and filming of this Duran Duran concert. Originally conceived as a lowish-cost production to be shot at a small venue in Berlin, the shoot was full of challenges, not least of which was the cancellation of the Berlin concert just hours before its start when lead singer Simon Le Bon suffered damaged vocal cords.
We were right in the middle of building up the cameras at the Berlin concert hall when the news came through.
With the cancellation of the Berlin gig the whole scope of the project changed as the next opportunity to shoot would be at one of the huge arena events in the UK. In 1984 Duran Duran produced a film called “Arena”. This was a truly epic concert video that covered several legs of their sold out arena and stadium tour of 1983. While we were never going to replicate this on our much more modest budget, it certainly gave us something to aim for.
The idea was to film the concert with what were at the time ground-breaking large sensor video cameras to achieve a film-like look. Duran Duran are famous for their videos so we were following in some pretty big footsteps. The majority of the cameras were Sony PMW-F3s with a custom picture profile that I developed specifically for the shoot. Other cameras included (if I remember right) a couple of FS100s, some GoPros on stage, and right at the very back of the venue there was a Red One shooting a big 4K wide shot.
I got the opportunity to test the camera settings the week before the shoot at a concert at the O2 arena. After that there was just a single concert to film, so we had to get everything just right.
The day of the gig was a typical dark and gloomy winter’s day in Manchester. Inside the vast Manchester arena we were busy fitting lenses to cameras, sorting out cue sheets, organising talkback links and doing all those other things needed for a multi camera concert shoot.
We had some very exotic lenses, several Angenieux 24-290 T2.8’s. At my camera position I was using a 40x ENG lens with one of the 2/3″ to super35mm adapters I had designed. The focal length of this lens was the equivalent of 1000mm at f4. The depth of field was paper thin!
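To put some numbers on that paper-thin depth of field, here’s a quick sketch using the standard hyperfocal-distance approximation. The 30 m subject distance and 0.025 mm circle of confusion are my own illustrative assumptions, not figures from the actual shoot.

```python
# Approximate depth of field for a 1000mm equivalent lens at f4,
# using the standard hyperfocal-distance formulas. The subject distance
# and circle of confusion below are illustrative assumptions.

def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.025):
    """Return (near, far, total) depth of field in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    far = (distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
           if distance_mm < hyperfocal else float("inf"))
    return near, far, far - near

near, far, total = depth_of_field(1000, 4.0, 30_000)
print(f"near {near/1000:.2f} m, far {far/1000:.2f} m, total {total:.0f} mm")
```

At a 30 m subject distance the total depth of field works out to well under 20 cm, which is why focus was so critical at that camera position.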
The concert started and the filming went ahead. It seemed to all be over very quickly, all that preparation, all those tests for just 2 hours of filming. And then the shoot was over.
Post production took quite a while as band member Nick Rhodes chose to add a lot of his own elements to the edit. Each track in the film has a slightly different look, but it was all worth it. The end result was the feature length film “A Diamond In The Mind”. It’s a project that was amazing to work on, with an amazing crew put together by Hangman Films. Today, 7 years later I still think it looks pretty damn good. I’d love to go back to the rushes and produce an HDR version!
Just over a week ago I was in Cape Town with a few hours to spare before my flight home and access to a Sony Venice. So what could I do other than go out and shoot. Here is some of the footage with a quick grade applied – in HDR.
The workflow: I shot X-OCN ST at 25p and 50p on the Venice camera. 25p was requested by Visual Impact South Africa, the owners of this camera, as this is the most common frame rate used in the productions they are involved with. The material was backed up to a small portable USB3 RAID unit so I could bring it home. Then it was graded using DaVinci Resolve and its ACES colour-managed workflow with the output set to Rec2100 ST2084. I used a Shogun Inferno and both an LCD HDR Sony Bravia TV and an OLED HDR Philips TV to get a feel for how the images would look on both LCD and OLED technologies.
The file was exported as a UHD ProRes file so that the file direct from Resolve could be uploaded to YouTube. Because I used a colour-managed workflow Resolve adds the correct HDR flags to the clip when you render the timeline out. As a result YouTube knows the file is HDR, and if you view it on a computer or SDR TV YouTube applies its default HDR10 to Rec709 LUT and you see the video in SDR. Watch with a direct connection to YouTube on an HDR TV (for example using a browser or YouTube player built in to the TV) and you will get the HDR version. This is probably the simplest way to reliably get HDR clips to play properly on YouTube (which currently does not accept HEVC files).
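For anyone curious about what that Rec2100 ST2084 output actually is, here is a small sketch of the PQ encoding curve built from the constants published in SMPTE ST 2084. The function and constants are standard; the example luminance values are just mine.

```python
# The ST2084 (PQ) encoding curve, from the published SMPTE ST 2084 constants.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Map absolute luminance (0..10000 nits) to a 0..1 PQ signal value."""
    y = max(nits, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

# Diffuse white at 100 nits lands at roughly 51% of the signal range,
# which is why a PQ file viewed without the HDR flag looks flat and washed out.
print(round(pq_encode(100), 3))
print(round(pq_encode(10000), 3))  # full 10,000 nit peak maps to 1.0
```

This is exactly the mapping the HDR metadata flags tell the TV to undo, and why correct flagging from Resolve matters so much for the YouTube upload.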
So here’s the clip.
IMPORTANT: The clip is HDR10, designed to be watched directly on an HDR TV using the TV’s built in web browser or YouTube player application.
Those watching on a normal computer, SDR TV or any other non HDR device will see the HDR clip with YouTube’s SDR/Rec709 LUT applied, so it isn’t exactly optimum for SDR. The YouTube HDR to SDR LUT causes some slightly odd colours in some of the clips. If you can, watch the clip directly on YouTube with an HDR TV.
This is something that keeps popping up all over the place and it’s not just one camera that attracts this comment. Many do, from the FS5 to the FS7 to the F55, plus cameras from other manufacturers too.
One common factor is that very often this relates to the newer super35mm cameras – cameras designed to give a more rounded, film-like look, often with 4K or higher resolution sensors.
I think many people perceive there is an issue with their viewfinder because they come to these new high resolution, more rounded and film like cameras from traditional television centric camcorders that use detail correction, coring and aperture correction to boost the image sharpness.
SD and even HD television broadcasting relies heavily on image sharpening so that viewers perceive a crisp, sharp image at any viewing distance and with any screen size (although on really big screens this can really ruin the image).
This works by enhancing and boosting the contrast around edges. This is standard practice on all normal HD and SD broadcast cameras, especially cameras that use a 3 chip design with a prism, as the prism will often reduce the image’s edge contrast.
As most people will prefer a very slightly sharpened HD image or a heavily sharpened SD image over an unsharpened one, it’s sharpened by default. This means that the images those cameras produce will tend to look sharp even on screens with a lower resolution than that of the camera, because the edges remain high contrast even when the viewing resolution is reduced.
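As a rough illustration of what that edge-contrast boosting does, here is a toy unsharp-mask sketch. The blur size and the 0.5 sharpening gain are arbitrary illustrative values, not any real camera’s detail settings.

```python
# A minimal sketch of detail correction as an unsharp mask: subtract a
# blurred copy of the signal from the original and add the difference
# back. This overshoots either side of an edge, raising its apparent contrast.

def box_blur(signal, radius=2):
    """Simple moving-average blur; edges clamp to the nearest sample."""
    out = []
    for i in range(len(signal)):
        window = [signal[min(max(j, 0), len(signal) - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

edge = [0.2] * 8 + [0.8] * 8                 # a soft luma edge
blurred = box_blur(edge)
sharpened = [v + 0.5 * (v - b) for v, b in zip(edge, blurred)]

print(round(max(edge) - min(edge), 2))       # 0.6  original edge contrast
print(round(max(sharpened) - min(sharpened), 2))  # 0.84 with overshoot added
```

The overshoot either side of the edge is exactly what makes a detail-corrected picture “look” sharper on a low resolution screen, even though no real resolution has been added.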
Most current manufacturer supplied LCD EVFs run at quarter HD with 960 x 540 pixels (each pixel made up of an RGB 3 dot matrix). In addition many of the 3rd party VFs such as the very popular Alphatron are the same because they all use the same mass produced, relatively low cost panels – panels that are also used for mobile phones and many other devices.
The problem then is that when you move to a camera that doesn’t add any image sharpening, if you view the camera’s image on a lower resolution screen the image looks soft because, quite simply, it is. There is no detail correction to compensate. Incidentally this is why these same cameras can often look a bit soft in HD and very soft in SD compared to traditional detail-corrected cameras. But that slightly softer, less processed look helps contribute to their more film-like look. This softness and lack of sharpening/processing is particularly noticeable if you use the focus mag function, as you are then looking at an enlarged but completely unsharpened image.
It could be argued that the viewfinder should sharpen the image to compensate. Some of the more expensive viewfinders can do this using their own sharpening processes. But the image that you are then seeing is not the picture that is being recorded, and this isn’t always ideal. If it is overdone it can make the entire image look sharp even when it isn’t fully in focus. Really you want to be looking at exactly the image that the camera is recording so that you can spot any potential problems. But that then makes focussing tricky.
There are a few 3rd party viewfinders such as the Gratical that have higher resolutions. The Gratical HD and Gratical Eye have screens that are 1280×1024, but in normal use only 1280×720 is used for the image area. This certainly helps, but even the 1:1 pixel zoom on these can look soft and blurry as you lose the viewfinder’s peaking function when you crop in.
Sony’s Venice and the F55/F5 can use Sony’s new DVF-EL200 OLED viewfinder. This costs around £4.5K ($6K) and has a 1920×1080 screen. It’s a beautiful image, but even this needs a fairly good dose of peaking to artificially sharpen the image to be able to see that last critical bit of focus. Again when you zoom in the image looks soft and a bit blurry (even on a Venice) as the camera itself is not adding any sharpening. The peaking function on the DVF-EL200 is quite sophisticated as it only enhances the highest frequency parts of the image, so only sharp edges and fine details are boosted.
Go back to the days of black and white tube viewfinders and these used tons of peaking to make them useable. Traditional SD and HD cameras add sharpening to their pictures, but most of our modern large sensor 4K cameras do not, and as a result the viewfinder images often appear soft compared to what we used to see on older cameras, or still see today on cameras that do sharpen the pictures.
All of this makes it hard to nail your focus, especially if shooting 4K. Even with a DVF-EL200 on a Venice I struggle at times and rely heavily on image mag (which is still difficult) or, better still, a much larger monitor with a good sun shade and, if necessary, some reading glasses so I can focus on it up close.
So before you get too critical of your viewfinder’s performance do consider all of the above. Try to see how another similar viewfinder looks on your camera (for example an Alphatron on an FS7). Perhaps try a higher resolution viewfinder such as a Gratical, but don’t expect miracles from a small, relatively low resolution screen on a modern digital cinema camera. This really is one of those areas where you can’t beat a big, high resolution screen.
I recently completed a week-long shoot with a Sony Venice in the USA, so I thought I would tell you about my experience. The camera I used had a beta of the dual ISO firmware so it could shoot at both the native ISO of 500 and the second native ISO of 2500. This was particularly useful for the shoot as a lot of the filming was done in some very dark places.
I can’t show any of the footage to you yet. But I will be able to link to the finished films once they are released a little later in the year by my client. I have to say straight away that I think the footage looks pretty amazing.
The first location was Las Vegas. I shot a number of views of the Las Vegas strip from the balcony of my hotel room in the Cosmopolitan hotel. These were pretty straightforward thanks to the camera’s dual ISO capabilities. One of the shots was a day to night sequence: locked-off shots filmed during the day and again at night, to be blended together in post. The daytime shots were done at the base 500 ISO and the night shots at 2500 ISO.
Sigma FF Fast Primes.
For these shots I used one of the really nice 24mm Sigma full frame high speed PL primes and the camera was set up in the 6K x 4K full frame mode.
For most of the shoot, however, we used the s35mm 17:9 DCI mode shooting at 60fps, and we made quite a few changes to the frame rates and frame sizes in the course of the shoot. The Sigma FF primes are really beautiful lenses; very well built, solid lenses that produce very sharp images even when wide open. The ability to shoot at T1.5 or T2.0 turned out to be a huge benefit on this shoot as many shots were done either at night or in some very dark locations.
One thing to note here is that I was working on my own. Venice is set up as a camera for high end shoots where it is expected that there will be a camera assistant working with the cinematographer or camera operator. During the shoot I made many changes to the camera’s frame rate and aspect ratio. These changes are most easily done using the LCD screen and hot keys on the side of the camera away from the camera operator. So there were many times when I had to walk around to the other side of the camera or spin the camera around on the tripod to make these changes. It’s not really a big deal, but it is something to be aware of if you are going to use Venice as a “one man band”.
On the operators side of the camera there is a small LCD panel where you can change the shutter speed, ND filter, EI and white balance. But for anything else you need to either use the LCD panel on the other side of the camera or go into the main menu. Talking of the menu system – it’s very well laid out and quite logical.
Venice is a much simpler camera to use than the F55. In part because at the moment the feature set is still a little limited – there are no high speed modes, time-lapse, picture cache or other stunt modes. But the main reason it’s simpler is the design and layout of the main LCD and hot keys has been simplified and is better organised.
The “Home” screen has the key functions that you are likely to change while shooting – shutter speed, EI, ND and white balance. There is also a control for the frame rate, although the options for this are currently quite limited, with it normally being locked to a single fixed speed set in the project settings. The Home screen also tells you how much space is left on your media, the aspect ratio and frame size, the clip name, the recording format(s) and the battery status. Plus there are audio level displays for channels 1 and 2.
If you need to make any other changes to the camera’s settings then you press the large menu button to bring up the main menu. This is divided into 5 sections, each with its own hot key – Project, TC/Media, Monitoring, Audio and Info. The 6th hot key takes you into further settings for some of the menu pages.
Something else a first time Venice shooter should be aware of is that the camera’s audio input is via a single 5 pin XLR connector. This can be set up as either a 2 channel analog line/mic input (with switchable phantom power) or an AES/EBU digital input. There are no 3 pin XLRs on Venice so make sure you have the right cables or adapters.
While Venice isn’t a big camera, it is very dense. That is to say – a lot of electronics has been packed into quite a small body. It is…. shall we say… reassuringly heavy! I guess I have been a bit spoilt by the light weight of the F55. Venice is quite a lot heavier even though it isn’t really all that much bigger. I was shooting using the R7 recorder, recording to 16 bit X-OCN files as my primary material. With the R7 and a couple of Pag Paglink batteries on the back the camera was nicely balanced with the Sigma primes.
I was pleasantly surprised by the power consumption. A single 95Wh Paglink battery would run the camera for over an hour and a PL150 for around 2 hours, which is a pretty respectable run time for a digital cinema camera. Certainly a lot more than I would get from an F65 or Arri.
The new DVF-EL200 viewfinder is a big step up from the DVF-EL100 often used on the F55 and F5 cameras. The image is brighter, the resolution higher and the dioptre adjustment much better. Venice puts the information displays outside the picture area so the image isn’t obstructed by any text information. The large rotary encoder on the front of the viewfinder controls peaking, brightness and contrast.
Rating the camera – I had already done a few camera tests with Venice so I knew that the base ISOs of 500 and 2500 matched well with my light meter. I also knew that there is very little noise at 500 ISO and only a little bit more at 2500. For most daylight shots I shot at 500 ISO/500 EI. I don’t feel that there is the same need to rate the camera 1 to 2 stops lower as I do with the F55, FS7 or FS5. It just isn’t necessary for normal light levels. For some scenes that had low average brightness levels I did choose to shoot at 500 ISO/320 EI, as it seemed a waste to shoot at 500 EI when the scene highlights were nowhere near clipping. The slightly lower EI helped to put just a little more information into the already highly detailed shadow areas of my images. For the darker locations and night scenes I switched the camera to the higher base ISO of 2500 to gain a pretty decent sensitivity boost. When using the 2500 ISO mode I found I ended up shooting at 1600 EI to keep the noise levels very similar to the noise levels at 500 ISO/500 EI.
The noise that you do get when shooting at the 2500 ISO base is really pleasing. I’m not normally a fan of any noise, my personal preference is normally for clean images as these tend to give the greatest post production flexibility. But there is something that just looks nice about the little bit of extra noise that there is at 2500. It’s very, very fine grain that is different in every frame. Dare I say it looks very film like? I need to experiment with this further, but I suspect that many people may choose to use 2500 ISO even when they have plenty of light as the noise adds some character and a pleasing texture to the shots. I’m not going to get into too much of a debate here about the merits of shooting with a bit of grain versus adding it in post. Personally I would probably normally opt to shoot clean and add any noise later, but Venice certainly brings some interesting options to the table and I would not rule out deliberately choosing 2500 ISO, even when there is plenty of light, to take advantage of the really nice looking noise.
We shot a lot of the film in the bottom of a deep Slot Canyon. For those that don’t know what this is – it is a very narrow, very deep, steep sided twisting gully carved out over millions of years by flood water. The Slot Canyon we shot in was often only 2 or 3ft wide (1m) and around 60ft (20m) deep. In most parts sunlight never reaches directly to the bottom, so it’s often very dark. But in a few spots very narrow shafts of light just about make it to the bottom when the sun is directly overhead. This creates some areas of incredibly high contrast as beams of full desert sun penetrate into near total darkness.
Venice was the perfect camera for this situation. The high base ISO mode and the 2500 ISO exposure rating allowed me to capture the dark textures of the sandstone walls of the canyon, while the 15 stop latitude meant that I could also capture the almost laser like light beams as they created intense pools of light. In addition towards the upper parts of the canyon you get incredibly vivid reds and oranges as the sunlight reflects off the red rocks. To the naked eye it looks like the canyon walls are on fire and Venice did an amazing job of capturing these intense colours.
My hotel room in the city of Page, Arizona was decorated with photographs taken in Slot Canyons. In many of the photos the shafts of light were completely over exposed with no detail or texture. I’m pleased to say that the footage from Venice almost always retained some detail and texture, even in the most extreme cases. This for me has been one of the most impressive things about the way Venice behaves. There is something very nice about the way that Venice reaches the extreme ends of its exposure range, something the F55 doesn’t quite do and I really like it. Venice seems to hang on to those exposure extremes just that bit better. In addition Venice also retains an amazing amount of colour information in the deepest, darkest shadow areas.
Another location we shot at was the Grand Canyon’s Horseshoe Bend. This is a well-known spot and frankly, if you have good light, it’s tough to make a bad picture. This is one of those locations where everything is on a grand scale. So it deserved a big image. Time to use the 6K x 4K full frame mode and those beautiful Sigma FF primes again. In the grading suite whenever I show people the shots from Venice at Horseshoe Bend there is almost always a “wow” moment. The texture and detail in the shots is amazing, and starting with a 6K image for a 4K production gives you quite a bit of room to crop in to the image if you wish.
What about skin tones? Well we did shoot some Navajo dancers doing traditional native American dances and hoop dances. Even in the very harsh Arizona light the skin tones looked great.
With so many different locations to shoot at in one week, some of them very remote, being highly mobile was really important to us. While Venice is heavier than the F55 that I normally shoot with, it is still an easy camera to transport. We had to lug the kit by hand across the desert to get to the Slot Canyon. I used a Miller CX18 fluid head with a 100mm bowl on a set of Miller carbon fibre legs. This is a pretty light tripod setup, similar to that used by ENG news crews. I didn’t need to go to a 150mm bowl or heavier tripod than this for this shoot because Venice is a very manageable weight and it worked very well.
One scene was a night-time campfire scene. For the Venice camera shooting at 2500 ISO and paired with the Sigma T1.5 primes this wasn’t really too much of a challenge. The fire was a large wood fire and it was producing enough light to illuminate the faces of the subjects in the scene. Although perhaps this could have been shot without any additional lighting, it was decided to add a little bit of extra light to fill in a few shadows and add a small amount of detail into the background of the wide shots.
For this I used a Light and Motion Stella Pro 5000 Led lamp. For a one man band these waterproof LED lights are really excellent. They produce lots of good quality light and can be fitted with all kinds of modifiers. For this application I used a Fresnel lens to narrow down the light cone. In addition they can be remotely controlled using a simple hand held Elinchrom remote control. This makes getting the light level just right really easy as you can look in the cameras viewfinder while dimming the lamps with the remote. It’s possible to control several lamps with one remote.
I shot using Sony’s X-OCN codec recording on to AXS cards in the R7 recorder. These 16 bit linear files are surprisingly easy to work with. The compact file size was a huge help. I didn’t get through more than 2 x 512GB cards in a day, even though we were shooting 4K 60p or 6K 24p. This really helped with data management in the evenings. There’s a big difference between backing up 1TB of X-OCN compared to what would have been around 5TB if it had been uncompressed raw. Yet there are no signs of any artefacts in the material. The X-OCN files are also very easy to handle in post production; I can even preview them in real time on my laptop at half resolution. In post production the 16 bit linear files handle beautifully, revealing amazing amounts of picture information.
At the end of this shoot I am left with a big problem though. Now I’ve shot a real production with Venice – I don’t want to shoot with anything else. I have been telling myself that I will stick with my F5/R5 for a bit longer, maybe upgrade the F5 to an F55 and then hire in a Venice as needed. But now I want to shoot with Venice whenever possible. Every time I pick up a Venice and go and shoot with it I come back with images that surprise me. They just seem to look great with very little effort. So now I think I might just have to figure out a way to buy one.
I’m writing this from a hotel room in Page, Arizona. Half way through a shoot covering everything from the city lights of Las Vegas to the Slot Canyons of Arizona. I’m using a Sony Venice to shoot most of the material, but I also have an FS5 recording to ProRes Raw on a gimbal for some shots.
It’s been an interesting shoot with many challenges. Some of the locations have been a long way from our vehicles, so we have had to lug the kit cross country by hand.
Almost everything is being shot at 60fps with some 120fps from the FS5. We also had a Phantom Flex for a couple of days for some 1000fps footage. For some of the really big panoramic scenes we have used the 4:3 6K mode on the Venice (at 24fps).
Our main lenses are a set of full frame T1.5 Sigma primes. These are absolutely amazing lenses and when paired with the Venice camera, it’s hard to produce a poor image. Our Venice has a beta of the dual ISO firmware which has been an absolute godsend as the bottoms of the deep slot canyons are very dark, even in the middle of the day. So being able to shoot at 2500 ISO has been a huge help.
I will write up this project in more detail once the shoot is over. I can’t share any footage yet, but once my client releases the film I will be able to let everyone see it. However I have been allowed to post some frame grabs which you will find below.
I have been working with Sony’s colour science guru Pablo at the Digital Motion Picture Center at Pinewood, looking at the outer limits of what Sony’s Venice camera can do. A large part of the reason for this is that Pablo is developing some really nice LUTs for use on dailies or even as a grade starting point (Pablo tells me the LUTs are finished but he is waiting for approvals and feedback from Japan).
As part of this process we have shot test footage with the Venice camera for ourselves and also looked long and hard at test shots done by other cinematographers. Last week we were able to preview a beta version of the cameras dual ISO modes. This beta firmware allowed us to shoot tests at both 500 ISO and 2500 ISO and the results of both are equally impressive.
I can’t share any of the test footage shot at 2500 ISO at this stage. The firmware is still in its early stages and the final version may well perform a little differently (probably better). But I can share some of the footage shot at 500 ISO.
Please remember what we were exploring was the extreme ends of the exposure range. So our little test set was set up with some challenges for the camera rather than trying to make a pretty picture.
We have deep, deep shadows on the right behind the couch and we also have strong highlights coming off the guitar, the film can on the shelves and from the practical lamp in the background. The reds of the cushion on the couch look very different with most Rec-709 cameras as the colors are outside the Rec-709 gamut.
Another aspect of the test was to check the exposure rating. For this I used my Sekonic lightmeter to measure both the incident light and the light reflected by the Kodak grey card. My light meter gave me T4 at 1/48th for 500 ISO and this turned out to be pretty much spot on with what the scopes told us. So straight away we were able to establish that the 500 ISO exposure rating appears to be correct. We also found that when we stopped down by 2.3 stops we got the correct exposure at 2500 ISO, so that too appears to be correctly rated.
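The 2.3 stop figure falls straight out of the ISO ratio; here is a quick sanity check of the arithmetic:

```python
import math

# Stops of gain between the two base ISOs, and the aperture that a
# T4 base reading becomes once you close down by that amount.
stops = math.log2(2500 / 500)
new_t_stop = 4.0 * 2 ** (stops / 2)

print(round(stops, 2))       # about 2.3 stops
print(round(new_t_stop, 1))  # close down from T4 to about T9
```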
Once the base exposure was established we shot at 2 stops over and 2 stops under, so from T2 down to T8, using a Sony 35mm PL prime. We used the X-OCN ST codec as we felt this would be the most widely used codec. When looking at the files do remember that the 16 bit X-OCN ST files are smaller than 10 bit ProRes HQ, so these are files that are very easy to manage. There is the option to go up in quality to Sony’s linear raw codec or down to X-OCN LT. X-OCN ST sits in the middle and offers a nice balance between file size and image quality, it being very hard to find any visual difference between this and the larger raw files.
The files I’m providing here are single X-OCN frames. They have not been adjusted in any way, they are just as shot (including being perhaps a touch out of focus). You can view them using the latest version of Sony’s raw viewer software or the latest version of DaVinci Resolve. For the best quality preview, at this time I recommend using Sony’s Raw Viewer to view the clips.
Once again it’s time to put pen to paper or fingers to keyboard as this is a subject that just keeps coming up again and again.
People really seem to have a lot of problems with banding in footage and I don’t fully understand why, as it’s something I only ever encounter if I’m pushing a piece of material really, really hard in post production. Generally, the vast majority of the content I shoot does not exhibit problematic banding, even the footage I shoot with 8 bit cameras.
First things first – don’t blame it on the bits. Even an 8 bit recording (from a good quality camera) shouldn’t exhibit noticeable banding. An 8 bit recording can contain over 10 million tonal values (around 220 code values per channel within the legal video range). It’s extremely rare for us to shoot luma only, but even if you do there are still around 220 shades, and these steps in standard dynamic range are too small for most people to discern, so you shouldn’t ever be able to see them. I think that when most people see banding they are not seeing these teeny, tiny, almost invisible steps; what they see is something much more noticeable – so where is it coming from?
It’s worth considering at this stage that most TVs, monitors and computer screens are only 8 bit, sometimes less! So if you look at one camera and it’s banding free, and then you look at another and you see banding, in both cases you are probably looking at an 8 bit image. So it can’t just be the capture bit depth that’s causing the problem, as you can’t see 10 bit steps on an 8 bit monitor.
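To put numbers on those step sizes, here is a quick calculation assuming legal-range Rec.709 levels of 16-235 per channel (full range 0-255 would give slightly more values):

```python
# Code-value counts for 8 bit video, assuming the legal (Rec.709)
# range of 16-235 per channel rather than full range 0-255.
legal_levels = 235 - 16 + 1              # 220 luma code values
combinations = legal_levels ** 3         # RGB combinations
step_percent = 100 / (legal_levels - 1)  # size of one luma step

print(legal_levels, combinations)  # 220 levels, over 10 million combinations
print(round(step_percent, 2))      # each step is under 0.5% of the range
```

A step of under half a percent of the brightness range is far smaller than anything the eye can pick out in normal SDR viewing, which is why plain bit depth is rarely the real culprit.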
So what could it be?
A very common cause of banding is compression. DCT based codecs such as JPEG, MJPEG, H264 etc break the image up into small blocks of pixels called macro blocks. Then all the pixels in each block are processed in a similar manner, and as a result there may sometimes be a small step between each block, or between groups of blocks across a gradient. This can show up as banding. We often see this with 8 bit codecs because typically 8 bit codecs use older technology or are more highly compressed; it’s not because there are not enough code values. Decreasing the compression ratio will normally eliminate the stepping.
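A toy quantisation sketch shows the mechanism, if not the detail of a real DCT codec: the harder the compression quantises values, the more neighbouring gradient levels collapse into the same band.

```python
# How coarse quantisation bands a gradient. The step sizes here are
# arbitrary illustrative values, not real codec quantiser settings.
ramp = [i / 31 for i in range(32)]  # a smooth 32-level gradient

def quantise(values, step):
    """Snap every value to the nearest multiple of `step`."""
    return [round(v / step) * step for v in values]

fine = len(set(quantise(ramp, 0.02)))   # gentle quantisation
coarse = len(set(quantise(ramp, 0.2)))  # heavy quantisation

print(fine)    # all 32 gradient levels survive
print(coarse)  # collapsed to just a handful of visible bands
```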
Scaling between bit depths or frame sizes is another very common cause of banding. It’s absolutely vital that you ensure that your monitoring system is up to scratch. It’s very common to see banding in video footage on a computer screen because video data levels are different to computer data levels, and in addition there may also be some small gamma differences, so the image has to be scaled on the fly. The computer desktop runs at one data range, the HDMI output another, so all kinds of conversions are taking place that can lead to all kinds of problems when you go from a video clip, to computer levels, to HDMI levels. See this article to fully understand how important it is to get your monitoring pipeline properly sorted: http://www.xdcam-user.com/2017/06/why-you-need-to-sort-out-your-post-production-monitoring/
Look Up Tables (LUTs) can also introduce banding. LUTs were never really intended to be used as a quick-fix grade; the intention was to use them as an on-set reference or guide, not for the final output. The 3D LUTs that we typically use for grading break the full video range into bands, and each band will apply a slightly different correction to the footage than the band above or below. These bands can show up as steps in the LUT’s output, especially with the most common 17x17x17 3D LUTs. This problem gets even worse if you apply a LUT and then grade on top – a really bad practice.
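The coarseness of a 17-point grid is easy to quantify:

```python
# Node spacing of a 17x17x17 3D LUT: 17 sample points per axis means
# only 16 intervals, so each band covers 1/16 of the signal range.
nodes = 17
band = 100 / (nodes - 1)            # width of one LUT band, % of range
codes_10bit = 1023 / (nodes - 1)    # spacing expressed in 10 bit code values

print(band)                   # 6.25% of the range per band
print(round(codes_10bit, 1))  # roughly 64 code values between nodes
```

Everything between those nodes is interpolated, so any strong correction that changes sharply from one node to the next can show up as a visible step.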
Noise reduction – in-camera or post production noise reduction will also often introduce banding. Very often pixel averaging is used to reduce noise: if you have a bunch of pixels that are jittering up and down, taking an average value for all those pixels will reduce the noise, but you can then end up with steps across a gradient as you jump from one average value to the next. If you shoot log it’s really important that you turn off any noise reduction (if you can), because when you grade the footage these steps will get exaggerated. Raising the ISO (gain) in a camera also makes this much worse as the camera’s built-in NR will be working harder, increasing the averaging to compensate for the increased noise.
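A tiny sketch of the averaging mechanism, with an exaggerated block size for clarity:

```python
# Pixel-averaging noise reduction turning a smooth gradient into steps:
# averaging groups of neighbouring values replaces a continuous ramp
# with flat bands. The block size of 4 is deliberately exaggerated.
ramp = [i / 15 for i in range(16)]  # smooth gradient, 0.0 -> 1.0
block = 4
averaged = []
for i in range(0, len(ramp), block):
    mean = sum(ramp[i:i + block]) / block
    averaged.extend([mean] * block)  # every pixel in the block gets the average

print(len(set(ramp)))      # 16 distinct values in the original gradient
print(len(set(averaged)))  # only 4 distinct values remain -> visible banding
```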
Coming back to 8 bit codecs again – Of course a similar quality 10 bit codec will normally give you more picture information than an 8 bit one. But we have been using 8 bits for decades, largely without any problems. So if you can shoot 10 bit you might get a better end result. But also consider all the other factors I’ve mentioned above.
I know that many of my readers like to shoot log. One of the most common terms used around shooting log is “shooting flat”. Let’s take a look at that term and think about what it actually means.
One description of a flat image might be – “An image with low contrast”. Certainly an image with low contrast can be considered flat.
Once upon a time shooting flat meant lighting a scene so that there was very little contrast. The background in an interview might be quite well lit. You would avoid deep shadows or strong highlights. This was done because cameras had very limited dynamic ranges. These flat images of low contrast scenes could then have the contrast boosted in post production to make them look better.
Around 8 years ago, with the advent of DSLR cameras that could shoot with film-like depth of field, it became fashionable to shoot flat, because digital film cameras shooting log produce an image that looks flat when viewed on a conventional TV or monitor.
But let’s think about that for a moment. A typical digital cinema camera can capture 14 stops of dynamic range. A scene with 14 stops of dynamic range contains a huge contrast range – perhaps a brilliantly bright sky and deep shadows. Can you really describe the capture of a scene with 14 stops of dynamic range as “flat”?
The answer is you can’t – or at least you shouldn’t because the recording isn’t flat. The dynamic range that most digital cinema cameras can capture is not flat, not at all.
The problem is that a normal TV or video monitor can’t show a very big dynamic range. A conventional TV can only show around 6 stops. If you take a log video signal with a 14 stop image and try to show that on a 6 stop screen you will be squashing the highlights and shadows closer together, so the highlight that was at +14 stops in the scene and is recorded at 100%, gets pushed closer to the deepest shadow in the scene that is recorded at 1%.
On a normal 6 stop TV the 100% recording level is shown at +6 stops while the deepest shadow will be at 1%, so now the 14 stop recording is being shown with only 6 stops between the deepest black and the brightest highlight. Instead of the highlight being dazzlingly bright it’s now just a bright white and not all that much brighter than the shadows. As a result the image on the screen looks all wrong, nothing like what you recorded and it appears to be “flat”.
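The scale of that squeeze is easy to put numbers on. As a quick arithmetic sketch (stops are simply powers of two):

```python
# 14 stops of scene dynamic range versus a ~6 stop conventional display.
# Each stop is a doubling, so the ratios are powers of two.
scene_ratio = 2 ** 14    # 16384:1 contrast captured in the recording
display_ratio = 2 ** 6   # 64:1 contrast the TV can actually show
print(scene_ratio // display_ratio)  # → 256: the capture exceeds the display 256x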
BUT THE DATA IN THE FILE IS NOT FLAT – that recording contains a high contrast, 14 stop image – it’s the inability of the TV or monitor to show it correctly that makes it look wrong, not that you have shot flat.
In the early days of DSLR shooting many DSLR shooters decided to mimic the way the image from a digital cinema camera looks flat on a normal TV, perhaps in the misguided belief that a flat image must always have a greater dynamic range. This definitely isn’t the case. I can take any regular dynamic range image and make it look flat by reducing the contrast, raising the blacks a bit, perhaps shifting the gamma – that’s easy. But it doesn’t increase the dynamic range that is captured. Increasing the capture range of a camera requires fundamental changes to the way it operates, not simple tweaks to the basic picture settings.
So we went through a period where shooting a flat looking image with a DSLR was the trendy way to shoot because on a normal TV or monitor the image recorded is reminiscent of the image from a true digital cinema camera shooting log, even though in practice the “flat look” was often damaging the image rather than improving it.
Now there are many digital cinema cameras that can capture a very big dynamic range using log encoding, and these images look washed out and flat on a normal monitor or TV because of the mismatch between the camera and the monitor, not because the captured scene is flat. But we still, wrongly, call this shooting flat!
Why? In many cases people like to leave the image this way as they like this “incorrect” look. Flat is trendy, it’s fashionable, at least to those inside the TV and Video production world. I’m not sure that the wider general audience really understands why their pictures look washed out.
If you have a monitor with high dynamic range display capabilities, such as an Atomos Shogun Flame or Inferno, then you’ll know that if you feed it log, set the display to HDR and choose the right gamma curve, the picture on the screen is no longer flat, it’s bright and contrasty. This isn’t a LUT or any other cheat. The monitor is simply showing the image with a range much closer to the capture range, and now it looks right again.
So next time you use the term “shooting flat”, think very carefully about what it actually means and whether you are really shooting flat, or whether it’s simply a case of using the wrong monitor. Using words and terms like this incorrectly causes all kinds of problems. For example, most people think that log footage is flat and that that’s how it’s supposed to look. But it isn’t flat and it’s not supposed to look flat; we are just using the wrong monitors!
TV and video production, including digital cinema is a highly technical area. Anyone that tells you otherwise is in my opinion mistaken. Many of the key jobs in the industry require an in depth knowledge of not just the artistic aspects but also the technical aspects.
Almost everyone in the camera department, almost everyone in post production and a large portion of the planning and pre-production crew need to know how the kit we use works.
A key area where there is a big knowledge gap is gamma and color. When I was starting out in this business I had a rough idea of what gamma and gamut were all about. But 10 or more years ago you didn’t really need to understand them, because up to then we only ever had variations on 2.2/2.4 gamma. There were very few adjustments you could make to a camera yourself, and if you did fiddle you would often create more problems than you solved, so those things were best left alone.
But now it’s vital that you fully understand gamma: what it does, how it works and what happens if you have a gamma mismatch. Sadly, many camera operators (and post people) bury their heads in the sand with the excuse “I’m an artist – I don’t need to understand the technology”. Worse still are those that think they understand it but in reality do not, mainly, I think, due to the spread of misinformation and bad practices that become the norm. As an example, shooting flat means something very different today to what it meant 10 years ago. Then it meant shooting with flat lighting so the editor or colorist could adjust the contrast in post production. Now, shooting flat is often incorrectly used to describe shooting with log gamma (shooting log isn’t flat; it’s a gamma mismatch that might fool the operator into thinking it’s flat). The whole “shooting flat” misconception comes from the overuse and incorrect use of the term on the internet until it eventually became the accepted term for shooting with log.
As only a very small portion of film makers have any formal training, and even fewer go back to school to learn new techniques or technologies properly, this is a situation that isn’t going to get any better. We are moving into an era where, in the short term at least, we will need to deliver multiple versions of productions in standard dynamic range as well as several different HDR versions, while also saving the programme master in yet another intermediate format. Things are only going to get more complicated, more mistakes will be made, and technology will be applied and used incorrectly.
Most people are quite happy to spend thousands on a new camera, new recorder or new edit computer. But then they won’t spend any money on training to learn how to get the very best from it. Instead they will surf the net for information and guides of unknown quality and accuracy.
When you hire a crew member you have no idea how good their knowledge is. As it’s normal for most not to have attended any formal courses we don’t ask for certificates and we don’t expect them. But they could be very useful. Most other industries that benefit from a skilled labour force have some form of formal certification process, but our industry does not, so hiring crew, booking an editor etc becomes a bit of a lottery.
Of course it’s not all about technical skills. Creative skills are equally important. But again it’s hard to prove to a new client that you have such skills. Showreels are all too easy to fake.
Guilds and associations are a start. But many of these can be joined simply by paying the joining or membership fee. You could be a member of one of the highly exclusive associations such as the ASC or BSC, but even that doesn’t mean you know about technology “A” or technique “Z”.
We should all take a close look at our current skill sets. What is lacking, where do I have holes, what could I do better? I’ve been in this business for 30 years and I’m still learning new stuff almost every day. It’s one of the things that keeps life interesting. Workshops and training events can be hugely beneficial and really can lead to better results. Or it may simply be that a day of training gives you the confidence that you are doing it right. They are also great opportunities to meet like-minded people and network.
Whatever you do, don’t stop learning, but beware the internet, not everything you read is right. The key is to not just read and then do, but to read, understand why, ask questions if necessary, then do. If you don’t understand why, you’ll never be able to adapt the “do” to fit your exact needs.
This comes up so many times, probably because the answer is rarely clear cut.
First let’s look at exactly what the difference between an 8 bit and a 10 bit recording is.
Both will have the same dynamic range. Both will have the same contrast. Both will have the same color range. One does not necessarily have more color or contrast than the other. The only thing you can be sure of is the difference in the number of code values. An 8 bit video recording has a maximum of 235 code values per channel giving 13 million possible tonal values. 10 bit recording has up to 970 code values per channel giving up to 912 million tonal values.
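Using the per-channel code-value counts quoted above, the tonal-value figures are simple arithmetic, since each of the three channels can take any of those values independently. A quick Python sketch:

```python
# Per-channel code-value counts quoted above; each of R, G and B
# can independently take any of these values, so total tonal values
# are the per-channel count cubed.
codes_8bit = 235
codes_10bit = 970

tones_8bit = codes_8bit ** 3     # ~13 million RGB combinations
tones_10bit = codes_10bit ** 3   # ~912 million RGB combinations
print(tones_10bit // tones_8bit)  # → 70: roughly 70x as many tonal values
```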
There is a lot of talk of 8 bit recordings resulting in banding because there are only 235 luma shades. This is a bit of a half truth. It is true that if you have a monochrome image there would only be 235 steps. But we are normally making colour images so we are typically dealing with 13 million tonal values, not simply 235 luma shades. In addition it is worth remembering that the bulk of our current video distribution and display technologies are 8 bit – 8 bit H264, 8 bit screens etc. There are more and more 10 bit codecs coming along as well as more 10 bit screens, but the vast majority are still 8 bit.
Compression artefacts cause far more banding problems than too few steps in the recording codec. Most codecs use some form of noise reduction to help reduce the amount of data that needs to be encoded and this can result in banding. Many codecs divide the image data into blocks and the edges of these small blocks can lead to banding and stepping.
Of course 10 bit can give you more shades. But then 4K gives you more shades too. So an 8 bit UHD recording can sometimes have more shades than a 10 bit HD recording. How is this possible? If you think about it, in UHD each color object in the scene is sampled with twice as many pixels. Imagine a gradient that spans 4 pixels. In 4K you will have 4 samples and 4 steps. In HD you will only have 2 samples and 2 steps, so the HD image might show a single big step while the 4K may have 4 smaller steps. It all depends on how steep the gradient is and how it falls relative to the pixels. It then also depends on how you will handle the footage in post production.
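One way to see how extra pixels can translate into extra shades is what happens when 8 bit UHD is downscaled to HD. As a hedged toy sketch (made-up code values, simple pair averaging rather than any real scaler):

```python
# Toy sketch: downscaling 8-bit UHD to HD by averaging pixel pairs.
# The averages can land between the original 8-bit code values,
# i.e. the HD result carries finer tonal steps than any single
# 8-bit sample could.
uhd_row = [100, 101, 101, 102]  # hypothetical 8-bit code values along a gradient

# average each horizontal pair of UHD pixels to make one HD pixel
hd_row = [(a + b) / 2 for a, b in zip(uhd_row[0::2], uhd_row[1::2])]
print(hd_row)  # → [100.5, 101.5]: half-code precision appears
```

Whether those in-between values survive depends entirely on the post pipeline keeping the extra precision (a 10 bit or float timeline) rather than rounding straight back to 8 bits.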
So it is not as clear cut as often made out. For some shots with lots of textures 4K 8 bit might actually give more data for grading than 10 bit HD. In other scenes 10 bit HD might be better.
Anyone that is getting “muddy” results in 4K compared to HD is doing something wrong. Going from 8 bit 4K to 10 bit HD should not change the image contrast, brightness or color range. The images shouldn’t really look significantly different. Sure the 10 bit HD recording might show some subtle textures a little better, but then the 8 bit 4K might have more texture resolution.
My experience is that both work and both have pros and cons. I started shooting 8 bit S-log when the Sony PMW-F3 was introduced 7 years ago and have always been able to get great results provided you expose well. 10 bit UHD would be preferable, I’m not suggesting otherwise (at least 10 GOOD bits are always preferable), but 8 bit works too.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.