Category Archives: cinematography

Vegas, Slot Canyons and Venice.

I’m writing this from a hotel room in Page, Arizona, halfway through a shoot covering everything from the city lights of Las Vegas to the slot canyons of Arizona. I’m using a Sony Venice to shoot most of the material, but I also have an FS5 on a gimbal recording ProRes Raw for some shots.

It’s been an interesting shoot with many challenges. Some of the locations have been a long way from our vehicles, so we have had to lug the kit cross country by hand.

Lugging the camera kit to the Slot Canyon. Thankful that the Miller CX18 tripod is nice and light.
Shooting with Venice in the Slot Canyon.

Almost everything is being shot at 60fps with some 120fps from the FS5. We also had a Phantom Flex for a couple of days for some 1000fps footage. For some of the really big panoramic scenes we have used the 4:3 6K mode on the Venice (at 24fps).

FS5 on a gimbal shooting ProRes Raw via an Atomos Inferno.

Our main lenses are a set of full frame T1.5 Sigma primes. These are absolutely amazing lenses and when paired with the Venice camera, it’s hard to produce a poor image. Our Venice has a beta of the dual ISO firmware which has been an absolute godsend as the bottoms of the deep slot canyons are very dark, even in the middle of the day. So being able to shoot at 2500 ISO has been a huge help.

I will write up this project in more detail once the shoot is over. I can’t share any footage yet, but once my client releases the film I will be able to let everyone see it. However I have been allowed to post some frame grabs which you will find below.

 

Fremont Street, Las Vegas, shot with Venice and 35mm Sigma.
Secret Canyon slot canyon. Sony Venice 4:3 full frame, Sigma 35mm.
Las Vegas by night. Sony Venice, Sigma 25mm
Secret Canyon, Arizona. Sony Venice Sigma 50mm
Secret Canyon, Arizona. Sony Venice, Sigma 35mm.
Navajo dancer, Page, Arizona. Sony Venice.
Navajo hoop dancer at Horseshoe Bend, Grand Canyon. Sony Venice, Sigma 20mm
Horseshoe Bend, Grand Canyon. Sony Venice, full frame, Sigma 20mm
Riding into the sunset. Sony Venice, Sigma 135mm
Campfire cookout. Page, Arizona, Sony Venice, Sigma 85mm

Sony Venice – A close look at the dynamic range and noise.

With Sony Venice X-OCN files to download!

I have been working with Sony’s colour science guru Pablo at the Digital Motion Picture Center at Pinewood, looking at the outer limits of what Sony’s Venice camera can do. A large part of the reason for this is that Pablo is developing some really nice LUTs for use on dailies or even as a grade starting point (Pablo tells me the LUTs are finished but he is waiting for approvals and feedback from Japan).

As part of this process we have shot test footage with the Venice camera for ourselves and also looked long and hard at test shots done by other cinematographers. Last week we were able to preview a beta version of the camera’s dual ISO modes. This beta firmware allowed us to shoot tests at both 500 ISO and 2500 ISO, and the results of both are equally impressive.

I can’t share any of the test footage shot at 2500 ISO at this stage. The firmware is still in its early stages and the final version may well perform a little differently (probably better). But I can share some of the footage shot at 500 ISO.

Please remember what we were exploring was the extreme ends of the exposure range. So our little test set was set up with some challenges for the camera rather than trying to make a pretty picture.

We have deep, deep shadows on the right behind the couch and we also have strong highlights coming off the guitar, the film can on the shelves and from the practical lamp in the background. The reds of the cushion on the couch look very different with most Rec-709 cameras as the colors are outside the Rec-709 gamut.

Another aspect of the test was to check the exposure rating. For this I used my Sekonic light meter to measure both the incident light and the light reflected by the Kodak grey card. My light meter gave me T4 at 1/48th for 500 ISO and this turned out to be pretty much spot on with what the scopes told us. So straight away we were able to establish that the 500 ISO exposure rating appears to be correct. We also found that when we stopped down by 2.3 stops we got the correct exposure at 2500 ISO, so that too appears to be correctly rated.
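For anyone who wants to check the arithmetic, the 2.3 stop figure falls straight out of the ratio between the two ISO ratings. A quick illustrative Python sketch (nothing camera specific here):

```python
import math

def stops_between(iso_low, iso_high):
    """Exposure difference in stops between two ISO ratings (one stop per doubling)."""
    return math.log2(iso_high / iso_low)

# 500 ISO to 2500 ISO works out at roughly 2.3 stops,
# matching the stop-down needed in the test above.
difference = stops_between(500, 2500)
print(round(difference, 2))
```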

Once the base exposure was established we shot at 2 stops over and 2 stops under, so from T2 down to T8, using a Sony 35mm PL prime. We used the X-OCN ST codec as we felt this would be the most widely used. When looking at the files, do remember that the 16 bit X-OCN ST files are smaller than 10 bit ProRes HQ, so these are files that are very easy to manage. There is the option to go up in quality to Sony’s linear raw codec or down to X-OCN LT. X-OCN ST sits in the middle and offers a nice balance between file size and image quality; it is very hard to find any visual difference between it and the larger raw files.

The files I’m providing here are single X-OCN frames. They have not been adjusted in any way; they are just as shot (including being perhaps a touch out of focus). You can view them using the latest version of Sony’s Raw Viewer software or the latest version of DaVinci Resolve. For the best quality preview, at this time I recommend using Sony’s Raw Viewer.

Click here to download these Venice Samples

If you find these files useful please consider buying me a coffee or beer.



So what do the files look like? First I recommend you download and play with them for yourself. Anything I show here has to have a LUT, grade or other process applied so that the linear data can be viewed on a normal computer screen, so it’s better to take a look at the original files and see what you can do with them rather than just accepting my word. The images here were created in DaVinci Resolve using ACES. ACES adds a film type highlight roll-off and uses film type levels, so the images look a touch dark as there were a lot of low light level areas in the test shots.

Venice at T4 The base exposure for the test.

Venice at T4 (From ACES). This was the “base” exposure for this test. Click on the image to enlarge.

Venice at T8 – 2 Stops under exposed (As exposed).

Venice at T8 (2 stops under). Click on the image to enlarge.

Venice at T8 – 2 Stops under exposed (Brightness corrected to match base exposure).

Venice at T8 (2 stops under). Brightness matched to base exposure via metadata shift. Click on the image to enlarge.

Venice at T5.6 – 1 stop under exposed (brightness corrected to match base exposure).

Venice at T5.6 (1 stop under). Brightness matched to base exposure via metadata shift. Click on the image to enlarge.

Venice at T4 The base exposure for the test.

Venice at T4 (From ACES). This was the “base” exposure for this test. Click on the image to enlarge.

Venice at T2.8 – 1 stop over exposed (brightness adjusted to match base exposure).

Venice at T2.8 (1 stop over). Brightness matched to base exposure via metadata shift. Click on the image to enlarge.

Venice at T2.0 – 2 stops over exposed (brightness adjusted to match base exposure).

Venice at T2 (2 stops over). Brightness matched to base exposure via metadata shift. Click on the image to enlarge.

Venice at T2.0 – 2 stops over exposed (as shot).

Venice at T2.0, 2 stops over, as shot. Click on the image to enlarge.

NOTES AND OBSERVATIONS:

I shouldn’t rush these tests! I should have set the focus at T2, not at T4. Focus is on the chart, not the dummy head. It would have been better if the eyes and chart were at the same distance.

It’s amazing how similar all the shots across this 5 stop range look. Just by adjusting the metadata ISO rating in Resolve I was able to get a near perfect match. There is more noise in the under exposed images and less in the over exposed images, as expected. But even the 2 stops under images are still pretty nice.

NOISE:

What noise there is, is very fine in structure. Noise is pretty even across each of the R, G and B channels, so there won’t be a big noise change if you skew the white balance towards blue, as can happen with some other cameras where the blue channel is noisier than red or green. Even at T8 and 2 stops under, the noise is not unacceptable; a touch of post production NR would clean this up nicely. So shooting at the 500 ISO base and rating the camera at 2000 EI would be useable if needed, or perhaps to deliberately add some grain. However, instead of shooting at 500 ISO / 2000 EI you might be better off using the upper 2500 base ISO for low light shoots, because that will give a nice sensitivity increase with no change to the dynamic range and only a touch (and it really is just a touch) more noise.

If shooting something super bright, or with lots and lots of very important highlights, I would not be concerned about rating the camera at 1000 EI. For most projects I would probably rate the camera at 500 EI. If the scene is generally dark I may choose 400 EI just to be a touch cleaner. With such a clean image and so much dynamic range you really can pick and choose how you wish to rate the camera.

Venice has more dynamic range than an F55 and maybe a bit more than the F65. Most of the extra dynamic range is in the shadows. There is an amazing amount of picture information that can be pulled out of the darker parts of the images. The very low noise floor is a big help here. In the example below I have taken the base exposure sample and brought the metadata ISO up to 2000 ISO. Then I have used a luma curve to pull up the shadows still further. If you look at the shelves on the left, even in the deep shadow areas it’s possible to see the grain effect on the dark wood panels. In addition you can see both the white and black text on the back of the grey book on the bottom shelf. Yes, there is some noise but my meter put these areas at -6 stops, so being able to pull out so much detail from these areas is really impressive.

An amazing amount of information still exists in even the darkest shadow areas. This image adjusted up significantly from base exposure (at least +4 stops).

In the highlights the way the camera reaches its upper limit is very pleasing; it does seem to have a tiny roll-off just before it clips and this looks really nice. If you look at the light bulbs in this test at the base exposure and bring the highlights down in post, you can see that not all of the bulb is completely over exposed; it is only over exposed where the element is. Also the highlights reflecting off the guitar and the film can on the shelf look very “real” and don’t have that hard clipped look that specular highlights on other cameras can sometimes have.

Another thing that is very nice is the colour tracking. As you go up and down in exposure there are no obvious colour shifts. It’s one of the things that really helps make it so easy to make all 5 exposures look the same.

The start up time of the Venice camera is very impressive at around 6 to 8 seconds; it’s up and running very quickly. The one stop steps in the ND filter system are fantastic. The camera is very simple to use and the menu seems logically laid out. It’s surprisingly small: not much bigger than a PMW-F55, just a little taller and a little longer. Battery consumption is lower than most of the competition; the camera appears to consume around 50W, which is half the power consumption of a lot of the competition. It can be run off either 12V or 24V. So all in all it can be rigged as a very compact camera with a standard V-Lock battery on the back.

Looking forward to shooting more with Venice in the very near future.

 

Banding in your footage. What causes it, and is it even there?

Once again it’s time to put pen to paper or fingers to keyboard as this is a subject that just keeps coming up again and again.

People really seem to have a lot of problems with banding in footage and I don’t fully understand why, as it’s something I only ever encounter if I’m pushing a piece of material really, really hard in post production. Generally, the vast majority of the content I shoot does not exhibit problematic banding, even the footage I shoot with 8 bit cameras.

First things first – don’t blame it on the bits. Even an 8 bit recording (from a good quality camera) shouldn’t exhibit noticeable banding. An 8 bit recording can contain up to 13 million tonal values. It’s extremely rare for us to shoot luma only, but even if you do it will still have 235 shades, and in standard dynamic range these steps are too small for most people to discern, so you shouldn’t ever be able to see them. I think that when most people see banding they are not seeing teeny, tiny, almost invisible steps; what most people see is something much more noticeable. So where is it coming from?
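That 13 million figure is easy to verify, counting 235 code values per R, G and B channel as above (illustrative Python):

```python
# 8 bit broadcast-range video: up to 235 code values per colour channel
per_channel = 235
tonal_values = per_channel ** 3  # every possible R, G, B combination
print(tonal_values)  # 12977875, roughly 13 million
```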

It’s worth considering at this stage that most TVs, monitors and computer screens are only 8 bit, sometimes less! So if you are looking at one camera that is banding free and then at another where you see banding, in both cases you are probably looking at an 8 bit image. So it can’t just be the capture bit depth that’s causing the problem, as you can’t see 10 bit steps on an 8 bit monitor.

So what could it be?

A very common cause of banding is compression. DCT based codecs such as JPEG, MJPEG, H.264 etc. break the image up into small blocks of pixels called macro blocks. All the pixels in each block are then processed in a similar manner, and as a result there may sometimes be a small step between each block, or between groups of blocks across a gradient. This can show up as banding. Often we see this with 8 bit codecs because typically 8 bit codecs use older technology or are more highly compressed; it’s not because there are not enough code values. Decreasing the compression ratio will normally eliminate the stepping.
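As a much simplified sketch of the mechanism (real DCT codecs are far more sophisticated than this), averaging a smooth gradient block by block produces exactly the kind of steps described above:

```python
def blockify(gradient, block=8):
    """Replace each block of pixel values with its average,
    mimicking very coarse quantisation of DCT macro blocks."""
    out = []
    for i in range(0, len(gradient), block):
        chunk = gradient[i:i + block]
        average = round(sum(chunk) / len(chunk))
        out.extend([average] * len(chunk))
    return out

ramp = list(range(64))   # a smooth 64 step gradient
banded = blockify(ramp)  # now only 8 flat bands with visible steps between them
print(sorted(set(banded)))
```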

Scaling between bit depths or frame sizes is another very common cause of banding. It’s absolutely vital that you ensure that your monitoring system is up to scratch. It’s very common to see banding in video footage on a computer screen because video data levels are different to computer data levels, and in addition there may also be some small gamma differences, so the image has to be scaled on the fly. The computer desktop runs at one bit range and the HDMI output another, so all kinds of conversions are taking place that can lead to all kinds of problems when you go from a video clip, to computer levels, to HDMI levels. See this article to fully understand how important it is to get your monitoring pipeline properly sorted: http://www.xdcam-user.com/2017/06/why-you-need-to-sort-out-your-post-production-monitoring/

Look Up Tables (LUTs) can also introduce banding. LUTs were never really intended to be used as a quick fix grade; the intention was to use them as an on-set reference or guide, not for the final output. The 3D LUTs that we typically use for grading break the full video range into bands, and each band will apply a slightly different correction to the footage than the band above or below. These bands can show up as steps in the LUT’s output, especially with the most common 17x17x17 3D LUTs. This problem gets even worse if you apply a LUT and then grade on top – a really bad practice.
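To see why a sparse LUT can introduce steps, here is a heavily simplified 1D Python sketch: a 17-node table sampling a 2.4 gamma curve, with linear interpolation between nodes. The deviation from the true curve is largest near black, where the curve is steepest. A real 17x17x17 3D LUT works across three axes, but the sampling limitation is the same:

```python
def lut_apply(x, nodes=17, curve=lambda v: v ** (1 / 2.4)):
    """Approximate a gamma curve with a sparse 1D LUT and linear interpolation."""
    step = 1.0 / (nodes - 1)
    i = min(int(x / step), nodes - 2)  # lower LUT node for this input
    x0 = i * step
    y0, y1 = curve(x0), curve(x0 + step)
    return y0 + (y1 - y0) * (x - x0) / step

# Worst-case deviation from the true curve, scanned across the full input range:
worst = max(abs(lut_apply(v / 1000) - (v / 1000) ** (1 / 2.4)) for v in range(1001))
print(round(worst, 3))  # the biggest errors sit in the near-black shadows
```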

Noise reduction – in camera or post production noise reduction will also often introduce banding. Very often pixel averaging is used to reduce noise: if you have a bunch of pixels that are jittering up and down, taking an average value for all those pixels will reduce the noise, but then you can end up with steps across a gradient as you jump from one average value to the next. If you shoot log it’s really important that you turn off any noise reduction (if you can), because when you grade the footage these steps will get exaggerated. Raising the ISO (gain) in a camera also makes this much worse, as the camera’s built in NR will be working harder, increasing the averaging to compensate for the increased noise.

Coming back to 8 bit codecs again – of course a similar quality 10 bit codec will normally give you more picture information than an 8 bit one. But we have been using 8 bits for decades, largely without any problems. So if you can shoot 10 bit you might get a better end result, but also consider all the other factors I’ve mentioned above.

 

Shooting Flat – No it’s not!

I know that many of my readers like to shoot log. One of the most common terms used around shooting log is “shooting flat”. Let’s take a look at that term and think about what it actually means.

One description of a flat image might be – “An image with low contrast”. Certainly an image with low contrast can be considered flat.

Once upon a time shooting flat meant lighting a scene so that there was very little contrast. The background in an interview might be quite well lit, and you would avoid deep shadows or strong highlights. This was done because cameras had very limited dynamic ranges. These flat images of low contrast scenes could then have the contrast boosted in post production to make them look better.

Eight years ago, with the advent of DSLR cameras that could shoot with film-like depth of field, it became fashionable to shoot flat, because digital film cameras shooting log produce an image that looks flat when viewed on a conventional TV or monitor.

But let’s think about that for a moment. A typical digital cinema camera can capture 14 stops of dynamic range. A scene with 14 stops of dynamic range contains a huge contrast range, perhaps a brilliantly bright sky and deep shadows. Can you really describe the capture of a scene with 14 stops of dynamic range as “flat”?

The answer is you can’t – or at least you shouldn’t, because the recording isn’t flat. The dynamic range that most digital cinema cameras can capture is not flat, not at all.

The problem is that a normal TV or video monitor can’t show a very big dynamic range. A conventional TV can only show around 6 stops. If you take a log video signal with a 14 stop image and try to show that on a 6 stop screen you will be squashing the highlights and shadows closer together, so the highlight that was at +14 stops in the scene and is recorded at 100%, gets pushed closer to the deepest shadow in the scene that is recorded at 1%.

On a normal 6 stop TV the 100% recording level is shown at +6 stops while the deepest shadow will be at 1%, so now the 14 stop recording is being shown with only 6 stops between the deepest black and the brightest highlight. Instead of the highlight being dazzlingly bright it’s now just a bright white and not all that much brighter than the shadows. As a result the image on the screen looks all wrong, nothing like what you recorded and it appears to be “flat”.
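The arithmetic behind that squeeze is stark. As a rough illustration, treating contrast as a simple power of two per stop:

```python
scene_contrast = 2 ** 14    # 14 stops captured: a 16384:1 contrast range
display_contrast = 2 ** 6   # ~6 stops displayed: only 64:1

# The recording holds 256 times the contrast range the screen can actually show,
# which is why everything looks compressed and "flat" on a conventional monitor.
print(scene_contrast // display_contrast)
```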

BUT THE DATA IN THE FILE IS NOT FLAT – that recording contains a high contrast, 14 stop image – it’s the inability of the TV or monitor to show it correctly that makes it look wrong, not that you have shot flat.

In the early days of DSLR shooting many DSLR shooters decided to mimic the way the image from a digital cinema camera looks flat on a normal TV, perhaps in the misguided belief that a flat image must always have a greater dynamic range. This definitely isn’t always the case. I can take any regular dynamic range image and make it look flat by reducing the contrast, raising the blacks a bit, perhaps shifting the gamma; that’s easy. But it doesn’t increase the dynamic range that is captured. Changing the capture range of a camera typically requires fundamental changes to the way it operates rather than simple tweaks to the basic picture settings.

So we went through a period where shooting a flat looking image with a DSLR was the trendy way to shoot, because on a normal TV or monitor the image recorded is reminiscent of the image from a true digital cinema camera shooting log, even though in practice the “flat look” was often damaging the image rather than improving it.

Now there are many digital cinema cameras that can capture a very big dynamic range using log encoding, and these images look washed out and flat on a normal monitor or TV because of the mismatch between the camera and the monitor, not because the captured scene is flat. But we still, wrongly, call this shooting flat!

Why? In many cases people like to leave the image this way as they like this “incorrect” look. Flat is trendy, it’s fashionable, at least to those inside the TV and Video production world. I’m not sure that the wider general audience really understands why their pictures look washed out.

If you have a monitor with high dynamic range display capabilities, such as an Atomos Shogun Flame or Inferno, then you’ll know that if you feed it log, set the display range to HDR and choose the right gamma curve, the picture on the screen is no longer flat; it’s bright and contrasty. This isn’t a LUT or any other cheat. The monitor is simply showing the image with a range much closer to the capture range, and now it looks right again.

This is a high dynamic range image. View it on an HDR TV set to HDR10 and it will be brilliantly bright, highly colorful and full of contrast. On a regular TV or monitor it looks flat and washed out because the regular TV can’t show it properly.

So next time you use the term “shooting flat” think very carefully about what it actually means and whether you are really shooting flat, or whether it’s simply a case of using the wrong monitor. Using words or terms like this incorrectly causes all kinds of problems. For example, most people think that log footage is flat and that that’s how it’s supposed to look. But it isn’t flat and it’s not supposed to look flat; we are just using the wrong monitors!

Skills and knowledge in TV and video production are not keeping up with the technology.

TV and video production, including digital cinema is a highly technical area. Anyone that tells you otherwise is in my opinion mistaken. Many of the key jobs in the industry require an in depth knowledge of not just the artistic aspects but also the technical aspects.
Almost everyone in the camera department, almost everyone in post production and a large portion of the planning and pre-production crew need to know how the kit we use works.
A key area where there is a big knowledge gap is gamma and color. When I was starting out in this business I had a rough idea of what gamma and gamut were all about. But 10 or more years ago you didn’t really need to know or understand them, because up to then we only ever had variations on 2.2/2.4 gamma. There were very few adjustments you could make to a camera yourself, and if you did fiddle you would often create more problems than you solved. So those things were just best left alone.
But now it’s vital that you fully understand gamma: what it does, how it works and what happens if you have a gamma mismatch. Sadly, many camera operators (and post people) like to bury their heads in the sand using the excuse “I’m an artist – I don’t need to understand the technology”. Worse still are those that think they understand it but in reality do not, mainly, I think, due to the spread of misinformation and bad practices that become normal. As an example, shooting flat seems to mean something very different today to what it meant 10 years ago. Then it meant shooting with flat lighting so the editor or color grader could adjust the contrast in post production. Now, shooting flat is often incorrectly used to describe shooting with log gamma (shooting with log isn’t flat; it’s a gamma mismatch that might fool the operator into thinking it’s flat). The whole “shooting flat” misconception comes from the overuse and incorrect use of the term on the internet until it eventually became the accepted term for shooting with log.
 
As only a very small portion of film makers actually have any formal training, and even fewer go back to school to learn about new techniques or technologies properly, this is a situation that isn’t going to get any better. We are moving into an era where, in the short term at least, we will need to deliver multiple versions of productions in both standard dynamic range and several different HDR versions, while additionally saving the programme master in another intermediate format. Things are only going to get more complicated, more and more mistakes will be made, and technology will be applied and used incorrectly.
Most people are quite happy to spend thousands on a new camera, new recorder or new edit computer. But then they won’t spend any money on training to learn how to get the very best from it. Instead they will surf the net for information and guides of unknown quality and accuracy.
When you hire a crew member you have no idea how good their knowledge is. As it’s normal for most not to have attended any formal courses we don’t ask for certificates and we don’t expect them. But they could be very useful. Most other industries that benefit from a skilled labour force have some form of formal certification process, but our industry does not, so hiring crew, booking an editor etc becomes a bit of a lottery.
Of course it’s not all about technical skills. Creative skills are equally important. But again it’s hard to prove to a new client that you do have such skills. Showreels are all too easy to fake.
Guilds and associations are a start. But many of these can be joined simply by paying the joining or membership fee. You could be a member of one of the highly exclusive associations such as the ASC or BSC, but even that doesn’t mean you know about technology “A” or technique “Z”.
We should all take a close look at our current skill sets. What is lacking, where do I have holes, what could I do better? I’ve been in this business for 30 years and I’m still learning new stuff almost every day. It’s one of the things that keeps life interesting. Workshops and training events can be hugely beneficial and they really can lead to you getting better results. Or it may simply be that a day of training helps give you the confidence that you are doing it right. They are also great opportunities to meet other similar people and network.
Whatever you do, don’t stop learning, but beware the internet, not everything you read is right. The key is to not just read and then do, but to read, understand why, ask questions if necessary, then do. If you don’t understand why, you’ll never be able to adapt the “do” to fit your exact needs.

Should I shoot 8 bit UHD or 10 bit HD?

This comes up so many times, probably because the answer is rarely clear cut.

First let’s look at exactly what the difference between an 8 bit and a 10 bit recording is.
Both will have the same dynamic range. Both will have the same contrast. Both will have the same color range. One does not necessarily have more color or contrast than the other. The only thing you can be sure of is the difference in the number of code values. An 8 bit video recording has a maximum of 235 code values per channel, giving around 13 million possible tonal values. A 10 bit recording has up to 970 code values per channel, giving up to 912 million tonal values.
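Using the code value counts quoted above, the gap is easy to quantify (illustrative Python):

```python
tonal_8bit = 235 ** 3    # about 13 million tonal values
tonal_10bit = 970 ** 3   # about 912 million tonal values

# 10 bit offers roughly 70 times as many possible tonal values
print(tonal_10bit // tonal_8bit)
```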
 
There is a lot of talk of 8 bit recordings resulting in banding because there are only 235 luma shades. This is a bit of a half truth. It is true that if you have a monochrome image there would only be 235 steps. But we are normally making colour images, so we are typically dealing with 13 million tonal values, not simply 235 luma shades. In addition it is worth remembering that the bulk of our current video distribution and display technologies are 8 bit – 8 bit H.264, 8 bit screens etc. There are more and more 10 bit codecs coming along, as well as more 10 bit screens, but the vast majority are still 8 bit.
Compression artefacts cause far more banding problems than too few steps in the recording codec. Most codecs use some form of noise reduction to help reduce the amount of data that needs to be encoded, and this can result in banding. Many codecs divide the image data into blocks, and the edges of these small blocks can lead to banding and stepping.
 
Of course 10 bit can give you more shades. But then 4K gives you more shades too. So an 8 bit UHD recording can sometimes have more shades than a 10 bit HD recording. How is this possible? If you think about it, in UHD each color object in the scene is sampled with twice as many pixels. Imagine a gradient that spans 4 pixels. In 4K you will have 4 samples and 4 steps. In HD you will only have 2 samples and 2 steps, so the HD image might show a single big step while the 4K may have 4 smaller steps. It all depends on how steep the gradient is and how it falls relative to the pixels. It then also depends on how you will handle the footage in post production.
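The gradient argument can be sketched in a few lines of Python, assuming 8 bit output values and a gradient that rises 6 code values across the region in question (the numbers are purely illustrative):

```python
def sample_gradient(start, end, pixels):
    """Sample a linear gradient with a given pixel count, rounding to integer code values."""
    return [round(start + (end - start) * i / (pixels - 1)) for i in range(pixels)]

hd = sample_gradient(100, 106, 2)    # the gradient falls on 2 HD pixels
uhd = sample_gradient(100, 106, 4)   # the same gradient falls on 4 UHD pixels
print(hd)   # [100, 106] - one big 6 code value jump
print(uhd)  # [100, 102, 104, 106] - smaller intermediate steps
```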
 
So it is not as clear cut as often made out. For some shots with lots of textures 4K 8 bit might actually give more data for grading than 10 bit HD. In other scenes 10 bit HD might be better.
 
Anyone that is getting “muddy” results in 4K compared to HD is doing something wrong. Going from 8 bit 4K to 10 bit HD should not change the image contrast, brightness or color range. The images shouldn’t really look significantly different. Sure the 10 bit HD recording might show some subtle textures a little better, but then the 8 bit 4K might have more texture resolution.
 
My experience is that both work and both have pros and cons. I started shooting 8 bit S-log when the Sony PMW-F3 was introduced 7 years ago and have always been able to get great results provided you expose well. 10 bit UHD would be preferable, I’m not suggesting otherwise (at least 10 GOOD bits are always preferable), but 8 bit works too.

Sony Venice. Dual ISOs, 1 stop NDs and Grading via Metadata.

With the first of the production Venice cameras now starting to find their way to some very lucky owners, it’s time to take a look at some features that are not always well understood, or that perhaps no one has told you about yet.

Dual Native ISOs: What does this mean?

An electronic camera uses a piece of silicon to convert photons of light into electrons of electricity. The efficiency at doing this is determined by the material used. The amount of light that can be captured, and thus the sensitivity, is determined by the size of the pixels. So, unless you physically change the sensor for one with different sized pixels (which will in the future be possible with Venice) you can’t change the true sensitivity of the camera. All you can do is adjust the electronic parameters.

With most video cameras the ISO is changed by increasing the amount of amplification applied to the signal coming off the sensor. Adding more gain or increasing the amplification will result in a brighter picture. But if you add more amplification/gain then the noise from the sensor is also amplified by the same amount. Make the picture twice as bright and normally the noise doubles.

In addition there is normally an optimum amount of gain where the full range of the signal coming from the sensor will be matched perfectly with the full recording range of the chosen gamma curve. This optimum gain level is what we normally call the “Native ISO”. If you add too much gain the brightest signal from the sensor would be amplified too much and exceed the recording range of the gamma curve. Apply too little gain and your recordings will never reach the optimum level and darker parts of the image may be too dark to be seen.

As a result the Native ISO is where you have the best match of sensor output to gain. Not too much, not too little and hopefully low noise. This is typically also referred to as 0dB gain in an electronic camera, and normally there is only 1 gain level where this perfect harmony between sensor, gain and recording range is achieved; this becomes the native ISO.

Side Note: On an electronic camera ISO is an exposure rating, not a sensitivity measurement. Enter the camera’s ISO rating into a light meter and you will get the correct exposure. But it doesn’t really tell you how sensitive the camera is, as ISO makes no allowance for increasing noise levels, which limit the darkest thing a camera can see.

Tweaking the sensor.

However, there are some things we can tweak on the sensor that affect how big the signal coming from the sensor is. The sensor’s pixels are analog devices. A photon of light hits the silicon photo receptor (pixel) and gets converted into an electron of electricity, which is then stored within the structure of the pixel as an analog signal until the pixel is read out by a circuit that converts the analog signal to a digital one, at the same time adding a degree of noise reduction. It’s possible to shift the range that the A to D converter operates over, and the amount of noise reduction applied, to obtain a different readout range from the sensor. By doing this (and/or other similar techniques, Venice may use some other method) it’s possible to produce a single sensor with more than one native ISO.

A camera with dual ISOs will have two different operating ranges. One tuned for higher light levels and one tuned for lower light levels. Venice will have two exposure ratings: 500 ISO for brighter scenes and 2500 ISO for shooting when you have less light. With a conventional camera, to go from 500 ISO to 2500 ISO you would need to add around 14dB of gain and this would increase the noise by a factor of 5. However with Venice and its dual ISOs, as we are not adding gain but instead altering the way the sensor operates, the noise difference between 500 ISO and 2500 ISO will be very small.
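The relationship between an ISO ratio, stops and conventional gain can be sketched with a few lines of arithmetic (this models simple added amplification, not the dual ISO sensor mode itself):

```python
import math

def iso_gain(iso_from, iso_to):
    # amplification needed to rate a conventional camera at iso_to
    # instead of iso_from, plus the equivalent stops and decibels
    ratio = iso_to / iso_from
    stops = math.log2(ratio)       # exposure difference in stops
    db = 20 * math.log10(ratio)    # equivalent gain in decibels
    return ratio, stops, db

ratio, stops, db = iso_gain(500, 2500)
print(f"{ratio:.0f}x amplification, {stops:.2f} stops, {db:.1f} dB")
# with simple added gain the noise floor is amplified by that same factor
```

This is why a dual ISO sensor mode is such a big deal: getting the same 5x brightness increase through ordinary gain would drag the noise up by the same 5x.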

You will have the same dynamic range at both ISOs. But you can choose to shoot at 500 ISO for super clean images at a sensitivity not that dissimilar to traditional film stocks. This low ISO makes it easy to run lenses at wide apertures for the greatest control over the depth of field. Or you can choose to shoot at the equivalent of 2500 ISO without incurring a big noise penalty.

One of Venice’s key features is that it’s designed to work with Anamorphic lenses, which are typically not as fast as their spherical counterparts. Furthermore some Anamorphic lenses (particularly vintage ones) need to be stopped down a little to prevent excessive softness at the edges. So having a second, higher ISO rating will make it easier to work with slower lenses or at lower light levels.

COMBINING DUAL ISO WITH 1 STOP NDs.

Next it’s worth thinking about how you might want to use the camera’s ND filters. Film cameras don’t have built in ND filters. An Arri Alexa does not have built in NDs. So most cinematographers will work on the basis of a cinema camera having a single recording sensitivity.

The ND filters in Venice provide uniform, full spectrum light attenuation. Sony are incredibly fussy over the materials they use for their ND filters and you can be sure that the filters in Venice do not degrade the image. I was present for the pre-shoot tests for the European demo film and a lot of time was spent testing them. We couldn’t find any issues. If you introduce 1 stop of ND, the camera becomes 1 stop less sensitive to light. In practice this is no different to having a camera with a sensor 1 stop less sensitive. So the built in ND filters can, if you choose, be used to modify the base sensitivity of the camera in 1 stop increments, down by up to 8 stops.

So with the dual ISOs and the NDs combined you have a camera that you can set up to operate at the equivalent of 2 ISO all the way up to 2500 ISO in 1 stop steps (by using the 2500 ISO and 500 ISO bases together you can have approximately half stop steps between 10 ISO and 625 ISO). That’s an impressive range, and at no stage are you adding extra gain. There is no other camera on the market that can do this.
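That combined ladder is easy to sketch, treating each stop of ND as simply halving the effective rating, as described above:

```python
def effective_ratings(bases=(500, 2500), max_nd_stops=8):
    # every base ISO combined with 0 to max_nd_stops stops of ND
    ratings = {base / 2 ** nd
               for base in bases
               for nd in range(max_nd_stops + 1)}
    return sorted(ratings)

ladder = effective_ratings()
print([round(r, 1) for r in ladder])
# the 500 base reaches down to about 2 ISO, the 2500 base to about 10 ISO,
# and where the two ladders overlap the steps are roughly half a stop apart
```

Because 500 and 2500 are about 2.3 stops apart, the two ladders interleave at roughly half stop spacing over the range where they overlap.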

On top of all this we do of course still have the ability to alter the Exposure Index of the camera’s LUTs to offset the exposure, moving the exposure mid point up and down within the dynamic range. Talking of LUTs, I hope to have some very interesting news about the LUTs for Venice. I’ve seen a glimpse of the future and I have to say it looks really good!

METADATA GRADING.

The raw and X-OCN material from a Venice camera (and from a PMW-F55 or F5 with the R7 recorder) contains a lot of dynamic metadata. This metadata tells the decoder in your grading software exactly how to handle the linear sensor data stored in the files. It tells your software where in the recorded data range the shadows start and finish, where the mid range sits and where the highlights start and finish. It also informs the software how to decode the colors you have recorded.

I recently spent some time with Sony Europe’s color grading guru Pablo Garcia at the Digital Motion Picture Center in Pinewood. He showed me how you can manipulate this metadata to alter the way the X-OCN is decoded and change the look of the images you bring into the grading suite. Using a beta version of Blackmagic’s DaVinci Resolve software, Pablo was able to go into the clip’s metadata in real time and, simply by scrubbing over the metadata settings, adjust the shadows, mids and highlights BEFORE the X-OCN was decoded. It was really incredible to see the amount of data that Venice captures in the highlights and shadows. By adjusting the metadata you are tailoring the way the file is being decoded to suit your own needs and getting the very best video information for the grade. Need more highlight data – you got it. Want to boost the shadows? You can, at the file data level, before it’s converted to a traditional video signal.

It’s impressive stuff as you are manipulating the way the 16 bit linear sensor data is decoded, rather than the traditional workflow of decoding the footage to a generic intermediate file and then adjusting that. This is just one of the many features that X-OCN from the Sony Venice offers. It’s even more incredible when you consider that a 16 bit linear X-OCN LT file is similar in size to 10 bit XAVC-I (class 480) and around half the size of Apple’s 10 bit ProRes HQ. X-OCN LT looks fantastic and in my opinion grades better than XAVC S-Log. Of course for a high end production you will probably use the regular X-OCN ST codec rather than the LT version, but ST is still smaller than ProRes HQ. What’s more, X-OCN is not particularly processor intensive; it’s certainly much easier to work with X-OCN than cDNG. It’s a truly remarkable technology from Sony.

Next week I will be shooting some more tests with a Venice camera as we explore the limits of what it can do. I’ll try and get some files for you to play with.

Using LUTs for exposure – choosing the right LUT.

If you are using a LUT to judge the exposure of a camera shooting log or raw, it’s really important that you fully understand how that LUT works.

When a LUT is created it will expect a specific input range and convert that input range to a very specific output range. If you change the input range then the output range will be different and it may not be correct. As an example, a LUT designed and created for use with S-Log2 should not be used with S-Log3 material, as the higher middle grey level used by S-Log3 would mean that the mid range of the LUT’s output would be much brighter than it should be.

Another consideration comes when you start offsetting your exposure levels, perhaps to achieve a brighter log exposure so that after grading the footage will have less noise.

Let’s look at a version of Sony’s 709(800) LUT designed to be used with S-Log3 for a moment. This LUT expects middle grey to come in at 41% and it will output middle grey at 43%. It will expect a white card to be at 61% and it will output that same shade of white at a little over 85%. Anything on the S-Log3 side brighter than 61% (white) is considered a highlight and the LUT will compress the highlight range (almost 4 stops) into the output range between 85% and 109%, resulting in flat looking highlights. This is all perfectly fine if you expose at the levels suggested by Sony. But what happens if you do expose brighter and try to use the same LUT, either in camera or in post production?

Well, if you expose 1.5 stops brighter on the log side, middle grey becomes around 54% and white becomes around 74%. Skin tones, which sit roughly half way between middle grey and white, will be around 64% on the LUT’s input. That’s going to cause a problem! The LUT considers anything brighter than 61% on its input to be a highlight and it will compress anything brighter than 61%. As a result, on the output of your LUT your skin tones will not only be bright, but they will be compressed and flat looking. This makes them hard to grade. This is why, if you are shooting a bit brighter, it is much, much easier to grade your footage if your LUTs have offsets to allow for this over exposure.
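You can check those numbers yourself with the published S-Log3 encoding formula. This sketch uses the curve from Sony’s S-Log3 white paper and converts 10 bit code values to legal range IRE percentages; treat it as an illustration rather than a calibration reference:

```python
import math

def slog3_ire(reflectance):
    # S-Log3 curve (Sony white paper): linear scene reflectance in,
    # where 0.18 is middle grey; returns a legal range IRE percentage
    if reflectance >= 0.01125:
        code = 420.0 + math.log10((reflectance + 0.01) / 0.19) * 261.5
    else:
        code = reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0
    # map the 10 bit code value (legal range 64-940) to 0-100 IRE
    return (code - 64.0) / 876.0 * 100.0

for stops_over in (0.0, 1.5):
    grey = 0.18 * 2 ** stops_over    # middle grey card
    white = 0.90 * 2 ** stops_over   # 90% white card
    print(f"+{stops_over} stops: grey {slog3_ire(grey):.0f}%, "
          f"white {slog3_ire(white):.0f}%")
```

At the recommended levels grey comes out at about 41% and white at about 61%; after a 1.5 stop over exposure they land at roughly 54% and 74%, pushing skin tones up into the range the LUT treats as highlights.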

If the camera has an EI mode (like the FS7, F5, F55 etc) the EI mode offsets the LUT’s input, so you don’t see this problem in camera. But there are other problems you can encounter if you are not careful, like unintentional over exposure when using the Sony LC709 series of LUTs.

Sony’s 709(800) LUT closely matches the gamma of most normal monitors and viewfinders, so 709(800) will deliver the correct contrast, i.e. contrast that matches the scene you are shooting, plus it will give conventional TV brightness levels when viewed on standard monitors or viewfinders.

If you use any of the LC709 LUTs you will have a mismatch between the LUT’s gamma and the monitor’s gamma, so the images will show lower contrast and the levels will be lower than conventional TV levels when exposed correctly. LC709 stands for Low Contrast gamma with 709 color primaries; it is not 709 gamma!

Sony’s LC709 Type A LUT is very popular as it mimics the way an Arri Alexa might look. That’s fine, but you also need to be aware that the correct exposure levels for this non-standard LC gamma are middle grey at around 41% and white at 70%.

An easy trap to fall into is to set the camera to a low EI to gain a brighter log exposure and then to use one of the LC709 LUTs and try to eyeball the exposure. Because the LC709 LUTs are darker and flatter it’s harder to eyeball the exposure, and often people will expose them as they would regular 709. This then results in a double over exposure: bright because of the intentional use of the lower EI, but even brighter because the LUT has been exposed at or close to conventional 709 brightness. If you were to mistakenly expose the LC709 Type A LUT with skin tones at 70%, white at 90% etc, that would add almost 2 stops to the log exposure on top of any EI offset.

Above middle grey with 709(800), a 1 stop exposure change results in a 20% change in brightness; with LC709 Type A the same exposure change gives only just over a 10% change. As a result over or under exposure is much less obvious and harder to measure or judge by eye with LC709. The camera’s default zebra settings, for example, have a 10% window. So with LC709 you could easily be a whole stop out, while with 709(800) you would only be half a stop out.

Personally, when shooting, I don’t really care too much about how the image looks in terms of brightness and contrast. I’m more interested in using the built in LUTs to ensure my exposure is where I want it to be. So for exposure assessment I prefer to use the LUT that will show the biggest change when my exposure is not where it should be. For the “look” I will feed a separate monitor and apply any stylised looks there. To understand how my highlights and shadows above and below the LUT’s range are being captured, I use the Hi/Low Key function.

If you create your own LUTs by shooting test shots and then grading those test shots to produce a LUT, it’s really, really important that the test shots are very accurately exposed.

You have 2 choices here. You can either expose at the levels recommended by Sony and then use EI to add any offsets, or you can offset the exposure in camera and not use EI, relying instead on the offset that will end up in the LUT. What is never a good idea is to add an EI offset to a LUT that was itself created with an offset.

More on frame rate choices for today’s video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let’s look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/PAL area these are going to be frame rates you will be familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone, these are not good frame rates to use.

Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.

The odd 25p cadence is most noticeable on smooth pans and tilts, where the pan speed will appear to jump slightly as the cadence flips between frames shown 3 times and frames shown twice. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn’t exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter, as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
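The cadence patterns described above can be worked out with a little integer arithmetic: each source frame is held on screen for however many 60Hz refreshes fall within its display interval. A small sketch:

```python
def cadence(source_fps, display_hz=60, frames=10):
    # number of display refreshes each source frame is held for
    return [((i + 1) * display_hz) // source_fps
            - (i * display_hz) // source_fps
            for i in range(frames)]

print(cadence(25))  # uneven mix of 2s and 3s -> visible stutter on pans
print(cadence(24))  # regular 2:3 pulldown
print(cadence(30))  # every frame shown exactly twice -> smooth
```

Over a full second of 25p the pattern works out to 15 frames shown twice and 10 shown three times, exactly the uneven cadence described above, while 24p settles into the regular 2:3 pulldown and 30p divides into 60Hz perfectly.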
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p when shown on a 60Hz screen are shown by using 2:3 cadence where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way any other movie or feature film is shown on TV and it doesn’t look too bad.
30p or 29.97p footage will look smoother than 24p. As all you need to do is show each frame twice to get to 60Hz, there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality. This means larger files and possibly slower downloads, which must be considered. 30p is a reasonable middle ground choice for a lot of productions, not as juddery as 24p but not as smooth as 60p.
24p or 23.98p for “The Film Look”.
Generally if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of new TVs feature motion compensation processing designed to eliminate judder. You might see things in the TV’s literature such as “100Hz smooth motion” or similar. If this function is enabled in the TV it will take any low frame rate footage such as 24p or 25p and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film like motion as the TV will do its best to smooth it out! Meanwhile someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it’s viewed.
Most TVs that have this feature will disable it when the footage is 60p, as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then export a 60p file, as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

Why hasn’t anyone brought out a super sensitive 4K camera?

Our current video cameras are operating at the limits of current sensor technology. As a result there isn’t much a camera manufacturer can do to improve sensitivity without compromising other aspects of the image quality.
Every sensor is made out of silicon, and silicon is around 70% efficient at converting photons of light into electrons of electricity. So the only things you can do to alter the sensitivity are change the pixel size, reduce losses in the colour and low pass filters, use better micro lenses and use various methods to prevent the wires and other electronics on the face of the sensor from obstructing the light. But all of these will only ever make very small changes to the sensor’s performance, as the key limiting factor is the silicon used to make the sensor.
 
This is why even though we have many different sensor manufacturers, if you take a similar sized sensor with a similar pixel count from different manufacturers the performance difference will only ever be small.
 
Better image processing with more advanced noise reduction can help reduce noise, which can be used to mimic greater sensitivity. But NR rarely comes without introducing other artefacts such as smear, banding or a loss of subtle detail. So there are limits to how much noise reduction you want to apply.
 

So, unless there is a new sensor technology breakthrough, we are unlikely to see any new camera come out with a large, actual improvement in sensitivity. We are also unlikely to see a sudden jump in resolution without a sensitivity or dynamic range penalty for a like for like sensor size. This is why Sony’s Venice and the Red cameras are moving to larger sensors, as this is the only realistic way to increase resolution without compromising other aspects of the image. It’s why the current crop of S35mm 4K cameras are all of very similar sensitivity, with similar dynamic range and similar noise levels.

 

A great example of this is the Sony A7s. It is more sensitive than most 4K S35 video cameras simply because it has a larger full frame sensor, so the pixels can be bigger and each pixel can capture more light. It’s also why cameras with smaller 4K sensors will tend to be less sensitive and in addition have lower dynamic range (because the pixel size determines how many electrons a pixel can store before it overloads).
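As a rough illustration of the pixel size argument (the sensor widths below are approximate, and the calculation ignores fill factor, micro lenses and readout differences):

```python
import math

def relative_pixel_light(sensor_width_mm, h_pixels):
    # light gathered per pixel scales with pixel area (pitch squared)
    pitch = sensor_width_mm / h_pixels
    return pitch ** 2

# approximate sensor widths for a 4K wide photosite layout
full_frame = relative_pixel_light(36.0, 4096)
s35 = relative_pixel_light(24.9, 4096)

ratio = full_frame / s35
stops = math.log2(ratio)
print(f"full frame pixel gathers {ratio:.1f}x the light, about {stops:.1f} stops")
```

With the same horizontal pixel count, each full frame pixel has roughly twice the area of its S35 counterpart, about a stop of extra light per pixel, which is broadly why full frame sensors like the one in the A7s hold the sensitivity edge.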