Category Archives: cinematography

Inside the Big Top. A short film from Glastonbury 2019. Shot on Venice.

As there is no Glastonbury Festival this year, the organisers and production company have been releasing some videos from last year. This video was shot mostly on a Venice using Cooke 1.8x anamorphics; the non-Venice material is from an FS5. It’s a behind the scenes look at the activities and performances around the Glastonbury Big Top and the Theatre and Circus fields.


Are We Missing Problems In Our Footage Because We Don’t Use Viewfinders Anymore?

I see it so many times on various forums and user groups – “I didn’t see it until I looked at it at home and now I find the footage is unusable”.

We all want our footage to be perfect all of the time, but sometimes something trips up the technology we are using and introduces problems into a shot. Because these problems are not normal we don’t expect them to be there, so we don’t necessarily look for them. But thinking about this, I suspect a lot of it is also because very often the only thing being used to view what is being shot is a tiny LCD screen.

For the first 15 years of my career the only viewfinders available were either a monocular viewfinder with a magnifier or a large studio style viewfinder (typically 7″). Frankly, if all you are using is a 3.5″ LCD screen you will miss many things!

I see many forum posts about these missed image issues on my phone, which has a 6″ screen. When I view the small versions of the posted examples I can rarely see the issue, but viewed full screen it becomes obvious. So what hope do you have of picking up these issues on location with a tiny monitor screen, often viewed too close to be in sharp focus?

A 20 year old will typically have a focus accommodation range of around 12 diopters, but by the time you get to 30 that decreases to about 8, by 40 to 5, and by 50 to just 1 or 2. What that means for the average person is that if you are young enough you might be able to focus on that small LCD while it’s close enough to your eyes to see it properly and spot potential problems. But by the time you get to 30, most people won’t be able to focus adequately on a 3.5″ LCD until it’s too far from their eyes to resolve everything it is capable of showing. If you are hand holding a camera with a 3.5″ screen 30cm or more from your eyes, there is no way you can see critical focus or small image artefacts; the screen is just too small. Plus, most people who don’t have their eyesight tested regularly don’t even realise it is deteriorating until it gets really bad.
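The near point (the closest distance at which the eye can still focus) is simply the reciprocal of the accommodation amplitude in diopters. A minimal sketch of the figures above (the ages and diopter values are the approximate ones quoted, not clinical data):

```python
def near_point_cm(accommodation_diopters):
    # closest focus distance in cm, assuming a normal (distance-corrected) eye:
    # near point (m) = 1 / accommodation (diopters)
    return 100.0 / accommodation_diopters

# approximate accommodation by age, as quoted above
for age, diopters in [(20, 12), (30, 8), (40, 5), (50, 2)]:
    print(f"Age {age}: can focus down to about {near_point_cm(diopters):.0f}cm")
```

By 50, with only 2 diopters in hand, nothing closer than about half a metre is ever in sharp focus, and at that distance a 3.5″ screen is far too small to show you critical focus.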

There are very good reasons why viewfinders have diopters/magnifiers. They allow you to see everything your screen can show, they make the image appear larger, and they keep out unwanted light. When you stop using them you risk missing things that can ruin a shot, whether that’s focus that’s almost but not quite right, something in the background that shouldn’t be there, or some subtle technical issue.

It’s all too easy to remove the magnifier and just shoot with the LCD, trusting that the camera will do what you hope it will. Often it’s the easiest way to shoot; we’ve all been there, I’m sure. BUT easy doesn’t mean best. When you remove the magnifier you are choosing easy shooting over the ability to see issues in your footage before it’s too late to do something about them.

ProRes Raw Now In Adobe Creative Cloud Mac Versions!

Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continue to push Blackmagic RAW that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.

Are LUTs Killing Creativity And Eroding Skills?

I see this all the time: “which LUT should I use to get this look?” or “I like that, which LUT did you use?”. Don’t get me wrong, I use LUTs and they are a very useful tool, but the now almost default resort to adding a LUT to log and raw material is killing creativity.

In my distant past I worked in and helped run a very well known post production facilities company. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client’s description, create the look they wanted with the controls on the equipment we had. This was a Digibeta tape to tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.

But the thing is, we didn’t have LUTs. We had knobs, dials and switches. We had to understand how to use the tools we had to get to where the client wanted to be. As a result every project had a unique look.

Today the software available to us is incredibly powerful and a tiny fraction of the cost of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.

You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows, for example FCP-X automatically applying a default LUT to log footage. Another is the belief that LUTs are how you grade, with everyone then using the same few LUTs on everything they shoot.

This creates two issues.

1: Everything looks the same – BORING!!!!

2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.

How many of the “slap on a LUT” clan realise that LUTs are camera and exposure specific? How many realise that LUTs can introduce banding and other image artefacts into footage that might otherwise be pristine?
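To see where that banding comes from, consider a toy example (not any real camera LUT): a 1D LUT sampled at just 17 points, applied to a smooth gradient. Applied by snapping each input to its nearest node, over a thousand distinct levels collapse to just 17 output values, which is exactly what banding looks like; interpolating between the nodes preserves the smooth ramp:

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 1024)   # a smooth ramp with 1024 distinct levels
nodes_in = np.linspace(0.0, 1.0, 17)     # 17 sample points of a hypothetical contrast curve
nodes_out = nodes_in ** 0.9

# nearest-node lookup: every input snaps to one of only 17 output values -> banding
banded = nodes_out[np.round(gradient * 16).astype(int)]

# linear interpolation between the nodes preserves a smooth, band-free ramp
smooth = np.interp(gradient, nodes_in, nodes_out)

print(len(np.unique(banded)), len(np.unique(smooth)))  # 17 vs 1024 distinct levels
```

Real LUT processors interpolate, of course, but coarse node counts, low bit depths and stacked LUTs all nudge things back towards the first case.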

If LUTs didn’t exist people would have to learn how to grade. And when I say “grade” I don’t mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation, for example separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently. People would need to learn how to create colour contrast as well as brightness contrast, and how to make highlights roll off in a pleasing way: all those things that go into creating great looking images from log or raw footage.

Then, perhaps, because people are doing their own grading they would start to better understand colour, gamma, contrast etc, etc. Most importantly because the look created will be their look, from scratch, it would be unique. Different projects from different people would actually look different again instead of each being a clone of someone else’s work.

LUTs are a useful tool, especially on set for an approximation of how something could look. But in post production they restrict creativity, and many people have no idea how to grade or how they can manipulate their material.

Temporal Aliasing – Beware!

As camera resolutions increase and the amount of detail and texture that we can record increases, we need to be more and more mindful of temporal aliasing.

Temporal aliasing occurs when the differences between the frames in a video sequence create undesirable sequences of patterns that move from one frame to the next, often appearing to travel in the opposite direction to any camera movement. The classic example is the wagon wheels going backwards effect often seen in old cowboy movies. The camera’s shutter captures the spokes of the wheels in a different position in each frame, but the timing of the shutter relative to the position of the spokes means the wheels appear to go backwards rather than forwards. This was almost impossible to prevent with film cameras, which were stuck with a 180 degree shutter: there was no way to blur the motion of the spokes so that it was contiguous from one frame to the next. A 360 degree shutter would have prevented the problem in most cases, but it’s also reasonable to note that at 24fps a 360 degree shutter would have introduced an excessive amount of motion blur elsewhere.

Another form of temporal aliasing often occurs when you have rapidly moving grass, crops, reeds or fine branches. Let me try to explain:

You are shooting a field of wheat and the stalks are very small in the frame, almost too small to discern individually. As the stalks move left, perhaps blown by the wind, each stalk is captured in each frame a little further to the left, perhaps by just a few pixels. But in the video they appear to be going the other way. This is because every stalk looks the same as all the others. In the following frame the original stalk may have moved, say, 6 pixels to the left, but a different, identical looking stalk now sits just 2 pixels to the right of where the original was. Because both stalks look the same, it appears that the stalk has moved 2 pixels right instead of 6 pixels left. As the wind speed and the movement of the stalks changes they may appear to move randomly left or right, or a combination of both. The image looks very odd, often a jumbled mess, as perhaps the tops of the stalks appear to move one way while lower parts appear to go the other.
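The wheat stalk example reduces to simple arithmetic: for a repeating pattern, the eye perceives the smallest shift that could explain the change between frames, which is the true shift wrapped into plus or minus half the pattern spacing. A quick sketch (the 8 pixel spacing is just an illustrative assumption):

```python
def apparent_shift(true_shift_px, pattern_period_px):
    # perceived per-frame motion of a repeating pattern:
    # the true shift, wrapped into the range +/- half a period
    half = pattern_period_px / 2.0
    return (true_shift_px + half) % pattern_period_px - half

# identical stalks spaced 8px apart, each moving 6px left (+6) per frame:
# the pattern appears to move 2px the other way (-2)
print(apparent_shift(6, 8))
```

Any true shift of more than half the pattern spacing per frame produces this kind of reversed or jumbled apparent motion, which is why fine, dense, repetitive textures are the worst offenders.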

There is a great example of temporal aliasing in this clip on Pond5: https://www.pond5.com/stock-footage/item/58471251-wagon-wheel-effect-train-tracks-optical-illusion-perception

Notice in the Pond5 clip how it’s not only the railway sleepers that appear to move in the wrong direction or at the wrong speed; the stones between the sleepers also appear to show a kind of boiling noise.

Like the old movie wagon wheels, one thing that makes this worse is too fast a shutter speed. The more you freeze the motion of the offending objects or textures in each frame, the higher the risk of temporal aliasing with moving textures or patterns. Often a slower shutter speed will introduce enough motion blur that the motion looks normal again. You may need to experiment with different shutter speeds to find the sweet spot where the temporal aliasing goes away or is minimised. If shooting at 50fps or faster, try a 360 degree (1/50th) shutter, as by the time you get to a 1/50th shutter motion is already as crisp as it needs to be for most types of shot, unless you are intending to do some form of frame by frame motion analysis.
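Converting between shutter angle and shutter speed is a one-line formula: exposure time per frame = (angle / 360) / frame rate. A quick sketch:

```python
def shutter_time_s(fps, shutter_angle_deg):
    # exposure time per frame in seconds for a given frame rate and shutter angle
    return (shutter_angle_deg / 360.0) / fps

print(1 / shutter_time_s(24, 180))  # 180 degrees at 24fps -> 1/48th second
print(1 / shutter_time_s(50, 360))  # 360 degrees at 50fps -> 1/50th second
```

This is why a 360 degree shutter at 50fps gives the 1/50th exposure mentioned above.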

How We Judge Exposure Looking At An Image And The Importance Of Viewfinder Contrast.

This came out of a discussion about viewfinder brightness where the complaint was that the viewfinder on the FX9 was too bright when compared side by side with another monitor. It got me really thinking about how we judge exposure when purely looking at a monitor or viewfinder image.

To start with I think it’s important to understand a couple of things:

1: Our perception of how bright a light source is depends on the ambient light levels. A candle in a dark room looks really bright, but outside on a sunny day it is not perceived as being so bright. But of course we all know that the light being emitted by that candle is exactly the same in both situations.

2: Between the middle grey of a grey card and the white of a white card there are about 2.5 stops. Faces and skin tones fall roughly half way between middle grey and white. Taking that a step further, between what most people perceive as black (a black card or black shirt) and a white card there are around 5 to 6 stops, and faces will always be roughly 3/4 of the way up that brightness range, somewhere around 4 stops above black. It doesn’t matter whether that’s outside on a dazzlingly bright day in a Middle Eastern desert or on a dull overcast winter’s day in the UK; those relative levels never change.
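Those relative levels are easy to verify with a little arithmetic. Taking some commonly assumed reflectances (a white card around 90%, middle grey 18%, a black card around 2.5%; the exact figures are my assumptions for illustration):

```python
import math

def stops_between(darker, brighter):
    # number of photographic stops (doublings of light) between two levels
    return math.log2(brighter / darker)

white, mid_grey, black = 0.90, 0.18, 0.025        # assumed reflectances
face = math.sqrt(mid_grey * white)                # half way between grey and white, in stops

print(f"grey to white:    {stops_between(mid_grey, white):.1f} stops")  # ~2.3
print(f"black to white:   {stops_between(black, white):.1f} stops")     # ~5.2
print(f"face above black: {stops_between(black, face):.1f} stops")      # ~4.0
```

The numbers land almost exactly where the text puts them: about 2.5 stops grey to white, 5 to 6 stops black to white, and a face around 4 stops above black.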

Now think about this:

If you look at a picture on a screen and the face is significantly brighter than middle grey and much closer to white than middle grey, what will you think? To most it will almost certainly appear over exposed, because we know that in the real world a face sits roughly 3/4 of the way up the relative brightness range and roughly half way between middle grey and white.

What if the face is much darker than white and close to middle grey? Then it will generally look under exposed, as relative to black, white and middle grey the face is too dark.

The key point here is that we make these exposure judgments based on where faces and other similar things are relative to black and white. We don’t know the actual intensity of the white, but we do know how bright a face should be relative to white and black.

This is why it’s possible to make an accurate exposure assessment using a 100 Nit monitor or a 1000 Nit daylight viewable monitor. Provided the contrast range of the monitor is correct and black looks black, middle grey is in the middle and white looks white then skin tones will be 3/4 of the way up from black and 1/4 down from white when the image is correctly exposed.

But here’s the rub: If you put the 100 Nit monitor next to the 1000 Nit monitor and look at both at the same time, the two will look very, very different. Indoors in a dim room the 1000 Nit monitor will be dazzlingly bright, meanwhile outside on a sunny day the 100 Nit monitor will be barely viewable. So which is right?

The answer is they both are. Indoors, with controlled light levels or when covered with a hood or loupe then the 100 Nit monitor might be preferable. In a grading suite with controlled lighting you would normally use a monitor with white at 100 nits. But outside on a sunny day with no shade or hood the 1000 Nit monitor might be preferable because the 100 nit monitor will be too dim to be of any use.

Think of this another way: take both monitors into a dark room and take a photo of each with your phone. The phone’s camera will adjust its exposure so that in the end result the two screens will look the same. Our eyes have irises just like a camera’s and do exactly the same thing: they adjust so that the brightness is within the range our eyes can deal with. So the actual brightness is only of concern relative to the ambient light level.

This presents a challenge to designers of viewfinders that can be used both with and without a loupe or shade, such as the LCD viewfinder on the FX9, which can be used both with the loupe/magnifier and without it. How bright should you make it? Not so bright that it’s dazzling when using the loupe, but bright enough to be useful on a sunny day without it.

The actual brightness isn’t critical (beyond whether it’s bright enough to be seen or not) provided the perceived contrast is right.

When setting up a monitor or viewfinder it’s the adjustment of the black level and black pedestal that alters the contrast of the image (and the control for this is, confusingly, called the brightness control). This brightness control is the critical one, because if it raises the blacks too much the shadows and mids become brighter relative to white and less contrasty, so you will tend to expose lower in an attempt to restore good contrast and a normal looking mid range. Exposing brighter makes the mids look excessively bright relative to where white is and to the black screen surround.

If the brightness is set too low, pulling the blacks and mids down, then you will tend to over expose in an attempt to see details and textures in the shadows and to make the mids look normal.

It’s all about the monitor or viewfinder’s contrast and where everything sits between the darkest and brightest parts of the image. The peak brightness (equally confusingly, set by the contrast control) is largely irrelevant, because our perception of how bright it is depends entirely on the ambient light level. Just don’t over drive the display.

We don’t look at a VF and think “ah, that face is 100 nits”. We think “that face is 3/4 of the way up between black and white”, because that’s exactly how we see faces in all kinds of light conditions: relative levels, not specific brightness.

So far I have been discussing SDR (standard dynamic range) viewfinders. Thankfully I have yet to see an HDR viewfinder, because one could actually make judging exposure more difficult: “white”, such as a white card, isn’t very bright in the world of HDR, and an HDR viewfinder would have a far greater contrast range than just the 5 or 6 stops of an SDR finder. The viewfinder’s peak brightness could well be 10 times or more brighter than the white of a white card. That complicates things, as first you would need to judge and assess where white sits within a very big brightness range. But I guess I’ll cross that bridge when it comes along.

What’s So Magical About Full Frame – Or Is It All Just Another Internet Myth?

FIRST THINGS FIRST:
The only way to change the perspective of a shot is to change the position of the camera relative to the subject or scene. Just put a 1.5x wider lens on an s35 camera and you have exactly the same angle of view as a Full Frame camera. It is an internet myth that Full Frame changes the perspective or the appearance of the image in a way that cannot be exactly replicated with other sensor or frame sizes. The only thing that changes perspective is how far you are from the subject. It’s one of those laws of physics and optics that can’t be broken. The only way to see more or less around an object is by changing your physical position.

The only thing that changing the focal length or sensor size changes is magnification, and you can change the magnification either by changing sensor size or focal length; the effect is exactly the same either way. So in terms of perspective, angle of view or field of view, an 18mm s35 setup will produce an identical image to a 27mm FF setup. The only difference may be in DoF, depending on the aperture: f4 on FF will provide the same DoF as f2.8 on s35, and if both lenses are at f4 the FF image will have a shallower DoF.
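The equivalence boils down to the crop factor, roughly 1.5x between FF and s35. Multiply the s35 focal length and f-number by the crop factor and you get the FF combination with the same angle of view and the same DoF. A minimal sketch (1.5 is the usual approximation; exact sensor dimensions vary by camera):

```python
CROP_FACTOR = 1.5  # approximate FF-to-s35 crop factor

def ff_equivalent(s35_focal_mm, s35_f_number):
    # FF focal length and f-number giving the same angle of view
    # and the same depth of field as the s35 combination
    return s35_focal_mm * CROP_FACTOR, s35_f_number * CROP_FACTOR

focal, f_num = ff_equivalent(18, 2.8)
print(f"18mm f2.8 on s35 is equivalent to {focal:.0f}mm f{f_num:.1f} on FF")
```

So an 18mm at f2.8 on s35 and a 27mm at around f4 on FF produce matching images, which is exactly the pairing described above.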

Again though, physics plays a part here: if you want that shallower DoF from a FF camera then the FF lens will normally need to have the same f-number as the s35 lens. To do that the elements in the FF lens need to be bigger, gathering twice as much light so it can spread the same amount of light across the roughly twice as large surface area of the FF sensor. So generally you will pay more for a FF lens of comparable, like for like aperture than for an s35 lens. Or you simply won’t be able to get an equivalent in FF because the optical design becomes too complex, too big, too heavy or too costly.
This in particular is a big issue for parfocal zooms. At FF and larger imager sizes they can be fast or have a big zoom range, but to do both is very, very hard and typically requires some very exotic glass. You won’t see anything like the affordable super 35mm Fujinon MKs in full frame, certainly not at anywhere near the same price. This is why for decades 2/3″ sensors, and 16mm film before that, ruled the roost for TV news: lenses with big zoom ranges and large fast apertures were relatively affordable.
Perhaps one of the commonest complaints I see today with larger sensors is “why can’t I find an affordable, fast, parfocal zoom with more than a 4x zoom range?”. Such lenses do exist: for s35 you have lenses like the $22K Canon CN7 17-120mm T2.9, which is pretty big and pretty heavy. For Full Frame the nearest equivalent is the more expensive $40K Fujinon Premista 28-100mm T2.9, a really big lens weighing in at almost 4kg. But look at the numbers: both will give a very similar AoV on their respective sensors at the wide end, but the much cheaper Canon has a greatly extended zoom range and will get a tighter shot than the Premista at the long end. Yes, the DoF will be shallower with the Premista, but you are paying almost double for a significantly heavier lens with a much reduced zoom ratio. You may need both the $40K Premista 28-100mm and the Premista 80-250mm to cover everything the Canon does (and a bit more). So as you can see, getting that extra shallow DoF can be very costly. And it’s not so much about the sensor, but more about the lens.
The History of large formats:
It is worth considering that back in the 50’s and 60’s we had VistaVision, a horizontal 35mm format equivalent to 35mm FF, plus 65mm and a number of other larger than s35 formats, all in an effort to get better image quality.
VistaVision (The closest equivalent to 35mm Full Frame).
VistaVision didn’t last long, about 7 or 8 years, because better film stocks meant similar image quality could be obtained from regular s35mm film, shooting VistaVision was difficult due to the very shallow DoF and focus challenges, and it cost twice as much as regular 35mm film. It did make a brief comeback in the 70’s for shooting special effects sequences where very high resolutions were needed. VistaVision was superseded by CinemaScope, which uses 2x anamorphic lenses and conventional vertical super 35mm film, and CinemaScope was subsequently largely replaced by 35mm Panavision (the two being virtually the same thing and often used interchangeably).
65mm formats.
At around the same time there were various 65mm (with 70mm projection) formats, including Super Panavision, Ultra Panavision and Todd-AO. These too struggled, and very few films were made using 65mm film after the end of the 60’s. There was a brief resurgence in the 80’s, and again recently there have been a few films, but production difficulties and cost mean they tend to be niche productions.
Historically there have been many attempts to establish mainstream larger than s35 formats. But by and large audiences couldn’t tell the difference, and even when they could they wouldn’t pay extra for it. Obviously today the cost implication is tiny compared to the extra cost of 65mm film or VistaVision. But the bottom line remains that normally the audience won’t actually see any difference, because in reality there isn’t one, other than perhaps a marginal resolution increase. And it is harder to shoot FF than s35: comparable lenses are more expensive, lens choices are more limited, and focus is more challenging at longer focal lengths or large apertures. If you get carried away with too large an aperture you can get miniaturisation and cardboarding effects if you are not careful (these can occur with s35 too).
Can The Audience Tell – Does The Audience Care?
Cinema audiences have not been complaining that the DoF isn’t shallow enough, or that the resolution isn’t high enough (Arri’s success has proved that resolution is a minor image quality factor). But they are noticing focus issues, especially in 4K theaters.
So while FF and the other larger formats are here to stay, Full Frame is not the be-all and end-all. Many, many people believe that FF has some kind of magic that makes the images different to smaller formats because they “read it on the internet so it must be true”. I think sometimes things read on the internet create a placebo effect: read something enough times and you will actually become convinced that the images are different, even when in fact they are not. Once they realise that actually it isn’t different, I’m quite sure many will return to s35, because that does seem to be the sweet spot where DoF and focus are manageable and IQ is plenty good enough. Only time will tell, but history suggests s35 isn’t going anywhere any time soon.

Today’s modern cameras give us the choice to shoot either FF or s35. Either can result in an identical image; it’s only a matter of aperture and focal length. So pick the one you feel most comfortable with for your production. FF is nice, but it isn’t magic.

Really it’s all about the lens.

The really important thing is your lens choice. I believe that what most people put down as “the full frame effect” is nothing to do with the sensor size but the qualities of the lenses they are using. Full frame stills cameras have been around for a long time, and as a result there is a huge range of very high quality glass to choose from (as well as cheaper budget lenses). In the photography world APS-C, which is similar in size to super 35mm movie film, has always been considered a lower cost or budget option, and many of the lenses designed for APS-C have been built down to a price rather than up to a quality. This makes a difference to the way the images may look. So often Full Frame lenses offer better quality or a more pleasing look, simply because the glass is better.

I recently shot a project using Sony’s Venice camera over 2 different shoots. For the first shoot we used Full Frame and Sigma Cine Primes. The images we got looked amazing. For the second shoot, where we needed at times to use higher frame rates, we shot super 35 with a mix of Fujinon MK zooms and Sony G Master lenses. Again the images looked amazing, and the client and the end audience really can’t tell the footage from the first shoot from the footage from the second.

Downsampling from 6K.

One very real benefit that shooting 6K full frame does bring, with both the FX9 and Sony Venice (or any other 6K FF camera), is that when you shoot at 6K and downsample to 4K you get a higher resolution image with better colour and, in most cases, lower noise than if you had started at 4K. This is because the Bayer sensors that all the current large sensor cameras use don’t resolve a full 4K when shooting at 4K. To get true 4K you need to start with 6K.
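As a rough rule of thumb, a Bayer sensor resolves only around 70% of its photosite count after demosaicing (the exact figure varies with the debayer algorithm; 0.7 and the round photosite counts below are my assumptions for illustration). A quick sanity check of why 6K capture is needed for true 4K delivery:

```python
BAYER_FACTOR = 0.7  # assumed effective resolution after demosaicing

for label, photosites in [("4K", 4096), ("6K", 6000)]:
    effective = photosites * BAYER_FACTOR
    print(f"{label} capture (~{photosites} photosites) resolves roughly {effective:.0f} pixels")
```

A 4K Bayer capture only resolves somewhere around 2.9K of real detail, while a 6K capture comfortably exceeds the 4K needed for the downsampled deliverable.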

Struggling With Blue LED Lighting? Try Turning On The Adaptive Matrix.

It’s a common problem. You are shooting a performance or event where LED lighting has been used to create dramatic coloured lighting effects. The intense blue from many types of LED stage lights can easily overload the sensor and instead of looking like a nice lighting effect the blue light becomes an ugly splodge of intense blue that spoils the footage.

Well there is a tool hidden away in the paint settings of many recent Sony cameras that can help. It’s called “adaptive matrix”.

When adaptive matrix is enabled and the camera sees intense blue light, such as the light from a blue LED stage light, the matrix adapts and reduces the saturation of the blue colour channel in the problem areas of the image. This can greatly improve the way such lights and lighting look. But be aware that if you are shooting objects with very bright blue colours, perhaps even a bright blue sky, the adaptive matrix may desaturate them. Because of this, the adaptive matrix is normally turned off by default.

If you want to turn it on, it’s normally found in the camera’s paint and matrix settings, and it’s simply a case of setting adaptive matrix to on. I recommend turning it back off again when you don’t actually need it.

Most of Sony’s broadcast quality cameras produced in the last 5 years have the adaptive matrix function, that includes the FS7, FX9, Z280, Z450, Z750 and many others.

Guess The Lens! A little bit of fun and an interesting test.

Last week I was at O-Video in Bucharest preparing for a workshop the following day. They are a full service dealer. We had an FX9 for the workshop and they had some very nice lenses. So with their help I decided to do a very quick comparison of the lenses we had. I was actually very surprised by the results. At the end of the day I definitely had a favourite lens. But I’m not going to tell you which one yet.

The 5 lenses we tested were: Rokinon Xeen, Cooke Panchro 50mm, Leitz (formerly Leica) Thalia, Zeiss Supreme Radiance, and the Sony 28-135mm zoom that can be purchased as part of a kit with the FX9.

I included a strong backlight in the shot to see how the different lenses dealt with flare from off-axis lights. 2 of the lenses produced very pronounced flare, so for those lenses you will see two frame grabs. One with the flare and one with the back light flagged off.

I used S-Cinetone on the FX9 and set the aperture to f2.8 for all of the lenses except the Sony 28-135mm. For that lens I added 6dB of gain to normalise the exposure; you should be able to figure out which of the examples is the Sony zoom.

One of the lenses was an odd focal length compared to all the others. Some of you might be able to work out which one that is, but again I’m not going to tell you just yet.

Anyway, enjoy playing guess the lens. This isn’t intended to be an in depth test. But it’s interesting to compare lenses when you have access to them.  I’ll reveal which lens is which in a couple of weeks in the comments. You can click on each image to enlarge it.

Big thanks to everyone at O-Video Bucharest for making this happen.

Lens 1 with flare from backlight.
Lens 1 with backlight flagged to reduce the flare.
Lens 2
Lens 3
Lens 4
Lens 5 with flare from backlight.
Lens 5 with backlight masked to kill the flare.

The “E” in “E-Mount” stands for Eighteen.

A completely useless bit of trivia for you is that the “E” in E-mount stands for eighteen. 18mm is the E-mount flange back distance, the distance between the sensor and the face of the lens mount. Because the E-mount is only 18mm while most other DSLR systems have a flange back distance of around 40mm, there are 20mm or more in hand that can be used for adapters between the camera body and 3rd party lenses with different mounts.

Here’s a little table of some common flange back distances:

MOUNT                   FLANGE BACK   DIFFERENCE FROM E-MOUNT
E-mount                 18mm          –
Sony FZ (F3/F5/F55)     19mm          1mm
Canon EF                44mm          26mm
Nikon F                 46.5mm        28.5mm
PL                      52mm          34mm
Arri LPL                44mm          26mm
Sony A / Minolta        44.5mm        26.5mm
M42                     45.46mm       27.46mm
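The difference column is just each mount’s flange back distance minus the E-mount’s 18mm, and that difference is the maximum depth available for an adapter between an E-mount body and a lens in that mount. A quick sketch using the figures from the table:

```python
E_MOUNT_MM = 18.0  # E-mount flange back distance

flange_back_mm = {
    "Sony FZ (F3/F5/F55)": 19.0,
    "Canon EF": 44.0,
    "Arri LPL": 44.0,
    "Sony A / Minolta": 44.5,
    "M42": 45.46,
    "Nikon F": 46.5,
    "PL": 52.0,
}

for mount, fb in flange_back_mm.items():
    # depth left over for an adapter between an E-mount body and this lens mount
    print(f"{mount}: {fb - E_MOUNT_MM:.2f}mm available for an adapter")
```

This is why almost any of these lenses can be adapted to an E-mount body, while going the other way (an E-mount lens on a deeper mount) is optically impossible without corrective glass.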