Category Archives: Shooting Tips

Beware the LC709 LUT double exposure offset.

The use of the LC709 Type A LUT in Sony’s Cinealta cameras such as the PXW-FS7 or PMW-F55 is very common. This LUT is popular because it was designed to mimic the Arri cameras when in their Rec-709 mode. But before rushing out to use this LUT or any of the other LC709 series of LUTs there are some things to consider.

The Arri cameras are rarely used in Rec-709 mode for anything other than quick turnaround TV. You certainly wouldn’t normally record this for feature or drama productions, and it isn’t the “Arri Look”. The Arri look normally comes as a result of shooting with Arri’s LogC and then grading that to get the look you want. The Rec-709 mode exists to provide a viewable image on set. It has more contrast than LogC and uses Rec-709 color primaries so the colors look right, but it isn’t Rec-709. It squeezes almost all of the camera’s capture range into something that can be viewed on a 709 monitor, so it looks quite flat.

Because a very large dynamic range is being squeezed into a range suitable for viewing on a regular, standard dynamic range monitor, the white level is much reduced compared to regular Rec-709. In fact white (such as a white piece of paper) should be exposed at around 70%, and skin tones at around 55-60%.

If you are shooting S-Log on a Sony camera and using this LUT to monitor, and you expose using conventional levels (white at 85-90%, skin tones at 65-70%), then you will be offsetting your exposure by around +1.5 stops. On its own this isn’t typically going to be a problem. In fact I often come across people who tell me that they always shoot at the camera’s native EI using this LUT and get great, low noise pictures. When I dig a little deeper I often find that they are exposing white at 85% via the LC709 LUT, so in reality they are actually shooting with an exposure the equivalent of +1 to +1.5 stops over the base level.

Where you can really run into problems is when you have already added an exposure offset. Perhaps you are shooting on an FS7, where the native ISO is 2000, using an EI of 800. This is a little over a +1 stop exposure offset. Then if you use one of the LC709 LUTs and expose the LUT so white is at 90% and skin tones at 70%, you are adding another +1.5 stops, so your total exposure offset is approaching 3 stops. An offset this large is rarely necessary, can be tricky to deal with in post, and will also impact your highlight range.
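To make the arithmetic above concrete, here is a minimal sketch of how the two offsets stack up. The function names are my own for illustration; the figures (native ISO 2000, EI 800, ~1.5 stops from exposing the LUT at Rec-709 levels) are the ones used in the example above.

```python
import math

def ei_offset_stops(native_iso: float, ei: float) -> float:
    """Stops of extra exposure from rating the camera below its native ISO."""
    return math.log2(native_iso / ei)

def total_offset_stops(native_iso: float, ei: float,
                       lut_overexposure_stops: float) -> float:
    """Combined offset: the EI offset plus any extra exposure added by
    exposing the monitoring LUT at brighter levels than it was designed for."""
    return ei_offset_stops(native_iso, ei) + lut_overexposure_stops

# FS7 example from the text: native ISO 2000 rated at EI 800 (~+1.3 stops),
# plus ~1.5 stops from exposing an LC709 LUT at conventional 709 levels.
offset = total_offset_stops(2000, 800, 1.5)  # approaching 3 stops
```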

So just be aware that different LUTs require different white and grey levels, and make sure you are exposing the LUT at its correct level so that you are not adding an additional offset to your desired exposure.


How to get the best White Balance (Push Auto WB).

Getting a good white balance is critical to getting a great image, especially if you are not going to be color correcting or grading your footage. When shooting traditionally – i.e. not with log or raw – a common way to set the camera’s white balance is to use the one-push auto white balance combined with a white target. You point the camera at the white target, then press the WB button (normally found at the front of the camera just under the lens).
The white target needs to occupy a good portion of the shot but it doesn’t have to completely fill the shot. It can be a pretty small area, 30% is normally enough. The key is to make sure that the white or middle grey target is obvious enough and at the right brightness that the camera uses the right part of the image for the white balance. For example, you could have a white card filling 50% of the screen, but there might be a large white car filling the rest of the shot. The camera could be confused by the car if the brightness of the car is closer to the brightness the camera wants than the white/grey card.
The way it normally works is that the camera looks for a uniformly bright part of the image with very little saturation (color) somewhere between 45 and 90 IRE. The camera will assume this area to be the white balance target. It then adjusts the gain of the red and blue channels so that the saturation in that part of the image becomes zero, and as a result there is no color cast over the white or grey target.
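The behaviour described above can be sketched as a few lines of code. This is a simplified illustration of the principle, not Sony’s actual firmware algorithm; the brightness window, saturation threshold and function name are assumptions for the example.

```python
def push_auto_wb(pixels):
    """Simplified one-push auto white balance: average a low-saturation,
    mid-bright region, then compute R and B gains that make it neutral
    (R = G = B). `pixels` is a list of (r, g, b) tuples in 0-100 IRE-like
    units. Illustrative sketch only."""
    # Candidate "white target" pixels: mid-bright and nearly colourless.
    candidates = [
        (r, g, b) for (r, g, b) in pixels
        if 45 <= (r + g + b) / 3 <= 90
        and max(r, g, b) - min(r, g, b) < 5  # very little saturation
    ]
    if not candidates:
        raise ValueError("no plausible white target found")
    n = len(candidates)
    avg_r = sum(p[0] for p in candidates) / n
    avg_g = sum(p[1] for p in candidates) / n
    avg_b = sum(p[2] for p in candidates) / n
    # Gains that pull the target to neutral; green is the reference channel.
    return avg_g / avg_r, avg_g / avg_b

# A slightly warm grey card: red reads a touch high, blue a touch low,
# so the camera pulls red gain down and pushes blue gain up.
r_gain, b_gain = push_auto_wb([(62.0, 60.0, 58.0)] * 100)
```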
 
If you fill the frame with your white/grey card then there can be no confusion. But that isn’t always possible or practical as the card needs to be in the scene and under the light you are balancing for rather than just directly in front of the lens. The larger your white or grey card is the more likely it is that you will get a successful and accurate white balance – provided it’s correctly exposed and in the right place.
 
The white target needs to be in reasonable focus; if it is out of focus, the blurred edge will blend color from any background objects into the white area. This could upset the white balance, as the camera uses an average value for the whole of the white area, so any color bleed at the edges due to defocussing may result in a small color offset.
 
You can use a white card or grey card (white paper at a push, but most paper is bleached slightly blue to make it look whiter to our eyes, and this will offset the white balance). The best white balance is normally achieved by using a good quality photography grey card. As the grey card sits lower down in the brightness range, any color cast will be more saturated, so when the camera offsets the R and B gain to eliminate the color the result will be more accurate.
 
The shiny white plastic cards often sold as white balance cards are often not good choices for white balance. They are too bright and shiny. Any reflections off a glossy white card will seriously degrade the camera’s ability to perform an accurate white balance, as the highlights will be in the camera’s knee or roll-off and as a result have much reduced saturation and reduced R and B gain, making it harder for the camera to get a good white balance. In addition, the plastics used tend to yellow with age, so if you do use a plastic white balance card make sure it isn’t past its best.
Don’t try to white balance off clouds or white cars, they tend to introduce offsets into the white balance.
 
Don’t read too much into the Kelvin reading the camera might give. These values are only a guide; different lenses and many other factors will introduce inaccuracies. It is not at all unusual for two identical cameras to give two different Kelvin values even though both are perfectly white balance matched. If you are not sure that your white balance is correct, repeat the process. If you keep getting the same Kelvin number it’s likely you are doing it correctly.

Use the camera’s media check to help ensure you don’t get file problems.

Any of the Sony cameras that use SxS or XQD cards include a media check and media restore function that is designed to detect any problems with your recording media or the files stored on that media.
However the media check is normally only performed when you insert a card into the camera; it is not done when you eject a card, as the camera never knows when you are about to do that.
So my advice is: when you want to remove the card to offload your footage, ensure the light next to the card is green, which means it should be safe to remove. Pop the card out as you normally would, but then re-insert it and wait for the light to go from red back to green. Check the LCD/VF for any messages; if there are none, take the card out and do your offload as normal.
 
Why? Every time you put an XQD or SxS card into the camera the card and files stored on it are checked for any signs of any issues. If there is a problem the camera will give you a “Restore Media” warning. If you see this warning always select OK and allow the camera to repair whatever the problem is. If you don’t restore the media and you then make a copy from the card, any copy you make will also be corrupt and the files may be inaccessible.
Once the files have been copied from the card it is no longer possible to restore the media. If there is a problem with the files on the card, the restore can only be done by the camera, before offload. So this simple check, which takes just a few seconds, can save a whole world of hurt. I wish there was a media check button you could press to force the check, but there isn’t. However this method works.
It’s also worth knowing that Catalyst Browse and the old Media Browser software perform a data integrity check if you directly attach an SxS or XQD card to the computer and access the card from the software. If a problem is found you will get a message telling you to return the media to the camera and perform a media restore. But if this is some time after the shoot and you don’t have the camera to hand, this can be impossible. Which is why I like to check my media by re-inserting it back into the camera so that it gets checked for problems before the end of the shoot.

What shutter speed to use if shooting 50p or 60p for 50i/60i conversion.

An interesting question got raised on Facebook today.

What shutter speed should I use if I am shooting at 50p so that my client can later convert the 50p to 50i? Of course this would also apply to shooting at 60p for 60i conversion.

Let’s first of all make sure that we all understand that what’s being asked for here is to shoot at 50 (60) progressive frames per second so that the footage can later be converted to 25 (30) frames per second interlaced – which has 50 (60) fields.

If we just consider normal 50p or 60p shooting, the shutter speed that you choose depends on many factors, including what you are shooting, how much light you have, and personal preference.

1/48th or 1/50th of a second is normally considered the slowest shutter speed at which motion blur in a typical frame no longer significantly softens the image. This is why old point and shoot film cameras almost always had a 1/50th shutter; it was the slowest you could get away with.

Shooting with a shutter that is open for half of each frame period is also known as using a 180 degree shutter, a very necessary practice with a film movie camera due to the way the mechanical shutter must be closed while the film is physically advanced to the next frame. But it isn’t essential to have the closed shutter period with an electronic camera, as there is no film to move, so you don’t have to use a 180 degree shutter if you don’t want to.
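The relationship between shutter angle, frame rate and shutter speed is simple arithmetic, sketched below (function name is my own):

```python
def shutter_speed_from_angle(fps: float, angle_degrees: float) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle.
    A 180 degree shutter exposes for half of each frame period;
    a 360 degree shutter exposes for the whole frame period."""
    return (angle_degrees / 360.0) / fps

# 24 fps with a 180 degree shutter -> 1/48th of a second.
t_film = shutter_speed_from_angle(24, 180)
# 50 fps with a 360 degree shutter -> 1/50th, the smooth-motion option.
t_open = shutter_speed_from_angle(50, 360)
```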

There is no reason why you can’t use a 1/50th or 1/60th shutter when shooting at 50fps or 60fps, especially if you don’t have a lot of light to work with. 1/50(1/60) at 50fps(60fps) will give you the smoothest motion as there are no breaks in the motion between each frame. But many people like to sharpen up the image still further by using 1/100th(1/120th) to reduce motion blur.  Or they prefer the slightly steppy cadence this brings as it introduces a small jump in motion between each frame. Of course 1/100th needs twice as much light. So there is no hard and fast rule and some shots will work better at 1/50th while others may work better at 1/100th.

However if you are shooting at 50fps or 60fps so that it can be converted to 50i or 60i, with each frame becoming a field, then the “normal” shutter speed you should use will be 1/50th or 1/60th, to mimic a 25fps-50i or 30fps-60i camera, which would typically have its shutter running at 1/50th or 1/60th. 1/100th (1/120th) at 50i (60i) can look a little over sharp due to an increase in aliasing, because an interlaced video field only has half the resolution of the full frame. This is particularly true of 50p converted to 50i, as there is no in-camera anti-aliasing and each frame simply has its resolution divided by 2 to produce the equivalent of a single field. When you shoot with a “real” 50i camera, line pairs on the sensor are combined and read out together as a single field line, and this slightly softens and anti-aliases each of the fields. 50i has lower vertical resolution than 25p, but with simple software conversions from 50p to 50i this anti-aliasing does not occur. If you combine that with a faster than typical shutter speed, the interlaced image can start to look over sharp and may have jaggies or color moire not present in the original 50/60p footage.
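The difference between the naive software conversion and a true interlaced sensor readout can be shown with a toy example. This is a sketch of the principle only (function names are my own); real converters and sensors are more sophisticated.

```python
def field_naive(frame):
    """Naive software 50p->50i: keep every other line of the frame.
    No anti-aliasing, so fine vertical detail can alias."""
    return frame[::2]

def field_line_pair_averaged(frame):
    """Mimics a 'real' interlaced readout as described above: each field
    line is the average of a pair of sensor lines, which slightly
    softens (anti-aliases) the field."""
    return [
        [(a + b) / 2 for a, b in zip(row0, row1)]
        for row0, row1 in zip(frame[::2], frame[1::2])
    ]

# A frame of alternating black/white lines - worst-case vertical detail.
frame = [[0, 0, 0], [100, 100, 100]] * 2
naive = field_naive(frame)                   # keeps only the black lines
averaged = field_line_pair_averaged(frame)   # softens to mid grey
```

Note how the naive field loses the white lines entirely (aliasing), while the line-pair average turns the detail into a uniform mid grey (softening).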

Using LUTs for exposure – choosing the right LUT.

If using a LUT to judge the exposure of a camera shooting log or raw it’s really important that you fully understand how that LUT works.

When a LUT is created it will expect a specific input range and convert that input range to a very specific output range. If you change the input range then the output range will be different and it may not be correct. As an example, a LUT designed and created for use with S-Log2 should not be used with S-Log3 material, as the higher middle grey level used by S-Log3 would mean that the mid range of the LUT’s output would be much brighter than it should be.

Another consideration comes when you start offsetting your exposure levels, perhaps to achieve a brighter log exposure so that after grading the footage will have less noise.

Let’s look at a version of Sony’s 709(800) LUT designed to be used with S-Log3 for a moment. This LUT expects middle grey to come in at 41% and it will output middle grey at 43%. It will expect a white card to be at 61% and it will output that same shade of white at a little over 85%. Anything on the S-Log3 side brighter than 61% (white) is considered a highlight and the LUT will compress the highlight range (almost 4 stops) into the output range between 85% and 109%, resulting in flat looking highlights. This is all perfectly fine if you expose at the levels suggested by Sony. But what happens if you do expose brighter and try to use the same LUT either in camera or in post production?

Well, if you expose 1.5 stops brighter on the log side, middle grey becomes around 54% and white becomes around 74%. Skin tones, which sit half way between middle grey and white, will be around 64% on the LUT’s input. That’s going to cause a problem! The LUT considers anything brighter than 61% on its input to be a highlight, and it compresses anything brighter than 61%. As a result, on the output of your LUT your skin tones will not only be bright, they will be compressed and flat looking. This makes them hard to grade, which is why, if you are shooting a bit brighter, it is much, much easier to grade your footage if your LUTs have offsets to allow for this overexposure.
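A toy piecewise mapping built from the figures quoted above (41% in to 43% out, 61% in to 85% out, highlights squeezed into 85-109%) shows why the over-bright skin tones land in the compressed region. These are illustrative numbers only, not Sony’s actual LUT maths, and the assumed 94% top of the input range is a guess for the example.

```python
def toy_709_800_lut(slog3_level: float) -> float:
    """Toy piecewise approximation of the behaviour described above:
    linear below the 61% 'white' input level, with everything brighter
    squeezed into the 85-109% output range. Illustrative only."""
    if slog3_level <= 61.0:
        # Map 41% -> 43% and 61% -> 85% linearly.
        return 43.0 + (slog3_level - 41.0) * (85.0 - 43.0) / (61.0 - 41.0)
    # Highlights: compress everything above 61% into 85-109%.
    highlight_span = 94.0 - 61.0  # assumed top of the S-Log3 input range
    return 85.0 + (slog3_level - 61.0) * (109.0 - 85.0) / highlight_span

# Skin exposed at the recommended level sits on the LUT's linear part...
normal_skin = toy_709_800_lut(51.0)
# ...but skin from a +1.5 stop exposure (~64%) lands in the knee,
# where a big input change produces only a small output change.
bright_skin = toy_709_800_lut(64.0)
```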

If the camera has an EI mode (like the FS7, F5, F55 etc), the EI mode offsets the LUT’s input so you don’t see this problem in camera. But there are other problems you can encounter if you are not careful, like unintentional overexposure when using the Sony LC709 series of LUTs.

Sony’s 709(800) LUT closely matches the gamma of most normal monitors and viewfinders, so 709(800) will deliver the correct contrast, i.e. contrast that matches the scene you are shooting, plus it will give conventional TV brightness levels when viewed on standard monitors or viewfinders.

If you use any of the LC709 LUTs you will have a mismatch between the LUT’s gamma and the monitor’s gamma, so the images will show lower contrast and the levels will be lower than conventional TV levels when exposed correctly. LC709 stands for low contrast gamma with 709 color primaries – it is not 709 gamma!

Sony’s LC709 Type A LUT is very popular as it mimics the way an Arri Alexa might look. That’s fine but you also need to be aware that the correct exposure levels for this non-standard LC gamma are middle grey at around 41% and white at 70%.

An easy trap to fall into is to set the camera to a low EI to gain a brighter log exposure, then use one of the LC709 LUTs and try to eyeball the exposure. Because the LC709 LUTs are darker and flatter it’s harder to eyeball the exposure, and often people will expose them as they would regular 709. This then results in a double over exposure: bright because of the intentional use of the lower EI, but even brighter because the LUT has been exposed at or close to conventional 709 brightness. If you were to mistakenly expose the LC709TypeA LUT with skin tones at 70%, white at 90% etc, then that will add almost 2 stops to the log exposure on top of any EI offset.

Above middle grey with 709(800), a 1 stop exposure change results in a 20% change in brightness; with LC709TypeA the same exposure change gives just over a 10% change. As a result, over or under exposure is much less obvious and harder to measure or judge by eye with LC709. The camera’s default zebra settings, for example, have a 10% window. So with LC709 you could easily be a whole stop out, while with 709(800) only half a stop.
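The worst-case exposure error from those figures is just the zebra window width divided by the gamma's per-stop slope. A quick sketch (function name is my own, using the approximate slopes quoted above):

```python
def max_exposure_error_stops(zebra_window_percent: float,
                             percent_per_stop: float) -> float:
    """Worst-case exposure error (in stops) when judging exposure with
    a zebra window of the given width on a gamma whose brightness
    changes by `percent_per_stop` for each stop of exposure."""
    return zebra_window_percent / percent_per_stop

# Figures from the text: a 10% zebra window on 709(800) (~20%/stop)
# versus the flatter LC709TypeA (~10%/stop above middle grey).
err_709 = max_exposure_error_stops(10, 20)    # half a stop
err_lc709 = max_exposure_error_stops(10, 10)  # a whole stop
```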

Personally, when shooting, I don’t really care too much about how the image looks in terms of brightness and contrast. I’m more interested in using the built-in LUTs to ensure my exposure is where I want it to be. So for exposure assessment I prefer to use the LUT that is going to show the biggest change when my exposure is not where it should be. For the “look” I will feed a separate monitor and apply any stylised looks there. To understand how my highlights and shadows, above and below the LUT’s range, are being captured I use the Hi/Low Key function.

If you are someone who creates your own LUTs, an important consideration is this: if you are shooting test shots and then grading those test shots to produce a LUT, it’s really, really important that the test shots are very accurately exposed.

You have 2 choices here. You can either expose at the levels recommended by Sony and then use EI to add any offsets or you can offset the exposure in camera and not use EI but instead rely on the offset that will end up in the LUT. What is never a good idea is to add an EI offset to a LUT that was also offset.

More on frame rate choices for today’s video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let’s look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/Pal area these are going to be frame rates you will be familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone these are not good frame rates to use.

Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.

The odd 25p cadence is most noticeable on smooth pans and tilts where the pan speed will appear to jump slightly as the cadence flips between the 10 frame x3 and 15 frame x 2 segments. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn’t exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
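The uneven 25p cadence described above can be computed directly: map each of the 60 display refreshes in a second to the 25p frame that is current at that instant, and count the repeats. This is a simplified model (it assumes the display simply shows the current frame at each refresh), but it reproduces the 15-frames-twice / 10-frames-three-times pattern.

```python
from collections import Counter

def repeat_counts(source_fps: int, display_hz: int):
    """For each source frame in one second, count how many display
    refreshes show it when each refresh picks the currently-due frame."""
    counts = Counter()
    for tick in range(display_hz):
        frame = (tick * source_fps) // display_hz
        counts[frame] += 1
    return counts

counts = repeat_counts(25, 60)
shown_twice = sum(1 for c in counts.values() if c == 2)
shown_three_times = sum(1 for c in counts.values() if c == 3)
```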
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p  to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p when shown on a 60Hz screen are shown by using 2:3 cadence where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way any other movie or feature film is shown on TV and it doesn’t look too bad.
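The 2:3 cadence described above is easy to sketch: alternate repeating each frame 2 and 3 times, which turns 24 frames into exactly 60 displayed images per second (function name is my own).

```python
def pulldown_2_3(frames):
    """Apply 2:3 cadence: repeat successive frames 2, 3, 2, 3, ... times,
    as used to show 24p material on a 60Hz display."""
    out = []
    for i, frame in enumerate(frames):
        out.extend([frame] * (2 if i % 2 == 0 else 3))
    return out

# 24 frames -> 12 shown twice + 12 shown three times = 60 displays.
displayed = pulldown_2_3(list(range(24)))
```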
30p or 29.97p footage will look smoother than 24p, as all you need to do is show each frame twice to get to 60Hz; there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality. This means larger files and possibly slower downloads, which must be considered. 30p is a reasonable middle ground choice for a lot of productions: not as juddery as 24p but not as smooth as 60p.
24p or 23.98p for “The Film Look”.
Generally if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of modern TVs feature motion compensation processes designed to eliminate judder. You might see things in the TV’s literature such as “100Hz smooth motion” or similar. If this function is enabled in the TV it will take any low frame rate footage such as 24p or 25p and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film like motion, as the TV will do its best to smooth it out! Meanwhile someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it’s viewed.
Most TVs that have this feature will disable it when the footage is 60p, as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then export a 60p file, as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

Low Light Performance – It’s all about the lens!

This post follows on from my previous post about sensors and was inspired by one of the questions asked following that post.

While sensor size does have some effect on low light performance, the biggest single factor is really the lens. It isn’t really the bigger sensor that has revolutionised low light performance; it’s the lenses we can now use that have changed our ability to shoot in low light. When we used to use 1/2″ or 2/3″ 3-chip cameras for most high end video production, the most common lenses were wide range zoom lenses. These were typically f1.8 – reasonably fast lenses.

But the sensors were really small, so the pixels on those sensors were also relatively small, so having a fast lens was important.

Now we have larger sensors; super 35mm sensors are now commonplace. These larger sensors often have larger pixels than the old 1/2″ or 2/3″ sensors, even though we are now cramming more pixels onto them. Bigger pixels do help increase sensitivity, but really the biggest change has been the types of lenses we use.

Let me explain:

The laws of physics play a large part in all of this.
We start off with the light in our scene which passes through a lens.

If we take a zoom lens of a certain physical size, with a fixed size front element and as a result a fixed light gathering ability – for example a typical 2/3″ ENG zoom – you have a certain amount of light coming into the lens.
When the image projected by the rear of the lens is small it will be relatively bright, and as a result you get a large effective aperture.

Increase the size of the sensor and you have to increase the size of the projected image. So if we were to modify the rear elements of this same lens to create a larger projected image (increase the image circle) so that it covers a super 35mm sensor, what light we have is spread out “thinner” and as a result the projected image is dimmer. So the effective aperture of the same lens becomes smaller, and because the image is larger the focus is more critical and as a result the DoF is narrower.

But if we keep the sensor resolution the same, a bigger sensor will have bigger pixels that can capture more light, and this makes up for the dimmer image coming from the lens.

So where a small sensor camera (1/2″, 2/3″) will typically have an f1.8 zoom lens, when you scale up to a s35mm sensor by altering the projected image from the lens, the same lens becomes the equivalent of around f5.6. But because the pixel size is much bigger for like-for-like resolution, the large sensor will be 2 to 3 stops more sensitive. So the low light performance is almost exactly the same, the DoF remains the same and the field of view remains the same (the sensor is larger, so DoF decreases, but the aperture becomes smaller, so DoF increases again back to where we started). Basically it’s all governed by how much light the lens can capture and pass through to the sensor.
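The like-for-like scaling above can be sketched numerically: the effective f-number scales with the crop factor (linear size ratio between the sensors), while pixel area, and hence sensitivity at equal resolution, scales with the crop factor squared. The ~3x factor between 2/3″ and super 35 is an approximation used for the example.

```python
import math

def equivalent_f_number(f_number: float, crop_factor: float) -> float:
    """Effective f-number when the same light is spread over a sensor
    `crop_factor` times larger in linear dimension."""
    return f_number * crop_factor

def pixel_sensitivity_gain_stops(crop_factor: float) -> float:
    """Sensitivity gained from bigger pixels at like-for-like resolution:
    pixel area (and light gathered per pixel) scales with crop_factor
    squared, i.e. 2 * log2(crop_factor) stops."""
    return 2 * math.log2(crop_factor)

# Scaling a 2/3" f1.8 zoom up to super 35 (roughly a 3x size difference):
f_equiv = equivalent_f_number(1.8, 3.0)   # ~f5.4, close to the f5.6 quoted
gain = pixel_sensitivity_gain_stops(3.0)  # ~3.2 stops recovered by pixels
```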

It’s actually the use of prime lenses, which are much more efficient at capturing light, that has revolutionised low light shooting, as the simplicity of a prime compared to a zoom makes fast lenses for large sensors affordable. When we moved to sensors much closer in size to those used in stills cameras, the range and choice of affordable lenses we could use increased dramatically. We were no longer restricted to expensive zooms designed specifically for video cameras.

Going the other way: if you were to take one of today’s fast primes, like a common and normally quite affordable 50mm f1.4, and build an optical adapter of the “speedbooster” type so you could use it on a 2/3″ sensor, you would end up with the equivalent of a 10mm f0.5 lens that would turn that 2/3″ camera into a great low light system with performance similar to that of a s35mm camera with a 50mm f1.4.

Adjusting the Color Matrix

Every now and again I get asked how to adjust the color matrix in a video camera. Back in 2009 I made a video on how to adjust the color matrix in Sony’s EX series of cameras. This video is just as relevant today as it was then; the basic principles have not changed.

The exact menu settings and menu layout may be a little different in the latest cameras, but the matrix settings (R-G, G-R etc) have exactly the same effect in the latest cameras that provide matrix adjustments (FS7, F5, F55 and most of the shoulder mount and other broadcast cameras). So if you want a better understanding of how these settings and adjustments work, take a look at the video.

I’ll warn you now that adjusting the color matrix is not easy as each setting interacts with the others. So creating a specific look via the matrix is not easy and requires a fair bit of patience and a lot of fiddling and testing to get it just right.
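As a rough illustration of why the settings interact: a colour matrix is essentially a 3x3 mix of the R, G and B channels, where the diagonal terms pass each channel through and the off-diagonal terms (the R-G, G-R style coefficients) mix the other channels in. The mapping of Sony's menu values to actual coefficients is not shown here; this is only a sketch of the principle.

```python
def apply_matrix(rgb, m):
    """Apply a 3x3 colour matrix to an (r, g, b) pixel. Each output
    channel is a weighted mix of all three input channels, which is
    why adjusting one coefficient shifts many colours at once."""
    r, g, b = rgb
    return (
        m[0][0] * r + m[0][1] * g + m[0][2] * b,
        m[1][0] * r + m[1][1] * g + m[1][2] * b,
        m[2][0] * r + m[2][1] * g + m[2][2] * b,
    )

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Mixing a little green into red warms up reds, but it also changes
# every colour that contains green - hence all the interaction.
warm_red = [[1, 0.1, 0], [0, 1, 0], [0, 0, 1]]
unchanged = apply_matrix((0.5, 0.5, 0.2), identity)
shifted = apply_matrix((0.5, 0.5, 0.2), warm_red)
```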

Why do we strive to mimic film? What is the film look anyway?

 

Please don’t take this post the wrong way. I DO understand why some people like to try and emulate film. I understand that film has a “look”. I also understand that for many people that look is the holy grail of film production. I’m simply looking at why we do this and am throwing the big question out there which is “is it the right thing to do”? I welcome your comments on this subject as it’s an interesting one worthy of discussion.

In recent years, with the explosion of large sensor cameras with great dynamic range, it has become very common practice to take the images these cameras capture and apply a grade or LUT that mimics the look of many of today’s major movies. This is often simply referred to as the “film look”.

This look seems to be becoming more and more extreme as creators attempt to make their film more film like than the one before, leading to a situation where the look becomes very distinct, as opposed to just a trait of the capture medium. A common technique is the “teal and orange” look, where the overall image is tinted teal and then skin tones and other similar tones are made slightly orange. This is done to create colour contrast between the faces of the cast and the background, as teal and orange are on opposite sides of the colour wheel.

Another variation of the “film look” is the flat look. I don’t really know where this look came from, as it’s not really very film like at all. It probably comes from shooting with a log gamma curve, which results in a flat, washed out looking image when viewed on a conventional monitor. Then, because shooting on log is “cool”, much of the flatness is left in the image in the grade, because it looks different to regular TV (or it may simply be that it’s easier to create a flat look than a good looking high contrast look). Later in the article I have a nice comparison of these two types of “film look”.

Not Like TV!

Not looking like TV or video may be one of the biggest drivers for the “film look”. We watch TV day in, day out. Well produced TV will have accurate colours, natural contrast (over a limited range at least) and, if the TV is set up correctly, should be pretty true to life. Of course there are exceptions to this, like many daytime TV or game shows where the saturation and brightness is cranked up to make the programmes vibrant and vivid. But the aim of most TV shows is to look true to life. Perhaps this is one of the drivers to make films look different, so that they are not true to life – more like a slightly abstract painting or other work of art. Colour and contrast can help set up different moods, dull and grey for sadness, bright and colourful for happy scenes etc, but this should be separate from the overall look applied to a film.

Another aspect of the TV look comes from the fact that most TV viewing takes place in a normal room where light levels are not controlled. As a result bright pictures are normally needed, especially for daytime TV shows.

But What Does Film Look Like?

But what does film look like? As some of you will know I travel a lot and spend a lot of time on airplanes. I like to watch a film or 2 on longer flights and recently I’ve been watching some older films that were shot on film and probably didn’t have any of the grading or other extensive manipulation processes that most modern movies go through.

Let’s look at a few frames from some of those movies shot on film and see what they look like.

Lawrence of Arabia.

The all-time classic Lawrence of Arabia. This film is surprisingly colourful. Reds, blues and yellows are all well saturated. The film is high contrast. That is, it has very dark blacks, not crushed, but deep and full of subtle textures. Skin tones are around 55 IRE and perhaps very slightly skewed towards brown/red, but then the cast are all rather sun-tanned. I wouldn’t call the skin tones orange though. Diffuse whites sit typically around 80 IRE and they are white, not tinted or coloured.

Braveheart.

When I watched Braveheart, one of the things that stood out to me was how green the foliage and grass were. The strong greens really stand out in this movie compared to more modern films. Overall it’s quite dark; skin tones are often around 45 IRE and rarely more than 55 IRE, very slightly warm/brown looking, but not orange. Again it’s well saturated and high contrast with deep blacks. Most scenes have quite low peak and average brightness levels. It’s quite hard to watch this film in a bright room on a conventional TV, but it looks fantastic in a darkened room.

Raiders Of The Lost Ark

Raiders of the Lost Ark does show some of the attributes often used for the modern film look. Skin tones are warm and have a slight orange tint and overall the movie is very warm looking. A lot of the sets use warm colours, with browns and reds being prominent. Colours are well saturated. Again we have high contrast with deep blacks and those much lower than TV skin tones, typically 50-55 IRE in Raiders. Look at the foliage and plants though; they are close to what you might call TV greens, i.e. realistic shades of green.

A key thing I noticed in all of these (and other) older movies is that overall the images are darker than we would use for daytime TV. Skin tones in movies seem to sit around 55 IRE. Compare that to the typical use of 70% zebras for faces on TV. Whites are also generally lower, with diffuse white often sitting at around 75-80%. One important consideration is that films are designed to be shown in dark cinema theatres, where white at 75% looks pretty bright. Compare that to watching TV in a bright living room where, to make white look bright, you need it as bright as you can get. Having diffuse whites that bit lower in the display range leaves a little more room to separate highlights from whites, giving the impression of a greater dynamic range. It also brings the mid range down a bit so the shadows look darker without having to crush them.
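To get a feel for how big the gap between those levels is, here is a rough sketch in Python. It assumes a simple 2.4 power-law display gamma as a stand-in for the real display EOTF (the function names and the simplification are mine, not a real measurement), and estimates how many stops brighter a 70 IRE skin tone appears than a 55 IRE one.

```python
import math

def ire_to_linear(ire, display_gamma=2.4):
    """Convert a display level in IRE (0-100) to approximate linear
    light, assuming a simple power-law display gamma. This ignores
    the camera's actual transfer curve - rough illustration only."""
    return (ire / 100.0) ** display_gamma

def stops_between(ire_a, ire_b):
    """Approximate exposure difference in stops between two display
    levels, using the simplified conversion above."""
    return math.log2(ire_to_linear(ire_a) / ire_to_linear(ire_b))

# Typical TV skin tone (~70 IRE) vs typical film skin tone (~55 IRE):
print(f"{stops_between(70, 55):.2f} stops")
```

Under this simplified model the difference works out to a bit under one stop, which matches the general point: film-style levels sit noticeably, but not drastically, below TV levels.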

Side Note: Sony’s Hypergammas and Cinegammas are supposed to be exposed so that white is around 70-75% with skin tones around 55-60%. If used like this with a suitable colour matrix such as “cinema” they can look quite film-like.

If we look at some recent movies the look can be very different.

The Revenant

The Revenant is a gritty film and it has a gritty look. But compare it to Braveheart and it’s very different. We have the same much lower skin tone and diffuse white levels, but where has the green gone? And the sky is very pale. The sky and trees are all tinted slightly towards teal and desaturated. Overall there is only a very small colour range in the movie. Nothing like the 70mm film of Lawrence of Arabia or the 35mm film of Braveheart.

Dead Men Tell No Tales.

In the latest instalment of the Pirates of the Caribbean franchise the images are very “brown”. Notice how even the whites of the ladies’ dresses and the soldiers’ uniforms are slightly brown. The sky is slightly grey (I’m sure the sky was much bluer than this). The palm fronds look browner than green and Jack Sparrow looks like he’s been using too much fake tan, as his face is borderline orange (and almost always quite dark).

Wonder Woman.

Wonder Woman is another very brown movie. In this frame we can see that the sky is quite brown. Meanwhile the grass is pushed towards teal and desaturated; it certainly isn’t the colour of real grass. Overall colours are subdued, with the exception of skin tones.

These are fairly typical of most modern movies. Colours are generally quite subdued, especially greens and blues. The sky is rarely a vibrant blue, grass is rarely a grassy green. Skin tones tend to be very slightly orange and around 50-60 IRE. Blacks are almost always deep and the images contrasty. Whites are rarely actually white; they tend to be tinted either slightly brown or slightly teal. Steel blues and warm browns are favoured hues. These are very different looking images to the movies shot on film that didn’t go through extensive post production manipulation.

So the film look isn’t really about making it look like it was shot on film. It’s a stylised look that has become stronger and stronger in recent years, with most movies having elements of it. So in creating the “film look” we are not really mimicking film, but copying a now almost standard colour grading recipe that has some film-style traits.

But Is It A Good Thing?

In most cases these are not unpleasant looks, and for some productions the look can add to the film, although sometimes it can be taken to noticeable and objectionable extremes. However, we now have cameras that can capture huge colour ranges and the display technologies to show them. Yet we often choose to deliberately limit what we use, and very often distort the colours, in our quest for the “film look”.

HDR TVs with Rec 2020 colour can show both a greater dynamic range and a greater colour range than we have ever seen before. Yet we are not making use of this range, in particular the colour range, except in some special cases such as TV commercials and high-end wildlife films like Planet Earth II.

This TV commercial for TUI has some wonderful vibrant colours that are not restricted to just browns and teals, yet it still looks very film-like. It does have an overall warm tint, but the other colours are allowed to punch through. It feels like the big budget production that it clearly was, without having to resort to the now de facto restrictive film-look colour palette. Why can’t feature films look like this? Why do they need to be dull with a limited colour range? Why do we strive to deliberately restrict our colour palette in the name of fashion?

What’s even more interesting is what was done for the behind the scenes film for the TUI advert…..

The producers of the BTS film decided to go with an extremely flat, washed-out look, another form of the modern “film look” that really couldn’t be further from film. When a typical viewer watches this, do they get it in the same way as those of us who work in the industry do? Do they understand the significance of the washed-out, flat, low-contrast pictures, or do they just see weird-looking milky pictures that lack colour and have odd skin tones? The BTS film just looks wrong to me. It looks like it was shot with log and not graded. Personally, I don’t think it looks cool or stylish; it just looks wrong and cheap compared to the lush imagery in the actual advert (perhaps that was the intention).

I often see people looking for a film look LUT. Often they want to mimic a particular film. That’s fine, it’s up to them. But if everyone starts to home in on one particular look or style then the films we watch will all look the same. That’s not what I want. I want lush, rich colours where appropriate. Then I might want to see a subdued look in a period piece or a vivid look for a 70’s film. Within the same movie colour can be used to differentiate between different parts of the story. Take Woody Allen’s Cafe Society, shot by Vittorio Storaro, for example. The New York scenes are grey and moody, while the scenes in LA that portray a fresh start are vibrant and vivid. This, I believe, is what matters: using colour and contrast to help tell the story.

Our modern cameras give us an amazing palette to work with. We have tools such as DaVinci Resolve to manipulate those colours with relative ease. I believe we should be more adventurous with our use of colour. Reducing exposure levels a little compared to the nominal TV and video levels (skin tones at 70%, diffuse whites at 85-90%) helps replicate the film look, and also leaves a bit more space in the highlight range to separate highlights from whites, which really helps give the impression of a more contrasty image. Blacks should be black, not washed out, but they shouldn’t be crushed either.

Above all else learn to create different styles. Don’t be afraid of using colour to tell your story and remember that real film isn’t just brown and teal, it’s actually quite colourful. Great artists tend to stand out when their works are different, not when they are the same as everyone else.


The Dangers Of Hidden Moisture.

Electronics and water are two things that just don’t match. We all know this and we all know that dropping a camera into a river or the sea probably isn’t going to do it a great deal of good. But one of the very real risks with any piece of electronics is hidden moisture, moisture you can’t see.

Most modern high definition or 4K pro video cameras have fans and cooling systems designed to keep them operating for long periods. But these cooling systems mean that the camera will be drawing air from the outside world into the camera’s interior. Normally this is perfectly fine, but if you are operating in rain or a very wet environment, such as high humidity, spray, mist or fog, a lot of moisture will circulate through the camera and this can cause problems.

If the camera is warm relative to the ambient temperature then generally humid air will simply pass through the camera (or other electronics) without issue. But if the camera is colder than the air’s dew point then some of the moisture in the air will condense on the camera’s parts and turn into water droplets.
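The dew point itself is easy to estimate. This is a small Python sketch using the well-known Magnus approximation (the function names and the example temperatures are mine, chosen purely for illustration); it checks whether a camera body is colder than the dew point of the surrounding air, i.e. whether condensation will form on it.

```python
import math

def dew_point_c(air_temp_c, rel_humidity):
    """Approximate dew point in Celsius using the Magnus formula.
    rel_humidity is a percentage (0-100)."""
    a, b = 17.62, 243.12  # Magnus coefficients, valid roughly -45..60 C
    gamma = (a * air_temp_c) / (b + air_temp_c) + math.log(rel_humidity / 100.0)
    return (b * gamma) / (a - gamma)

def condensation_risk(camera_temp_c, air_temp_c, rel_humidity):
    """True if the camera body is colder than the air's dew point,
    meaning moisture will condense on (and inside) it."""
    return camera_temp_c < dew_point_c(air_temp_c, rel_humidity)

# A camera at 18 C taken from an air-conditioned car into 30 C air
# at 70% relative humidity - the dew point is around 24 C, so the
# cold camera will fog up:
print(condensation_risk(18.0, 30.0, 70.0))  # True
```

This also shows why letting the camera warm up (in a sealed bag) works: once the body temperature rises above the dew point, the risk goes away.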

A typical dangerous scenario is having the camera in a nice cool air conditioned car or building and then taking it outside to shoot on a warm day. As the warm air hits the slightly colder camera parts, moisture will form, both on the outside and the inside of the camera’s body.

Moisture on the outside of the camera is normally obvious. It also tends to dry off quite quickly, but moisture inside the camera can’t be seen; you have no way of knowing whether it’s there or not. If you only use the camera for a short period the moisture won’t dry out, and once the fans shut down the camera’s interior is no longer ventilated and the moisture stays trapped inside.

Another damaging scenario is a camera that’s been splashed with water; maybe you got caught in an unexpected rain shower. Water will find its way into the smallest of holes and gaps through capillary action, and a teeny, tiny droplet of water will stay there once it gets inside. Get the camera wet a couple of times and that moisture can start to build up, and it really doesn’t take a lot to do some serious damage. Many of the components in modern cameras are the size of pin heads. Rain water, sea water etc contain chemicals that can react with the materials used in a camera’s construction, especially if electricity is passing through the components or the water, and before you know it the camera stops working due to corrosion from water ingress.

Storing your delicate electronics inside a nice waterproof flight case such as a Pelicase (or any other similar brand) might seem like a good idea, as these cases are waterproof. But a case that won’t let water in also won’t let water and moisture out. Put a camera that is damp inside a waterproof case and it will stay damp. It will never dry out. All that moisture is going to slowly eat away at the metals used in a lightweight camera body and some of the delicate electronic components. Over time this gets worse and worse until eventually the camera stops working.

So What Should You Do?

Try to avoid getting the camera wet. Always use a rain cover if you are using a camera in the rain, near the sea or in misty, foggy weather. Just because you can’t see water flowing off your camera it doesn’t mean it’s safe. Try to avoid taking a cold camera from inside an air conditioned office or car into a warmer environment. If you need to do this a lot, consider putting the camera in a waterproof bag (a bin bag will do) before taking it into the warmer environment. Then allow the camera to warm up in the bag before you start to use it. If driving around from location to location, consider using less air conditioning so the car isn’t so cold inside.

Don’t store or put away a damp camera. Always, always thoroughly dry out any camera before putting it away. Consider warming it up and drying it with a hairdryer on a gentle/low heat setting (never let the camera get too hot to handle). Blow warm, dry air gently into any vents to ensure it circulates inside and removes any internal moisture. Leave the camera overnight in a warm, dry place with any flaps or covers open to allow it to dry out thoroughly.

If you know your camera is wet then turn it off. Remove the battery and leave it to dry out in a warm place for 24 hours. If it got really wet, consider taking it to a dealer or engineer to have it opened up to make sure it’s dry inside before applying any power.

If you store your kit in waterproof cases, leave the lids open to allow air to circulate and prevent moisture building up inside the cases. Use Silica Gel sachets inside the cases to absorb any unwanted moisture.

If you live or work in a warm, humid part of the world it’s tough. When I go storm chasing, going from inside the car out into the warm to shoot is not healthy for the camera. So at the end of each day I take extra care to make sure the camera is dry. Not just free of any obvious moisture on the outside, but dry on the inside too. This normally means warming it up a little (not hot, just warm). Again a hairdryer is useful, or leave the camera powered up for a couple of hours in an air conditioned room (good quality aircon should mean the air in the room is dry). I keep silica gel sachets in my camera bags to help absorb any surplus moisture. Silica gel sachets should be baked in an oven periodically to dry them out and refresh them.

Fogged Up Lens?

Another symptom of unwanted moisture is a fogged-up lens. If the lens is fogged up then there will almost certainly be moisture elsewhere. One thing that sometimes helps (other than a hairdryer) is to zoom in and out a lot if it’s a zoom lens, or rack the focus back and forth. Moving the lens elements backwards and forwards helps to circulate the air inside the lens and can speed up the time it takes to dry out.