Category Archives: cinematography

What Benefits Do I Gain By Using CineEI?

This is a question that comes up a lot, especially from those migrating from a camera without a CineEI mode to one that has it. It perhaps isn’t obvious why you would want to use a shooting mode that has no way of adding gain to the recordings.

If you use the CineEI mode to shoot S-Log3 at the base ISO, with no offsets or anything else, there is very little difference between what you record in Custom mode at the base ISO and in CineEI at the base EI.

But we have to think about what the CineEI mode is all about. It’s all about image quality. You would normally choose to shoot S-Log3 when you want to get the highest possible quality image, and CineEI is all about quality.

The CineEI mode allows you to view your footage via a LUT so that you can get an appreciation of how the footage will look after grading. And because the dynamic range of the LUT is narrower, monitoring and exposing via the LUT makes your exposure more accurate and consistent, because bad exposure looks more obviously bad. This makes grading easier. One of the keys to easy grading is consistent footage; footage where the exposure is shifting or the colours are changing (don’t use ATW with Log!!) can be very hard to grade.

Then once you are comfortable exposing via a LUT you can start to think about using EI offsets to make the LUT brighter or darker. When the LUT is darker you open the aperture or reduce the ND to return the LUT to a normal looking image, and vice versa with a brighter LUT. This changes the brightness of the S-Log3 recordings, and you use this offsetting process to shift the highlight/shadow range, as well as the noise level, to suit the types of scenes you are shooting. Using a low EI (which makes the LUT darker) plus correct LUT exposure (the darker LUT will make you open the aperture to compensate) results in a brighter recording, which improves the shadow details and textures that are recorded and can therefore be seen in the shadow areas. At the same time, however, that brighter exposure reduces the highlight range by a similar amount to the increase in the shadow range. And no matter what the offset, you always record the camera’s full dynamic range.
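
To put some rough numbers on this offsetting process, here is a minimal sketch. It assumes a hypothetical base ISO of 800 (substitute your own camera’s actual base) and simply expresses EI offsets in stops:

```python
import math

BASE_ISO = 800  # hypothetical base ISO - substitute your camera's actual base


def ei_offset_stops(ei, base_iso=BASE_ISO):
    """Offset in stops between the chosen EI and the base ISO.
    A negative result means a darker LUT, so you open up to compensate,
    which gives a brighter S-Log3 recording."""
    return math.log2(ei / base_iso)


for ei in (200, 400, 800, 1600):
    lut = ei_offset_stops(ei)
    rec = -lut  # the recording shifts the opposite way to the LUT
    # + recording stops = more shadow detail, less highlight headroom
    print(f"EI {ei:>4}: LUT {lut:+.1f} stops, recording exposed {rec:+.1f} stops")
```

At EI 400 on an 800 base, for example, the LUT is one stop darker, you open up one stop to compensate, and the recording gains a stop of shadow information at the cost of a stop of highlight headroom – exactly the trade described above.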

I think what people misunderstand about CineEI is that it’s there to allow you to get the best possible, highly controlled images from the camera. Getting the best out of any camera requires appropriate and sufficient light levels. CineEI is not designed or intended to be a replacement for adding gain or shooting at high recording ISOs, where the images will already be compromised by noise and lowered dynamic range.
 
CineEI exists so that when you have enough light to really make the camera perform well you can make those decisions over noise vs highlights vs shadows to get the absolute best “negative”, with consistent and accurate exposure, to take into post production. It is also the only possible way you can shoot when using raw, as raw recordings come straight from the sensor and never have extra gain added in camera.
 
Getting that noise/shadow/highlight balance exactly right, along with good exposure, is far more important than the use of external recorders or fatter codecs. You will only ever fully benefit from higher quality codecs if what you are recording is as good as it can be to start with. The limits of what you can do in post production are tied to image noise no matter what codec or recording format you use. So get that bit right and everything else gets much easier and the end result much better. And that’s what CineEI gives you great control over.
 
When using CineEI, or S-Log3 in general, you need to stop thinking “video camera – slap in a load of gain if it’s dark” and start thinking “film camera – if it’s too dark I need more light”. The whole point of using log is to get the best possible image quality, not shooting with insufficient light and a load of gain and noise. It requires a different approach and a completely different way of thinking, much more in line with the way someone shooting on film would work.

What surprises me is the eagerness to adopt shutter angles and ISO ratings for electronic video cameras because they sound cool, but the lack of desire to adopt a film-style approach to exposure based on getting the very best from the sensor. In reality a video sensor is the equivalent of a single-sensitivity film stock. When a camera has dual ISO it is like having a camera that takes two different film stocks. Adding gain or raising the ISO away from the base sensitivity in Custom mode is a big compromise that can never be undone. It adds noise and decreases the dynamic range. Sometimes it is necessary, but don’t confuse that necessity with getting the very best that you can from the camera.

For more information on CineEI see:

Using CineEI with the FX6  
 
 

New Addition To Sony’s Cinema Line On The Way

In some regards this is already old news. I’m under NDA, so there are limits to what I can, or should, write. Hopefully I won’t get into too much trouble if I point out that there are already a lot of rumours circulating that a new camera called the FX3 might be about to be launched. The official line is that there is going to be an announcement on the 23rd of February, just a week from now. So why not just chill out for a week – then you will be able to get the full facts about whatever it is that’s going to be added to the Cinema Line.

Aliens Might Think The Earth Is Perpetually Dark!

If you were an alien on another planet and had access to nothing but all the latest camera demo or “film-maker” reels on YouTube or Vimeo, you would quite possibly believe that much of planet Earth is in perpetual darkness.

All you see are clips shot at night or in blacked-out studios, often with very little dynamic range, often incredibly (excessively?) dark. I keep having to check that the brightness of my monitor is set correctly. Even daytime interiors always seem to be in rooms that are 90% dark, with just a tiny, low contrast pool of daylight from a window filling one corner – and even the light coming through the window is dim and dull.

I recently viewed a clip that was supposed to show the benefits of raw but contained nothing but low dynamic range shots where 70% of each frame was nothing but black. Sadly there were no deep shadow details, just blackness, some very dark faces, and where there were highlights they were clipped. It was impossible to tell anything about the format being used.

The default showreel or demo shot is now a very dark space with a person in 3/4 profile, lit very dimly and low key by a large soft source. Throw in a shiny car with some specular highlights or a few dim practical lights in the background for extra brownie points.

Let me let you in on a little secret – it’s really, really easy to make black frames. Want a little pool of light? Add a light to your mostly black frame. It’s really easy to shoot underexposed and as a result not have any issues with dynamic range. It’s really easy to shoot a face so dark you can barely see the person’s eyes and then not have a problem with shiny skin.

But try shooting someone at a desk in a bright office and make that look really great. Try shooting a proper daytime living room scene and make that look flawless. A summer picnic on a beach under a brilliant blue sky, perhaps. These are all challenging to do very well, for both the DoP and the camera.

We have these wonderful cameras with huge dynamic ranges that work really well in daylight too. But we seem to be losing the ability to shoot anything other than shots with very low average brightness and low dynamic range, shots that depend on coloured or tinted lighting to provide some interest in what would otherwise be very, very boring images. Where are all the challenging shots in difficult lighting? Where are the bright, vibrant shots where we can see the dynamic range, resolution and natural colour palettes that separate a good camera from a great camera?

Dark is getting very boring. 

Sony Launches Airpeak Drone – Designed to carry Alpha sized cameras.

Sony Airpeak Drone

 

Sony has launched an entirely new division called Airpeak. Airpeak has produced a large drone that can carry an Alpha-sized camera – Sony claims it is the smallest drone capable of doing so. It’s unknown at this time whether the Airpeak division will focus purely on larger drones capable of carrying non-integrated cameras or whether it will also produce smaller drones with integral cameras. It would certainly make sense to leverage Sony’s sensor expertise by creating dedicated cameras for drones, and then drones to carry those cameras.

The drone market is going to be a tough one to make inroads into. There are already a couple of very well regarded manufacturers making some great drones, such as the DJI Inspire or Mavic Pro. Most of these are small and cannot carry larger external cameras, yet the cameras they are equipped with can deliver very high quality images – and they continue to get better and better. The use of larger drones for video applications is more specialist, but globally it is a large market. Whether Sony can compete in this more specialist area of larger drones that carry heavier payloads remains to be seen. I hope they succeed.

One thing I intend to do in the next few years, as the Sun enters the more active phase of its 11-year solar cycle, is to shoot the Aurora from a drone, and a camera like the A7S III on a larger, stable drone would be perfect. But there is no indication of pricing yet, and a drone of this size won’t be cheap. So unless I decide to do a lot more drone work than I do already, perhaps it will be better to hire someone with the right kit. But that’s not as much fun as doing it yourself!

For more information on Airpeak do take a look at their website. There is already some impressive footage of it being used to shoot a Vision-S car on a test track.

Sony Airpeak Website.

 

Sony FX6 Launch

Sony will launch a new small 4K handheld camcorder, the FX6, on Tuesday the 17th of November.

Sony FX6 4K Camcorder

To find out more about this new and very exciting camcorder you can watch the launch event via Instagram. After the launch event I will be hosting a Q&A on Instagram. I’ve been lucky enough to shoot with the camera and have tested it extensively, so tune in to the Q&A to learn more. There is a lot to like and I am certain this camcorder will prove extremely popular. The Instagram session will be here: https://www.instagram.com/sonyprofilmmaking/

Then on Wednesday the 18th I will be presenting a webinar on the FX6 for Visual Impact in the UK at 11.00 GMT: https://www.visuals.co.uk/events/events.php?event=eid1748059180-892


Then, once the initial launch dust settles, I will bring you more information about this exciting new camera, including tutorials and guides.

Inside the Big Top. A short film from Glastonbury 2019. Shot on Venice.

As there is no Glastonbury Festival this year, the organisers and production company have been releasing some videos from last year. This video was shot mostly on Venice using Cooke 1.8x anamorphics; the non-Venice material is from an FS5. It’s a behind the scenes look at the activities and performances around the Glastonbury Big Top and the Theatre and Circus fields.

 

Are We Missing Problems In Our Footage Because We Don’t Use Viewfinders Anymore?

I see it so many times on various forums and user groups – “I didn’t see it until I looked at it at home and now I find the footage is unusable”.

We all want our footage to be perfect all of the time, but sometimes something trips up the technology we are using, and that can introduce problems into a shot. The problem is that these things are not normal, so we don’t expect them to be there and don’t necessarily look for them. But thinking about it, I believe a lot of it is also because very often the only thing being used to view what is being shot is a tiny LCD screen.

For the first 15 years of my career the only viewfinders available were either a monocular viewfinder with a magnifier or a large studio-style viewfinder (typically 7″). Frankly, if all you are using is a 3.5″ LCD screen, you will miss many things!

I see many forum posts about these missed image issues on my phone, which has a 6″ screen. When I view the small versions of the posted examples I can rarely see the problem, but view it full screen and it becomes obvious. So what hope do you have of picking up these issues on location with a tiny monitor screen, often viewed too closely to be in good focus?

A 20-year-old will typically have a focusing range of around 12 diopters, but by 30 that decreases to about 8, by 40 to about 5, and by 50 to just 1 or 2. What that means (for the average person) is that if you are young enough you might be able to focus on that small LCD when it’s close enough to your eyes for you to see it properly and spot potential problems. But by the time you get to 30, most people won’t be able to focus adequately on a 3.5″ LCD until it’s too far from their eyes to resolve everything it is capable of showing. If you are hand-holding a camera with a 3.5″ screen such that the screen is 30cm or more from your eyes, there is no way you can see critical focus or small image artefacts – the screen is just too small. Plus, most people that don’t have their eyesight tested regularly don’t even realise it is deteriorating until it gets really bad.
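
As a rough back-of-the-envelope sketch, using the ballpark figures above and the standard approximation that your near point in metres is roughly 1 divided by your focusing range in diopters (assuming distance vision corrected to infinity):

```python
# Closest sharp focus distance from focusing (accommodation) range.
# Figures are ballpark only, for a person corrected to focus at infinity.
ages_and_diopters = {20: 12, 30: 8, 40: 5, 50: 1.5}

for age, diopters in ages_and_diopters.items():
    near_point_cm = 100.0 / diopters  # near point in cm ~= 100 / diopters
    print(f"Age {age}: ~{diopters} diopters -> nearest sharp focus ~{near_point_cm:.0f} cm")
```

By 50 those numbers put the nearest sharp focus well beyond the distance at which a 3.5″ screen can be critically judged.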

There are very good reasons why viewfinders have diopters/magnifiers. They allow you to see everything your screen can show, they make the image appear larger, and they keep out unwanted light. When you stop using them you risk missing things that can ruin a shot, whether that’s focus that’s almost but not quite right, something in the background that shouldn’t be there, or some subtle technical issue.

It’s all too easy to remove the magnifier and just shoot with the LCD, trusting that the camera will do what you hope it will. Often it’s the easiest way to shoot; we’ve all been there, I’m sure. BUT easy doesn’t mean best. When you remove the magnifier you are choosing easy shooting over the ability to see issues in your footage before it’s too late to do something about them.

ProRes Raw Now In Adobe Creative Cloud Mac And Windows Versions!

Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continues to push Blackmagic Raw that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.

Are LUTs Killing Creativity And Eroding Skills?

I see this all the time: “which LUT should I use to get this look?” or “I like that, which LUT did you use?”. Don’t get me wrong, I use LUTs and they are a very useful tool, but the now almost default resort to slapping a LUT on log and raw material is killing creativity.

In my distant past I worked in, and helped run, a very well known post production facilities company. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client’s description, create the look they wanted with the controls on the equipment we had. This was a Digibeta tape-to-tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.

But the thing is, we didn’t have LUTs. We had knobs, dials and switches. We had to understand how to use the tools we had to get to where the client wanted to be. As a result every project had a unique look.

Today the software available to us is incredibly powerful and costs a tiny fraction of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects, or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.

You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows – for example FCPX automatically applying a default LUT to log footage. Another is the belief that LUTs are how you grade, with everyone then using the same few LUTs on everything they shoot.

This creates two issues.

1: Everything looks the same – BORING!!!!

2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.

How many of the “slap on a LUT” clan realise that LUTs are camera and exposure specific? How many realise that LUTs can introduce banding and other image artefacts into footage that might otherwise be pristine?
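
To illustrate one way a LUT can introduce banding, here is a toy sketch. It assumes a made-up 1D LUT with 256 entries applied to 10-bit code values using nearest-entry lookup with no interpolation – a deliberately crude pipeline, but the quantisation effect is real:

```python
import numpy as np

# Made-up 1D "look" LUT with 256 entries (a simple gamma curve as the look)
lut = (np.linspace(0.0, 1.0, 256) ** 2.2 * 1023).astype(np.uint16)

codes = np.arange(1024)        # every possible 10-bit code value
out = lut[codes >> 2]          # nearest-entry lookup, no interpolation

print(len(np.unique(codes)))   # 1024 distinct input levels
print(len(np.unique(out)))     # far fewer distinct output levels -> banding
```

Well-implemented LUT processing with interpolation mitigates this, but coarse LUTs and steep curves can still leave the image with fewer effective levels than the source had – which is where the banding comes from.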

If LUTs didn’t exist people would have to learn how to grade. And when I say “grade” I don’t mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation – for example separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently. People would need to learn how to create colour contrast as well as brightness contrast, and how to make highlights roll off in a pleasing way: all the things that go into creating great looking images from log or raw footage.
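
As a toy sketch of that skin tone idea (my own illustration, not how any particular grading tool works internally – it assumes float RGB frames and a very crude hue key, where a real secondary grade would use a much better qualifier):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb


def separate_skin_grade(img):
    """Toy 'secondary' grade: key skin-ish hues and treat them differently
    from the rest of the frame. img: float RGB array, values 0-1."""
    hsv = rgb_to_hsv(img)
    hue = hsv[..., 0] * 360.0
    skin = (hue > 15) & (hue < 50)      # very crude skin tone key
    hsv[..., 1][skin] *= 0.9            # gently soften saturation on skin
    hsv[..., 2][~skin] *= 0.8           # pull everything that isn't skin down
    return hsv_to_rgb(hsv)


frame = np.random.rand(4, 4, 3)          # stand-in for a real frame
print(separate_skin_grade(frame).shape)  # (4, 4, 3)
```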

Then, perhaps, because people are doing their own grading they would start to better understand colour, gamma, contrast and so on. Most importantly, because the look created would be their look, built from scratch, it would be unique. Different projects from different people would actually look different again, instead of each being a clone of someone else’s work.

LUTs are a useful tool, especially on set as an approximation of how something could look. But in post production they restrict creativity, and many people have no idea how to grade or how much they can manipulate their material.

Temporal Aliasing – Beware!

As camera resolutions increase and the amount of detail and texture we can record grows, we need to be more and more mindful of temporal aliasing.

Temporal aliasing occurs when the differences between the frames in a video sequence create undesirable sequences of patterns that move from one frame to the next, often appearing to travel in the opposite direction to any camera movement. The classic example is the wagon wheels going backwards effect often seen in old cowboy movies. The camera’s shutter captures the spokes in a different position in each frame, but the timing of the shutter relative to the position of the spokes makes the wheels appear to go backwards rather than forwards. This was almost impossible to prevent with film cameras that were stuck with a 180 degree shutter, as there was no way to blur the motion of the spokes so that it was contiguous from one frame to the next. A 360 degree shutter would have prevented this problem in most cases, but it’s also reasonable to note that at 24fps a 360 degree shutter would have introduced an excessive amount of motion blur elsewhere.
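
For reference, shutter angle converts to exposure time as the angle divided by 360, divided by the frame rate – a quick sketch:

```python
def shutter_time(fps, angle_deg):
    """Exposure time per frame for a given shutter angle."""
    return (angle_deg / 360.0) / fps


# 24fps at 180 degrees is the classic film look: 1/48th of a second
print(f"24fps, 180deg: 1/{1 / shutter_time(24, 180):.0f}s")  # 1/48s
print(f"24fps, 360deg: 1/{1 / shutter_time(24, 360):.0f}s")  # 1/24s
print(f"50fps, 360deg: 1/{1 / shutter_time(50, 360):.0f}s")  # 1/50s
```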

Another form of temporal aliasing that often occurs is when you have rapidly moving grass, crops, reeds or fine branches. Let me try to explain:

You are shooting a field of wheat and the stalks are very small in the frame, almost too small to discern individually. As the stalks move left, perhaps blown by the wind, each stalk is captured in each frame a little further to the left, perhaps by just a few pixels. But in the video they appear to be going the other way. This is because every stalk looks the same as all the others. In the following frame the original stalk may have moved, say, 6 pixels to the left, but there is now a different stalk just 2 pixels to the right of where the original was. Because both stalks look the same, it appears that the stalk has moved right instead of left. As the wind speed and the movement of the stalks change, they may appear to move randomly left or right, or a combination of both. The image looks very odd, often a jumbled mess, as perhaps the tops of the stalks appear to move one way while the lower parts appear to go the other.
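
Here is a minimal sketch of that wheat example, assuming identical-looking stalks spaced a fixed number of pixels apart (all the numbers are made up for illustration):

```python
def apparent_move(true_move, spacing):
    """Apparent per-frame shift once identical features alias.
    Positive = left, negative = right (matching the example above)."""
    m = true_move % spacing
    return m if m <= spacing / 2 else m - spacing


SPACING = 8.0  # pixels between identical-looking stalks (made up)
for move in (1, 3, 6, 9):  # true pixels moved left per frame
    a = apparent_move(move, SPACING)
    print(f"true move {move}px left -> looks like {abs(a):.0f}px "
          f"{'left' if a >= 0 else 'right'}")
```

With 8 pixel spacing, a true 6 pixel move left looks like a 2 pixel move right – the same reversal as the wagon wheels.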

There is a great example of temporal aliasing in this clip on Pond5: https://www.pond5.com/stock-footage/item/58471251-wagon-wheel-effect-train-tracks-optical-illusion-perception

Notice in the Pond5 clip how it’s not only the railway sleepers that appear to move in the wrong direction or at the wrong speed; the stones between the sleepers also appear to take on a kind of boiling noise.

As with the old movie wagon wheels, one thing that makes this worse is the use of too fast a shutter speed. The more you freeze the motion of the offending objects or textures in each frame, the higher the risk of temporal aliasing with moving textures or patterns. Often a slower shutter speed will introduce enough motion blur that the motion looks normal again. You may need to experiment with different shutter speeds to find the sweet spot where the temporal aliasing goes away or is minimised. If shooting at 50fps or faster, try a 360 degree (1/50th) shutter: by the time you get to a 1/50th shutter, motion is already about as crisp as it needs to be for most types of shots, unless you intend to do some form of frame-by-frame motion analysis.