I’ve covered this before, but as it came up again in an online discussion I thought I would write about it once more. For decades, when I was doing a lot of corporate video work, we shot greenscreen and chroma key with analogue or 8 bit, limited dynamic range, standard definition cameras and generally got great results (it was very common to use a bluescreen, as blue spill doesn’t look as bad on skin tones as green). So now that we have cameras with much greater dynamic range and 10 bit recording, is it better to shoot for greenscreen using S-Log3 (or any other log curve for that matter) or perhaps Rec-709?
Before going further I will say that there is no yes-no, right-wrong answer to this question. I will also add that Rec-709 gets a bad rap because people don’t really understand how gamma curves/transfer functions actually work, and how modern grading software is able to re-map the acquisition transfer function to almost any other transfer function. If you use a colour managed workflow in DaVinci Resolve it is very easy to take a Rec-709 recording and map it to S-Log3 so that you can apply the same grades to the 709 as you would to material originated using S-Log3. Of course the 709 recording may not have as much dynamic range as an S-Log3 recording, but it will “look” more or less the same.
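As a sketch of what such a transfer-function remap actually does, the short Python below linearises a normalised Rec-709 signal with the standard BT.709 OETF inverse and re-encodes it with Sony’s published S-Log3 curve. It deliberately ignores colour gamut conversion, which a real colour managed workflow also handles, so treat it as an illustration rather than a complete pipeline.

```python
import math

def rec709_oetf_inverse(v):
    """Standard ITU-R BT.709 OETF inverted: normalised signal -> scene-linear light."""
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1.0 / 0.45)

def slog3_oetf(x):
    """Sony's published S-Log3 curve: scene-linear reflectance -> 10-bit code value."""
    if x >= 0.01125:
        v = (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    else:
        v = (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0
    return v * 1023.0

def rec709_to_slog3(v709):
    """Map a normalised (0-1) Rec-709 signal to a 10-bit S-Log3 code value."""
    return slog3_oetf(rec709_oetf_inverse(v709))

# Middle grey encodes to ~0.409 in Rec-709 and lands on S-Log3's
# middle grey code value of ~420, as it should.
print(round(rec709_to_slog3(0.409)))  # -> 420
```

Within the 709 recording’s dynamic range limits, footage remapped this way should grade the same as native S-Log3 material, which is the point being made here.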
Coming back to shooting greenscreen and chromakey:
S-Log3: Shoot using 10 bit S-Log3 and you have 791 code values available (95-886) to record 14/15 stops of dynamic range, so on average across the entire curve each stop has around 55 code values. Between middle grey and +2 stops there are approximately 155 code values. This region is important, as it is where the majority of skin tones and the key background are likely to fall.
Rec-709: Shoot using vanilla Rec-709 and you are using 929 code values (90-1019) to record 6/7 stops, so on average across the entire curve each stop has around 125 code values. Between middle grey and +2 stops there are going to be around 340 code values.

That is not an insignificant difference; it’s not far off the difference between shooting with 10 bit or 12 bit. If you were to ask someone whether it is better to shoot using 10 bit or 12 bit, I am quite sure the automatic answer would be 12 bit, because the general consensus is that more bits is always better.

A further consideration is that Sony cameras operate at a lower ISO when shooting with standard gammas, so you will have an improved signal to noise ratio using 709 than when using S-Log3, and this can also make it easier to achieve a good, clean key.

However, you do also need to think about what it is you are shooting and how it will be used. If you are shooting greenscreen in a studio then you should have full control over your lighting, and in most cases 6 or 7 stops is all you need, so Rec-709 should be able to capture everything comfortably. If you are shooting outside with less control over the light, perhaps Rec-709 won’t have sufficient range.

If the background plates have been shot using S-Log3, then some people don’t like keying 709 into S-Log3. However, a colour managed workflow can deal with this very easily. In a workflow where grading is a big part, 709 and S-Log3 should not be thought of as “looks” but simply as transfer functions: maps of which brightness/saturation seen by the camera is recorded at which code value. Handle these transfer functions correctly via a colour managed workflow and both will “look” the same, and both will grade the same within their respective capture limits.

For an easy workflow you might choose to shoot the greenscreen elements using log with the same settings as the plates.
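The S-Log3 side of that arithmetic is easy to verify with Sony’s published formula. This sketch only checks the log figures; the Rec-709 number isn’t reproduced because it depends on the camera’s specific 709 implementation and knee behaviour.

```python
import math

def slog3_cv(x):
    """Sony's published S-Log3 OETF: scene-linear reflectance -> 10-bit code value."""
    if x >= 0.01125:
        return 420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5
    return x * (171.2102946929 - 95.0) / 0.01125 + 95.0

cv_mid = slog3_cv(0.18)        # middle grey: code value 420
cv_plus2 = slog3_cv(0.18 * 4)  # middle grey +2 stops
print(round(cv_plus2 - cv_mid))  # -> 153, close to the ~155 quoted above
```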
There is nothing wrong with this; it works and it is a very commonly used workflow, but it isn’t necessarily always going to be optimum. A lot of people put a lot of emphasis on using raw or greater bit depths to maximise the quality of their keying, but overlook gamma choice altogether, simply because “Rec-709” is almost a dirty word these days. If you have more control, and want absolutely the best possible key, you might be better off using Rec-709, as you will have more data per stop, which makes it easier for the keying software to identify edges, and less noise. If using Rec-709 you want to choose a version of Rec-709 where you can turn off the camera’s knee, as this will prevent the 709 curve from crushing the highlights, which can make them difficult to grade. In a studio situation you shouldn’t need to use a heavy knee.
I suggest you experiment and test for yourself. Not every situation will be the same; sometimes S-Log3 will be the right choice, other times Rec-709.
Before the large sensor revolution, most professional video cameras used 3 sensors, one each for red, green and blue. Each of those sensors normally had as many pixels as the resolution of the recording format, so you had enough pixels in each colour for full resolution in each colour.
Then along came large sensor cameras, where the only way to make it work was by using a single sensor (the optical prism would be too big to accommodate any existing lens system). So now you have to have all your pixels on one sensor, divided up between red, green and blue.
Almost all of the camera manufacturers ignored the inconvenient truth that a colour sensor with 4K of pixels won’t deliver 4K of resolution. We were sold these new 4K cameras, but the 4K doesn’t mean 4K of resolution, it means 4K of pixels. To be fair to the manufacturers, they didn’t claim 4K resolution, but they were also quite happy to let end users think that that’s what the 4K meant.
My reason for writing about this topic again is that I just had someone on my Facebook feed discussing how wonderful it was to be shooting at 6K with a new camera, as this would give lots of space for reframing for 4K.
The nature of what he wrote – “shooting at 6K” – implies shooting at 6K resolution. But he isn’t: his 6K sensor is probably delivering around 4K resolution, and he won’t have any room for reframing if he wants to end up with a 4K resolution final image. Again, in the name of fairness, shooting with 6K of pixels is going to be better than shooting with 4K of pixels if you do choose to reframe. But we really, really need to be careful about how we use terms like 4K or 6K. What do we really mean? What are we really talking about? The more we muddle pixels with resolution, the less clear it will be what we are actually recording, and eventually no one will really understand that the two are different, and that the differences really do matter.
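To put rough numbers on the pixels-versus-resolution gap: the effective luma resolution of a Bayer sensor is commonly estimated at around 70-80% of its photosite count. The 0.7 factor below is a rule-of-thumb assumption, not a measured figure for any specific camera.

```python
# Assumption: a commonly quoted rule-of-thumb debayer factor of ~0.7.
# Real figures depend on the OLPF, the debayer algorithm and the lens.
DEBAYER_FACTOR = 0.7

def effective_resolution(photosites_wide, factor=DEBAYER_FACTOR):
    """Rough estimate of resolved horizontal resolution from photosite count."""
    return photosites_wide * factor

print(effective_resolution(6144))  # a "6K" sensor: ~4300, i.e. roughly 4K of resolution
print(effective_resolution(4096))  # a "4K" sensor: ~2870, well short of true 4K
```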
This is a question that comes up a lot. Especially from those migrating to a camera with a CineEI mode from a camera without one. It perhaps isn’t obvious why you would want to use a shooting mode that has no way of adding gain to the recordings.
If you are using the CineEI mode, shooting S-Log3 at the base ISO with no offsets or anything else, then there is very little difference between what you record in Custom mode at the base ISO and in CineEI at the base EI.
But we have to think about what the CineEI mode is all about: image quality. You would normally choose to shoot S-Log3 when you want to get the highest possible quality image, and CineEI is all about quality.
The CineEI mode allows you to view your footage via a LUT so that you can get an appreciation of how the footage will look after grading. Also, when monitoring and exposing via the LUT, because the dynamic range of the LUT is narrower your exposure will be more accurate and consistent, because bad exposure looks more obviously bad. This makes grading easier. One of the keys to easy grading is consistent footage; footage where the exposure is shifting or the colours are changing (don’t use ATW with log!!) can be very hard to grade.
Then once you are comfortable exposing via a LUT you can start to think about using EI offsets to make the LUT brighter or darker. When the LUT is darker you open the aperture or reduce the ND to return the LUT to a normal looking image, and vice versa with a brighter LUT. This changes the brightness of the S-Log3 recordings, and you use this offsetting process to shift the highlight/shadow range, as well as noise levels, to suit the types of scenes you are shooting. Using a low EI (which makes the LUT darker) plus correct LUT exposure (the darker LUT will make you open the aperture to compensate) will result in a brighter recording, which will improve the shadow details and textures that are recorded and thus can be seen in the shadow areas. At the same time, however, that brighter exposure will reduce the highlight range by a similar amount to the increase in the shadow range. And no matter what the offset, you always record the camera’s full dynamic range.
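The offset arithmetic can be sketched in a couple of lines. The base ISO of 800 is just an example assumption; your camera’s actual base ISO will differ.

```python
import math

BASE_ISO = 800  # assumption: example base ISO, varies by camera and mode

def ei_offset_stops(ei, base_iso=BASE_ISO):
    """How many stops brighter the S-Log3 recording ends up when you rate
    the camera at the given EI and expose the (darker or brighter) LUT
    normally. Positive = brighter recording."""
    return math.log2(base_iso / ei)

offset = ei_offset_stops(400)  # EI 400 on an 800 base -> +1.0 stop
# The darker LUT makes you open up 1 stop, so the log recording is
# 1 stop brighter: roughly 1 stop more usable shadow detail, 1 stop
# less highlight headroom, and the total dynamic range is unchanged.
```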
I think what people misunderstand about CineEI is that it’s there to allow you to get the best possible, highly controlled images from the camera. Getting the best out of any camera requires appropriate and sufficient light levels. CineEI is not designed or intended to be a replacement for adding gain or shooting at high recording ISOs where the images will be already compromised by noise and lowered dynamic range.
CineEI exists so that when you have enough light to really make the camera perform well you can make those decisions over noise v highlights v shadows to get the absolute best “negative” with consistent and accurate exposure to take into post production. It is also the only possible way you can shoot when using raw as raw recordings are straight from the sensor and never have extra gain added in camera.
Getting that noise/shadow/highlight balance exactly right, along with good exposure is far more important than the use of external recorders or fatter codecs. You will only ever really benefit fully from higher quality codecs if what you are recording is as good as it can be to start with. The limits as to what you can do in post production are tied to image noise no matter what codec or recording format you use. So get that bit right and everything else gets much easier and the end result much better. And that’s what CineEI gives you great control over.
When using CineEI, or S-Log3 in general, you need to stop thinking “video camera – slap in a load of gain if it’s dark” and start thinking “film camera – if it’s too dark I need more light”. The whole point of using log is to get the best possible image quality, not shooting with insufficient light and a load of gain and noise. It requires a different approach and a completely different way of thinking, much more in line with the way someone shooting on film would work.
What surprises me is the eagerness to adopt shutter angles and ISO ratings for electronic video cameras because they sound cool but less desire to adopt a film style approach to exposure based on getting the very best from the sensor. In reality a video sensor is the equivalent of a single sensitivity film stock. When a camera has dual ISO then it is like having a camera that takes two different film stocks. Adding gain or raising the ISO away from the base sensitivity in custom mode is a big compromise that can never be undone. It adds noise and decreases the dynamic range. Sometimes it is necessary, but don’t confuse that necessity with getting the very best that you can from the camera.
In some regards this is now already old news. I’m under NDA so there are limits as to what I can, or should, write. Hopefully I won’t get into too much trouble if I point out that there are already a lot of rumours circulating right now that a new camera called the FX3 might be about to be launched. The official line is that there is going to be an announcement on the 23rd of February, just a week from now. So why not just chill out for one week, as then you will be able to get the full facts about whatever it is that’s going to be added to the Cinema Line.
If you were an alien on another planet and had access to nothing but all the latest camera demo or “film-maker” reels on YouTube or Vimeo you would quite possibly believe that much of planet Earth is in perpetual darkness.
All you see is clips shot at night or in blacked out studios. Often with very little dynamic range, often incredibly (excessively?) dark. I keep having to check that the brightness on my monitor is set correctly. Even daytime interiors always seem to be in rooms that are 90% dark with just a tiny, low contrast pool of daylight from a window filling one corner and even the light coming through the window is dim and dull.
I recently viewed a clip that was supposed to show the benefits of raw that contained nothing but low dynamic range shots where 70% of each frame was nothing but black. Sadly there were no deep shadow details, just blackness, some very dark faces and where there were highlights they were clipped. It was impossible to tell anything about the format being used.
The default showreel or demo shot is now a very dark space with a 3/4 profile person, very dimly lit, low key, by a large soft source. Throw in a shiny car with some specular highlights or a few dim practical lights into the background for extra brownie points.
Let me let you in on a little secret – it’s really, really easy to make black frames. Want a little pool of light? Add a light to your mostly black frame. It’s really easy to shoot underexposed and then, as a result, not have any issues with dynamic range. It’s really easy to shoot a face so that it’s so dark you can barely see the person’s eyes, and then not have a problem with shiny skin.
But try shooting someone at a desk in a bright office and make that look really great. Try shooting a proper daytime living room scene and making that look flawless. A summer picnic on a beach with brilliant blue sky perhaps. These are all challenging to do very well for both the DoP and the camera.
We have these wonderful cameras with huge dynamic ranges that work really well in daylight too. But we seem to be losing the ability to shoot anything other than shots that have very low average brightness levels and low dynamic range, and that depend on often coloured or tinted lighting to provide some interest in what would otherwise be very, very boring images. Where are all the challenging shots in difficult lighting? Where are the bright, vibrant shots where we can see the dynamic range, resolution and natural colour palettes that separate a good camera from a great camera?
Sony has launched an entirely new division called Airpeak. Airpeak have produced a large drone that can carry an Alpha sized camera. They claim that this is the smallest drone capable of carrying an Alpha sized camera. It’s unknown at this time whether the Airpeak division will purely focus on larger drones capable of carrying non integrated cameras or whether they will also produce smaller drones with integral cameras. It would certainly make sense to leverage Sony’s sensor expertise by creating dedicated cameras for drones and then drones to carry those cameras.
The drone market is going to be a tough one to make inroads into. There are already a couple of very well regarded drone manufacturers making some great drones, such as the DJI Inspire or Mavic Pro. But most of these are small and cannot carry larger external cameras. However, the cameras that these drones are equipped with can deliver very high quality images – and they continue to get better and better. The use of larger drones for video applications is more specialist, but globally it is a large market. Whether Sony can compete in this more specialist area of larger drones that carry heavier payloads is yet to be seen. I hope they succeed.
One thing I intend to do in the next few years, as the Sun enters the more active phase of its 11 year solar cycle, is to shoot the Aurora from a drone, and a camera like the A7S III on a larger, stable drone would be perfect. But there is no indication of pricing yet, and a drone of this size won’t be cheap. So unless I decide to do a lot more drone work than I do already, perhaps it will be better to hire someone with the right kit. But that’s not as much fun as doing it yourself!
For more information on Airpeak do take a look at their website. There is already some impressive footage of it being used to shoot a Vision-S car on a test track.
Sony will launch a new small 4K handheld camcorder – the FX6 on Tuesday the 17th of November.
To find out more about this new and very exciting camcorder you can watch the launch event via Instagram. After the launch event I am hosting a Q and A on Instagram. I’ve been lucky enough to have shot with the camera and have extensively tested it, so tune in to the Q&A to learn more. There is a lot to like and I am certain this camcorder will prove to be extremely popular. The Instagram session will be here: https://www.instagram.com/sonyprofilmmaking/
As there is no Glastonbury Festival this year, the organisers and production company have been releasing some videos from last year. This video was shot mostly with Venice using Cooke 1.8x anamorphics; the non-Venice material is from an FS5. It’s a behind the scenes look at the activities and performances around the Glastonbury Big Top and the Theatre and Circus fields.
I see it so many times on various forums and user groups – “I didn’t see it until I looked at it at home and now I find the footage is unusable”.
We all want our footage to be perfect all of the time, but sometimes there might be something that trips up the technology that we are using. And that can introduce problems into a shot. The problem is perhaps that these things are not normal. As a result we don’t expect them to be there, so we don’t necessarily look for them. But thinking about this, I also think a lot of it is because very often the only thing being used to view what is being shot is a tiny LCD screen.
For the first 15 years of my career the only viewfinders available were either a monocular viewfinder with a magnifier or a large studio style viewfinder (typically 7″). Frankly if all you are using is a 3.5″ LCD screen, then you will miss many things!
I see many forum posts about these missed image issues on my phone, which has a 6″ screen. When I view the small versions of the posted examples of the issue I can rarely see it, but view it full screen and it becomes obvious. So what hope do you have of picking up these issues on location with a tiny monitor screen, often viewed too closely to be in good focus?
A 20 year old will typically have a focus range of around 12 diopters, but by the time you get to 30 that decreases to about 8, by 40 to 5 and 50 just 1 or 2. What that means (for the average person) is that if you are young enough you might be able to focus sufficiently on that small LCD when it’s close enough to your eyes for you to be able to see it properly and be able to see potential problems. But by the time you get to 30 most people won’t be able to adequately focus on a 3.5″ LCD until it’s too far from their eyes to resolve everything it is capable of showing you. If you are hand holding a camera with a 3.5″ screen such that the screen is 30cm or more from your eyes there is no way you can see critical focus or small image artefacts, the screen is just too small. Plus most people that don’t have their eyesight tested regularly don’t even realise it is deteriorating until it gets really bad.
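Those diopter figures translate directly into a near point (the closest distance the eye can focus) via the standard optics relationship near point in metres = 1 / accommodation amplitude in dioptres, assuming distance vision is fully corrected:

```python
def near_point_cm(amplitude_dioptres):
    """Closest focus distance implied by an accommodation amplitude.
    Standard optics: near point (m) = 1 / amplitude (dioptres),
    assuming distance vision is fully corrected."""
    return 100.0 / amplitude_dioptres

# Ages and amplitudes from the figures quoted above:
for age, amp in [(20, 12), (30, 8), (40, 5), (50, 2)]:
    print(f"age {age}: near point ~{near_point_cm(amp):.1f} cm")
# age 20: ~8.3 cm; age 30: ~12.5 cm; age 40: ~20.0 cm; age 50: ~50.0 cm
```

By 50, the screen has to be around half a metre away before it is sharp, at which point a 3.5″ LCD is far too small to resolve fine detail: exactly the trap described above.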
There are very good reasons why viewfinders have diopters/magnifiers. They are there to allow you to see everything your screen can show; they make the image appear larger and they keep out unwanted light. When you stop using them you risk missing things that can ruin a shot, whether that’s focus that’s almost but not quite right, something in the background that shouldn’t be there, or some subtle technical issue.
It’s all too easy to remove the magnifier and just shoot with the LCD, trusting that the camera will do what you hope it will. Often it’s the easiest way to shoot – we’ve all been there, I’m sure. BUT easy doesn’t mean best. When you remove the magnifier you are choosing easy shooting over the ability to see issues in your footage before it’s too late to do something about it.
Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continue to push Blackmagic Raw that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.