In the last month or so it has become increasingly hard to find dealers or stores with 3rd party BP-U style batteries in stock.
After a lot of digging around and talking to dealers and battery manufacturers it became apparent that Sony were asking the manufacturers of BP-U style batteries to stop making and selling them or face legal action. The reason given is that the batteries infringe Sony’s Intellectual Property rights.
Why Is This Happening Now?
It appears that the reason for this clampdown is that the design of some of these 3rd party batteries was such that the battery could be inserted into the camera in a way that sent power through the data pins instead of the power pins. This burns out the circuit boards in the camera and the camera will no longer work.
Users of these damaged cameras, unaware that the problem was caused by the battery, were sending them back to Sony for repair under warranty. I can imagine the arguments that would have followed over who was to pay for these potentially very expensive repairs or camera replacements.
So it appears that to prevent further issues Sony is trying to stop potentially damaging batteries from being manufactured and sold.
This is good and bad. Of course no one wants to use a battery that could result in the need to replace a very expensive camera with a new one (and if you were not aware it was the battery you could also damage the replacement camera). But many of us, myself included, have been using 3rd party batteries so that we can have a D-Tap power connection on the battery to power other devices such as monitors.
Only Option – BP-U60T?
Sony don’t produce batteries with D-Tap outlets. They do make a battery with a Hirose connector (the BP-U60T), but that’s not what we really want, and compared to the 3rd party batteries it’s very expensive and the capacity isn’t all that high.
So where do we go from here?
If you are going to continue to use 3rd party batteries, do be very careful about how you insert them and be warned that there is the potential for serious trouble. I don’t know how widespread the problem is.
We can hope that Sony will either start to produce batteries with a D-Tap of their own, or perhaps work with a range of chosen 3rd party battery manufacturers to find a way to produce safe batteries with D-Tap outputs under licence.
Almost all modern day video and electronic stills cameras have the ability to change the brightness of the images they record. The most common way to achieve this is through the addition of gain or through the amplification of the signal that comes from the sensor.
On older video cameras this amplification was expressed as dB (decibels) of gain. A brightness change of 6dB is the same as one stop of exposure or a doubling of the ISO rating. But you must understand that adding gain to raise the ISO rating of a camera is very different to actually changing the sensitivity of a camera.
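As a quick sanity check, the dB-to-ISO relationship described above can be expressed in a few lines of Python. The base ISO of 800 here is just an example figure, not tied to any particular camera:

```python
# Video convention: 6dB of gain = 1 stop = a doubling of the ISO rating.
# (Strictly speaking a doubling is 20*log10(2) = 6.02dB, but 6dB is the
# working figure used on camera gain switches.)

def gain_to_iso(base_iso, gain_db):
    """Approximate ISO rating after adding gain_db of gain."""
    stops = gain_db / 6.0
    return base_iso * (2 ** stops)

for db in (0, 6, 12, 18):
    print(f"{db:>2}dB gain -> ISO {gain_to_iso(800, db):.0f}")
```

So +6dB on a hypothetical ISO 800 camera rates it at ISO 1600, +12dB at ISO 3200, and so on; but as the rest of this section explains, the noise floor rises with it.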
The problem with increasing the amplification or adding gain to the sensor output is that when you raise the gain you increase the level of the entire signal that comes from the sensor. So, as well as increasing the levels of the desirable parts of the image, making it brighter, the extra gain also increases the amplitude of the noise, making that brighter too.
Imagine you are listening to an FM radio. The signal starts to get a bit scratchy, so in order to hear the music better you turn up the volume (increasing the gain). The music will get louder, but so too will the scratchy noise, so you may still struggle to hear the music. Changing the ISO rating of an electronic camera by adding gain is little different. When you raise the gain the picture does get brighter but the increase in noise means that the darkest things that can be seen by the camera remain hidden in the noise which has also increased in amplitude.
Another issue with adding gain to make the image brighter is that you will also normally reduce the dynamic range that you can record.
This is because amplification makes the entire signal bigger. So bright highlights that may be recordable within the recording range of the camera at 0dB or the native ISO may exceed the upper range of the recording format when even only a small amount of gain is added, limiting the high end.
At the same time the increased noise floor masks any additional shadow information so there is little if any increase in the shadow range.
Reducing the gain doesn’t really help either, as now the brightest parts of the image from the sensor are not amplified sufficiently to reach the camera’s full output. Very often the recordings from a camera with -3dB or -6dB of gain will never reach 100%.
A camera with dual base ISOs works differently.
Instead of adding gain to increase the sensitivity of the camera, a camera with a dual base ISO sensor will operate the sensor in two different sensitivity modes. This will allow you to shoot in the low sensitivity mode when you have plenty of light, avoiding the need to add lots of ND filters when you want to obtain a shallow depth of field. Then when you are short of light you can switch the camera to its high sensitivity mode.
When done correctly, a dual ISO camera will have the same dynamic range and colour performance in both the high and low ISO modes and only a very small difference in noise between the two.
How dual sensitivity with no loss of dynamic range is achieved is often kept very secret by the camera and sensor manufacturers. Getting good, reliable and solid information is hard. Various patents describe different methods. Based on my own research this is a simplified description of how I believe Sony achieve two completely different sensitivity ranges on both the Venice and FX9 cameras.
The image below represents a single microscopic pixel from a CMOS video sensor. There will be millions of these on a modern sensor. Light from the camera lens passes first through a micro lens and colour filter at the top of the pixel structure. From there the light hits a part of the pixel called a photodiode. The photodiode converts the photons of light into electrons of electricity.
In order to measure the pixel output we have to store the electrons for the duration of the shutter period. The part of the pixel used to store the electrons is called the “image well” (in an electrical circuit diagram the image well would be represented as a capacitor and is often simply the capacitance of the photodiode itself).
Then as more and more light hits the pixel, the photodiode produces more electrons. These pass into the image well and the signal increases. Once we reach the end of the shutter opening period the signal in the image well is read out, empty representing black and full representing very bright.
Consider what would happen if the image well, instead of being a single charge storage area, was actually two charge storage areas, with a way to select whether we use the combined storage areas or just one part of the image well.
When both areas are connected to the pixel the combined capacity is large, so it takes more electrons – and therefore more light – to fill it up. This is the low sensitivity mode.
If part of the charge storage area is disconnected and all of the photodiode’s output is directed into the remaining, now smaller storage area, it will fill up faster, producing a bigger signal more quickly. This is the high sensitivity mode.
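As a rough sketch of the idea, here is a toy model of the two storage modes. All of the numbers are made up purely for illustration – real well capacities and conversion gains are not published by the sensor manufacturers:

```python
# Toy model of the dual charge-storage idea described above.
# Illustrative numbers only - not real sensor specifications.

FULL_WELL_COMBINED = 60000   # electrons, both storage areas connected (low sensitivity)
FULL_WELL_SMALL    = 15000   # electrons, small storage area only (high sensitivity)

def signal_fraction(electrons, well_capacity):
    """How full the well is: 0.0 represents black, 1.0 represents clipped white."""
    return min(electrons / well_capacity, 1.0)

electrons = 15000  # the same amount of light hitting the pixel in both modes
print("low sensitivity mode: ", signal_fraction(electrons, FULL_WELL_COMBINED))  # 0.25
print("high sensitivity mode:", signal_fraction(electrons, FULL_WELL_SMALL))     # 1.0
```

The same light produces a four times bigger signal fraction in the small-well mode – a two stop sensitivity increase in this made-up example – without any amplification of the signal.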
What about noise?
In the low sensitivity mode with the bigger storage area any unwanted noise generated by the photodiode will be more diluted by the greater volume of electrons, so noise will be low. When the size of the storage area or image well is reduced the noise from the photodiode will be less diluted so the noise will be a little bit higher. But overall the noise will be much less than that which would be seen if a large amount of extra gain was added.
Note for the more technical amongst you: Strictly speaking the image well starts full. Electrons have a negative charge so as more electrons are added the signal in the image well is reduced until maximum brightness output is achieved when the image well is empty!!
As well as what I have illustrated above there may be other things going on such as changes to the amplifiers that boost the pixel’s output before it is passed to the converters that convert the pixel output from an analog signal to a digital one. But hopefully this will help explain why dual base ISO is very different to the conventional gain changes used to give electronic cameras a wide range of different ISO ratings.
On the Sony Venice and the PXW-FX9 there is only a very small difference between the noise levels when you switch from the low base ISO to the high one. This means that you can pick and choose between either base sensitivity level depending on the type of scene you are shooting without having to worry about the image becoming unusable due to noise.
NOTE: This article is my own work and was prepared without any input from Sony. I believe that the dual ISO process illustrated above is at the core of how Sony achieve two different base sensitivities on the Venice and FX9 cameras. However I can not categorically guarantee this to be correct.
The simple answer as to whether you can shoot anamorphic on the FX9 or not is no, you can’t. The FX9, certainly to start with, will not have an anamorphic mode and it’s unknown whether it ever will. I certainly wouldn’t count on it ever getting one (but who knows, perhaps if we keep asking for it we will get it).
But just because a camera doesn’t have a dedicated anamorphic mode it doesn’t mean you can’t shoot anamorphic. The main thing you won’t have is de-squeeze. So the image will be distorted and stretched in the viewfinder. But most external monitors now have anamorphic de-squeeze so this is not a huge deal and easy enough to work around.
1.3x or 2x Anamorphic?
With a 16:9 or 17:9 camera you can use 1.3x anamorphic lenses to get a 2:39 final image. So the FX9, like most 16:9 cameras will be suitable for use with 1.3x anamorphic lenses out of the box.
But for the full anamorphic effect you really want to shoot with 2x anamorphic lenses. A 2x anamorphic lens will give your footage a much more interesting look than a 1.3x anamorphic. But if you want to reproduce the classic 2:39 aspect ratio normally associated with anamorphic lenses and 35mm film then you need a 4:3 sensor rather than a 16:9 one – or do you?
Anamorphic on the PMW-F5 and F55.
It’s worth looking at shooting 2x Anamorphic on the Sony F5 and F55 cameras. These cameras have 17:9 sensors, so they are not ideal for 2x Anamorphic, but they do have a dedicated Anamorphic mode. Because the 17:9 F55 sensor, like most super 35mm sensors, is not tall enough for a 2x Anamorphic lens, after de-squeezing you will end up with a very narrow 3.55:1 aspect ratio. To avoid this very narrow final aspect ratio, once you have de-squeezed the image you need to crop the sides of the image by around 0.7x and then expand the cropped image to fill the frame. This not only reduces the resolution of the final output but also the usable field of view. Even with the resolution reduction as a result of the crop and zoom it was still argued that because the F55 starts from a 4K sensor this was roughly the equivalent of Arri’s open gate 3.4K. However the loss of field of view still presents a problem for many productions.
What if I have Full Frame 16:9?
The FX9 has a 6K full frame sensor and a full frame sensor is bigger, not just wider but most importantly it’s taller than s35mm. Tall enough for use with a 2x s35 anamorphic lens! The FX9 sensor is approx 34mm wide and 19mm tall in FF6K mode.
In comparison the Arri 35mm 4:3 open gate sensor area is 28mm x 18.1mm, and we know this works very well with 2x Anamorphic lenses as it mimics the size of a full size 35mm cine film frame. The important bit here is the height – 18.1mm for the Arri open gate and 18.8mm for the FX9 in Full Frame Scan Mode.
Crunching the numbers.
If you do the maths – Start with the FX9 in FF mode and use a s35mm 2x anamorphic lens.
Because the image is 6K subsampled to 4K the resulting recording will have 4K resolution.
But you will need to crop the sides of the final recording by roughly 30% to remove the left/right vignette caused by using an anamorphic lens designed for 35mm movie film (the exact amount of crop will depend on the lens). This then results in a 2.8K ish resolution image depending on how much you need to crop.
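A quick back-of-envelope check of that crop maths. As noted above, the 30% side crop is an assumption and the real figure depends on the particular lens:

```python
# Estimating the usable width after cropping the anamorphic vignette.
# The 30% crop figure is an assumption - it varies from lens to lens.

recorded_width = 4096   # FX9 4K DCI recording, pixels across
side_crop = 0.30        # proportion of the width lost to the left/right vignette

remaining_width = recorded_width * (1 - side_crop)
print(f"usable width after crop: {remaining_width:.0f} px (~{remaining_width/1000:.1f}K)")
```

With a slightly larger crop the figure lands closer to the 2.8K mentioned above; either way the recoverable horizontal resolution sits somewhere in the high 2K range.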
4K Bayer won’t give 4K resolution.
That doesn’t seem very good until you consider that a 4K 4:3 bayer sensor would only yield about 2.8K resolution anyway.
Arri’s s35mm cameras use open gate 3.2K bayer sensors, so they will produce an even lower resolution image, perhaps around 2.2K. Do remember that the original Arri ALEV sensor was designed when 2K was the norm for the cinema and HD TV was still new. The Arri super 35 cameras were for a long time the gold standard for Anamorphic because their sensor size and shape matches the size and shape of a full size 35mm movie film frame. But now cameras like Sony’s Venice, which can shoot both 6K and 4K in 4:3 and 6:5, are taking over.
The FX9 in Full Frame scan mode will produce a great looking image with a 2x anamorphic lens without losing any of the field of view. The horizontal resolution won’t be 4K due to the left and right edge crop required, but the horizontal resolution should be higher than you would get from a 4K 16:9 sensor or a 3.2K 4:3 sensor. Unlike using a 16:9 4K sensor where both the horizontal and vertical resolution are compromised the FX9’s vertical resolution will be 4K and that’s important.
What about Netflix?
While Netflix normally insist on a minimum of a sensor with 4K of pixels horizontally for capture, they are permitting sensors with lower horizontal pixel counts to be used for anamorphic capture, because the increased sensor height needed for 2x anamorphic means that there are more pixels vertically. The total usable pixel count when using the Arri LF with a typical 35mm 2x anamorphic lens is 3148 x 2636 pixels. That’s a total of 8 megapixels, which is similar to the 8 megapixel total pixel count of a 4K 16:9 sensor with a spherical lens. The argument is that the total captured picture information is similar for both, so both should be, and indeed are, allowed. The Arri format does lead to a final aspect ratio slightly wider than 2:39.
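The pixel-count comparison works out like this, using the figures quoted above:

```python
# Total captured pixels: Arri LF with a 2x s35 anamorphic vs a 4K 16:9
# sensor with a spherical lens (the comparison Netflix's argument rests on).

arri_lf_anamorphic = 3148 * 2636   # usable area quoted for the LF + 2x anamorphic
uhd_16x9 = 3840 * 2160             # a 4K UHD 16:9 sensor

print(f"Arri LF 2x anamorphic: {arri_lf_anamorphic / 1e6:.1f} megapixels")
print(f"4K 16:9 spherical:     {uhd_16x9 / 1e6:.1f} megapixels")
```

Both come out at roughly 8.3 megapixels, which is why the two capture formats are treated as equivalent in terms of total picture information.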
So could the FX9 get Netflix approval for 2x Anamorphic?
The FX9’s sensor is 3168 pixels tall when shooting FF 16:9 as its pixel pitch is finer than that of the Arri LF sensor. When working with a 2x anamorphic super 35mm lens the image circle from the lens will cover around 4K x 3K of pixels, a total of 12 megapixels on the sensor when it’s operating in the 6K Full Frame scan mode. But then the FX9 will internally downscale this to that vignetted 4K recording that needs to be cropped.
6K down to 4K means that the 4K covered by the lens becomes roughly 2.7K. But the 3.1K from the Arri, once debayered, will more than likely be even less than this, perhaps only 2.1K.
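The scaling arithmetic, assuming a simple linear 6K to 4K downscale (the coverage figure is the approximation given above):

```python
# How much of the FX9's internal 4K recording the anamorphic image circle
# occupies after the in-camera 6K -> 4K downscale. Approximate figures.

sensor_width_k = 6.0      # FX9 Full Frame scan mode, horizontal (approx)
recorded_width_k = 4.0    # internal 4K recording
lens_coverage_k = 4.0     # sensor width covered by a 2x s35 anamorphic (approx)

scale = recorded_width_k / sensor_width_k
usable_k = lens_coverage_k * scale
print(f"usable width in the recording: ~{usable_k:.1f}K")  # ~2.7K
```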
But whether Netflix will accept the in camera down conversion is a very big question. The maths indicates that the resolution of the final output of the FX9 would be greater than that of the LF, even taking the necessary crop into account. But this would need to be tested and verified in practice. If the math is right, I see no reason why the FX9 won’t be able to meet Netflix’s minimum requirements for 2x anamorphic production. If this is a workflow you wish to pursue I would recommend taking the 10 bit 4:2:2 HDMI out to a ProRes recorder and record using the best codec you can until the FX9 gains the ability to output raw. Meeting the Netflix standard is speculation on my part, perhaps it never will get accepted for anamorphic, but to answer the original question –
– Can you shoot anamorphic with the FX9 – Absolutely, yes you can and the end result should be pretty good. But you’ll have to put up with a distorted image with the supplied viewfinder (for now at least).
This is BIG. Atomos have just announced a completely new range of monitors for HDR production. From 17″ to 55″ these new monitors will complement their Atomos Sumo, Shogun, Shinobi and Ninja products to provide a complete suite of HDR monitors.
The new Neon displays are Dolby certified and for me this is particularly interesting and perfect timing as I am just about to do the post production on a couple of Dolby certified HDR productions.
I’m just about to leave for the Cinegear show over at Paramount Studios so I don’t have time to list all the amazing features here. So follow the link below to get the full lowdown on these 10 bit, million:1 contrast monitors.
This is a question that gets asked a lot. And if you are thinking about buying a new camera it has to be one that you need to think about. But in reality I don’t think 8K is a concern for most of us.
I recently had a conversation with a representative of a well known TV manufacturer. We discussed 8K and 8K TVs. An interesting conclusion to the conversation was that this particular TV manufacturer wasn’t really expecting there to be a lot of 8K content anytime soon. The reason for selling 8K TVs is the obvious one: in the consumer’s eyes 8K is a bigger number than 4K, so it must mean that it is better. It’s an easy sell for the TV manufacturers, even though it’s arguable that most viewers will never be able to tell the difference between an 8K TV and a 4K one (let’s face it, most struggle to tell the difference between 4K and HD).
Instead of expecting 8K content this particular TV manufacturer will be focussing on high quality internal upscaling of 4K content to deliver an enhanced viewing experience.
It’s also been shown time and time again that contrast and Dynamic Range trump resolution for most viewers. This was one of the key reasons why it took a very long time for electronic film production to really get to the point where it could match film. A big part of the increase in DR for video cameras came from the move from the traditional 2/3″ video sensor to much bigger super 35mm sensors with bigger pixels. Big pixels are one of the keys to good dynamic range and the laws of physics that govern this are not likely to change any time soon.
This is part of the reason why Arri have stuck with the same sensor for so long. They know that reducing the pixel size to fit more into the same space will make it hard to maintain the excellent DR their cameras are known for. This is in part why Arri have chosen to increase the sensor size by combining sensors. It’s at least in part why Red and Sony have chosen to increase the size of their sensors beyond super 35mm as they increase resolution. The pixels on the Venice sensor are around the same size as most 4K s35 cameras. 6K was chosen as the maximum resolution because that allows this same pixel size to be used, no DR compromise, but it necessitates a full frame sensor and the use of high quality full frame lenses.
So, if we want 8K with great DR it forces us to use ever bigger sensors. Yes, you will get a super shallow DoF and this may be seen as an advantage for some productions. But what’s the point of a move to higher and higher resolutions if more and more of the image is out of focus due to a very shallow DoF? Getting good, pin sharp focus with ever bigger sensors is going to be a challenge unless we also dramatically increase light levels. This goes against the modern trend for lower illumination levels. Only last week I was shooting a short film with a Venice and it was a struggle to balance the amount of the subject that was in focus with light levels, especially at longer focal lengths. I don’t like shots of people where one eye is in focus but the other clearly not, it looks odd, which eye should you choose as the in-focus eye?
And what about real world textures? How many of the things that we shoot really contain details and textures beyond 4K? And do we really want to see every pore, wrinkle and blemish on our actors’ faces or sets? Too much resolution on a big screen creates a form of hyper reality. We start to see things we would never normally see as the image and the textures become magnified and expanded. This might be great for a science documentary but is distracting for a romantic drama.
If resolution really, really was king then every town would have an IMAX theater and we would all be shooting IMAX.
Before 8K becomes normal and mainstream I believe HDR will be the next step. Consumers can see the benefits of HDR much more readily than 8K. Right now 4K is not really the norm, HD is. There is a large amount of 4K acquisition, but it’s not mainstream. The amount of HDR content being produced is still small. So first we need to see 4K become normal. When we get to the point that whenever a client rings the automatic assumption is that it’s a 4K shoot, so we won’t even bother to ask, that’s when we can consider 4K to be normal, but that’s not the case for most of us just yet. Following on from that the next step (IMHO) will be where for every project the final output will be 4K HDR. I see that as being at least a couple of years away yet.
After all that, then we might see a push for more 8K. At some point in the not too distant future 8K TVs will be no more expensive than 4K ones. But I also believe that in-TV upscaling will be normal and possibly the preferred mode due to bandwidth restrictions. Less compressed 4K upscaled to 8K may well look just as good, if not better, than an 8K signal that needs more compression.
8K may not become “normal” for a very long time. We have been able to easily shoot 4K for 6 years or more, but it’s only just becoming normal and Arri still have a tremendous following that choose to shoot at less than 4K for artistic reasons. The majority of Cinemas with their big screens are still only 2K, but audiences rarely complain of a lack of resolution. More and more content is being viewed on small phone or tablet screens where 4K is often wasted. It’s a story of diminishing returns, HD to 4K is a much bigger visual step than 4K to 8K and we still have to factor in how we maintain great DR.
So for the next few years at least, for the majority of us, I don’t believe 8K is actually desirable. Many struggle with 4K workflows and the extra data and processing power needed compared to HD, and an 8K frame is 4 times the size of a 4K frame. Some will argue that shooting in 8K has many benefits. This can be true if your main goal is resolution, but in reality it’s only the very post production intensive projects, where extensive re-framing, re-touching etc. is needed, that will benefit from shooting in 8K right now. It’s hard to get accurate numbers, but the majority of Hollywood movies still use a 2K digital intermediate and only around 20% of cinemas can actually project at more than 2K.
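That frame-size claim is simple arithmetic:

```python
# An 8K frame has exactly 4 times the pixels of a 4K UHD frame.
uhd_8k = 7680 * 4320
uhd_4k = 3840 * 2160
print(uhd_8k / uhd_4k)  # 4.0
```

Four times the pixels per frame means, all else being equal, four times the data to move, store and process at every stage of the workflow.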
So in conclusion, in my humble opinion at least, 8K is more about the sales pitch than actual practical use and application. Some people will use it just because they can and it sounds impressive. But for most of us right now it simply isn’t necessary and it may well be a step too far.
There are a lot of videos circulating on the web right now showing what appears to be some kind of magic trick: someone has shot over exposed, recorded the over exposed images using ProRes Raw and then, as if by magic, made some adjustments so the footage goes from being almost nothing but a white-out of over exposure to a perfectly exposed image.
This isn’t magic, this isn’t raw suddenly giving you more over exposure range than you have with log, this is nothing more than a quirk of the way FCP-X handles ProRes Raw material.
Before going any further – this isn’t a put-down of raw or ProRes raw. It’s really great to be able to take raw sensor data and record that with only minimal processing. There are a lot of benefits to shooting with raw (see my earlier post showing all the extra data that 12 bit raw can give). But a magic ability to let you over expose by seemingly crazy amounts isn’t something raw does any better than log.
Currently to work with ProRes Raw you have to go through FCP-X. FCP-X applies a default sequence of transforms to the Raw footage to get it from raw data to a viewable image. These all expect the footage to be exposed exactly as per the camera manufacturers recommendations, with no leeway. Inside FCP-X it’s either exposed exactly right, or it isn’t.
The default decode settings include a heavy highlight roll-off. Apple call it “Tone Mapping” – fancy words used to make it sound special, but it’s really no different to a LUT or the transforms and processes that take place in other raw decoders. Like a LUT it maps very specific values in the raw data to very specific output brightness values. So if you shoot just a bit bright – as you would often do with log to improve the signal to noise ratio – the ProRes raw appears to be heavily over exposed. This is because anything bright ends up crushed into nothing but flat white by the default highlight roll-off.
In reality the material is probably only marginally over exposed, maybe just one to two stops, which is something we have become used to doing with log. When you view brightly exposed log, the log itself doesn’t look over exposed, but if you apply a narrow high contrast 709 LUT to it, then the footage looks over exposed until you grade it or add an exposure compensated LUT. This is what is happening by default inside FCP-X: a transform is being applied that makes brightly exposed footage look very bright and possibly over exposed – because that’s the way it was shot!
This is why in FCP-X it is typical to change the color library to WCG (Wide Color Gamut), as this changes the way FCP-X processes the raw, changing the Tone Mapping and, most importantly, getting rid of the highlight roll-off. With no roll-off, highlights and any even slight over exposure will still blow out as you can’t show 14 stops on a conventional 6 stop TV or monitor. Anything beyond the first 6 stops will be lost and the image will look over exposed until you grade or adjust the material to control the brighter parts of the image and bring them back into a viewable range. When you are in WCG mode in FCP-X there is no longer a highlight roll-off crushing the highlights, and because they are not crushed they can be recovered – but there isn’t any more highlight range than you would have if you shot with log on the same camera!
None of this is some kind of raw over exposure magic trick as is often portrayed. It’s simply a matter of not understanding how the workflow works and appreciating that if you shoot bright, well, it’s going to look bright until you normalise it in post. We do this all the time with log via LUTs and grading too! It can be a little more straightforward to recover highlights from linear raw footage, as comes from an FS5 or FS7, compared to log. That’s because of the way log maintains a constant data level in each highlight stop, and often normal grading and colour correction tools don’t deal with this correctly. The highlight range is there, but it can be tricky to normalise the log without log grading tools such as the log controls in DaVinci Resolve.
Another problem is the common use of LUTs on log footage. The vast majority of LUTs add a highlight roll-off, and if you try to grade the highlights after adding a LUT with a highlight roll-off it’s going to be next to impossible to recover them. You must do the highlight recovery before the LUT is added, or use a LUT that has compensation for any over exposure. All of these things can give the impression that log has less highlight range than the raw from the same camera. This is not normally the case; both will be the same, as it’s the sensor that limits the range.
The difference in the highlight behaviour is in the workflows, and very often both log and raw workflows are misunderstood. This can lead to owners and users of these cameras thinking that one process has more range than the other, when in reality there is no difference; it appears to be different because the workflow works in a different way.
There has been a lot of discussion recently, and a few videos posted, that perhaps give the impression that if you shoot with S-Log2 on an FS5 and compare it to raw shot on the FS5 there is very little difference.
Many of the points raised in the videos are correct. ProRes raw won’t give you any more dynamic range. It won’t improve the cameras low light performance. There are features such as automatic lens aberration correction applied when shooting internally which isn’t applied when shooting raw. Plus it’s true that shooting ProRes raw requires an external recorder that makes the diminutive little FS5 much more bulky.
So why in that case shoot ProRes Raw?
Frankly, if all you are doing is producing videos that will be compressed to within an inch of their life for YouTube, S-Log2 can do an excellent job when exposed well, it can be graded and can produce a nice image.
But if you are trying to produce the highest quality images possible then well shot ProRes raw will give you more data to work with in post production with fewer compression artefacts than the internal 8 bit UHD XAVC.
I was looking at some shots that I did in preparation for my recent webinar on ProRes raw, and at first glance there isn’t perhaps much difference between the UHD 8 bit XAVC S-Log2 files and the ProRes raw files that were shot within seconds of each other. But look more closely and there are some important differences, especially if skin tones are important to you.
Skin tones sit half way between middle grey and white and typically span around 2 to 3 stops. So with S-Log2 and an 8 bit recording a face would span around 24 to 34 IRE and have somewhere between 24 and 35 code values – yes, that’s right, maybe as few as 24 shades in each of the R, G and B channels. If you apply a basic LUT to this and then view it on a typical 8 bit monitor it will probably look OK.
But compare that to a 12 bit linear raw recording and the same face with 2 to 3 stops across it will have anywhere up to 10 to 20 times as many code values (somewhere around 250 to 500, depending on exactly how it’s exposed). Apply the same LUT as for the S-Log2 and on the surface it looks pretty much the same – or does it?
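To see where figures like these come from, here is a simplified sketch of how code values are distributed per stop in 12 bit linear data. It ignores noise, black level offsets and the camera’s actual encoding, and the placement of the face relative to clipping is purely hypothetical:

```python
# In linear data each stop down from clipping contains half the code
# values of the stop above it. Simplified model - real cameras add
# black offsets, noise and other processing.

total_codes = 2 ** 12  # 4096 code values in a 12 bit recording

codes_in_stop = {}
top = total_codes
for stop in range(8):          # stop 0 = the brightest (clipping) stop
    lower = top // 2
    codes_in_stop[stop] = top - lower
    top = lower

print(codes_in_stop)  # {0: 2048, 1: 1024, 2: 512, 3: 256, 4: 128, ...}

# A hypothetical face exposed so it spans stops 3 and 4 below clipping:
face_codes = codes_in_stop[3] + codes_in_stop[4]
print("codes across the face:", face_codes)  # 384
```

Depending on where exposure places the face, this simple model gives a few hundred code values across the skin tones – in the same ballpark as the 250 to 500 mentioned above, and an order of magnitude more than 8 bit S-Log2 allocates to the same range.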
Look closely and you will see more texture in the 12 bit raw. If you are lucky enough to have a 10 bit monitor the differences are even more apparent. Sure, it isn’t an in-your-face night and day difference but the 12 bit skin tones look less like plastic and much more real, they just look nicer, especially if it’s someone with a good complexion.
In addition, looking at my test material I am also seeing some mpeg compression artefacts on the skin tones in the 8 bit XAVC that have a smoothing effect, reducing some of the subtle textures and adding to the slightly un-real, video look.
The other issue with a lack of code values and H.264 compression is banding. Take 8 bit S-Log2 and start boosting the contrast in a sky scene, perhaps to bring out some cloud details, and you will start to see banding and stair stepping if you are not very careful. You will also see it across walls and other textureless surfaces. You can even see this on your grading suite waveform scopes in many cases. You won’t get this with well exposed 12 bit linear raw (for any normal grading at least).
None of these are huge deals perhaps. But what is it that makes a great picture? Take Sony’s Venice or the Arri Alexa as examples. We know these to be great cameras that produce excellent images. But what is it that makes the images so good? The short answer is that it is a combination of a wide range of factors, each done as well as possible: good DR, good colour, good skin tones etc. So what you want to record is whatever the sensor in your camera can deliver, as well as you can. 8 bit UHD compressed to only 100Mb/s is not really that great. 12 bit raw will give you more textures in the mid range and highlights. It does have some limitations in the shadows, but that is easily overcome with a nice bright exposure and printing down in post.
And it’s not just about image quality.
Don’t forget that ProRes Raw makes shooting at 4K DCI possible. If you hope to ever release your work for cinema display, perhaps on the festival circuit, you are going to be much better off shooting in the cinema DCI 4K standard rather than the UHD TV standard. It also allows you to shoot 60fps in 4K (I’m in the middle of a very big 4K 60p project right now). Want to shoot even faster? With ProRes Raw you can shoot at up to 120fps in 4K. So there are many other benefits to the raw option on the FS5 and recording ProRes Raw on a Shogun Inferno.
There is also the acceptability of 8 bit UHD. No broadcaster that I know of will ever consider 8 bit UHD unless there is absolutely no other way to get the material. You are far more likely to be able to get them to accept 12 bit raw.
Future proofing is another consideration. I am sure that ProRes Raw decoders will improve and support in other applications will eventually come. By going back to your raw sensor data with better software you may be able to gain better image quality from your footage in the future. With log you are already somewhat limited, as the bulk of the image processing has already been done and is baked into the footage.
It’s late on Friday afternoon here in the UK and I’ve promised to spend some time with the family this evening. So no videos today. But next week I’ll post some of the examples I’ve been looking at so that you can see where ProRes Raw elevates the image quality possible from the FS5.
At NAB 2018 a very hot topic is the launch of the FS5 II. The FS5 II is an update on the existing FS5 that includes the FS Raw output option and the HFR option as standard. So out of the box this means that this camera will be a great match for an Atomos Shogun Inferno to take advantage of the new Apple ProRes Raw codec.
Just like the FS5 the FS5 II can shoot using a range of different gamma curves including Rec-709, HLG, S-Log2 and S-Log3. So for those more involved projects where image control is paramount you can shoot in log (or raw) then take the footage into your favourite grading software and create whatever look you wish. You can tweak and tune your skin tones, play with the highlight roll off and create that Hollywood blockbuster look – with both the FS5 and the FS5 II. There is no change to this other than the addition of FS-Raw as standard on the FS5 II.
The big change is to the camera’s default colour science.
Ever since I started shooting on Sony cameras, which was a very long time ago, they have always looked a certain way. If you point a Sony camera at a Rec-709 test chart you will find that the colours are actually quite accurate, the colour patches on the chart lining up with the target boxes on a vector scope. All Sony cameras look this way so that if you use several different cameras on the same project they should at least look very similar, even if one of those cameras is a few years old. But this look and standard was established many years ago, when camera and TV technology was nowhere near as advanced as it is today.
In addition, sometimes accurate isn’t pretty. Television display technology has come a long way in recent years. Digital broadcasting combined with good quality LCD and OLED displays now means that we are able to see a wider range of colours and a larger dynamic range. Viewers’ expectations are changing; we all want prettier images.
When Sony launched the high end Venice digital cinema camera a bold step was taken, which was to break away from the standard Sony look and instead develop a new, modern, “pretty” look. A lot of research was done with both cinematographers and viewers, trying to figure out what makes a pretty picture. Over several months I’ve watched Pablo, Sony’s colourist at the Digital Motion Picture Center at Pinewood Studios, develop new LUTs with this new look for the Venice camera. It hasn’t been easy, but it looks really nice and is quite a departure from that standard Sony look.
The FS5 II includes many aspects of this new look. It isn’t just a change to the colours it is also a change to the default gamma curve that introduces a silky smooth highlight roll off that extends the dynamic range well beyond that normally possible with a conventional Rec-709 gamma curve. A lot of time was spent looking at how this new gamma behaves when shooting people and faces. In particular those troublesome highlights that you get on a nose or cheek that’s catching the light. You know – those pesky highlights that just don’t normally look nice on a video camera.
So as well as rolling off the brightness of these highlights in a smooth way, the colour also subtly washes out to prevent the highlight colour bloom that can be a video giveaway. This isn’t easy to do. Any colourist will tell you that getting bright skin tone highlights to look nice is tough. Bring down the brightness too far and it looks wrong because you lose too much contrast. De-saturate too much and it looks wrong as it just becomes a white blob. Finding the right balance of extended dynamic range with good contrast, plus a pleasing roll-off without a complete white-out, is difficult enough to do in a grading suite where you can tweak and tune the settings for each shot. Coming up with a profile that will work over a vast range of shooting scenarios with no adjustment is even tougher. But it looks to me as though the engineers at Sony have really done a very nice job in the FS5 II.
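For illustration only, the two ideas described above can be sketched as a pair of simple functions. This is not Sony’s actual processing, just the general shape of the technique: compress luma smoothly above a knee, and ease saturation toward white as a pixel approaches clipping.

```python
import math

# A sketch of the two techniques described (NOT Sony's actual processing):
# 1. a soft highlight roll-off that compresses luma smoothly above a knee,
# 2. a saturation ease-off so near-clip highlights fade to white gently
#    instead of blooming with colour.

def rolloff(y, knee=0.8):
    """Soft highlight roll-off: linear below the knee, asymptotic to 1 above."""
    if y <= knee:
        return y
    excess = y - knee
    return knee + (1 - knee) * (1 - math.exp(-excess / (1 - knee)))

def highlight_desat(y, sat, start=0.85):
    """Ease saturation toward zero between `start` and full white."""
    if y <= start:
        return sat
    return sat * max(0.0, 1 - (y - start) / (1 - start))
```

Balancing the knee point, the roll-off rate and the desaturation start point against each other is exactly the tuning problem described above; in a camera profile those values have to work unadjusted across every scene.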
Going forward from here I would expect to see, or at least like to see, most of Sony’s future cameras use this new colour science. But this is a big step for Sony, breaking away from decades of one look and every camera looking more or less the same. Do remember that this change is primarily to the default, “standard” gamma look. It does not affect the FS5 II’s log or raw recordings. There is also going to have to be a set of LUTs to go with this new colour science so that those shooting with a mix of the baked-in look and S-Log or raw can make all the footage match. In addition, users of other S-Log cameras will want to be able to make their cameras match. I see no reason why this won’t be possible via a LUT or set of LUTs, within the limitations of each camera’s sensor technology.
A lot of people seem unhappy with the FS5 II. I think many want a Sony Venice for the price of an FS5. Let’s be realistic: that isn’t going to happen. 10 bit recording in UHD would be nice, but that would need higher bit rates to avoid motion artefacts, which would in turn need faster and more expensive media. If you want higher image quality in UHD or 4K DCI, do consider an Atomos recorder and the new ProRes Raw codec. The files are barely any bigger than ProRes HQ, but offer 12 bit quality.
Given that the price of the FS5 II is going to be pretty much the same or maybe even a little lower than the regular FS5 (before you even add any options), I am not sure why so many people are complaining. The FS5 II takes a great little camera, makes it even better and costs even less.
Over the last few days there have been various rumours and posts coming from Apple about how they intend to get back to providing decent support for professional users of their computers. Apple have openly admitted that the Trash Can Mac Pro has thermal problems and as a result has become a dead end design, which is why there haven’t been any big updates to the flagship workstation from Apple. Apple have hinted that new workstations are on the way, although it would seem that we won’t see these until next year perhaps.
Another announcement came out today, a new version of FCP-X is to be released which includes support for a new ProRes codec called ProRes Raw. This is BIG!
Raw recordings can be made from certain cameras that have bayer sensors, such as the Sony FS5 and FS7. Recording the raw data from the sensor maximises your post production flexibility and normally offers the best possible image quality from the camera. Currently if you record 4K raw with these cameras using an Atomos Shogun or similar, the bit rate will be close to 3Gb/s at 24p. These are huge files and the CinemaDNG format used to record them is difficult and clunky to work with. As a result most users take the raw output from the camera, transform it to S-Log2 or S-Log3 and record it as 10 bit ProRes on the external recorder. This is a bit of a shame, as going from 12 bit linear raw to 10 bit S-Log means you are not getting the full benefit of the raw output.
Enter ProRes Raw: ProRes Raw will allow users to record the camera’s raw output at a much reduced bit rate with no significant loss of quality. There are two versions, ProRes Raw and ProRes Raw HQ. The HQ bit rate is around 1Gb/s at 24fps. This is not significantly bigger than the ProRes HQ (880Mb/s) that most users are using now to record the raw, yet the full benefit of 12 bit linear is retained. A 1TB SSD will hold around an hour of ProRes Raw; compare that to uncompressed raw, where you only get around 20 minutes, and you can see that this is a big step forward for users of the FS5 in particular.
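The capacity figures are easy to sanity check. Straight arithmetic only gives an upper bound; real-world record times come in lower once you allow for audio, filesystem overhead and bit rates that vary with frame rate and content. The rates below are the approximate ones quoted above, not exact specifications.

```python
# Back-of-envelope record time from bit rate. Real-world capacity comes in
# lower than this (audio, filesystem overhead, bit rate varying with frame
# rate and content), so treat these as upper bounds.

def record_minutes(capacity_tb, bitrate_gbps):
    """Minutes of footage a drive holds at a constant video bit rate."""
    bits = capacity_tb * 1e12 * 8          # decimal terabytes to bits
    return bits / (bitrate_gbps * 1e9) / 60

print(round(record_minutes(1, 1.0)))   # ~1Gb/s ProRes Raw HQ: 133 min upper bound
print(round(record_minutes(1, 3.0)))   # ~3Gb/s uncompressed raw: 44 min upper bound
```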
ProRes Raw (the non HQ version) is even smaller! The files are smaller than typical ProRes HQ files. This is possible because recording raw is inherently more efficient than recording component video.
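A rough way to see why: a bayer sensor produces one sample per photosite, whereas 4:2:2 component video carries two samples per pixel (luma plus alternating colour difference). So the raw signal starts with less data before the codec even runs. A simplified sketch, ignoring differences between the codecs themselves:

```python
# Why raw starts smaller before compression: a bayer sensor yields one
# sample per photosite, while 4:2:2 component video carries two samples
# per pixel (Y plus alternating Cb/Cr). Uncompressed bits per 4K DCI frame:

def raw_bits(w, h, bit_depth=12):
    return w * h * bit_depth            # one sample per photosite

def yuv422_bits(w, h, bit_depth=10):
    return w * h * 2 * bit_depth        # Y + half-rate Cb and Cr per pixel

w, h = 4096, 2160
print(raw_bits(w, h) / yuv422_bits(w, h))   # 0.6 -> raw is 60% of the data
```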
Apple claims that ProRes Raw will play back in real time on MacBook Pros and iMacs without any additional rendering or external graphics cards, so it obviously isn’t terribly processor intensive. This is excellent news! Within FCP-X the playback resolution can be decreased to improve playback performance on less powerful systems or for multistream playback.
It looks like you will be able to record 4K DCI raw from an FS5 or FS7 at up to 60fps continuously. This breaks through the previous 30fps limit for the Shogun. The FS7 will be able to record 2K raw at up to 240fps and the FS5 will be able to record 4K raw at 100 and 120fps for 4 seconds. Other raw cameras are also supported by the Atomos recorders at differing frame sizes and frame rates.
At the moment the only recorders listed as supporting ProRes Raw are the Atomos Shogun Inferno and the Sumo19 and it looks like it will be a free update. In addition the DJI Inspire 2 drone and Zenmuse X7 Super 35mm camera will also support ProRes Raw.
Whether you will be able to use ProRes Raw in other applications such as Resolve or Premiere is unclear at this time. I hope that you can (or at least will be able to in the near future).