Sony has launched an entirely new division called Airpeak. Airpeak has produced a large drone that can carry an Alpha-sized camera, and Sony claims it is the smallest drone capable of doing so. It’s unknown at this time whether the Airpeak division will focus purely on larger drones capable of carrying non-integrated cameras or whether it will also produce smaller drones with integral cameras. It would certainly make sense to leverage Sony’s sensor expertise by creating dedicated cameras for drones and then drones to carry those cameras.
The drone market is going to be a tough one to make inroads into. There are already a couple of very well regarded drone manufacturers making some great drones, such as the DJI Inspire or Mavic Pro. But most of these are small and cannot carry larger external cameras. However, the cameras that these drones are equipped with can deliver very high quality images – and they continue to get better and better. The use of larger drones for video applications is more specialist, but globally it is a large market. Whether Sony can compete in this more specialist area of larger drones that carry heavier payloads is yet to be seen. I hope they succeed.
One thing I intend to do in the next few years, as the Sun enters the more active phase of its 11 year solar cycle, is to shoot the Aurora from a drone. A camera like the A7S III on a larger, stable drone would be perfect for this. But there is no indication of pricing yet and a drone of this size won’t be cheap. So unless I decide to do a lot more drone work than I do already, perhaps it will be better to hire someone with the right kit. But that’s not as much fun as doing it yourself!
For more information on Airpeak do take a look at their website. There is already some impressive footage of it being used to shoot a Vision-S car on a test track.
Raw can be a brilliant tool, I use it a lot. High quality raw is my preferred way of shooting. But it isn’t magic, it’s just a different type of recording codec.
All too often – and I’m as guilty as anyone – people talk about raw as “raw sensor data”, a term that implies raw really is something very different to a normal recording. In reality it’s not that different. When shooting raw, all that happens is that the video frames from the sensor are recorded before they are converted to a colour image. A raw frame is still a picture; it’s just a bitmap made up of brightness values, with each pixel represented by a single brightness code value, rather than a colour image where each location is represented by three values, one each for Red, Green and Blue or for Luma, Cb and Cr.
As that raw frame is still nothing more than a normal bitmap, all the camera’s settings such as white balance, ISO etc are in fact baked in to the recording. Each pixel only has one single value, and that value will have been determined by the way the camera is set up. Nothing you do in post production can change what was actually recorded. Most CMOS sensors are daylight balanced, so unless the camera adjusts the white balance prior to recording – which is what Sony normally do – your raw recording will be daylight balanced.
Modern cameras when shooting log or raw also record metadata that describes how the camera was set when the image was captured.
So the recorded raw file already has a particular white balance and ISO. I know lots of people will be disappointed to hear this, or will simply refuse to believe it, but that’s the truth about a raw bitmap image: a single code value for each pixel, and that value is determined by the camera settings.
This can be adjusted later in post production, but the adjustment range is not unlimited and it is not the same as making an adjustment in the camera. Plus there can be consequences to the image quality if you make large adjustments.
Log can also be adjusted extensively in post. For decades, feature films shot on film were scanned to 10 bit Cineon log (the log gamma curve S-Log3 is based on), and 10 bit log was used throughout post production until 12 bit and then 16 bit linear intermediate formats such as OpenEXR came along. So this should tell you that log can in fact be graded very well and very extensively.
But then many people will tell you that you can’t grade log as well as raw. Often they will cite photography as an example, where there is a huge difference between what you can do with a raw photo and a normal image. But we have to remember that this is typically comparing a highly compressed 8 bit JPEG with an often uncompressed 12 or 14 bit raw file. It’s not a fair comparison; of course you would expect the 14 bit file to be better.
The other argument often given is that it’s very hard to change the white balance of log in post: it doesn’t look right, or it falls apart. Often these issues have nothing to do with the log recording and everything to do with the tools being used.
When you work with raw in your editing or grading software you will almost always be using a dedicated raw tool or plugin designed for the flavour of raw you are using. As a result, everything you do to the file is optimised for that exact flavour of raw. It shouldn’t come as a surprise, then, that to get the best from log you should be using tools specifically designed for the type of log you are using. In the example below you can see how Sony’s Catalyst Browse can correctly change the white balance and exposure of S-Log material with simple sliders, just as effectively as most raw formats.
Applying the normal linear or power law corrections found in most edit software (709 is power law) to log won’t have the desired effect, and basic edit software rarely has proper log controls. You need to use a proper grading package like Resolve and its dedicated log controls. Better still is some form of colour managed workflow like ACES, where your specific type of log is precisely converted on the fly to a special digital intermediate and the corrections are made to that intermediate. There is no transcoding; you just tell ACES what the footage was shot on and magic happens under the hood. Once you have done that you can change the white balance or ISO of log material in exactly the same way as raw. There is very, very little difference.
When people say you can’t push log, more often than not it isn’t a matter of can’t, it’s a case of can – but you need to use the right tools.
Less compression or a greater bit depth are where the biggest differences between a log or raw recording come from, not so much from whether the data is log or raw. Don’t forget raw is often recorded using log, which kind of makes the “you can’t grade log” argument a bit daft.
Camera manufacturers and raw recorder manufacturers are perfectly happy to let everyone believe raw is magic and, worse still, to let people believe that ANY type of raw must be better than all other types of recordings. Read through any camera forum and you will see plenty of examples of “it’s raw so it must be better” or “I need raw because log isn’t as good” without any comprehension of what raw is, or of how in reality it’s the way the raw is compressed and the bit depth that really matter.
If we take ProRes Raw as an example: For a 4K 24/25fps file the bit rate is around 900Mb/s. For a conventional ProRes HQ file the bit rate is around 800Mb/s. So the file size difference between the two is not at all big.
But the ProRes Raw file only has to store around half as many data points as the 4:2:2 component ProRes HQ file (and only a third as many as a full RGB file would need). As a result, even though the ProRes Raw file often has a higher bit depth, which in itself usually means a better quality recording, it is also much, much less compressed and so will have fewer artefacts.
It’s the reduced compression and deeper bit depth possible with raw that can lead to higher quality recordings and as a result may bring some grading advantages compared to a normal ProRes or other compressed file. The best bit is that there is no significant file size penalty. So you have the same amount of data, but your data should be of higher quality. Given that you won’t need more storage, which should you use: the higher bit depth, less compressed file, or the more compressed one?
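To put rough numbers on this, here is a back-of-envelope sketch in Python using the approximate bit rates quoted above. The frame size, frame rate and sample counts are my own illustrative assumptions, not official codec specifications:

```python
# Back-of-envelope comparison of how hard each format must compress
# each stored sample, using the approximate bit rates quoted in the
# text (4K at 25fps). Illustrative figures only.

def bits_per_sample(bit_rate_mbps, fps, samples_per_frame):
    """Average recorded bits available for each stored sample."""
    return (bit_rate_mbps * 1_000_000 / fps) / samples_per_frame

w, h = 3840, 2160
bayer_samples = w * h          # raw: a single value per photosite
comp_422_samples = w * h * 2   # 4:2:2: Y for every pixel, Cb and Cr
                               # for every other pixel (x3 for full RGB)

raw_bps = bits_per_sample(900, 25, bayer_samples)
hq_bps = bits_per_sample(800, 25, comp_422_samples)

print(f"ProRes Raw: {raw_bps:.1f} bits/sample (~{12 / raw_bps:.1f}:1 vs a 12 bit source)")
print(f"ProRes HQ:  {hq_bps:.1f} bits/sample (~{10 / hq_bps:.1f}:1 vs a 10 bit source)")
```

Even on these rough figures the raw file has well over twice the recorded bits available per sample, which is why it can afford to compress each one far more gently.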
But not all raw files are the same. Some cameras feature highly compressed 10 bit raw, which frankly won’t be any better than most other 10 bit recordings, as you are having to do all the complex maths to create a colour image starting from just 10 bits. Most cameras do this internally at 12 bits or more. I believe raw needs to be at least 12 bit to be worth having.
If you could record uncompressed 12 bit RGB or 12 bit component log from these cameras that would likely be just as good and just as flexible as any raw recordings. But the files would be huge. It’s not that raw is magic, it’s just that raw is generally much less compressed and depending on the camera may also have a greater bit depth. That’s where the benefits come from.
It’s amazing how often people will tell you how easy it is to change the white balance or adjust the ISO of raw footage in post. But can you? Is it really true, and is it somehow different to changing the ISO or white balance of log footage?
Let’s start with ISO. If ISO is sensitivity, or the equivalent of sensitivity, how on earth can you change the sensitivity of the camera once you get into post production? The answer is that you can’t.
But then we have to consider how ISO works on an electronic camera. You can’t change the sensor in a video camera, so in reality you can’t change how sensitive an electronic camera is (I’m ignoring cameras with dual ISO for a moment). All you can do is adjust the gain or amplification applied to the signal from the sensor. You can add gain in post production too. So, when you adjust the exposure or ISO slider for your raw footage in post, all you are doing is adjusting how much gain you are adding. But you can do the same with log or any other gamma.
One thing that makes a difference with raw is that the gain is applied in such a way that what you see looks like an actual sensitivity change, no matter what gamma you are transforming the raw to. This makes it a little easier to change the final brightness in a pleasing way. But you can do exactly the same thing with log footage. Anything you do in post is altering the recorded file; it can never actually change what you captured.
Changing the white balance in post: White Balance is no different to ISO, you can’t change in post what the camera captured. All you can do is modify it through the addition or subtraction of gain.
Think about it. A sensor must have a certain response to light and the colours it sees depending on the material it’s made from and the colour filters used. There has to be a natural fixed white balance or a colour temperature that it works best at.
The silicon that video sensors are made from is almost always more sensitive at the red end of the spectrum than the blue end. As a result, almost all sensors tend to produce the best results with light that has a lot of blue (to make up for the lack of blue sensitivity) and not too much red. So most cameras naturally perform best with daylight, and most sensors are considered daylight balanced.
If a camera produces a great image under daylight how can you possibly get a great image under tungsten light without adjusting something? Somehow you need to adjust the gain of the red and blue channels.
Do it in camera and what you record is optimised for your choice of colour temperature at the time of shooting. But you can always undo or change this in post by subtracting or adding to whatever was added in the camera.
If the camera does not move away from its native response, you will be recording at the camera’s native white balance, and if you want anything other than that native response you will have to do it in post, adding or subtracting gain in the R and B channels to alter the colour temperature.
Either way, what you record has a nominal white balance, and anything you do in post is skewing what you have recorded using gain. There is no such thing as a camera with no native white balance; all cameras will favour one particular colour temperature. So even if a manufacturer claims that the white balance isn’t baked in, what they mean is that they don’t offer the ability to make any adjustments to the recorded signal. If you want the very best image quality, the best method is to adjust at the time of recording. As a result, a lot of camera manufacturers will skew the gain of the red and blue channels of the sensor in the camera when shooting raw, as this optimises what you are recording. You can then skew it again in post should you want a different balance.
With either method if you want to change the white balance from what was captured you are altering the gain of the red and blue channels. Raw doesn’t magically not have a white balance, so shooting with the wrong white balance and correcting it in post is not something you want to do. Often you can’t correct badly balanced raw any better than you can correct incorrectly balanced log.
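To illustrate the point, here is a tiny sketch of a post-production white balance shift as nothing more than per-channel gain. The pixel values and gain figures are invented for illustration; a real correction would come from the camera metadata or a colour-managed transform:

```python
# A white balance "change" in post is just gain on the red and blue
# channels. All numbers here are made up for illustration.

def shift_white_balance(rgb, r_gain, b_gain):
    r, g, b = rgb
    return (r * r_gain, g, b * b_gain)

# A neutral grey as it might be recorded by a daylight-balanced sensor
# under tungsten light: too much red, not enough blue.
recorded = (0.28, 0.18, 0.10)

# Pull red down and push blue up to bring it back towards neutral.
corrected = shift_white_balance(recorded, r_gain=0.643, b_gain=1.8)
print(corrected)  # all three channels now roughly 0.18
```

The same arithmetic applies whether the source was raw or log; only the tool applying the gains differs.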
How far you can adjust or correct raw depends on how it’s been compressed (or not), the bit depth, whether it’s log or linear and how noisy it is. Just like a log recording really, it all depends on the quality of the recording.
The big benefit raw can have is that the amount of data that needs to be recorded is considerably reduced compared to conventional component or RGB video recordings. As a result it’s often possible to record using a greater bit depth or with much less compression. It is the greater bit depth or reduced compression that really makes a difference. 16 bit data can have up to 65,536 luma gradations; compare that to the 4,096 of 12 bit or the 1,024 of 10 bit and you can see how a 16 bit recording can hold so much more information than a 10 bit one. And that makes a difference. But 10 bit log versus 10 bit raw? Well, it depends on the compression, but well compressed 10 bit log will likely outperform 10 bit raw, as the all-important colour processing will have been done in the camera at a much higher bit depth than 10 bit.
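Those gradation counts are simply powers of two, as this quick check shows:

```python
# Each extra bit doubles the number of available code values.
for bits in (10, 12, 16):
    print(f"{bits} bit: {2 ** bits:,} gradations")
```

Going from 10 bit to 16 bit gives 64 times as many levels to grade with.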
With some difficult times ahead and the need for most of us to minimise contact with others, there has never been a greater need for streaming and online video services than now.
I’m setting up some streaming gear in my home office so that I can do some presentations and online workshops over the coming weeks.
I am not an expert on this and although I did recently buy a hardware RTMP streaming encoder, like many of us I didn’t have a good setup for live feeds and streaming.
So like so many people I tried to buy a Blackmagic Design Atem, which is a low cost all in one switcher and streaming device. But guess what? They are out of stock everywhere with no word on when more will become available. So I have had to look at other options.
The good news is that there are many options. There is always your mobile phone, but I want to be able to feed several sources including camera feeds, the feed from my laptop and the video output from a video card.
OBS is a great piece of software that can convert almost any video source connected to a computer into a live stream that can be sent to most platforms, including Facebook and YouTube. If the computer is powerful enough it can switch between different camera and audio sources. If you follow the tutorials on the OBS website it’s pretty quick and easy to get it up and running.
So how am I getting video into the laptop that’s running OBS? I already had a Blackmagic Mini Recorder, which is an HDMI and SDI to Thunderbolt input adapter, and I shall be using this to feed the computer. There are many other options, but the BM Mini Recorders are really cheap and most dealers stock them, as does Amazon. It’s HD only, but for this I really don’t need 4K or UHD.
Taking things a step further I also have both an Atomos Sumo and an Atomos Shogun 7. Both of these monitor/recorders have the ability to act as a 4 channel vision switcher. The great thing about these compared to the Blackmagic Atem is that you can see all your sources on a single screen and you simply touch on the source that you wish to go live. A red box appears around that source and it’s output from the device.
So now I have the ability to stream a feed via OBS from the SDI or HDMI input on the Blackmagic Mini Recorder, fed from one of 4 sources switched by the Atomos Sumo or Shogun 7. A nice little micro studio setup. My sources will be my FS5 and FX9. I can use my Shogun as a video player. For workflow demos I will use another laptop or my main edit machine feeding the video output from DaVinci Resolve via a Blackmagic Mini Monitor which is similar to the mini recorder but the mini monitor is an output device with SDI and HDMI outputs. The final source will be the HDMI output of the edit computer so you can see the desktop.
Don’t forget audio. You can probably get away with very low quality video to get many messages across. But if the audio is hard to hear or difficult to understand then people won’t want to watch your stream. I’m going to be feeding a lavalier (tie clip) mic directly into the computer and OBS.
I think my main reason for writing this was to show that many of us probably already have most of the tools needed to put together a small streaming package. Perhaps you can offer this as a service to clients that now need to think about online training or meetings. I was lucky enough to already have all the items listed in this article; the only extra I have had to buy is a second Thunderbolt cable, as I only had one. But even if you don’t have a Sumo or Shogun 7 you can still use OBS to switch between the camera on your laptop and any other external inputs. The OBS software is free and very powerful, and it really is the keystone that makes this all work.
I will be starting a number of online seminars and sessions in the coming weeks. I do have some tutorial videos that I need to finish editing first, but once that’s done expect to see lots of interesting online content from me. Do let me know what topics you would like to see covered and subject to a little bit of sponsorship I’ll see what I can do.
Stay well people. This will pass and then we can all get back on with life again.
In the last month or so it has become increasingly hard to find dealers or stores with 3rd party BP-U style batteries in stock.
After a lot of digging around and talking to dealers and battery manufacturers, it became apparent that Sony were asking the manufacturers of BP-U style batteries to stop making and selling them or face legal action, the reason given being that the batteries infringe Sony’s intellectual property rights.
Why Is This Happening Now?
It appears that the reason for this clamp down is because it was discovered that the design of some of these 3rd party batteries was such that the battery could be inserted into the camera in a way that instead of power flowing through the power pins to the camera, power was flowing through the data pins. This will burn out the circuit boards in the camera and the camera will no longer work.
Users of these damaged cameras, unaware that the problem was caused by the battery, were sending them back to Sony for repair under warranty. I can imagine that many arguments would then have followed over who was to pay for these potentially very expensive repairs or camera replacements.
So it appears that to prevent further issues Sony is trying to stop potentially damaging batteries from being manufactured and sold.
This is good and bad. Of course no one wants to use a battery that could result in the need to replace a very expensive camera with a new one (and if you were not aware it was the battery you could also damage the replacement camera). But many of us, myself included, have been using 3rd party batteries so that we can have a D-Tap power connection on the battery to power other devices such as monitors.
Only Option – BP-U60T?
Sony don’t produce batteries with D-Tap outlets. They do make a battery with a Hirose connector (the BP-U60T), but that’s not what we really want, and compared to the 3rd party batteries it’s very expensive and the capacity isn’t all that high.
So where do we go from here?
If you are going to continue to use 3rd party batteries, do be very careful about how you insert them and be warned that there is the potential for serious trouble. I don’t know how widespread the problem is.
We can hope that perhaps Sony will either start to produce batteries with a D-Tap of their own, or work with a range of chosen 3rd party battery manufacturers to find a way to produce safe batteries with D-Tap outputs under licence.
Almost all modern day video and electronic stills cameras have the ability to change the brightness of the images they record. The most common way to achieve this is through the addition of gain or through the amplification of the signal that comes from the sensor.
On older video cameras this amplification was expressed as dB (decibels) of gain. A brightness change of 6dB is the same as one stop of exposure or a doubling of the ISO rating. But you must understand that adding gain to raise the ISO rating of a camera is very different to actually changing the sensitivity of a camera.
The problem with increasing the amplification or adding gain to the sensor output is that when you raise the gain you increase the level of the entire signal that comes from the sensor. So, as well as increasing the levels of the desirable parts of the image, making it brighter, the extra gain also increases the amplitude of the noise, making that brighter too.
Imagine you are listening to an FM radio. The signal starts to get a bit scratchy, so in order to hear the music better you turn up the volume (increasing the gain). The music will get louder, but so too will the scratchy noise, so you may still struggle to hear the music. Changing the ISO rating of an electronic camera by adding gain is little different. When you raise the gain the picture does get brighter but the increase in noise means that the darkest things that can be seen by the camera remain hidden in the noise which has also increased in amplitude.
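A quick sketch with made-up signal and noise levels shows why: gain scales both by the same factor, so the signal to noise ratio never improves:

```python
# Arbitrary illustrative amplitudes, not real sensor measurements.
signal, noise = 0.20, 0.02

gain_db = 6                    # one stop up
gain = 10 ** (gain_db / 20)    # 6dB is roughly a x2 amplitude gain

boosted_signal = signal * gain
boosted_noise = noise * gain

# The ratio of signal to noise is the same before and after the gain.
print(signal / noise, boosted_signal / boosted_noise)
```

The picture gets brighter, but whatever was hidden in the noise stays hidden in the noise.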
Another issue with adding gain to make the image brighter is that you will also normally reduce the dynamic range that you can record.
This is because amplification makes the entire signal bigger. So bright highlights that may be recordable within the recording range of the camera at 0dB or the native ISO may exceed the upper range of the recording format when even only a small amount of gain is added, limiting the high end.
At the same time the increased noise floor masks any additional shadow information so there is little if any increase in the shadow range.
Reducing the gain doesn’t really help either, as now the brightest parts of the image from the sensor are not amplified sufficiently to reach the camera’s full output. Very often the recordings from a camera at -3dB or -6dB of gain will never reach 100%.
A camera with dual base ISOs works differently.
Instead of adding gain to increase the sensitivity, a camera with a dual base ISO sensor will operate the sensor in two different sensitivity modes. This allows you to shoot in the low sensitivity mode when you have plenty of light, avoiding the need to add lots of ND filtration when you want to obtain a shallow depth of field. Then, when you are short of light, you can switch the camera to its high sensitivity mode.
When done correctly, a dual ISO camera will have the same dynamic range and colour performance in both the high and low ISO modes and only a very small difference in noise between the two.
How dual sensitivity with no loss of dynamic range is achieved is often kept very secret by the camera and sensor manufacturers. Getting good, reliable and solid information is hard. Various patents describe different methods. Based on my own research this is a simplified description of how I believe Sony achieve two completely different sensitivity ranges on both the Venice and FX9 cameras.
The image below represents a single microscopic pixel from a CMOS video sensor. There will be millions of these on a modern sensor. Light from the camera lens passes first through a micro lens and colour filter at the top of the pixel structure. From there the light hits a part of the pixel called a photodiode. The photodiode converts the photons of light into electrons of electricity.
In order to measure the pixel output we have to store the electrons for the duration of the shutter period. The part of the pixel used to store the electrons is called the “image well” (in an electrical circuit diagram the image well would be represented as a capacitor, and it is often simply the capacitance of the photodiode itself).
Then as more and more light hits the pixel, the photodiode produces more electrons. These pass into the image well and the signal increases. Once we reach the end of the shutter opening period the signal in the image well is read out, empty representing black and full representing very bright.
Consider what would happen if the image well, instead of being a single charge storage area, was actually two charge storage areas, with a way to select whether we use the combined storage areas or just one part of the image well.
When both areas are connected to the pixel the combined capacity is large. So it will take more electrons to fill it up, so more light is needed to produce the increased amount of electrons. This is the low sensitivity mode.
If part of the charge storage area is disconnected and all of the photodiode’s output is directed into the remaining, now smaller, storage area, then it will fill up faster, producing a bigger signal more quickly. This is the high sensitivity mode.
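As a highly simplified sketch of the idea, with invented round numbers rather than real sensor specifications:

```python
# Toy model: a pixel's output level is how full its charge store is.
# The electron counts are made up for illustration only.
full_well_low = 60000    # both storage areas connected (low sensitivity)
full_well_high = 15000   # smaller storage area only (high sensitivity)

electrons = 5000         # same light, same shutter period in both modes

out_low = electrons / full_well_low
out_high = electrons / full_well_high

# The smaller well reads out four times higher for the same light:
# two stops more "sensitive", with no extra amplification of the signal.
print(out_high / out_low)
```

The key point is that the sensitivity change happens at the charge storage stage, before any gain is applied, which is why the noise penalty is so small.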
What about noise?
In the low sensitivity mode, with the bigger storage area, any unwanted noise generated by the photodiode will be more diluted by the greater volume of electrons, so noise will be low. When the size of the storage area or image well is reduced, the noise from the photodiode will be less diluted, so the noise will be a little bit higher. But overall the noise will be much less than that which would be seen if a large amount of extra gain were added.
Note for the more technical amongst you: Strictly speaking the image well starts full. Electrons have a negative charge so as more electrons are added the signal in the image well is reduced until maximum brightness output is achieved when the image well is empty!!
As well as what I have illustrated above, there may be other things going on, such as changes to the amplifiers that boost the pixel’s output before it is passed to the converters that turn the analog signal into a digital one. But hopefully this helps explain why dual base ISO is very different to the conventional gain changes used to give electronic cameras a wide range of different ISO ratings.
On the Sony Venice and the PXW-FX9 there is only a very small difference between the noise levels when you switch from the low base ISO to the high one. This means that you can pick and choose between either base sensitivity level depending on the type of scene you are shooting without having to worry about the image becoming unusable due to noise.
NOTE: This article is my own work and was prepared without any input from Sony. I believe that the dual ISO process illustrated above is at the core of how Sony achieve two different base sensitivities on the Venice and FX9 cameras. However I can not categorically guarantee this to be correct.
The simple answer as to whether you can shoot anamorphic on the FX9 is no, you can’t. The FX9, certainly to start with, will not have an anamorphic mode, and it’s unknown whether it ever will. I certainly wouldn’t count on it ever getting one (but who knows, perhaps if we keep asking for it we will get it).
But just because a camera doesn’t have a dedicated anamorphic mode doesn’t mean you can’t shoot anamorphic. The main thing you won’t have is de-squeeze, so the image will be distorted and stretched in the viewfinder. But most external monitors now have anamorphic de-squeeze, so this is not a huge deal and easy enough to work around.
1.3x or 2x Anamorphic?
With a 16:9 or 17:9 camera you can use 1.3x anamorphic lenses to get a 2.39:1 final image. So the FX9, like most 16:9 cameras, will be suitable for use with 1.3x anamorphic lenses out of the box.
But for the full anamorphic effect you really want to shoot with 2x anamorphic lenses. A 2x anamorphic lens will give your footage a much more interesting look than a 1.3x anamorphic. But if you want to reproduce the classic 2.39:1 aspect ratio normally associated with anamorphic lenses and 35mm film, then you need a 4:3 sensor rather than a 16:9 one – or do you?
Anamorphic on the PMW-F5 and F55.
It’s worth looking at shooting 2x anamorphic on the Sony F5 and F55 cameras. These cameras have 17:9 sensors, so they are not ideal for 2x anamorphic, but they do have a dedicated anamorphic mode. Because the F55 sensor, like most super 35mm sensors, is not tall enough, shooting with a 2x anamorphic lens leaves you, after de-squeezing, with a very narrow 3.55:1 aspect ratio. To avoid this very narrow final aspect ratio, once you have de-squeezed the image you need to crop the sides by around 0.7x and then expand the cropped image to fill the frame. This not only reduces the resolution of the final output but also the usable field of view. Even with the resolution reduction from the crop and zoom, it was still argued that because the F55 starts from a 4K sensor, this was roughly the equivalent of Arri’s open gate 3.4K. However, the loss of field of view still presents a problem for many productions.
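The aspect ratio sums behind that crop are straightforward. Assuming a standard 16:9 active area and a 2.39:1 delivery target:

```python
# De-squeezing multiplies the recorded aspect ratio by the lens squeeze.
def desqueezed_ratio(sensor_ratio, squeeze):
    return sensor_ratio * squeeze

wide = desqueezed_ratio(16 / 9, 2.0)   # far too wide for 2.39:1 delivery
crop = 2.39 / wide                     # horizontal crop needed to get there

print(f"de-squeezed: {wide:.2f}:1, crop: {crop:.2f}x")
```

This gives roughly 3.56:1 before the crop and a crop factor of about 0.67x, which is where the "around 0.7x" figure comes from.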
What if I have Full Frame 16:9?
The FX9 has a 6K full frame sensor and a full frame sensor is bigger, not just wider but most importantly it’s taller than s35mm. Tall enough for use with a 2x s35 anamorphic lens! The FX9 sensor is approx 34mm wide and 19mm tall in FF6K mode.
In comparison, the Arri 35mm 4:3 open gate sensor area is 28mm x 18.1mm, and we know this works very well with 2x anamorphic lenses as it mimics the size of a full size 35mm cine film frame. The important bit here is the height: 18.1mm for the Arri open gate versus 18.8mm for the FX9 in Full Frame scan mode.
Crunching the numbers.
If you do the maths, start with the FX9 in FF mode and a s35mm 2x anamorphic lens.
Because the image is 6K subsampled to 4K the resulting recording will have 4K resolution.
But you will need to crop the sides of the final recording by roughly 30% to remove the left/right vignette caused by using an anamorphic lens designed for 35mm movie film (the exact amount of crop will depend on the lens). This then results in a 2.8K ish resolution image depending on how much you need to crop.
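As a rough check of that figure, assuming a UHD (3840 pixel wide) recording and the approximate 30% crop:

```python
# Illustrative numbers only; the real crop depends on the image circle
# of the particular anamorphic lens used.
recorded_width = 3840
crop = 0.30

usable_width = recorded_width * (1 - crop)
print(f"~{usable_width / 1000:.1f}K of horizontal resolution remains")
```

Crop a little less and you land nearer the 2.8K figure; crop a little more and you are closer to 2.7K.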
4K Bayer won’t give 4K resolution.
That doesn’t seem very good until you consider that a 4K 4:3 bayer sensor would only yield about 2.8K resolution anyway.
Arri’s s35mm cameras use open gate 3.2K bayer sensors, so they will produce an even lower resolution image, perhaps around 2.2K. Do remember that the original Arri ALEV sensor was designed when 2K was the norm for cinema and HD TV was still new. The Arri super 35 cameras were for a long time the gold standard for anamorphic because their sensor size and shape matches that of a full size 35mm movie film frame. But now cameras like Sony’s Venice, which can shoot 6K as well as 4K 4:3 and 6:5, are taking over.
The FX9 in Full Frame scan mode will produce a great looking image with a 2x anamorphic lens without losing any of the field of view. The horizontal resolution won’t be 4K due to the left and right edge crop required, but the horizontal resolution should be higher than you would get from a 4K 16:9 sensor or a 3.2K 4:3 sensor. Unlike using a 16:9 4K sensor where both the horizontal and vertical resolution are compromised the FX9’s vertical resolution will be 4K and that’s important.
What about Netflix?
While Netflix normally insists on a sensor with a minimum of 4K of pixels horizontally for capture, it is permitting sensors with lower horizontal pixel counts to be used for anamorphic capture, because the increased sensor height needed for 2x anamorphic means that there are more pixels vertically. The total usable pixel count when using the Arri LF with a typical 35mm 2x anamorphic lens is 3148 x 2636 pixels. That’s a total of 8 megapixels, similar to the 8 megapixel total pixel count of a 4K 16:9 sensor with a spherical lens. The argument is that the total captured picture information is similar for both, so both should be, and indeed are, allowed. The Arri format does lead to a final aspect ratio slightly wider than 2.39:1.
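The pixel-count comparison above works out like this. The Arri LF figures are the ones quoted in the paragraph above; the 4K 16:9 figures are standard UHD.

```python
# Total usable pixel count comparison, using the figures quoted above.
arri_lf_anamorphic = 3148 * 2636    # Arri LF with a 35mm 2x anamorphic lens
uhd_16x9_spherical = 3840 * 2160    # 4K 16:9 sensor with a spherical lens

print(f"Arri LF 2x anamorphic: {arri_lf_anamorphic / 1e6:.1f} MP")  # ~8.3 MP
print(f"4K 16:9 spherical:     {uhd_16x9_spherical / 1e6:.1f} MP")  # ~8.3 MP

# Final aspect ratio after the 2x horizontal desqueeze:
aspect = (3148 * 2) / 2636
print(f"Desqueezed aspect ratio: {aspect:.2f}:1")  # close to 2.39:1
```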
So could the FX9 get Netflix approval for 2x Anamorphic?
The FX9’s sensor is 3168 pixels tall when shooting FF 16:9 as its pixel pitch is finer than that of the Arri LF sensor. When working with a 2x anamorphic super 35mm lens, the image circle from the lens will cover around 4K x 3K of pixels – a total of 12 megapixels – when the sensor is operating in the 6K Full Frame scan mode. But then the FX9 will internally downscale this to that vignetted 4K recording that needs to be cropped.
Scaling 6K down to 4K means that the 4K of pixels covered by the lens becomes roughly 2.7K. But the 3.1K from the Arri, once debayered, will more than likely be even less than this, perhaps only 2.1K.
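A minimal sketch of that scaling arithmetic. The lens coverage figure and the debayer loss factor are rough assumptions in line with the numbers quoted above, not measured values.

```python
# In-camera downscale: the 6K full frame scan becomes a 4K recording,
# so everything on the sensor is scaled by the same 4K/6K ratio.
scale = 4.0 / 6.0

# An s35mm 2x anamorphic lens covers roughly 4K of sensor pixels (assumption).
lens_coverage_k = 4.0
fx9_effective_k = lens_coverage_k * scale
print(f"FX9 effective horizontal resolution: ~{fx9_effective_k:.1f}K")  # ~2.7K

# The Arri's 3.1K bayer sensor loses resolution in the debayer;
# a ~0.7 factor is a common rule of thumb (assumption).
arri_effective_k = 3.1 * 0.7
print(f"Arri effective horizontal resolution: ~{arri_effective_k:.1f}K")
```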
But whether Netflix will accept the in-camera down conversion is a very big question. The maths indicates that the resolution of the final output of the FX9 would be greater than that of the LF, even taking the necessary crop into account, but this would need to be tested and verified in practice. If the maths is right, I see no reason why the FX9 won’t be able to meet Netflix’s minimum requirements for 2x anamorphic production. If this is a workflow you wish to pursue, I would recommend taking the 10 bit 4:2:2 HDMI output to a ProRes recorder and recording using the best codec you can until the FX9 gains the ability to output raw. Meeting the Netflix standard is speculation on my part, and perhaps it will never be accepted for anamorphic, but to answer the original question –
– Can you shoot anamorphic with the FX9? Absolutely, yes you can, and the end result should be pretty good. But you’ll have to put up with a distorted image in the supplied viewfinder (for now at least).
This is BIG. Atomos have just announced a completely new range of monitors for HDR production. From 17″ to 55″, these new monitors will complement the Atomos Sumo, Shogun, Shinobi and Ninja products to provide a complete suite of HDR monitors.
The new Neon displays are Dolby certified and for me this is particularly interesting and perfect timing as I am just about to do the post production on a couple of Dolby certified HDR productions.
I’m just about to leave for the Cinegear show over at Paramount Studios so I don’t have time to list all the amazing features here. So follow the link below to get the full lowdown on these 10 bit, 1,000,000:1 contrast monitors.
This is a question that gets asked a lot. And if you are thinking about buying a new camera, it’s one that you need to think about. But in reality I don’t think 8K is a concern for most of us.
I recently had a conversation with a representative of a well known TV manufacturer. We discussed 8K and 8K TVs. An interesting conclusion to the conversation was that this particular TV manufacturer wasn’t really expecting there to be a lot of 8K content anytime soon. The reason for selling 8K TVs is the obvious one: in the consumer’s eyes, 8K is a bigger number than 4K, so it must mean that it is better. It’s an easy sell for the TV manufacturers, even though it’s arguable that most viewers will never be able to tell the difference between an 8K TV and a 4K one (let’s face it, most struggle to tell the difference between 4K and HD).
Instead of expecting 8K content this particular TV manufacturer will be focussing on high quality internal upscaling of 4K content to deliver an enhanced viewing experience.
It’s also been shown time and time again that contrast and Dynamic Range trump resolution for most viewers. This was one of the key reasons why it took a very long time for electronic film production to really get to the point where it could match film. A big part of the increase in DR for video cameras came from the move from the traditional 2/3″ video sensor to much bigger super 35mm sensors with bigger pixels. Big pixels are one of the keys to good dynamic range and the laws of physics that govern this are not likely to change any time soon.
This is part of the reason why Arri have stuck with the same sensor for so long. They know that reducing the pixel size to fit more into the same space will make it hard to maintain the excellent DR their cameras are known for. This is in part why Arri have chosen to increase the sensor size by combining sensors. It’s at least in part why Red and Sony have chosen to increase the size of their sensors beyond super 35mm as they increase resolution. The pixels on the Venice sensor are around the same size as most 4K s35 cameras. 6K was chosen as the maximum resolution because that allows this same pixel size to be used, no DR compromise, but it necessitates a full frame sensor and the use of high quality full frame lenses.
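The pixel-size trade-off above can be illustrated with some simple pitch arithmetic. The sensor widths and pixel counts below are nominal figures I’m using for illustration only, not official specifications.

```python
# Pixel pitch = sensor width / horizontal pixel count.
# Nominal figures for illustration (assumptions, not official specs).
s35_width_mm = 24.9    # nominal super 35mm sensor width
ff_width_mm = 36.0     # nominal full frame sensor width

pitch_s35_4k = s35_width_mm / 4096 * 1000   # microns per pixel, 4K s35
pitch_ff_6k = ff_width_mm / 6048 * 1000     # microns per pixel, 6K full frame

print(f"s35 4K pixel pitch: {pitch_s35_4k:.1f} um")
print(f"FF 6K pixel pitch:  {pitch_ff_6k:.1f} um")
# Both come out at around 6 um - roughly the same pixel size, just more
# of them on the larger sensor, which is the Venice trade-off described above.
```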
So, if we want 8K with great DR it forces us to use ever bigger sensors. Yes, you will get a super shallow DoF and this may be seen as an advantage for some productions. But what’s the point of a move to higher and higher resolutions if more and more of the image is out of focus due to a very shallow DoF? Getting good, pin sharp focus with ever bigger sensors is going to be a challenge unless we also dramatically increase light levels. This goes against the modern trend for lower illumination levels. Only last week I was shooting a short film with a Venice and it was a struggle to balance the amount of the subject that was in focus with light levels, especially at longer focal lengths. I don’t like shots of people where one eye is in focus but the other clearly not, it looks odd, which eye should you choose as the in-focus eye?
And what about real world textures? How many of the things that we shoot really contain details and textures beyond 4K? And do we really want to see every pore, wrinkle and blemish on our actors’ faces or sets? Too much resolution on a big screen creates a form of hyper reality. We start to see things we would never ever normally see as the image and the textures become magnified and expanded. This might be great for a science documentary but is distracting for a romantic drama.
If resolution really, really was king then every town would have an IMAX theater and we would all be shooting IMAX.
Before 8K becomes normal and mainstream I believe HDR will be the next step. Consumers can see the benefits of HDR much more readily than 8K. Right now 4K is not really the norm, HD is. There is a large amount of 4K acquisition, but it’s not mainstream. The amount of HDR content being produced is still small. So first we need to see 4K become normal. When we get to the point that whenever a client rings the automatic assumption is that it’s a 4K shoot, so we won’t even bother to ask, that’s when we can consider 4K to be normal, but that’s not the case for most of us just yet. Following on from that the next step (IMHO) will be where for every project the final output will be 4K HDR. I see that as being at least a couple of years away yet.
After all that, then we might see a push for more 8K. At some point in the not too distant future 8K TVs will be no more expensive than 4K ones. But I also believe that in-TV upscaling will be normal and possibly the preferred mode due to bandwidth restrictions. Less compressed 4K upscaled to 8K may well look just as good, if not better, than an 8K signal that needs more compression.
8K may not become “normal” for a very long time. We have been able to easily shoot 4K for 6 years or more, but it’s only just becoming normal and Arri still have a tremendous following that choose to shoot at less than 4K for artistic reasons. The majority of Cinemas with their big screens are still only 2K, but audiences rarely complain of a lack of resolution. More and more content is being viewed on small phone or tablet screens where 4K is often wasted. It’s a story of diminishing returns, HD to 4K is a much bigger visual step than 4K to 8K and we still have to factor in how we maintain great DR.
So for the next few years at least, for the majority of us, I don’t believe 8K is actually desirable. Many struggle with 4K workflows and the extra data and processing power needed compared to HD. An 8K frame is 4 times the size of a 4K frame. Some will argue that shooting in 8K has many benefits. This can be true if your main goal is resolution, but in reality it’s only really very post production intensive projects where extensive re-framing, re-touching etc is needed that will benefit from shooting in 8K right now. It’s hard to get accurate numbers, but the majority of Hollywood movies still use a 2K digital intermediate and only around 20% of cinemas can actually project at more than 2K.
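The frame-size arithmetic above is easy to verify: each step from HD to 4K to 8K quadruples the pixel count, and the data and processing load scales with it.

```python
# Pixel counts for standard broadcast/consumer frame sizes.
hd = 1920 * 1080        # ~2.1 MP
uhd_4k = 3840 * 2160    # ~8.3 MP
uhd_8k = 7680 * 4320    # ~33.2 MP

print(uhd_4k // hd)      # 4 - a 4K frame is 4x the size of an HD frame
print(uhd_8k // uhd_4k)  # 4 - an 8K frame is 4x the size of a 4K frame
print(uhd_8k // hd)      # 16 - and 16x the size of an HD frame
```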
So in conclusion, in my humble opinion at least: 8K is more about the sales pitch than actual practical use and application. People will use it just because they can and because it sounds impressive. But for most of us right now it simply isn’t necessary, and it may well be a step too far.