All posts by alisterchapman

My take on the PXW-FX9

So I’m sure you are by now aware that Sony have just launched a new camera, the PXW-FX9. I’m not going to repeat all the information that’s already in the press release or on the Sony website here.

 

Instead I’m going to reflect a bit on what it’s actually like to work with, having been privileged enough to spend a fair bit of time with various pre-production FX9s (originally it was going to be all black rather than the metallic grey of the production units).

Let’s be quite clear. The FX9 is not a souped up FS7 II. Although on the outside it may look similar, under the hood it is very, very different. For a start the full frame 6K sensor in the FX9 is completely new, designed specifically for this camera. What I find interesting about the sensor is that although the camera can’t do anamorphic and can only currently do 16:9 UHD (17:9 4K DCI will come in a later firmware update) it is a full height 4:3 sensor and it isn’t masked. So just maybe, anamorphic or other aspect ratios will be possible in the future. Talking to the engineers, anamorphic isn’t on the official road map, but it’s not a closed door.

 

My first thought was that the decision to down sample from the full frame 6K to UHD (and later 4K DCI) is a little disappointing, as I’m sure we would all love the ability to record 6K as an option. But on the other hand, the way the down sampling cleans up the sensor output, reducing noise, is very welcome.

You also have to remember that a camera like the FS7, which uses a 4K bayer sensor, will not produce an image with true 4K resolution. Because of the way bayer sensors work, a 4K bayer sensor results in a recording with a luma resolution of around 3K, depending on what you are shooting. The chroma resolution will be even lower. But start with a 6K sensor and the 4K recordings really will have 4K resolution, with better color resolution than is possible from a 4K sensor. So the images from the FX9 look sharper and have greater clarity than those from an FS7 because they are genuinely higher resolution. Yet the file size is exactly the same. No need to change your workflow, no need to store bigger files, but you get more recorded resolution and better color. Great for chroma key etc.
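As a back-of-the-envelope sketch of that resolution argument, here is the arithmetic. The ~0.75 debayer factor is an assumed rule-of-thumb figure, not a Sony specification; real results vary with the debayer algorithm and the scene.

```python
# Rough rule of thumb: a Bayer sensor resolves around 75% of its
# photosite count as luma detail after debayering (assumed figure).
DEBAYER_FACTOR = 0.75

def delivered_luma_width(sensor_width_px, recording_width_px):
    """Approximate delivered luma resolution, in horizontal pixels."""
    debayered = sensor_width_px * DEBAYER_FACTOR
    # You can never deliver more detail than the recording format holds.
    return min(debayered, recording_width_px)

# A "4K" bayer sensor (FS7-style) recording UHD: only ~3K of real luma detail.
print(delivered_luma_width(4096, 3840))   # 3072.0
# A 6K sensor (FX9-style) down sampled to UHD: the full 3840 is resolved.
print(delivered_luma_width(6000, 3840))   # 3840
```

The point of the `min()` is that oversampling only pays off up to the recording resolution: the 6K sensor still debayers to more than 3840 pixels of luma detail, so the UHD file is fully resolved, at the same file size as before.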

Perhaps one of the most striking differences in image quality between the FS7 and the FX9 is the lack of noise. When shooting S-Log3 the FX9 has much less noise at 4000 ISO than the FS7 at 2000 ISO. At 800 ISO the FX9 is a little better again. There is less fixed pattern noise and less noise in the shadow areas. In practice this means there is no need to offset the exposure when shooting log with the FX9, as there often is with the FS7. As with any camera shooting log you never want to be underexposed, but the FX9 works great at either of its base ISOs, producing clean, largely noise-free images.

This is a big deal because the FX9 also has a huge dynamic range. I’ve measured well over 14 stops using a DSC Xyla test chart and am not going to argue with Sony’s 15+ stop claim. I counted 16 steps on the chart from the FX9, though how usable the bottom 2 are is open to some debate. The FS7 only exhibited 14 steps when we measured the two cameras side by side, and the difference between the two was clear to see, including all the extra noise in the FS7 images. In practice the combination of this huge dynamic range and low noise means you get a greater usable highlight range than the FS7, FS5 or F5/F55 while still retaining an amazing shadow range. There’s no S-Log2 in the FX9, as S-Log2 can’t capture the camera’s full dynamic range.
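It’s easy to underestimate what one extra stop means, so here’s a tiny sketch: each stop of dynamic range is a doubling of the captured brightness range.

```python
# Each stop doubles the scene contrast a camera can capture, so going
# from 14 to 15 stops doubles the brightness range end to end.
def stops_to_contrast_ratio(stops):
    return 2 ** stops

print(stops_to_contrast_ratio(14))  # 16384  (roughly what the FS7 showed on the chart)
print(stops_to_contrast_ratio(15))  # 32768  (Sony's 15+ stop claim for the FX9)
```

So the jump from the FS7’s 14 measured stops to 15+ isn’t a small increment: it doubles the contrast range the camera can record.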

With the camera dealing so well with very big brightness and contrast ranges, what about color? While it’s possible to make almost any log camera look almost any way you wish, the question becomes: how easy is it to make it look nice? I’ve shot quite a few short films with Sony’s Venice camera over the last 18 months, and the footage from Venice is easy to work with; it’s hard to get it wrong with Venice. The FX9 is very, very similar. Straight out of the camera skin tones look good and contain lots of subtle texture and detail. When you use the s709 LUT, highlights roll off in a pleasing, smooth manner. Given a choice between an FS7 and an FX9 it will be an easy decision, because the FX9 material is easier to work with in the grading suite. Take footage from the FX9 into ACES and it looks beautiful without any LUT or other correction.

One thing that really helps this is the ability to dial in any white balance you want, along with a tint shift, in the CineEI mode.

So far I have only been working with the class 300 XAVC files from the FX9. As many of my readers will know, I am a big fan of 16 bit raw, so I am very excited about what this camera will be capable of delivering once the 16 bit raw output is implemented. There is a bit of a question over whether you can really call 6K down sampled to 4K raw “raw”. But I think that provided it is still essentially the same data as produced by the sensor, just re-scaled, then yes, it is a kind of raw, and it should bring amazing post production flexibility, provided it can be recorded in such a way that the file sizes remain manageable. Atomos have already announced that their Neon image processor is capable of handling the 16 bit raw at 4K and 120fps. So my guess would be that by the time the firmware update needed to enable the raw becomes available, there will be an affordable Shogun Neon recorder. Some have asked: what about using the R7 and X-OCN? That would be cool, but how many FX9 buyers want to spend $14K for an R7 with a couple of cards and a reader? Oh – hurry up Sony and Atomos – I want 16 bit, Full Frame, 120fps 4K now, not next year 🙂

One small downside of reading out almost twice as many pixels in the 6K full frame mode rather than 4K is a bit more rolling shutter. Of course nobody likes or wants this, me included. It isn’t terrible, and the camera is still very usable in 6K, but you should be aware of it for any rapid pans or large amounts of horizontal motion. In the 4K super 35mm mode the rolling shutter is similar to the current FS7/F5 etc. The other restriction is the upper limit of 30fps in the 6K full frame mode. To address this, a later firmware update will add a 5K mode which uses 83% of the full frame sensor, halfway between super 35mm and full frame. This will go up to 60fps when recording to UHD. I do like the fact that you can use the full frame readout for HD at up to 120fps. There is some pixel binning in S&Q, but it looks like it’s being done really well and I’ve only noticed artifacts on very bright specular highlights (and this is on pre-production cameras). More testing will be needed to see just how good this is. It certainly isn’t grainy like the FS7 is in S&Q.

Once again we see Sony’s variable ND filter system, the biggest variable filter they have made. When the ND filter isn’t engaged, an extra optical flat glass is now included between the lens and sensor to maintain a completely constant back focus distance. Because the sensor is attached to the variable ND filter assembly and fitted with a heatsink to maintain a constant temperature, it isn’t possible to use IBIS: the assembly would be too heavy to move fast enough to compensate for motion. Instead, the FX9 has a metadata system that uses the camera’s built-in motion sensors to record the camera’s motion. You will then be able to use this metadata to stabilise your footage in post production. This will work in Catalyst Browse from the day the camera goes on sale, and Sony are working with Adobe and others to have plugins available for the major NLEs soon after.

Even though the post production stabilisation (which will be variable) needs to zoom into the image a bit, it’s again worth noting that because the full frame mode results in a recording with true 4K resolution, higher than say an F5 or FS7, even after the zoom in the image still has more resolution and better detail than most 4K bayer cameras can deliver.

Talking of Catalyst Browse, there will also be a new version of Content Browser Mobile for the FX9 that will allow you to control the camera remotely over wifi. Better still, the camera will provide a live video feed over the wifi link for monitoring on your phone or tablet. The latency isn’t terrible, around 4 frames. The camera body has wifi built in, so there’s no more need to add a dongle. If you want to stream over 4G or 5G, the new extension unit has a pair of USB ports for two mobile network dongles.

Ergonomically there have been some big improvements over the FS7. There are now many different ways to control the menu system (which is now laid out more like the Venice than the FS7). There is the joystick on the handgrip (which is now shaped more like the FS5 handgrip). There is a set of up/down, left/right, select push buttons on the side of the camera, as well as my favourite: a big jog dial knob that protrudes slightly from the front of the camera (ENG cameras used to have a knob like this and it was great on them). It’s just about big enough to be operated when wearing gloves.

Another improvement is the use of illuminated buttons for the buttons that select the various auto modes. When you select an auto function, such as auto gain, a light comes on to let you know it’s set to auto. Furthermore, you now have to press the button for about 3 seconds to get it to switch into auto. This should help prevent accidental button bumps from putting the camera into a mode you don’t want to be in.
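The logic of that long-press guard is worth spelling out, because it’s a nice bit of design. A minimal sketch, with assumed names and an assumed exact threshold (Sony only say “about 3 seconds”):

```python
# Illustrative sketch of the long-press guard: auto only engages on a
# deliberate hold, so a brief accidental bump of the button does nothing.
HOLD_THRESHOLD_S = 3.0  # assumed value; Sony say "about 3 seconds"

def should_toggle_auto(press_duration_s):
    """Return True only when the button was held long enough."""
    return press_duration_s >= HOLD_THRESHOLD_S

print(should_toggle_auto(0.2))  # False - accidental bump, ignored
print(should_toggle_auto(3.5))  # True  - deliberate hold, auto engages
```

The asymmetry is the point: engaging auto requires intent, while a stray knock while the camera is on your shoulder changes nothing.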

There’s no shortage of user assignable buttons on the FX9. Perhaps too many? The camera really is covered in buttons! But that does mean you can do some nice things like assign the high/low ISO range change to one of the buttons to switch instantly between base ISO’s.

The great news for those that shoot using CineEI and log is that LUTs are available in S&Q when recording UHD at up to 60fps. The bad news is that above 60fps, when you have to record in HD, you can’t have a monitor-only LUT without also baking it in. However, all is not lost, because the camera has viewfinder gamma assist. This applies a vanilla Rec-709 LUT to the viewfinder only, and it doesn’t change if you change the EI, but at least you don’t have to look at the S-Log image; you can still look at a correct 709 image. Given that, because of its much lower noise levels, I don’t feel this camera needs the exposure offsets that the FS7 needs, this is not too bad a compromise. Most of the time you will be able to shoot at 800ISO/800EI or 4000ISO/4000EI, so the viewfinder gamma assist LUT will do the job: look in the viewfinder, and if it looks right, it probably is right. Oh – and in addition, the S&Q HFR is much less noisy than the FS7’s.

Another thing that will make S-Log3 shooters very happy is the ability to change the white balance beyond the 3 built in presets. You can dial in whatever white balance you want including a tint adjustment, just like Venice. You can also use a white or grey card to automatically set the white balance when shooting log. So getting rid of a green cast from dodgy LED lights will be much easier.

Then there’s the autofocus.

Damn you Sony – now I’m going to have to buy some new lenses! I have to admit, I have always looked down on autofocus as an inferior way to focus a video camera, largely because I have never had a camera where the autofocus worked as well as I would like. Sony’s little PXW-Z90 does have a very impressive autofocus system, but with a smaller sensor that is easier to do, and Canon have pretty good autofocus on some of their cameras too. But the FX9 has me rethinking how I will approach focus for many shoots. It really is incredibly impressive. It is a hybrid phase and contrast based system with phase detection sites across almost the entire sensor, designed specifically for video. It has eye detection and face recognition, so you can tell it exactly which face in a crowd to focus on. It doesn’t hunt; it just locks on and holds focus. It’s also fully programmable, so you can adjust the hold and release sensitivity as well as the focus transition speed. This lets you make the autofocus look like it’s being done by a human. Often autofocus is too fast, too snappy. You can have that too if you want, but the ability to slow it down a touch makes it feel much more natural.

For so many applications this amazing autofocus system is going to be a godsend. Gimbal and Steadicam users will benefit for a start, as will anyone shooting fast moving people. I can see it being a huge help for me when shooting up in the arctic with bulky gloves and mittens or a fogged-up and frozen viewfinder! I can see the FX9 finding a place on big budget movie shoots for shots where conventional focus methods would otherwise prove challenging. But you will need Sony E-mount lenses to get the very best out of it, which is in part why Sony are also releasing new E-mount cine style lenses.

One more note: The camera does have genlock and timecode in/out – on the camera body. You don’t need the extension unit for TC in and out.

In case you haven’t realised by now, I am quite excited by the FX9. It ticks a lot of boxes. You get a state of the art full frame sensor with 15 stops of dynamic range. You have dual base ISOs of 800 and 4000. You get Venice-like color science, so the images look beautiful right out of the camera. Less noise means no need to offset your exposure, so you can record more highlight information and shooting is easier. You retain E-mount versatility: once again you can put just about any lens you want on the camera via low cost adapters. And now, in addition, you get an amazing, truly useful autofocus system.

No change on the codec front or media, so that keeps life simple. But 16 bit raw is coming in the future, for what should be amazing image quality and post production flexibility (you will need the new XDCA-FX9 extension unit for raw). I don’t need to buy new base plates, as existing FS7 plates will fit, as will most top plates. There is a small change on the top of the camera: every opening now has water and dust sealing gaskets around it – the FX9 is very well sealed against bad weather and dust – so some FS7 top plates may not fit around the hole where the handle plugs in.

It takes the same BP-U batteries, so I don’t need to buy different batteries. But it does use more power, around twice as much as the FS7 – the penalty you pay for a bigger sensor with more pixels and for the extra processing power needed for LUTs in more modes.

The viewfinder is much improved. It still uses the same square rods as the FS7 MkII, which won’t be to everyone’s taste, but the display is now sharper and that makes focussing much easier. The new screen is 720P (the FS7’s is 540P, I think), so it’s already clearer and sharper. On top of that the peaking has been improved, and better still, the focus magnification is now very good. Zoom in and it doesn’t go all blocky and muddy; it remains clear and sharp.

There isn’t much not to like about the FX9 when you consider the price. If my clients could afford it I would love to have a Venice. But the reality is few of them can afford Venice. Besides, Venice is big and heavy. For my travels and adventures I think the FX9 is going to be a perfect fit and I can’t wait to shoot some more with one.


Atomos release new Neon range of HDR monitors with Dolby Vision.

This is BIG. Atomos have just announced a completely new range of monitors for HDR production. From 17″ to 55″, these new monitors will complement their Atomos Sumo, Shogun, Shinobi and Ninja products to provide a complete suite of HDR monitors.

The new Neon displays are Dolby certified and for me this is particularly interesting and perfect timing as I am just about to do the post production on a couple of Dolby certified HDR productions.

I’m just about to leave for the Cinegear show over at Paramount Studios so I don’t have time to list all the amazing features here. So follow the link below to get the full low down on these 10 bit, million:1 contrast monitors.

https://www.atomos.com/neon

Venice to get even higher frame rates in V5 firmware.


Last night I attended the official opening of Sony’s new Digital Media Production center in LA. This is a very nice facility where Sony can show end users how to get the most from full end to end digital production, from camera to display. And a most impressive display it is as the facility has a huge permanent 26ft HDR C-Led equipped cinema.

One of the key announcements made at the event was details of what will be the 5th major firmware update for the Venice cameras. Due January 2020 version 5 will extend the cameras high frame rate capabilities as well as adding or improving on a number of existing options:

- HFR Capabilities – up to 90fps at 6K 2.39:1 and 72fps at 6K 17:9.

- Apple ProRes 4444 – record HD video at high picture quality on SxS PRO+ cards, without Sony’s AXS-R7 recorder. This is especially effective for HD VFX workflows.

- 180 Degree Rotation Monitor Out – flip and flop images via viewfinder and SDI.

- High Resolution Magnification via HD Monitor Out – the existing advanced viewfinder technology for clearer magnification is now extended to the HD monitor output.

- Improved User Marker Settings – menu updates for easier selection of frame lines on the viewfinder.

90fps in 6K means that a full 3x slow down will be possible for 6K 24fps projects. In addition to the above, Sony now have a new ACES IDT for Venice, so VENICE has officially joined the F65, F5 and F55 in earning the ACES logo, meeting the specifications laid out in the ACES Product Partner Logo Program. I will post more details of this, and how to get hold of the IDT, over the weekend.
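The slow-motion arithmetic behind those numbers is simply the capture frame rate divided by the project frame rate:

```python
# Slow-down factor = capture frame rate / project frame rate.
def slowdown_factor(capture_fps, project_fps):
    return capture_fps / project_fps

print(slowdown_factor(72, 24))  # 3.0  - 6K 17:9 at 72fps in a 24fps project
print(slowdown_factor(90, 24))  # 3.75 - 6K 2.39:1 at 90fps in a 24fps project
```

So 72fps gives exactly the 3x slow down for 24fps work, and the 90fps 2.39:1 mode goes a little further still, to 3.75x.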

Do I need to worry about 8K?

This is a question that gets asked a lot, and if you are thinking about buying a new camera it is one that you need to think about. But in reality I don’t think 8K is a concern for most of us.

I recently had a conversation with a representative of a well known TV manufacturer. We discussed 8K and 8K TVs. An interesting conclusion was that this particular manufacturer wasn’t really expecting there to be a lot of 8K content any time soon. The reason for selling 8K TVs is the obvious one: in the consumer’s eyes, 8K is a bigger number than 4K, so it must be better. It’s an easy sell for the TV manufacturers, even though it’s arguable that most viewers will never be able to tell the difference between an 8K TV and a 4K one (let’s face it, most struggle to tell the difference between 4K and HD).

Instead of expecting 8K content this particular TV manufacturer will be focussing on high quality internal upscaling of 4K content to deliver an enhanced viewing experience.

It’s also been shown time and time again that contrast and Dynamic Range trump resolution for most viewers. This was one of the key reasons why it took a very long time for electronic film production to really get to the point where it could match film. A big part of the increase in DR for video cameras came from the move from the traditional 2/3″ video sensor to much bigger super 35mm sensors with bigger pixels. Big pixels are one of the keys to good dynamic range and the laws of physics that govern this are not likely to change any time soon.

This is part of the reason why Arri have stuck with the same sensor for so long. They know that reducing the pixel size to fit more into the same space will make it hard to maintain the excellent DR their cameras are known for. This is in part why Arri have chosen to increase the sensor size by combining sensors. It’s at least in part why Red and Sony have chosen to increase the size of their sensors beyond super 35mm as they increase resolution. The pixels on the Venice sensor are around the same size as most 4K s35 cameras. 6K was chosen as the maximum resolution because that allows this same pixel size to be used, no DR compromise, but it necessitates a full frame sensor and the use of high quality full frame lenses.
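You can see the pixel-size argument in the numbers. A quick sketch, using assumed typical sensor widths (roughly 36mm for full frame, roughly 25mm for super 35mm – illustrative figures, not official specs for any particular camera):

```python
# Pixel pitch = sensor width / horizontal pixel count.
# Sensor widths below are assumed typical values for illustration.
def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    return sensor_width_mm / horizontal_pixels * 1000

print(round(pixel_pitch_um(36.0, 6048), 1))  # full frame 6K: ~6.0 um
print(round(pixel_pitch_um(24.9, 4096), 1))  # super 35mm 4K: ~6.1 um
```

The two pitches land within a whisker of each other, which is exactly the trade-off described above: to add resolution without shrinking the pixels (and losing dynamic range), you have no choice but to grow the sensor.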

So, if we want 8K with great DR it forces us to use ever bigger sensors. Yes, you will get a super shallow DoF and this may be seen as an advantage for some productions. But what’s the point of a move to higher and higher resolutions if more and more of the image is out of focus due to a very shallow DoF? Getting good, pin sharp focus with ever bigger sensors is going to be a challenge unless we also dramatically increase light levels. This goes against the modern trend for lower illumination levels. Only last week I was shooting a short film with a Venice and it was a struggle to balance the amount of the subject that was in focus with light levels, especially at longer focal lengths. I don’t like shots of people where one eye is in focus but the other clearly not, it looks odd, which eye should you choose as the in-focus eye?

And what about real world textures? How many of the things that we shoot really contain details and textures beyond 4K? And do we really want to see every pore, wrinkle and blemish on our actors’ faces or sets? Too much resolution on a big screen creates a form of hyper reality. We start to see things we would never normally see as the image and its textures become magnified and expanded. This might be great for a science documentary but is distracting for a romantic drama.

If resolution really, really was king then every town would have an IMAX theater and we would all be shooting IMAX. 

Before 8K becomes normal and mainstream, I believe HDR will be the next step. Consumers can see the benefits of HDR much more readily than those of 8K. Right now 4K is not really the norm; HD is. There is a large amount of 4K acquisition, but it’s not mainstream, and the amount of HDR content being produced is still small. So first we need to see 4K become normal. When we get to the point where, whenever a client rings, the automatic assumption is that it’s a 4K shoot and we don’t even bother to ask, that’s when we can consider 4K normal – and that’s not the case for most of us just yet. Following on from that, the next step (IMHO) will be when the final output of every project is 4K HDR. I see that as being at least a couple of years away.

After all that, then we might see a push for more 8K. At some point in the not too distant future 8K TVs will be no more expensive than 4K ones. But I also believe that in-TV upscaling will be normal, and possibly the preferred mode due to bandwidth restrictions. Less compressed 4K upscaled to 8K may well look just as good as, if not better than, an 8K signal that needs more compression.

8K may not become “normal” for a very long time. We have been able to shoot 4K easily for 6 years or more, but it’s only just becoming normal, and Arri still have a tremendous following who choose to shoot at less than 4K for artistic reasons. The majority of cinemas, with their big screens, are still only 2K, yet audiences rarely complain of a lack of resolution. More and more content is being viewed on small phone or tablet screens where 4K is often wasted. It’s a story of diminishing returns: HD to 4K is a much bigger visual step than 4K to 8K, and we still have to factor in how we maintain great DR.

So for the next few years at least, for the majority of us, I don’t believe 8K is actually desirable. Many struggle with 4K workflows and the extra data and processing power needed compared to HD, and an 8K frame is 4 times the size of a 4K frame. Some will argue that shooting in 8K has many benefits. This can be true if your main goal is resolution, but in reality it’s only very post production intensive projects, where extensive re-framing, re-touching etc is needed, that will benefit from shooting in 8K right now. It’s hard to get accurate numbers, but the majority of Hollywood movies still use a 2K digital intermediate and only around 20% of cinemas can actually project at more than 2K.
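The “4 times the size” claim is easy to verify, and it’s worth seeing what it does to uncompressed data rates (the 10-bit 4:2:2 figure of 20 bits per pixel is an assumed illustrative average, not any particular codec):

```python
# An 8K frame really is 4x the pixels of a 4K (UHD) frame.
uhd_pixels = 3840 * 2160        # 8,294,400
eight_k_pixels = 7680 * 4320    # 33,177,600
print(eight_k_pixels // uhd_pixels)  # 4

# Uncompressed 10-bit 4:2:2 (~20 bits/pixel on average) at 25fps,
# just to illustrate the scale of the data involved:
def uncompressed_gbps(pixels, fps, bits_per_pixel=20):
    return pixels * fps * bits_per_pixel / 1e9

print(round(uncompressed_gbps(uhd_pixels, 25), 1))      # 4.1 Gb/s
print(round(uncompressed_gbps(eight_k_pixels, 25), 1))  # 16.6 Gb/s
```

Quadrupling the pixel count quadruples everything downstream of the sensor: storage, transfer times and render times.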

So in conclusion, in my humble opinion at least: 8K is more about the sales pitch than actual practical use. People will use it just because they can and it sounds impressive, but for most of us right now it simply isn’t necessary and may well be a step too far.

Beware multiple power supplies!!

From time to time someone will pop up on a forum or user group with tales of fried SDI boards, dead monitors or dead audio devices. Often the reason for the death of these units seems obscure. One day it all works fine, the next time the monitor is plugged in it stops working.

A common cause of these types of issue is the use of individual power supplies for each device. Most modern power supplies use a technology called “switch mode”. Most “wall wart” power supplies are switch mode, and computers use them too; they are probably the most common type of power supply in use today.

The problem with these power supplies is that the voltage they produce is not tied to a common earth or ground connection. A 12 volt power supply may have an output that measures 12 volts across its positive and negative terminals, which is great, but the negative terminal might be floating many volts above ground. Used singly this is not normally a problem. But if you use a couple of different power supplies with negative terminals floating at different voltages and then connect them together, current will flow from one to the other as they establish a common base voltage.
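To give a feel for why the rail floats at all: mains voltage leaks through the supply’s internal Y-capacitors, so the isolated output can sit at roughly half of mains potential. A rough sketch with assumed, illustrative component values (not measurements of any real supply):

```python
import math

# Why a floating supply rail can sit well above ground: mains leaks
# through the supply's Y-capacitors. All values here are assumed,
# illustrative figures, not measurements of a real power supply.
def capacitive_impedance_ohms(freq_hz, capacitance_farads):
    return 1 / (2 * math.pi * freq_hz * capacitance_farads)

xc = capacitive_impedance_ohms(50, 2.2e-9)  # ~1.45 Mohm at 50 Hz mains
# A rail floating at ~115 V (about half of 230 V mains) drives a small
# but persistent leakage current through whatever cable joins two units:
leakage_ua = 115 / xc * 1e6
print(round(leakage_ua))  # ~79 uA
```

The steady-state leakage is tiny, which is why a single floating supply feels harmless; the damage comes from the transient as two rails at different potentials suddenly equalise through a delicate signal pin.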

As an example if you have a monitor powered by one power supply and a camera powered by another, when you connect the monitor to the camera current may flow down the SDI or HDMI cable from one power supply to the other causing damage to the chips that process the SDI/HDMI signals.

Even if there is no damage this current can lead to audio hum or other electrical noise.

How can you prevent this?

First, use only high quality power supplies. Wherever possible, run everything off a single supply. Powering the camera from one high capacity power supply and then feeding any connected accessories via the D-Tap or Hirose outputs on the camera is good practice, and powering everything from batteries also helps. If you must use separate power supplies, connect everything together before connecting anything to the mains and before turning anything on. This should ensure that any current runs through the shield and ground paths in the cables rather than down the delicate signal pins as you connect things together.

New Atomos Shogun 7 with Dolby Vision Out and 15 stop screen.

So this landed in my inbox today. Atomos are releasing what on paper at least is a truly remarkable new recorder and monitor, the Shogun 7.

For some time now the Atomos Inferno has been my go-to monitor. It’s just so flexible and the HDR screen is wonderful. But the new Shogun 7 looks to be quite a big upgrade.


The screen is claimed to be able to display an astounding 1,000,000:1 contrast ratio and 15+ stops of dynamic range. That means you will be able to shoot in log with almost any camera and see the log output 1:1. No need to artificially reduce the display range, no more flat looking log or raw, just a real look at what you are actually shooting.

I’m off to NAB at the weekend and I will be helping out on the Atomos booth, so I will be able to take a good look at the Shogun 7. If it comes anywhere near to the specs in the press release it will be a must-have piece of kit whether you shoot on an FS5 or Venice!

Here’s the press release:

Melbourne, Vic – 4 April, 2019:

The new Atomos Shogun 7 is the ultimate 7-inch HDR monitor, recorder and switcher. Precision-engineered for the film and video professional, it uses the very latest video technologies available. Shogun 7 features a truly ground-breaking HDR screen – the best of any production monitor in the world. See perfection on the all-new 1500nit daylight-viewable, 1920×1200 panel with an astounding 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. Shogun 7 will truly revolutionize the on-camera monitoring game.

Bringing the real world to your monitor

With Shogun 7 blacks and colors are rich and deep. Images appear to ‘pop’ with added dimensionality and detail. The incredible Atomos screen uses a unique combination of advanced LED and LCD technologies which together offer deeper, better blacks than rival OLED screens, but with the much higher brightness and vivid color performance of top-end LCDs. Objects appear more lifelike than ever, with complex textures and gradations beautifully revealed. In short, Shogun 7 offers the most detailed window into your image, truly changing the way you create visually.

The Best HDR just got better

A new 360 zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color, yielding jaw-dropping results. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered. For the first time you can enjoy on-screen the same dynamic range, palette of colors and shades that your camera sensor sees. 

On-set HDR redefined with real-time Dolby Vision HDR output

Atomos and Dolby have teamed up to create Dolby Vision HDR “live” – the ultimate tool to see HDR live on-set and carry your creative intent from the camera through into HDR post production. Dolby have optimised their target display HDR processing algorithm, which Atomos now have running inside the Shogun 7. It brings real-time, automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to the Dolby Vision TV and, magically, automatically, AtomOS 10 analyses the image, queries the TV, and applies the right color and brightness profiles for the maximum HDR experience on the display. Enjoy complete confidence that your camera’s HDR image is optimally set up and looks just the way you wanted it. It is an invaluable HDR on-set reference check for the DP, director, creatives and clients – making it a completely flexible master recording and production station.

“We set out to design the most incredibly high contrast and detailed display possible, and when it came off the production line the Shogun 7 exceeded even our expectations. This is why we call it a screen with ‘Unbelievable HDR’. With multi-camera switching, we know that this will be the most powerful tool we’ve ever made for our customers to tell their stories”, said Jeromy Young, CEO of Atomos.


Ultimate recording

Shogun 7 records the best possible images up to 5.7kp30, 4kp120 or 2kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to reliable AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus a HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW in up to 5.7kp30, 4kp120 DCI/UHD and 2kp240 DCI/HD, depending on the camera’s capabilities. 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most Quad Link, Dual Link or Single Link SDI cinema cameras. With Shogun 7 every pixel is perfectly preserved with data rates of up to 1.8Gb/s.

Monitor and record professional XLR audio

Shogun 7 eliminates the need for a separate audio recorder. Add 48V stereo mics via an optional balanced XLR breakout cable. Select Mic or Line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. You can monitor the selected stereo track via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

AtomOS 10, touchscreen control and refined body

Atomos continues to refine the elegant and intuitive AtomOS operating system. Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the award-winning Ninja V. Icons and colors are designed to ensure that the operator can concentrate on the image when they need to. The completely new body of Shogun 7 has a sleek Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools that users have come to expect from Atomos, including Waveform, Vectorscope, False Color, Zebras, RGB parade, Focus peaking, Pixel-to-pixel magnification, Audio level meters and Blue only for noise analysis. 

Portable multi-cam live switching and recording for Shogun 7 and Sumo 19

Shogun 7 is also the ultimate portable touch-screen controlled multi-camera switcher with asynchronous quad-ISO recording. Switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update. Sumo19 and Shogun 7 are the ideal devices to streamline your multi-camera live productions. 

Enjoy the freedom of asynchronous switching, plus use genlock in and out to connect to existing AV infrastructure. Once the recording is over, just import the xml file into your NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed as well as a second track that switches between the digital audio inputs to match the switched feed. This amazing functionality makes Shogun 7 and Sumo19 the most flexible in-the-field switcher-recorder-monitors available.

Shogun 7 will be available in June 2019 priced at $US 1499/ €1499 plus local taxes from authorized Atomos dealers.

Shooting Anamorphic with the Fujinon MK’s and SLR Magic 65 Anamorphot.

There is something very special about the way anamorphic images look, something that’s not easy to replicate in post production. Sure you can shoot in 16:9 or 17:9 and crop down to the typical 2.35:1 aspect ratio and sure you can add some extra anamorphic style flares in post. But what is much more difficult to replicate is all the other distortions and the oval bokeh that are typical of an anamorphic lens.

Anamorphic lenses work by optically distorting the captured image, squeezing it horizontally so that a wider field of view fits onto the frame. The amount of squeeze that you will want to use depends on the aspect ratio of the sensor or film frame. With full frame 35mm cameras or cameras with a 4:3 aspect ratio sensor or gate you would normally use an anamorphic lens that squeezes the image by 2 times. Most anamorphic cinema lenses are 2x anamorphic, that is the image is squeezed 2x horizontally. You can use these on cameras with a 16:9 or 17:9 super35mm sensor, but because a Super35 sensor already has a wide aspect ratio a 2x squeeze is much more than you need for the typical cinema style final aspect ratio of 2.39:1.

For most Super35mm cameras it is normally better to use a lens with a 1.33x squeeze. 1.33x squeeze on Super35 results in a final aspect ratio close to the classic cinema aspect ratio of 2.39:1.
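The arithmetic behind this is simple: the final projected aspect ratio is just the sensor's aspect ratio multiplied by the squeeze factor. A quick sketch (the sensor ratios are nominal, illustrative values):

```python
# Final aspect ratio = sensor aspect ratio x anamorphic squeeze factor.

def final_aspect(sensor_w, sensor_h, squeeze):
    return (sensor_w / sensor_h) * squeeze

# A 4:3 gate with a 2x squeeze gives 2.67:1 (often then cropped for delivery).
print(round(final_aspect(4, 3, 2.0), 2))    # 2.67
# A 16:9 Super35 sensor with a 1.33x squeeze lands very close to 2.39:1.
print(round(final_aspect(16, 9, 1.33), 2))  # 2.36
```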

Traditionally anamorphic lenses have been very expensive. The complex shape of the anamorphic elements makes them much harder to make than normal spherical elements. However another option is to use an anamorphic adapter on the front of an existing lens to turn it into an anamorphic lens. SLR Magic, who specialise in niche lenses and adapters, have had a 50mm diameter 1.33x anamorphic adapter available for some time. I’ve used this with the FS7 and other cameras in the past, but the 50mm diameter of the adapter limits the range of lenses it can be used with (there is also a 50mm 2x anamorphot for full frame 4:3 aspect ratio sensors from SLR Magic).

Now SLR Magic have a new larger 65mm adapter. The 1.33-65 Anamorphot has a much larger lens element, so it can be used with a much wider range of lenses. In addition it has a calibrated focus scale on its focus ring. One thing to be aware of with adapters like these is that you have to focus both the adapter and the lens you are using it on. For simple shoots this isn’t too much of a problem. But if you are moving the camera a lot or the subject is moving around a lot, trying to focus both lenses together can be a challenge.

The SLR Magic 1.33-65 Anamorphot anamorphic adapter.

Enter the PD Movie Dual Channel follow focus.

The PD Movie Dual follow focus is a motorised follow focus system that can control 2 focus motors at the same time. You can get both wired and wireless versions depending on your needs and budget. For the anamorphic shoot I had the wired version (I do personally own a single channel PD Movie wireless follow focus). Setup is quick and easy, you simply attach the motors to your rods, position the gears so they engage with the gear rings on the lens and the anamorphot and press a button to calibrate each motor. It takes just a few moments and then you are ready to go. Now when you turn the PD Movie focus control wheel both the taking lens and the anamorphot focus together.

I used the anamorphot on both the Fujinon MK18-55mm and the MK50-135mm. It works well with both lenses but you can’t use focal lengths wider than around 35mm without the adapter causing some vignetting. So on the 18-55 you can only really use around 35 to 55mm. I would note that the adapter does act a little like a wide angle converter, so even at 35mm the field of view is pretty wide. I certainly didn’t feel that I was only ever shooting at long focal lengths.

The full rig. PMW-F5 with R5 raw recorder. Fujinon MK 18-55 lens, SLR Magic Anamorphot and PD Movie dual focus system.

Like a lot of lens adapters there are some things to consider. You are putting a lot of extra glass in front of your main lens, so it will need some support. SLR Magic do a nice support bracket for 15mm rods and this is actually essential as it stops the adapter from rotating and keeps it correctly oriented so that your anamorphic squeeze remains horizontal at all times. Also if you try to use too large an aperture the adapter will soften the image. I found that it worked best between f8 and f11, but it was possible to shoot at f5.6. If you go wider than this you get quite a lot of image softening away from the very center of the frame. This might work for some projects where you really want to draw the viewer to the center of the frame or if you want a very stylised look, but it didn’t suit this particular project.

The out of focus bokeh has a distinct anamorphic shape, look and feel. As you pull focus the shape of the bokeh changes horizontally; this is one of the key things that makes anamorphic content look different to spherical. As the adapter only squeezes by 1.33x the effect is not as pronounced as it would be if you shot with a 2x anamorphic. Of course the other thing most people notice about anamorphic images is lens flares that streak horizontally across the image. Intense light sources just off frame would produce blue/purple streaks across the image, and if you introduce very small point light sources into the shot you will get a similar horizontal flare. If flares are your thing it works best if you have a very dark background. Overall the lens didn’t flare excessively, so my shots are not full of flares like a JJ Abrams movie. But when it did flare the effect is very pleasing. Watch the video linked above and judge for yourself.

Monitoring and De-Squeeze.

When you shoot anamorphic you normally record the horizontally squashed image and then in post production you “de-squeeze” it, compressing it vertically to produce a letterbox, wide screen style image. You can shoot anamorphic without de-squeezing the image provided you don’t mind looking at images that are horizontally squashed in your viewfinder or on your monitor. But these days plenty of monitors and viewfinders can de-squeeze the anamorphic image so that you can view it with the correct aspect ratio. The Glass Hub film was shot using a Sony PMW-F5 recording to the R5 raw recorder. The PMW-F5 has the ability to de-squeeze the image for the viewfinder built in. But I also used an Atomos Shogun Inferno to monitor as I was going to be producing HDR versions of the film. The Shogun Inferno has both 2x and 1.33x de-squeeze built in, so I was able to take the distorted S-Log3 output from the camera, convert it to an HDR PQ image and de-squeeze it all at the same time in the Inferno. This made monitoring really easy and effective.
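If you want to work out the de-squeezed frame size for yourself, it is just a matter of dividing the recorded height by the squeeze factor (or, equivalently, stretching the width). A quick sketch with illustrative HD numbers:

```python
# De-squeeze by vertical compression, as described above: divide the
# recorded height by the squeeze factor to restore the true aspect ratio.

def desqueeze_vertical(width, height, squeeze):
    return width, round(height / squeeze)

# A 1.33x squeezed 1920x1080 recording de-squeezes to roughly 1920x812,
# which is about a 2.36:1 letterbox frame.
print(desqueeze_vertical(1920, 1080, 1.33))  # (1920, 812)
```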

I used DaVinci Resolve for the post production. In the past I might have done my editing in Adobe Premiere and the grading in Resolve. But Resolve is now a very capable edit package, so I completed the project entirely in Resolve. I used the ACES colour managed workflow as ACES means I don’t need to worry about LUT’s and in addition ACES adds a really nice film like highlight roll off to the output. If you have never tried a colour managed workflow for log or raw material you really should!

The SLR Magic 1.33-65 Anamorphot paired with the Fujinon MK lenses provides a relatively low cost entry into the world of anamorphic shooting. You can shoot anywhere from around 30-35mm to 135mm. The PD Movie dual motor focus system means that there is no need to try to use both hands to focus both the anamorphot and the lens together. The anamorphot + lens combination behaves much more like a quality dedicated anamorphic zoom lens, but at a fraction of the cost. While I wouldn’t use it to shoot everything, the Anamorphot is a really useful tool for those times you want something different.

Picture Profile Settings For The PXW-Z280

Sony’s new PXW-Z280 is a great compact camcorder. Having now spent even more time with one I have been looking at how to best optimise it.

It should be remembered that this is a 4K camcorder, so Sony are packing a lot of pixels onto the 3 sensors. As a result the camera does exhibit a little bit of noise at 0dB gain. No camera is noise free, and we have become spoilt by super 35mm cameras with big sensors, big pixels and very low noise levels.

Use -3dB Gain to reduce noise.

So I did a little bit of work with various settings in the camera to see if I could minimise the noise. The first thing was to test the camera at -3dB gain. On many cameras using negative gain will reduce the camera's dynamic range due to a reduction in the highlight recording range. But on the Z280 using -3dB of gain does not seem to adversely affect the dynamic range, and it does significantly reduce the noise. I found the noise reduction to be much larger than I would normally expect from a -3dB gain reduction. So my advice is – where possible use -3dB gain. The Z280 is pretty sensitive anyway, especially in HD, so -3dB (which is only half a stop) is not going to cause problems for most shoots.

I feel that the camera's standard detail corrections result in some over sharpening of the image. This is particularly noticeable in HD where there is some ringing (over correction that gives a black or white overshoot) on high contrast edges. Dialling back the detail levels just a little helps produce a more natural looking image. It will appear a touch less “sharp” but in my opinion the images look a bit more natural, less processed, and noise is very slightly reduced. Below are my suggested detail settings:

Z280 Detail Settings For HD.

Detail -12, Crispening -15, Frequency +18.

Z280 Detail Settings For UHD(QFHD).

Detail -5, Crispening -11, Frequency +16

White Clip and Knee.

In the SDR mode the Z280 has a range of standard Rec-709 type gammas as well as Hypergammas 1 – 4. Like many modern digital camcorders, by default, all the SDR gammas except HG1 and HG2 record at up to 109%. This might cause problems for those going direct to air for broadcast TV. For direct to air applications you may need to consider changing the white clip setting. The default is 109% but for direct to air broadcast you should change this to 100%.
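To see why that 109% figure matters, here is where those percentage levels land in 10-bit narrow ("video") range code values, assuming the usual mapping of black to code 64 and 100% white to code 940 (my illustration, the camera menus only deal in percentages):

```python
# Map a video level in percent to a 10-bit narrow-range code value,
# assuming black = code 64 and 100% white = code 940.

def percent_to_code(percent):
    return round(64 + (940 - 64) * percent / 100)

print(percent_to_code(100))  # 940  - a broadcast-safe white clip
print(percent_to_code(109))  # 1019 - the camera's default 109% clip point
```

Anything recorded between codes 940 and 1019 is what a broadcaster working to a 100% limit may reject, which is why the white clip setting matters for direct to air work.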

If working with the STD5 gamma (Rec-709) and a 100% clip point you will also want to modify the knee settings. You can either use the default auto knee or turn the auto knee off and change the knee point to 87 and slope to +25 to bring the highlights down to fit better with a 100% clip point. HG1 and HG2 are broadcast safe gammas, so these are another option for direct to air.

Hypergamma.

As well as Rec-709 gamma the camera has Sony’s Hypergammas. If using the Hypergammas it should be noted that the optimum exposure will result in a slightly darker image than you would have with normal 709. As a guide you should have skin tones around 60% and a white card would be around 75% for the best results. Exposing skin tones at 70% or brighter can result in flat looking faces with reduced texture and detail, so watch your skin tones when shooting with the Hypergammas.

The Z280 has four Hypergammas.

HG1 3250G36. This takes a brightness range the equivalent to 325% and compresses it down to 100% (clips at 100%). Middle grey would be exposed at 36% (G36). This gives a nice reasonably contrasty image with bright mid range and a moderate extension of the highlight range.

HG2 4600G30. Takes a brightness range of 460% and compresses down to 100% (clips at 100%). Middle grey is exposed at 30% (G30). This has a darker mid range than HG1 but further extends the highlights. Generally HG1 works better for less challenging scenes or darker scenes while HG2 works for high contrast, bright scenes. Both HG1 and HG2 are broadcast safe.

HG3 3259G40. This takes a brightness range equivalent to 325% and compresses it down to 109% (clips at 109%). Middle grey would be exposed at 40% (G40). This gives a nice contrasty image with a reasonably bright mid range and a moderate extension of the highlight range.

HG4 4609G33. Takes a brightness range of 460% and compresses down to 109% (clips at 109%). Middle grey is exposed at 33% (G33). This has a darker mid range than HG3 but further extends the highlights. Generally HG3 works better for less challenging scenes or darker scenes while HG4 works for high contrast, bright scenes.
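As an aside, the Hypergamma names encode all of these numbers. As far as I can tell the pattern is: the first three digits are the input range, the fourth digit is 0 for a 100% clip or 9 for a 109% clip, and the number after the G is where middle grey sits. A small parser sketch (the decoding rule is my reading of the naming pattern, not an official Sony definition):

```python
# Decode a Sony Hypergamma name such as "4609G33":
# 460% input range, 109% white clip, middle grey at 33%.
# The digit-decoding rule here is my own inference from the names above.

def parse_hypergamma(name):
    range_clip, grey = name.split("G")
    return {
        "range_pct": int(range_clip[:3]),               # e.g. 460
        "clip_pct": 109 if range_clip[3] == "9" else 100,
        "grey_pct": int(grey),                          # e.g. 33
    }

for hg in ("3250G36", "4600G30", "3259G40", "4609G33"):
    print(hg, parse_hypergamma(hg))
```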

Color and The Matrix.

If you don’t like the standard Sony colors and want warmer skin tones do try using the SMPTE-240M color matrix. You will find skin tones a bit warmer with more red than the 709 matrix.

To change the saturation (amount of color) you need to turn on the User Matrix and then you can use the User Matrix Level control to increase or decrease the saturation.

Many people find the standard Sony look to be a little on the yellow side. So I have come up with some settings for the user matrix that reduce the yellow and warm the image just a touch.

AC NATURAL COLOR SETTINGS:

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: ITU-709. User Matrix: ON. Level: 0. Phase: 0.

R-G: +10. R-B: +8. G-R: -15. G-B: -9. B-R: -5. B-G: -15.

So here are some suggested Z280 Picture Profile settings for different looks:

Note that these picture profiles are similar to some of my FS7 profiles, so they will help match the two cameras in a multi-camera shoot. Use each of the settings below with either the HD or UHD(QFHD) detail settings given above if you wish to reduce the sharpening.

AC-Neutral-HG3.

Designed as a pleasing general purpose look for medium to high contrast scenes. Provides a neutral look with slightly less yellow than the standard Sony settings. I recommend setting zebras to 60% for skin tones or exposing a white card at 72-78% for the best results.

Black: Master Black: -3.  Gamma: HG3 .  White Clip: OFF. 

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: ITU-709. User Matrix: ON. User Matrix Level: 0. Phase: 0.

R-G: +10. R-B: +8. G-R: -15. G-B: -9. B-R: -5. B-G: -15.

AC-Neutral-HG4.

Designed as a pleasing general purpose look for high contrast scenes. Provides a neutral look with slightly less yellow than the standard Sony settings. I recommend setting zebras to 58% for skin tones or exposing a white card at 70-75% for the best results.

Black: Master Black: -3.  Gamma: HG4.  White Clip: OFF. 

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: ITU-709. User Matrix: ON. User Matrix Level: 0. Phase: 0.

R-G: +10. R-B: +8. G-R: -15. G-B: -9. B-R: -5. B-G: -15.

AC-FILMLIKE1

A high dynamic range look with film like color. Will produce a slightly flat looking image. Colours are tuned to be more film like with a very slight warm tint. I recommend setting zebras to 57% for skin tones and recording white at 70-75% for the most “filmic” look.

Black: Master Black: -3.  Gamma: HG3 .  White Clip: OFF. 

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: SMPTE WIDE. User Matrix: ON. User Matrix Level: +5. Phase: 0.

R-G: +11. R-B: +8. G-R: -12. G-B: -9. B-R: -3. B-G: -12.

AC-VIBRANT-HG3

These settings increase dynamic range over the standard settings but also increase the colour and vibrance. Designed for when a good dynamic range and strong colours are needed direct from the camera. Suggested zebra level for skin tones is 63% and white at approx 72-78%.

Black: Master Black: -3.  Gamma: HG3.  White Clip: OFF.

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: ITU-709. User Matrix: ON. User Matrix Level: +25. Phase: -5.

R-G: +12. R-B: +8. G-R: -11. G-B: -7. B-R: -5. B-G: -17.

AC-VIBRANT-HG4

These settings increase dynamic range over the standard settings but also increase the colour and vibrance. HG4 has greater dynamic range than HG3 but is less bright, so this variation is best for brighter high dynamic range scenes. Designed for when a good dynamic range and strong colours are needed direct from the camera. Suggested zebra level for skin tones is 60% and white at approx 70-75%.

Black: Master Black: -3.  Gamma: HG4.  White Clip: OFF.

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: ITU-709. User Matrix: ON. User Matrix Level: +25. Phase: -5.

R-G: +12. R-B: +8. G-R: -11. G-B: -7. B-R: -5. B-G: -17.

AC-Punchy Pop Video.

A punchy, contrasty look with strong but neutral colors. Maybe useful for a music video, party or celebration.

Black: Master Black: -3.  Gamma: STD5 .  Auto Knee Off. Knee level 87. White Clip: OFF. 

Matrix: ON. Adaptive Matrix: Off. Preset Matrix: ON. Preset Select: ITU-709. User Matrix: ON. User Matrix Level: 20. Phase: 0.

R-G: +10. R-B: +8. G-R: -15. G-B: -9. B-R: -5. B-G: -15.

Using different gamuts when shooting raw with the PXW-FS5

This topic comes up a lot. Whenever I have been in discussion with those that should know within Sony they have made it clear that the FS-Raw system is designed around S-Log2 for monitoring and post production etc. This stems from the fact that FS-Raw, the 12 bit linear raw from the FS700, FS7 and FS5 was first developed for the FS700 and that camera only had SGamut and S-Log2. S-Log3 didn’t come until a little later.

The idea is that if the camera is set to SGamut + S-Log2 it is optimised for the best possible performance. The raw signal is then passed to the raw recorder where it will be recorded. For a raw recorder that is going to convert the raw to ProRes or DNxHD, the recorder converts the raw to SGamut + S-Log2 so that it will match any internal recordings.

Finally in post the grading software would take the FS-Raw and convert it to SGamut + S-Log2 for further grading. By keeping everything as SGamut and S-Log2 throughout the workflow your brightness levels, the look of the image and any LUT’s that you might use will be the same. Internal and external recordings will look the same. And this has been my experience. Use PP7 with SGamut and S-Log2 and the workflow works as expected.

What about the other Gamuts?

However: The FS5 also has SGamut3, SGamut3.cine and S-Log3 available in the picture profiles. When shooting Log many people prefer S-Log3 and SGamut3.cine. Some people find it easier to grade S-Log3 and there are more LUT’s available for S-Log3/SGamut3.cine than for SGamut and S-Log2. So there are many people that like to use PP8 or PP9 for internal S-Log.

However, switching the FS5’s gamma from S-Log2 to S-Log3 makes no difference to the raw output. And it won’t make your recorder convert the raw to ProRes/DNxHD as S-Log3 if that’s what you are hoping for. But changing the gamut does have an effect on the colors in the image.

But shouldn’t raw be just raw sensor data?

For me this is interesting, because if the camera is recording the raw sensor output, changing the Gamut shouldn’t really change what’s in the raw recording. So the fact that the image changes when you change the Gamut tells me that the camera is doing some form of processing or gain/gamma adjustment to the signal coming from the sensor. So to try and figure out what is happening and whether you should still always stick to SGamut I decided to do a little bit of testing. The testing was only done on an FS5 so the results are only applicable to the FS5. I can’t recall seeing these same changes with the FS7.

DSC Labs Chroma Tru Test Chart.

For the tests I used a DSC Labs Chroma-tru chart as this allows you to see how the colors and contrast in what you record changes both visually and with a vectorscope/waveform. As well as the chart that you shoot, you download a matching reference overlay file that you can superimpose over the clip in post to visually see any differences between the reference overlay and the way the shot has been captured and decoded. It is also possible to place another small reference chart directly in front of the monitor screen if you need to evaluate the monitor or any other aspects of your full end to end production system. It’s a very clever system and I like it because as well as being able to measure differences with scopes you can also see any differences quite clearly without any sophisticated measuring equipment.

Test workflow:

The chart was illuminated with a mix of mostly real daylight and a bit of 5600K daylight balanced light from a Stella LED lamp. I wanted a lot of real daylight to minimise any errors that could creep in from the spectrum of the LED light (the Stellas are very good but you can’t beat real daylight). The camera was set to 2000 ISO. The raw signal was passed from the camera to an Atomos Shogun Inferno where the clips were recorded as both ProRes Raw and also, using the recorder's built in conversion to S-Log2, as ProRes HQ. I did one pass of correctly exposed clips and a second pass where the clips were under exposed by 1 stop to assess noise levels. The lens was the 18-105mm kit lens, which without the camera's built in lens compensation does show a fair bit of barrel distortion as you will see!

The ProRes clips were evaluated in DaVinci Resolve using the DaVinci Color Managed workflow with the input colorspace set to S-Log2/SGamut for every clip and the output colorspace set to 709. I also had to set the input range of the ProRes clips to Full Range as this is what S-Log2 files always are. If I didn’t change the input range to Full Range the clips exhibited clipped whites and crushed blacks after conversion to 709; this confirms that the clips recorded by the Shogun were Full Range – which follows the S-Log specifications.
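A little sketch of why getting the input range wrong does this. If a decoder treats full range footage as legal/video range, it stretches codes 64-940 out to 0-1023, so genuine full range values outside that window get crushed or clipped (illustrative 10-bit values, my own simplification of what the software is doing):

```python
# Decode a 10-bit code value as if it were legal/video range:
# codes 64-940 are stretched out to fill 0-1023, then clamped.
# Full-range footage contains real data outside 64-940, which is lost.

def decode_as_legal(code):
    out = (code - 64) * 1023 / (940 - 64)
    return max(0, min(1023, round(out)))

print(decode_as_legal(32))    # a genuine shadow value gets crushed to 0
print(decode_as_legal(1000))  # a genuine highlight gets clipped to 1023
```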

I did also take a look at the clips in Adobe Premiere and saw very similar results to Resolve.

I will do a separate report on my findings with the ProRes Raw in FCP as soon as I get time to check out the ProRes raw files properly.

So, what did I find?

In the images below the reference file has been overlaid on the very center of the clip. It can be a little hard to see. In a perfect system it would be impossible to see. But you can never capture the full contrast of the chart 1:1 and all cameras exhibit some color response imperfections. But the closer the center overlay is to the captured chart, the more accurate the system is. Note you can click on any of the capture examples to view a larger version.

This is the reference file (by the time it gets posted on my website as a jpeg I can no longer guarantee the colors). When you look at the images below you will see this superimposed over the center of the clips.

Below is Picture Profile 7 (PP7), SGamut with S-Log2. It’s a pretty good match. The camera didn’t quite capture the full contrast of the chart and that’s to be expected; reflections etc. make it very difficult to get perfect blacks and shadow areas. But color wise it looks quite reasonable, although the light blues are a little weak/pink.

SGamut and S-Log2

Below is Picture Profile 9 (PP9), SGamut3 with S-Log3. Straight away we can see that even though the camera was set to S-Log3, the contrast is the same in the S-Log2 color managed workflow, proving that the gamma of the ProRes recording from the Shogun is actually S-Log2. This confirms what we already know: changing the log curve in the camera makes no difference to the raw recording and no difference to the raw to ProRes conversion in the recorder.

Note the extra noise in the greens. The greens appear to have more color, but they also appear a little darker. If you reduce the brightness of a color without altering the saturation the color appears to be deeper and I think that is what is happening here, it is a lightness change rather than just a saturation change. There is also more noise in the darker bars, grey and black really are quite noisy. Light blues have the same weak/pink appearance and there is a distinct green tint to the white, grey and black bars.

SGamut3 with S-Log3

Below is when the camera was set to SGamut3.cine with S-Log3. Again we can see that the recording gamma is obviously S-Log2. The greens are still a touch stronger looking but now there is less noise in the greens. Cyan and reds are slightly lighter than SGamut and yellows appear a bit darker. This is also a little more noisy overall than SGamut, but not as bad as SGamut3. When you play the 3 clips, overall SGamut has the least noise, SGamut3.cine is next and then SGamut3 is clearly the noisiest. As with SGamut there is a distinct green tint to the white, grey and black bars.

SGamut3.cine with S-Log3

So that’s what the images look like – what do the scopes tell us? Again I will start with SGamut, and we can see that the color response is pretty accurate. This suggests that Atomos do a good job of converting the raw to S-Log2/SGamut before it’s recorded, and confirms that this is clearly how the system is designed to work. Note how the red strip falls very close to the R box on the 2x vectorscope, yellow almost in Y, green very close to G, blue almost in B. Magenta isn’t so clever and this probably explains why the pinky blues at the top of the chart are not quite right. Do remember that all these tests were done with the preset white balance, so it’s not surprising to see some small offsets as the white balance won’t have been absolutely perfect. But that imperfection will be the same across all of my test examples.

SGamut + S-Log2

Below is SGamut3. The first thing I noticed was all the extra noise on the right side of the waveform where the greens are. The waveform also shows the difference in lightness compared to SGamut, with different colors being reproduced at different brightness levels. The greens are being reproduced at a slightly lower luma level and this is probably why the greens appear more saturated. Also notice how much more fuzzy the vectorscope is; this is due to some extra chroma noise. There is a bit more red and magenta is closer to its target box, but all the other key colors are further from their boxes. Yellow, Green and Cyan are all a long way from their target boxes. Overall the color is much less accurate than SGamut and there is more chroma noise.

SGamut3 + S-Log3

And finally below is SGamut3.cine. There is less noise on the green side of the waveform than SGamut and SGamut3 but we still have a slightly lower luma level for green, making green appear more saturated. Again overall color accuracy is not as good as SGamut. But the vector scope is still quite fuzzy due to chroma noise.

SGamut3.cine + S-Log3

Under Exposure:

I just want to show you a couple of under exposed examples. These have had the under exposure corrected in post. Below is SGamut and as you can see it is a bit noisy when under exposed. That shouldn’t be a surprise, under expose and you will get noisy pictures.

SGamut with S-Log2 1 stop under (exposure corrected in post)

Below is SGamut3 and you can really see how much noisier this is than SGamut. I recommend clicking on the images to see a full screen version. You will see that as well as the noise in the greens there is more chroma noise in the blacks and greys. There also seems to be a stronger shift towards blue/green in the whites/greys in the under exposed SGamut3.

SGamut3 with S-Log3 1 stop under (exposure corrected in post)

Conclusions:

Clearly changing the gamut makes a difference to the raw output signal. In theory this shouldn’t really happen: raw is supposed to be the unprocessed sensor output. But these tests show that there is a fair bit of processing going on in the FS5 before the raw is output. It’s already known that the white balance is baked in. This is quite easy to do, as changing the white balance is largely just a matter of changing the gain on the pixels that represent red and blue relative to green. This can be done before the image is converted to a color image.
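To illustrate what "baking in" white balance at the sensor level means, here is a toy sketch applying red and blue gains to an RGGB Bayer mosaic before any demosaicing. The pattern layout and gain values are purely illustrative – this is a sketch of the general idea, not Sony's actual processing:

```python
# Apply white balance gains to the R and B photosites of an RGGB Bayer
# mosaic, leaving G untouched - illustrating how WB can be baked into
# raw data before the image is ever converted to a color image.

def apply_wb(bayer, r_gain, b_gain):
    """bayer: rows of an RGGB mosaic; returns a gain-adjusted copy."""
    out = []
    for y, row in enumerate(bayer):
        new_row = []
        for x, v in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:      # R photosite
                new_row.append(v * r_gain)
            elif y % 2 == 1 and x % 2 == 1:    # B photosite
                new_row.append(v * b_gain)
            else:                              # G photosite, left alone
                new_row.append(v)
        out.append(new_row)
    return out

# Warm the image: boost red, cut blue. Greens are untouched.
print(apply_wb([[100, 100], [100, 100]], 1.25, 0.75))
# -> [[125.0, 100], [100, 75.0]]
```

What my tests suggest the FS5 is doing goes beyond this simple per-channel gain, which is why the gamut setting leaves a visible fingerprint on the raw.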

What I believe I am seeing in this test is something more complex than that. I’m seeing changes in the luminance and gain levels of different colors relative to each other. So what I suspect is happening is that the camera is making some independent adjustments to the gamma of the red, green and blue pixels before the raw signal is output. This is probably a hangover from adjustments that need to be made when recording S-Log2 and S-Log3 internally, rather than something being done to deliberately adjust the raw output. But I didn’t design the camera so I can’t be sure that this is really the case. Only Sony would know the truth.

Does it matter?

Yes and no. If you have been using SGamut3.cine and have been getting the results you want then no, it doesn't really matter. I would, however, avoid SGamut3. It really is very noisy in the greens and shadows compared to the other two. I would also be a little concerned by the green tint in the parts of the image that should be colour free in both SGamut3 and SGamut3.cine. That would make grading a little tougher than it should be.

So my advice remains unchanged and continues to match Sony's recommendation: use PP7 with SGamut and S-Log2 when outputting raw. That doesn't mean you can't use the other gamuts, and your mileage may vary, but these tests do, for me at least, confirm my reasons for sticking with PP7.

Both Premiere and Resolve show the same behaviour. Next I want to take a look at what happens in FCP with the ProRes Raw clips. This could prove interesting, as by default FCP decodes and converts the FS-Raw to S-Log3 and SGamut3.cine rather than S-Log2/SGamut. Whether this will make any difference I don't know. What I do know is that having a recorder that converts to S-Log2 for display and software that converts to S-Log3 is very confusing, as you need different LUTs for post and for the recorder if you want to use LUTs for your monitoring. But FCP will have to wait for another day. I have paying work to do first.

Sony’s Internal Recording Levels Are Correct.

There is a video on YouTube right now where the author claims that the Sony Alpha cameras don't record correctly internally when shooting S-Log2 or S-Log3. The information in this video is highly misleading, and the conclusion that the problem is with the way Sony record internally is incorrect. There really isn't anything wrong with the way Sony do their recordings. Neither is there anything wrong with the HDMI output. While centered around the Alpha cameras, the information below is also important for anyone who records S-Log2 or S-Log3 externally with any other camera.

Some background: within the video world there are two primary ranges that can be used to record a video signal.

Legal Range uses code value 16 for black and code value 235 for white (anything above CV235 is classed as a super-white; super-whites can still be recorded but are considered to be beyond 100%).

Full or Data Range uses code value 0 for black and code value 255 for white, or 100%.

Most cameras and most video systems are based on legal range. ProRes recordings are almost always legal range. Most Sony cameras use legal range, and do include super-whites for some curves, such as the Cinegammas or Hypergammas, to gain a bit more dynamic range. The vast majority of video recordings use legal range, so most software defaults to legal range.
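The two ranges are easy to express as simple arithmetic. A quick sketch of the 8 bit mappings described above (nothing here is camera specific):

```python
# 8 bit code values for a normalised signal level
# (0.0 = black, 1.0 = 100% white)

def legal_range_cv(v):
    # Legal/video range: black sits at CV16, 100% white at CV235.
    # Anything above CV235 is a super-white.
    return round(16 + v * (235 - 16))

def full_range_cv(v):
    # Full/data range: black sits at CV0, 100% white at CV255.
    return round(v * 255)

print(legal_range_cv(0.0), legal_range_cv(1.0))  # 16 235
print(full_range_cv(0.0), full_range_cv(1.0))    # 0 255
```

The same signal level therefore lands on different code values depending on which range the file uses, which is exactly why software has to know which range it is dealing with.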

But very, very importantly: S-Log2 and S-Log3 are always full/data range.

Most of the time this doesn't cause any issues. When you record internally, the camera's recordings carry metadata that tells the playback, editing or grading software that the S-Log files were recorded using full range. Because of this metadata the software will play the files back and process them at the correct levels. However, when you record S-Log with an external recorder, the recorder doesn't always know that what it is getting is full range rather than legal range; it just records the signal exactly as it comes out of the camera. That causes a problem later on, because the externally recorded file doesn't have the metadata needed to ensure that the full range S-Log material is handled correctly, and most software will default to legal range if it knows no different.

Let's have a look at what happens when you import an internally recorded S-Log2 .mp4 file from a Sony A7S into Adobe Premiere:

Internal S-Log2 in Premiere.

A few things to note here. One is Adobe's somewhat funky scopes, where the 8 bit code values don't line up with the IRE values normally used for video production. Normally 8 bit code value 235 would be 100IRE or 100%, but for some reason Adobe have lined code value 255 up with 100%. My suspicion is that the scope's % scale is not video % or IRE but RGB %. This is really confusing. A further complication is that Adobe have code value 0 as black; again I think, but am not sure, that this is RGB code value 0. In the world of video, black should be code value 16. But the scopes appear to work such that 0 is black and 100 is full scale video out. Anything above 100 or below 0 will be clipped in any file you render out.

Looking at the scopes in the screen grab above, the top step on the grey scale chart is at around code value 252. That is the code value you would expect; it lines up nicely with where the peak of an S-Log2 recording should be. This all looks correct: nothing goes above 100 or below 0, so nothing will be clipped.

So now let's look at an external ProRes recording, recorded at exactly the same time as the internal recording, and see what Premiere does with that:

External ProRes in Adobe Premiere

OK, so we can see straight away that something isn't quite right here. In an 8 bit recording it should be impossible to have a code value higher than 255, but the scopes are suggesting that the recording has a peak code value of around CV275. That is impossible, so alarm bells should be ringing. In addition the S-Log2 appears to be going above 100, so if I were simply to export this as a new file, the top of the recording would be clipped and it wouldn't match the original. This is very clearly not right.

Now let's take a look at what happens in Adobe Premiere when you apply Sony's standard S-Log2 to Rec-709 LUT to a correctly exposed internal recording:

Internal S-Log2 with 709 LUT applied.

This all looks good and as expected. Blacks are sitting just above the 0 line (which I think we can safely assume is black) and the whites of the picture are at around code value 230, or 90, whatever that means on Adobe's scale. They are certainly nice and bright and are not in the range that will be clipped. So I can believe this is more or less correct and as expected.

So next I’m going to add the same standard LUT to the external recording to see what happens.

External S-Log2 with standard 709 LUT applied.

OK, this is clearly not right. Our blacks now go below the 0 line and they look clipped. The highlights don't look totally out of place, but clearly something is going very, very wrong when we apply this normal LUT to this correctly exposed external recording. There is no way our blacks should be going below zero, and they look crushed/clipped. The internal recording didn't behave like this. So what is going on with the external recording?

To try and figure this out let's take a look at the same files in DaVinci Resolve. For a start I trust the scopes in Resolve much more, and it is a far better programme for managing different types of files. First let's look at the internal S-Log2 recording:

Internal S-Log2, all looks good.

Once again the levels of the internal S-Log2 recording look absolutely fine. Our peak is at around code value 1010, which would be 252 in 8 bit, right where the brightest parts of an S-Log2 file should be. Now let's take a look at the external recording.
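For anyone wondering how the 10 bit code values on Resolve's scopes relate to the 8 bit values discussed earlier: by the usual video convention a 10 bit code value is simply the 8 bit value multiplied by 4. A tiny sketch:

```python
def cv10_to_cv8(cv10):
    # Usual video convention: 10 bit code values are 8 bit values x4,
    # so going back to 8 bit is an integer divide by 4.
    return cv10 // 4

print(cv10_to_cv8(1010))  # 252 - the expected S-Log2 peak
print(cv10_to_cv8(940))   # 235 - 10 bit legal range white
```

So CV1010 on a 10 bit scope and CV252 in an 8 bit file are the same level, just quantised differently.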

External ProRes S-Log2 (Full Range)

If you compare the two screen grabs above you can see that the levels are exactly the same. Our peak level is at around CV1010/CV252, just where it should be, and the blacks look the same too. The internal and external recordings have the same levels and look the same. There is no difference (other than perhaps less compression and fewer artefacts in the ProRes file). There is nothing wrong with either of these recordings, and certainly nothing wrong with the way Sony record S-Log2 internally. This is absolutely what I expect to see.

BUT: I've been a little bit sneaky here. As I knew the external recording was a full range recording, I told DaVinci Resolve to treat it as full range. In the media bin I right clicked on the clip and under "clip attributes" changed the input range from "auto" to "full". If you don't do this, DaVinci Resolve will assume the ProRes file is legal range and will scale the clip incorrectly in the same way Premiere does. But if you tell Resolve the clip is full range, it is handled correctly.

This is what it looks like if you allow Resolve to guess at what range the S-Log2 full range clip is by leaving the input range setting to “auto”:

External ProRes S-Log2 Auto Range

In the above image we can see how in Resolve the clip becomes clipped, because in a legal range recording anything over CV235/CV940 would be an illegal super-white. Resolve is scaling the clip and pushing anything in the original file that was above CV235/CV940 off the top of the scale. The scaling is incorrect because Resolve doesn't know the clip is supposed to be full range and therefore shouldn't be scaled. If we compare this to what Premiere did with the external recording, it's actually very similar. Premiere also scaled the clip, only Premiere shows all those "illegal" levels above its 100 line instead of clipping them as Resolve does. That's why Premiere can show those "impossible" 8 bit code values going up to CV275.
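Those impossible values follow directly from the arithmetic of an incorrect legal-to-full expansion. A sketch, assuming the software applies a simple linear stretch (which is what the scopes suggest):

```python
def expand_legal_to_full(cv):
    # What software does when it assumes a clip is legal range:
    # stretch the 16-235 band out to cover 0-255.
    return (cv - 16) * 255 / 219

# The S-Log2 peak at CV252 gets pushed off the top of the 8 bit scale...
print(round(expand_legal_to_full(252)))  # 275 - Premiere's "impossible" peak
# ...while anything below CV16 (including full range black at CV0) goes negative.
print(round(expand_legal_to_full(0)))    # -19 - blacks crushed below zero
```

Applied to a full range S-Log2 clip, this one wrong assumption produces exactly the symptoms seen above: a peak around CV275 and blacks pushed below the 0 line.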

Just to be complete, I also tested the internal .mp4 recordings in Resolve, switching between "auto" and "full" range, and in both cases the levels stayed exactly the same. This shows that Resolve is correctly handling the internally recorded full range S-Log as full range.

What about if you add a LUT? Well, you MUST tell Resolve to treat the S-Log2 ProRes clip as a full range clip, otherwise the LUT will not be right. If your footage is S-Log3, you also have to tell Resolve that it is full range:

Resolve: Internal recording with the standard 709 LUT applied, all is exactly as expected. Deep shadows and white right at the top of the range.
Resolve: External recording with the standard 709 LUT applied, clip input range set to “full”. Everything is once again as you would expect. Deep shadows and white at the top of the range. Also note that it is a near perfect match to the internal recording. No hue or color shift (Premiere introduces a color shift, more on that later).
Resolve: External recording with the standard 709 LUT applied, clip input range set to “auto”. This is clearly not right. The highlights are clipped and the blacks are crushed and clipped. It is so important to get the input range right when working with LUTs!

CONCLUSIONS:

Both the internal and external recordings are actually exactly the same. Both have the same levels; both use FULL range. There is absolutely nothing wrong with Sony's internal recordings. The problem stems from the way most software assumes that ProRes files are legal range, when an S-Log2 or S-Log3 recording is in fact full (data) range. Handling a full range clip as legal range means that highlights will be too high/bright or clipped and blacks will be crushed. So it's really important that your software handles the footage correctly. If you are shooting S-Log3 this problem is harder to spot, as S-Log3 has a peak recording level that is well within legal range, so you often won't realise it's being scaled incorrectly because it won't necessarily look clipped. If you use LUTs and your ProRes clips look crushed or the highlights look clipped, you need to check that the input scaling is correct. It's really important to get this right.

Why is there no difference between the levels when you shoot with a Cinegamma? When you shoot with a Cinegamma the internal recordings are legal range, so both the internal and the external recordings get treated as legal range and they don't appear different. (In the YouTube video that led to this post the author discovers that if you record with a normal profile first and then switch to a log profile while recording, the internal and external files will match. But this is because the internal recording now has the incorrect metadata, so it too gets scaled incorrectly. Both the internal and external files are now wrong, but the same.)

Once again: There is nothing wrong with the internal recordings. The problem is with the way the external recordings are being handled. The external recordings haven’t been recorded incorrectly, they have been recorded as they should be. The problem is the edit software is incorrectly interpreting the external recordings. The external recordings don’t have the necessary metadata to mark the files as full range because the recorder is external to the camera and doesn’t know what it’s being sent by the camera. This is a common problem when using external recorders.

What can we do in Premiere to make Premiere work right with these files?

You don't need to do anything for the internal .mp4 recordings; they are handled correctly. It is the full/data range ProRes files that Premiere isn't handling correctly.

My approach has always been to use the legacy fast color corrector filter to transform the input range to the required output range. If you apply the fast color corrector filter to a clip, you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is legal range black) and the output white to CV235 to match legal range white. If you do this you will see that the external recording then has almost exactly the same values as the internal recording. However there is some non-linearity in the transform; it's not quite perfect. So if anyone knows of a better way to do this, please let me know.
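In effect the fast color corrector is applying the inverse of the scaling Premiere later applies. The round trip can be sketched like this, assuming a purely linear transform (which, as noted above, isn't quite what the filter actually does):

```python
def compress_full_to_legal(cv):
    # What the fast color corrector is set up to do:
    # squeeze the full 0-255 range down into 16-235.
    return 16 + cv * 219 / 255

def expand_legal_to_full(cv):
    # The scaling Premiere then applies to a (presumed) legal range clip.
    return (cv - 16) * 255 / 219

peak = 252  # S-Log2 peak code value in a full range file
compressed = compress_full_to_legal(peak)
print(round(compressed))                         # 232 - now inside legal range
print(round(expand_legal_to_full(compressed)))   # 252 - original level restored
```

Pre-compressing the clip means Premiere's unwanted expansion lands the levels back where the internal recording has them, which is why the two then match on the scopes.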

Using the legacy “fast color corrector” filter to transform the external recording to the correct range within Premiere.

Now when you apply a LUT, the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift, and I don't know where it comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift. In Premiere they are not quite the same; the hue is slightly different and I don't know why. My recommendation: use Resolve. It's so much better for anything that needs any form of grading or color correction.