The only way to change the perspective of a shot is to change the position of the camera relative to the subject or scene. Put a 1.5x wider lens on an s35 camera and you have exactly the same angle of view as a Full Frame camera. It is an internet myth that Full Frame changes the perspective or the appearance of the image in a way that cannot be exactly replicated with other sensor or frame sizes. The only thing that changes perspective is how far you are from the subject. It's one of those laws of physics and optics that can't be broken. The only way to see more or less around an object is by changing your physical position.
The only thing that changing the focal length or sensor size alters is magnification, and you can change the magnification either by changing the sensor size or the focal length; the effect is exactly the same either way. So in terms of perspective, angle of view or field of view, an 18mm s35 setup will produce an identical image to a 27mm FF setup. The only difference may be in DoF, depending on the aperture: f4 on FF will provide the same DoF as f2.8 on s35. If both lenses are at f4, then the FF image will have a shallower DoF.
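As a rough sketch of that equivalence, assuming the usual ~1.5x crop factor between FF and s35 (the exact ratio varies slightly by camera; `ff_equivalent` is just an illustrative helper):

```python
CROP = 1.5  # FF diagonal / s35 diagonal (approximate)

def ff_equivalent(s35_focal_mm, s35_f_stop):
    """Return the FF focal length and f-stop that match the s35
    setup's angle of view and depth of field."""
    return s35_focal_mm * CROP, s35_f_stop * CROP

focal, stop = ff_equivalent(18, 2.8)
# 18mm f2.8 on s35 matches 27mm at ~f4.2 on FF (f4 is the nearest standard stop)
print(f"{focal:.0f}mm at f/{stop:.1f}")
```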
Again, physics plays a part here. If you want that shallower DoF from a FF camera, the FF lens will normally need to have the same aperture as the s35 lens. To do that, the elements in the FF lens need to be bigger, gathering twice as much light so that the lens can spread the same amount of light as the s35 lens across the roughly twice as large surface area of the FF sensor. So generally you will pay more for a FF lens with a like-for-like aperture than for an s35 lens. Or you simply won't be able to get an equivalent in FF because the optical design becomes too complex, too big, too heavy or too costly.
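To put a number on the "bigger elements" point: the entrance pupil diameter is the focal length divided by the f-number, so at the same f-stop the FF-equivalent focal length needs a pupil with 2.25x the area. A small sketch (`pupil_area` is an illustrative helper, not from any library):

```python
import math

def pupil_area(focal_mm, f_stop):
    d = focal_mm / f_stop            # entrance pupil diameter in mm
    return math.pi * (d / 2.0) ** 2  # pupil area in mm^2

s35 = pupil_area(18, 2.8)  # 18mm f2.8 on s35
ff = pupil_area(27, 2.8)   # 27mm f2.8 on FF - same AoV, same f-stop
print(round(ff / s35, 2))  # 2.25 - the FF pupil gathers 2.25x the light
```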
This in particular is a big issue for parfocal zooms. At FF and larger imager sizes they can be fast or have a big zoom range, but to do both is very, very hard and typically requires some very exotic glass. You won't see anything like the affordable super 35mm Fujinon MKs in full frame, certainly not at anywhere near the same price. This is why for decades 2/3″ sensors, and 16mm film before that, ruled the roost for TV news: lenses with big zoom ranges and large, fast apertures were relatively affordable.
Perhaps one of the commonest complaints I see today with larger sensors is "why can't I find an affordable, fast, parfocal zoom with more than a 4x zoom range?" Such lenses do exist. For s35 you have lenses like the $22K Canon CN7 17-120mm T2.9, which is pretty big and pretty heavy. For Full Frame the nearest equivalent is the more expensive $40K Fujinon Premista 28-100 T2.9, which is a really big lens weighing in at almost 4kg. But look at the numbers: both will give a very similar AoV on their respective sensors at the wide end, but the much cheaper Canon has a greatly extended zoom range and will get a tighter shot than the Premista at the long end. Yes, the DoF will be shallower with the Premista, but you are paying almost double, it is a significantly heavier lens and it has a much reduced zoom ratio. So you may need both the $40K Premista 28-100 and the $40K Premista 80-250 to cover everything the Canon does (and a bit more). So as you can see, getting that extra shallow DoF may be very costly. And it's not so much about the sensor, but more about the lens.
The History of large formats:
It is worth considering that back in the 50’s and 60’s we had VistaVision, a horizontal 35mm format the equivalent of 35mm FF, plus 65mm and a number of other larger than s35 formats. All in an effort to get better image quality.
VistaVision (the closest equivalent to 35mm Full Frame).
VistaVision didn't last long, about 7 or 8 years, because better quality film stocks meant that similar image quality could be obtained from regular 35mm film, and shooting VistaVision was difficult due to the very shallow DoF and focus challenges, plus it was twice the cost of regular 35mm film. It did make a brief comeback in the 70's for shooting special effects sequences where very high resolutions were needed. VistaVision was superseded by CinemaScope, which uses 2x anamorphic lenses and conventional vertical 35mm film, and CinemaScope was subsequently largely replaced by 35mm Panavision (the two being virtually the same thing and often used interchangeably).
At around the same time there were various 65mm (with 70mm projection) formats including Super Panavision, Ultra Panavision and Todd-AO. These too struggled, and very few films were made using 65mm film after the end of the 60's. There was a brief resurgence in the 80's, and again recently there have been a few films, but production difficulties and costs have meant they tend to be niche productions.
Historically there have been many attempts to establish mainstream larger-than-s35 formats. But by and large audiences couldn't tell the difference, and even when they could, they wouldn't pay extra for it. Obviously today the cost implication is tiny compared to the extra cost of 65mm film or VistaVision. But the bottom line remains that normally the audience won't actually be able to see any difference, because in reality there isn't one, other than perhaps a marginal resolution increase. But it is harder to shoot FF than s35. Comparable lenses are more expensive, lens choices are more limited, and focus is more challenging at longer focal lengths or large apertures. If you are not careful with too large an aperture you can get miniaturisation and cardboarding effects (these can occur with s35 too).
Can The Audience Tell – Does The Audience Care?
Cinema audiences have not been complaining that the DoF isn’t shallow enough, or that the resolution isn’t high enough (Arri’s success has proved that resolution is a minor image quality factor). But they are noticing focus issues, especially in 4K theaters.
So while FF and the other larger formats are here to stay, Full Frame is not the be-all and end-all. Many, many people believe that FF has some kind of magic that makes the images different to smaller formats because they "read it on the internet so it must be true". I think sometimes things read on the internet create a placebo effect: read something enough times and you will actually become convinced that the images are different, even when in fact they are not. Once they realise that actually it isn't different, I'm quite sure many will return to s35, because that does seem to be the sweet spot where DoF and focus are manageable and IQ is plenty good enough. Only time will tell, but history suggests s35 isn't going anywhere any time soon.
Today's modern cameras give us the choice to shoot either FF or s35. Either can result in an identical image; it's only a matter of aperture and focal length. So pick the one that you feel most comfortable with for your production. FF is nice, but it isn't magic.
Really it’s all about the lens.
The really important thing is your lens choice. I believe that what most people put down as "the full frame effect" is nothing to do with the sensor size but the qualities of the lenses they are using. Full frame stills cameras have been around for a long time, and as a result there is a huge range of very high quality glass to choose from (as well as cheaper budget lenses). In the photography world APS-C, which is similar to super 35mm movie film, has always been considered a lower cost or budget option, and many of the lenses designed for APS-C have been built down to a price rather than up in quality. This makes a difference to the way the images may look. So often Full Frame lenses may offer better quality or a more pleasing look, just because the glass is better.
I recently shot a project using Sony's Venice camera over 2 different shoots. For the first shoot we used Full Frame and the Sigma Cine Primes. The images we got looked amazing. For the second shoot, where we needed at times to use higher frame rates, we shot super 35 with a mix of the Fujinon MK zooms and Sony G-Master lenses. Again the images looked amazing, and the client and the end audience really can't tell the footage from the first shoot apart from the footage from the second.
Downsampling from 6K.
One very real benefit shooting 6K full frame does bring, with both the FX9 and Sony Venice (or any other 6K FF camera), is that when you shoot at 6K and downsample to 4K you will have a higher resolution image with better colour and, in most cases, lower noise than if you started at 4K. This is because the bayer sensors that all the current large sensor cameras use don't resolve 4K when shooting at 4K. To get 4K you need to start with 6K.
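As a sketch of why, using the commonly assumed figure that a Bayer sensor resolves only around 70% of its photosite count after debayering (the exact fraction depends on the debayer algorithm and the lens):

```python
BAYER_FACTOR = 0.7  # assumed effective resolution fraction after debayering

def delivered_resolution_k(capture_k):
    """Approximate delivered resolution (in 'K') from a Bayer capture."""
    return capture_k * BAYER_FACTOR

print(f"4K capture -> ~{delivered_resolution_k(4.0):.1f}K delivered")  # ~2.8K
print(f"6K capture -> ~{delivered_resolution_k(6.0):.1f}K delivered")  # ~4.2K, enough for a true 4K master
```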
It’s a common problem. You are shooting a performance or event where LED lighting has been used to create dramatic coloured lighting effects. The intense blue from many types of LED stage lights can easily overload the sensor and instead of looking like a nice lighting effect the blue light becomes an ugly splodge of intense blue that spoils the footage.
Well there is a tool hidden away in the paint settings of many recent Sony cameras that can help. It’s called “adaptive matrix”.
When adaptive matrix is enabled and the camera sees intense blue light, such as the light from a blue LED, the matrix adapts and reduces the saturation of the blue colour channel in the problem areas of the image. This can greatly improve the way such lights and lighting look. But be aware that if you are trying to shoot objects with very bright blue colours, perhaps even a bright blue sky, the adaptive matrix may desaturate them. Because of this the adaptive matrix is normally turned off by default.
If you want to turn it on, it's normally found in the camera's paint and matrix settings, and it's simply a case of setting adaptive matrix to on. I recommend that when you don't actually need it you turn it back off again.
Most of Sony's broadcast quality cameras produced in the last 5 years have the adaptive matrix function; that includes the FS7, FX9, Z280, Z450, Z750, F5/F55 and many others.
Last week I was at O-Video in Bucharest preparing for a workshop the following day. They are a full service dealer. We had an FX9 for the workshop and they had some very nice lenses. So with their help I decided to do a very quick comparison of the lenses we had. I was actually very surprised by the results. At the end of the day I definitely had a favourite lens. But I’m not going to tell you which one yet.
The 5 lenses we tested were: the Rokinon Xeen, Cooke Panchro 50mm, Leitz (Leica) Thalia, Zeiss Supreme Radiance and the Sony 28-135mm zoom that can be purchased as part of a kit with the FX9.
I included a strong backlight in the shot to see how the different lenses dealt with flare from off-axis lights. Two of the lenses produced very pronounced flare, so for those lenses you will see two frame grabs: one with the flare and one with the backlight flagged off.
I used S-Cinetone on the FX9 and set the aperture to f2.8 for all of the lenses except the Sony 28-135mm. For that lens I added 6dB of gain to normalise the exposure, you should be able to figure out which of the examples is the Sony zoom.
One of the lenses was an odd focal length compared to all the others. Some of you might be able to work out which one that is, but again I’m not going to tell you just yet.
Anyway, enjoy playing guess the lens. This isn’t intended to be an in depth test. But it’s interesting to compare lenses when you have access to them. I’ll reveal which lens is which in a couple of weeks in the comments. You can click on each image to enlarge it.
Big thanks to everyone at O-Video Bucharest for making this happen.
A completely useless bit of trivia for you is that the "E" in E-mount stands for eighteen. 18mm is the E-mount flange back distance – the distance between the sensor and the face of the lens mount. The fact that the E-mount is only 18mm, while most other DSLR systems have a flange back distance of around 40mm, means there are 20mm or more in hand that can be used for adapters to go between the camera body and 3rd party lenses with different mounts.
Here’s a little table of some common flange back distances:
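As a sketch of the arithmetic, the room available for an adapter is simply the lens mount's flange back distance minus E-mount's 18mm. The figures below are the commonly quoted values for a few popular mounts:

```python
E_MOUNT_FFD = 18.0  # mm - the "E" for eighteen

# Commonly quoted flange back distances (mm) for some popular mounts.
FLANGE_MM = {
    "Canon EF": 44.0,
    "Nikon F": 46.5,
    "PL": 52.0,
    "Leica M": 27.8,
}

for mount, ffd in FLANGE_MM.items():
    print(f"{mount}: {ffd - E_MOUNT_FFD:.1f}mm of room for an adapter")
```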
The simple answer as to whether you can shoot anamorphic on the FX9 or not is no, you can't. The FX9, certainly to start with, will not have an anamorphic mode and it's unknown whether it ever will. I certainly wouldn't count on it ever getting one (but who knows, perhaps if we keep asking for it we will get it).
But just because a camera doesn't have a dedicated anamorphic mode doesn't mean you can't shoot anamorphic. The main thing you won't have is de-squeeze, so the image will be distorted and stretched in the viewfinder. But most external monitors now have anamorphic de-squeeze, so this is not a huge deal and is easy enough to work around.
1.3x or 2x Anamorphic?
With a 16:9 or 17:9 camera you can use 1.3x anamorphic lenses to get a 2.39:1 final image. So the FX9, like most 16:9 cameras, will be suitable for use with 1.3x anamorphic lenses out of the box.
But for the full anamorphic effect you really want to shoot with 2x anamorphic lenses. A 2x anamorphic lens will give your footage a much more interesting look than a 1.3x anamorphic. But if you want to reproduce the classic 2.39:1 aspect ratio normally associated with anamorphic lenses and 35mm film then you need a 4:3 sensor rather than a 16:9 one – or do you?
Anamorphic on the PMW-F5 and F55.
It's worth looking at shooting 2x anamorphic on the Sony F5 and F55 cameras. These cameras have 17:9 sensors, so they are not ideal for 2x anamorphic, although they do have a dedicated anamorphic mode. Because the 17:9 F55 sensor, like most super 35mm sensors, is not tall enough, shooting with a 2x anamorphic lens leaves you, after de-squeezing, with a very narrow 3.55:1 aspect ratio. To avoid this very narrow final aspect ratio, once you have de-squeezed the image you need to crop the width to around 0.7x of its original size and then expand the cropped image to fill the frame. This not only reduces the resolution of the final output but also the usable field of view. Even with the resolution reduction from the crop and zoom, it was still argued that because the F55 starts from a 4K sensor this was roughly the equivalent of Arri's open gate 3.4K. However, the loss of field of view still presents a problem for many productions.
What if I have Full Frame 16:9?
The FX9 has a 6K full frame sensor, and a full frame sensor is bigger: not just wider but, most importantly, taller than s35mm. Tall enough for use with a 2x s35 anamorphic lens! The FX9 sensor is approximately 34mm wide and 18.8mm tall in FF6K mode.
In comparison, the Arri 35mm 4:3 open gate sensor area is 28mm x 18.1mm, and we know this works very well with 2x anamorphic lenses as it mimics the size of a full size 35mm cine film frame. The important bit here is the height: 18.1mm for the Arri open gate and 18.8mm for the FX9 in Full Frame Scan Mode.
Crunching the numbers.
If you do the maths, start with the FX9 in FF mode and use an s35mm 2x anamorphic lens.
Because the image is 6K subsampled to 4K the resulting recording will have 4K resolution.
But you will need to crop the sides of the final recording by roughly 30% to remove the left/right vignette caused by using an anamorphic lens designed for 35mm movie film (the exact amount of crop will depend on the lens). This results in a roughly 2.8K resolution image, depending on how much you need to crop.
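The arithmetic above can be sketched like this (the 30% crop and the 4096-wide recording are assumptions; the real crop depends on the lens):

```python
RECORDED_WIDTH = 4096  # the FX9's internal 4K recording (assumed DCI width)
CROP_FRACTION = 0.30   # assumed width lost to the s35 lens's left/right vignette

usable = RECORDED_WIDTH * (1 - CROP_FRACTION)
print(f"~{usable / 1000:.2f}K usable horizontal resolution")  # ~2.87K, the "2.8K ish" figure
```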
4K Bayer won't give 4K resolution.
That doesn’t seem very good until you consider that a 4K 4:3 bayer sensor would only yield about 2.8K resolution anyway.
Arri's s35mm cameras have open gate 3.2K bayer sensors, so they will result in an even lower resolution image, perhaps around 2.2K. Do remember that the original Arri ALEV sensor was designed when 2K was the norm for the cinema and HD TV was still new. The Arri super 35 cameras were for a long time the gold standard for anamorphic because their sensor size and shape matches the size and shape of a full size 35mm movie film frame. But now cameras like Sony's Venice, which can shoot 6K, and 4K in 4:3 and 6:5, are taking over.
The FX9 in Full Frame scan mode will produce a great looking image with a 2x anamorphic lens without losing any of the field of view. The horizontal resolution won't be 4K due to the left and right edge crop required, but it should still be higher than you would get from a 4K 16:9 sensor or a 3.2K 4:3 sensor. And unlike a 16:9 4K sensor, where both the horizontal and vertical resolution are compromised, the FX9's vertical resolution will be full 4K, and that's important.
What about Netflix?
While Netflix normally insists on a sensor with a minimum of 4K of pixels horizontally for capture, they are permitting sensors with lower horizontal pixel counts to be used for anamorphic capture, because the increased sensor height needed for 2x anamorphic means that there are more pixels vertically. The total usable pixel count when using the Arri LF with a typical 35mm 2x anamorphic lens is 3148 x 2636 pixels. That's a total of 8 megapixels, which is similar to the 8 megapixel total pixel count of a 4K 16:9 sensor with a spherical lens. The argument is that the total captured picture information is similar for both, so both should be, and indeed are, allowed. The Arri format does lead to a final aspect ratio slightly wider than 2.39:1.
So could the FX9 get Netflix approval for 2x Anamorphic?
The FX9's sensor is 3168 pixels tall when shooting FF 16:9, as its pixel pitch is finer than the Arri LF's. When working with a 2x anamorphic super 35mm lens, the image circle from the lens will cover around 4K x 3K of pixels, a total of 12 megapixels, when the sensor is operating in the 6K Full Frame scan mode. But then the FX9 will internally downscale this to that vignetted 4K recording that needs to be cropped.
6K down to 4K means that the 4K covered by the lens becomes roughly 2.7K. But the 3.1K from the Arri, when debayered, will more than likely resolve even less than this, perhaps only 2.1K.
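Sketching the FX9 side of that comparison (all the pixel counts here are approximations; the FF 6K scan width is taken as ~6144):

```python
SENSOR_WIDTH = 6144    # approx. FX9 full frame 6K scan width, in pixels
RECORDED_WIDTH = 4096  # the internal 4K recording it is downscaled to
LENS_COVERAGE = 4096   # approx. sensor pixels covered by the s35 2x anamorphic

downscale = RECORDED_WIDTH / SENSOR_WIDTH  # the 6K -> 4K scaling factor
usable = LENS_COVERAGE * downscale
print(f"~{usable / 1000:.1f}K usable after the 6K -> 4K downscale")  # ~2.7K
```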
But whether Netflix will accept the in-camera down conversion is a very big question. The maths indicates that the resolution of the final output of the FX9 would be greater than that of the LF, even taking the necessary crop into account, but this would need to be tested and verified in practice. If the maths is right, I see no reason why the FX9 won't be able to meet Netflix's minimum requirements for 2x anamorphic production. If this is a workflow you wish to pursue, I would recommend taking the 10 bit 4:2:2 HDMI out to a ProRes recorder and recording using the best codec you can until the FX9 gains the ability to output raw. Meeting the Netflix standard is speculation on my part, and perhaps it never will be accepted for anamorphic, but to answer the original question –
– Can you shoot anamorphic with the FX9 – Absolutely, yes you can and the end result should be pretty good. But you’ll have to put up with a distorted image with the supplied viewfinder (for now at least).
Last night I attended the official opening of Sony's new Digital Media Production center in LA. This is a very nice facility where Sony can show end users how to get the most from full end to end digital production, from camera to display. And a most impressive display it is, as the facility has a huge permanent 26ft HDR C-LED equipped cinema.
One of the key announcements made at the event was details of what will be the 5th major firmware update for the Venice camera. Due January 2020, version 5 will extend the camera's high frame rate capabilities as well as adding or improving a number of existing options:
· HFR Capabilities – Up to 90fps at 6K 2.39:1 and 72fps at 6K 17:9.
· Apple ProRes 4444 – Record HD video at high picture quality on SxS PRO+ cards, without Sony's AXS-R7 recorder. This is especially effective for HD VFX workflows.
· 180 Degree Rotation Monitor Out – Flip and flop images via viewfinder and SDI.
· High Resolution Magnification via HD Monitor Out – Existing advanced viewfinder technology for clearer magnification is now extended to the HD Monitor Out.
· Improved User Marker Settings – Menu updates for easier selection of frame lines on the viewfinder.
90fps in 6K means that a full 3x slow down will be possible for 6K 24fps projects. In addition to the above, Sony now have a new ACES IDT for Venice, so Venice has officially joined the F65, F55 and F5 in earning the ACES logo, meeting the specifications laid out in the ACES Product Partner Logo Program. I will post more details of this, and how to get hold of the IDT, over the weekend.
This is a question that gets asked a lot. And if you are thinking about buying a new camera it has to be one that you need to think about. But in reality I don't think 8K is a concern for most of us.
I recently had a conversation with a representative of a well known TV manufacturer. We discussed 8K and 8K TVs. An interesting conclusion to the conversation was that this particular TV manufacturer wasn't really expecting there to be a lot of 8K content anytime soon. The reason for selling 8K TVs is the obvious one: in the consumer's eyes, 8K is a bigger number than 4K, so it must mean that it is better. It's an easy sell for the TV manufacturers, even though it's arguable that most viewers will never be able to tell the difference between an 8K TV and a 4K one (let's face it, most struggle to tell the difference between 4K and HD).
Instead of expecting 8K content this particular TV manufacturer will be focussing on high quality internal upscaling of 4K content to deliver an enhanced viewing experience.
It’s also been shown time and time again that contrast and Dynamic Range trump resolution for most viewers. This was one of the key reasons why it took a very long time for electronic film production to really get to the point where it could match film. A big part of the increase in DR for video cameras came from the move from the traditional 2/3″ video sensor to much bigger super 35mm sensors with bigger pixels. Big pixels are one of the keys to good dynamic range and the laws of physics that govern this are not likely to change any time soon.
This is part of the reason why Arri have stuck with the same sensor for so long. They know that reducing the pixel size to fit more into the same space will make it hard to maintain the excellent DR their cameras are known for. This is in part why Arri have chosen to increase the sensor size by combining sensors. It’s at least in part why Red and Sony have chosen to increase the size of their sensors beyond super 35mm as they increase resolution. The pixels on the Venice sensor are around the same size as most 4K s35 cameras. 6K was chosen as the maximum resolution because that allows this same pixel size to be used, no DR compromise, but it necessitates a full frame sensor and the use of high quality full frame lenses.
So, if we want 8K with great DR it forces us to use ever bigger sensors. Yes, you will get a super shallow DoF and this may be seen as an advantage for some productions. But what’s the point of a move to higher and higher resolutions if more and more of the image is out of focus due to a very shallow DoF? Getting good, pin sharp focus with ever bigger sensors is going to be a challenge unless we also dramatically increase light levels. This goes against the modern trend for lower illumination levels. Only last week I was shooting a short film with a Venice and it was a struggle to balance the amount of the subject that was in focus with light levels, especially at longer focal lengths. I don’t like shots of people where one eye is in focus but the other clearly not, it looks odd, which eye should you choose as the in-focus eye?
And what about real world textures? How many of the things that we shoot really contain details and textures beyond 4K? And do we really want to see every pore, wrinkle and blemish on our actors' faces or sets? Too much resolution on a big screen creates a form of hyper reality. We start to see things we would never normally see as the image and the textures become magnified and expanded. This might be great for a science documentary but is distracting for a romantic drama.
If resolution really, really was king then every town would have an IMAX theater and we would all be shooting IMAX.
Before 8K becomes normal and mainstream I believe HDR will be the next step. Consumers can see the benefits of HDR much more readily than 8K. Right now 4K is not really the norm, HD is. There is a large amount of 4K acquisition, but it’s not mainstream. The amount of HDR content being produced is still small. So first we need to see 4K become normal. When we get to the point that whenever a client rings the automatic assumption is that it’s a 4K shoot, so we won’t even bother to ask, that’s when we can consider 4K to be normal, but that’s not the case for most of us just yet. Following on from that the next step (IMHO) will be where for every project the final output will be 4K HDR. I see that as being at least a couple of years away yet.
After all that, then we might see a push for more 8K. At some point in the not too distant future 8K TVs will be no more expensive than 4K ones. But I also believe that in-TV upscaling will be normal, and possibly the preferred mode due to bandwidth restrictions. Less compressed 4K upscaled to 8K may well look just as good, if not better, than an 8K signal that needs more compression.
8K may not become “normal” for a very long time. We have been able to easily shoot 4K for 6 years or more, but it’s only just becoming normal and Arri still have a tremendous following that choose to shoot at less than 4K for artistic reasons. The majority of Cinemas with their big screens are still only 2K, but audiences rarely complain of a lack of resolution. More and more content is being viewed on small phone or tablet screens where 4K is often wasted. It’s a story of diminishing returns, HD to 4K is a much bigger visual step than 4K to 8K and we still have to factor in how we maintain great DR.
So for the next few years at least, for the majority of us, I don't believe 8K is actually desirable. Many struggle with 4K workflows and the extra data and processing power needed compared to HD, and an 8K frame is 4 times the size of a 4K frame. Some will argue that shooting in 8K has many benefits. This can be true if your main goal is resolution, but in reality it's only really very post production intensive projects, where extensive re-framing, re-touching etc. is needed, that will benefit from shooting in 8K right now. It's hard to get accurate numbers, but the majority of Hollywood movies still use a 2K digital intermediate and only around 20% of cinemas can actually project at more than 2K.
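The "4 times the size" point is just the square law of resolution, using the standard UHD frame dimensions:

```python
UHD_4K = 3840 * 2160  # pixels per 4K UHD frame
UHD_8K = 7680 * 4320  # pixels per 8K UHD frame

# Doubling both dimensions quadruples the pixel count, and with it the
# data rate and processing load at the same bit depth and frame rate.
print(UHD_8K / UHD_4K)  # 4.0
```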
So in conclusion, in my humble opinion at least, 8K is more about the sales pitch than actual practical use and application. Some people will use it just because they can and it sounds impressive. But for most of us right now it simply isn't necessary and it may well be a step too far.
So this landed in my inbox today. Atomos are releasing what on paper at least is a truly remarkable new recorder and monitor, the Shogun 7.
For some time now the Atomos Inferno has been my go-to monitor. It’s just so flexible and the HDR screen is wonderful. But the new Shogun 7 looks to be quite a big upgrade.
The screen is claimed to be able to display an astounding 1,000,000:1 contrast ratio and 15+ stops of dynamic range. That means you will be able to shoot in log with almost any camera and see the log output 1:1. No need to artificially reduce the display range, no more flat looking log or raw, just a real look at what you are actually shooting.
I’m off to NAB at the weekend and I will be helping out on the Atomos booth, so I will be able to take a good look at the Shogun 7. If it comes anywhere near to the specs in the press release it will be a must-have piece of kit whether you shoot on an FS5 or Venice!
Here's the press release:
Melbourne, Vic – 4 April, 2019:
The new Atomos Shogun 7 is the ultimate 7-inch HDR monitor, recorder and switcher. Precision-engineered for the film and video professional, it uses the very latest video technologies available. Shogun 7 features a truly ground-breaking HDR screen – the best of any production monitor in the world. See perfection on the all-new 1500nit daylight-viewable, 1920×1200 panel with an astounding 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. Shogun 7 will truly revolutionize the on-camera monitoring game.
Bringing the real world to your monitor
With Shogun 7 blacks and colors are rich and deep. Images appear to ‘pop’ with added dimensionality and detail. The incredible Atomos screen uses a unique combination of advanced LED and LCD technologies which together offer deeper, better blacks than rival OLED screens, but with the much higher brightness and vivid color performance of top-end LCDs. Objects appear more lifelike than ever, with complex textures and gradations beautifully revealed. In short, Shogun 7 offers the most detailed window into your image, truly changing the way you create visually.
The Best HDR just got better
A new 360 zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color, yielding jaw-dropping results. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered. For the first time you can enjoy on-screen the same dynamic range, palette of colors and shades that your camera sensor sees.
On-set HDR redefined with real-time Dolby Vision HDR output
Atomos and Dolby have teamed up to create Dolby Vision HDR "live" – the ultimate tool to see HDR live on-set and carry your creative intent from the camera through into HDR post production. Dolby have optimised their amazing target display HDR processing algorithm, which Atomos now have running inside the Shogun 7. It brings real-time automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to the Dolby Vision TV and magically, automatically, AtomOS 10 analyses the image, queries the TV, and applies the right color and brightness profiles for the maximum HDR experience on the display. Enjoy complete confidence that your camera's HDR image is optimally set up and looks just the way you wanted it. It is an invaluable HDR on-set reference check for the DP, director, creatives and clients – making it a completely flexible master recording and production station.
“We set out to design the most incredibly high contrast and detailed display possible, and when it came off the production line the Shogun 7 exceeded even our expectations. This is why we call it a screen with “Unbelievable HDR”. With multi-camera switching, we know that this will be the most powerful tool we’ve ever made for our customers to tell their stories“, said Jeromy Young, CEO of Atomos.
Shogun 7 records the best possible images up to 5.7kp30, 4kp120 or 2kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to reliable AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus a HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW in up to 5.7kp30, 4kp120 DCI/UHD and 2kp240 DCI/HD, depending on the camera’s capabilities. 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most Quad Link, Dual Link or Single Link SDI cinema cameras. With Shogun 7 every pixel is perfectly preserved with data rates of up to 1.8Gb/s.
Monitor and record professional XLR audio
Shogun 7 eliminates the need for a separate audio recorder. Add 48V stereo mics via an optional balanced XLR breakout cable. Select Mic or Line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. You can monitor the selected stereo track via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.
AtomOS 10, touchscreen control and refined body
Atomos continues to refine the elegant and intuitive AtomOS operating system. Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the award-winning Ninja V. Icons and colors are designed to ensure that the operator can concentrate on the image when they need to. The completely new body of Shogun 7 has a sleek, Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.
AtomOS 10 on Shogun 7 has the full range of monitoring tools that users have come to expect from Atomos, including Waveform, Vectorscope, False Color, Zebras, RGB parade, Focus peaking, Pixel-to-pixel magnification, Audio level meters and Blue only for noise analysis.
Portable multi-cam live switching and recording for Shogun 7 and Sumo 19
Shogun 7 is also the ultimate portable touch-screen controlled multi-camera switcher with asynchronous quad-ISO recording. Switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update. Sumo19 and Shogun 7 are the ideal devices to streamline your multi-camera live productions.
Enjoy the freedom of asynchronous switching, plus use genlock in and out to connect to existing AV infrastructure. Once the recording is over, just import the XML file into your NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed as well as a second track that switches between the digital audio inputs to match the switched feed. This amazing functionality makes Shogun 7 and Sumo19 the most flexible in-the-field switcher-recorder-monitors available.
Shogun 7 will be available in June 2019, priced at US$1,499 / €1,499 plus local taxes, from authorized Atomos dealers.
There is something very special about the way anamorphic images look, something that’s not easy to replicate in post production. Sure you can shoot in 16:9 or 17:9 and crop down to the typical 2.35:1 aspect ratio and sure you can add some extra anamorphic style flares in post. But what is much more difficult to replicate is all the other distortions and the oval bokeh that are typical of an anamorphic lens.
Anamorphic lenses work by distorting the captured image, squeezing or compressing it horizontally so that a wider field of view fits onto the sensor or film frame. The amount of squeeze that you will want to use depends on the aspect ratio of the sensor or film gate. With full frame 35mm film cameras, or cameras with a 4:3 aspect ratio sensor or gate, you would normally use an anamorphic lens that squeezes the image by 2 times, and most anamorphic cinema lenses are 2x anamorphic, that is, the image is squeezed 2x horizontally. You can use these on cameras with a 16:9 or 17:9 Super35mm sensor, but because a Super35 sensor already has a wide aspect ratio, a 2x squeeze is much more than you need for the typical cinema-style final aspect ratio of 2.39:1.
For most Super35mm cameras it is normally better to use a lens with a 1.33x squeeze, as a 1.33x squeeze on a 16:9 Super35 sensor results in a final aspect ratio close to the classic cinema aspect ratio of 2.39:1.
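The arithmetic behind choosing a squeeze factor is simple: the final de-squeezed aspect ratio is just the sensor's native aspect ratio multiplied by the squeeze. A minimal sketch (the function name is my own, just for illustration):

```python
def final_aspect(sensor_w: float, sensor_h: float, squeeze: float) -> float:
    """Final de-squeezed aspect ratio = native sensor aspect ratio x squeeze factor."""
    return (sensor_w / sensor_h) * squeeze

# 2x anamorphic on a 4:3 gate gives roughly 2.67:1 (commonly delivered as 2.39:1)
print(round(final_aspect(4, 3, 2.0), 2))    # 2.67
# 1.33x anamorphic on a 16:9 Super35 sensor gives roughly 2.37:1, close to 2.39:1
print(round(final_aspect(16, 9, 1.33), 2))  # 2.36
```

This is why a 2x squeeze on an already-wide 16:9 sensor overshoots badly (16/9 × 2 ≈ 3.56:1), while 1.33x lands almost exactly on the classic cinema ratio.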
Traditionally anamorphic lenses have been very expensive: the complex shapes of the anamorphic lens elements are much harder to make than a normal spherical lens. However, another option is to use an anamorphic adapter on the front of an existing lens to turn it into an anamorphic lens. SLR Magic, who specialise in niche lenses and adapters, have had a 50mm diameter 1.33x anamorphic adapter available for some time. I’ve used this with the FS7 and other cameras in the past, but the 50mm diameter of the adapter limits the range of lenses it can be used with (there is also a 50mm 2x anamorphot for full frame 4:3 aspect ratio sensors from SLR Magic).
Now SLR Magic have a new, larger 65mm adapter. The 1.33-65 Anamorphot has a much larger lens element, so it can be used with a much wider range of lenses. In addition it has a calibrated focus scale on its focus ring. One thing to be aware of with adapters like these is that you have to focus both the adapter and the lens you are using it on. For simple shoots this isn’t too much of a problem, but if you are moving the camera a lot, or the subject is moving around a lot, trying to focus both lenses together can be a challenge.
Enter the PD Movie Dual Channel follow focus.
The PD Movie Dual follow focus is a motorised follow focus system that can control two focus motors at the same time. You can get both wired and wireless versions depending on your needs and budget. For the anamorphic shoot I had the wired version (I do personally own a single channel PD Movie wireless follow focus). Setup is quick and easy: you simply attach the motors to your rods, position the gears so they engage with the gear rings on the lens and the anamorphot, and press a button to calibrate each motor. It takes just a few moments and then you are ready to go. Now when you turn the PD Movie focus control wheel, both the taking lens and the anamorphot focus together.
I used the anamorphot on both the Fujinon MK18-55mm and the MK50-135mm. It works well with both lenses, but you can’t use focal lengths wider than around 35mm without the adapter causing some vignetting. So on the 18-55 you can only really use around 35 to 55mm. I would note that the adapter does act a little like a wide angle converter, so even at 35mm the field of view is pretty wide. I certainly didn’t feel that I was only ever shooting at long focal lengths.
Like a lot of lens adapters there are some things to consider. You are putting a lot of extra glass in front of your main lens, so it will need some support. SLR Magic do a nice support bracket for 15mm rods, and this is actually essential as it stops the adapter from rotating and keeps it correctly oriented so that your anamorphic squeeze remains horizontal at all times. Also, if you try to use too large an aperture the adapter will soften the image. I found that it worked best between f8 and f11, but it was possible to shoot at f5.6. If you go wider than this you get quite a lot of image softening away from the very center of the frame. This might work for some projects where you really want to draw the viewer to the center of the frame, or if you want a very stylised look, but it didn’t suit this particular project.
The out of focus bokeh has a distinct anamorphic shape, look and feel. As you pull focus the shape of the bokeh changes horizontally; this is one of the key things that makes anamorphic content look different to spherical. As the adapter only squeezes by 1.33x, this is not as pronounced as it would be if you shot with a 2x anamorphic. Of course the other thing most people notice about anamorphic images is lens flares that streak horizontally across the image. Intense light sources just off frame would produce blue/purple streaks across the image, and if you introduce very small point light sources into the shot you will get a similar horizontal flare. If flares are your thing it works best if you have a very dark background. Overall the lens didn’t flare excessively, so my shots are not full of flares like a JJ Abrams movie, but when it did flare the effect is very pleasing. Watch the video linked above and judge for yourself.
Monitoring and De-Squeeze.
When you shoot anamorphic you normally record the horizontally squashed image and then in post production you “de-squeeze” it, restoring the correct proportions to produce a letterbox, wide-screen style image. You can shoot anamorphic without de-squeezing the image, provided you don’t mind looking at images that are horizontally squashed in your viewfinder or on your monitor. But these days plenty of monitors and viewfinders can de-squeeze the anamorphic image so that you can view it with the correct aspect ratio. The Glass Hub film was shot using a Sony PMW-F5 recording to the R5 raw recorder. The PMW-F5 has the ability to de-squeeze the image for the viewfinder built in, but I also used an Atomos Shogun Inferno to monitor, as I was going to be producing HDR versions of the film. The Shogun Inferno has both 2x and 1.33x de-squeeze built in, so I was able to take the distorted S-Log3 output from the camera, convert it to an HDR PQ image and de-squeeze it all at the same time in the Inferno. This made monitoring really easy and effective.
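In pixel terms, de-squeeze can be done either by stretching the frame horizontally (keeping full vertical resolution) or by compressing it vertically (keeping the recorded width). A minimal sketch of the arithmetic, assuming a 1.33x squeeze and a UHD recording (this is just illustration, not anything from the Shogun’s firmware):

```python
def desqueeze(width: int, height: int, squeeze: float, mode: str = "stretch"):
    """Return the de-squeezed frame size for an anamorphic recording.

    mode="stretch": stretch horizontally, keeping the full vertical resolution.
    mode="squash":  compress vertically, keeping the recorded width.
    """
    if mode == "stretch":
        return round(width * squeeze), height
    return width, round(height / squeeze)

# 1.33x anamorphic recorded as 3840x2160 UHD
print(desqueeze(3840, 2160, 1.33, "stretch"))  # (5107, 2160)
print(desqueeze(3840, 2160, 1.33, "squash"))   # (3840, 1624)
```

Either way the resulting aspect ratio is about 2.36:1, which is what the monitor shows you when de-squeeze is enabled.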
I used DaVinci Resolve for the post production. In the past I might have done my editing in Adobe Premiere and the grading in Resolve, but Resolve is now a very capable edit package, so I completed the project entirely in Resolve. I used the ACES colour managed workflow, as ACES means I don’t need to worry about LUTs, and in addition ACES adds a really nice film-like highlight roll-off to the output. If you have never tried a colour managed workflow for log or raw material, you really should!
The SLR Magic 65-1.33 paired with the Fujinon MK lenses provides a relatively low cost entry into the world of anamorphic shooting. You can shoot anywhere from around 30-35mm to 135mm. The PD Movie dual motor focus system means that there is no need to try to use both hands to focus both the anamorphot and the lens together. The anamorphot + lens behave much more like a quality dedicated anamorphic zoom lens, but at a fraction of the cost. While I wouldn’t use it to shoot everything the Anamorphot is a really useful tool for those times you want something different.
I don’t normally get involved in stuff like this, but this has me quite angry. While Oscars will still be awarded for Cinematography and Editing, the presentations will take place during a commercial break (the same goes for Live Action Short, and Makeup and Hairstyling).
Cinematography and Editing are at the very heart of every movie. If we go back to the very beginnings of cinema it was all about the cinematography. The word cinema is a shortened form of the word cinematography, which means to write or record movement. There were often no actors, no special effects, no sound, just moving images, perhaps a shot of a train or people in a street. Since then cinematography has continued to advance both artistically and technically. At the same time editing has become as important as the script writing. The cinematography and editing determine the mood, look, pace and style of the film.
As Guillermo del Toro has commented: “Cinematography and Editing are at the very heart of our craft, they are not inherited from a theatrical tradition or literary tradition, they are cinema itself”. I completely agree, so the presentations for Cinematography and Editing deserve the same coverage and respect as every other category. Cinematographers and editors are often overlooked by filmgoers; they rarely make the mainstream headlines in the same way that leading actors do. So really it is only fair that AMPAS should try to address this and give both the credit and coverage deserved by those men and women who make cinema possible.
Camera setup, reviews, tutorials and information for pro camcorder users from Alister Chapman.