Category Archives: Technology

Is This The Age Of The Small Camera Part 2

This is part 2 of my two-part look at whether small cameras such as the Sony FX3 or A1 really can replace full-size cinema cameras.

For this part of the article to make sense you will want to watch the YouTube clips that are linked here full screen and at the highest possible quality settings, preferably 4K. Please don’t cheat: watch them in the order they are presented, as I hope this will allow you to better understand the points I am trying to make.

Also, in the videos I have not put the different cameras that were tested side by side. You may ask why – well, it’s because when you watch a video online or a movie in a cinema you don’t see different cameras side by side on the same screen at the same time. A big point of all of this is that we are now at a place where the quality of even the smallest and cheapest large sensor camera is likely good enough to make a movie. It’s not necessarily a case of whether camera A is better than camera B; the question is whether the audience will know or care which camera you used. There are 5 cameras and I have labelled them A through to E.

The footage presented here was captured during a workshop I did for Sony at Garage Studios in Dubai (if you need a studio space in Dubai they have some great low budget options). We weren’t doing carefully orchestrated camera tests, but I did get the chance to quickly capture some side by side content.

So let’s get into it.

THE FINAL GRADE:

In many regards I think this is the most important clip as this is how the audience would see the 5 cameras. It represents how they might look at the end of a production. I graded the cameras using ACES in DaVinci Resolve. 

Why ACES? Well, the whole point of ACES is to neutralise any specific camera “look”. The ACES input transform takes the camera’s footage and converts it to a neutral look that is meant to represent the scene as it actually was, but with a film-like highlight roll off added. From here the idea is that you can apply the same grade to almost any camera and the end result should look more or less the same. The look of different cameras is largely a result of differences in the electronic processing of the image in post production rather than of large differences between the sensors. Most modern sensors capture a broadly similar range of colours with broadly similar dynamic range. So, provided you know what recording levels represent what colours in the scene, it is pretty easy to make any camera look like any other, which is what ACES does.
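To make the idea concrete, here is a minimal numerical sketch of what neutralising camera “looks” means. The two log curves below are invented for illustration – they are not real camera transforms or actual ACES maths – but they show how applying each camera’s own inverse transform (the input transform step) lands very different encodings in one shared scene-linear space, where a single shared grade then produces matching results.

```python
import numpy as np

# Two hypothetical cameras encode the same scene luminance with different
# (made up) log curves. Undoing each camera's own curve puts both back in
# the same scene-linear space, so one shared grade then matches.

def cam_a_encode(linear):            # hypothetical log curve, camera A
    return np.log2(linear * 16 + 1) / 8

def cam_a_decode(code):              # exact inverse: back to scene linear
    return (2 ** (code * 8) - 1) / 16

def cam_b_encode(linear):            # a different hypothetical curve, camera B
    return np.log10(linear * 99 + 1) / 2

def cam_b_decode(code):
    return (10 ** (code * 2) - 1) / 99

def shared_grade(linear):            # the same grade applied to both cameras
    return np.clip(linear * 1.2, 0, None) ** (1 / 2.2)

scene = np.array([0.02, 0.18, 0.9])  # shadows, mid grey, highlight

graded_a = shared_grade(cam_a_decode(cam_a_encode(scene)))
graded_b = shared_grade(cam_b_decode(cam_b_encode(scene)))

# The raw encodings differ, but the graded results match.
print(np.allclose(graded_a, graded_b))
```

The point is not the specific curves but the workflow: per-camera inverse transforms in, one grade out.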

The footage was captured during a workshop; we weren’t specifically testing the different cameras in great depth. For the workshop the aim was simply to show how any of these cameras could work together. For simplicity and speed I manually set each camera to 5600K, and as a result of the inevitable variations you get between different cameras – how each is calibrated and how each applies the white balance settings – there were differences in the colour balance of each camera.

To neutralise these white balance differences the grading process started by using the colour chart to equalise the images from each camera using the “match” function in DaVinci Resolve. Then each camera had exactly the same grade applied – there are no grading differences, they are all graded in the same way.
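For anyone curious what a chart match does mathematically, here is a rough sketch: solve for the 3x3 matrix that best maps the recorded chart patch values onto the reference values. Resolve’s actual match function is more sophisticated (it also handles gamma and saturation), and the patch values below are invented purely for illustration.

```python
import numpy as np

# Target RGB for four hypothetical chart patches
reference = np.array([
    [0.40, 0.22, 0.15],
    [0.20, 0.30, 0.50],
    [0.55, 0.50, 0.10],
    [0.18, 0.18, 0.18],
])

# The same patches as "recorded" by a camera with a warm colour cast
cast = np.array([[1.10, 0.00, 0.00],
                 [0.00, 1.00, 0.00],
                 [0.00, 0.00, 0.85]])
recorded = reference @ cast.T

# Solve recorded @ M ~= reference for the 3x3 correction matrix M
# in a least-squares sense
M, *_ = np.linalg.lstsq(recorded, reference, rcond=None)

corrected = recorded @ M
print(np.allclose(corrected, reference))
```

Once the matrix is known it can be applied to every pixel of that camera’s footage, which is why the chart only needs to appear in one reference shot.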

Below are frame grabs from each camera with a slightly different grade to the video clips, again, they all look more or less the same.

The graded image from camera A. Click on the image to view the full resolution image.

The graded image from camera B. Click on the image to view the full resolution image.

The graded image from camera C. Click on the image to view the full resolution image.

The graded image from camera D. Click on the image to view the full resolution image.

The graded image from camera E. Click on the image to view the full resolution image.



The first thing to take away from all of this, then, is that you can make any camera look like pretty much any other, and a chart such as the ColorChecker Video, together with software that can read the chart and correct the colours according to it, makes this much easier to do.

To allow for issues with the quality of YouTube’s encoding etc, here is a 400% crop of the same clips:

What I am expecting is that most people won’t actually see a great deal of difference between any of the cameras. The cheapest camera is $6K and the most expensive $75K, yet it’s hard to tell which is which or see much difference between them. Things that do perhaps stand out initially in the zoomed in image are the softness/resolution differences between the 4K and 8K cameras, but in the first uncropped clip this difference is much harder to spot and I don’t think an audience would notice, especially if one camera is used on its own so the viewer has nothing to directly compare it with. It is possible that there are also small focus differences between each camera; I did try to ensure each was equally well focussed, but small errors may have crept in.

WHAT HAPPENS IF WE LIFT THE SHADOWS?

OK, so let’s pixel peep a bit more and artificially raise the shadows so that we can see what’s going on in the darker parts of the image.

There are differences, but again there isn’t a big difference between any of the cameras. You certainly couldn’t call them huge, and in all likelihood, even if for some reason you needed to lift the shadows by an unusually large amount as done here (about 2.5 stops), the difference between “best” and “worst” isn’t large enough for any one of these cameras to be deemed unusable compared to the others.

SO WHY DO YOU WANT A BETTER CAMERA?

So, if we are struggling to tell the difference between a $6K camera and a $75K one why do you want a “better” camera? What are the differences and why might they matter?

When I graded the footage from these cameras in the workshop it was actually quite difficult to find a way to “break” the footage from any of them. For the majority of grading processes that I tried they all held up really well and I’d be happy to work with any of them; even the cameras using the highly compressed internal recordings held up well. But there are differences, they are not all the same, and some are easier to work with than others.

The two cheapest cameras were a Sony FX3 and a Sony A1. I recorded using their built in codecs, XAVC-SI in the FX3 and XAVC-HS in the A1. These are highly compressed 10 bit codecs. The other cameras were all recorded using their internal raw codecs, which are either 16 bit linear or 12 bit log. At some point I really do need to do a proper comparison of the internal XAVC from the FX3 and the ProRes RAW that can be recorded externally. But it is hard to do a fully meaningful test, as getting the ProRes RAW into Resolve requires transcoding and a lot of other awkward steps. From my own experience the difference in what you can do with XAVC vs ProRes RAW is very small.

One thing that happens with most highly compressed codecs such as H264 (XAVC-SI) or H265 (XAVC-HS) is a loss of some very fine textural information, and the image breaking up into blocks of data. But as I am showing these clips via YouTube in a compressed state I needed to find a way to illustrate the subtle differences that I see when looking at the original material. So, to show the difference between the different sensors and codecs within these cameras I decided to pick a colour using the Resolve colour picker and then turn that colour into a completely different one, in this case pink.

What this allows you to see is how precisely the picked colour is recorded, and it also shows up some of the macro block artefacts. Additionally it gives an indication of how fine the noise is and the textural qualities of the recording. In this case the finer the pink “noise” the better, as this is an indication of smaller, finer textural differences in the image. These smaller textural details would be helpful if chroma keying or perhaps for some types of VFX work. It might (and I say might because I’m not convinced it always will) allow you to push a very extreme grade a little bit further.
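As a rough illustration of the mechanics, this is the kind of operation the colour picker performs: every pixel within a tolerance of the picked colour gets replaced with pink. The tiny synthetic image below is invented just to show the idea; on real compressed footage the recoloured region is what reveals the macro block structure.

```python
import numpy as np

# A 2x4 synthetic "image": the left half is near the picked green,
# the right half is an unrelated red.
image = np.array([
    [[0.20, 0.50, 0.30], [0.21, 0.49, 0.31], [0.80, 0.10, 0.10], [0.81, 0.11, 0.09]],
    [[0.19, 0.51, 0.29], [0.22, 0.48, 0.32], [0.79, 0.12, 0.11], [0.80, 0.10, 0.10]],
])

picked = np.array([0.20, 0.50, 0.30])   # the colour chosen with the picker
pink = np.array([1.00, 0.30, 0.70])
tolerance = 0.05

# Euclidean distance in RGB from every pixel to the picked colour
distance = np.linalg.norm(image - picked, axis=-1)
mask = distance < tolerance

result = image.copy()
result[mask] = pink

print(mask.sum())   # number of pixels that got recoloured
```

On a clean recording the mask follows the real object edges; on a heavily compressed one it picks up the codec’s block boundaries as well.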

I would guess that by now you are starting to figure out which camera is which – the cameras are an FX3, an A1, a Burano, a Venice 2 and an Arri LF.

In this test you should be able to identify the highly compressed cameras from the raw cameras. The pink areas from the raw cameras are finer and less blocky; this is a good representation of the benefit of less compression and a greater bit depth.

Camera A. Click on the image to view the full resolution image.

Compression and codec, camera B. Click on the image to view the full resolution image.

Compression and codec, camera C. Click on the image to view the full resolution image.

Compression and codec, camera D. Click on the image to view the full resolution image.

Compression and codec, camera E. Click on the image to view the full resolution image.



But even here the difference isn’t vast. It certainly, absolutely, exists. But at the same time you could push ANY of these cameras around in post production and, if you’ve shot well, none of them are going to fall apart.

As a side note I will say that I find grading linear raw footage such as the 16 bit X-OCN from a Venice or Burano more intuitive compared to working with compressed Log. As a result I find it a bit easier to get to where I want to be with the X-OCN than the XAVC. But this doesn’t mean I can’t get to the same place with either.

RESOLUTION MATTERS.

Not only is compression important but so too is resolution. To some degree increasing the resolution can make up for a lesser bit depth. As these cameras all use Bayer sensors the chroma resolution will be somewhat less than the luma resolution. A 4K sensor such as the one in the FX3 or the Arri LF will have much lower chroma resolution than the 8K A1, Burano or Venice 2. If we look at the raised shadows clip again we can see some interesting things going on in the girl’s hair.

If you look closely camera D has a bit of blocky chroma noise in the shadows. I suspect this might be because this is one of the 4K sensor cameras and the lower chroma resolution means the chroma noise is a bit larger.
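The chroma resolution point comes down to simple photosite arithmetic. The counts below are rough – real debayering is far more sophisticated than this, and the Arri LF’s sensor is not exactly 4K – but they illustrate why an 8K Bayer sensor delivering at 4K has much fuller chroma sampling than a 4K Bayer sensor does.

```python
# On a Bayer sensor half the photosites are green and a quarter each are
# red and blue, so chroma detail is sampled more coarsely than luma.

def bayer_samples(width, height):
    total = width * height
    return {"green": total // 2, "red": total // 4, "blue": total // 4}

uhd_pixels = 3840 * 2160                 # pixels in a UHD/4K deliverable

sensor_4k = bayer_samples(3840, 2160)    # roughly an FX3 / Arri LF class sensor
sensor_8k = bayer_samples(7680, 4320)    # roughly an A1 / Venice 2 / Burano class

# Red samples available per delivered 4K pixel:
print(sensor_4k["red"] / uhd_pixels)     # 0.25 - chroma must be interpolated
print(sensor_8k["red"] / uhd_pixels)     # 1.0  - one real sample per pixel
```

The same ratio applies to blue, which is why the blocky chroma noise shows up more readily on the 4K sensor cameras.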

I expect that by now you have an idea of which camera is which, but here is the big reveal: A is the FX3, B is the Venice 2, C is Burano, D is an Arri LF, and E is the Sony A1.

What can we conclude from all of this: 

There are differences between codecs. A better codec with a greater bit depth will give you more textural information. It is not necessarily that raw will always be better than YUV/YCbCr, but because of raw’s compression efficiency it is possible to have very low levels of compression and a deep bit depth. So, if you are able to record with a better codec or greater bit depth, why not do so? There are some textural benefits and there will be fewer compression artefacts. BUT this doesn’t mean you can’t get a great result from XAVC or another compressed codec.

If using a Bayer sensor, then using a sensor with more “K” than the delivery resolution can bring textural benefits.

There are differences in the sensors, but these differences are not really as great as many might expect. In terms of dynamic range they are all actually very close, close enough that in the real world it isn’t going to make a substantial difference. As far as your audience is concerned I doubt they would know or care. Of course we have all seen the tests where you greatly underexpose a camera and then bring the footage back to normal, and these can show differences. But that’s not how we shoot things. If you are serious about getting the best image that you can, then you will light to get the contrast and exposure that you want. What isn’t in this test is rolling shutter, but generally I rarely see issues with rolling shutter these days. If you are worried about RS, then the Venice 2 is excellent and the best of the group tested here.

Assuming you have shot well there is no reason why an audience should find the image quality from the $6K FX3 unacceptable, even on a big screen. And if you were to mix an FX3 with a Venice 2 or Burano, again, provided you have used each camera equally well, I doubt the audience would spot the difference.

BACK TO THE BEGINNING:

So this brings me back to where I started in part 1. I believe this is the age of the small camera – or at least there is no reason why you can’t use a camera like an FX3 or an A1 to shoot a movie. While many of my readers I am sure will focus on the technical details of the image quality of camera A against camera B, in reality these days it’s much more about the ergonomics and feature set as well as lens and lighting choices.

A small camera allows you to be quick and nimble, but a bigger camera may give you a lot more monitoring options as well as other things such as genlock. And, if you can, having a better codec doesn’t hurt. So there is no one-size-fits-all camera that will be the right tool for every job.

Day For Night With Infrared.

Many of you may have already seen articles about how DP Hoyte Van Hoytema used a Panavision System 65 film camera paired with an Alexa 65 modified to be sensitive to infrared light to shoot day for night on the film “Nope”. https://www.cined.com/filming-night-scenes-thinking-outside-the-box-on-the-film-nope/

Can You Make It Work?

Well, I was recently asked if I could come up with a rig to do the same using Sony cameras for an upcoming blockbuster feature with an A-list director being shot by a top DP. This kind of challenge is something I enjoy immensely, so how could I not accept! I had some insight into how Hoyte Van Hoytema did it, but I had none of the fine details, and often it’s the fine details that make all the difference. This was no exception: I discovered many small things that need to be just right if this process is to work well, and a lot of things that can trip you up badly.

So a frantic couple of weeks ensued as I tried to learn everything I could about infrared photography and video and how it could be used to improve traditional day for night shooting. I don’t claim any originality in the process, but there is a lot of information missing about how it was actually done in Nope. I have shot with infrared before, so it wasn’t all new, but I had never used it this way before.

As I did a lot of 3D work when 3D was really big around 15 years ago, including designing award winning 3D rigs, I knew how to combine two cameras on the same optical axis. Even better, I still had a suitable 3D rig, so at least that part of the equation was going to be easy (or at least that’s what I thought).

Building a “Test Mule”.

The next challenge was to create a low cost “test mule” camera before even considering what adaptations might be needed for a full blown digital cinema camera. To start with this needed to be cheap, but it also needed to be full frame and capable of taking a wide range of cinema lenses and sensitive to both visible and infrared light. So, I took an old A7S that had been gathering dust for a while, dismantled it and removed the infrared filter from the sensor.

A7S being modified for infrared (full spectrum).
Panavised and infrared sensitive A7S with Panavision Primo lens.

As the DP wanted to test the process with Panavision lenses the camera was fitted with a PV70 mount and then collimated in its now heavily modified state (collimation has some interesting challenges when working with the very different wavelengths of infrared light compared to visible light). Now I could start to experiment, pairing the now infrared sensitive A7S with a second camera on the 3D rig. We soon found issues with this setup, but it allowed me to take the testing to the next stage before committing to modifying a more expensive camera for infrared.




This testing was needed to determine exactly what range of infrared light would produce the best results. The range of infrared you use is determined by filters added to the camera to cut the visible light and only pass certain parts of the infrared spectrum. There are many options, and different filters work in slightly different ways. Not only do you need to test the infrared filters, you also need to consider how different neutral density filters might behave if you need to reduce the IR and visible light. Once I had narrowed down the range of filters I wanted to test, the next challenge was to find very high quality filters that could either be fitted inside the camera body behind the lens or that were big enough (120mm+) for the Panavision lenses that were being considered for the film.

Once I had some filters to play with (I had 15 different IR filters) the next step was to start test shooting. I cheated here a bit. For some of the initial testing I used a pair of zoom lenses as I was pairing the A7S with several different cameras for the visible spectrum. The scan areas of the different sensors in the A7S and the visible light cameras were typically very slightly different sizes. So, a zoom lens was used to provide the same field of view from both cameras so that both could be more easily optically aligned on the 3D rig. You can get away with this, but it makes more work for post production as the distortions in each lens will be different and need correcting. For the film I knew we would need identical scan sizes and matched lenses, but that could come later once we knew how much camera modification would be needed. To start with I just needed to find out what filtration would be needed.

At this point I shot around 100 different filter and exposure tests that I then started to compare in post production. When you get it all just right the sky in the infrared image becomes very dark, almost black and highlights become very “peaky”. If you use the luminance from the infrared camera with its black sky and peaky highlights and then add in a bit of colour and textural detail from the visible camera it can create a pretty convincing day for night look. Because you have a near normal visible light exposure you can fine tune the mix of infrared and visible in post production to alter the brightness and colour of the final composite shot giving you a wide range of control over the day for night look.
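As a sketch of the compositing idea only (not the production’s actual pipeline, which remains private), you can think of it as taking luminance from the IR plate and chroma from the visible plate, with a mix control. The single-pixel values below are invented; real compositing happens on aligned, graded plates.

```python
import numpy as np

def rgb_to_ycbcr(rgb):               # Rec.709 luma/chroma split
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return np.array([y, (b - y) / 1.8556, (r - y) / 1.5748])

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return np.array([r, g, b])

visible = np.array([0.55, 0.60, 0.75])   # daylit sky pixel, visible camera
infrared = np.array([0.05, 0.05, 0.05])  # same pixel from the IR camera: near black

ir_mix = 0.85                            # how much IR luminance to use

vis_ycc = rgb_to_ycbcr(visible)
ir_luma = rgb_to_ycbcr(infrared)[0]

# Blend the two luminances, keep the visible camera's chroma
night_luma = (1 - ir_mix) * vis_ycc[0] + ir_mix * ir_luma
night = ycbcr_to_rgb(np.array([night_luma, vis_ycc[1], vis_ycc[2]]))

print(night)   # much darker than the visible plate, colour retained
```

Because `ir_mix` can be adjusted per shot in post, you get exactly the kind of wide-ranging control over the brightness and colour of the final day for night look described above.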

So, now I knew how to do it, the next step was to take it from the test mule to a pair of matching cinema quality cameras and lenses for a full scale test shoot. When you have two cameras on a 3D rig the whole setup can get very heavy, very fast. Therefore the obvious camera to adapt was a Sony Venice 2 with the 8K sensor, as this can be made very compact by using the Rialto unit to split the sensor from the camera body – in fact one of the very first uses of Rialto was for 3D shooting on Avatar: The Way of Water.

With a bit of help from Panavision we adapted a Panavised Venice 2, making it full spectrum and then adding a carefully picked (based on my testing) special infrared filter into the camera’s optical path. This camera was configured using a Rialto housing to keep it compact and light so that when placed on the 3D rig with the visible light Venice the weight remained manageable. The lenses used were Panavision PV70 Primos (if you want to use these lenses for infrared, speak to me first – there are some things you need to know).

3D rig with an infrared capable Venice Rialto and a normal Venice 2 with Panavision Primo lenses.



And then, with the DP in attendance, with smoke and fog machines, lights and grip, we tested. For the first few shots we had scattered clouds, but soon the rain came and then it poured down for the rest of the day. Probably the worst possible weather conditions for a day for night shoot. But that’s what we had, and of course for the film itself there will be no guarantee of perfect weather.

Testing the complete day for night IR rig.

Testing how smoke behaves in infrared. Different types of smoke and haze and different types of lights behave very differently in infrared.

The large scale tests gave us an opportunity to test things like how different types of smoke and haze behave in infrared and also to take a look at interactions with different types of light sources. With the right lights you can do some very interesting things when you are capturing both visible light and infrared, opening up a whole new world of possibilities for creating unique looks in camera.

From there the footage went to the production company’s post production facility to produce dailies for the DP to view before being presented to the studio’s post production people. Once they understood the process and were happy with it there was a screening for the director, along with a number of other tests for lighting and lenses.


Along the way I have learnt an immense amount about this process and how it works: what filters to use and when, how to adapt different cameras, and how different lenses behave in the infrared spectrum (not all lenses can be used). Collimating adapted cameras for infrared is interesting, as many of the usual test rigs will produce misleading or confusing results. I’ve also identified several other ways that a dual camera setup can be used to enhance shooting night scenes, both day for night and at night, especially for effects heavy projects.

At the time of writing it looks like most of the night scenes in this film will be shot at night, they have the budget and time to do this. But the director and DP have indicated that there are some scenes where they do wish to use the process (or a variation of it), but they are still figuring out some other details that will affect that decision.

Whether it gets used for this film or not, I am now developing a purpose designed rig for day for night with infrared, as I believe it will become a popular way to shoot night scenes. My cameras of choice for this will be a pair of Venice cameras, but other cameras can be used provided one can be adapted for IR and both can be synchronised together. I will have a pair of Sony F55s, one modified for IR, available for lower budget productions, and a kit to reversibly adapt a Sony Venice. If you need a rig for day for night and someone that knows exactly how to do it, do get in touch!

I’m afraid I can’t show you the test results; that content is private and belongs to the production. The 3D rig is being modified – as you don’t need the ability to shoot with the cameras optically separated, removing the moving parts will make the rig more stable and easier to calibrate. Plus a new type of beam splitter mirror with better infrared transmission properties is on the way. As soon as I get an opportunity to shoot a new batch of test content with the adapted rig I will share it here.

XAVC-I or XAVC-L which to choose?

THE XAVC CODEC FAMILY

The XAVC family of codecs was introduced by Sony back in 2012. Until recently all flavours of XAVC were based on H264 compression. More recently new XAVC-HS versions were introduced that use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. These have both been around for a while now and are well tried and well tested.

XAVC-I

XAVC-I is a very good intra frame codec where each frame is individually encoded. It’s being used for Netflix shows, it has been used for broadcast TV for many years, and there are thousands and thousands of hours of great content that has been shot with XAVC-I without any issues. Most of the in flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you make a lot of effort to find them. But it is a high compression codec, so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ and as you can see there is very little difference between the two, even after several encoding passes.

XAVC-L

XAVC-L is a long GoP version of XAVC-I. Long GoP (Group of Pictures) codecs fully encode a start frame, and then for the next group of frames (typically 12 or more) only store the differences between this start frame and the next full frame at the start of the next group. They record the changes between frames using things like motion prediction and motion vectors which, rather than recording new pixels, move existing pixels from the first fully encoded frame through the subsequent frames if there is movement in the shot. Do note that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K, XAVC-L is 8 bit (while XAVC-I is 10 bit).
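The group-of-pictures structure can be sketched in a few lines. This is a deliberately simplified model with only fully coded I frames and difference-coded P frames (real encoders also use bi-directional B frames), using the 12 frame group length mentioned above.

```python
# One fully coded I frame starts each group; the following frames store
# only differences from what came before.

GOP_LENGTH = 12

def frame_type(n):
    # Simplified: no B frames, every non-I frame is a P frame
    return "I" if n % GOP_LENGTH == 0 else "P"

sequence = "".join(frame_type(n) for n in range(24))
print(sequence)   # two groups: an I frame followed by eleven P frames, twice
```

The efficiency and the fragility both come from the same place: eleven of every twelve frames carry only differences, so anything that makes frames genuinely different (motion, noise) costs the codec dearly.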

Performance and Efficiency.

Long GoP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GoP is around 2.5x more efficient than the I frame version, and this is why the bit rate of XAVC-I is around 2.5x higher than XAVC-L, so that for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, then there will be situations where XAVC-L can perform better than XAVC-I.
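To put the 2.5x figure in card space terms, here is the arithmetic with illustrative round bitrates; actual XAVC rates vary with resolution and frame rate, so treat these as ballpark numbers only.

```python
# Convert a codec bitrate into storage per hour of recording.

def gigabytes_per_hour(megabits_per_second):
    # Mb/s -> MB/s (divide by 8), x seconds per hour, -> GB
    return megabits_per_second / 8 * 3600 / 1000

xavc_i_mbps = 250     # intra frame: every frame fully encoded
xavc_l_mbps = 100     # long GoP: roughly 2.5x lower rate, similar quality

print(gigabytes_per_hour(xavc_i_mbps))   # GB per hour at the I frame rate
print(gigabytes_per_hour(xavc_l_mbps))   # GB per hour at the long GoP rate
```

That gap in storage is the whole reason long GoP exists, and also why its bit rate budget is so much tighter when complex motion turns up.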

Motion Artefacts.

BUT as soon as you add a lot of motion or a lot of extra noise (which looks like motion to a long GoP codec), long GoP codecs struggle, as they don’t typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let’s face it, the primary reason behind the use of long GoP encoding is to save space, and that’s done by decreasing the bit rate. So generally long GoP codecs have much lower bit rates so that they will actually provide those space savings. But that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GoP codec to process, as almost everything is different from frame to frame, including entirely new background information hidden behind the cars in one frame that becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water and flocks of birds are all very challenging and will often exhibit visible artefacts in a lower bit rate long GoP codec that you won’t ever get in the higher bit rate I frame version.
Concatenation.
A further issue is concatenation. The artefacts that occur in long GoP codecs often move in the opposite direction to the object that’s actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters or anyone where there may be multiple compression passes using long GoP codecs such as H264 or H265.

Quality depends on the motion.

So, when things are just right and the scene suits XAVC-L it will perform well, and it might show marginally fewer artefacts than XAVC-I – but the artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. When there is complex motion, however, XAVC-L can produce visible artefacts. And it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts; your footage will be consistently good no matter what you shoot.

Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec, while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, and it can be beneficial if you need to save card or disk space. But be aware of its limitations and avoid it if you are shooting handheld or shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K, XAVC-L is 8 bit while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.

Why Do Cameras No Longer Have PAL/NTSC Options.

PAL and NTSC are very specifically broadcast standards for standard definition television. PAL (Phase Alternating Line) and NTSC (National Television System Committee) are analog interlaced standards specifically for standard definition broadcasting and transmission. These standards are now only very, very rarely used for broadcasting. And as most modern cameras are now high definition, digital and most commonly use progressive scan, these standards are no longer applicable to them.

As a result you will now rarely see these as options in a modern video camera. In our now mostly digital and HD/UHD world the same standards are used whether you are in a country that used to be NTSC or used to be PAL. The only difference now is the frame rate. Countries that have 50Hz mains electricity and that used to be PAL countries predominantly use frame rates based on multiples of 25 frames per second. Countries that have 60Hz mains and that used to be NTSC countries use frame rates based around multiples of 29.97 frames per second. It is worth noting that where interlace is still in use the frame rate is half of the field rate. So, where someone talks about 60i (meaning 60 fields per second) the frame rate will actually be 29.97 frames per second, with each frame having two fields. Where someone mentions 50i the frame rate is 25 frames per second.
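The field versus frame rate relationship is easy to get wrong, so here it is as simple arithmetic. Note that “29.97” is really the NTSC-legacy rational rate 30000/1001, which is why exact maths uses fractions rather than decimals.

```python
from fractions import Fraction

# Interlaced formats count fields per second; two fields make one frame.
FIELDS_PER_FRAME = 2

def frame_rate(field_rate):
    return field_rate / FIELDS_PER_FRAME

print(frame_rate(Fraction(50)))                    # 50i -> 25 frames per second
print(frame_rate(Fraction(60000, 1001)))           # 60i -> 30000/1001 fps
print(float(frame_rate(Fraction(60000, 1001))))    # approximately 29.97
```

Keeping the rate as a fraction matters in practice: timecode and audio sync drift if you treat 29.97 as an exact decimal.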

Most modern cameras rather than offering the ability to switch between PAL and NTSC now instead offer the ability to switch between 50 and 60Hz. Sometimes you may still see a “PAL Area” or “NTSC Area” option – note the use of the word “Area”. This isn’t switching the camera to PAL or NTSC, it is setting up the camera for areas that used to use PAL or used to use NTSC. 

My Exposure Looks Different On My LCD Compared To My Monitor!

This is a common problem and something people often complain about: the brightness of the image on the camera’s LCD screen and on an external monitor never seem to quite match. Or, after the shoot and once in the grading suite, the pictures look brighter or darker than they did at the time of shooting.

A little bit of background info: most of the small LCD screens used on video cameras are SDR Rec-709 devices. If you were to calibrate the screen correctly, the brightness of white on the screen would be 100 nits. It’s also important to note that this is the same level used for monitors designed to be viewed in dimly lit rooms, such as edit or grading suites, as well as TVs at home.

The issue with uncovered LCD screens and monitors is that your perception of brightness changes according to the ambient light levels. Indoors in a dark room the image will appear to be quite bright; outside on a sunny day it will appear to be much darker. It’s why all high end viewfinders have enclosed eyepieces – not just to help you focus on a small screen, but also because that way you are always viewing the screen under the same, always dark, viewing conditions. It’s why a video village on a film set will be in a dark tent. This allows you to calibrate the viewfinder with white at the correct 100 nit level, and then, when viewed in a dark environment, your images will look correct.


If you are trying to use an unshaded LCD screen on a bright sunny day you may find you end up overexposing as you compensate for the brighter viewing conditions. Or if you also have an extra monitor that is either brighter or darker, you may become confused as to which one to base your exposure assessments on. Pick the wrong one and your exposure may be off. My recommendation is to get a loupe for the LCD; your exposure assessment will then be much more consistent as you will always be viewing the screen under the same, near ideal conditions.

It’s also been suggested that the camera and monitor manufacturers should make more small, properly calibrated monitors. But I think a lot of people would be very disappointed with a properly calibrated but uncovered display where white is 100 nits, as it would be too dim for most outside shoots: great indoors in a dim room such as an edit or grading suite, but unusably dim outside on a sunny day. Most smaller camera monitors are uncalibrated and place white 3 or 4 times brighter, at 300 nits or so, to make them more easily viewable outside. But because there is no standard for this there can be great variation between different monitors, making it hard to know which one to trust at any given ambient light level.

SDI Failures and what YOU can do to stop it happening to you.

Updated 22/01/2024.

Sadly this is not an uncommon problem. Suddenly and seemingly for no apparent reason the SDI (or HDMI) output on your camera stops working. And this isn’t a new problem either, SDI and HDMI ports have been failing ever since they were first introduced. This issue affects all types of SDI and HDMI ports. But it is more likely with higher speed SDI ports such as 6G or 12G as they operate at higher frequencies and as a result the components used are more easily damaged as it is harder to protect them without degrading the high frequency performance.

Probably the most common cause of an SDI/HDMI port failure is the use of the now near ubiquitous D-Tap cable to power accessories connected to the camera. The D-Tap connector is, sadly, a shockingly crude design. Not only is it possible to plug many of the cheaper ones in the wrong way around, but with a standard D-Tap plug there is no mechanism to ensure that the negative or “ground” connection makes or breaks before the live connection. There is, however, a special but much more expensive D-Tap connector available that includes electronic protection against this very issue (although a great product, even these cannot totally protect against a poor ground connection) – see: https://lentequip.com/products/safetap

Imagine for a moment you are using a monitor that’s connected to your camera’s SDI or HDMI port. You are powering the monitor via the D-Tap on the camera’s battery as you always do, and everything is working just fine. Then the battery has to be changed. To change the battery you have to unplug the D-Tap cable, and as you pull the D-Tap out the ground connection disconnects fractionally before the live connection. During that extremely brief moment there is still positive power going to the monitor, but because the ground on the D-Tap is now disconnected the only ground route back to the battery is via the SDI cable through the camera. For a fraction of a second the SDI/HDMI cable becomes the power cable, and that power surge blows the SDI driver chip.

After you have completed the battery swap, you turn everything back on and at first all appears good, but now you can’t get the SDI output to work. There’s no smoke, no burning smells, no obvious damage as it all happened in a tiny fraction of a second. The only symptom is a dead SDI.

And it’s not only D-Tap cables that can cause problems. A lot of the cheap DC barrel connectors have a center positive terminal that can connect before the outer barrel makes a good connection. There are many connectors where the positive can make before the negative.

You can also have problems if the connection between the battery and the camera isn’t perfect. A D-Tap connected directly to the battery might represent an easier route for power to flow back to the battery if there is any corrosion on the battery terminals or a loose battery plate or adapter.

It can also happen when powering the camera and monitor (or other SDI connected devices such as a video transmitter) via separate mains adapters. The power outputs of most of the small, modern, generally plastic bodied switch mode power adapters and chargers are not connected to ground. They have a positive and negative terminal that “floats” above ground at some unknown voltage, and each power supply’s negative rail may be at a completely different voltage relative to ground. So again, an SDI cable connected between two devices powered by different power supplies will act as the ground between them, and power may briefly flow down the SDI cable as the cable’s ground brings both power supply negative rails to the same common voltage. Failures this way are less common, but they do still occur.

For these reasons you should always connect all your power supplies, power cables, especially D-Tap or other DC power cables first. Avoid using adapters between the battery and the camera as each adapter plate is another possible cause of trouble.

Then while everything remains switched off the very last thing to connect should be the SDI or HDMI cables. Only when everything is connected should you turn anything on.

If unplugging or re-plugging a monitor (or anything else for that matter), turn everything off first. Do not connect or disconnect anything while any of the equipment is on. To be honest, the greatest risk is at the moment you connect or disconnect any power cables, such as when swapping a battery where you are using the D-Tap to power accessories. So if changing batteries, switch EVERYTHING off first, then disconnect your SDI or HDMI cables before disconnecting the D-Tap or other power cables. Seriously – you need to do this: disconnect the SDI or HDMI before changing the battery if the D-Tap cable has to be unplugged from the battery. Things are a little safer if any D-Tap cables are connected directly to the camera or to a power plate that remains connected to the camera. That way you can change the battery without needing to unplug the D-Tap cables, which reduces the risk of issues.

Also inspect your cables regularly and check for damage to the pins and the cable. If you suspect a cable isn’t perfect, throw it away – don’t take the risk.

(NOTE: It’s been brought to my attention that Red recommend that after connecting the power, but before connecting any SDI cables you should turn on any monitors etc. If the monitor comes on OK, this is evidence that the power is correctly connected. There is certainly some merit to this. However this only indicates that there is some power to the monitor, it does not ensure that the ground connection is 100% OK or that the ground voltages at the camera and monitor are the same. By all means power the monitor up to check it has power, then I still recommend that you turn it off again before connecting the SDI).
 
The reason Arri talk about shielded power cables is that most shielded power cables use connectors such as Lemo or Hirose where the body of the connector is grounded to the cable shield. This helps ensure that when plugging the power cable in it is the ground connection that is made first and the power connection after; when unplugging, the power breaks first and the ground after. When using properly constructed shielded power cables with Lemo or Hirose connectors it is much less likely (but not impossible) that these issues will occur.

Is this an SDI/HDMI fault? No, not really. The fault lies in the choice of power cables that allow the power to make before the ground, or the ground to break before the power, and in a badly designed power connector often made as cheaply as possible. Or the fault is with power supplies that have a poor ground connection or none at all. Additionally you can put it down to user error. I know I’m guilty of rushing to change a battery and pulling a D-Tap connector without first disconnecting the SDI on many occasions, but so far I’ve mostly gotten away with it (I have blown an SDI on one of my Convergent Design Odysseys).

If you are working with an assistant or as part of a larger crew do make sure that everyone on set knows not to plug or unplug power cables or SDI cables without checking that it’s OK to do so – and always unplug the SDI/HDMI before disconnecting or removing anything else.
 
How many of us have set up a camera, powered it up, got a picture in the viewfinder and then plugged an SDI cable between the camera and a monitor that doesn’t yet have a power connection, or that is already on and plugged into some other power supply? Don’t do it! Plug and unplug in the right order: ALL power cables and power supplies first, check power is going to the camera, check power is going to the monitor, then turn it all off, and finally plug in the SDI.

Accsoon CineEye 2S

Wireless video transmitters are nothing new and there are lots of different units on the market. But the Accsoon CineEye 2S stands out from the crowd for a number of reasons.

First is the price: at only £220/$300 USD it’s very affordable for an SDI/HDMI wireless transmitter. But one thing to understand is that it is just a transmitter – there is no receiver. Instead you use a phone or tablet to receive the signal and act as your monitor. You can connect up to 4 devices at the same time and the latency is very low. Given that you can buy a reasonably decent Android tablet or used iPad for £100/$140 these days, it still makes an affordable and neat solution without the need to worry about cables, batteries or cages at the receive end. And most people have an iPhone or Android phone anyway. The Accsoon app includes waveform and histogram displays, LUTs, peaking and all the usual functions you would find on most pro monitors, so it saves tying up an expensive monitor just for a director’s preview. You can also record on the tablet/phone, giving the director or anyone else linked to it the ability to independently play back takes as they wish while you use the camera for other things.


Next is the fact that it doesn’t have any fans. So there is no additional noise to worry about when using it. It’s completely silent. Some other units can get quite noisy.

And the best bit: if you are using an iPhone or iPad with a mobile data connection, the app can stream your feed to YouTube, Facebook or any similar RTMP service. With Covid still preventing travel for many, this makes an extremely portable solution for streaming remote production previews etc. The quality of the stream is great (subject to your data connection) and you don’t need any additional dongles or adapters – it just works!

Watch the video, which was streamed live to YouTube with the CineEye 2S, for more information. At 09.12 I comment that it uses 5G – what I mean is that it has 5GHz WiFi as well as 2.4GHz WiFi for the connection between the CineEye and the phone or tablet. 5GHz WiFi is preferred where possible for better quality connections and better range. https://accsoonusa.com/cineeye/

 

Checking SD Cards Before First Use.

With the new FX6 making use of SD cards to record higher bit rate codecs, the number of gigabytes of SD card media that many users will be getting through is going to be pretty high. And the more gigabytes of memory that you use, the greater the chance of coming across a duff memory cell somewhere on your media.

Normally solid state media will avoid using any defective memory areas. As a card ages and is used more, more cells will become defective; the card will identify these and should avoid them next time. This is all normal, until eventually the memory cell failure rate gets too high and the card becomes unusable – typically after hundreds or even thousands of cycles.

However, the card needs to discover where any less than perfect memory cells are, and there is a chance that some of these duff cells could remain undiscovered in a card that’s never been completely filled before. I very much doubt that every SD card sold is tested to its full capacity – the vast volume of cards made and the time involved makes this unlikely.

For this reason I recommend that you consider testing any new SD cards using software such as H2Testw for Windows machines or SDSpeed for Macs. However, be warned: fully testing a large card can take a very, very long time.

As an alternative you could simply place the card in the camera and record on it until it’s full. Use the highest frame rate and largest codec the card will support to fill the card as quickly as possible; I would break the recording up into a few chunks. Once the recording has finished, check for corruption by playing the clips back using Catalyst Browse or your chosen edit software.
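If you prefer a scripted check, the fill-and-verify idea behind tools like H2Testw can be sketched in a few lines of Python. This is only an illustration of the principle (the chunk size and function names are my own): it writes deterministic pseudo-random data to a file on the mounted card, then reads it back and compares checksums – any mismatch means the card corrupted something.

```python
import hashlib

CHUNK = 8 * 1024 * 1024  # write and verify in 8MB chunks

def fill_and_hash(path: str, total_bytes: int) -> str:
    """Fill a test file with deterministic pseudo-random data; return its SHA-256."""
    overall = hashlib.sha256()
    with open(path, "wb") as f:
        written, seed = 0, 0
        while written < total_bytes:
            # Regenerable data: the hash of a counter, repeated to fill the chunk
            unit = hashlib.sha256(seed.to_bytes(8, "little")).digest()
            block = (unit * (CHUNK // len(unit) + 1))[: min(CHUNK, total_bytes - written)]
            f.write(block)
            overall.update(block)
            written += len(block)
            seed += 1
    return overall.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Re-read the file and check its hash still matches what was written."""
    overall = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(CHUNK), b""):
            overall.update(block)
    return overall.hexdigest() == expected
```

In practice you would point this at a file on the card and size it to fill the remaining free space. Note that dedicated test tools also measure sustained write speed, which this simple sketch does not.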

This may seem like a lot of extra work, but I think it’s worth it for peace of mind before you use your new media on an important job.

Atomos Adds Raw Over SDI For The Ninja V via the AtomX.

I know this is something A LOT of people have been asking for. For a long time it has seemed odd that only the Shogun 7 was capable of recording raw from the FX9 and then the FX6, while the little Ninja V could record almost exactly the same raw from the A7SIII.

Well the engineers at Atomos have finally figured out how to pass raw via the AtomX SDI adapter to the Ninja V. The big benefit of course being the compact size of the Ninja V.

There are a couple of ways of getting the kit you need to do this.

If you already have a Ninja V (they are GREAT little monitor recorders, I’ve taken mine all over the world, from the arctic to Arabian deserts) you simply need to buy an AtomX SDI adapter and once you have that buy a raw licence from the Atomos website for $99.00.

If you don’t have the Ninja V then you can buy a bundle called the “Pro Kit” that includes everything you need: a Ninja V with the raw licence pre-installed, the AtomX SDI adapter, a D-Tap power adapter cable, a mains power supply and a sun hood. The cost of this kit will be around $950 USD or £850 GBP + tax, which is a great price.

On top of that you will need to buy suitably fast SSDs.

Like the Shogun 7, the Ninja V can’t record the 16 bit raw from the FX6 or FX9 directly, so Atomos take the 16 bit linear raw and convert it using a visually lossless process to 12 bit log raw. 12 bit log raw is a really nice raw format and the ProRes RAW codec helps keep the file sizes nice and manageable.
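To see why 12 bit log can carry what a 16 bit linear signal captures, here is an illustrative Python sketch. The curve below is a generic log2-style encode of my own, not Atomos’s actual transform, but it shows the principle: code values are spent roughly evenly per stop of exposure, so shadows keep very fine precision while each code near white spans many linear steps – which matches the eye’s roughly logarithmic response.

```python
import math

LINEAR_MAX = 2**16 - 1   # 16 bit linear code values (0..65535)
LOG_MAX = 2**12 - 1      # 12 bit log code values (0..4095)

def linear_to_log12(x: int) -> int:
    """Map a 16 bit linear value onto a 12 bit log code (illustrative curve only)."""
    # log2(1 + x) runs from 0 to exactly 16 over the input range;
    # scale that onto the 0..4095 output range.
    return round(math.log2(1 + x) / 16 * LOG_MAX)

def log12_to_linear(code: int) -> int:
    """Approximate inverse: recover the linear value from a 12 bit log code."""
    return round(2 ** (code / LOG_MAX * 16) - 1)

# One log code near white spans many linear steps; near black it spans under one
print(log12_to_linear(4095) - log12_to_linear(4094))   # large step in the highlights
print(log12_to_linear(10) - log12_to_linear(9))        # tiny step in the shadows
```

A real camera log curve adds things like a linear toe near black and is tuned to the sensor, but the bit-depth trade-off works the same way.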

This is a really great solution for recording raw from the FX6 and FX9. Plus if you already have an A7SIII you can use the Ninja V to record via HDMI from that too.

Here’s the press release from Atomos:

The Atomos Ninja V Pro Kit is here to equip you with increased
professional I/O, functionality and accessories.

The Ninja V Pro Kit has been designed to bridge the gap between compact cinema and mirrorless cameras that can output RAW via HDMI or SDI. Pro Kit also pushes these cameras’ limits, recording up to 12-bit RAW externally on the Ninja’s onboard SSD. Additionally, Pro Kit provides the ability to cross convert signals, providing a versatile solution for monitoring, play out and review.
 
What comes in the Pro Kit?
  • Ninja V Monitor-Recorder with pre-activated RAW over SDI
  • AtomX SDI Module
  • Locking DC to D-Tap cable to power from camera battery
  • AtomX 5″ Sunhood
  • DC/Mains power with international adaptor kit
Ninja V Pro Kit offers a monitor and recording package to cover a wide range of workflows.

Why choose Ninja V Pro Kit?
  • More powerful and versatile I/O for Ninja V – Expand your Ninja V’s capability with the Pro Kit with the ability to provide recordings in edit-ready codecs or as proxy files from RED or ARRI cameras.
  • Accurate and reliable daylight viewable HDR or SDR – To ensure image integrity, the AtomX 5″ Sunhood is included and increases perceived brightness under challenging conditions or can be used to dial out ambient light to increase the view in HDR
  • HDMI-to-SDI cross conversion – HDMI or SDI connections can be cross converted, 4K down converted to HD, and RAW converted to video signals to connect to other systems without the need for additional converters.
  • Record ProRes RAW via SDI from selected cameras*
  • Three ways to power your Ninja:
    – DC power supply – perfect for in the studio.
    – DTap cable – perfect for on-set, meaning your rig can run from a single power source.
    – Optional NPF battery or any four-cell NPF you might have in your kit bag. 

The ProRes RAW Advantage
ProRes RAW is now firmly established as the new standard for RAW video capture, with an ever-growing number of supported HDMI and SDI cameras. ProRes RAW combines the visual and workflow benefits of RAW video with the incredible real-time performance of ProRes. The format gives filmmakers enormous latitude when adjusting the look of their images and extending brightness and shadow detail, making it ideal for HDR workflows. Both ProRes RAW and the higher bandwidth, less compressed ProRes RAW HQ are supported. Manageable file sizes speed up and simplify file transfer, media management, and archiving. ProRes RAW is fully supported in Final Cut Pro, Adobe Premiere Pro, Avid Media Composer 2020.10 update, along with a collection of other apps including ASSIMILATE SCRATCH, Colorfront, FilmLight Baselight, and Grass Valley Edius.
 

Existing Ninja V and AtomX SDI module owners
While the Pro Kit offers a complete bundle, existing Ninja V owners can upgrade their equipment to the same level by purchasing the AtomX SDI module for $199. The new RAW over SDI and HDMI RAW to SDI video feature can also be added to the Ninja V via a separate activation key from the Atomos website for $99.

Existing AtomX SDI module owners will receive the SDI < > HDMI cross conversion for 422 video inputs in the 10.61 firmware update for Ninja V. You will also be able to benefit from RAW over SDI recording with the purchase of the SDI RAW activation key. This feature will be available from the Atomos website in February 2021.
 
 
Special Offer for Pro Kit buyers
The first batch of Ninja V Pro Kits will include a FREE Atomos CONNECT in the box.
Connect allows you to start streaming at up to 1080p60 directly from your Ninja V!
Learn more about Connect here.
 

Availability
The Ninja V Pro Kit is available to purchase from your local reseller.
Find your local reseller here.

$949 USD
EX LOCAL TAXES

*Selected cameras only – RAW outputs from Sony’s FS range (FS700, FS5, FS7) are NOT supported on Ninja V with AtomX SDI Module and RAW Upgrade. Support for these cameras is ONLY available on Shogun 7.