I’ve been working with Accsoon for a couple of years now and their products always represent great value and do what they are supposed to do very well. One thing in particular that I find immensely useful is the ability to use their wireless video transmitters as an access point that you can connect a camera to, extending its internal WiFi range. This is particularly beneficial with Sony’s Cinema Line cameras as it can greatly extend the range and reliability of remote camera control via Sony’s control and transfer app. See: https://youtu.be/iSC9i0Frz-Y
Accsoon have just announced a new addition to their lineup with the new CineView 2 SDI (it also has HDMI). I expect this will provide even better range and stability when used as an access point, plus of course it can transmit a very high quality, low latency HD video feed.
The key points are below:
– A massive 1500ft/450m range with impressive stability and low latency
– Equipped with HD-SDI and HDMI inputs/outputs for versatile connectivity. Has 4K 60P loop through on the TX unit.
– Features next-generation 1080P wireless video transmission
– Backwards compatible with previous CineView models
This is part 2 of my 2 part look at whether small cameras such as a Sony FX3 or A1 really can replace full size cinema cameras.
For this part of the article to make sense you will want to watch the YouTube clips that are linked here full screen and at the highest possible quality settings, preferably 4K. Please don’t cheat: watch them in the order they are presented, as I hope this will allow you to better understand the points I am trying to make.
Also, in the videos I have not put the different cameras that were tested side by side. You may ask why – well, it’s because when you watch a video online or a movie in a cinema you don’t see different cameras side by side on the same screen at the same time. A big point of all of this is that we are now at a place where the quality of even the smallest and cheapest large sensor camera is likely good enough to make a movie. It’s not necessarily a case of whether camera A is better than camera B; the question is whether the audience will know or care which camera you used. There are 5 cameras and I have labelled them A through to E.
The footage presented here was captured during a workshop I did for Sony at Garage Studios in Dubai (if you need a studio space in Dubai they have some great low budget options). We weren’t doing carefully orchestrated camera tests, but I did get the chance to quickly capture some side by side content.
So let’s get into it.
THE FINAL GRADE:
In many regards I think this is the most important clip as this is how the audience would see the 5 cameras. It represents how they might look at the end of a production. I graded the cameras using ACES in DaVinci Resolve.
Why ACES? Well, the whole point of ACES is to neutralise any specific camera “look”. The ACES input transform takes the camera’s footage and converts it to a neutral look that is meant to represent the scene as it actually was, but with a film-like highlight roll off added. From here the idea is that you can apply the same grade to almost any camera and the end result should look more or less the same. The look of different cameras is largely a result of differences in the electronic processing of the image in post production rather than large differences in the sensors. Most modern sensors capture a broadly similar range of colours with broadly similar dynamic range. So, provided you know what recording levels represent what colours in the scene, it is pretty easy to make any camera look like any other, which is what ACES does.
The footage was captured during a workshop; we weren’t specifically testing the different cameras in great depth. For the workshop the aim was simply to show how any of these cameras could work together. For simplicity and speed I manually set each camera to 5600K, and as a result of the inevitable variations you get between different cameras, how each is calibrated and how each applies the white balance settings, there were differences in the colour balance of each camera.
To neutralise these white balance differences the grading process started by using the colour chart to equalise the images from each camera using the “match” function in DaVinci Resolve. Then exactly the same grade was applied to every camera – there are no grading differences; they are all graded in the same way.
Below are frame grabs from each camera, graded slightly differently from the video clips; again, they all look more or less the same.
The first thing to take away from all of this, then, is that you can make any camera look like pretty much any other, and a chart such as the ColorChecker Video, together with software that can read the chart and correct the colours accordingly, makes it much easier to do this.
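For anyone curious what this kind of chart-based matching boils down to, here is a minimal sketch of the general idea in Python. To be clear, this is not Resolve’s actual “match” algorithm, just the underlying principle: solve for a 3x3 correction matrix that maps a camera’s recorded chart patch values onto the chart’s reference values. The patch numbers shown are hypothetical placeholders – in practice you would sample them from your own footage and chart data.

```python
import numpy as np

# Hypothetical example values: average linear RGB of a few chart patches as
# recorded by one camera, and the known reference values for the same patches.
measured = np.array([
    [0.18, 0.16, 0.15],   # grey patch as the camera recorded it
    [0.45, 0.20, 0.15],   # red-ish patch
    [0.20, 0.40, 0.18],   # green-ish patch
    [0.15, 0.18, 0.42],   # blue-ish patch
])
reference = np.array([
    [0.18, 0.18, 0.18],
    [0.42, 0.18, 0.13],
    [0.18, 0.42, 0.16],
    [0.13, 0.16, 0.45],
])

# Solve measured @ M = reference for a 3x3 matrix M in the least-squares sense.
M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)

def match_to_chart(image_rgb: np.ndarray) -> np.ndarray:
    """Apply the correction matrix to an (H, W, 3) linear RGB image."""
    return np.clip(image_rgb @ M, 0.0, None)
```

Run the same solve for each camera against the same reference patches and all of the corrected images end up sitting on (roughly) the same starting point, which is what makes applying one shared grade possible.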
To allow for issues with the quality of YouTube’s encoding etc here is a 400% crop of the same clips:
What I am expecting is that most people won’t actually see a great deal of difference between any of the cameras. The cheapest camera is $6K and the most expensive $75K, yet it’s hard to tell which is which or see much difference between them. Things that do perhaps stand out initially in the zoomed in image are the softness/resolution differences between the 4K and 8K cameras, but in the first uncropped clip this difference is much harder to spot and I don’t think an audience would notice, especially if one camera is used on its own so the viewer has nothing to directly compare it with. It is possible that there are also small focus differences between each camera; I did try to ensure each was equally well focussed but small errors may have crept in.
WHAT HAPPENS IF WE LIFT THE SHADOWS?
OK, so let’s pixel peep a bit more and artificially raise the shadows so that we can see what’s going on in the darker parts of the image.
There are differences, but again there isn’t a big difference between any of the cameras. You certainly couldn’t call them huge, and in all likelihood, even if for some reason you needed to lift the shadows by an unusually large amount as done here (about 2.5 stops), the difference between “best” and “worst” isn’t large enough for any one of these cameras to be deemed unusable compared to the others.
SO WHY DO YOU WANT A BETTER CAMERA?
So, if we are struggling to tell the difference between a $6K camera and a $75K one why do you want a “better” camera? What are the differences and why might they matter?
When I graded the footage from these cameras in the workshop it was actually quite difficult to find a way to “break” the footage from any of them. For the majority of grading processes that I tried they all held up really well and I’d be happy to work with any of them, even the cameras using the highly compressed internal recordings held up well. But there are differences, they are not all the same and some are easier to work with than the others.
The two cheapest cameras were a Sony FX3 and a Sony A1. I recorded using their built in codecs, XAVC-SI in the FX3 and XAVC-HS in the A1. These are highly compressed 10 bit codecs. The other cameras were all recorded using their internal raw codecs, which are either 16 bit linear or 12 bit log. At some point I really do need to do a proper comparison of the internal XAVC from the FX3 and the ProRes RAW that can be recorded externally. But it is hard to do a fully meaningful test as getting the ProRes RAW into Resolve requires transcoding and a lot of other awkward steps. From my own experience the difference in what you can do with XAVC vs ProRes RAW is very small.
One thing that happens with most highly compressed codecs such as H264 (XAVC-SI) or H265 (XAVC-HS) is a loss of some very fine textural information and the image breaking up into blocks of data. But as I am showing these clips via YouTube in a compressed state I needed to find a way to illustrate the subtle differences that I see when looking at the original material. So, to show the difference between the different sensors and codecs within these cameras I decided to pick a colour using the Resolve colour picker and then turn that colour into a completely different one, in this case pink.
What this allows you to see is how precisely the picked colour is recorded, and it also shows up some of the macro block artefacts. Additionally it gives an indication of how fine the noise is and the textural qualities of the recording. In this case the finer the pink “noise” the better, as this is an indication of smaller, finer textural differences in the image. These smaller textural details would be helpful if chroma keying or perhaps for some types of VFX work. It might (and I say might because I’m not convinced it always will) allow you to push a very extreme grade a little bit further.
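If you want to experiment with this idea yourself, the sketch below shows the general principle in Python with NumPy. It is not Resolve’s colour picker or qualifier, just a crude stand-in: every pixel within a tolerance of a picked colour is painted pink so you can see how coarse or fine, and how blocky, the matching region is. The tolerance value and the particular shade of pink are arbitrary choices of mine.

```python
import numpy as np

def highlight_picked_colour(image_rgb: np.ndarray,
                            picked_rgb: tuple,
                            tolerance: float = 0.08) -> np.ndarray:
    """Replace every pixel within 'tolerance' of the picked colour with pink.

    image_rgb is a float (H, W, 3) array in the 0-1 range. The coarser and
    blockier the resulting pink pattern, the coarser the chroma information
    in the recording.
    """
    distance = np.linalg.norm(image_rgb - np.array(picked_rgb), axis=-1)
    mask = distance < tolerance
    out = image_rgb.copy()
    out[mask] = (1.0, 0.2, 0.8)   # pink
    return out
```

Applying the same picked colour and tolerance to frames from each camera gives a quick visual read on how cleanly each codec and sensor has recorded that colour.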
I would guess that by now you are starting to figure out which camera is which – the cameras are an FX3, A1, Burano, Venice 2 and an Arri LF.
In this test you should be able to identify the highly compressed cameras from the raw cameras. The pink areas from the raw cameras are finer and less blocky; this is a good representation of the benefit of less compression and a deeper bit depth.
But even here the difference isn’t vast. It certainly, absolutely, exists. But at the same time you could push ANY of these cameras around in post production and if you’ve shot well none of them are going to fall apart.
As a side note I will say that I find grading linear raw footage such as the 16 bit X-OCN from a Venice or Burano more intuitive compared to working with compressed Log. As a result I find it a bit easier to get to where I want to be with the X-OCN than the XAVC. But this doesn’t mean I can’t get to the same place with either.
RESOLUTION MATTERS.
Not only is compression important but so too is resolution. To some degree increasing the resolution can make up for a lesser bit depth. As these cameras all use Bayer sensors the chroma resolution will be somewhat less than the luma resolution. A 4K sensor such as the one in the FX3 or the Arri LF will have much lower chroma resolution than the 8K A1, Burano or Venice 2. If we look at the raised shadows clip again we can see some interesting things going on in the girl’s hair.
If you look closely camera D has a bit of blocky chroma noise in the shadows. I suspect this might be because this is one of the 4K sensor cameras and the lower chroma resolution means the chroma noise is a bit larger.
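As a rough illustration of why the pixel count matters for chroma (a big simplification, since real debayering is far cleverer than this), the sampling arithmetic looks something like:

```latex
% On an N x M photosite Bayer sensor, half the sites are green and a quarter
% each are red and blue, so red and blue are each sampled on roughly half the
% pitch of the full grid.
\text{chroma sampling} \approx \tfrac{N}{2} \times \tfrac{M}{2}
\;\;\Rightarrow\;\;
8\text{K Bayer} \approx 4\text{K chroma},\qquad
4\text{K Bayer} \approx 2\text{K chroma}
```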
I expect that by now you have an idea of which camera is which, but here is the big reveal: A is the FX3, B is the Venice 2, C is Burano, D is an Arri LF, and E is the Sony A1.
What can we conclude from all of this?
There are differences between codecs. A better codec with a greater bit depth will give you more textural information. It is not necessarily that raw will always be better than YUV/YCbCr, but because of raw’s compression efficiency it is possible to have very low levels of compression and a deep bit depth. So, if you are able to record with a better codec or greater bit depth, why not do so. There are some textural benefits and there will be fewer compression artefacts. BUT this doesn’t mean you can’t get a great result from XAVC or another compressed codec.
If using a Bayer sensor, then using a sensor with more “K” than the delivery resolution can bring textural benefits.
There are differences in the sensors, but these differences are not really as great as many might expect. In terms of DR they are all actually very close, close enough that in the real world it isn’t going to make a substantial difference. As far as your audience is concerned I doubt they would know or care. Of course we have all seen the tests where you greatly underexpose a camera and then bring the footage back to normal, and these can show differences. But that’s not how we shoot things. If you are serious about getting the best image that you can, then you will light to get the contrast and exposure that you want. What isn’t in this test is rolling shutter, but generally I rarely see issues with rolling shutter these days. If you are worried about RS, then the Venice 2 is excellent and the best of the group tested here.
Assuming you have shot well there is no reason why an audience should find the image quality from the $6K FX3 unacceptable, even on a big screen. And if you were to mix an FX3 with a Venice 2 or Burano, again, if you have used each camera equally well I doubt the audience would spot the difference.
BACK TO THE BEGINNING:
So this brings me back to where I started in part 1. I believe this is the age of the small camera – or at least there is no reason why you can’t use a camera like an FX3 or an A1 to shoot a movie. While many of my readers I am sure will focus on the technical details of the image quality of camera A against camera B, in reality these days it’s much more about the ergonomics and feature set as well as lens and lighting choices.
A small camera allows you to be quick and nimble, but a bigger camera may give you a lot more monitoring options as well as other things such as genlock. And, if you can, having a better codec doesn’t hurt. So there is no one-size-fits-all camera that will be the right tool for every job.
Well, I was recently asked if I could come up with a rig to do the same kind of infrared day for night shooting using Sony cameras for an upcoming blockbuster feature with an A-list director being shot by a top DP. This kind of challenge is something I enjoy immensely, so how could I not accept! I had some insight into how Hoyte Van Hoytema did it, but I had none of the fine details, and often it’s the fine details that make all the difference. And this was no exception. I discovered many small things that need to be just right if this process is to work well. There are a lot of things that can trip you up badly.
So a frantic couple of weeks ensued as I tried to learn everything I could about infrared photography and video and how it could be used to improve traditional day for night shooting. I don’t claim any originality in the process, but there is a lot of information missing about how it was actually done in Nope. I have shot with infrared before, so it wasn’t all new, but I had never used it this way before.
As I did a lot of 3D work when 3D was really big around 15 years ago, including designing award winning 3D rigs, I knew how to combine two cameras on the same optical axis. Even better I still had a suitable 3D rig, so at least that part of the equation was going to be easy (or at least that’s what I thought).
Building a “Test Mule”.
The next challenge was to create a low cost “test mule” camera before even considering what adaptations might be needed for a full blown digital cinema camera. To start with this needed to be cheap, but it also needed to be full frame, capable of taking a wide range of cinema lenses and sensitive to both visible and infrared light. So, I took an old A7S that had been gathering dust for a while, dismantled it and removed the IR cut filter from the sensor.
As the DP wanted to test the process with Panavision lenses the camera was fitted with a PV70 mount and then collimated in its now heavily modified state (collimation has some interesting challenges when working with the very different wavelengths of infrared light compared to visible). Now I could start to experiment, pairing the now infrared sensitive A7S with a second camera on the 3D rig. We soon found issues with this setup, but it allowed me to take the testing to the next stage before committing to modifying a more expensive camera for infrared.
This testing was needed to determine exactly what range of infrared light would produce the best results. The range of infrared you use is determined by filters added to the camera to cut the visible light and only pass certain parts of the infrared spectrum. There are many options and different filters work in slightly different ways. And not only do you need to test the infrared filters, you also need to consider how different neutral density filters might behave if you need to reduce the IR and visible light. Once I narrowed down the range of filters I wanted to test, the next challenge was to find very high quality filters that could either be fitted inside the camera body behind the lens or that were big enough (120mm +) for the Panavision lenses that were being considered for the film.
Once I had some filters to play with (I had 15 different IR filters) the next step was to start test shooting. I cheated here a bit. For some of the initial testing I used a pair of zoom lenses as I was pairing the A7S with several different cameras for the visible spectrum. The scan areas of the different sensors in the A7S and the visible light cameras were typically very slightly different sizes. So, a zoom lens was used to provide the same field of view from both cameras so that both could be more easily optically aligned on the 3D rig. You can get away with this, but it makes more work for post production as the distortions in each lens will be different and need correcting. For the film I knew we would need identical scan sizes and matched lenses, but that could come later once we knew how much camera modification would be needed. To start with I just needed to find out what filtration would be needed.
At this point I shot around 100 different filter and exposure tests that I then started to compare in post production. When you get it all just right the sky in the infrared image becomes very dark, almost black and highlights become very “peaky”. If you use the luminance from the infrared camera with its black sky and peaky highlights and then add in a bit of colour and textural detail from the visible camera it can create a pretty convincing day for night look. Because you have a near normal visible light exposure you can fine tune the mix of infrared and visible in post production to alter the brightness and colour of the final composite shot giving you a wide range of control over the day for night look.
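To make the mixing idea a little more concrete, here is a heavily simplified sketch in Python. It is not the actual production pipeline (which involves proper alignment, grading and compositing tools); it just illustrates taking the luminance from the IR camera and blending a controllable amount of colour detail from the visible camera back over it. The function name and blend parameters are my own, purely for illustration.

```python
import numpy as np

def day_for_night_composite(ir_luma: np.ndarray,
                            visible_rgb: np.ndarray,
                            colour_amount: float = 0.3,
                            darken: float = 0.5) -> np.ndarray:
    """Very simplified sketch of the IR day for night idea.

    ir_luma:     (H, W) luminance from the IR camera (dark sky, peaky highlights).
    visible_rgb: (H, W, 3) aligned image from the visible light camera, 0-1 range.
    The IR luminance forms the base of the 'night' image and a controllable
    amount of colour from the visible camera is blended back in.
    """
    visible_luma = visible_rgb.mean(axis=-1, keepdims=True)
    # Ratio of each colour channel to its own luminance = the 'colour' detail.
    colour_detail = visible_rgb / np.maximum(visible_luma, 1e-6)
    base = ir_luma[..., None] * darken
    night = base * (1.0 - colour_amount + colour_amount * colour_detail)
    return np.clip(night, 0.0, 1.0)
```

Because the visible camera still holds a near normal exposure, the colour_amount and darken values can be pushed around freely in post to tune how dark and how colourful the final “night” image looks.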
So, now that I knew how to do it, the next step was to take it from the test mule to a pair of matching cinema quality cameras and lenses for a full scale test shoot. When you have two cameras on a 3D rig the whole setup can get very heavy, very fast. Therefore the obvious camera to adapt was a Sony Venice 2 with the 8K sensor, as this can be made very compact by using the Rialto unit to split the sensor from the camera body – in fact one of the very first uses of Rialto was for 3D shooting on Avatar: The Way of Water.
With a bit of help from Panavision we adapted a Panavised Venice 2, making it full spectrum and then adding a special infrared filter, carefully picked based on my testing, into the camera’s optical path. This camera was configured using a Rialto housing to keep it compact and light so that when placed on the 3D rig with the visible light Venice the weight remained manageable. The lenses used were Panavision PV70 Primos (if you want to use these lenses for infrared, speak to me first, there are some things you need to know).
And then, with the DP in attendance, with smoke and fog machines, lights and grip, we tested. For the first few shots we had scattered clouds, but soon the rain came and then it poured down for the rest of the day. Probably the worst possible weather conditions for a day for night shoot. But that’s what we had, and of course for the film itself there will be no guarantee of perfect weather.
The large scale tests gave us an opportunity to test things like how different types of smoke and haze behave in infrared and also to take a look at interactions with different types of light sources. With the right lights you can do some very interesting things when you are capturing both visible light and infrared, opening up a whole new world of possibilities for creating unique looks in camera.
From there the footage went to the production company’s post production facilities to produce dailies for the DP to view before being presented to the studio’s post production people. Once they understood the process and were happy with it there was a screening for the director along with a number of other tests for lighting and lenses.
Along the way I have learnt an immense amount about this process and how it works: what filters to use and when, how to adapt different cameras, how different lenses behave in the infrared spectrum (not all lenses can be used). Collimating adapted cameras for infrared is interesting as many of the usual test rigs will produce misleading or confusing results. I’ve also identified several other ways that a dual camera setup can be used to enhance shooting night scenes, both day for night and at night, especially for effects heavy projects.
At the time of writing it looks like most of the night scenes in this film will be shot at night, they have the budget and time to do this. But the director and DP have indicated that there are some scenes where they do wish to use the process (or a variation of it), but they are still figuring out some other details that will affect that decision.
Whether it gets used for this film or not, I am now developing a purpose designed rig for day for night with infrared as I believe it will become a popular way to shoot night scenes. My cameras of choice for this will be a pair of Venice cameras. But other cameras can be used provided one can be adapted for IR and both can be synchronised together. I will have a pair of Sony F55s, one modified for IR, available for lower budget productions and a kit to reversibly adapt a Sony Venice. If you need a rig for day for night and someone who knows exactly how to do it, do get in touch!
I’m afraid I can’t show you the test results; that content is private and belongs to the production. The 3D rig is being modified: as you don’t need the ability to shoot with the cameras optically separated, removing the moving parts will make the rig more stable and easier to calibrate. Plus a new type of beam splitter mirror with better infrared transmission properties is on the way. As soon as I get an opportunity to shoot a new batch of test content with the adapted rig I will share it here.
Sony’s Cinema Line cameras all have the ability to record metadata from inertial and gyroscope sensors about the way the camera moves while shooting. This metadata can then be used to stabilise your footage in post production. The stabilisation that this can provide is normally very good and tends to look a lot more natural than post production stabilisation that analyses the footage and tries to hold it steady. However, until recently the only way to make use of this metadata was via Sony’s Catalyst Browse software.
Now, however, an Open Source project known as Gyroflow has made it possible to use the Sony metadata in FCP-X and DaVinci Resolve via an OpenFX plugin and an FCP-X plug-in. In addition there is a standalone Gyroflow application that can stabilise the footage and then export a stabilised version of the clip.
Gyroflow is a collaborative Open Source project with different developers working on different aspects and plug-ins, so it is a bit more disjointed than a lot of commercial products. But it is free and it will get better, so why not give it a try. The main website for the project is here: http://gyroflow.xyz/
The XAVC family of codecs was introduced by Sony back in 2012. Until recently all flavours of XAVC were based on H264 compression. More recently new XAVC-HS versions were introduced that use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. These have both been around for a while now and are well tried and well tested.
XAVC-I
XAVC-I is a very good intra frame codec where each frame is individually encoded. It’s being used for Netflix shows, it has been used for broadcast TV for many years, and there are thousands and thousands of hours of great content that has been shot with XAVC-I without any issues. Most of the in flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you make a lot of effort to find them. But it is a high compression codec so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ and as you can see there is very little difference between the two, even after several encoding passes.
XAVC-L
XAVC-L is a long GOP version of XAVC-I. Long GOP (Group of Pictures) codecs fully encode a start frame and then, for the next group of frames (typically 12 or more), only store the differences between that start frame and the next full frame at the start of the next group. They record the changes between frames using things like motion prediction and motion vectors which, rather than recording new pixels, move existing pixels from the first fully encoded frame through the subsequent frames when there is movement in the shot. Do note that on the F5/F55, the FS5, FS7, FX6 and FX9, in UHD or 4K, XAVC-L is 8 bit (while XAVC-I is 10 bit).
Performance and Efficiency.
Long GOP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GOP is around 2.5x more efficient than the I frame version. And this is why the bit rate of XAVC-I is around 2.5x higher than XAVC-L, so that for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, then there will be situations where XAVC-L can perform better than XAVC-I.
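If you want to see the intra versus long GOP trade-off for yourself, you can approximate it with ffmpeg using H.264 as a stand-in. To be clear, this is not Sony’s XAVC encoder, and the file names and bit rates below are only illustrative of the roughly 2.5:1 relationship described above; encode a motion-heavy clip both ways and compare the results.

```python
import subprocess

# Illustration only: H.264 via ffmpeg as a stand-in for the intra vs long GOP
# idea. "input.mov" and the bit rates are hypothetical placeholders.
clip = "input.mov"

# Intra-only: every frame is a keyframe (-g 1), so each frame stands alone,
# which needs a higher bit rate.
subprocess.run(["ffmpeg", "-i", clip, "-c:v", "libx264",
                "-g", "1", "-b:v", "250M", "intra_only.mp4"], check=True)

# Long GOP: one keyframe per 12-frame group (-g 12) with B-frames (-bf 2),
# allowing a much lower bit rate for similar quality on low-motion material.
subprocess.run(["ffmpeg", "-i", clip, "-c:v", "libx264",
                "-g", "12", "-bf", "2", "-b:v", "100M", "long_gop.mp4"], check=True)
```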
Motion Artefacts.
BUT as soon as you add a lot of motion or a lot of extra noise (which looks like motion to a long GOP codec), long GOP codecs struggle as they don’t typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let’s face it, the primary reason behind the use of long GOP encoding is to save space, and that’s done by decreasing the bit rate. So generally long GOP codecs have much lower bit rates so that they will actually provide those space savings. But that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GOP codec to process as almost everything is different from frame to frame, including entirely new background information hidden behind the cars in one frame that becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water and flocks of birds are all very challenging and will often exhibit visible artefacts in a lower bit rate long GOP codec that you won’t ever get in the higher bit rate I frame version.
Concatenation.
A further issue is concatenation. The artefacts that occur in long GOP codecs often move in the opposite direction to the object that’s actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters or anyone where there may be multiple compression passes using long GOP codecs such as H264 or H265.
Quality depends on the motion.
So, when things are just right and the scene suits XAVC-L it will perform well and it might show marginally fewer artefacts than XAVC-I, but those artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. But when there is complex motion XAVC-L can produce visible artefacts. And it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts; your footage will be consistently good no matter what you shoot.
Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, it can be beneficial if you do need to save card or disk space. But be aware of its limitations and avoid it if you are shooting handheld, shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, the FS5, FS7, FX6 and FX9 that in UHD or 4K XAVC-L is 8 bit while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.
PAL and NTSC are very specifically broadcast standards for standard definition television. PAL (Phase Alternating Line) and NTSC (National Television System Committee) are analogue interlaced standards for standard definition broadcasting and transmission. These standards are now only very, very rarely used for broadcasting. And as most modern cameras are high definition, digital and most commonly use progressive scan, these standards no longer apply to them.
As a result you will now rarely see these as options in a modern video camera. In our now mostly digital and HD/UHD world the same standards are used whether you are in a country that used to be NTSC or used to be PAL. The only difference now is the frame rate. Countries that have 50Hz mains electricity and that used to be PAL countries predominantly use frame rates based on multiples of 25 frames per second. Countries that have 60Hz mains and that used to be NTSC countries use frame rates based around multiples of 29.97 frames per second. It is worth noting that where interlace is still in use the frame rate is half of the field rate. So, where someone talks about 60i (nominally 60, actually 59.94 fields per second), in reality the frame rate will be 29.97 frames per second with each frame having two fields. Where someone mentions 50i the frame rate is 25 frames per second.
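Put as simple arithmetic:

```latex
\text{frame rate} = \tfrac{1}{2}\,\text{field rate}:\qquad
60\text{i}\;(59.94\ \text{fields/s}) \;\Rightarrow\; 29.97\ \text{fps},\qquad
50\text{i}\;(50\ \text{fields/s}) \;\Rightarrow\; 25\ \text{fps}
```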
Most modern cameras rather than offering the ability to switch between PAL and NTSC now instead offer the ability to switch between 50 and 60Hz. Sometimes you may still see a “PAL Area” or “NTSC Area” option – note the use of the word “Area”. This isn’t switching the camera to PAL or NTSC, it is setting up the camera for areas that used to use PAL or used to use NTSC.
This is a common problem and something people often complain about: the LCD screen of their camera and the brightness of the image on their monitor never seem to quite match. Or after the shoot, once in the grading suite, the pictures look brighter or darker than they did at the time of shooting.
A little bit of background info: most of the small LCD screens used on video cameras are SDR Rec-709 devices. If you were to calibrate the screen correctly, the brightness of white on the screen would be 100 nits. It’s also important to note that this is the same level used for monitors that are designed to be viewed in dimly lit rooms, such as edit or grading suites, as well as TVs at home.
The issue with uncovered LCD screens and monitors is that your perception of brightness changes according to the ambient light levels. Indoors in a dark room the image will appear to be quite bright. Outside on a sunny day it will appear to be much darker. It’s why all high end viewfinders have enclosed eyepieces: not just to help you focus on a small screen, but also because that way you are always viewing the screen under the same, always dark, viewing conditions. It’s why a video village on a film set will be in a dark tent. This allows you to calibrate the viewfinder with white at the correct 100 nit level, and then when viewed in a dark environment your images will look correct.
If you are trying to use an unshaded LCD screen on a bright sunny day you may find you end up over exposing as you compensate for the brighter viewing conditions. Or if you also have an extra monitor that is either brighter or darker you may become confused as to which is the right one to base your exposure assessments on. Pick the wrong one and your exposure may be off. My recommendation is to get a loupe for the LCD; your exposure assessment will then be much more consistent as you will always be viewing the screen under the same near ideal conditions.
It’s also been suggested that perhaps the camera and monitor manufacturers should make more small, properly calibrated monitors. But I think a lot of people would be very disappointed with a properly calibrated but uncovered display where white would be 100 nits, as it would be too dim for most outside shoots. Great indoors in a dim room such as an edit or grading suite, but unusably dim outside on a sunny day. Most smaller camera monitors are uncalibrated and place white 3 or 4 times brighter, at 300 nits or so, to make them more easily viewable outside. But because there is no standard for this there can be great variation between different monitors, making it hard to know which one to trust depending on the ambient light levels.
Sadly this is not an uncommon problem. Suddenly, and seemingly for no apparent reason, the SDI (or HDMI) output on your camera stops working. And this isn’t a new problem either; SDI and HDMI ports have been failing ever since they were first introduced. This issue affects all types of SDI and HDMI ports, but it is more likely with higher speed SDI ports such as 6G or 12G: they operate at higher frequencies and the components used are more easily damaged, as it is harder to protect them without degrading the high frequency performance.
Probably the most common cause of an SDI/HDMI port failure is the use of the now near ubiquitous D-Tap cable to power accessories connected to the camera. The D-Tap connector is, sadly, shockingly crudely designed. Not only is it possible to plug many of the cheaper ones in the wrong way around, but with a standard D-Tap plug there is no mechanism to ensure that the negative or “ground” connection of the D-Tap cable makes or breaks before the live connection. There is, however, a special but much more expensive D-Tap connector available that includes electronic protection against this very issue (although a great product, even these cannot totally protect against a poor ground connection) – see: https://lentequip.com/products/safetap
Imagine for a moment you are using a monitor that’s connected to your camera’s SDI or HDMI port. You are powering the monitor via the D-Tap on the camera’s battery as you always do and everything is working just fine. Then the battery has to be changed. To change the battery you have to unplug the D-Tap cable, and as you pull the D-Tap out the ground connection disconnects fractionally before the live connection. During that extremely brief moment there is still positive power going to the monitor, but because the ground on the D-Tap is now disconnected the only ground route back to the battery is via the SDI cable through the camera. For a fraction of a second the SDI/HDMI cable becomes the power cable and that power surge blows the SDI driver chip.
After you have completed the battery swap, you turn everything back on and at first all appears good, but now you can’t get the SDI output to work. There’s no smoke, no burning smells, no obvious damage as it all happened in a tiny fraction of a second. The only symptom is a dead SDI.
And it’s not only D-Tap cables that can cause problems. A lot of the cheap DC barrel connectors have a center positive terminal that can connect before the outer barrel makes a good connection. There are many connectors where the positive can make before the negative.
You can also have problems if the connection between the battery and the camera isn’t perfect. A D-Tap connected directly to the battery might represent an easier route for power to flow back to the battery if there is any corrosion on the battery terminals or a loose battery plate or adapter.
It can also happen when powering the camera and monitor (or other SDI connected devices like a video transmitter) via separate mains adapters. The power outputs of most of the small, modern, generally plastic bodied switch mode power adapters and chargers are not connected to ground. They have a positive and negative terminal that “floats” above ground at some unknown voltage. Each power supply’s negative rail may be at a completely different voltage relative to ground. So again, an SDI cable connected between two devices powered by different power supplies will act as the ground between them, and power may briefly flow down the SDI cable as the cable’s ground brings both power supply negative rails to the same common voltage. Failures this way are less common, but they do still occur.
For these reasons you should always connect all your power supplies, power cables, especially D-Tap or other DC power cables first. Avoid using adapters between the battery and the camera as each adapter plate is another possible cause of trouble.
Then while everything remains switched off the very last thing to connect should be the SDI or HDMI cables. Only when everything is connected should you turn anything on.
If unplugging or re-plugging a monitor (or anything else for that matter), turn everything off first. Do not connect or disconnect anything while any of the equipment is on. To be honest, the greatest risk is when you connect or disconnect any power cables, such as when swapping a battery where you are using the D-Tap to power any accessories. So if changing batteries, switch EVERYTHING off first, then disconnect your SDI or HDMI cables before disconnecting the D-Tap or other power cables. Seriously, you need to do this: disconnect the SDI or HDMI before changing the battery if the D-Tap cable has to be unplugged from the battery. Things are a little safer if any D-Tap cables are connected directly to the camera or a power plate that remains connected to the camera. This way you can change the battery without needing to unplug the D-Tap cables, and this does reduce the risk of issues.
Also inspect your cables regularly; check for damage to the pins and the cable, and if you suspect a cable isn’t perfect, throw it away – don’t take the risk.
(NOTE: It’s been brought to my attention that Red recommend that after connecting the power, but before connecting any SDI cables, you should turn on any monitors etc. If the monitor comes on OK, this is evidence that the power is correctly connected. There is certainly some merit to this. However, this only indicates that there is some power to the monitor; it does not ensure that the ground connection is 100% OK or that the ground voltages at the camera and monitor are the same. By all means power the monitor up to check it has power, but I still recommend that you turn it off again before connecting the SDI.)
The reason Arri talk about shielded power cables is that most shielded power cables use connectors such as Lemo or Hirose where the body of the connector is grounded to the cable shield. This helps ensure that when plugging the power cable in it is the ground connection that is made first and the power connection after. Then, when unplugging, the power breaks first and the ground after. When using properly constructed shielded power cables with Lemo or Hirose connectors it is much less likely that these issues will occur (but not impossible).
Is this an SDI/HDMI fault? No, not really. The fault lies with power cables that allow the power to make before the ground, or the ground to break before the power, and with badly designed power connectors often made as cheaply as possible. Or the fault is with power supplies that have a poor or no ground connection. Additionally you can put it down to user error. I know I’m guilty of rushing to change a battery and pulling a D-Tap connector without first disconnecting the SDI on many occasions, but so far I’ve mostly gotten away with it (I have blown an SDI on one of my Convergent Design Odysseys).
If you are working with an assistant or as part of a larger crew do make sure that everyone on set knows not to plug or unplug power cables or SDI cables without checking that it’s OK to do so – and always unplug the SDI/HDMI before disconnecting or removing anything else.
How many of us have set up a camera, powered it up, got a picture in the viewfinder and then plugged an SDI cable between the camera and a monitor that doesn’t yet have a power connection, or that is already on and plugged into some other power supply? Don’t do it! Plug and unplug in the right order: ALL power cables and power supplies first, check power is going to the camera, check power is going to the monitor, then turn it all off, and finally plug in the SDI.
Wireless video transmitters are nothing new and there are lots of different units on the market. But the Accsoon CineEye 2S stands out from the crowd for a number of reasons.
First is the price: at only £220/$300 USD it’s very affordable for an SDI/HDMI wireless transmitter. But one thing to understand is that it is just a transmitter; there is no receiver. Instead you use a phone or tablet to receive the signal and act as your monitor. You can connect up to 4 devices at the same time and the latency is very low. Given that you can buy a reasonably decent Android tablet or used iPad for £100/$140 these days, it still makes an affordable and neat solution without the need to worry about cables, batteries or cages at the receive end. And most people have an iPhone or Android phone anyway. The Accsoon app includes waveform and histogram displays, LUTs, peaking and all the usual functions you would find on most pro monitors. So it saves tying up an expensive monitor just for a director’s preview. You can also record on the tablet/phone, giving the director or anyone else linked to it the ability to independently play back takes as he/she wishes while you use the camera for other things.
Next is the fact that it doesn’t have any fans. So there is no additional noise to worry about when using it. It’s completely silent. Some other units can get quite noisy.
And the best bit: if you are using an iPhone or iPad with a mobile data connection the app can stream your feed to YouTube, Facebook or any similar RTMP service. With Covid still preventing travel for many, this is a great, extremely portable solution for streaming remote production previews etc. The quality of the stream is great (subject to your data connection) and you don’t need any additional dongles or adapters, it just works!
Watch the video, which was streamed live to YouTube with the CineEye 2S, for more information. At 09.12 I comment that it uses 5G – what I mean is that it has 5GHz WiFi as well as 2.4GHz WiFi for the connection between the CineEye and the phone or tablet. 5GHz WiFi is preferred where possible for better quality connections and better range. https://accsoonusa.com/cineeye/
With the new FX6 making use of SD cards to record higher bit rate codecs, the number of gigabytes of SD card media that many users will be getting through is going to be pretty high. The more gigabytes of memory that you use, the greater the chance of coming across a duff memory cell somewhere on your media.
Normally solid state media will avoid using any defective memory areas. As a card ages and is used more, more cells will become defective and the card will identify these and it should avoid them next time. This is all normal, until eventually the memory cell failure rate gets too high and the card becomes unusable – typically after hundreds or even thousands of cycles.
However, the card needs to discover where any less than perfect memory cells are, and there is a chance that some of these duff cells could remain undiscovered in a card that’s never been completely filled before. I very much doubt that every SD card sold is tested to its full capacity; the vast volume of cards made and the time involved makes this unlikely.
For this reason I recommend that you consider testing any new SD cards using software such as H2Testw for Windows machines or SDSpeed for Macs. However, be warned: fully testing a large card can take a very, very long time.
As an alternative you could simply place the card in the camera and record on it until it’s full. Use the highest frame rate and largest codec the card will support to fill the card as quickly as possible. I would break the recording up into a few chunks. Once the recording has finished, check for corruption by playing the clips back using Catalyst Browse or your chosen edit software.
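If you prefer to test from a computer rather than in camera, a rough home-made alternative is to fill the card with known pseudorandom data and then read it all back, along these lines. The mount point and chunk size below are hypothetical placeholders; adjust them to suit, and delete the test files or reformat the card in camera afterwards.

```python
import os
import hashlib

CARD_PATH = "/Volumes/SDCARD"   # hypothetical mount point - change to suit
CHUNK_MB = 256                  # write the card in 256 MB test files

def fill_and_verify(path: str) -> None:
    """Fill a card with pseudorandom data, then read it back and verify.

    Any checksum mismatch on the read-back pass points to a defective
    area of the card.
    """
    index, hashes = 0, {}
    # Write until the card is full.
    while True:
        data = os.urandom(CHUNK_MB * 1024 * 1024)
        name = os.path.join(path, f"test_{index:04d}.bin")
        try:
            with open(name, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
        except OSError:         # card full (or a write error)
            break
        hashes[name] = hashlib.sha256(data).hexdigest()
        index += 1
    # Read everything back and compare checksums.
    for name, expected in hashes.items():
        with open(name, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        print(name, "OK" if actual == expected else "MISMATCH - suspect card")

fill_and_verify(CARD_PATH)
```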
This may seem like a lot of extra work, but I think it’s worth it for peace of mind before you use your new media on an important job.