Category Archives: Technology

Day For Night With Infrared.

Many of you may have already seen articles about how DP Hoyte Van Hoytema used a Panavision System 65 film camera paired with an Alexa 65 modified to be sensitive to infrared light to shoot day for night on the film “Nope”. https://www.cined.com/filming-night-scenes-thinking-outside-the-box-on-the-film-nope/

Can You Make It Work?

Well, I was recently asked if I could come up with a rig to do the same using Sony cameras for an upcoming blockbuster feature with an A-list director being shot by a top DP. This kind of challenge is something I enjoy immensely, so how could I not accept! I had some insight into how Hoyte Van Hoytema did it, but I had none of the fine details, and often it’s the fine details that make all the difference. And this was no exception. I discovered many small things that need to be just right if this process is to work well, and a lot of things that can trip you up badly.

So a frantic couple of weeks ensued as I tried to learn everything I could about infrared photography and video and how it could be used to improve traditional day for night shooting. I don’t claim any originality in the process, but there is a lot of information missing about how it was actually done in Nope. I have shot with infrared before, so it wasn’t all new, but I had never used it this way before.

As I did a lot of 3D work when 3D was really big around 15 years ago, including designing award-winning 3D rigs, I knew how to combine two cameras on the same optical axis. Even better, I still had a suitable 3D rig, so at least that part of the equation was going to be easy (or at least that’s what I thought).

Building a “Test Mule”.

The next challenge was to create a low cost “test mule” camera before even considering what adaptations might be needed for a full-blown digital cinema camera. To start with this needed to be cheap, but it also needed to be full frame, capable of taking a wide range of cinema lenses, and sensitive to both visible and infrared light. So, I took an old A7S that had been gathering dust for a while, dismantled it and removed the infrared filter from the sensor.

A7S being modified for infrared (full spectrum).
Panavised and Infrared sensitive A7S with Panavision Primo lens.

 

As the DP wanted to test the process with Panavision lenses the camera was fitted with a PV70 mount and then collimated in its now heavily modified state (collimation has some interesting challenges when working with the very different wavelengths of infrared light compared to visible light). Now I could start to experiment, pairing the now infrared sensitive A7S with a second camera on the 3D rig. We soon found issues with this setup, but it allowed me to take the testing to the next stage before committing to modifying a more expensive camera for infrared.




This testing was needed to determine exactly what range of infrared light would produce the best results. The range of infrared you use is determined by filters added to the camera to cut the visible light and only pass certain parts of the infrared spectrum. There are many options, and different filters work in slightly different ways. And not only do you need to test the infrared filters, you also need to consider how different neutral density filters might behave if you need to reduce the IR and visible light. Once I had narrowed down the range of filters I wanted to test, the next challenge was to find very high quality filters that could either be fitted inside the camera body behind the lens, or that were big enough (120mm+) for the Panavision lenses being considered for the film.

Once I had some filters to play with (I had 15 different IR filters) the next step was to start test shooting. I cheated here a bit: for some of the initial testing I used a pair of zoom lenses, as I was pairing the A7S with several different cameras for the visible spectrum. The scan areas of the sensors in the A7S and the visible light cameras were typically very slightly different sizes, so a zoom lens was used to provide the same field of view from both cameras so that both could be more easily optically aligned on the 3D rig. You can get away with this, but it makes more work for post production as the distortions in each lens will be different and need correcting. For the film I knew we would need identical scan sizes and matched lenses, but that could come later once we knew how much camera modification would be needed. To start with I just needed to find out what filtration would be needed.
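As an aside, the zoom-matching trick above is just angle-of-view geometry. Here is a minimal sketch of that arithmetic; the sensor widths in the example are made-up illustrative numbers, not the actual scan sizes of the cameras involved:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view using a simple thin-lens model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def matching_focal_length(focal_a_mm, width_a_mm, width_b_mm):
    """Focal length on sensor B that reproduces the field of view that
    focal_a_mm gives on sensor A (scan widths in mm)."""
    return focal_a_mm * width_b_mm / width_a_mm

# Example: sensor A scans 35.6mm wide at 50mm; what does a slightly
# wider 36.2mm scan need to match the field of view?
focal_b = matching_focal_length(50.0, 35.6, 36.2)  # a touch over 50mm
```

So even a couple of percent difference in scan width means the second camera's zoom needs a proportionally longer or shorter setting before the two images line up on the rig.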

At this point I shot around 100 different filter and exposure tests that I then started to compare in post production. When you get it all just right the sky in the infrared image becomes very dark, almost black and highlights become very “peaky”. If you use the luminance from the infrared camera with its black sky and peaky highlights and then add in a bit of colour and textural detail from the visible camera it can create a pretty convincing day for night look. Because you have a near normal visible light exposure you can fine tune the mix of infrared and visible in post production to alter the brightness and colour of the final composite shot giving you a wide range of control over the day for night look.
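To make the luminance and colour mix a little more concrete, here is a much simplified sketch of the kind of composite described above. It assumes frames that have already been optically aligned and normalised to 0-1 floats, and the `ir_mix` and `night_gain` parameters are hypothetical illustrations of the IR/visible balance and overall darkening, not values from any actual grade:

```python
import numpy as np

def day_for_night_composite(ir_luma, visible_rgb, ir_mix=0.8, night_gain=0.35):
    """Blend IR luminance with visible-light colour for a day-for-night look.

    ir_luma:      HxW float array (0-1), luminance from the IR camera
    visible_rgb:  HxWx3 float array (0-1), frame from the visible camera
    ir_mix:       how much of the luminance comes from the IR camera
    night_gain:   overall darkening applied to the composite
    """
    # Rec.709 luma of the visible frame
    vis_luma = visible_rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Mix IR and visible luminance (the IR channel brings the dark sky)
    luma = ir_mix * ir_luma + (1.0 - ir_mix) * vis_luma
    # Re-apply the visible frame's colour ratios to the new luminance
    ratios = visible_rgb / np.maximum(vis_luma, 1e-6)[..., None]
    return np.clip(ratios * luma[..., None] * night_gain, 0.0, 1.0)
```

Because both exposures are captured, the `ir_mix` and `night_gain` style controls can be adjusted shot by shot in post, which is exactly where the flexibility of the process comes from.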

So – now I knew how to do it, the next step was to take it from the test mule to a pair of matching cinema quality cameras and lenses for a full scale test shoot. When you have two cameras on a 3D rig the whole setup can get very heavy, very fast. Therefore the obvious camera to adapt was a Sony Venice 2 with the 8K sensor, as this can be made very compact by using the Rialto unit to split the sensor from the camera body – in fact one of the very first uses of Rialto was for 3D shooting on Avatar: The Way of Water.

With a bit of help from Panavision we adapted a Panavised Venice 2, making it full spectrum and then adding a carefully chosen (based on my testing) special infrared filter into the camera’s optical path. This camera was configured using a Rialto housing to keep it compact and light, so that when placed on the 3D rig with the visible light Venice the weight remained manageable. The lenses used were Panavision PV70 Primos (if you want to use these lenses for infrared, speak to me first – there are some things you need to know).

3D rig with an Infrared capable Venice Rialto and normal Venice 2 with Panavision Primo lenses.



And then, with the DP in attendance, with smoke and fog machines, lights and grip, we tested. For the first few shots we had scattered clouds, but soon the rain came and then it poured down for the rest of the day. Probably the worst possible weather conditions for a day for night shoot. But that’s what we had, and of course for the film itself there will be no guarantee of perfect weather.

Testing the complete day for night IR rig.

 

Testing how smoke behaves in infrared. Different types of smoke and haze and different types of lights behave very differently in infrared.

 

The large scale tests gave us an opportunity to test things like how different types of smoke and haze behave in infrared and also to take a look at interactions with different types of light sources.  With the right lights you can do some very interesting things when you are capturing both visible light and infrared opening up a whole new world of possibilities for creating unique looks in camera.

From there the footage went to the production company’s post production facilities to produce dailies for the DP to view, before being presented to the studio’s post production people. Once they understood the process and were happy with it there was a screening for the director, along with a number of other tests for lighting and lenses.


Along the way I have learnt an immense amount about this process and how it works: what filters to use and when, how to adapt different cameras, and how different lenses behave in the infrared spectrum (not all lenses can be used). Collimating adapted cameras for infrared is interesting, as many of the usual test rigs will produce misleading or confusing results. I’ve also identified several other ways that a dual camera setup can be used to enhance shooting night scenes, both day for night as well as at night, especially for effects heavy projects.

At the time of writing it looks like most of the night scenes in this film will be shot at night, they have the budget and time to do this. But the director and DP have indicated that there are some scenes where they do wish to use the process (or a variation of it), but they are still figuring out some other details that will affect that decision.

Whether it gets used for this film or not I am now developing a purpose designed rig for day for night with infrared as I believe it will become a popular way to shoot night scenes. My cameras of choice for this will be a pair of Venice cameras. But other cameras can be used provided one can be adapted for IR and both can be synchronised together. I will have a pair of Sony F55’s, one modified for IR available for lower budget productions and a kit to reversibly adapt a Sony Venice. If you need a rig for day for night and someone that knows exactly how to do it, do get in touch! 

I’m afraid I can’t show you the test results; that content is private and belongs to the production. The 3D rig is being modified: as you don’t need the ability to shoot with the cameras optically separated, removing the moving parts will make the rig more stable and easier to calibrate. Plus a new type of beam splitter mirror with better infrared transmission properties is on the way. As soon as I get an opportunity to shoot a new batch of test content with the adapted rig I will share it here.

XAVC-I or XAVC-L – which to choose?

THE XAVC CODEC FAMILY

The XAVC family of codecs was introduced by Sony back in 2014.  Until recently all flavours of XAVC were based on H264 compression. More recently new XAVC-HS versions were introduced that use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. These have both been around for a while now and are well tried and well tested.

XAVC-I

XAVC-I is a very good intra frame codec where each frame is individually encoded. It’s being used for Netflix shows, it has been used for broadcast TV for many years, and there are thousands and thousands of hours of great content that has been shot with XAVC-I without any issues. Most of the in-flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you make a lot of effort to find them. But it is a high compression codec, so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ and, as you can see, there is very little difference between the two, even after several encoding passes.


 

XAVC-L

XAVC-L is a long GoP version of XAVC-I. Long GoP (Group of Pictures) codecs fully encode a start frame, and then for the next group of frames (typically 12 or more) only store the differences between that start frame and the next full frame at the start of the next group. They record the changes between frames using techniques such as motion prediction and motion vectors that, rather than recording new pixels, move existing pixels from the first fully encoded frame through the subsequent frames if there is movement in the shot. Do note that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K XAVC-L is 8 bit (while XAVC-I is 10 bit).

Performance and Efficiency.

Long GoP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GoP is around 2.5x more efficient than the I frame version, and this is why the bit rate of XAVC-I is around 2.5x higher than XAVC-L, so that for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, there will be situations where XAVC-L can perform better than XAVC-I.
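As a rough worked example of what that bit rate difference means for your media, the arithmetic is simple. The 240 Mb/s and 100 Mb/s figures below are illustrative rates in the right ballpark for UHD XAVC-I and XAVC-L on several Sony cameras; check your camera’s manual for the exact numbers for your frame rate:

```python
def recording_minutes(card_gb, bitrate_mbps):
    """Approximate record time for a card of card_gb (decimal) gigabytes
    at a constant video bit rate in megabits per second."""
    card_megabits = card_gb * 1000 * 8  # GB -> megabits
    return card_megabits / bitrate_mbps / 60

# Illustrative comparison on a 128GB card:
xavc_i = recording_minutes(128, 240)  # roughly 71 minutes
xavc_l = recording_minutes(128, 100)  # roughly 170 minutes
```

This is the whole trade-off in two lines: the space saving is real, but it is bought by giving the encoder far fewer bits per second to describe motion with.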

Motion Artefacts.

BUT as soon as you add a lot of motion or a lot of extra noise (which looks like motion to a long GoP codec) Long GoP codecs struggle as they don’t typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let’s face it, the primary reason behind the use of Long GoP encoding is to save space. And that’s done by decreasing the bit rate. So generally long GoP codecs have much lower bit rates so that they will actually provide those space savings. But that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GoP codec to process as almost everything is different from frame to frame including entirely new background information hidden behind the cars in one frame that becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water, flocks of birds are all very challenging and will often exhibit visible artefacts in a lower bit rate long GoP codec that you won’t ever get in the higher bit rate I frame version.
Concatenation.
 
A further issue is concatenation. The artefacts that occur in long GoP codecs often move in the opposite direction to the object that’s actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters or anyone where there may be multiple compression passes using long GoP codecs such as H264 or H265.

Quality depends on the motion.
So, when things are just right and the scene suits XAVC-L it will perform well, and it might show marginally fewer artefacts than XAVC-I – but those artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. But when there is complex motion XAVC-L can produce visible artefacts. And it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts; your footage will be consistently good no matter what you shoot.

Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec, while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, and it can be beneficial if you do need to save card or disk space. But be aware of its limitations, and avoid it if you are shooting handheld or shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K XAVC-L is 8 bit while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.

Why Do Cameras No Longer Have PAL/NTSC Options.

PAL and NTSC are very specifically broadcasting standards for standard definition television. PAL (Phase Alternating Line) and NTSC (National Television System Committee) are analog interlaced standards specifically for standard definition broadcasting and transmission. These standards are now only very, very rarely used for broadcasting. And as most modern cameras are now high definition, digital, and most commonly use progressive scan, these standards no longer apply to them.

As a result you will now rarely see these as options in a modern video camera. In our now mostly digital HD/UHD world the same standards are used whether you are in a country that used to be NTSC or one that used to be PAL; the only difference now is the frame rate. Countries that have 50Hz mains electricity and that used to be PAL countries predominantly use frame rates based on multiples of 25 frames per second. Countries that have 60Hz mains and that used to be NTSC countries use frame rates based on multiples of 29.97 frames per second. It is worth noting that where interlace is still in use the frame rate is half of the field rate. So, where someone talks about 60i (meaning 60 fields per second) the frame rate will actually be 29.97 frames per second, with each frame having two fields. Where someone mentions 50i the frame rate is 25 frames per second.
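The field rate to frame rate relationship trips a lot of people up, so here it is as a tiny sketch:

```python
# Frame rate is half the field rate; "60" labels are really 59.94 in
# former NTSC areas, hence the 1001 denominator.
FIELD_RATES = {"50i": 50.0, "60i": 60000 / 1001}

def interlaced_frame_rate(label):
    """Frames per second for an interlaced label such as '50i' or '60i'."""
    return FIELD_RATES[label] / 2.0
```

So `interlaced_frame_rate("60i")` comes out at 29.97, not 30, and `"50i"` at exactly 25.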

Most modern cameras rather than offering the ability to switch between PAL and NTSC now instead offer the ability to switch between 50 and 60Hz. Sometimes you may still see a “PAL Area” or “NTSC Area” option – note the use of the word “Area”. This isn’t switching the camera to PAL or NTSC, it is setting up the camera for areas that used to use PAL or used to use NTSC. 

My Exposure Looks Different On My LCD Compared To My Monitor!

This is a common problem and something people often complain about: the LCD screen of their camera and the image on their monitor don’t ever seem to quite match in brightness. Or, after the shoot and once in the grading suite, the pictures look brighter or darker than they did at the time of shooting.

A little bit of background info: most of the small LCD screens used on video cameras are SDR Rec-709 devices. If you were to calibrate the screen correctly, the brightness of white on the screen would be 100 nits. It’s also important to note that this is the same level used for monitors designed to be viewed in dimly lit rooms, such as edit or grading suites, as well as TVs at home.

The issue with uncovered LCD screens and monitors is that your perception of brightness changes according to the ambient light level. Indoors in a dark room the image will appear quite bright; outside on a sunny day it will appear much darker. It’s why all high end viewfinders have enclosed eyepieces – not just to help you focus on a small screen, but also because that way you are always viewing the screen under the same, always dark, viewing conditions. It’s why a video village on a film set will be in a dark tent. This allows you to calibrate the viewfinder with white at the correct 100 nit level, and then, when viewed in a dark environment, your images will look correct.


If you are trying to use an unshaded LCD screen on a bright sunny day you may find you end up over exposing as you compensate for the brighter viewing conditions. Or if you also have an extra monitor that is either brighter or darker, you may become confused as to which is the right one to base your exposure assessments on. Pick the wrong one and your exposure may be off. My recommendation is to get a loupe for the LCD; then your exposure assessment will be much more consistent, as you will always be viewing the screen under the same, near ideal, conditions.

It’s also been suggested that perhaps the camera and monitor manufacturers should make more small, properly calibrated monitors. But I think a lot of people would be very disappointed with a properly calibrated but uncovered display where white is 100 nits, as it would be too dim for most outside shoots – great indoors in a dim room such as an edit or grading suite, but unusably dim outside on a sunny day. Most smaller camera monitors are uncalibrated and place white three or four times brighter, at 300 nits or so, to make them more easily viewable outside. But because there is no standard for this there can be great variation between different monitors, making it hard to know which one to trust depending on the ambient light level.

SDI Failures and what YOU can do to stop it happening to you.

Sadly this is not an uncommon problem. Suddenly, and seemingly for no apparent reason, the SDI output on your camera stops working. And this isn’t a new problem either; SDI ports have been failing ever since they were first introduced. This issue affects all types of SDI ports, but it is more likely with higher speed ports such as 6G or 12G, as they operate at higher frequencies and the components used are more easily damaged – it is harder to protect them without degrading the high frequency performance.

Probably the most common cause of an SDI port failure is the use of the now near ubiquitous D-Tap cable to power accessories connected to the camera. The D-Tap connector is, sadly, shockingly crudely designed. Not only is it possible to plug many of the cheaper ones in the wrong way around, but with a standard D-Tap plug there is no mechanism to ensure that the negative or “ground” connection of the D-Tap cable makes or breaks before the live connection. There is, however, a special but much more expensive D-Tap connector available that includes electronic protection against this very issue – see: https://lentequip.com/products/safetap

Imagine for a moment you are using a monitor that’s connected to your camera’s SDI port. You are powering the monitor via the D-Tap on the camera’s battery as you always do, and everything is working just fine. Then the battery has to be changed. To change the battery you have to unplug the D-Tap cable, and as you pull the D-Tap out the ground connection disconnects fractionally before the live connection. During that moment there is still positive power going to the monitor, but because the ground on the D-Tap is now disconnected the only ground route back to the battery is via the SDI cable through the camera. For a fraction of a second the SDI cable becomes the power cable, and that power surge blows the SDI driver chip.

After you have completed the battery swap, you turn everything back on and at first all appears good, but now you can’t get the SDI output to work. There’s no smoke, no burning smells, no obvious damage as it all happened in a tiny fraction of a second. The only symptom is a dead SDI.

And it’s not only D-Tap cables that can cause problems. A lot of cheap DC barrel connectors have a center positive terminal that can connect before the outer barrel makes a good connection, and there are many other connectors where the positive can make before the negative.

It can also happen when powering the camera and monitor (or other SDI connected devices such as a video transmitter) via separate mains adapters. The power outputs of most of the small, modern, generally plastic bodied switch mode power adapters and chargers are not connected to ground. They have a positive and negative terminal that “floats” above ground at some unknown voltage, and each power supply’s negative rail may be at a completely different voltage relative to ground. So again, an SDI cable connected between two devices powered by different power supplies will act as the ground between them, and power may briefly flow down the SDI cable as the SDI cable’s ground brings both power supply negative rails to the same common voltage. Failures this way are less common, but do still occur.

For these reasons you should always connect all your power supplies, power cables and especially D-Tap or other DC power cables first. Then, while everything remains switched off, connect the SDI cables. Only when everything is connected should you turn anything on. If unplugging or re-plugging a monitor (or anything else for that matter) turn everything off first. Do not connect or disconnect anything while any of the equipment is on. Although, to be honest, the greatest risk is when you connect or disconnect any power cables, such as when swapping a battery where you are using the D-Tap to power accessories. So if changing batteries, switch EVERYTHING off first, then disconnect your SDI cables before disconnecting the D-Tap or other power cables.

(NOTE: It’s been brought to my attention that Red recommend that after connecting the power, but before connecting any SDI cables you should turn on any monitors etc. If the monitor comes on OK, this is evidence that the power is correctly connected. There is certainly some merit to this. However this only indicates that there is some power to the monitor, it does not ensure that the ground connection is 100% OK or that the ground voltages at the camera and monitor are the same. By all means power the monitor up to check it has power, then I still recommend that you turn it off again before connecting the SDI).
 
The reason Arri talk about shielded power cables is because most shielded power cables use connectors such as Lemo or Hirose where the body of the connector is grounded to the cable shield. This helps ensure that when plugging the power cable in it is the ground connection that is made first and the power connection after. Then when unplugging the power breaks first and ground after. When using properly constructed shielded power cables with Lemo or Hirose connectors it is much less likely that these issues will occur (but not impossible).

Is this an SDI fault? No, not really. The fault lies in the choice of power cables that allow the power to make before the ground, or the ground to break before the power. Or the fault is with power supplies that have a poor or no ground connection. Additionally you can put it down to user error. I know I’m guilty of rushing to change a battery and pulling a D-Tap connector without first disconnecting the SDI on many occasions, but so far I’ve mostly gotten away with it (I have blown an SDI on one of my Convergent Design Odysseys).

If you are working with an assistant or as part of a larger crew, do make sure that everyone on set knows not to plug or unplug power cables or SDI cables without checking that it’s OK to do so. How many of us have set up a camera, powered it up, got a picture in the viewfinder and then plugged an SDI cable between the camera and a monitor that doesn’t have a power connection yet, or is already on and plugged in to some other power supply? Don’t do it! Plug and unplug in the right order – ALL power cables and power supplies first, check power is going to the camera, check power is going to the monitor, then turn it all off, and finally plug in the SDI.

Accsoon CineEye 2S

Wireless video transmitters are nothing new and there are lots of different units on the market. But the Accsoon CineEye 2S stands out from the crowd for a number of reasons.

First is the price: at only £220/$300 USD it’s very affordable for an SDI/HDMI wireless transmitter. But one thing to understand is that it is just a transmitter; there is no receiver. Instead you use a phone or tablet to receive the signal and act as your monitor. You can connect up to 4 devices at the same time and the latency is very low. Given that you can buy a reasonably decent Android tablet or used iPad for £100/$140 these days, it still makes an affordable and neat solution without the need to worry about cables, batteries or cages at the receive end. And most people have an iPhone or Android phone anyway. The Accsoon app includes waveform and histogram displays, LUTs, peaking and all the usual functions you would find on most pro monitors. So it saves tying up an expensive monitor just for a director’s preview. You can also record on the tablet/phone, giving the director or anyone else linked to it the ability to independently play back takes as he/she wishes while you use the camera for other things.


Next is the fact that it doesn’t have any fans, so there is no additional noise to worry about when using it. It’s completely silent. Some other units can get quite noisy.

And the best bit: if you are using an iPhone or iPad with a mobile data connection the app can stream your feed to YouTube, Facebook or any similar RTMP service. With Covid still preventing travel for many, this is a great solution for extremely portable streaming for remote production previews etc. The quality of the stream is great (subject to your data connection) and you don’t need any additional dongles or adapters – it just works!

Watch the video, which was streamed live to YouTube with the CineEye 2S, for more information. At 09.12 I comment that it uses 5G – what I mean is that it has 5GHz WiFi as well as 2.4GHz WiFi for the connection between the CineEye and the phone or tablet. 5GHz WiFi is preferred where possible for better quality connections and better range. https://accsoonusa.com/cineeye/

 

Checking SD Cards Before First Use.

With the new FX6 making use of SD cards to record higher bit rate codecs, the number of gigabytes of SD card media that many users will be getting through is going to be pretty high. The more gigabytes of memory that you use, the greater the chance of coming across a duff memory cell somewhere on your media.

Normally solid state media will avoid using any defective memory areas. As a card ages and is used more, more cells will become defective; the card will identify these and should avoid them next time. This is all normal, until eventually the memory cell failure rate gets too high and the card becomes unusable – typically after hundreds or even thousands of cycles.

However, the card needs to discover where any less than perfect memory cells are, and there is a chance that some of these duff cells could remain undiscovered in a card that’s never been completely filled before. I very much doubt that every SD card sold is tested to its full capacity; the vast volume of cards made and the time involved make this unlikely.

For this reason I recommend that you consider testing any new SD cards using software such as H2Testw for Windows machines or SDSpeed for Macs. But be warned: fully testing a large card can take a very, very long time.

As an alternative you could simply place the card in the camera and record on it until it’s full. Use the highest frame rate and largest codec the card will support to fill the card as quickly as possible. I would break the recording up into a few chunks. Once the recording has finished, check for corruption by playing the clips back using Catalyst Browse or your chosen edit software.

This may seem like a lot of extra work, but I think it’s worth it for peace of mind before you use your new media on an important job.

Atomos Adds Raw Over SDI For The Ninja V via the AtomX.

I know this is something A LOT of people have been asking for. For a long time it has seemed odd that only the Shogun 7 was capable of recording raw from the FX9 and then the FX6, while the little Ninja V could record almost exactly the same raw from the A7SIII.

Well the engineers at Atomos have finally figured out how to pass raw via the AtomX SDI adapter to the Ninja V. The big benefit of course being the compact size of the Ninja V.

There are a couple of ways of getting the kit you need to do this.

If you already have a Ninja V (they are GREAT little monitor recorders – I’ve taken mine all over the world, from the Arctic to Arabian deserts) you simply need to buy an AtomX SDI adapter and, once you have that, a raw licence from the Atomos website for $99.00.

If you don’t have the Ninja V then you can buy a bundle called the “Pro Kit” that includes everything you need: a Ninja V with the raw licence pre-installed, the AtomX SDI adapter, a D-Tap power adapter cable, a mains power supply and a sun hood. The cost of this kit will be around $950 USD or £850 GBP + tax, which is a great price.

On top of that you will need to buy suitably fast SSDs.

Like the Shogun 7, the Ninja V can't record the 16 bit raw from the FX6 or FX9 directly, so Atomos takes the 16 bit linear raw and converts it to 12 bit log raw using a visually lossless process. 12 bit log raw is a really nice raw format, and the ProRes RAW codec helps keep the file sizes nice and manageable.
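Atomos hasn't published the exact transfer function, but the principle behind fitting 16 bit linear into 12 bit log is easy to illustrate. Here's a minimal Python sketch using a plain log2 curve (my assumption purely for illustration, not the real Atomos maths):

```python
import math

def linear16_to_log12(x):
    """Map a 16 bit linear sample (0..65535) onto a 12 bit log scale.
    Illustrative pure log2 curve, NOT Atomos' actual transfer function."""
    if x <= 0:
        return 0
    # ~16 stops of linear range spread evenly across the 4096 output codes
    return round(math.log2(x + 1) / 16.0 * 4095)
```

In linear encoding the brightest stop alone consumes half of all the available code values while the deepest shadows get almost none; a log curve like this gives each stop roughly the same number of codes, which is why the drop from 16 bits to 12 can be visually lossless.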

This is a really great solution for recording raw from the FX6 and FX9. Plus if you already have an A7SIII you can use the Ninja V to record via HDMI from that too.

Here’s the press release from Atomos:

The Atomos Ninja V Pro Kit is here to equip you with increased
professional I/O, functionality and accessories.

The Ninja V Pro Kit has been designed to bridge the gap between compact cinema and mirrorless cameras that can output RAW via HDMI or SDI. Pro Kit also pushes these cameras' limits, recording up to 12-bit RAW externally on the Ninja's onboard SSD. Additionally, Pro Kit provides the ability to cross convert signals, providing a versatile solution for monitoring, play out and review.
 
What comes in the Pro Kit?
  • Ninja V Monitor-Recorder with pre-activated RAW over SDI
  • AtomX SDI Module
  • Locking DC to D-Tap cable to power from camera battery
  • AtomX 5″ Sunhood
  • DC/Mains power with international adaptor kit
Ninja V Pro Kit offers a monitor and recording package to cover a wide range of workflows.

Why choose Ninja V Pro Kit?
  • More powerful and versatile I/O for Ninja V – Expand your Ninja V’s capability with the Pro Kit with the ability to provide recordings in edit-ready codecs or as proxy files from RED or ARRI cameras.
  • Accurate and reliable daylight-viewable HDR or SDR – To ensure image integrity, the AtomX 5″ Sunhood is included; it increases perceived brightness under challenging conditions and can be used to dial out ambient light to improve the view in HDR.
  • HDMI-to-SDI cross conversion – HDMI or SDI connections can be cross converted, 4K to HD down converts RAW to video signals to connect to other systems without the need for additional converters.
  • Record ProRes RAW via SDI from selected cameras*
  • Three ways to power your Ninja:
    – DC power supply – perfect for in the studio.
    – D-Tap cable – perfect for on-set, meaning your rig can run from a single power source.
    – Optional NPF battery, or any four-cell NPF you might have in your kit bag.

The ProRes RAW Advantage
ProRes RAW is now firmly established as the new standard for RAW video capture, with an ever-growing number of supported HDMI and SDI cameras. ProRes RAW combines the visual and workflow benefits of RAW video with the incredible real-time performance of ProRes. The format gives filmmakers enormous latitude when adjusting the look of their images and extending brightness and shadow detail, making it ideal for HDR workflows. Both ProRes RAW and the higher bandwidth, less compressed ProRes RAW HQ are supported. Manageable file sizes speed up and simplify file transfer, media management, and archiving. ProRes RAW is fully supported in Final Cut Pro, Adobe Premiere Pro, Avid Media Composer 2020.10 update, along with a collection of other apps including ASSIMILATE SCRATCH, Colorfront, FilmLight Baselight, and Grass Valley Edius.
 

Existing Ninja V and AtomX SDI module owners
While the Pro Kit offers a complete bundle, existing Ninja V owners can bring their equipment up to the same level by purchasing the AtomX SDI module for $199. The new RAW over SDI and HDMI RAW to SDI video feature can also be added to the Ninja V via a separate activation key from the Atomos website for $99.

Existing AtomX SDI module owners will receive the SDI < > HDMI cross conversion for 422 video inputs in the 10.61 firmware update for the Ninja V. You will also be able to benefit from RAW over SDI recording with the purchase of the SDI RAW activation key. This feature will be available from the Atomos website in February 2021.
 
 
Special Offer for Pro Kit buyers
The first batch of Ninja V Pro Kits will include a FREE Atomos CONNECT in the box.
Connect allows you to start streaming at up to 1080p60 directly from your Ninja V!
Learn more about Connect here.
 

Availability
The Ninja V Pro Kit is available to purchase from your local reseller.
Find your local reseller here.

$949 USD, excluding local taxes.

*Selected cameras only – RAW outputs from Sony’s FS range (FS700, FS5, FS7) are NOT supported on Ninja V with AtomX SDI Module and RAW Upgrade. Support for these cameras is ONLY available on Shogun 7.

Accsoon Cineeye 2S Wireless Video Link with Streaming Function

There are now quite a lot of these wireless video devices appearing on the market. I have a Hollyland Mars 400 kit and it works really well. But this one caught my eye because it can also stream to platforms such as YouTube using RTMP.
In these days of remote production, being able to stream the camera's output to a remote client or producer could prove very useful.

I haven’t seen one in person and I don’t know the company, so no idea if it’s actually any good. But certainly on paper it’s really interesting. 

Here’s the info from the press release. 

 
