Category Archives: cinematography

Should You Use In Camera Noise Reduction Or Not?

This is another common question on many user groups. It comes up time and time again. But really there is no one clear cut answer. In a perfect world we would never need to add any noise reduction, but we don’t live and shoot in a perfect world. Often a camera might be a little noisy or you may be shooting with a lot less light than you would really like, so in camera NR might need to be considered.

You need to consider carefully whether you should use in camera NR or not. There will be some cases where you want in camera NR and other times when you don’t.

Post Production NR.
An important consideration is that adding post production NR on top of in-camera NR is never the best route to go down. NR on top of NR will often produce ugly blocky artefacts. If you ever want to add NR in post production it is almost always better not to also add in camera NR. Post production NR has many advantages as you can more precisely control the type and amount you add depending on what the shot needs. When using proper grading software such as DaVinci Resolve you can use power windows or masks to only add NR to the parts of the image that need it.

Before someone else points it out, I will add here that it is almost always impossible to turn off all in camera NR. There will almost certainly be some NR applied at the sensor that you cannot turn off. In addition, most recording codecs will apply some noise reduction to avoid wasting data recording the noise; again, this can’t be turned off. Generally, higher bit rate, less compressed codecs apply less NR. What I am talking about here is the additional NR that can be set to differing levels within the camera’s settings, on top of the NR that occurs at the sensor or in the codec.

Almost every NR process, as well as reducing the visibility of noise, will introduce other image artefacts. Most NR processes work by taking an average value for groups of pixels or an average value for the same pixel over a number of frames. This averaging tends to not only reduce the noise but also reduce fine details and textures. Faces and skin tones may appear smoothed and unnatural if excessively noise reduced. Smooth surfaces such as walls or the sky may get broken up into subtle bands or steps. Sometimes these artefacts won’t be seen in the camera’s viewfinder or on a small screen and only become apparent on a bigger TV or monitor. Often the banding artefacts seen on walls etc are a result of excessive NR rather than a poor codec (although the two are often related, as a weak codec may have to add a lot of NR to a noisy shot to keep the bit rate down).
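To illustrate the averaging trade-off described above, here is a minimal sketch of temporal noise reduction. This is a generic illustration, not any specific camera’s algorithm, and the noise figures are arbitrary:

```python
import random
import statistics

# Minimal sketch of temporal NR: average the same pixel across several
# frames. Averaging N frames reduces random noise by roughly sqrt(N),
# but anything that moves between frames gets averaged away too - which
# is why fine detail and texture suffer.

random.seed(42)

def noisy_frame(true_value, noise_sigma, n_pixels=1000):
    """One 'frame' of a flat grey patch with Gaussian sensor noise."""
    return [true_value + random.gauss(0, noise_sigma) for _ in range(n_pixels)]

def temporal_average(frames):
    """Average each pixel position across all frames."""
    return [sum(px) / len(frames) for px in zip(*frames)]

frames = [noisy_frame(0.5, 0.05) for _ in range(4)]
averaged = temporal_average(frames)

print(round(statistics.stdev(frames[0]), 3))  # noise of a single frame
print(round(statistics.stdev(averaged), 3))   # roughly halved after averaging 4 frames
```

The same averaging applied spatially (groups of neighbouring pixels) is what smooths skin and breaks flat surfaces into bands when pushed too far.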

If you are shooting log then any minor artefacts in the log footage from in camera noise reduction may be magnified when you start grading and boosting the contrast. So, generally speaking, when shooting log it is best to avoid adding in camera NR. The easiest way to avoid noise when shooting with log is to expose a bit brighter so that in the grade you are never adding gain. Take gain away in post production to compensate for a brighter exposure and you take away much of the noise – without giving up those fine textures and details that make skin tones look great. If shooting log, really the only reason an image will be noisy is because it hasn’t been exposed bright enough. Even scenes that are meant to look dark need to be exposed well. Scenes with large dark areas need good contrast between at least some brighter parts so that the dark areas appear to be very dark compared to the bright highlights. Without any highlights it’s always tempting to bring up the shadows to give some point of reference. Add a highlight such as a light fixture or a lit face or object and there is no need to bring up the shadows; they can remain dark. Contrast is king when it comes to dark and night scenes.
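The arithmetic behind "expose brighter, pull it down in the grade" can be sketched like this. These are illustrative numbers only, and the sketch assumes the dominant noise is a fixed sensor noise floor that does not scale with exposure:

```python
# Sketch of why exposing log brighter and pulling it back down in the
# grade reduces visible noise. Illustrative numbers, not measurements.
signal = 100.0       # arbitrary scene signal level
noise_floor = 4.0    # fixed sensor noise, the same at any exposure

normal_snr = signal / noise_floor

# Expose one stop brighter: the signal doubles, the noise floor doesn't.
bright_signal = signal * 2.0

# In the grade, pull everything down one stop (multiply by 0.5):
graded_signal = bright_signal * 0.5  # back to the intended level
graded_noise = noise_floor * 0.5     # the noise comes down with it

brighter_snr = graded_signal / graded_noise
print(normal_snr, brighter_snr)  # 25.0 50.0 - twice the signal to noise ratio
```

The cost, as discussed elsewhere on this site, is that each stop of brighter exposure takes one stop off the highlight headroom.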

If, however, you are shooting for “direct to air” or content that won’t be graded and needs to look as good as possible straight from the camera, then a small amount of in camera NR can be beneficial. But you should test the camera’s different levels to see how much difference each level makes while also observing what happens to subtle textures and fine details. There is no free lunch here. The more NR you use the more fine details and textures you will lose, and generally the difference in the amount of noise removed between the mid and high settings is quite small. Personally I tend to avoid using high and stick to the low or medium levels. As always, good exposure is the best way to avoid noise. Keep your gain and ISO levels low, add light if necessary or use a faster lens; this is much more effective than cranking up the NR.

Sony Launches Venice II

Sony Venice II

 

Today Sony launched Venice II. It was perhaps not one of the best kept secrets, with many leaks over the last few weeks, but finally we officially know that it’s called Venice II and it has an 8K (8.6K maximum) sensor recording 16 bit linear X-OCN or ProRes to two built-in AXS card slots.

The full information about the new camera is here. https://pro.sony/en_GB/products/digital-cinema-cameras/venice2

Venice II is in essence the original Venice camera and the AXS-R7 all built into a single unit.  But to achieve this the ability to use SxS cards has been dropped, Venice II only works with AXS cards. The XAVC-I codec is also gone. The new camera is only marginally longer than the original Venice camera body.


As well as X-OCN (the equivalent of a compressed raw recording) Venice II can also record 4K ProRes HQ and 4K ProRes 444. Because the sensor is an 8.6K sensor, that 4K 444 will be “real” 444 with a real red, green and blue sample at every position in the image. This will be a great format for those not wishing to use X-OCN. But why not use X-OCN? The files are very compact and full of 16 bit goodness, and I find X-OCN just as easy to work with as ProRes.

One thing that Venice II can’t do is record proxies. Apparently user feedback is that these are rarely used. I guess in a film style workflow where you have an on set DIT station it’s easy for proxies to be created on set. Or you can create proxies in most edit applications when you ingest the main files, but I do wonder if proxies are something some people will miss if they only have X-OCN files to work from.

New Sensor:


There has been a lot of speculation that the sensor used in Venice II is the same as the sensor in the Sony A1 mirrorless camera; after all, the pixel count is exactly the same. We already know that the A1 sensor is a very nice and very capable sensor. So IF it were to be the same sensor but paired with significantly more and better processing power and an appropriate feature set for digital cinema production, it would not be anything to complain about. But it is unlikely that it is the very same sensor. It might be based on the A1 sensor (and the original Venice sensor is widely speculated to be based on the A9 sensor) but one thing you don’t want on these sensors is the phase detection sites used for autofocus.

When you expand these very high quality images onto very big screens, even the smallest of image imperfections can become an issue. The phase detection pixels and the wires that interconnect them can form a very, very faint fixed pattern within the image. In a still photograph you would probably never see this. In a highly compressed image, compression artefacts might hide it (although both the FX6 and FX9 exhibit some fixed pattern noise that might in part be caused by the AF sites). But on a giant screen, with a moving image, this faint fixed pattern may be perceptible to audiences, and that just isn’t acceptable for a flagship cinema camera. So, I am led to believe that the sensors used in both the original Venice and Venice II do not have any AF phase detection pixels or wire interconnects, which means they cannot be the very same sensors as found in the A1 or A9. They are most likely made specifically for Venice.
Also, most stills camera based sensors can only be read at 12 bits when used for video. Perhaps another key difference is that, when used with the cooling system in the Venice cameras, these sensors can be read at 16 bits at video frame rates rather than 12 or 14.

The processing hardware in Venice II has been significantly upgraded from the original Venice. This was necessary to support the data throughput needed to shoot at 8.6K and 60fps as well as the higher resolution SDI outputs and much improved LUT processing. Venice II can also be painted live on set via both WiFi and Ethernet. So the very similar exterior appearance hides the fact that this really is a completely new camera.


My Highlights:

I am not going to repeat all the information in the press releases or on the Sony website here. But what I will say is I like what I see. Integrating the R7 into the Venice II body makes the overall package smaller. There are no interconnections to go wrong. The increase in dynamic range to 16 stops, largely thanks to a lower noise floor is very welcome. There was nothing wrong with the original Venice, but this new sensor is just that bit better.

The default dynamic range split gives the same +6 stops as most of Sony’s current cameras but goes down to -10 stops. With the very low noise floor that this sensor has, rating the camera higher than the 800 base ISO to gain a bit of extra headroom shouldn’t be an issue. Sample footage from Venice II shows that the way the highlights reach their limits is very pleasing.

The LUT processing has been improved and now you can have 3D LUTs in 4K on SDIs 1&2, which are 12G, and in HD at the same time on SDIs 3&4, which are 3G – as well as on the monitor out and in the VF. This is actually quite a significant upgrade; the original Venice is a little lacking in the way it handles LUTs. The ART look system is retained if you want even higher quality previews than are possible with 33x LUTs. There is also built in ACES support with a new RRT. This makes the camera extremely easy to use for ACES workflows, and the 16 bit linear X-OCN is a great fit for ACES.


It retains the ability to remove the sensor head so it can be used on the end of an extension cable. Venice II can use either the original 6K Venice sensor or the new 8K sensor; however, a new extension cable, which won’t be available until some time in 2023, is needed before the head can be separated, so Venice 1 will still have a place for some considerable time to come.

Venice only takes the original 6K sensor but Venice II can take either the original 6K sensor or the new 8K sensor.



Moving the dual ISO from 500/2500 to 800/3200 brings Venice II’s lower base ISO up to the same level as the majority of other Cinema cameras. I know that some found 500 ISO slightly odd to work with. This will just make it easier to work alongside other similarly rated cameras.

Another interesting consideration is that you can shoot at 5.8K pixels with a Super 35mm sized scan. This means that the 4K Super 35mm material will have greater resolution than from the original Venice or many other S35 cameras that only use 4K of pixels at S35. There is a lot of very beautiful Super 35mm cine glass available, and being able to shoot using classic cinema glass and get a nice uplift in image resolution is going to be a real benefit. Additionally, there will be some productions where the shallower DoF of Full Frame may not be desirable, or where the 8.6K files are too big and unnecessary. I can see Venice II being a very nice option for those wishing to shoot Super 35.

But where does this leave existing Venice owners? 

For a start, the price of Venice 1 is not going to change. Sony are not dropping the cost. This new Venice is an upgrade over the original and more expensive (but the price does include the high frame rate options). My suspicion is that Venice II will not be significantly more expensive than the cost of the current Venice + R7 + HFR licence. Sony want this camera to sell well, so they won’t want to make it cost significantly more, as then many would just stick with Venice 1. The original remains a highly capable camera that produces beautiful images, and if you don’t need 8.6K the reasons to upgrade are fewer. The basic colour science of both cameras remains the same, so there is no reason why both can’t be used together on the same projects. Venice 1 can work with lower cost SxS cards and XAVC-I if you need very small files and a very simple workflow; Venice II pushes you to an AXS card based workflow, and AXS cards are very expensive.

If you have productions that need the Rialto system and the ability to un-dock the sensor, this isn’t going to be available for Venice II until 2023, so original Venice cameras will still be needed for Rialto applications for some time.

Of course it always hurts when a new camera comes out, but I don’t think existing Venice owners should be too concerned. If customers really felt they needed 8.6K then they would likely have already been lost to a Red camera and the Red ecosystem. And now that there is an 8K Venice option, that might help keep the original Venice viable for second unit, Rialto (for now at least) or secondary roles within productions shooting primarily in 8K.

I like everything I see about Venice II, but it doesn’t make Venice 1 any less of a camera.


My Exposure Looks Different On My LCD Compared To My Monitor!

This is a common problem and something people often complain about. It may be that the brightness of the image on their camera’s LCD screen and on their monitor never quite seem to match. Or, after the shoot and once in the grading suite, the pictures look brighter or darker than they did at the time of shooting.

A little bit of background info: most of the small LCD screens used on video cameras are SDR Rec-709 devices. If you were to calibrate the screen correctly, the brightness of white on the screen would be 100 nits. It’s also important to note that this is the level used for monitors that are designed to be viewed in dimly lit rooms, such as edit or grading suites, as well as for TVs at home.

The issue with uncovered LCD screens and monitors is that your perception of brightness changes according to the ambient light levels. Indoors in a dark room the image will appear to be quite bright. Outside on a sunny day it will appear to be much darker. It’s why all high end viewfinders have enclosed eyepieces: not just to help you focus on a small screen, but also because that way you are always viewing the screen under the same, always dark, viewing conditions. It’s why a video village on a film set will be in a dark tent. This allows you to calibrate the viewfinder with white at the correct 100 nit level, and then, when viewed in a dark environment, your images will look correct.


If you are trying to use an unshaded LCD screen on a bright sunny day you may find you end up overexposing as you compensate for the brighter viewing conditions. Or, if you also have an extra monitor that is either brighter or darker, you may become confused as to which is the right one to base your exposure assessments on. Pick the wrong one and your exposure may be off. My recommendation is to get a loupe for the LCD; then your exposure assessment will be much more consistent, as you will always be viewing the screen under the same, near ideal conditions.

It’s also been suggested that perhaps the camera and monitor manufacturers should make more small, properly calibrated monitors. But I think a lot of people would be very disappointed with a properly calibrated but uncovered display where white is 100 nits, as it would be too dim for most outside shoots. Great indoors in a dim room such as an edit or grading suite, but unusably dim outside on a sunny day. Most smaller camera monitors are uncalibrated and place white three or four times brighter, at 300 nits or so, to make them more easily viewable outside. But because there is no standard for this there can be great variation between different monitors, making it hard to know which one to trust depending on the ambient light levels.

Chroma Key and Greenscreen – Should I use S-Log3 or might Rec-709 be Better?

I’ve covered this before, but as this came up again in an online discussion I thought I would write about it again. For decades, when I was doing a lot of corporate video work, we shot greenscreen and chroma key with analogue or 8 bit, limited dynamic range, standard definition cameras and generally got great results (it was very common to use a bluescreen, as blue spill doesn’t look as bad on skin tones as green). So now that we have cameras with much greater dynamic range and 10 bit recording, is it better to shoot for greenscreen using S-Log3 (or any other log curve for that matter) or perhaps Rec-709?

Before going further I will say that there is no yes-no, right-wrong answer to this question. I will also add that Rec-709 gets a bad rap because people don’t really understand how gamma curves/transfer functions actually work and how modern grading software is able to re-map the acquisition transfer function to almost any other transfer function. If you use a colour managed workflow in DaVinci Resolve it is very easy to take a Rec-709 recording and map it to S-Log3 so that you can apply the same grades to the 709 as you would to material originated using S-Log3. Of course the 709 recording may not have as much dynamic range as an S-Log3 recording, but it will “look” more or less the same.
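As a sketch of what re-mapping one transfer function to another actually involves, the code below decodes Rec-709 back to scene-linear light with the published BT.709 inverse OETF and then re-encodes it with Sony’s published S-Log3 curve. Real grading software also handles legal/full range scaling, bit-depth quantisation and colour gamut, all of which are ignored here:

```python
import math

# Decode Rec-709 to scene-linear, then re-encode as S-Log3.
# Both formulas are the published curves, normalised to 0-1.

def rec709_to_linear(v):
    """Inverse of the Rec-709 OETF (ITU-R BT.709)."""
    if v < 4.5 * 0.018:
        return v / 4.5
    return ((v + 0.099) / 1.099) ** (1 / 0.45)

def linear_to_slog3(x):
    """Sony's published S-Log3 encoding; scene reflectance in, 0-1 out."""
    if x >= 0.01125:
        # 0.19 is (0.18 + 0.01), so 18% grey lands at 420/1023, about 41%
        return (420.0 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

def remap_709_to_slog3(v):
    return linear_to_slog3(rec709_to_linear(v))

# Rec-709 encodes middle grey near 41%, and the remap lands it at
# S-Log3's 41% middle grey level:
print(round(remap_709_to_slog3(0.409), 3))  # → 0.411
```

This is why, within the 709 recording’s capture limits, the re-mapped footage grades just like S-Log3 originated material.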

Coming back to shooting greenscreen and chromakey:

S-Log3:
Shoot using 10 bit S-Log3 and you have 791 code values available (95-886) to record 14/15 stops of dynamic range. So on average across the entire curve each stop has around 55 code values. Between middle grey and +2 stops there are approximately 155 code values – this region is important, as this is where the majority of skin tones and the key background are likely to fall.

Rec-709:
Shoot using vanilla Rec-709 and you are using 929 code values (90-1019) to record 6/7 stops, so on average across the entire curve each stop has around 125 code values. Between middle grey and +2 stops there are going to be around 340 code values.
That is not an insignificant difference; it’s not far off the difference between shooting with 10 bit or 12 bit.
If you were to ask someone whether it is better to shoot using 10 bit or 12 bit, I am quite sure the automatic answer would be 12 bit, because the general consensus is: more bits is always better.
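The arithmetic behind the code-value figures above can be checked like this. The stop counts (14.5 and 7.5) are my approximations sitting within the 14/15 and 6/7 stop ranges quoted; they are not Sony specifications:

```python
# Average code values per stop for 10 bit S-Log3 vs vanilla Rec-709,
# using the recording ranges quoted above and assumed stop counts.

def codes_per_stop(code_min, code_max, stops):
    # code_max - code_min matches the code-value counts quoted above
    return (code_max - code_min) / stops

slog3 = codes_per_stop(95, 886, 14.5)   # 10 bit S-Log3 recording range
rec709 = codes_per_stop(90, 1019, 7.5)  # 10 bit Rec-709 recording range

print(round(slog3))   # → 55 code values per stop on average
print(round(rec709))  # → 124 code values per stop on average
```

The averages hide the fact that both curves allocate more code values to some stops than others, but the overall difference in precision per stop is clear.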
A further consideration is that the Sony cameras operate at a lower ISO when shooting with standard gammas and as a result you will have an improved signal to noise ratio using 709 than when using S-log3 and this can also make it easier to achieve a good, clean, key.
However you do also need to think about what it is you are shooting and how it will be used. If you are shooting greenscreen in a studio then you should have full control over your lighting and in most cases 6 or 7 stops is all you need, so Rec-709 should be able to capture everything comfortably well. If you are shooting outside with less control over the light perhaps Rec-709 won’t have sufficient range.
If the background plates have been shot using S-Log3 then some people don’t like keying 709 into S-Log3. However, a colour managed workflow can deal with this very easily. In a workflow where grading is a big part, 709 and S-Log3 should not be thought of as “looks” but simply as transfer functions, or maps of what brightness/saturation seen by the camera is recorded at what code value. Handle these transfer functions correctly via a colour managed workflow and both will “look” the same and both will grade the same within their respective capture limits.
For an easy workflow you might choose to shoot the greenscreen elements using log with the same settings as the plates. There is nothing wrong with this; it works and is a very commonly used workflow, but it isn’t necessarily always going to be optimum. A lot of people will put a lot of emphasis on using raw or greater bit depths to maximise the quality of their keying, but overlook gamma choice altogether, simply because “Rec-709” is almost a dirty word these days.
If you have more control, and want absolutely the best possible key, you might be better off using Rec-709, as you will have more data per stop and less noise, which makes it easier for the keying software to identify edges. If using Rec-709 you want to choose a version of Rec-709 where you can turn off the camera’s knee, as this will prevent the 709 curve from crushing the highlights, which can make them difficult to grade. In a studio situation you shouldn’t need to use a heavy knee.

I suggest you experiment and test for yourself. Not every situation will be the same; sometimes S-Log3 will be the right choice, other times Rec-709.

Pixels and Resolution are not the same thing.

Before the large sensor revolution, most professional video cameras used three sensors, one each for red, green and blue. And each of those sensors normally had as many pixels as the resolution of the recording format, so you had enough pixels in each colour for full resolution in each colour.

Then along came large sensor cameras, where the only way to make it work was by using a single sensor (the optical prism would be too big to accommodate any existing lens system). So now all your pixels have to be on one sensor, divided up between red, green and blue.

Almost all camera manufacturers ignored the inconvenient truth that a colour sensor with 4K of pixels won’t deliver 4K of resolution. We were sold these new 4K cameras, but the 4K doesn’t mean 4K resolution, it means 4K of pixels. To be fair to the manufacturers, they didn’t claim 4K resolution, but they were also quite happy to let end users think that that’s what the 4K meant.

My reason for writing about this topic again is because I just had someone on my Facebook feed discussing how wonderful it was to be shooting at 6K with a new camera, as this would give lots of space for reframing for 4K.

The nature of what he wrote – “shooting at 6K” – implies shooting at 6K resolution. But he isn’t; his 6K sensor is probably delivering around 4K resolution, and he won’t have any room for reframing if he wants to end up with a 4K resolution final image. Again, in the name of fairness, shooting with 6K of pixels is going to be better than shooting with 4K of pixels if you do choose to reframe. But we really, really need to be careful about how we use terms like 4K or 6K. What do we really mean? What are we really talking about? Because the more we muddle pixels with resolution, the less clear it will be what we are actually recording. Eventually no one will really understand that the two are different, and the differences really do matter.
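A commonly quoted rule of thumb, which I’ll use here purely as an illustration rather than a measured figure for any particular camera, is that a single-chip Bayer colour sensor resolves around 0.7x its pixel count, because each photosite samples only one colour. The exact factor varies with the sensor and the debayer algorithm:

```python
# Rough rule of thumb: Bayer sensor resolution is about 0.7x the pixel
# count. The 0.7 factor is a commonly quoted approximation only.
BAYER_FACTOR = 0.7

def approx_resolution_k(pixel_count_k):
    """Estimate delivered resolution (in 'K') from pixel count (in 'K')."""
    return pixel_count_k * BAYER_FACTOR

print(round(approx_resolution_k(6.0), 1))  # a "6K" Bayer sensor → ~4.2K resolution
print(round(approx_resolution_k(4.0), 1))  # a "4K" Bayer sensor → ~2.8K resolution
```

Which is exactly why a 6K sensor can deliver roughly 4K resolution, but leaves no headroom for reframing if 4K resolution is the delivery target.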

What Benefits Do I Gain By Using CineEI?

This is a question that comes up a lot. Especially from those migrating to a camera with a CineEI mode from a camera without one. It perhaps isn’t obvious why you would want to use a shooting mode that has no way of adding gain to the recordings.

If you are using the CineEI mode shooting S-Log3 at the base ISO, with no offsets or anything else, then there is very little difference between what you record in Custom mode at the base ISO and in CineEI at the base EI.

But we have to think about what the CineEI mode is all about. It’s all about image quality. You would normally choose to shoot S-Log3 when you want to get the highest possible quality image, and CineEI is all about quality.

The CineEI mode allows you to view your footage via a LUT so that you can get an appreciation of how the footage will look after grading. Also, when monitoring and exposing via the LUT, because the dynamic range of the LUT is narrower your exposure will be more accurate and consistent, because bad exposure looks more obviously bad. This makes grading easier. One of the keys to easy grading is consistent footage; footage where the exposure is shifting or the colours are changing (don’t use ATW with log!!) can be very hard to grade.

Then once you are comfortable exposing via a LUT you can start to think about using EI offsets to make the LUT brighter or darker. When the LUT is darker you open the aperture or reduce the ND to return the LUT to a normal looking image, and vice versa with a brighter LUT. This then changes the brightness of the S-Log3 recordings, and you use this offsetting process to shift the highlight/shadow range as well as noise levels to suit the types of scenes you are shooting. Using a low EI (which makes the LUT darker) plus correct LUT exposure (the darker LUT will make you open the aperture to compensate) will result in a brighter recording, which will improve the shadow details and textures that are recorded and can thus be seen in the shadow areas. At the same time, however, that brighter exposure will reduce the highlight range by a similar amount to the increase in the shadow range. And no matter what the offset, you always record at the camera’s full dynamic range.
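The offset arithmetic is just the base-2 log of the ISO ratio. A quick sketch, using an assumed 800 base ISO for illustration:

```python
import math

# EI offset in stops: shooting at an EI below base darkens the LUT, so
# you open up to compensate, which records the S-Log3 brighter by the
# same number of stops (and vice versa for an EI above base).

def offset_stops(base_iso, ei):
    """Positive result = recording exposed brighter than base."""
    return math.log2(base_iso / ei)

print(offset_stops(800, 400))   # → 1.0 (one stop brighter recording)
print(offset_stops(800, 1600))  # → -1.0 (one stop darker recording)
```

A +1 stop offset buys roughly a stop of extra shadow detail at the cost of roughly a stop of highlight range, as described above.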

I think what people misunderstand about CineEI is that it’s there to allow you to get the best possible, highly controlled images from the camera. Getting the best out of any camera requires appropriate and sufficient light levels. CineEI is not designed or intended to be a replacement for adding gain or shooting at high recording ISOs where the images will be already compromised by noise and lowered dynamic range.
 
CineEI exists so that when you have enough light to really make the camera perform well you can make those decisions over noise v highlights v shadows to get the absolute best “negative” with consistent and accurate exposure to take into post production. It is also the only possible way you can shoot when using raw as raw recordings are straight from the sensor and never have extra gain added in camera.
 
Getting that noise/shadow/highlight balance exactly right, along with good exposure is far more important than the use of external recorders or fatter codecs. You will only ever really benefit fully from higher quality codecs if what you are recording is as good as it can be to start with. The limits as to what you can do in post production are tied to image noise no matter what codec or recording format you use. So get that bit right and everything else gets much easier and the end result much better. And that’s what CineEI gives you great control over.
 
When using CineEI, or S-Log3 in general, you need to stop thinking “video camera – slap in a load of gain if it’s dark” and start thinking “film camera – if it’s too dark I need more light”. The whole point of using log is to get the best possible image quality, not shooting with insufficient light and a load of gain and noise. It requires a different approach and a completely different way of thinking, much more in line with the way someone shooting on film would work.

What surprises me is the eagerness to adopt shutter angles and ISO ratings for electronic video cameras because they sound cool but less desire to adopt a film style approach to exposure based on getting the very best from the sensor.  In reality a video sensor is the equivalent of a single sensitivity film stock. When a camera has dual ISO then it is like having a camera that takes two different film stocks.  Adding gain or raising the ISO away from the base sensitivity in custom mode is a big compromise that can never be undone. It adds noise and decreases the dynamic range. Sometimes it is necessary, but don’t confuse that necessity with getting the very best that you can from the camera.

For more information on CineEI see:

Using CineEI with the FX6  
 
 

New addition to Sony’s Cinema Line On The Way

In some regards this is now already old news. I’m under NDA, so there are limits as to what I can, or should, write. Hopefully I won’t get into too much trouble if I point out that there are already a lot of rumours circulating right now that a new camera called the FX3 might be about to be launched. The official line is that there is going to be an announcement on the 23rd of February, just a week from now. So why not just chill out for a week, as then you will be able to get the full facts about whatever it is that’s going to be added to the Cinema Line.

Aliens Might Think The Earth Is Perpetually Dark!

If you were an alien on another planet and had access to nothing but all the latest camera demo or “film-maker” reels on YouTube or Vimeo you would quite possibly believe that much of planet Earth is in perpetual darkness.

All you see is clips shot at night or in blacked out studios. Often with very little dynamic range, often incredibly (excessively?) dark. I keep having to check that the brightness on my monitor is set correctly. Even daytime interiors always seem to be in rooms that are 90% dark with just a tiny, low contrast pool of daylight from a window filling one corner and even the light coming through the window is dim and dull. 

I recently viewed a clip that was supposed to show the benefits of raw that contained nothing but low dynamic range shots where 70% of each frame was nothing but black. Sadly there were no deep shadow details, just blackness, some very dark faces and where there were highlights they were clipped. It was impossible to tell anything about the format being used.

The default showreel or demo shot is now a very dark space with a 3/4 profile person, very dimly lit, low key, by a large soft source. Throw in a shiny car with some specular highlights or a few dim practical lights into the background for extra brownie points. 

Let me let you in on a little secret – it’s really, really easy to make black frames. Want a little pool of light? Add a light to your mostly black frame. It’s really easy to shoot underexposed and then, as a result, not have any issues with dynamic range. It’s really easy to shoot a face so that it’s so dark you can barely see the person’s eyes and then not have a problem with shiny skin.

But try shooting someone at a desk in a bright office and make that look really great. Try shooting a proper daytime living room scene and making that look flawless. A summer picnic on a beach with brilliant blue sky perhaps. These are all challenging to do very well for both the DoP and the camera.

We have these wonderful cameras with huge dynamic ranges that work really well in daylight too. But we seem to be losing the ability to shoot anything other than shots that have very low average brightness levels and low dynamic range. They depend on coloured or tinted lighting to provide some interest in what would otherwise be very, very boring images. Where are all the challenging shots in difficult lighting? Where are the bright, vibrant shots where we can see the dynamic range, resolution and natural colour palettes that separate a good camera from a great camera?

Dark is getting very boring. 

Sony Launches Airpeak Drone – Designed to carry Alpha sized cameras.

Sony Airpeak Drone

 

Sony has launched an entirely new division called Airpeak. Airpeak have produced a large drone that can carry an Alpha sized camera. They claim that this is the smallest drone capable of carrying an Alpha sized camera. It’s unknown at this time whether the Airpeak division will purely focus on larger drones capable of carrying non integrated cameras or whether they will also produce smaller drones with integral cameras. It would certainly make sense to leverage Sony’s sensor expertise by creating dedicated cameras for drones and then drones to carry those cameras. 

The drone market is going to be a tough one to make inroads into. There are already a couple of very well regarded drone manufacturers making some great drones, such as the DJI Inspire or Mavic Pro. But most of these are small and cannot carry larger external cameras. However, the cameras that these drones are equipped with can deliver very high quality images – and they continue to get better and better. The use of larger drones for video applications is more specialist; however, globally it is a large market. Whether Sony can compete in the more specialist area of larger drones that carry heavier payloads is yet to be seen. I hope they succeed.

One thing I intend to do in the next few years, as the Sun enters the more active phase of its 11 year solar cycle, is to shoot the Aurora from a drone, and a camera like the A7S III on a larger, stable drone would be perfect. But there is no indication of pricing yet, and a drone of this size won’t be cheap. So unless I decide to do a lot more drone work than I do already, perhaps it will be better to hire someone with the right kit. But that’s not as much fun as doing it yourself!

For more information on Airpeak do take a look at their website. There is already some impressive footage of it being used to shoot a Vision-S car on a test track.

Sony Airpeak Website.