Category Archives: FX3

Copying a LUT from a Sony FX camera into DaVinci Resolve.

A question that comes up quite a bit is: how do I get the LUT I have been using in the camera into DaVinci Resolve?

There are two parts to this. The first is how to get the LUT you are using in the camera out of the camera. Perhaps you want to export the s709 LUT, or perhaps some other LUT.

To export a LUT from the camera you can use the embedded LUT option that is available in the Cine EI mode.
If you turn on “Embedded LUT” on the camera and record a clip, the camera will save the LUT on the SD card under:

FX3/FX30 – private – M4ROOT – GENERAL – LUT folder.

FX6/FX9 – private – XDROOT – GENERAL – LUT folder.

Then, to get a LUT into DaVinci Resolve, the easy way is to go to the Colour Management page of the Resolve preferences and scroll down to the “Open LUT Folder” button, which will open the LUT folder. Copy your LUT into this folder, then click on the “Update Lists” button. Your LUT will now be available to use in Resolve.
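If you do this often, the copy step can be scripted. Below is a minimal Python sketch assuming the card folder layout described above; the function name and the idea of passing in the Resolve LUT folder (the one the “Open LUT Folder” button reveals) are mine, not part of any official workflow.

```python
import shutil
from pathlib import Path

# Card-relative LUT folders as listed above. Treat this as a sketch and
# check your own card; the exact layout can vary with firmware version.
CARD_LUT_DIRS = {
    "FX3/FX30": Path("private/M4ROOT/GENERAL/LUT"),
    "FX6/FX9": Path("private/XDROOT/GENERAL/LUT"),
}

def copy_luts(card_root, resolve_lut_dir, camera="FX3/FX30"):
    """Copy every .cube file from the card's LUT folder into the folder
    that Resolve's "Open LUT Folder" button opens; return the copied names."""
    src = Path(card_root) / CARD_LUT_DIRS[camera]
    dst = Path(resolve_lut_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for cube in sorted(src.glob("*.cube")):
        shutil.copy2(cube, dst / cube.name)
        copied.append(cube.name)
    return copied  # you still need to press "Update Lists" in Resolve
```

After running it, click “Update Lists” in Resolve as described above so the new LUTs appear.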

Filming the Northern Lights.

It’s that time of year when the nights draw in and get longer in the Northern Hemisphere, and many will be thinking of trips to Scandinavia, Iceland, Alaska or Canada to see the spectacle of the Northern Lights. While this year many have been treated to Aurora displays further south, there remains something very magical about the way an Arctic Aurora dances and the way the light is reflected by the snow. For those who fancy trying to film the Northern Lights, I put together this video with some tips and ideas. If you like the video please don’t forget to subscribe to my YouTube channel.


Portkeys LH7P – A monitor that can control your camera.

 

For this year’s Glastonbury Festival I chose to use a combination of a Sony A1, FX3 and FX30 (we also used a DJI Pocket 3 and a Wirral wire cam). These are all small cameras and the screens on the back of them are really rather small. So, I wanted to use an external monitor to make it easier to be sure I was in focus.

Using the Portkeys LH7P with a Sony A1 at Glastonbury Festival



I have been aware of the Portkeys monitors for some time, and in particular their ability to remotely control Sony cameras via WiFi. This seemed like the perfect opportunity to try out the LH7P, as it would give me the ability to control the camera’s touch tracking autofocus using the monitor’s touchscreen. So, I obtained a demo unit from Portkeys to try. Click here for the Portkeys LH7P specs.

The Portkeys LH7P with a Sony FX3



I have to say that I am pretty impressed by how well this relatively cheap monitor performs. It has a 1,000-nit screen, so it’s quite bright, and overall the colour and contrast accuracy is good. It won’t win any awards for having the very best image, but it is decent and certainly good enough for most on-camera applications.

The LH7P is HDMI only, but this helps keep the weight and power consumption down. While mostly made of plastic, it feels robust enough for professional use, though I wouldn’t be rough with it.

The monitor is very thin and very light. It runs off the very common Sony NP-F style batteries or via a DC-in socket that accepts 7 to 24 volts, a surprisingly large range that allows you to use it with almost any battery found in the world of film and TV. It uses very little power, around 9 watts, so the larger NP-F type batteries will run it for at least 3 or 4 hours.
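As a sanity check on those runtimes, here is the rough arithmetic as a Python sketch. The 9 W draw is the figure quoted above; the battery capacities are typical nominal values and the 85% efficiency factor is my own allowance, so treat the numbers as estimates.

```python
def runtime_hours(battery_wh, draw_watts=9.0, efficiency=0.85):
    """Rough runtime estimate: usable battery energy divided by power draw.

    The 9 W draw is the monitor figure quoted above; the 85% efficiency
    factor is my own allowance for conversion losses and cold batteries.
    """
    return battery_wh * efficiency / draw_watts

# Approximate nominal capacities for the larger NP-F style batteries:
for name, wh in [("NP-F770", 32), ("NP-F970", 47)]:
    print(f"{name}: roughly {runtime_hours(wh):.1f} hours")
```

With these assumptions an NP-F770 comes out at around 3 hours and an NP-F970 at around 4.5, which lines up with the 3 to 4 hours mentioned above.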

It’s a touchscreen monitor and the touch-operated menu system is quite straightforward. One small issue is that if you are using the monitor’s touchscreen to control the camera’s touch autofocus, you can’t also use the touchscreen to access the menu system or change the camera’s other settings; it’s one or the other. When connected to a camera, to use the monitor’s menus or access the camera settings you must have the touch tracking focus control turned off. If you are using the touch tracking controls, there are 4 assignable buttons on the top of the monitor to which you can assign things like peaking, zebras and false colour. So most of the time having to choose between touch focus and touch menus isn’t a big drama, as these buttons can be used to turn your most commonly used exposure and focus tools on and off. But you do have to remember to turn off the touch tracking if you want to change another setting from the monitor.

When you are using the monitor to control the touch tracking it is very responsive, and because there is minimal latency thanks to the direct HDMI connection to the camera it works well: just touch where you want the camera to focus. The only downside is that you don’t get a tracking box on the monitor’s screen. This is because Sony don’t output the tracking box overlay over HDMI.

As a result there may be times when you do need to look at the LCD on the back of the camera to see what the camera is tracking. When I used it at Glastonbury I didn’t really find this to be too much of a problem; if I was unsure of what the camera was focussing on, I simply touched the LH7P’s screen where I wanted to focus.

Pairing the monitor with the camera is simple, but you do need to make sure the camera’s WiFi is set to 2.4GHz, as this is the only band the monitor supports. To see how to pair it with an FX3 please watch the video linked above. Once connected I found the connection to be very stable and I didn’t experience any unexpected disconnects, even when the venue at Glastonbury was completely full.

The LH7P screen with camera control activated



I have to say that this low cost monitor has really surprised me. The image quality is more than acceptable for a 7″ monitor, and controlling the camera via the monitor’s touchscreen is a very nice way to work, especially given the small size of the LCD screen on a camera like the FX3 or A1. I haven’t had it all that long, so I don’t know what the long term reliability is like, but for what it costs it represents excellent value.

Film making workshop in Dubai, 25th May 2024

I’m running a film making workshop on “how to get the film look” in Dubai for Nanlite and Sony on the 25th of May. During the workshop I will show how to expose S-Log3 on the Sony FX series cameras and how to use Cine EI, and then look at film style lighting using Nanlite fixtures. We will look at a couple of different types of scene: an office, a romantic scene, and also how to light for greenscreen.

I will also be at Cabsat 2024, so do drop by the Nanlite booth to say hello.

Please click here for more information or to book a place.

Is This The Age Of The Small Camera Part 2

This is part 2 of my 2 part look at whether small cameras such as a Sony FX3 or A1 really can replace full size cinema cameras.

For this part of the article to make sense you will want to watch the YouTube clips that are linked here full screen and at the highest possible quality settings, preferably 4K. Please don’t cheat: watch them in the order they are presented, as I hope this will allow you to better understand the points I am trying to make.

Also, in the videos I have not put the different cameras that were tested side by side. You may ask why. Well, it’s because when you watch a video online or a movie in a cinema you don’t see different cameras side by side on the same screen at the same time. A big point of all of this is that we are now at a place where the quality of even the smallest and cheapest large sensor camera is likely good enough to make a movie. It’s not necessarily a case of whether camera A is better than camera B; the question is whether the audience will know or care which camera you used. There are 5 cameras, and I have labelled them A through E.

The footage presented here was captured during a workshop I did for Sony at Garage Studios in Dubai (if you need a studio space in Dubai they have some great low budget options). We weren’t doing carefully orchestrated camera tests, but I did get the chance to quickly capture some side by side content.

So let’s get into it.

THE FINAL GRADE:

In many regards I think this is the most important clip as this is how the audience would see the 5 cameras. It represents how they might look at the end of a production. I graded the cameras using ACES in DaVinci Resolve. 

Why ACES? Well, the whole point of ACES is to neutralise any specific camera “look”.  The ACES input transform takes the camera’s footage and converts it to a neutral look that is meant to represent the scene as it actually was, but with a film like highlight roll off added. From here the idea is that you can apply the same grade to almost any camera and the end result should look more or less the same. The look of different cameras is largely a result of differences in the electronic processing of the image in post production rather than large differences in the sensors. Most modern sensors capture a broadly similar range of colours with broadly similar dynamic range. So, provided you know what recording levels represent what colours in the scene, it is pretty easy to make any camera look like any other, which is what ACES does.

The footage shown here was captured during a workshop; we weren’t specifically testing the different cameras in great depth. For the workshop the aim was simply to show how any of these cameras could work together. For simplicity and speed I manually set each camera to 5600K, and as a result of the inevitable variations you get between different cameras, how each is calibrated and how each applies the white balance settings, there were differences in the colour balance of each camera.

To neutralise these white balance differences, the grading process started by using the colour chart to equalise the images from each camera using the “match” function in DaVinci Resolve. Then each camera had exactly the same grade applied; there are no grading differences, they are all graded in the same way.

Below are frame grabs from each camera with a slightly different grade to the video clips, again, they all look more or less the same.

The graded image from camera A. Click on the image to view the full resolution image.

 

The graded image from camera B. Click on the image to view the full resolution image.

 

The graded image from camera C. Click on the image to view the full resolution image.

 

The graded image from camera D. Click on the image to view the full resolution image.

 

The graded image from camera E. Click on the image to view the full resolution image.



The first thing to take away from all of this, then, is that you can make pretty much any camera look like any other, and a chart such as the “Color Checker Video”, together with software that can read the chart and correct the colours according to it, makes this much easier to do.

To allow for issues with the quality of YouTube’s encoding etc., here is a 400% crop of the same clips:

 

What I am expecting is that most people won’t actually see a great deal of difference between any of the cameras. The cheapest camera is $6K and the most expensive $75K, yet it’s hard to tell which is which or to see much difference between them. Things that do perhaps stand out initially in the zoomed-in image are the softness/resolution differences between the 4K and 8K cameras, but in the first, uncropped clip this difference is much harder to spot, and I don’t think an audience would notice, especially if one camera is used on its own so the viewer has nothing to directly compare it with. It is possible that there are also small focus differences between each camera; I did try to ensure each was equally well focussed, but small errors may have crept in.

WHAT HAPPENS IF WE LIFT THE SHADOWS?

OK, so let’s pixel peep a bit more and artificially raise the shadows so that we can see what’s going on in the darker parts of the image.

 

There are differences, but again there isn’t a big difference between any of the cameras. You certainly couldn’t call them huge, and in all likelihood, even if for some reason you needed to raise the shadows by an unusually large amount as done here (about 2.5 stops), the difference between “best” and “worst” isn’t large enough for any one of these cameras to be deemed unusable compared to the others.

SO WHY DO YOU WANT A BETTER CAMERA?

So, if we are struggling to tell the difference between a $6K camera and a $75K one why do you want a “better” camera? What are the differences and why might they matter?

When I graded the footage from these cameras in the workshop it was actually quite difficult to find a way to “break” the footage from any of them. For the majority of grading processes that I tried they all held up really well, and I’d be happy to work with any of them; even the cameras using the highly compressed internal recordings held up well. But there are differences, they are not all the same, and some are easier to work with than others.

The two cheapest cameras were a Sony FX3 and a Sony A1. I recorded using their built in codecs, XAVC-SI in the FX3 and XAVC-HS in the A1. These are highly compressed 10 bit codecs. The other cameras were all recorded using their internal raw codecs, which are either 16 bit linear or 12 bit log. At some time I really do need to do a proper comparison of the internal XAVC from the FX3 and the ProRes RAW that can be recorded externally. But it is hard to do a fully meaningful test, as getting the ProRes RAW into Resolve requires transcoding and a lot of other awkward steps. From my own experience the difference in what you can do with XAVC v ProRes RAW is very small.

One thing that happens with most highly compressed codecs such as H264 (XAVC-SI) or H265 (XAVC-HS) is a loss of some very fine textural information, and the image breaking up into blocks of data. But as I am showing these clips via YouTube in a compressed state, I needed to find a way to illustrate the subtle differences that I see when looking at the original material. So, to show the difference between the sensors and codecs within these cameras, I decided to pick a colour using the Resolve colour picker and then turn that colour into a completely different one, in this case pink.

What this allows you to see is how precisely the picked colour is recorded, and it also shows up some of the macro block artefacts. Additionally, it gives an indication of how fine the noise is and of the textural qualities of the recording. In this case the finer the pink “noise” the better, as this is an indication of smaller, finer textural differences in the image. These smaller textural details would be helpful if chroma keying, or perhaps for some types of VFX work. It might (and I say might because I’m not convinced it always will) allow you to push a very extreme grade a little bit further.
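The colour-picker trick described above can be approximated in a few lines of Python. This is a rough stand-in to show the idea, not how Resolve does it internally; the function name, the tolerance value and the simple RGB distance test are all my own choices.

```python
import math

PINK = (255, 105, 180)

def highlight_colour(img, picked, tol=30.0):
    """Return a copy of `img` (a list of rows of (r, g, b) tuples) with every
    pixel within `tol` Euclidean RGB distance of `picked` painted bright pink.

    On real footage, fine scattered pink speckle suggests fine noise and
    texture, while large solid pink blocks hint at macro block artefacts.
    """
    out = []
    for row in img:
        out.append([PINK if math.dist(px, picked) < tol else px for px in row])
    return out
```

Running something like this over frames from a lightly and a heavily compressed recording makes the difference in how the picked colour clumps together easy to see.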

I would guess that by now you are starting to figure out which camera is which. The cameras are an FX3, an A1, a Burano, a Venice 2 and an Arri LF.

In this test you should be able to distinguish the highly compressed cameras from the raw cameras. The pink areas from the raw cameras are finer and less blocky; this is a good representation of the benefit of less compression and a deeper bit depth.

Camera A. Click on the image to view the full resolution image.

 

Compression and codec: Camera B. Click on the image to view the full resolution image.

 

Compression and codec: Camera C. Click on the image to view the full resolution image.

 

Compression and codec: Camera D. Click on the image to view the full resolution image.

 

Compression and codec: Camera E. Click on the image to view the full resolution image.



But even here the difference isn’t vast. It certainly, absolutely, exists. But at the same time you could push ANY of these cameras around in post production, and if you’ve shot well none of them are going to fall apart.

As a side note I will say that I find grading linear raw footage such as the 16 bit X-OCN from a Venice or Burano more intuitive compared to working with compressed Log. As a result I find it a bit easier to get to where I want to be with the X-OCN than the XAVC. But this doesn’t mean I can’t get to the same place with either.

RESOLUTION MATTERS.

Not only is compression important but so too is resolution. To some degree increasing the resolution can make up for a lesser bit depth. As these cameras all use Bayer sensors, the chroma resolution will be somewhat less than the luma resolution. A 4K sensor such as the one in the FX3 or the Arri LF will have much lower chroma resolution than the 8K A1, Burano or Venice 2. If we look at the raised shadows clip again we can see some interesting things going on in the girl’s hair.

 

If you look closely, camera D has a bit of blocky chroma noise in the shadows. I suspect this might be because it is one of the 4K sensor cameras, and the lower chroma resolution means the chroma noise is a bit larger.

I expect that by now you have an idea of which camera is which, but here is the big reveal: A is the FX3, B is the Venice 2, C is Burano, D is an Arri LF, and E is the Sony A1.

What can we conclude from all of this: 

There are differences between codecs. A better codec with a greater bit depth will give you more textural information. It is not necessarily that raw will always be better than YUV/YCbCr, but because of raw’s compression efficiency it is possible to have very low levels of compression and a deep bit depth. So, if you are able to record with a better codec or greater bit depth, why not do so? There are some textural benefits and there will be fewer compression artefacts. BUT this doesn’t mean you can’t get a great result from XAVC or another compressed codec.

If using a Bayer sensor, then using a sensor with more “K” than the delivery resolution can bring textural benefits.

There are differences in the sensors, but these differences are not really as great as many might expect. In terms of dynamic range they are all actually very close, close enough that in the real world it isn’t going to make a substantial difference. As far as your audience is concerned, I doubt they would know or care. Of course we have all seen the tests where you greatly underexpose a camera and then bring the footage back to normal, and these can show differences. But that’s not how we shoot things. If you are serious about getting the best image that you can, then you will light to get the contrast and exposure that you want. What isn’t in this test is rolling shutter, but generally I rarely see issues with rolling shutter these days. If you are worried about it, though, the Venice 2 is excellent and the best of the group tested here.

Assuming you have shot well, there is no reason why an audience should find the image quality from the $6K FX3 unacceptable, even on a big screen. And if you were to mix an FX3 with a Venice 2 or Burano, again, if you have used each camera equally well, I doubt the audience would spot the difference.

BACK TO THE BEGINNING:

So this brings me back to where I started in part 1. I believe this is the age of the small camera, or at least there is no reason why you can’t use a camera like an FX3 or an A1 to shoot a movie. While many of my readers, I am sure, will focus on the technical details of the image quality of camera A against camera B, in reality these days it’s much more about the ergonomics and feature set, as well as lens and lighting choices.

A small camera allows you to be quick and nimble, but a bigger camera may give you a lot more monitoring options as well as other things such as genlock. And, if you can, having a better codec doesn’t hurt. So there is no one-size-fits-all camera that will be the right tool for every job.

Is this the age of the small camera? Part 1.

As Sony’s new Burano camera starts to ship (a relatively small camera that could comfortably be used to shoot a blockbuster movie), we have to look at how, over the last few years, the size of the cameras used for film production has reduced.

Which was shot with an 8K Venice 2 and which was shot with a 4K FX3?

 

Only last year we saw the use of the Sony FX3 as the principal camera for the movie The Creator. What is particularly interesting about The Creator is that the FX3 was chosen by the director Gareth Edwards for a mix of both creative and financial reasons.

To save money or to add flexibility?

To save money, rather than building a lot of expensive sets, Edwards chose to shoot on location using a wide and varied range of locations (80 different locations) all over Asia. To make this possible he used a smaller than usual crew. Part of the reasoning given was that it was cheaper to fly a small crew to all these different locations than to try to build a different set for each part of the film. The film cost $80 million to make and took $104 million at the box office, a pretty decent profit at a time when many movies take years to break even.

FX3 on gimbal during the filming of The Creator



The FX3 was typically mounted on a gimbal, which allowed them to shoot quickly and in a very fluid manner, making use of natural light where possible. A 2x anamorphic lens was used and the final delivery aspect ratio was a very wide 2.76:1. The film was edited first, and when the edit was locked down the VFX elements were added. Modern tracking and rotoscoping techniques make it much easier to add VFX into sequences without needing to use green or blue screen techniques, and this is one of those areas where AI will become a very useful and powerful tool.

You don’t NEED a big camera, but you might want one.

So, what is clear is that you don’t NEED a big camera to make a feature film, and The Creator demonstrates that an FX3 (recording to an Atomos Ninja) offers sufficient image quality to stand up to big screen presentation. I don’t think this is really anything new, but we have now reached the stage where the difference in image quality between a cheap $1500 camera like the FX30 and a high end “cinema” camera like the $70K Venice 2 is genuinely so small that an audience probably won’t notice.

There may be reasons why you might prefer a bigger camera body: it makes mounting accessories easier and will often have much better monitoring and viewfinder options. You may argue that a camera like Venice can offer greater image quality (as you will see in part 2, it technically does have a higher quality image than the FX3), but would the audience actually be able to see the difference, and even if they can, would they actually care? And what about post production? Surely a better quality image is a big help there; again, come back for part 2 where I explore this in more depth.

Which is the Arri LF and which is the Sony A1?


And small cameras will continue to improve. If what we have now is already good enough things can only get better.

8K Benefits??

Since the launch of Burano I’ve become more and more convinced of the benefits of an 8K sensor. Even if you only ever intend to deliver in 4K, the extra chroma resolution from actually having 4K of R and B pixels makes a very real difference. Venice 2 really made me much more aware of this and Burano confirms it. Because of this I’ve been shooting a lot more with the Sony A1 (which possibly shares the same sensor as Burano). There is something I really like about the textural quality of the images from the A1, Burano and Venice 2 (having said that, after spending hours looking at my side by side test samples from both 4K and 8K cameras, while the difference is real, I’m not sure it will always be seen in the final deliverable). In addition, when using a very compressed codec such as the XAVC-HS in the A1, recording at 8K leads to smaller artefacts which then tend to be less visible in a 4K deliverable. This allows you to grade the material harder than perhaps you can with similarly compressed 4K footage. The net result is that the 10 bit 8K looks fantastic in a 4K production.

Sony A1 cropped and zoomed in 6x.


I have to wonder if The Creator wouldn’t have been better off being shot with an A1 rather than an FX3. You can’t get 8K raw out of an A1, but the extra resolution makes up for this and it may have been a better fit for the 2x anamorphic lens that they used.

So many choices….

And that’s the thing – we have lots of choices now. There are many really great small cameras, all capable of producing truly excellent images. A small camera allows you to be nimble. The grip and support equipment becomes smaller. This allows you to be more creative. A lot of small cameras are being used for the Formula 1 movie, small cameras are often mixed with larger cameras and these days the audience isn’t going to notice. 

Plus we are seeing a change in attitudes. A few years ago most cinematographers wouldn’t have entertained the idea of using a DSLR or pocket sized camera as the primary camera for a feature. Now it is different; a far greater number of DPs are looking at what a small camera might allow them to do, not just as a B camera but as the A camera. When the image quality stops being an issue, then small might allow you to do more.

This doesn’t mean big cameras like Venice will go away; there will always be a place for them. But I expect we will see more and more really great theatrical releases shot with cameras like the FX3 or A1, and that makes it a really interesting time to be a cinematographer. Again, look at The Creator: this was a relatively small budget for a science fiction film packed with CGI and other effects, and it looked great. Of course there is also that middle ground, a smaller camera but with the image quality of a big one. Burano perhaps?

In Part 2……

In part 2 I’m going to take some sample clips that I grabbed at a recent workshop from a Venice 2, Burano, A1 and FX3 and show you just how close the footage from these cameras is. I’ll also throw in some footage from an Arri LF and then I’ll “break” the footage in post production to give you an idea of where the differences are and whether they are actually significant enough to worry about.

 

Timecode and external record run for the FX3 and FX30 from Mutiny

This is a really useful, teeny tiny input/output box from the guys at Mutiny. It allows users to input timecode into the FX3 or FX30, as well as connect a remote rec run control to start or stop recording. This will be very useful for those using the camera on a crane or jib, as well as many other applications where the camera needs to be controlled remotely.

The Mutiny TC-R/S for the Sony FX3 and FX30 cameras feeds Timecode IN and R/S (remote triggering) via the multi-terminal (Multiport). It works with every FIZ wireless follow focus system (Preston, Arri, C-Motion, Nucleus, Heden, etc.) as well as every timecode generator (Tentacle, Deity, Denecke, Ambient, etc.). Orders start shipping Monday in the order taken. https://mutiny.store/products/tcrs

Northern Lights Live Streams from Norway 2024

Next week I head out to Norway for my annual trip in search of the Northern Lights. Like last year, I will try to stream the Aurora live from Norway. Of course this depends on the weather and on whether the Aurora comes out to play.

The plan is to stream each evening from around 6pm CET (Central European Time), starting from February 2nd. I will stream for as long as I can when the Aurora is visible. I have scheduled 5 YouTube live streams, but more will likely be added depending on the weather and many other variables that are out of my control. These streams may start later than planned or get interrupted if I need to move the camera position or if I run out of power. As well as the scheduled streams I intend to include additional streams where I will go over the equipment used and things like that.

To stream the Aurora I will be using various pieces of kit, including my Sony FX3 camera connected to an Accsoon SeeMo or an Accsoon CineView. The SeeMo connects to an iPhone directly via a cable and I can then stream the output of the FX3 from the phone. However, the area where I will be doesn’t have the best cell phone signal, so I might need to use the CineView. With the CineView connected to the camera I can send the pictures to my phone and then stream from the phone. This way I can put the phone in a location where there is a better signal.

The livestream page of my YouTube Channel is here: https://www.youtube.com/@alisterchapman/streams

I will also try to send out notifications from my Facebook feed of any streams shortly before I go live: https://www.facebook.com/alister.chapman.9

And in case you haven’t seen it before, here is a little bit of behind the scenes info from last year’s Aurora trip.

New Firmware Coming For The FX3, FX30 and FX6 – Shutter Angle for FX3/30.

Before you get too excited – these firmware updates are not coming just yet. But they are coming.

FX6

The FX6 will get an update to Version 5, to quote Sony, “in May 2024 or later”, which will include:

– The addition of a 1.5x setting to the De-squeeze function

– Monitor & Control app compatibility (e.g. waveform and false colour, as already supported on the FX3/30)

– A new 709tone preset to support colour matching across multiple cameras (I assume this is to match the older Sony Rec-709 look)

– The expansion of supported lenses, such as the SEL100400GM & SEL200600G, for breathing compensation.

FX3 and FX30.

Then later in the year, in September 2024 or later, the FX3 and FX30 will get:

– A Shutter Angle option

– 709tone support

– SRT/RTMP/RTMPS support to meet live streaming demand

The addition of shutter angle in the FX3 and FX30 is going to please a lot of owners of these 2 cameras.

 

 

How I shoot the Northern Lights

Every year, as many of my regular readers will know, I run tours to the very north of Norway, taking small groups of adventurers well above the Arctic Circle in the hope of seeing the Aurora Borealis or Northern Lights. I have been doing this for around 20 years, and as cameras have improved it has become easier and easier to video the Aurora in real time, so that what you see in the video matches what you would have seen if you had been there yourself.

In the past, Aurora footage was almost always shot using long exposures and time lapse, sometimes with photo cameras or with older video cameras like the Sony EX1 or EX3, which resulted in greatly sped up motion and the loss of many of the finer structures seen in the Aurora. I do still shoot time lapse of the Aurora using still photos, but in this video I give you a bit of a behind-the-scenes look at one of my trips, with details of how I shoot the Aurora with the Sony FX3 in real time and with the FX30 using S&Q motion. The video was uploaded in HDR, so if you have an HDR display you should see it in HDR; if not it will be streamed to you in normal standard dynamic range. The cameras used are Sony’s FX3 and FX30. The main lenses are the Sony 24mm f1.4 GM and 20mm f1.8 G, but when out and about on the snow scooters I use the Sony 18-105 G power zoom on the FX30 for convenience.

I used the Flexible ISO mode in the cameras to shoot S-Log3 with the standard s709 LUT for monitoring. I don’t like going to crazy high ISO values as the images get too noisy, so I tend to stick to 12,800 or 25,600 ISO on the FX3, or a maximum of 5000 ISO on the FX30 (generally on the FX30 I stay at 2500). If the images are still not bright enough I will use a 1/12th shutter speed at 24fps. This does mean that pairs of frames will be the same, but at least the motion remains real time and true to life.

If that still isn’t enough, rather than raising the ISO still further I will go to the camera’s S&Q (slow and quick) mode and drop the frame rate down to perhaps 8fps with a 1/8th shutter, 4fps with a 1/4 shutter, or perhaps all the way down to 1fps and a 1 second shutter. But once you start shooting at these low frame rates the playback will be sped up, and you do start to lose many of the finer, faster moving and more fleeting structures within the aurora because of the extra motion blur.
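The trade-off above is easy to put into numbers. This little Python sketch works out, for each S&Q frame rate, how much the playback is sped up and how much extra light each frame gathers; the choice of a 1/48 s shutter at 24fps as the comparison baseline is mine.

```python
import math

def sq_tradeoff(capture_fps, playback_fps=24.0, base_shutter=1/48):
    """For a given S&Q capture rate with a full 360-degree shutter (1/8 s at
    8 fps, 1/4 s at 4 fps, 1 s at 1 fps, as described above), work out the
    playback speed-up and the extra light per frame in stops, relative to a
    1/48 s shutter at 24 fps (my choice of baseline).
    """
    shutter_s = 1.0 / capture_fps          # 360-degree shutter: the whole frame period
    speedup = playback_fps / capture_fps   # how much faster motion appears on playback
    extra_stops = math.log2(shutter_s / base_shutter)
    return speedup, shutter_s, extra_stops

for fps in (8, 4, 1):
    speedup, shutter, stops = sq_tradeoff(fps)
    print(f"{fps} fps: {speedup:g}x speed-up, {shutter:g} s shutter, +{stops:.2f} stops")
```

So 1fps buys you over 5 stops of light per frame compared to a normal 24fps shutter, but at the cost of 24x sped up playback, which is exactly why the finer, faster structures get lost.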

So much of this will depend on the brightness of the Aurora. Obviously a bright Aurora is easier to shoot in real time than a dim one. This is where patience and perseverance pay off. On a dark arctic night, if you are sufficiently far north, the Aurora will almost always be there, even if very faint. And you can never be sure when it might brighten. It can go from dim and barely visible to bright and dancing all across the sky in seconds, and it can fade away again just as fast. So you need to stay outside in order to catch those often brief bright periods. On my trips it is not at all unusual for the group to start the evening outside watching the sky, but after a couple of hours of only a dim display most people head inside to the warmth, only to miss out when the Aurora brightens. Because of this we do try to have someone on aurora watch.

During 2024 we should be at the peak of the Sun’s 11 year solar cycle, so this winter and next should present some of the best Aurora viewing conditions for a long time to come. My February 2024 Norway trip is sold out, but I can run extra trips or bespoke tours if wanted, so do get in touch if you need my help. There is more information on my tours here: https://www.xdcam-user.com/northern-lights-expeditions-to-norway/

Don’t forget I also have information on filming in cold weather here: https://www.xdcam-user.com/2023/12/filming-in-very-cold-weather/

I will be back in Norway from the 1st of February; keep an eye out for any live streams. I will be taking an Accsoon SeeMo to try to live stream the Aurora.