All posts by alisterchapman

How does the Panasonic EVA1 stack up against the Sony FS7 and FS5?

This is a question a lot of people are asking. As I’ve mentioned in other recent posts, sensors have reached a point where it’s very difficult to bring out a camera where the image quality will be significantly different from any other on the market for any given price point. Most differences will be in things like codec choices or trading off a bit of extra resolution for sensitivity etc. Other differences will be in the ergonomics, lens mounts and battery systems.

So it’s interesting to see what Keith Mullin over at  Z-Systems thought of the EVA1. Keith knows his stuff and Z-Systems are not tied to any one particular brand.

Overall, as expected, there isn't a huge difference in image quality between any of the three cameras. The EVA1 seems weaker in low light, which is something I would have predicted given its higher pixel count. Its dual ISO mode doesn't appear to be anywhere near as good as the really very effective dual ISO mode in the Varicam LT.

Why not take a look at the full article and video for yourself. http://zsyst.com/2017/12/panasonic-eva1-first-look/

 


Using LUTs for exposure – choosing the right LUT.

If you are using a LUT to judge the exposure of a camera shooting log or raw, it's really important that you fully understand how that LUT works.

When a LUT is created it will expect a specific input range and convert that input range to a very specific output range. If you change the input range then the output range will be different and it may not be correct. As an example, a LUT designed and created for use with S-Log2 should not be used with S-Log3 material, as the higher middle grey level used by S-Log3 would mean that the mid range of the LUT's output would be much brighter than it should be.

Another consideration comes when you start offsetting your exposure levels, perhaps to achieve a brighter log exposure so that after grading the footage will have less noise.

Let's look at a version of Sony's 709(800) LUT designed to be used with S-Log3 for a moment. This LUT expects middle grey to come in at 41% and it will output middle grey at 43%. It expects a white card to be at 61% and it will output that same shade of white at a little over 85%. Anything on the S-Log3 side brighter than 61% (white) is considered a highlight, and the LUT will compress that highlight range (almost 4 stops) into the output range between 85% and 109%, resulting in flat looking highlights. This is all perfectly fine if you expose at the levels suggested by Sony. But what happens if you expose brighter and try to use the same LUT, either in camera or in post production?

Well, if you expose 1.5 stops brighter on the log side, middle grey becomes around 54% and white becomes around 74%. Skin tones, which sit half way between middle grey and white, will be around 64% on the LUT's input. That's going to cause a problem! The LUT considers anything brighter than 61% on its input to be a highlight, and it will compress anything brighter than that. As a result, on the output of your LUT your skin tones will not only be bright, they will also be compressed and flat looking. This makes them hard to grade. This is why, if you are shooting a bit brighter, it is much, much easier to grade your footage if your LUTs have offsets to allow for this over exposure.
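To put some rough numbers on this, here's a little sketch. It is not Sony's exact S-Log3 formula, just an approximation that treats S-Log3 as changing by roughly 8.6% of signal per stop above middle grey, which is consistent with the 41% grey and 61% white figures above.

```python
# Sketch: where do S-Log3 levels land after a brighter exposure,
# relative to the 61% "highlight knee" of Sony's 709(800) LUT?
# Assumes S-Log3 moves roughly 8.6% of signal per stop above
# middle grey - an approximation, not Sony's exact formula.

PCT_PER_STOP = 8.6          # approx. S-Log3 slope above middle grey
GREY, WHITE = 41.0, 61.0    # nominal S-Log3 levels (%)
LUT_KNEE = 61.0             # 709(800) treats anything above this as a highlight

def offset_level(level_pct, stops):
    """Approximate S-Log3 level after exposing `stops` brighter."""
    return level_pct + stops * PCT_PER_STOP

stops = 1.5
grey = offset_level(GREY, stops)     # ~53.9%
white = offset_level(WHITE, stops)   # ~73.9%
skin = (grey + white) / 2            # ~63.9% - above the LUT's knee!

print(f"grey {grey:.1f}%, white {white:.1f}%, skin {skin:.1f}%")
print("skin tones land in the compressed highlight zone:", skin > LUT_KNEE)
```

With a 1.5 stop offset the skin tones cross the 61% knee, which is exactly why they come out flat unless the LUT itself is offset to match.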

If the camera has an EI mode (like the FS7, F5, F55 etc) the EI mode offsets the LUT's input, so you don't see this problem in camera. But there are other problems you can encounter if you are not careful, like unintentional over exposure when using the Sony LC709 series of LUTs.

Sony's 709(800) LUT closely matches the gamma of most normal monitors and viewfinders, so 709(800) will deliver the correct contrast, i.e. contrast that matches the scene you are shooting, plus it will give conventional TV brightness levels when viewed on standard monitors or viewfinders.

If you use any of the LC709 LUTs you will have a mismatch between the LUT's gamma and the monitor's gamma, so the images will show lower contrast and the levels will be lower than conventional TV levels when exposed correctly. LC709 stands for low contrast gamma with 709 color primaries; it is not 709 gamma!

Sony’s LC709 Type A LUT is very popular as it mimics the way an Arri Alexa might look. That’s fine but you also need to be aware that the correct exposure levels for this non-standard LC gamma are middle grey at around 41% and white at 70%.

An easy trap to fall into is to set the camera to a low EI to gain a brighter log exposure and then to use one of the LC709 LUTs and try to eyeball the exposure. Because the LC709 LUTs are darker and flatter it's harder to eyeball the exposure, and often people will expose them as you would regular 709. This then results in a double over exposure: bright because of the intentional use of the lower EI, but even brighter because the LUT has been exposed at or close to conventional 709 brightness. If you were to mistakenly expose the LC709TypeA LUT with skin tones at 70%, white at 90% etc, that will add almost 2 stops to the log exposure on top of any EI offset.

Above middle grey with 709(800), a 1 stop exposure change results in a 20% change in brightness; with LC709TypeA the same exposure change only gives just over a 10% change. As a result, over or under exposure is much less obvious and harder to measure or judge by eye with LC709. The camera's default zebra settings, for example, have a 10% window. So with LC709 you could easily be a whole stop out, while with 709(800) only half a stop.
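Here's a quick back-of-envelope sketch of what those slopes mean for zebras. The 20% and 10% per stop figures are the approximate values given above.

```python
# Sketch: the smallest exposure error a 10%-wide zebra window can
# pin down, given each LUT's approximate brightness change per stop.

ZEBRA_WINDOW = 10.0   # % wide, the camera default mentioned in the text

# Approximate slope above middle grey, in % of signal per stop:
slopes = {"709(800)": 20.0, "LC709TypeA": 10.0}

for lut, slope in slopes.items():
    drift = ZEBRA_WINDOW / slope
    print(f"{lut}: exposure can drift ~{drift:.1f} stop(s) inside the zebra window")
```

The flatter the LUT, the more exposure error hides inside the same zebra window, which is the whole point: a steeper LUT makes exposure mistakes easier to see.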

Personally, when shooting, I don't really care too much about how the image looks in terms of brightness and contrast. I'm more interested in using the built in LUTs to ensure my exposure is where I want it to be. So for exposure assessment I prefer to use the LUT that is going to show the biggest change when my exposure is not where it should be. For the "look" I will feed a separate monitor and apply any stylised looks there. To understand how highlights and shadows above and below the LUT's range are being captured, I use the Hi/Low Key function.

If you are someone that creates your own LUTs, an important consideration when shooting test shots and then grading them to produce a LUT is that the test shots must be very accurately exposed.

You have 2 choices here. You can either expose at the levels recommended by Sony and then use EI to add any offsets, or you can offset the exposure in camera and not use EI, instead relying on the offset that will end up baked into the LUT. What is never a good idea is to add an EI offset to a LUT that was also offset.

More on frame rate choices for today's video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let's look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/Pal area these are going to be frame rates you will be familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone these are not good frame rates to use.

Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.

The odd 25p cadence is most noticeable on smooth pans and tilts where the pan speed will appear to jump slightly as the cadence flips between the 10 frame x3 and 15 frame x 2 segments. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn’t exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
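If you want to see this uneven cadence for yourself, a few lines of Python can count how many display refreshes each 25p frame is held for on a 60Hz screen:

```python
# Sketch: distribute 25 source frames over 60 display refreshes.
# Refresh r shows source frame floor(r * 25 / 60); counting the
# repeats reveals the uneven mix of 2x and 3x holds described above.

def cadence(src_fps, display_hz):
    """How many refreshes each source frame is shown for, over one second."""
    counts = {}
    for r in range(display_hz):
        frame = (r * src_fps) // display_hz   # source frame on screen at refresh r
        counts[frame] = counts.get(frame, 0) + 1
    return list(counts.values())

c25 = cadence(25, 60)
print("25p on 60Hz:", c25)
print("frames held for 2 refreshes:", c25.count(2),
      "| held for 3 refreshes:", c25.count(3))
```

Fifteen frames get shown twice and ten get shown three times, which is the stutter you see on smooth pans. Note this is an idealised model; the real pattern depends on the viewer's video subsystem, as mentioned above.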
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p  to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p when shown on a 60Hz screen are shown by using 2:3 cadence where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way any other movie or feature film is shown on TV and it doesn’t look too bad.
30p or 29.97p footage will look smoother than 24p: as all you need to do is show each frame twice to get to 60Hz, there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality. This means larger files and possibly slower downloads, which must be considered. 30p is a reasonable middle ground choice for a lot of productions; not as juddery as 24p but not as smooth as 60p.
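You can check both of these cadences by counting how many 60Hz display refreshes each source frame receives:

```python
# Sketch: 24p produces the alternating 2,3,2,3... pulldown cadence,
# while 30p divides evenly into 60Hz so every frame is shown twice.

def refreshes_per_frame(src_fps, display_hz=60):
    counts = {}
    for r in range(display_hz):
        frame = (r * src_fps) // display_hz   # source frame on screen at refresh r
        counts[frame] = counts.get(frame, 0) + 1
    return list(counts.values())

print("24p:", refreshes_per_frame(24))   # alternating mix of 3s and 2s
print("30p:", refreshes_per_frame(30))   # every frame shown exactly twice
```

The even 30p pattern is why it looks smoother than 24p on a computer screen, and why 60p (one frame per refresh) looks smoothest of all.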
24p or 23.98p for “The Film Look”.
Generally, if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of modern TVs feature motion compensation processes designed to eliminate judder. You might see things in the TV's literature such as "100 Hz smooth motion" or similar. If this function is enabled in the TV it will take any low frame rate footage, such as 24p or 25p, and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film like motion as the TV will do its best to smooth it out! Meanwhile, someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it's viewed.
Most TVs that have this feature will disable it when the footage is 60p, as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then export a 60p file, as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

Spaces open for my January 2018 Northern Lights adventure.

Picture taken by Jan Helmer Olsen. One of the guides on the tour.

Due to the unexpected redundancy of one of my guests I am now looking to sell on a couple of places on my Northern Lights expedition in January on his behalf.

The trip starts and finishes in Alta, Norway. Food is included for most of the trip. We spend 4 days up on the Finnmarksvidda where we normally get amazing Northern Lights viewing opportunities. I can also help guide anyone that wants to learn how to photograph or video the Aurora.

This is a real adventure and a lot of fun. The only way up to the cabins is by snow scooter, crossing frozen lakes along the way. We eat a campfire lunch in a Sami style tent, go ice fishing, explore by snow scooter and enjoy traditional sauna nights.

More details can be found here: http://www.xdcam-user.com/northern-lights-expeditions-to-norway/

Low Light Performance – It’s all about the lens!

This post follows on from my previous post about sensors and was inspired by one of the questions asked following that post.

While sensor size does have some effect on low light performance, the biggest single factor is really the lens. It isn't really bigger sensors that have revolutionised low light performance; it's the lenses we can now use that have changed our ability to shoot in low light. When we used 1/2″ or 2/3″ 3 chip cameras for most high end video production, the most common lenses were wide range zoom lenses. These were typically f1.8, reasonably fast lenses.

But the sensors were really small, so the pixels on those sensors were also relatively small, so having a fast lens was important.

Now we have larger sensors, super 35mm sensors are now common place. These larger sensors often have larger pixels than the old 1/2″ or 2/3″ sensors, even though we are now cramming more pixels onto the sensors. Bigger pixels do help increase sensitivity, but really the biggest change has been the types of lenses we use.

Let me explain:

The laws of physics play a large part in all of this.
We start off with the light in our scene which passes through a lens.

If we take a zoom lens of a certain physical size, with a fixed size front element and as a result a fixed light gathering ability (for example a typical 2/3″ ENG zoom), you have a certain amount of light coming into the lens. When the image projected by the rear of the lens is small it will be relatively bright, and as a result you get a large effective aperture.

Increase the size of the sensor and you have to increase the size of the projected image. So if we were to modify the rear elements of this same lens to create a larger projected image (increase the image circle) so that it covers a super 35mm sensor, the light we have is spread out "thinner" and as a result the projected image is dimmer. So the effective aperture of the same lens becomes smaller, and because the image is larger the focus becomes more critical and as a result the DoF narrower.

But if we keep the sensor resolution the same, a bigger sensor will have bigger pixels that can capture more light, and this makes up for the dimmer image coming from the lens.

So where a small sensor camera (1/2″, 2/3″) will typically have an f1.8 zoom lens, when you scale up to a s35mm sensor by altering the projected image from the lens, the same lens becomes the equivalent of around f5.6. But because, for like for like resolution, the pixel size is much bigger, the large sensor will be 2 to 3 stops more sensitive. So the low light performance is almost exactly the same, the DoF remains the same and the field of view remains the same (the sensor is larger, so DoF decreases, but the aperture becomes smaller so DoF increases again back to where we started). Basically it's all governed by how much light the lens can capture and pass through to the sensor.
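Here's a rough sketch of that maths. The sensor widths are my approximate assumptions, so the exact effective aperture depends on the dimensions you use (with these numbers it lands a little faster than f5.6); the key point is that the light lost to the larger image circle is balanced by the larger pixels.

```python
# Sketch: scale a 2/3" f1.8 zoom's image circle up to Super 35.
# Sensor widths below are approximate assumptions, not exact specs.
import math

W_23 = 9.6      # 2/3" type sensor width in mm (approx.)
W_S35 = 24.9    # Super 35mm sensor width in mm (approx.)

crop = W_S35 / W_23              # linear scale factor, ~2.6x
f_small = 1.8                    # typical 2/3" ENG zoom aperture
f_large = f_small * crop         # same light spread over a bigger image

# Image area grows by crop^2, so the projected image dims by this many stops:
light_loss_stops = math.log2(crop ** 2)

# Same pixel count on the bigger sensor means pixel area also grows by
# crop^2, so sensitivity rises by the same number of stops:
sensitivity_gain_stops = 2 * math.log2(crop)

print(f"effective aperture ~ f/{f_large:.1f}")
print(f"light lost: {light_loss_stops:.2f} stops, "
      f"regained by bigger pixels: {sensitivity_gain_stops:.2f} stops")
```

The two numbers cancel exactly, which is the "low light performance is almost exactly the same" result described above.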

It's actually the use of prime lenses, which are much more efficient at capturing light, that has revolutionised low light shooting, as the simplicity of a prime compared to a zoom makes fast lenses for large sensors affordable. When we moved to sensors much closer in size to those used in stills cameras, the range and choice of affordable lenses we could use increased dramatically. We were no longer restricted to expensive zooms designed specifically for video cameras.

Going the other way: if you were to take one of today's fast primes, like a common and normally quite affordable 50mm f1.4, and build an optical adapter of the "speedbooster" type so you could use it on a 2/3″ sensor, you would end up with the equivalent of a 10mm f0.5 lens that would turn that 2/3″ camera into a great low light system, with performance similar to that of a s35mm camera with a 50mm f1.4.

Adjusting the Color Matrix

Every now and again I get asked how to adjust the color matrix in a video camera. Back in 2009 I made a video on how to adjust the color matrix in the Sony’s EX series of cameras. This video is just as relevant today as it was then. The basic principles have not changed.

The exact menu settings and menu layout may be a little different in the latest cameras, but the matrix adjustments (R-G, G-R etc) have exactly the same effect in the latest cameras that provide them (FS7, F5, F55 and most of the shoulder mount and other broadcast cameras). So if you want a better understanding of how these settings and adjustments work, take a look at the video.

I’ll warn you now that adjusting the color matrix is not easy as each setting interacts with the others. So creating a specific look via the matrix is not easy and requires a fair bit of patience and a lot of fiddling and testing to get it just right.

Digital Film Making Workshop – Dubai.

I’m running a digital film making workshop in Dubai, December 15/16th 2017.

This 1.5 day course will take you through composition, lighting and exposure (including color, gamma and exposure index) as well as post production, covering different grading techniques including LUTs, S-Curves and color managed workflows. It will focus on how to create high quality, film-like images using the latest digital techniques. It will also cover one of the hottest topics right now: HDR.

Day 2 will include practical sessions where different shooting techniques can be tested to compare how they affect the end result.

Full details of the workshop can be found here: http://www.amt.tv/event/Digital_Filmmaking_Workshop_Dubai/

Sony Pro Tour – Glasgow 7th December.

Just a very quick note that the last UK event of the Sony Pro Tour for 2017 will be in Glasgow on Thursday the 7th of December. I’ll be there to answer any questions and to give an in depth seminar on HDR including how to shoot HDR directly with the Sony cameras that feature Hybrid Log Gamma.

The event is free, there will be a wide range of cameras for you to play with including FS5, FS7, the new Z90 and X80 as well as monitors, mixers and audio gear.

More info here: https://www.sony.co.uk/pro/page/sony-pro-tour-2017

Why hasn’t anyone brought out a super sensitive 4K camera?

Our current video cameras are operating at the limits of current sensor technology. As a result there isn’t much a camera manufacturer can do to improve sensitivity without compromising other aspects of the image quality.
Every sensor is made out of silicon and silicon is around 70% efficient at converting photons of light into electrons of electricity. So the only things you can do to alter the sensitivity is change the pixel size, reduce losses in the colour and low pass filters, use better micro lenses and use various methods to prevent the wires and other electronics on the face of the sensor from obstructing the light. But all of these will only ever make very small changes to the sensor performance as the key limiting factor is the silicon used to make the sensor.
 
This is why even though we have many different sensor manufacturers, if you take a similar sized sensor with a similar pixel count from different manufacturers the performance difference will only ever be small.
 
Better image processing with more advanced noise reduction can help reduce noise which can be used to mimic greater sensitivity. But NR rarely comes without introducing other artefacts such as smear, banding or a loss of subtle details. So there are limits as to how much noise reduction you want to apply. 
 
So, unless there is a new sensor technology breakthrough we are unlikely to see any new camera come out with a large, actual improvement in sensitivity. Also we are unlikely to see a sudden jump in resolution without a sensitivity or dynamic range penalty with a like for like sensor size. This is why Sony’s Venice and the Red cameras are moving to larger sensors as this is the only realistic way to increase resolution without compromising other aspects of the image. It’s why all the current crop of S35mm 4K cameras are all of very similar sensitivity, have similar dynamic range and similar noise levels.

A great example of this is the Sony A7s. It is more sensitive than most 4K S35 video cameras simply because it has a larger full frame sensor, so the pixels can be bigger and each pixel can capture more light. It's also why cameras with smaller 4K sensors tend to be less sensitive and, in addition, have lower dynamic range (because the pixel size determines how many electrons each pixel can store before it overloads).
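As a rough illustration of the pixel size advantage (the sensor dimensions and pixel counts below are approximate assumptions, not exact specifications):

```python
# Sketch: compare approximate pixel pitches of a 12MP-class full
# frame sensor (A7s style) and a typical 4K Super 35 sensor, then
# express the pixel area difference in stops of sensitivity.
# All dimensions and pixel counts are rough assumptions.
import math

def pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate pixel pitch in microns."""
    return sensor_width_mm * 1000 / horizontal_pixels

ff_pitch = pitch_um(35.6, 4240)    # full frame, ~4K wide  -> ~8.4 um
s35_pitch = pitch_um(24.9, 4096)   # Super 35, 4K wide     -> ~6.1 um

area_ratio = (ff_pitch / s35_pitch) ** 2
stops = math.log2(area_ratio)
print(f"pixel area ratio ~{area_ratio:.1f}x, ~{stops:.1f} stop advantage")
```

With these assumed numbers the full frame pixels gather roughly twice the light, i.e. around a one stop advantage, before any differences in processing or sensor design are considered.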

FS5 Eclipse and 3D Northern Lights by Jean Mouette and Thierry Legault.

Here is something a little different.

A few years ago I was privileged to have Jean Mouette and Thierry Legault join me on one of my Northern Lights tours. They were along to shoot the Aurora on an FS100 (it might have been an FS700) in real time. Sadly we didn't have the best of Auroras on that particular trip. Thierry is famous for his amazing images of the Sun with the International Space Station passing in front of it.

Amazing image by Thierry Legault of the ISS passing in front of the Sun.

Well the two of them have been very busy. Working with some special dual A7s camera rigs recording on to a pair of Atomos Shoguns, they have been up in Norway shooting the Northern Lights in 3D. You can read more about their exploits and find out how they did it here: https://www.swsc-journal.org/articles/swsc/abs/2017/01/swsc170015/swsc170015.html

To be able to “see” the Aurora in 3D they needed to place the camera rigs over 6km apart. I did try to take some 3D time-lapse of the Aurora a few years back with cameras 3km apart, but that was time-lapse and I was thwarted by low cloud. Jean and Thierry have gone one better and filmed the Aurora not only in 3D but also in real time. That's no mean feat!

One of the two A7s camera rigs used for the real time 3D Aurora project. The next stage will use 4 cameras in each rig for whole sky coverage.

If you want to see the 3D movies take a look at this page: http://www.iap.fr/science/diffusion/aurora3d/aurora3d.html

I’d love to see these projected in a planetarium or other dome venue in 3D. It would be quite an experience.

Jean was also in the US for the total eclipse in August. He shot the eclipse using an FS5 recording 12 bit raw on an Atomos Shogun. He's put together a short film of his experience and it really captures the excitement of the event, as well as some really spectacular images of the moon moving across the face of the sun. It really shows what a versatile camera the FS5 is.

If you want a chance to see the Northern Lights for yourself why not join me next year for one of my rather special trips to Norway. I still have some spaces. http://www.xdcam-user.com/northern-lights-expeditions-to-norway/