
Skills and knowledge in TV and video production are not keeping up with the technology.

TV and video production, including digital cinema, is a highly technical area. Anyone who tells you otherwise is, in my opinion, mistaken. Many of the key jobs in the industry require an in-depth knowledge of not just the artistic aspects but also the technical aspects.
Almost everyone in the camera department, almost everyone in post production and a large portion of the planning and pre-production crew need to know how the kit we use works.
A key area where there is a big knowledge gap is gamma and color. When I was starting out in this business I had a rough idea of what gamma and gamut were all about. But then 10 years or more ago you didn't really need to know or understand it because up to then we only ever had variations on 2.2/2.4 gamma. There were very few adjustments you could make to a camera yourself and if you did fiddle you would often create more problems than you solved. So those things were just best left alone.
But now it's vital that you fully understand gamma, what it does, how it works and what happens if you have a gamma mismatch. But sadly so many camera operators (and post people) like to bury their heads in the sand using the excuse "I'm an artist – I don't need to understand the technology". Worse still are those who think they understand it, but in reality do not, mainly I think due to the spread of misinformation and bad practices that become normal. As an example, shooting flat seems to mean something very different today from what it meant 10 years ago. 10 years ago it meant shooting with flat lighting so the editor or color grader could adjust the contrast in post production. Now though, shooting flat is often incorrectly used to describe shooting with log gamma (shooting with log isn't flat, it's a gamma mismatch that might fool the operator into thinking it's flat). The whole "shooting flat" misconception comes from the overuse and incorrect use of the term on the internet until it eventually became the accepted term for shooting with log.
As only a very small portion of film makers actually have any formal training, and even fewer go back to school to learn about new techniques or technologies properly, this is a situation that isn't going to get any better. We are moving into an era where, in the short term at least, we will need to deliver multiple versions of productions in standard dynamic range as well as several different HDR versions, while also saving the programme master in yet another intermediate format. Things are only going to get more complicated and more and more mistakes will be made as technology is applied and used incorrectly.
Most people are quite happy to spend thousands on a new camera, new recorder or new edit computer. But then they won’t spend any money on training to learn how to get the very best from it. Instead they will surf the net for information and guides of unknown quality and accuracy.
When you hire a crew member you have no idea how good their knowledge is. As it's normal for most not to have attended any formal courses, we don't ask for certificates and we don't expect them. But they could be very useful. Most other industries that benefit from a skilled labour force have some form of formal certification process, but our industry does not, so hiring crew, booking an editor etc becomes a bit of a lottery.
Of course it's not all about technical skills. Creative skills are equally important. But again it's hard to prove to a new client that you do have such skills. Showreels are all too easy to fake.
Guilds and associations are a start. But many of these can be joined simply by paying the joining or membership fee. You could be a member of one of the highly exclusive associations such as the ASC or BSC, but even that doesn’t mean you know about technology “A” or technique “Z”.
We should all take a close look at our current skill sets. What is lacking? Where do I have holes? What could I do better? I've been in this business for 30 years and I'm still learning new stuff almost every day. It's one of the things that keeps life interesting. Workshops and training events can be hugely beneficial and they really can lead to you getting better results. Or it may simply be that a day of training helps give you the confidence that you are doing it right. They are also great opportunities to meet like-minded people and network.
Whatever you do, don't stop learning, but beware the internet: not everything you read is right. The key is to not just read and then do, but to read, understand why, ask questions if necessary, then do. If you don't understand why, you'll never be able to adapt the "do" to fit your exact needs.

Should I shoot 8 bit UHD or 10 bit HD?

This comes up so many times, probably because the answer is rarely clear cut.

First let's look at exactly what the difference between an 8 bit and a 10 bit recording is.
Both will have the same dynamic range. Both will have the same contrast. Both will have the same color range. One does not necessarily have more color or contrast than the other. The only thing you can be sure of is the difference in the number of code values. An 8 bit video recording has a maximum of around 235 code values per channel, giving around 13 million possible tonal values. A 10 bit recording has up to around 970 code values per channel, giving around 912 million tonal values.
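If you want to check where those figures come from, here's a quick bit of Python using the approximate per-channel code value counts quoted above (the exact usable ranges depend on the recording standard):
```python
# Rough arithmetic behind the tonal value figures quoted above, using the
# approximate usable code values per channel from the text (~235 for 8 bit,
# ~970 for 10 bit - broadcast range recordings don't use every possible value).

for bits, per_channel in [(8, 235), (10, 970)]:
    tonal_values = per_channel ** 3  # combinations of the three channels
    print(f"{bits} bit: {per_channel} values per channel "
          f"-> {tonal_values:,} possible tonal values")

# 8 bit: 235 values per channel -> 12,977,875 (roughly 13 million)
# 10 bit: 970 values per channel -> 912,673,000 (roughly 912 million)
```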
There is a lot of talk of 8 bit recordings resulting in banding because there are only 235 luma shades. This is a bit of a half truth. It is true that if you have a monochrome image there would only be 235 steps. But we are normally making colour images so we are typically dealing with 13 million tonal values, not simply 235 luma shades. In addition it is worth remembering that the bulk of our current video distribution and display technologies are 8 bit – 8 bit H264, 8 bit screens etc. There are more and more 10 bit codecs coming along as well as more 10 bit screens, but the vast majority are still 8 bit.
Compression artefacts cause far more banding problems than too few steps in the recording codec. Most codecs use some form of noise reduction to help reduce the amount of data that needs to be encoded and this can result in banding. Many codecs divide the image data into blocks and the edges of these small blocks can lead to banding and stepping.
Of course 10 bit can give you more shades. But then 4K's extra resolution can give you more shades too. So an 8 bit UHD recording can sometimes have more shades than a 10 bit HD recording. How is this possible? If you think about it, in UHD each object in the scene is sampled with twice as many pixels. Imagine a gradient that spans 4 pixels in 4K: in 4K you will have 4 samples and 4 steps, but in HD you will only have 2 samples and 2 steps, so the HD image might show a single big step while the 4K image may have 4 smaller steps. It all depends on how steep the gradient is and how it falls relative to the pixels. It then also depends on how you will handle the footage in post production.
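As a toy illustration of that sampling argument, here's a little sketch that quantises the same gentle brightness ramp as if it were sampled by 2 HD pixels at 10 bit and by 4 UHD pixels at 8 bit. The ramp and the numbers are made up purely to show the principle:
```python
# Toy illustration only: quantise the same gentle brightness ramp as if it
# were sampled by 2 HD pixels at 10 bit and by 4 UHD pixels at 8 bit.
# The ramp values and levels are made up purely to show the principle.

def sample_ramp(pixels, levels):
    # brightness rises very slightly across the patch (0.50 -> 0.52)
    samples = [0.50 + 0.02 * i / (pixels - 1) for i in range(pixels)]
    return [round(s * (levels - 1)) for s in samples]

print("HD,  10 bit, 2 pixels:", sample_ramp(2, 1024))   # 2 distinct values
print("UHD,  8 bit, 4 pixels:", sample_ramp(4, 256))    # 4 distinct values
```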
So it is not as clear cut as often made out. For some shots with lots of textures 4K 8 bit might actually give more data for grading than 10 bit HD. In other scenes 10 bit HD might be better.
Anyone who is getting "muddy" results in 4K compared to HD is doing something wrong. Going from 8 bit 4K to 10 bit HD should not change the image contrast, brightness or color range. The images shouldn't really look significantly different. Sure, the 10 bit HD recording might show some subtle textures a little better, but then the 8 bit 4K might have more texture resolution.
My experience is that both work and both have pros and cons. I started shooting 8 bit S-log when the Sony PMW-F3 was introduced 7 years ago and have always been able to get great results provided you expose well. 10 bit UHD would be preferable, I'm not suggesting otherwise (at least 10 GOOD bits are always preferable), but 8 bit works too.

How can 16 bit X-OCN deliver smaller files than 10 bit XAVC-I?

Sony's X-OCN (eXtended tonal range Original Camera Negative) is a new type of codec. Currently it is only available via the R7 recorder which can be attached to a Sony PMW-F5, F55 or the new Venice cinema camera.

It is a truly remarkable codec that brings the kind of flexibility normally only available with 16 bit linear raw files, but with a file size that is smaller than many conventional high end video formats.

Currently there are two variations of X-OCN.

X-OCN ST is the standard version and X-OCN LT is the "light" version. Both are 16 bit and both contain data based directly on what comes off the camera's sensor. The LT version is barely distinguishable from a 16 bit linear raw recording and the ST version is considered "visually lossless". Having that sensor data in post production allows you to manipulate the footage over a far greater range than is possible with traditional video files. Traditional video files will already have some form of gamma curve as well as a colour space and white balance baked in. This limits the scope of how far the material can be adjusted and reduces the amount of picture information you have (relative to what comes directly off the sensor).

Furthermore most traditional video files are 10 bit with a maximum of 1024 code values or levels within the recording. There are some 12 bit codecs but these are still quite rare in video cameras. X-OCN is 16 bit which means that you can have up to 65,536 code values or levels within the recording. That’s a colossal increase in tonal values over traditional recording codecs.

But the thing is that X-OCN LT files are a similar size to Sony’s own XAVC-I (class 480) codec, which is already highly efficient. X-OCN LT is around half the size of the popular 10 bit Apple ProRes HQ codec but offers comparable quality. Even the high quality ST version of X-OCN is smaller than ProRes HQ. So you can have image quality and data levels comparable to Sony’s 16 bit linear raw but in a lightweight, easy to handle 16 bit file that’s smaller than the most commonly used 10 bit version of ProRes.

But how is this even possible? Surely such an amazing 16 bit file should be bigger!

The key to all of this is that the data contained within an X-OCN file is based on the sensor's output rather than traditional video. The cameras that produce the X-OCN material all use Bayer sensors. In a traditional video workflow the data from a Bayer sensor is first converted from the luminance values that the sensor produces into a YCbCr or RGB signal.

So if the camera has a 4096×2160 Bayer sensor, in a traditional workflow this pixel level data gets converted to 4096×2160 of Green, plus 4096×2160 of Red, plus 4096×2160 of Blue (or the same of Y, Cb and Cr). In total you end up with around 26 million data points which then need to be compressed using a video codec.

However if we bypass the conversion to a video signal and just store the data that comes directly from the sensor we only need to record a single set of 4096×2160 data points – 8.8 million. This means we only need to store a third as much data as in a traditional video workflow and it is this huge data saving that is the main reason why it is possible for X-OCN to be smaller than traditional video files while retaining amazing image quality. It's simply a far more efficient way of recording the data from a Bayer camera.
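
Here's a rough back-of-envelope version of those numbers, before any compression is applied (real file sizes will of course also depend on how hard the codec compresses the data):

```python
# Back-of-envelope numbers behind the explanation above, before any
# compression is applied (real file sizes also depend on the codec).

width, height = 4096, 2160
pixels = width * height                     # photosites / pixels per frame

rgb_points = pixels * 3                     # R, G and B (or Y, Cb, Cr) planes
bayer_points = pixels                       # raw sensor data, a single plane

print(f"RGB / YCbCr data points per frame: {rgb_points / 1e6:.1f} million")
print(f"Bayer data points per frame:       {bayer_points / 1e6:.1f} million")

# Uncompressed data per frame at the bit depths discussed in the article:
print(f"10 bit RGB:   ~{rgb_points * 10 / 1e6:.0f} Mbits per frame")
print(f"16 bit Bayer: ~{bayer_points * 16 / 1e6:.0f} Mbits per frame")
```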

Of course this does mean that the edit or playback computer has to do some extra work because as well as decoding the X-OCN file it has to convert the data to a video image, but Sony developed X-OCN to be easy to work with – which it is. Even a modest modern workstation will have no problem working with X-OCN. But the fact that you have that sensor data in the grading suite means you have an amazing degree of flexibility. You can even adjust the way the file is decoded to tailor whether you want more highlight or shadow information in the video file that will be created after the X-OCN is decoded.

Why isn't 16 bit much bigger than 10 bit? Normally a 16 bit file will be bigger than a 10 bit file. But within a video image there are often areas of information that are very similar. Video compression algorithms take advantage of this and instead of recording a value for every pixel will record a single value that represents all of the similar pixels. When you go from 10 bit to 16 bit you do have more bits of data to record, but a greater percentage of the code values will be the same or similar, and as a result the codec becomes more efficient. So the file size does increase a bit, but not as much as you might expect.

So, X-OCN, out of the gate, only needs to store a third of the data points of a similar traditional RGB or YCbCr codec. Increasing the bit depth from the typical 10 bits of a regular codec to the 16 bits of X-OCN does then increase the amount of data needed to record it. But the use of a clever algorithm to minimise the data needed for those 16 bits means that the end result is a 16 bit file only a bit bigger than XAVC-I but still smaller than ProRes HQ, even at its highest quality level.

Sony Venice. Dual ISOs, 1 stop NDs and Grading via Metadata.

With the first of the production Venice cameras now starting to find their way to some very lucky owners it’s time to take a look at some features that are not always well understood, or that perhaps no one has told you about yet.

Dual Native ISOs: What does this mean?

An electronic camera uses a piece of silicon to convert photons of light into electrons of electricity. The efficiency at doing this is determined by the material used. Then the amount of light that can be captured and thus the sensitivity is determined by the size of the pixels. So, unless you physically change the sensor for one with different sized pixels (which will in the future be possible with Venice) you can’t change the true sensitivity of the camera. All you can do is adjust the electronic parameters.

With most video cameras the ISO is changed by increasing the amount of amplification applied to the signal coming off the sensor. Adding more gain or increasing the amplification will result in a brighter picture. But if you add more amplification/gain then the noise from the sensor is also amplified by the same amount. Make the picture twice as bright and normally the noise doubles.

In addition there is normally an optimum amount of gain where the full range of the signal coming from the sensor will be matched perfectly with the full recording range of the chosen gamma curve. This optimum gain level is what we normally call the “Native ISO”. If you add too much gain the brightest signal from the sensor would be amplified too much and exceed the recording range of the gamma curve. Apply too little gain and your recordings will never reach the optimum level and darker parts of the image may be too dark to be seen.

As a result the Native ISO is where you have the best match of sensor output to gain. Not too much, not too little and hopefully low noise. This is typically also referred to as 0dB gain in an electronic camera and normally there is only 1 gain level where this perfect harmony between sensor, gain and recording range is achieved, this becoming the native ISO.

Side Note: On an electronic camera ISO is an exposure rating, not a sensitivity measurement. Enter the camera's ISO rating into a light meter and you will get the correct exposure. But it doesn't really tell you how sensitive the camera is, as ISO makes no allowance for increasing noise levels which will limit the darkest thing a camera can see.

Tweaking the sensor.

However, there are some things we can tweak on the sensor that affect how big the signal coming from the sensor is. The sensor's pixels are analog devices. A photon of light hits the silicon photo receptor (pixel) and it gets converted into an electron of electricity which is then stored within the structure of the pixel as an analog signal until the pixel is read out by a circuit that converts the analog signal to a digital one, at the same time adding a degree of noise reduction. It's possible to shift the range that the A to D converter operates over and the amount of noise reduction applied to obtain a different readout range from the sensor. By doing this (and/or other similar techniques, Venice may use some other method) it's possible to produce a single sensor with more than one native ISO.

A camera with dual ISOs will have two different operating ranges. One tuned for higher light levels and one tuned for lower light levels. Venice will have two exposure ratings: 500 ISO for brighter scenes and 2500 ISO for shooting when you have less light. With a conventional camera, to go from 500 ISO to 2500 ISO you would need to add around 14dB of gain and this would increase the noise by a factor of about 5. However with Venice and its dual ISOs, as we are not adding gain but instead altering the way the sensor is operating, the noise difference between 500 ISO and 2500 ISO will be very small.
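
For reference, here's the arithmetic behind that conventional-gain comparison. It's just standard voltage-gain dB maths, nothing Venice specific:

```python
# The conventional-gain comparison from the paragraph above, worked through
# with standard voltage-gain dB arithmetic (nothing Venice specific).
import math

ratio = 2500 / 500                    # 5x more amplification would be needed
gain_db = 20 * math.log10(ratio)
print(f"{ratio:.0f}x amplification = {gain_db:.1f} dB of added gain")
# -> 5x amplification = 14.0 dB, and with conventional gain the sensor
#    noise would be amplified by that same factor of 5.
```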

You will have the same dynamic range at both ISOs. But you can choose whether to shoot at 500 ISO for super clean images at a sensitivity not that dissimilar to traditional film stocks. This low ISO makes it easy to run lenses at wide apertures for the greatest control over the depth of field. Or you can choose to shoot at the equivalent of 2500 ISO without incurring a big noise penalty.

One of Venice's key features is that it's designed to work with anamorphic lenses. Anamorphic lenses are often not as fast as their spherical counterparts. Furthermore some anamorphic lenses (particularly vintage lenses) need to be stopped down a little to prevent excessive softness at the edges. So having a second higher ISO rating will make it easier to work with slower lenses or in lower light levels.


Next it's worth thinking about how you might want to use the camera's ND filters. Film cameras don't have built in ND filters. An Arri Alexa does not have built in NDs. So most cinematographers will work on the basis of a cinema camera having a single recording sensitivity.

The ND filters in Venice provide uniform, full spectrum light attenuation. Sony are incredibly fussy over the materials they use for their ND filters and you can be sure that the filters in Venice do not degrade the image. I was present for the pre-shoot tests for the European demo film and a lot of time was spent testing them. We couldn't find any issues. If you introduce 1 stop of ND, the camera becomes 1 stop less sensitive to light. In practice this is no different to having a camera with a sensor 1 stop less sensitive. So the built in ND filters can, if you choose, be used to modify the base sensitivity of the camera in 1 stop increments, up to 8 stops lower.

So with the dual ISOs and the NDs combined you have a camera that you can set up to operate at the equivalent of 2 ISO all the way up to 2500 ISO in 1 stop steps (by using the 2500 ISO and 500 ISO bases together you can have approximately half-stop steps between 10 ISO and 650 ISO). That's an impressive range and at no stage are you adding extra gain. There is no other camera on the market that can do this.
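
Here's a quick sketch of how those effective ratings fall out, with the values rounded and assuming up to the 8 stops of ND mentioned above:

```python
# Rough sketch of the effective exposure ratings described above: the two
# base ISOs combined with the built in NDs in 1 stop steps (values rounded,
# up to the 8 stops of ND mentioned earlier).

for base_iso in (500, 2500):
    ratings = [round(base_iso / 2 ** nd_stops) for nd_stops in range(9)]
    print(f"Base {base_iso} ISO:", ratings)

# Base 500 ISO:  [500, 250, 125, 62, 31, 16, 8, 4, 2]
# Base 2500 ISO: [2500, 1250, 625, 312, 156, 78, 39, 20, 10]
```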

On top of all this we do of course still have the ability to alter the Exposure Index of the camera's LUTs to offset the exposure and move the exposure mid point up and down within the dynamic range. Talking of LUTs, I hope to have some very interesting news about the LUTs for Venice. I've seen a glimpse of the future and I have to say it looks really good!


The raw and X-OCN material from a Venice camera (and from a PMW-F55 or F5 with the R7 recorder) contains a lot of dynamic metadata. This metadata tells the decoder in your grading software exactly how to handle the linear sensor data stored in the files. It tells your software where in the recorded data range the shadows start and finish, where the mid range sits and where the highlights start and finish. It also informs the software how to decode the colors you have recorded.

I recently spent some time with Sony Europe's color grading guru Pablo Garcia at the Digital Motion Picture Center in Pinewood. He showed me how you can manipulate this metadata to alter the way the X-OCN is decoded to change the look of the images you bring into the grading suite. Using a beta version of Blackmagic's DaVinci Resolve software, Pablo was able to go into the clip's metadata in real time and, simply by scrubbing over the metadata settings, adjust the shadows, mids and highlights BEFORE the X-OCN was decoded. It was really incredible to see the amount of data that Venice captures in the highlights and shadows. By adjusting the metadata you are tailoring the way the file is being decoded to suit your own needs and getting the very best video information for the grade. Need more highlight data – you got it. Want to boost the shadows? You can, at the file data level, before it's converted to a traditional video signal.

It's impressive stuff as you are manipulating the way the 16 bit linear sensor data is decoded rather than following a traditional workflow, which is to decode the footage to a generic intermediate file and then adjust that. This is just one of the many features that X-OCN from the Sony Venice offers. It's even more incredible when you consider that a 16 bit linear X-OCN LT file is similar in size to 10 bit XAVC-I (Class 480) and around half the size of Apple's 10 bit ProRes HQ. X-OCN LT looks fantastic and in my opinion grades better than XAVC S-Log. Of course for a high end production you will probably use the regular X-OCN ST codec rather than the LT version, but ST is still smaller than ProRes HQ. What's more, X-OCN is not particularly processor intensive; it's certainly much easier to work with X-OCN than cDNG. It's a truly remarkable technology from Sony.

Next week I will be shooting some more tests with a Venice camera as we explore the limits of what it can do. I'll try and get some files for you to play with.

What shutter speed to use if shooting 50p or 60p for 50i/60i conversion.

An interesting question got raised on Facebook today.

What shutter speed should I use if I am shooting at 50p so that my client can later convert the 50p to 50i? Of course this would also apply to shooting at 60p for 60i conversion.

Let's first of all make sure that we all understand that what's being asked for here is to shoot at 50(60) progressive frames per second so that the footage can later be converted to 25(30) frames per second interlace – which has 50(60) fields.

If we just consider normal 50p or 60p shooting, the shutter speed that you would choose depends on many factors including what you are shooting, how much light you have and personal preference.

1/48th or 1/50th of a second is normally considered the slowest shutter speed at which motion blur in a typical frame no longer significantly softens the image. This is why old point and shoot film cameras almost always had a 1/50th shutter: it was the slowest speed you could get away with.

Shooting with a shutter speed that is half the duration of the camera's frame rate is also known as using a 180 degree shutter, a very necessary practice with a film movie camera due to the way the mechanical shutter must be closed while the film is physically advanced to the next frame. But it isn't essential that you have the closed shutter period with an electronic camera as there is no film to move, so you don't have to use a 180 degree shutter if you don't want to.
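
For reference, the shutter angle to exposure time relationship works out like this:

```python
# The shutter angle / exposure time relationship mentioned above:
# exposure time = (shutter angle / 360) x (1 / frame rate).

def exposure_time(frame_rate, shutter_angle=180):
    return (shutter_angle / 360.0) / frame_rate

for fps in (24, 25, 50, 60):
    t = exposure_time(fps)
    print(f"{fps}fps at 180 degrees -> 1/{round(1 / t)}th of a second")

# 24fps -> 1/48th, 25fps -> 1/50th, 50fps -> 1/100th, 60fps -> 1/120th
```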

There is no reason why you can't use a 1/50th or 1/60th shutter when shooting at 50fps or 60fps, especially if you don't have a lot of light to work with. 1/50th (1/60th) at 50fps (60fps) will give you the smoothest motion as there are no breaks in the motion between each frame. But many people like to sharpen up the image still further by using 1/100th (1/120th) to reduce motion blur. Or they prefer the slightly steppy cadence this brings as it introduces a small jump in motion between each frame. Of course 1/100th needs twice as much light. So there is no hard and fast rule and some shots will work better at 1/50th while others may work better at 1/100th.

However if you are shooting at 50fps or 60fps so that it can be converted to 50i or 60i, with each frame becoming a field, then the "normal" shutter speed you should use will be 1/50th or 1/60th to mimic a 25fps-50i camera or 30fps-60i camera, which would typically have its shutter running at 1/50th or 1/60th. 1/100th (1/120th) at 50i (60i) can look a little over sharp due to an increase in aliasing, because an interlaced video field only has half the resolution of the full frame. This is particularly true of 50p converted to 50i as there is no in-camera anti-aliasing and each frame simply has its resolution divided by 2 to produce the equivalent of a single field. When you shoot with a "real" 50i camera, line pairs on the sensor are combined and read out together as a single field line and this slightly softens and anti-aliases each of the fields. 50i has lower vertical resolution than 25p. But with simple software conversions from 50p to 50i this anti-aliasing does not occur. If you combine that with a faster than typical shutter speed, the interlaced image can start to look over sharp and may have jaggies or color moire not present in the original 50p/60p footage.
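
Here's a very simplified sketch of what a basic software conversion like that does – each progressive frame contributes one field by simply keeping every other line, with no filtering. It's purely to illustrate the point, not how any particular piece of software is written:

```python
# Very simplified illustration of a basic software 50p -> 50i conversion:
# each progressive frame contributes one field, made by simply keeping every
# other line, with no filtering or anti-aliasing (which is the point above).
# A "frame" here is just a list of lines.

def frames_to_fields(frames):
    fields = []
    for i, frame in enumerate(frames):
        offset = i % 2            # alternate between upper and lower lines
        fields.append(frame[offset::2])
    return fields

# toy example: 4 tiny "frames" of 4 lines each
frames = [[f"frame{i}_line{l}" for l in range(4)] for i in range(4)]
for field in frames_to_fields(frames):
    print(field)
```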

More on frame rate choices for today's video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let's look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/Pal area these are going to be frame rates you will be familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone these are not good frame rates to use.

Most computer screens run at 60Hz and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence and it’s not something you can control as the actual structure of the cadence depends on the video subsystem of the computer the end user is using.
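
The arithmetic behind that 15/10 split is easy enough to check:

```python
# The arithmetic behind the 15/10 split described above for 25p material
# shown on a 60Hz computer screen.

source_fps, display_hz = 25, 60
shown_three_times = display_hz - 2 * source_fps    # frames needing an extra repeat
shown_twice = source_fps - shown_three_times

print(f"{shown_twice} frames shown twice + {shown_three_times} frames shown three "
      f"times = {shown_twice * 2 + shown_three_times * 3} screen refreshes per second")
# -> 15 frames shown twice + 10 frames shown three times = 60 refreshes
```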

The odd 25p cadence is most noticeable on smooth pans and tilts where the pan speed will appear to jump slightly as the cadence flips between the frames shown 3 times and the frames shown twice. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn't exhibit this same uneven stutter (see the 24p section). 50p material will exhibit a similar stutter as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV. The odd frame rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p to fit with the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p when shown on a 60Hz screen are shown by using 2:3 cadence where the first frame is shown twice, the next 3 times, then 2, then 3 and so on. This is very similar to the way any other movie or feature film is shown on TV and it doesn’t look too bad.
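In practice the 2:3 cadence looks like this over one second of 24p material:
```python
# What the 2:3 cadence described above looks like over one second of 24p
# material on a 60Hz display: frames are alternately repeated 2 and 3 times.
from itertools import cycle

repeats = cycle([2, 3])
refreshes = []
for frame in range(24):              # one second of 24p
    refreshes.extend([frame] * next(repeats))

print(len(refreshes), "screen refreshes")   # -> 60
print(refreshes[:10])                       # -> [0, 0, 1, 1, 1, 2, 2, 3, 3, 3]
```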
30p or 29.97p footage will look smoother than 24p as all you need to do is show each frame twice to get to 60Hz, so there is no odd cadence, and the slightly higher frame rate will exhibit a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality. This means larger files and possibly slower downloads, which must be considered. 30p is a reasonable middle ground choice for a lot of productions, not as juddery as 24p but not as smooth as 60p.
24p or 23.98p for “The Film Look”.
Generally if you want to mimic the look of a feature film then you might choose to use 23.98p or 24p as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
A lot of modern, new TVs feature motion compensation processes designed to eliminate judder. You might see things in the TV's literature such as "100Hz smooth motion" or similar. If this function is enabled in the TV it will take any low frame rate footage such as 24p or 25p and use software to create new frames to increase the frame rate and smooth out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film like motion as the TV will do its best to smooth it out! Meanwhile someone watching the same clip on a computer will see the judder. So the motion in the same clip will look quite different depending on how it's viewed.
Most TVs that have this feature will disable it when the footage is 60p, as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then export a 60p file, as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

Why hasn’t anyone brought out a super sensitive 4K camera?

Our current video cameras are operating at the limits of current sensor technology. As a result there isn’t much a camera manufacturer can do to improve sensitivity without compromising other aspects of the image quality.
Every sensor is made out of silicon and silicon is around 70% efficient at converting photons of light into electrons of electricity. So the only things you can do to alter the sensitivity are change the pixel size, reduce losses in the colour and low pass filters, use better micro lenses and use various methods to prevent the wires and other electronics on the face of the sensor from obstructing the light. But all of these will only ever make very small changes to the sensor performance as the key limiting factor is the silicon used to make the sensor.
This is why even though we have many different sensor manufacturers, if you take a similar sized sensor with a similar pixel count from different manufacturers the performance difference will only ever be small.
Better image processing with more advanced noise reduction can help reduce noise which can be used to mimic greater sensitivity. But NR rarely comes without introducing other artefacts such as smear, banding or a loss of subtle details. So there are limits as to how much noise reduction you want to apply. 

So, unless there is a new sensor technology breakthrough we are unlikely to see any new camera come out with a large, genuine improvement in sensitivity. We are also unlikely to see a sudden jump in resolution without a sensitivity or dynamic range penalty with a like for like sensor size. This is why Sony's Venice and the Red cameras are moving to larger sensors as this is the only realistic way to increase resolution without compromising other aspects of the image. It's why the current crop of S35mm 4K cameras are all of very similar sensitivity, have similar dynamic range and similar noise levels.


A great example of this is the Sony A7s. It is more sensitive than most 4K S35 video cameras simply because it has a larger full frame sensor, so the pixels can be bigger, so each pixel can capture more light. It’s also why cameras with smaller 4K sensors will tend to be less sensitive and in addition have lower dynamic range (because the pixel size determines how many electrons it can store before it overloads).
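
To put some very rough numbers on that, here's a hedged back-of-envelope comparison. The sensor widths are approximate assumed figures (roughly 36mm for full frame and 25mm for Super 35) with around 4K of photosites across in both cases, so treat the result as illustrative only:

```python
# Hedged back-of-envelope comparison. The sensor widths are approximate,
# assumed figures used only for the arithmetic (roughly 36mm full frame,
# roughly 25mm Super 35), with around 4K of photosites across in both cases.
import math

pixels_across = 4096
full_frame_width_mm = 36.0
super35_width_mm = 24.9

for name, width_mm in [("Full frame", full_frame_width_mm), ("Super 35", super35_width_mm)]:
    pitch_um = width_mm / pixels_across * 1000
    print(f"{name}: ~{pitch_um:.1f} micron pixel pitch")

area_ratio = (full_frame_width_mm / super35_width_mm) ** 2
print(f"Pixel area ratio ~{area_ratio:.1f}x, roughly {math.log2(area_ratio):.1f} stop more light per pixel")
```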

FS5 Eclipse and 3D Northern Lights by Jean Mouette and Thierry Legault.

Here is something a little different.

A few years ago I was privileged to have Jean Mouette and Thierry Legault join me on one of my Northern Lights tours. They were along to shoot the Aurora on an FS100 (it might have been an FS700) in real time. Sadly we didn't have the best of Auroras on that particular trip. Thierry is famous for his amazing images of the Sun with the International Space Station passing in front of it.

Amazing image by Thierry Legault of the ISS passing in front of the Sun.

Well the two of them have been very busy. Working with some special dual A7s camera rigs recording on to a pair of Atomos Shoguns, they have been up in Norway shooting the Northern Lights in 3D. You can read more about their exploits and find out how they did it here:

To be able to “see” the Aurora in 3D they needed to place the camera rigs over 6km apart. I did try to take some 3D time-lapse of the Aurora a few years back with cameras 3Km apart, but that was timelapse and I was thwarted by low cloud. Jean and Thierry have gone one better and filmed the Aurora not only in 3D but also in real time. That’s no mean feat!

One of the two A7s camera rigs used for the real time 3D Aurora project. The next stage will use 4 cameras in each rig for whole sky coverage.

If you want to see the 3D movies take a look at this page:

I’d love to see these projected in a planetarium or other dome venue in 3D. It would be quite an experience.

Jean was also in the US for the total Eclipse in August. He shot the eclipse using an FS5 recording 12 bit raw on an Atomos Shogun. He's put together a short film of his experience and it really captures the excitement of the event as well as some really spectacular images of the moon moving across the face of the sun. It really shows what a versatile camera the FS5 is.

If you want a chance to see the Northern Lights for yourself why not join me next year for one of my rather special trips to Norway. I still have some spaces.

SD Cards – how long do they last?

This came up on Facebook the other day: how long do SD cards last?

First of all – I have found SD cards to be pretty reliable overall. Not as reliable as SxS cards or XQD cards, but pretty good generally. The physical construction of SD cards has let me down a few times, the little plastic fins between the contacts breaking off. I've had a couple of cards that have just died, but I didn't lose any content as the camera wouldn't let me record to them. I have also had SD cards that have given me a lot of trouble getting content and files off them. But compared to tape, I've had far fewer problems with solid state media.

But something that I don't think most people realise is that a lot of solid state media ages the more you use it. In effect it wears out.

There are a couple of different types of memory cell that can be used in solid state media. High end professional media will often use single level memory cells that are either on or off. These cells can only store a single value, but they tend to be fast and extremely reliable due to their simplicity. But you need a lot of them in a big memory card.  The other type of cell found in most lower cost media is a multi-level cell. Each multi-level cell stores a voltage and the level of the voltage in that cell represents many different values. As a result each cell can store more than one single value. The memory cells are insulated to prevent the voltage charge leaking away. However each time you write to the cell the insulation can be eroded. Over time this can result in the cell becoming leaky and this allows the voltage in the cell to change slightly resulting in a change to the data that it holds. This can lead to data corruption.
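
The underlying trade-off is simply how many distinct voltage levels each cell has to hold apart. The bits-per-cell figures below are the generic ones for common cell types, not the specification of any particular card:

```python
# The basic trade-off described above: the more bits stored per cell, the
# more distinct voltage levels each cell has to hold apart, so a small
# amount of charge leakage is more likely to change the stored value.
# Generic bits-per-cell figures for common cell types, not any specific card.

for name, bits_per_cell in [("Single level cell (SLC)", 1),
                            ("Multi level cell (MLC)", 2),
                            ("Triple level cell (TLC)", 3)]:
    levels = 2 ** bits_per_cell
    print(f"{name}: {bits_per_cell} bit(s) per cell = {levels} voltage levels to distinguish")
```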

So multi level cards that get used a lot may develop leaky cells. But if the card is read reasonably soon after it was written to (days, weeks, a month perhaps) then it is unlikely that the user will experience any problems. The cards include circuitry designed to detect problem cells and then avoid them. But over time the card can reach a point where it no longer has enough spare memory to keep mapping out damaged cells, or the cells lose their charge quickly and as a result the data becomes corrupt.

Raspberry Pi computers that use SD cards as their main storage can kill SD cards in a matter of days because of the extremely high number of times that the card may be written to.

With a video camera it will depend on how often you use the cards. If you only have one or two cards and you shoot a lot I would recommend replacing the cards yearly. If you have lots of cards, either use one or two and replace them regularly, or cycle through all the cards you have to extend their life and avoid excessive use of any one card, which might make it less reliable than the rest.

One thing regular SD cards are not good for is long term storage (more than a year and never more than 5 years) as the charge in the cells will leak away over time. There are special write once SD cards designed for archival purposes where each cell is permanently fused to either on or off. Most standard SD cards, no matter how many times they have been used, won't hold data reliably beyond 5 years.

What does ISO mean with today's cameras?

Once upon a time the meaning of ISO was quite clear. It was a standardised sensitivity rating of the film stock you were using. If you wanted more sensitivity, you used film with a higher ISO rating. But today the meaning of ISO is less clear and we can’t swap our sensor out for more or less sensitive ones. So what does it mean?

ISO refers to the International Organization for Standardization, which specifies many, many different standards for many different things. For example ISO 3166 is for country codes and ISO 50001 is for energy management.

But in our world of film and TV there are two ISO standards that we have blended into one and we just call it “ISO”.

ISO 5800:2001 is the system used to determine the sensitivity of color negative film by plotting the density of the film against its exposure to light.

ISO 12232:2006 specifies the method for assigning and reporting ISO speed ratings, ISO speed latitude ratings, standard output sensitivity values, and recommended exposure index values, for digital still cameras.

Note a key difference: ISO 5800 is the measurement of the actual sensitivity to light of film.  ISO 12232 is a standardised way to report the speed rating, it is not a direct sensitivity measurement.

Within the digital camera ISO rating system there are 5 different methods that a camera manufacturer can use when obtaining the ISO rating of a camera. The most commonly used method is the Recommended Exposure Index (REI) method, which allows the manufacturer to specify a camera model's EI or base ISO arbitrarily, based on what the manufacturer believes produces a satisfactory image. So it's not really a measure of the camera's sensitivity, but a rating that, if used with a standard external calibrated light meter to set the exposure, will give a satisfactory looking image. This is very different to a sensitivity measurement, and opinions as to what is a satisfactory image will vary from person to person. So there is a lot of scope for movement as to how an electronic camera might be rated.

As you cannot change the sensor in a digital camera, you cannot change the camera's efficiency at converting light into electrons (which is largely determined by the materials used and the physical construction). So you cannot change the actual sensitivity of the camera to light. But we have all seen how the ISO number of most digital cameras can normally be increased (and sometimes lowered) from the base ISO number.

Raising and lowering the ISO in an electronic camera is normally done by adjusting the amplification of the signal coming from the sensor, typically referred to as "gain" in the camera. It's not actually a physical change in the camera's sensitivity to light; it's like turning up the volume on a radio to make the music louder. Dual ISO cameras that claim not to add gain when switching between ISOs typically do this by adjusting the way the signal from the sensor is converted from an analog signal to a digital one. While it is true that this is different to a gain shift, it does typically alter the noise levels, as to make the picture brighter you need to sample the sensor's output lower down and closer to the noise floor. Once again though it is not an actual sensitivity change, it does not alter the sensor's sensitivity to light, you are just picking a different part of its output range.
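
As a rule of thumb, every 6dB of added gain doubles the ISO rating (one stop). The base ISO in the sketch below is just an example figure, not any specific camera's rating:

```python
# Rule of thumb for how added gain maps onto the ISO rating in an electronic
# camera: every 6dB of gain doubles the ISO number (one stop). The base ISO
# here is just an example figure, not a specific camera's rating.

base_iso = 800
for gain_db in (0, 6, 12, 18):
    iso = base_iso * 2 ** (gain_db / 6)
    print(f"{gain_db:>2}dB gain -> ISO {iso:.0f}")

# 0dB -> ISO 800, 6dB -> ISO 1600, 12dB -> ISO 3200, 18dB -> ISO 6400
```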

Noise and Signal To Noise Ratio.

Most of the noise in the pictures we shoot comes from the sensor and the level of this noise coming from the sensor is largely unchanged no matter what you do (some dual ISO cameras use variations in the way the sensor signal is sampled to shift the noise floor up and down a bit). So the biggest influence on the signal to noise ratio is the amount of light you put on the sensor. More light = More signal. The noise remains the same but the signal is bigger so you get a better signal to noise ratio, up to the point where the sensor overloads.

But what about low light?

To obtain a brighter image when light levels are low and the picture coming from the sensor looks dark, the signal coming from the sensor is boosted or amplified (gain is added). This amplification makes the desirable signal bigger, but it also makes the noise bigger. If we make the desirable picture 2 times brighter we also make the noise 2 times bigger. As a result the picture will be more noisy and grainy than one where we had enough light to get the brightness we want.

The signal to noise ratio deteriorates because the added amplification means the recording will clip more readily. Something that is close to the recording's clip point may be pushed above the clip point by adding gain, so the range you can record reduces while the noise gets bigger. However the optimum exposure is now achieved with less light, so the equivalent ISO number is increased. If you were using a light meter you would increase the ISO setting on the light meter to get the correct exposure. But the camera isn't more sensitive, it's just that the optimum amount of light for the "best" or "correct" exposure is reduced due to the added amplification.

So with an electronic camera, ISO is a rating that will give you the correct brightness of recording for the amount of light and the amount of gain that you have. This is different to sensitivity. Obviously the two are related, but they are not quite the same thing.

Getting rid of noise:

To combat the inevitable noise increase as you add gain/amplification most modern cameras use electronic noise reduction which is applied more and more aggressively as you increase the gain. At low levels this goes largely un-noticed. But as you start to add more gain and thus more noise reduction you will start to degrade the image. It may become softer, it may become smeary. You may start to see banding, ghosting or other artefacts.

Often as you increase the gain you may only see a very small increase in noise as the noise reduction does a very good job of hiding the noise. But for every bit of noise that's reduced there will be another artefact replacing it.

Technically the signal to noise ratio is improved by the use of noise reduction, but this typically comes at a price and NR can be very problematic if you later want to grade or adjust the footage as often you won’t see the artefacts until after the corrections or adjustments have been made. So be very careful when adding gain. It’s never good to have extra gain.