Tag Archives: cameras

Is this the age of the small camera? Part 1.

As Sony’s new Burano camera starts to ship – a relatively small camera that could comfortably be used to shoot a blockbuster movie – it is worth looking at how, over the last few years, the size of the cameras used for film production has come down.

Which was shot with an 8K Venice 2 and which was shot with a 4K FX3?

 

Only last year we saw the Sony FX3 used as the principal camera for the movie The Creator. What is particularly interesting about The Creator is that the FX3 was chosen by the director Gareth Edwards for a mix of both creative and financial reasons.


To save money or to add flexibility?

To save money, rather than building a lot of expensive sets, Edwards chose to shoot on location using a wide and varied range of locations (80 in all) across Asia. To make this possible he used a smaller than usual crew. Part of the reasoning given was that it was cheaper to fly a small crew to all these different locations than to build a different set for each part of the film. The film cost $80 million to make and took $104 million at the box office, a pretty decent profit at a time when many movies take years to break even.

FX3 on gimbal during the filming of The Creator



The FX3 was typically mounted on a gimbal and this allowed them to shoot quickly and in a very fluid manner, making use of natural light where possible.  A 2x anamorphic lens was used and the final delivery aspect ratio was a very wide 2.76:1. The film was edited first and then when the edit was locked down the VFX elements were added to the film. Modern tracking and rotoscoping techniques make it much easier to add VFX into sequences without needing to use green or blue screen techniques and this is one of those areas where AI will become a very useful and powerful tool.

You don’t NEED a big camera, but you might want one.

So, what is clear is that you don’t NEED a big camera to make a feature film, and The Creator demonstrates that an FX3 (recording to an Atomos Ninja) offers sufficient image quality to stand up to big screen presentation. I don’t think this is really anything new, but we have now reached the stage where the difference in image quality between a cheap $1500 camera like the FX30 and a high end “cinema” camera like the $70K Venice 2 is genuinely so small that an audience probably won’t notice.

There may be reasons why you might prefer a bigger camera body – it makes mounting accessories easier and will often have much better monitoring and viewfinder options. And you may argue that a camera like Venice can offer greater image quality (as you will see in part 2, it technically does have a higher quality image than the FX3), but would the audience actually be able to see the difference, and even if they could, would they actually care? And what about post production – surely a better quality image is a big help in post? Again, come back for part 2 where I explore this in more depth.

Which is the Arri LF and which is the Sony A1?


And small cameras will continue to improve. If what we have now is already good enough things can only get better.

8K Benefits??

Since the launch of Burano I’ve become more and more convinced of the benefits of an 8K sensor. Even if you only ever intend to deliver in 4K, the extra chroma resolution from actually having 4K of R and B pixels makes a very real difference. Venice 2 really made me much more aware of this, and Burano confirms it. Because of this I’ve been shooting a lot more with the Sony A1 (which possibly shares the same sensor as Burano). There is something I really like about the textural quality in the images from the A1, Burano and Venice 2 (having said that, after spending hours looking at my side by side test samples from both 4K and 8K cameras, while the difference is real I’m not sure it will always be seen in the final deliverable). In addition, when using a very compressed codec such as the XAVC-HS in the A1, recording at 8K leads to smaller artefacts which then tend to be less visible in a 4K deliverable. This allows you to grade the material harder than perhaps you can with similarly compressed 4K footage. The net result is that the 10 bit 8K looks fantastic in a 4K production.
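
As a back-of-the-envelope illustration of the chroma point above (my own sketch, not a manufacturer figure), here is the arithmetic for an 8K 4:2:0 recording delivered in 4K:

```python
# Why an 8K 4:2:0 source can yield full-resolution chroma in a 4K
# deliverable. Plain arithmetic only; the resolutions are standard
# 8K UHD and 4K UHD frame sizes.

def chroma_resolution(luma_w, luma_h, subsampling):
    """Return the chroma plane resolution for a given luma size and
    subsampling scheme ('4:4:4', '4:2:2' or '4:2:0')."""
    if subsampling == "4:4:4":
        return luma_w, luma_h
    if subsampling == "4:2:2":
        return luma_w // 2, luma_h       # chroma halved horizontally only
    if subsampling == "4:2:0":
        return luma_w // 2, luma_h // 2  # chroma halved in both directions
    raise ValueError(subsampling)

# 8K UHD recorded 4:2:0 (as in a highly compressed consumer codec):
cw, ch = chroma_resolution(7680, 4320, "4:2:0")
print(cw, ch)  # 3840 2160 – already a full 4K-sized chroma plane
```

So after the luma is downscaled to 4K UHD (3840×2160), both luma and chroma sit at 3840×2160 – effectively 4:4:4 at the delivery resolution.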

Sony A1 cropped and zoomed in 6x.


I have to wonder if The Creator wouldn’t have been better off being shot with an A1 rather than an FX3. You can’t get 8K raw out of an A1, but the extra resolution makes up for this and it may have been a better fit for the 2x anamorphic lens that they used.

So many choices….

And that’s the thing – we have lots of choices now. There are many really great small cameras, all capable of producing truly excellent images. A small camera allows you to be nimble. The grip and support equipment becomes smaller. This allows you to be more creative. A lot of small cameras are being used for the Formula 1 movie; small cameras are often mixed with larger cameras, and these days the audience isn’t going to notice.

Plus we are seeing a change in attitudes. A few years ago most cinematographers wouldn’t have entertained the idea of using a DSLR or pocket sized camera as the primary camera for a feature. Now it is different: a far greater number of DPs are looking at what a small camera might allow them to do, not just as a B camera but as the A camera. When the image quality stops being an issue, then small might allow you to do more.

This doesn’t mean big cameras like Venice will go away; there will always be a place for them. But I expect we will see more and more really great theatrical releases shot with cameras like the FX3 or A1, and that makes it a really interesting time to be a cinematographer. Again, look at The Creator – it had a relatively small budget for a science fiction film packed with CGI and other effects, and it looked great. Of course there is also that middle ground, a smaller camera but with the image quality of a big one – Burano perhaps?

In Part 2……

In part 2 I’m going to take some sample clips that I grabbed at a recent workshop from a Venice 2, Burano, A1 and FX3 and show you just how close the footage from these cameras is. I’ll also throw in some footage from an Arri LF and then I’ll “break” the footage in post production to give you an idea of where the differences are and whether they are actually significant enough to worry about.

 

Timecode doesn’t synchronise anything!!!

There seems to be a huge misunderstanding about what timecode is and what timecode can do. I lay most of the blame for this on manufacturers that make claims such as “Our Timecode Gadget Will Keep Your Cameras in Sync” or “by connecting our wireless timecode device to both your audio recorder and camera everything will remain in perfect sync”. These claims are almost never actually true.

What is “Sync”?

First we have to consider what we mean when we talk about “sync” or synchronisation.  A dictionary definition would be something like “the operation or activity of two or more things at the same time or rate.” For film and video applications if we are talking about 2 cameras they would be said to be in sync when both start recording each frame that they record at exactly the same moment in time and then over any period of time they record exactly the same number of frames, each frame starting and ending at precisely the same moment.

What is “Timecode”?

Next we have to consider what timecode is. Timecode is a numerical value attached to each frame of a video recording (or laid alongside an audio recording) to give it a time value in hours, minutes, seconds and frames. It is used to identify individual frames, and each frame must have a unique numerical value. Each successive frame’s timecode value MUST be 1 greater than the frame before (I’m ignoring drop frame for the sake of clarity here). A normal timecode stream does not feature any form of sync pulse or sync control, it is just a number value.
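
As a quick sketch of this idea, timecode can be modelled as nothing more than a frame counter formatted as hours:minutes:seconds:frames (non-drop-frame, with 25fps assumed purely for illustration):

```python
# Timecode as a plain frame count, formatted HH:MM:SS:FF.
# Non-drop-frame only; 25 fps is an assumed example rate.

FPS = 25

def frames_to_tc(frame_count, fps=FPS):
    """Convert an absolute frame number into HH:MM:SS:FF timecode."""
    ff = frame_count % fps
    total_seconds = frame_count // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def tc_to_frames(tc, fps=FPS):
    """Convert HH:MM:SS:FF back into an absolute frame number."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

# Each successive frame's value is exactly 1 greater than the last:
print(frames_to_tc(0))      # 00:00:00:00
print(frames_to_tc(1))      # 00:00:00:01
print(frames_to_tc(90000))  # 01:00:00:00 at 25 fps
```

Notice there is nothing in the number itself that could pull two devices into sync – it is just a label on each frame.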

Controlling the “Frame Rate”


And now we have to consider what controls the frame rate that a camera or recorder records at. The frame rate the camera records at is governed by the camera’s internal sync or frame clock. This is normally a circuit controlled by a crystal oscillator. It’s worth noting that these circuits can be affected by heat, and at different temperatures there may be very slight variations in the frequency of the sync clock. Also, this clock starts when you turn the camera on, so the exact starting moment of the sync clock depends on the exact moment the camera is switched on. If you were to randomly turn on a bunch of cameras, their sync clocks would all be running out of sync. Even if you could press the record button on each camera at exactly the same moment, each would start recording the first frame at a very slightly different moment in time depending on where in the frame rate cycle each camera’s sync clock is. In higher end cameras there is often a way to externally control the sync clock via an input called “Genlock”. Applying a synchronisation signal to the camera’s Genlock input will pull the camera’s sync clock into precise sync with the sync signal and then hold it there.

And the issue is………..

Timecode doesn’t perform a sync function. To SYNCHRONISE two cameras, or a camera and an audio recorder, you need a genlock sync signal, and timecode isn’t a sync signal – timecode is just a frame count number. So timecode cannot synchronise two devices. The camera’s sync/frame clock might be running at a very slightly different rate to the clock of the timecode source. When feeding timecode to a camera, the camera might already be part way through a frame when the timecode value for that frame arrives, making it too late to be applied, so there will be an unavoidable offset. Across multiple cameras this offset will vary, so it is completely normal to see a +/- 2 frame (sometimes larger) offset amongst several cameras at the start of each recording.

And once you start to record the problems can get even worse…

If the camera’s frame clock is running slightly faster than the clock of the TC source, then the camera might record 500 frames while only receiving 498 timecode values. So what happens for the 2 extra frames the camera records in this time? The answer is that the camera will give each frame in the sequence a unique numerical value that increments by 1, so the extra frames will be given the necessary 2 additional TC values. As a result, the TC in the camera at the end of the clip will be a further 2 frames different from that of the TC source. The TC from the source and the TC from the camera won’t exactly match; they won’t be in sync or “two or more things at the same time or rate” – they will be different.

The longer the clip that you record, the greater these errors become as the camera and TC source drift further apart.
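
A toy model of this drift, with clock rates invented purely for illustration, shows how the offset accumulates over a long take:

```python
# Toy model of timecode drift: a camera whose frame clock runs
# fractionally fast relative to the external TC source. Both rates
# below are assumptions for illustration, not measured values.

camera_fps = 25.001   # camera's crystal runs slightly fast (assumed)
source_fps = 25.000   # the TC source's idea of 25 fps

take_seconds = 10 * 60  # a 10 minute take

frames_recorded = round(camera_fps * take_seconds)
tc_values_received = round(source_fps * take_seconds)

# The camera must still give every recorded frame a unique,
# incrementing TC value, so by the end of the take its timecode
# runs ahead of the source by:
offset_frames = frames_recorded - tc_values_received
print(offset_frames)  # 1 frame at these example rates
```

Double the take length (or the clock error) and the offset doubles with it, which is exactly the behaviour described above.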

Before you press record on the camera, the camera’s TC clock will follow the external TC input. But as soon as you press record, every recorded frame MUST have a unique new numerical value 1 greater than the previous frame, regardless of what value is on the external TC input. So the camera’s TC clock will count the frames recorded. And the number of frames recorded is governed by the camera’s sync/frame clock, NOT the external TC.

So in reality the ONLY way to truly synchronise the timecode across multiple cameras or audio devices is to use a sync clock connected to the GENLOCK input of each device.

Connecting an external TC source to a camera’s TC input is likely to result in much closer TC values between the audio recorder and camera(s) than no connection at all. But don’t be surprised if you see small 1 or 2 frame errors at the start of clips, due to the exact timing of when the TC number arrives at the camera relative to when the camera starts to record the first frame, and then possibly much larger errors at the ends of clips – these errors are expected and normal. If you can’t genlock everything with a proper sync signal, a better approach is to use the camera as the TC source and feed the TC from the camera to the audio recorder. Audio recorders don’t record in frames, they just lay the TC values alongside the audio. As an audio recorder doesn’t need to count frames, the TC values will always be in the right place in the audio file to match the camera’s TC frame count.

Notes on Timecode sync with two cameras.

When you want two cameras to have matching timecode you need to synchronise not just the timecode but also the frame rates of both cameras. Remember, timecode is a counter that counts the frames the camera is recording. If one camera is recording more frames than the other, then even with a timecode cable between the two cameras the timecode will drift during long takes. So for perfect timecode sync you must also ensure the frame rates of both cameras are identical, by using Genlock to synchronise them.

Genlock is only going to make a difference if it is kept connected at all times. As soon as you disconnect the Genlock, the cameras will start to drift. If using genlock, first connect the Ref output to the Genlock in. Then, while this is still connected, connect the TC out to the TC in. Both cameras should be set to Free-run timecode, with the TC on the master camera set to the time of day or whatever time you wish both cameras to have. If you are not going to keep the genlock cable connected for the duration of the shoot, then don’t bother with it at all, as just connecting it for a few minutes while you sync the TC will make no difference.

In the case of a Sony camera when the TC out is connected to the TC in of the slave camera, the slave camera will normally display EXT-LK when the timecode signals are locked.

Genlock: Synchronises the precise timing of the frame rates of the cameras. Taking a reference out from one camera and feeding it to the Genlock input of another will cause both cameras to run precisely in sync for as long as the two cameras remain connected together. While connected by genlock, the frame counts of both cameras (and the timecode counts) will remain in sync. As soon as you remove the genlock cable the cameras will start to drift apart. The amount of sync (and timecode) drift will depend on many factors, but with a Sony camera it will most likely be in the order of at least a few seconds a day, sometimes as much as a few minutes.
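
For a rough feel of where “a few seconds a day” comes from, here is the arithmetic for a hypothetical ±30ppm crystal (an assumed consumer-grade tolerance, not a Sony specification):

```python
# Rough arithmetic for free-running clock drift. The 30 ppm figure
# is an assumed crystal tolerance used purely for illustration.

ppm = 30                       # assumed clock error, parts per million
seconds_per_day = 24 * 60 * 60

drift_seconds = seconds_per_day * ppm / 1_000_000
print(round(drift_seconds, 2))  # 2.59 seconds of drift per day

# At 25 fps that is this many frames a day between two
# free-running cameras:
print(round(drift_seconds * 25))  # 65
```

Two cameras drifting in opposite directions could see double that, which is why even “small” oscillator errors matter over a shooting day.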

Timecode: Connecting the TC out of one camera to the TC in of another will cause the timecode in the receiving camera to sync to the nearest possible frame number of the sending camera, provided the receiving camera is set to free run and is in standby. When the TC is disconnected, both cameras’ timecode will continue to count according to the frame rate each camera is running at. If the cameras are genlocked, then as the frame sync and frame count are the same, so too will be the timecode counts. If the cameras are not genlocked, the timecode counts will drift by the same amount as the sync drift.

Timecode sync alone with long takes can be problematic. If the timecodes of two cameras are jam sync’d but there is no genlock, then on long takes timecode drift may become apparent. When you press the record button the timecodes of both cameras will normally be in sync, forced into sync by the timecode signal. But once the cameras are rolling, the timecode counts the actual frames recorded and ignores the timecode input. So if the cameras are not synchronised via genlock, they may not be in true sync – one camera may be running fractionally faster than the other, and as a result on long clips there may be timecode differences as one camera records more frames than the other in the same time period.

More Codec and Gamma Tests.

More Gemini, Samurai, AC-Log and S-Log sample frame grabs. See download box at bottom of post.

When I first wrote this post I thought I had discovered a strange issue where the 444 RGB recordings from the Gemini had more dynamic range than the 422 recordings. I didn’t think this was right, but it was what my NLEs (FCP and CS5.5) were telling me. Anyway, to cut a long story short, what was happening was that when I dropped the Gemini RGB files into the timeline, the levels got mapped to legal levels, i.e. nothing over 100%, while the YCbCr 422 clips went into the timeline at their original levels. The end result was that the 422 clips appeared to clip before the 444 clips. Thanks to Waho, who suggested it might be a conversion issue with the frame grabs, I was able to see that the NLEs (both CS5.5 and FCP behaved the same way) were simply clipping off anything in the 422 clips above 100%, both in the frame grabs and on the monitor output. As the RGB files were all below 100% they were not clipped, so they appeared to have greater dynamic range.
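
As a sketch of the levels behaviour described above (the helper names are my own, and any given NLE’s exact behaviour will differ), this shows why rescaled RGB clips appeared to keep their highlights while the YCbCr clips lost their super-whites:

```python
# Legal vs full range in 10-bit video. 64..940 are the standard
# 10-bit "legal" video levels; the two functions model the two
# behaviours seen in the NLE (rescale vs hard clip).

def full_to_legal(code, bits=10):
    """Scale a full-range code value (0..1023 for 10-bit) into the
    legal range (64..940) - what happened to the RGB clips."""
    full_max = (1 << bits) - 1          # 1023
    legal_min, legal_max = 64, 940
    return round(legal_min + code / full_max * (legal_max - legal_min))

def clip_to_legal(code):
    """Hard clip at 100% (940) - effectively what happened to the
    YCbCr clips, discarding any super-white detail."""
    return min(code, 940)

print(full_to_legal(1023))  # 940 - rescaled, highlight detail preserved
print(clip_to_legal(1000))  # 940 - clipped, highlight detail discarded
```

Both paths end at 940, but only the clipped one has actually thrown information away, which is why the RGB files looked like they had more dynamic range.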

Anyway… below is a new set of frame grabs layered up in a single Photoshop file showing how the various codecs and recorders perform. The levels in these have been normalised at 100% to avoid any dodgy clipping issues. I’ve included F3 Cinegamma 4, plus my AC-Log picture profile, plus Samurai ProRes, Gemini S-Log and F3 internally recorded S-Log of a very extreme contrast situation. Use the link below to download the Photoshop layers file. You’ll need to be a registered user to access the link.

[downloads_box title=”More codec test grabs.”]
Photoshop Layered Frame Grabs v3
[/downloads_box]

The Great CMOS Debate. More on CMOS vs CCD.

Until a couple of years ago CMOS sensors were definitely the underdog. They tended to be very noisy due to electrical noise generated on the chip by the readout circuits and A/D converters, and they lacked sensitivity because the electronics on the face of the chip left less room for the light sensitive parts. Today, on-chip noise reduction has made it possible to produce CMOS sensors with very low noise, and micro lenses and better design have mitigated most of the sensitivity problems. In terms of a static image there is very little difference between a CMOS sensor and a CCD sensor. Dynamic range is remarkably similar (both types of sensor use essentially the same light gathering methods), and in some respects CMOS has the edge as it is less prone to overload issues.

CCDs are very expensive to manufacture, as the way they are read out requires near lossless transfer of minute charges through a thousand or more (for HD) memory cells. The first pixel to be read passes down through over 1000 memory cells; if it were to lose 5% of its charge in each cell, the signal would be seriously reduced by the time it left the chip. The last pixel to be read out only passes through one memory cell, so it would be less degraded, and this variation could ruin an image by making it uneven. Although there is more electronics on a CMOS sensor, as each pixel is read directly a small amount of loss in the transfer is acceptable, because each pixel has a similar amount of loss. So the chips are easier to make: although the design is more complex, it is less demanding, and most semiconductor plants can make CMOS sensors while CCD needs much more specialised production methods.

Yes, CMOS sensors are more prone to motion artifacts, as the sensor is scanned from top to bottom, row by row (a CCD is read in its entirety just about instantaneously). This means that as you pan, at the start of the pan the top of the sensor is being read, and as the pan progresses the scan moves down the chip. This can make things appear to lean over, and it’s known as skew. The severity of the skew depends on the readout speed of the chip. Stills cameras and mobile phone cameras suffer from terrible skew as they typically have very slow readout speeds; the sensors used in an EX have a much higher readout speed, and in most real world situations skew is not an issue. There may be some circumstances where skew can cause problems, but my experience is that these are few and far between.

The other issue is flash banding. Again this is caused by the CMOS scan system. As a flash gun or strobe light is of very short duration compared to the CMOS scan, it can appear that only part of the frame is illuminated by the flash of light. You can reduce the impact of flash banding by shooting at the slowest possible shutter speed (for example shooting 25P or 24P with no shutter), but it is impossible to completely eliminate. When I shoot lightning and thunderstorms I often use a 2 frame shutter; shooting this way I get very few partial bolts of lightning, maybe 1 in 50. If you shoot interlace then you can use the flash band removal tool in Sony’s Clip Browser software to eliminate flash gun problems.

CMOS sensors are becoming much more common in high end cameras. Arri’s new Alexa film replacement camera uses a CMOS sensor rated at 800asa with 13 stops of latitude. Red uses CMOS, as does SI2K. Slumdog Millionaire (SI2K) was the first electronically shot film to get an Oscar for cinematography, so certainly CMOS has come a long way in recent years. CMOS is here to stay, and it will almost certainly make bigger and bigger inroads at higher levels. Read speeds will increase and skew etc. will become less of an issue. IMHO skew is not an issue to lose sleep over with the EXs anyway.

I shoot all sorts, from hurricanes and tornadoes to fast jets and race cars, and I have yet to come across a shot spoilt by skew; generally motion blur tends to mask any skew long before it gets noticeable. If you shoot press conferences or red carpet events where flash guns will be going off, then you may prefer a CCD camera as flash banding is harder to deal with, but the EXs are such good value for the money and bring many other advantages, such as lower power and less weight, that you have to look at the bigger picture and ask what you expect from your budget.
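
The skew effect can be roughly estimated with a little trigonometry; the readout time and field of view used here are assumptions for illustration only, not measured figures for any particular camera:

```python
# Rough estimate of rolling-shutter skew: how far a vertical edge
# "leans" during a horizontal pan. All input figures are assumed
# example values, not specifications.

import math

def skew_angle_degrees(pan_deg_per_s, readout_ms,
                       hfov_deg, width_px, height_px):
    """Approximate lean angle of a vertical edge during a pan."""
    # Pixels the scene shifts while the sensor reads top to bottom:
    px_per_degree = width_px / hfov_deg
    shift_px = pan_deg_per_s * (readout_ms / 1000.0) * px_per_degree
    # That shift is spread over the full frame height:
    return math.degrees(math.atan2(shift_px, height_px))

# Fast 60 deg/s pan, assumed 10 ms readout, 60 deg horizontal FOV,
# 1920x1080 frame:
print(round(skew_angle_degrees(60, 10, 60, 1920, 1080), 1))  # 1.0
```

A one degree lean on a fast pan is easily hidden by motion blur, while a slow-readout phone sensor (50ms or more) would lean five times as far, which matches the experience described above.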

All types of XDCAM


We were filming 9 replica First World War aircraft doing mock dogfights. The weather was near perfect. We had a couple of Sony PDW-700s, two PMW-EX3s and a couple of Sony’s mini-cams, the HXR-MC1P. It was a great day and we came away very pleased with the results, but we also came away with a smug feeling that with the camera kits we now have (me and DoP Dave Crute) we could produce a programme about just about anything at top, no compromise quality.

Ever since I picked up the prototype PDW-700 at IBC two years ago I knew it was going to be a good camera. I am a big believer that when something looks and feels right then it generally is, and the 700 is no exception. The balance is perfect; it sits on your shoulder like it belongs there. The camera controls are all where you would expect and the HDVF20 viewfinder is clear and sharp. One thing I would say is that having used the EX3 with its superb colour viewfinder for some time, it was a bit of a shock to go back to a black and white VF. Dave has a colour VF on his 700 and it is much nicer to use than the mono VF.

We didn’t spend a lot of time setting up the paint settings on the 700s, yet the pictures they produced were superb. Back in the edit suite it was all but impossible to see the difference between the EX3 and the 700; both cameras produce incredible, clear, sharp pictures. It has to be said that the EX3 represents incredible value for money, and for some jobs the EX3 will be the better camera to have, especially when portability is important, such as on my current trip. On the flip side, I do love the disc based workflow where you never have to delete your master clips as you do with the EX’s solid state workflow. The HXR-MC1Ps also produced amazing results and we have some really nice air to air shots of the dogfights.

One shot was spoilt by a bug hitting the lens of the camera as the aircraft took off, but in terms of visual quality these little cameras are way better than anything I have used before. These are exciting times for me. I have the tools available to produce top quality programmes. The whole workflow is smooth and easy. I can shoot, edit and output programmes to be proud of from my office at the bottom of my garden, efficiently and quickly, without fuss or hassle. It’s taken a while to get here, but file based workflows and NLE editing have finally come of age.