When you want two cameras to have matching timecode you need to synchronise not just the timecode but also the frame rates of both cameras. Remember, timecode is a counter that counts the frames the camera is recording. If one camera is recording more frames than the other, then even with a timecode cable between the two cameras the timecode will drift during long takes. So for perfect timecode sync you must also ensure the frame rates of both cameras are identical, by using genlock to synchronise them.
Genlock is only going to make a difference if it is kept connected at all times. As soon as you disconnect the genlock the cameras will start to drift. If using genlock, first connect the Ref output to the Genlock input. Then, while this is still connected, connect the TC out to the TC in. Both cameras should be set to Free-run timecode, with the TC on the master camera set to the time of day or whatever time you wish both cameras to have. If you are not going to keep the genlock cable connected for the duration of the shoot, then don't bother with it at all, as connecting it for just a few minutes while you sync the TC will make no difference.
In the case of a Sony camera, when the TC out is connected to the TC in of the slave camera, the slave camera will normally display EXT-LK once the timecode signals are locked.
Genlock: Synchronises the precise timing of the frame rates of the cameras. Taking a reference out from one camera and feeding it to the Genlock input of another will cause both cameras to run precisely in sync for as long as the two cameras remain connected together. While connected by genlock, the frame counts of both cameras (and the timecode counts) will remain in sync. As soon as you remove the genlock sync cable the cameras will start to drift apart. The amount of sync (and timecode) drift will depend on many factors, but with a Sony camera it will most likely be on the order of at least a few seconds a day, sometimes as much as a few minutes.
Timecode: Connecting the TC out of one camera to the TC in of another will cause the timecode in the receiving camera to sync to the nearest possible frame number of the sending camera, provided the receiving camera is set to free run and is in standby. When the TC is disconnected, both cameras' timecode will continue to count according to the frame rate each camera is actually running at. If the cameras are genlocked, then as the frame sync and frame counts are the same, so too will be the timecode counts. If the cameras are not genlocked, the timecode counts will drift by the same amount as the sync drift.
Timecode sync alone can be problematic on long takes. If the timecodes of two cameras are jam-synced but there is no genlock, then on long takes timecode drift may become apparent. When you press the record button the timecodes of both cameras will normally be in sync, forced into sync by the timecode signal. But once the cameras are rolling, the timecode counts the actual frames recorded and ignores the timecode input. So if the cameras are not synchronised via genlock they may not be in true sync: one camera may be running fractionally faster than the other, and as a result in long clips there may be timecode differences as one camera records more frames than the other in the same time period.
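To put some rough numbers on this drift, here is a small sketch. The clock error figure is an illustrative assumption (a typical quartz oscillator is accurate to a few tens of parts per million), not a measured spec for any particular camera:

```python
# Rough sketch of timecode drift between two free-running, non-genlocked
# cameras. The frame rate and clock error are illustrative assumptions.

NOMINAL_FPS = 25.0
CLOCK_ERROR_PPM = 20  # assumed oscillator error in camera B

actual_fps_b = NOMINAL_FPS * (1 + CLOCK_ERROR_PPM / 1_000_000)

def drift_frames(seconds):
    """Extra frames camera B counts versus camera A over `seconds`."""
    return (actual_fps_b - NOMINAL_FPS) * seconds

one_hour_take = drift_frames(3600)   # ~1.8 frames over a 1 hour take
one_day = drift_frames(86400)        # ~43 frames, i.e. roughly 1.7 s/day
```

Even this modest assumed error produces a couple of frames of offset on an hour-long take and a second or two per day, which is consistent with the drift described above.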
More Gemini, Samurai, AC-Log and S-Log sample frame grabs. See download box at bottom of post.
I had thought, when I first wrote this post, that I had discovered a strange issue where the 444 RGB recordings from the Gemini had more dynamic range than the 422 recordings. I didn't think this was right, but it was what my NLEs (FCP and CS5.5) were telling me. To cut a long story short, what was happening was that when I dropped the Gemini RGB files into the timeline, the levels got mapped to legal levels, i.e. nothing over 100%, while the YCbCr 422 clips went into the timeline at their original levels. The end result was that it appeared the 422 clips were clipping before the 444 clips. Thanks to Waho, who suggested it might be a conversion issue with the frame grabs, I was able to see that both NLEs (CS5.5 and FCP behaved the same way) were simply clipping off anything in the 422 clips above 100%, both in the frame grabs and on the monitor output. As the RGB files were all below 100% they were not clipped, so they appeared to have greater dynamic range.
Anyway, below is a new set of frame grabs layered up in a single Photoshop file showing how the various codecs and recorders perform. The levels in these have been normalised at 100% to avoid any dodgy clipping issues. I've included F3 Cinegamma 4, my AC-Log picture profile, Samurai ProRes, Gemini S-Log and F3 internally recorded S-Log of a very extreme contrast situation. Use the link below to download the Photoshop layers file. You'll need to be a registered user to access the link.
[downloads_box title=”More codec test grabs.”]
Photoshop Layered Frame Grabs v3
Until a couple of years ago CMOS sensors were definitely the underdog. They tended to be very noisy due to electrical noise generated on the chip by the readout circuits and A/D converters, and they lacked sensitivity because the electronics on the face of the chip left less room for the light sensitive parts. Today, on-chip noise reduction has made it possible to produce CMOS sensors with very low noise, while micro lenses and better design have mitigated most of the sensitivity problems. In terms of a static image there is very little difference between a CMOS sensor and a CCD sensor. Dynamic range is remarkably similar (both types of sensor use essentially the same light gathering methods), and in some respects CMOS has the edge as it is less prone to overload issues.

CCDs are very expensive to manufacture because the way they are read out requires near lossless transfer of minute charges through a thousand or more (for HD) memory cells. The first pixel to be read passes down through over 1000 memory cells; if it were to lose 5% of its charge in each cell, the signal would be seriously reduced by the time it left the chip. The last pixel to be read out only passes through one memory cell, so it would be less degraded, and this variation could ruin an image by making it uneven. Although there is more electronics on a CMOS sensor, each pixel is read directly, so a small amount of loss in the transfer is acceptable as every pixel has a similar amount of loss. The chips are therefore easier to make: although the design is more complex, it is less demanding, and most semiconductor plants can make CMOS sensors while CCD needs much more specialised production methods. Yes, CMOS sensors are more prone to motion artifacts, as the sensor is scanned from top to bottom, one pixel at a time (a CCD is read in its entirety just about instantaneously).
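The charge transfer point above is easy to verify with a little arithmetic. The 5% loss and 1000 transfers are the figures from the text; the "near lossless" efficiency figure is an assumption for illustration (real CCDs are typically quoted at five-nines efficiency or better):

```python
# Why CCD charge transfer must be near-lossless: compare 5% loss per
# memory cell against an assumed near-lossless transfer, over the
# ~1000 transfers the first pixel makes on its way off the chip.

transfers = 1000

remaining = (1 - 0.05) ** transfers       # 5% loss per cell
# remaining is ~5e-23: the first pixel's charge effectively vanishes,
# while the last pixel (1 transfer) keeps 95% -- a hopelessly uneven image

good_remaining = 0.99999 ** transfers     # assumed near-lossless transfer
# good_remaining is ~0.99: the image stays even across the frame
```

So even a seemingly small per-cell loss compounds catastrophically, which is why CCD fabrication is so demanding.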
This means that as you pan, at the start of the readout the top of the sensor is being read, and as the pan progresses the scan moves down the chip. This can make things appear to lean over, and it's known as skew. The severity of the skew depends on the readout speed of the chip. Stills cameras and mobile phone cameras suffer from terrible skew as they typically have very slow readout speeds; the sensors used in an EX have a much higher readout speed, and in most real world situations skew is not an issue. There may be some circumstances where skew can cause problems, but my experience is that these are few and far between.

The other issue is flash banding. Again this is caused by the CMOS scan system. As a flash gun or strobe light is of very short duration compared to the CMOS scan, it can appear that only part of the frame is illuminated by the flash of light. You can reduce the impact of flash banding by shooting at the slowest possible shutter speed (for example shooting 25P or 24P with no shutter), but it is impossible to eliminate completely. When I shoot lightning and thunderstorms I often use a 2 frame shutter; shooting this way I get very few partial bolts of lightning, maybe 1 in 50. If you shoot interlace then you can use the flash band removal tool in Sony's Clip Browser software to eliminate flash gun problems.

CMOS sensors are becoming much more common in high end cameras. Arri's new Alexa film replacement camera uses a CMOS sensor rated at 800 ASA with 13 stops of latitude. Red uses CMOS, as does SI2K. Slumdog Millionaire (SI2K) was the first electronically shot film to get an Oscar for cinematography, so certainly CMOS has come a long way in recent years. CMOS is here to stay, and it will almost certainly make bigger and bigger inroads at higher levels. Read speeds will increase and skew etc. will become less of an issue. IMHO skew is not an issue to lose sleep over with the EXs anyway.
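To see why a fast readout keeps skew small, here is a back-of-envelope sketch. Every figure in it is an assumption for illustration, not a published spec for any particular sensor:

```python
import math

# Rough skew estimate for a rolling-shutter sensor during a pan.
# All numbers are illustrative assumptions, not camera specs.

readout_time_s = 1 / 120     # assumed top-to-bottom sensor readout time
pan_speed_px_s = 500         # assumed horizontal image motion from the pan
frame_height_px = 1080

# Horizontal offset between the top and bottom of the frame,
# accumulated while the scan travels down the sensor:
skew_px = pan_speed_px_s * readout_time_s

# Apparent lean of a vertical edge, in degrees:
lean_deg = math.degrees(math.atan(skew_px / frame_height_px))
```

With these assumed numbers the lean works out to a fraction of a degree, which is why a fast-readout sensor shows little visible skew, while a slow stills-camera readout (several times longer) leans proportionally more.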
I shoot all sorts, from hurricanes and tornadoes to fast jets and race cars, and I have yet to come across a shot spoilt by skew; generally motion blur masks any skew long before it becomes noticeable. If you shoot press conferences or red carpet events where flash guns will be going off, then you may prefer a CCD camera, as flash banding is harder to deal with, but the EXs are such good value for money and bring many other advantages, such as lower power consumption and less weight, that you have to look at the bigger picture and ask what you expect from your budget.
All types of XDCAM
We were filming 9 replica First World War aircraft doing mock dogfights. The weather was near perfect. We had a couple of Sony PDW-700s, two PMW-EX3s and a couple of Sony's mini-cams, the HXR-MC1P. It was a great day and we came away very pleased with the results, but we also came away with a smug feeling that with the camera kits we now have (me and DoP Dave Crute) we could produce a programme about just about anything at top, no compromise quality. Ever since I picked up the prototype PDW-700 at IBC two years ago I knew it was going to be a good camera. I am a big believer that when something looks and feels right then it generally is, and the 700 is no exception. The balance is perfect; it sits on your shoulder like it belongs there. The camera controls are all where you would expect and the HDVF20 viewfinder is clear and sharp. One thing I would say is that having used the EX3 with its superb colour viewfinder for some time, it was a bit of a shock to go back to a black and white VF. Dave has a colour VF on his 700 and it is much nicer to use than the mono VF. We didn't spend a lot of time setting up the paint settings on the 700s, yet the pictures they produced were superb. Back in the edit suite it was all but impossible to see the difference between the EX3 and the 700; both cameras produce incredible, clear, sharp pictures. It has to be said that the EX3 represents incredible value for money, and for some jobs the EX3 will be the better camera to have, especially when portability is important, such as on my current trip. On the flip side, I do love the disc based workflow, where you never have to delete your master clips as you do with the EX's solid state workflow. The HXR-MC1Ps also produced amazing results and we have some really nice air-to-air shots of the dogfights.
One shot was spoilt by a bug hitting the lens of the camera as the aircraft took off, but in terms of visual quality these little cameras are way better than anything I have used before. These are exciting times for me. I have the tools available to produce top quality programmes, and the whole workflow is smooth and easy. I can shoot, edit and output programmes to be proud of from my office at the bottom of my garden, efficiently and quickly, without fuss or hassle. It's taken a while to get here, but file based workflows and NLE editing have finally come of age.