The Difference Between Detail Correction and Aperture.

Just to clarify the differences between Detail settings and the Aperture setting.

Detail has a subset of settings including frequency, level, crispening, knee aperture, and black and white limit. These sub-settings all affect the amount and level of detail correction applied.

Aperture is a completely separate type of adjustment.

Detail works on contrast. The higher the contrast in an image, the sharper it appears. A bright sunny day will look sharper than a dull cloudy day because there is more contrast. Detail works by increasing contrast, adding black or white edges to any parts of the image where the contrast changes rapidly, for example the edge of an object silhouetted against the sky. This increases contrast still further, making the image appear sharper. The crispening setting sets the contrast threshold at which detail gets added; level adjusts the amount.
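To make the mechanism concrete, here is a minimal one-dimensional sketch of threshold-gated edge enhancement. The `crispening` and `level` parameters are illustrative stand-ins, not Sony's actual internal processing:

```python
import numpy as np

def detail_correction(luma, level=0.5, crispening=0.05):
    """Sketch of detail correction on a 1-D luma slice (values 0-1).

    An edge signal is derived from local contrast; the crispening
    threshold gates out small variations (noise), and level scales
    how much black/white edging is added back.
    """
    # Edge signal: difference between each sample and the mean of
    # its two neighbours.
    local_mean = np.convolve(luma, [0.5, 0.0, 0.5], mode="same")
    edges = luma - local_mean
    # Crispening: only enhance where local contrast exceeds the threshold.
    edges = np.where(np.abs(edges) > crispening, edges, 0.0)
    # Level: amount of correction added to the original signal.
    return np.clip(luma + level * edges, 0.0, 1.0)

# A hard edge picks up an undershoot (black edge) and overshoot
# (white edge), while flat areas below the threshold are untouched:
ramp = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])
print(detail_correction(ramp))
```

The overshoot and undershoot either side of the transition are exactly the added black and white edges described above.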

Aperture is a simple high frequency boost. As fine details and textures are normally represented by high frequencies within the image, boosting high frequencies can help compensate for the natural fall off in lens and sensor performance at higher frequencies. This helps make textures and other subtle, fine details within the image look clearer.
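As a sketch of the idea (a generic high frequency boost, not Sony's actual aperture circuit), the gain can be made to rise with frequency so that fine texture is lifted while flat areas and broad gradients are left alone:

```python
import numpy as np

def aperture_boost(luma, gain=0.3):
    """Sketch of aperture as a frequency-proportional boost.

    The signal is taken into the frequency domain and a gain that
    rises with frequency is applied, compensating for the natural
    high frequency roll-off of lens and sensor. Illustrative only.
    """
    spectrum = np.fft.rfft(luma)
    freqs = np.fft.rfftfreq(len(luma))          # 0 .. 0.5 cycles/sample
    boost = 1.0 + gain * (freqs / freqs.max())  # gain rises with frequency
    return np.fft.irfft(spectrum * boost, n=len(luma))

# A fine texture (a high frequency sine) comes back with more contrast:
texture = 0.5 + 0.25 * np.sin(2 * np.pi * np.arange(64) * 16 / 64)
boosted = aperture_boost(texture)
```

Note that the DC component (overall brightness) gets no boost at all, which is why aperture changes apparent sharpness without shifting exposure.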

Neither setting will increase the camera's resolution. Both make the image "appear" sharper. Detail correction IMHO looks very unnatural and electronic, while careful use of aperture can help sharpen the image without necessarily looking unnatural.

More on S-Log and Gamma Curves

A lot of the issues with any camera and the dynamic range it can record are due not to limitations of the camera's hardware but to the need to retain compatibility with existing display technologies, in particular the good old fashioned TV set that has been around for half a century. The issue is that in order for all TV owners to see a picture that looks "natural" there has to be a common standard for the signal sent to the TVs, one that will work with all sets from the very oldest to the most recent.

As even the most recent TVs and monitors often struggle to display a contrast range greater than 7 stops, there is no point in attempting to feed them with more. Taking 12 stops and simply squashing it into 7 stops will create a disappointing, flat and dull looking image. So for productions where extensive grading is not taking place, it is not desirable to record information beyond that which the existing broadcast system can handle. This is why the vast majority of modern camcorders with the knee off and using a standard gamma curve all exhibit very similar dynamic ranges (7 to 8 stops typically): the limitation is generally not that of the sensor, but that of the gamma curves used in broadcast television.

By adding a bit of highlight compression through a camera's knee circuit we can stretch out the dynamic range a little. Our visual system is most acute to inaccuracies in the mid ranges of an image, where faces, people and natural subjects normally appear, so we don't tend to notice strong compression occurring in highlights such as the sky or reflections. A well designed knee circuit can help gain an extra 2 or 3 stops by compressing the hell out of the highlights, but as most of us are probably aware it can create its own issues, with the near complete loss of real detail in clouds and the sky as well as color saturation issues on skin highlights. This is gamma curve compression rearing its ugly head.

Moving on, we come to cinegammas, hypergammas and other similar extended range gammas. One of the issues with a traditional aggressive knee circuit is that it is either on or off, compressing or not compressing; there is no middle ground, and this makes grading problematic as it is all but impossible to extract any meaningful data from very highly compressed highlights. Cinegammas and the like address this by slowly increasing the amount of compression used as image brightness increases.
In addition the gamma curve compression starts much earlier, long before you get to what would traditionally be regarded as "highlights". This slow and gentle onset of compression grades in a more pleasing manner than a conventional knee. If you don't grade, the added mid-to-highlight compression results in a picture that looks a little flat and lacks "punch", but it is not overly objectionable to view.

There is however a limit to just how much data you can cram into a compressed codec or recording system. Cinegammas and hypergammas are tailored to give optimum performance with existing 8 bit and 10 bit high compression systems and workflows, so the design engineers chose to record a range of only about 11 stops, as trying to extract more than this from systems essentially designed to record only 7 to 8 stops will lead to visible compression artefacts. Technologies have continued to advance, and now it's remarkably easy (compared to just a couple of years ago) to record 10 bit 4:2:2 or 4:4:4 data without compression or with only minimal compression. By eliminating, or at least significantly reducing, the compression artefacts, it's now possible to extract more meaningful data from a compressed gamma curve than was possible previously.

S-Log is in effect nothing more than a heavily modified gamma curve, taking cinegammas and hypergammas to the next level. S-Log needs 10 bit recording to work because the curve compression starts much lower in the curve, so when grading, those crucial skin tones and natural objects will need to be un-compressed to look natural, and 8 bits of data just would not give enough range. As the image brightness increases, the amount of gamma curve compression increases logarithmically. If you look at the data being recorded, this means that the majority of the 10 bit data is allocated to the shadow areas, then the mid tones, with less and less data being used to record the highlights.
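The data allocation point can be illustrated with a generic pure-log curve over six stops (not Sony's actual S-Log formula). Compared with recording linear light directly, a log curve spreads the 10 bit code values far more evenly across the stops:

```python
import math

STOPS = 6  # stops below full scale to tabulate

def linear_code(x):
    """10 bit code value if scene-linear light is recorded directly."""
    return round(1023 * x)

def log_code(x):
    """10 bit code value for a pure log curve over a 6 stop range
    (illustrative only, not Sony's actual S-Log formula)."""
    return round(1023 * (math.log2(x) + STOPS) / STOPS) if x > 0 else 0

# How many code values each successive stop below full scale receives:
prev_lin, prev_log = 1023, 1023
for stop in range(1, STOPS + 1):
    x = 2.0 ** -stop
    lin, lg = linear_code(x), log_code(x)
    print(f"stop -{stop}: linear spends {prev_lin - lin:3d} codes, log spends {prev_log - lg:3d}")
    prev_lin, prev_log = lin, lg
```

Linear recording spends half of all its code values on the brightest stop alone, while the log curve hands roughly equal helpings to every stop, which is why the shadows and mid tones end up with the lion's share of the data by comparison.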
Most modern cameras, not just the XDCAMs, simply ignore highlight information beyond what can be recorded; this results in the image being clipped at a given point depending on the gamma curve being used. Interestingly, using negative gain on a camcorder can act as a low end clip, as very small brightness changes will be reduced by the negative gain, possibly to the point where they are no longer visible. This normally results in a reduction in dynamic range (as well as noise). I suspect this is why the F3 has less noise using standard gammas: the sensor has excess dynamic range for these curves and good sensitivity, so Sony can afford to set the arbitrary 0dB point in negative space without impacting the recorded DR, while gaining the benefit of a lower noise floor. For S-Log however it's possible to record a greater dynamic range, so 0dB is returned to true zero and as a result the noise floor increases a little.
LUTs are just a reverse gamma curve applied to the S-Log curve to restore it to one that approximates a standard gamma, normally REC-709. They are there for convenience, to provide an approximation of what the finished image might look like. However applying an off the shelf LUT will impact the dynamic range, as an assumption has to be made as to which parts of the image to keep and which to discard; we are back to squeezing 12 stops into 7 stops. As every project, possibly every shot, will have differing requirements, you would need an infinite number of LUTs to be able to simply hit an "add LUT" button and restore your footage to something sensible. Instead it is more usual for the colorist or grader to generate their own curves to apply to the footage. Most NLEs already have the filters to do this; it's simply a case of using a curves filter or gamma correction to generate your own curves that can be applied to your clips in lieu of a LUT.
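As a rough sketch of what such a viewing curve amounts to (built from a generic log curve and the standard Rec.709 transfer function, not Sony's actual S-Log maths):

```python
import math

def log_decode(y, a=0.25):
    """Inverse of a generic log curve (illustrative, not Sony's S-Log):
    maps a 0-1 log-encoded signal back to scene-linear light."""
    return a * math.expm1(y * math.log1p(1.0 / a))

def rec709_oetf(x):
    """Standard Rec.709 transfer function for display-referred video."""
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def viewing_lut(code):
    """Log code value (0-1) -> approximate Rec.709 display value."""
    # The assumption a real grade would make shot by shot: everything
    # the log curve encoded above 100% scene white is simply discarded.
    linear = min(log_decode(code), 1.0)
    return rec709_oetf(linear)
```

That hard `min()` clamp is exactly the "which parts to keep and which to discard" decision described above, which is why a one-size-fits-all LUT costs you dynamic range that a hand-built grading curve can preserve.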

Hello from NAB land.

Hi all. I've been a little "off-the-air" the last few days while shooting a bunch of video blogs for Sony from NAB. Now that's out of the way I'm going to get a bit of time to check out the show. If you want to learn about the basics of shooting in 3D, why not drop by the Manfrotto booth today at 4pm, where I will be giving a brief intro talk.

Some of the things that I have seen so far that have caught my eye are of course the sexy Sony FS100 35mm camcorder and the teeny tiny NX3D1E 3D camcorder, again from Sony. Looking at XDCAM HD422, there is a Sony jukebox machine that can store and retrieve large numbers of XDCAM discs, and the new PDW-U2, which is much like the U1 (which will still be sold) but accepts the new 4 layer write-once discs plus the new 100GB 3 layer discs. In addition the read speed is about 2.4x faster than the U1, so a big performance boost there.

There are a couple of new PL mount lenses for the F3: a 1.5x ultra wide zoom (11-16mm I think) plus a prototype of the monster 18-252mm servo zoom with auto iris. Make no mistake, this is a BIG lens. I also guess it won't come cheap, but it would be an amazing lens to have. Talking of the F3, most of the F3's here have a beta of the S-Log firmware. Sony have a working pre-production XDCAM 3D camcorder (PMW-TD300); like the one shown at IBC, this looks like a twin lens PMW-350. I'll try and grab some photos today.

Of course the really big Sony news is the F65, an 8K camcorder recording onto SR Memory. It records 16 bit raw, from which you can derive 4K, 2K and HD images plus "higher resolution" images, all at up to 120fps. When you watch the 4K film shot with the F65 in the theatre on the Sony booth, at first you wonder what the fuss is about; the pictures look gorgeous but they don't leap out as being 4K. It's not until you start looking deeper into the image that you really start to see the incredibly subtle detail and textures captured in the image. Very nice indeed.

Of course it's all very well having all these wonderful cameras, but you also need a way to record the material. Sony have a range of SR Master recorders: the R1, R3 and R4. I'm not completely clued up on the differences between them, but they are capable of recording using the HDCAM SR codecs on to solid state memory sticks about the size of a small mobile phone. While these are excellent devices, they are a little overshadowed (for me at least) by the Convergent Design Gemini, which can record 4:2:2 and 4:4:4 uncompressed on to low cost SSDs. There is also the new Blackmagic Design recorder with a target price of just $345 USD for an uncompressed recorder. Wow, how times are changing. More tomorrow, hopefully with pictures!

Two PMW-F3’s used on 3D Cinema Commercial.

Alister Managing the F3's on a Hurricane Rig.

I got back late last night from a big budget cinema commercial shoot where I was working with a pair of F3's on a Hurricane Rig. All went very well and the DoP (Denzil Armour-Brown) was impressed by the F3's. The overall light weight of the complete system really helped us when moving from position to position. We used a ton of Chapman grip equipment including sliders and dollies. I was responsible for the 3D rig, camera setup and alignment as well as assisting the DoP.
We shot using 2 sets of brand new Zeiss Ultras, mainly at 32mm and 50mm (very nice), as well as some older and very heavy Arri 100mm macro primes. Our only small issue was that the follow focus motors were shifting the camera very slightly due to flex in the tripod base plate on the F3. You probably wouldn't notice this at most normal focal lengths in 2D, but in 3D small shifts are very obvious. So a stiffener plate for the base will be needed to prevent this (as well as general flex), or a pair of 15mm rails mounted to the top holes on the F3 body.

We were recording to a Nano3D (2 x Nano Flashes) as well as to a Mac workstation recording ProRes in the video village. The video village allowed for instant playback on 50″ 3D monitors in a blacked out tent for review and tech assessment.

It was an outdoor shoot in great weather. As well as the F3's there was a second rig with a pair of Phantom HD Gold 35mm high speed cameras shooting 3D at 1000fps. So even though the sun was shining brightly, many shots were done with 2 or 3 18kW lamps!!

Towards the end of the day the F3 rig was tasked with shooting some blue screen and other effects shots, and in effect I became 2nd unit DoP. The effects shots will be matted in to the finished commercial.
I’m under NDA so can’t talk about the subject just yet, or post any pictures that show the subject, but once the ad is released (2 weeks time!!) I’ll be able to post some grabs and more photos.

Alan Roberts F3 assessment. Confusing Reading.

Alan Roberts' F3 assessment is now online: http://thebrownings.name/WHP034/pdf/WHP034-ADD68_Sony_PMW-F3.pdf

In the report Alan observes the aliasing that I have seen from the camera, in particular the high frequency moire, so no surprise there. But he also measures the noise at -48.5dB. Now I don't have the ability to measure noise as Alan does, and I normally respect his results, but this noise figure does not make sense, nor does his comment that the camera has similar sensitivity to most 3 chip cameras. To my eyes, the F3 is more sensitive than any 3 chip camera I've used, and it's a lot less noisy. The implication of the test is that the F3 is noisier than the PMW-350. Well, that's not what my eyes tell me.

Take a look at the noise graph Alan has prepared. The hump in the noise figure curves at 0dB also appears to be dismissed as insignificant, yet it means a greater than 4dB difference between what the curve implies the noise figure should be and the measured noise figure. It really doesn't seem to fit and is very strange. Video amplifiers and processing are normally pretty linear, with gain giving a consistent increase or decrease in noise that follows the gain curve. If you read off the noise figures from the graph, the F3 appears to have less noise at +6dB gain (-49.5dB) than at 0dB (-48.5dB). So if we are to believe Alan's test then we should be using +6dB gain or -3dB gain (-53.5dB), but not 0dB. Sorry, but that just does not add up, and to dismiss the 0dB noise bump as "not significant" is something I don't really understand, as to me it is significant. Either there is something very strange going on with the F3 at 0dB, or there is something up with the test. I suspect the latter; perhaps the individual camera had some odd settings, as my F3 is quieter (visually) at 0dB than at +6dB. I would need to check it out on a scope back at home to verify this.

There are also assumptions made about the pixel size and sensor pixel count that are quite wrong. Alan suggests the sensor is a 12 megapixel sensor. This suggestion is based on Alan's opinion that the F3 has similar sensitivity to a 2/3″ 3 chip camera, so the pixel size must be similar and the bigger sensor must therefore have 12MP, yet Sony have published that it is 3.3 megapixels (the same sensor as the FS100). 3.3MP equates to roughly 2422 x 1362 pixels; for a bayer sensor this is a little under the optimum for 1920 x 1080 (IMHO) and may explain the aliasing, as Sony are probably trying to squeeze every last bit of resolution out of the sensor.
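The arithmetic is easy to check: assuming the photosites are laid out in a 16:9 grid, 3.3 million of them work out to roughly 2422 x 1362.

```python
import math

photosites = 3.3e6                       # Sony's published figure
width = math.sqrt(photosites * 16 / 9)   # assuming a 16:9 photosite grid
height = width * 9 / 16
print(round(width), "x", round(height))  # roughly 2422 x 1362
```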

Alan's assessment of his zone plate results also concludes that the R, G and B resolutions are the same and that the sensor resolution must be much higher than 2200 x 1240. Well, I would not call 2422 x 1362 "much" higher, and if this is a bayer sensor (neither admitted nor denied by Sony) then the G resolution should be higher than the R and B. So could this be a case of conventional conclusions about an unconventional sensor, or have Sony managed to completely wrong-foot Mr Roberts?

An interesting finding was that detail at zero, frequency at +99 and aperture at +20 give the least aliasing. This is quite different from my own findings and will need further exploration.

The F3 assessment is also missing the customary round up from Alan in which he suggests whether the camera is suitable for HD broadcast or not. I'm really glad I got my F3 before reading the report, as I have seen with my own eyes the beautiful clean images the F3 produces. I strongly recommend anyone considering the F3, but put off by this report, to take a look at the pictures for themselves before making any decisions.

Getting good SD from an HD camera.

This is a question that I get asked time and time again. The main problem is that SD pictures shot with an HD camera look soft. So why is this, and what can be done about it?

Well, there are several issues to look at. First there is camera optimisation; sadly what works for HD doesn't always work well for SD. Secondly there is the downconversion process. If you're shooting HD and simply outputting SD using the camera's built in downconverter then you really don't have many options, but if you're using a software downconverter you may be able to improve the results you're getting.

Starting with the camera, what can you do? Well, first off let me say that a camera optimised for HD will always be a compromise when it comes to SD, and as the native resolution of HD cameras increases, the problem of getting good looking SD actually gets worse. The problem is that a good high resolution camera will normally have only a very small amount of artificial sharpening via the detail or aperture circuits, because in HD it will look nice and sharp anyway. SD cameras, and the SD TV system with its inherently low resolution and soft pictures, have always relied very heavily on detail enhancement to try to make the pictures appear sharper than they really are. When you take the minimal additional sharpening of an HD camera and downconvert it to SD, it all but disappears; the end result is a soft looking picture.

There is no easy fix for this: you can either add additional, extra thick detail correction edges to the HD pictures, which risks spoiling the HD image, or you can add additional detail correction in post production. On a Sony camera the thickness of the detail correction edges is controlled using the "frequency" setting. Setting this to a negative number will thicken up the detail edges; very often you need to go all the way to -99 to get an appreciable difference. As an alternative you can add extra sharpening or detail correction in post, after the downconversion. This is the way I would go whenever possible, as I don't want to compromise my HD pictures for the sake of the SD images.
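A post sharpening pass of this kind is typically just an unsharp mask. A minimal sketch on a one-dimensional luma slice might look like this (the parameter values are illustrative):

```python
import numpy as np

def unsharp_mask(luma, amount=0.6, radius=2):
    """Post-production sharpening sketch: add back the difference
    between the image and a blurred copy (a standard unsharp mask)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # simple box blur
    blurred = np.convolve(luma, kernel, mode="same")
    return np.clip(luma + amount * (luma - blurred), 0.0, 1.0)
```

Applied after the downconversion, this restores apparent sharpness to the SD image without compromising the HD master.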

The second issue is the quality of the downconversion. A simple rescale from HD to SD rarely works well as it can create a lot of aliasing. Aliasing is the result of taking too much detail and trying to record or represent it with too few pixels. See this article for more on aliasing. Imagine a diagonal line running through your image.

Diagonal Line Sampled in HD

If you sample it at a high resolution, with your HD camera then the line looks reasonably good as you can see in the diagram to the left.

Simple SD Downconversion

If you then take that HD captured edge and simply scale it down to SD, you quarter the number of samples and the end result is a jagged, stepped line. Not pretty. In addition, if the line moves through the image it will flicker and “buzz”. This is far from ideal.

Same Line, Blurred Before conversion to SD

A better approach is to blur the HD image before downconverting, using a 4 pixel (or similar) blur, or to use a downconversion programme that includes smoothing during the conversion. The final image shows the kind of improvement that can be gained by softening the image before downconversion. The blur around the edges of the line softens it and makes it appear less jagged. This will result in a much more pleasing SD image. Next you add in some detail correction to restore the apparent sharpness of the image and voila! A decent looking SD image from an HD source. In Compressor, to get a good downconversion you need to activate the advanced scale tools and use the "better" or "best" scaling options.
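The whole blur-then-decimate idea can be sketched in a few lines. This uses a crude box blur and simple decimation for illustration; a real downconverter would use better filters:

```python
import numpy as np

def hd_to_sd(hd_rows, factor=2, blur=True):
    """Downscale a 1-D luma slice by `factor`, optionally pre-blurring.

    Blurring first (a crude low-pass) removes detail finer than the SD
    raster can represent, so the decimated result aliases far less.
    """
    x = np.asarray(hd_rows, dtype=float)
    if blur:
        kernel = np.ones(factor * 2) / (factor * 2)  # simple box blur
        x = np.convolve(x, kernel, mode="same")
    return x[::factor]  # decimate to SD
```

Run on a fine alternating pattern, the unblurred version collapses to a false flat tone (the detail aliases away completely), while the pre-blurred version correctly lands on the average grey.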