I got the chance to check out the Sony PXW-X200 while I was in Iceland for Sony. The video and written review are now online on the Sony web site. If you have any comments or questions please feel free to post them here.
Cameras with bayer CMOS sensors can in certain circumstances suffer from an image artefact that appears as a grid pattern across the image. The actual artefact is normally the result of red and blue pixels that are brighter than they should be, which gives a magenta type flare effect. However, re-scaling an image containing this artefact can result in what looks like a grid pattern, as some pixels may be dropped or added together during the re-scaling, and this makes the artefact show up as a grid superimposed over the image.
Grid type artefact.
The cause of this artefact is most likely off-axis light somehow falling on the sensor. This off-axis light could come from an internal reflection within the camera or the lens. It's known that with the F5/F55 and FS7 cameras, a very strong light source just out of shot, just above or below the image frame, can in some circumstances with some lenses result in this artefact. But this problem can occur with almost any CMOS bayer camera; it's not just a Sony problem.
The cure is actually very simple, use a flag or lens hood to prevent off axis light from entering the lens. This is best practice anyway.
So what’s going on, why does it happen?
When white light falls on a bayer sensor it passes through color filters before hitting the pixels that measure the light level. The color filters sit slightly above the pixels. For white light the amount of light that passes through each color filter is different. I don't know the actual ratios (they will vary from sensor to sensor), but green is the predominant color, with red and blue considerably lower. I've used some made-up values to illustrate what is going on; these are not the true values, but they should illustrate the point.
In the illustration above, when the blue pixel sees 10%, green sees 70% and red 20%, after processing the output would be white. If the light falling on the sensor is on axis, i.e. coming directly, straight through the lens, then everything is fine.
But if the light falls on the sensor off axis, at an oblique angle, then it is possible that the light that passes through the blue filter may fall on the green pixel, or the light from the green filter may fall on the red pixel, and so on. So instead of nice white light, the sensor pixels would appear to be seeing light with an unusually high red and blue component. If you viewed the image pixel for pixel it would have very bright red pixels, bright blue pixels and dark green pixels. Combined together, instead of white you would get pink or blue. This is the kind of pattern that can result in the grid type artefact seen on many CMOS bayer sensors when there are problems with off-axis light.
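To make the arithmetic concrete, here's a tiny sketch using the same kind of made-up transmission values as above (these are illustrative numbers only, not real sensor data):

```python
# Illustrative only: made-up filter transmission values, not real sensor data.
# On-axis: each pixel reads the light that came through its own color filter.
on_axis = {"R": 20, "G": 70, "B": 10}

# Off-axis: oblique light from one filter spills onto a neighbouring pixel,
# e.g. light through the green filter lands on the red pixel, so red and
# blue now read far too high relative to green.
off_axis = {"R": 70, "G": 10, "B": 20}

def normalise(px):
    # Scale the readings so they can be compared as simple ratios.
    total = sum(px.values())
    return {c: round(v / total, 2) for c, v in px.items()}

print(normalise(on_axis))   # green-dominant: decodes to white after processing
print(normalise(off_axis))  # red/blue-dominant: decodes to a pink/magenta cast
```

The absolute numbers don't matter; the point is that the camera's processing expects the green-dominant ratio, so when crosstalk inverts it the decoded pixel comes out strongly magenta.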
This is a very rare problem and only occurs in certain circumstances. But when it does occur it can spoil an otherwise good shot. It happens more with full frame lenses than with lenses designed for Super 35mm or APS-C, and wide angles tend to be the biggest offenders, as their wide field of view (FoV) allows light to enter the optical path at oblique angles. It's more of a problem with DSLR lenses designed for large 4:3 shaped sensors than with lenses designed for the widescreen formats we shoot video in today. All that extra light above and below the desired widescreen frame, if it isn't prevented from entering the lens, has to go somewhere. Unfortunately, once it enters the camera's optical path it can be reflected off things like the very edge of the optical low pass filter, the ND filters or the face of the sensor itself.
The cure is very simple and should be standard practice anyway. Use a sun shade, matte box or other flag to prevent light from out of the frame entering the lens. This will prevent this problem from happening and it will also reduce flare and maximise contrast. Those expensive matte boxes that we all like to dress up our cameras with really can help when used and adjusted correctly.
I have found that adding a simple mask in front of the lens, or using a matte box such as any of the Vocas matte boxes with eyebrows, will eliminate the issue. Many matte boxes can be fitted with a 16:9 or 2.40:1 mask (also known as a matte, hence the name matte box) ahead of the filter trays. It's one of the key reasons why matte boxes were developed.
Note the clamp inside the hood for holding a mask in front of the filters on this Vocas MB216 matte box. Note also how the matte box's aperture is 16:9 rather than square, to help cut out-of-frame light. Arri matte box with matte selection.
You should also try to make sure the size of the matte box you use is appropriate to the FoV of the lenses you are using; an excessively large matte box isn't going to cut as much light as a correctly sized one. I made a number of screw-on masks for my lenses by taking a clear glass or UV filter and adding a couple of strips of black electrical tape to the rear of the filter to mask the top and bottom of the frame. With zoom lenses, if you make the mask so that it can't be seen in the shot at the wide end, it remains effective throughout the entire zoom range.
Many cinema lenses include a mask for 17:9 or a similar wide screen aperture inside the lens.
In the run up to NAB the rumour mill has been working hard and one of the cameras expected to be seen was an update to the Canon C300. Well, here it is, the C300 Mark II.
Externally it's very similar to the original C300 but slightly larger and heavier overall. The key change to the camera body is the ability to change the lens mount between EF, EF Lock and PL mounts. The other headline additions to the Mark II are a new sensor and a new codec that allow the camera to shoot in HD, UHD and 4K. I'm not going to go through all of the details here; for those you can take a look at the Canon web site.
NEW XF-AVC CODEC.
The new XF-AVC codec is very important to this camera and to Canon, as it adds the ability to go beyond the HD limitations of Mpeg2. It's based on Mpeg4/AVC. The C300 Mark II can record UHD and 4K at up to 410Mb/s, 10 bit 422, and in HD can even record 12 bit 444. It is a little unclear from the released information exactly what frame rates this allows, but it appears that in 4K you are limited to 30fps while HD/2K goes to 60fps. The camera records to a pair of CFast 2.0 cards.
There is also a Long GoP version of XF-AVC for use at 2K/HD. This has a bit rate of 50Mb/s at up to 60fps, and in addition there are proxy versions at 25Mb/s and 35Mb/s for recording compact HD files on SD cards within the camera.
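As a rough back-of-envelope check (my own arithmetic, not Canon's published record times, and assuming decimal gigabytes), the 410Mb/s rate gives around 20 minutes on a 64GB card:

```python
# Rough recording-time estimate from codec bitrate and card capacity.
# My own back-of-envelope figures, not the manufacturer's record times.
def record_minutes(card_gb, bitrate_mbps):
    card_megabits = card_gb * 1000 * 8     # decimal GB -> megabits
    return card_megabits / bitrate_mbps / 60

# A 64GB CFast card at the quoted 410Mb/s 4K rate:
print(round(record_minutes(64, 410), 1))   # about 20.8 minutes

# The same card at the 50Mb/s Long GoP HD rate:
print(round(record_minutes(64, 50)))       # roughly 170 minutes
```

Real-world figures will differ slightly because of filesystem overhead and audio, but it shows why high bitrate 4K recording eats cards quickly.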
Unfortunately this does mean yet another codec family to be added to everyone's editing software, but Canon claim that support will be in place by the time the camera is released around September. You know, I really wish ALL the camera manufacturers would get together and use one or two common codecs rather than each manufacturer having their own. Just imagine the chaos if every new type of car used a different type of fuel!
Compared to the F5/F55/FS7, this new codec is very similar to XAVC. It's interesting to note, though, that the C300 Mark II at launch doesn't appear to have any 4K frame rates faster than 30fps. It can record at up to 120fps in its slow motion mode, but this is limited to 2K/HD and uses a center crop of the sensor, so all your lenses become much longer lenses. The plus side is that it will minimise any aliasing artefacts, something that can sometimes be an issue on the FS7, although the FS7 will get a center crop function in a future firmware update. Of course the F5/F55 have the benefit of both full frame and crop modes, as well as the ability to change the optical filter depending on the mode you wish to use. In addition the FS7/F5/F55 can all go up to 180fps internally and 240fps in raw.
The C300 Mark II can also provide a simultaneous raw output to feed to an external raw recorder such as the Convergent Design Odyssey or Atomos Shogun.
NEW SENSOR, NEW LOG.
The sensor in the C300 Mark II is also new. It is a 9.84 megapixel sensor with 8.85 million active pixels (each pixel has two photosites), which Canon claim can provide up to 15 stops of dynamic range, which is very impressive. To record this Canon have a new log curve, Canon Log 2, and the native ISO is 800, with the best dynamic range from 800 ISO and up. According to Canon, above 800 ISO the dynamic range remains a constant 15 stops. It will be interesting to see what actually happens at high gain levels, as most cameras see a drop-off in dynamic range above the native ISO. I'm not sure how you can increase the gain and keep a constant dynamic range within a fixed recording range, as sensor noise is always a limiting factor, so it will be interesting to see this in the real world. From what I can tell from some of the web clips about the camera, Canon Log 2 has a fairly low mid grey point of around 36%, with 8.7 stops below mid grey and 6.3 above. With 8.7 stops crammed into a pretty small range, it will be interesting to see just what can be squeezed out of the shadows. I'm sure it will be good, but the question is just how useable it will be.
As well as a new log curve there is also a wide range of gamuts including 709, 2020, P3 and "film gamut". Some of these gamuts are huge; film gamut appears to extend beyond the visible spectrum according to the chart in the video above. I have some doubts as to whether the camera can actually fill them (in the same way that an FS7 or F5 cannot fill the included SGamut and SGamut3 gamuts; only the F55 can fill them). Again this will be an area to look at closely once the camera is launched, but the options certainly look good.
With this wide range of gammas and gamuts, LUT's will be important, and the C300 Mark II does have them, but I can't find any information beyond the fact that the LUT's can be output over the SDI and HDMI outputs as well as baked in.
One area where Canon do have some very clever technology is in autofocus. The C300 uses a technology called Dual Pixel AF and this is also included and improved upon on the C300 Mark II. You can do some clever things like move the autofocus target area and change the response times of the AF (see the video above for more details). It’s a clever system and I’m sure many shooters will find it helpful when shooting 4K.
All in all the C300 Mark II does look like a very interesting and capable camera; I'll be sure to check it out at NAB. It's obviously going to go up against the cheaper Sony FS7 as well as the F5, and it has some very nice features. I think not being able to shoot above 30fps in 4K is a bit limiting, and with only a single, new codec there is no legacy codec support. Let's hope Canon get good XF-AVC support in place quickly.
I didn't really get on with the ergonomics of the original C300. That's a personal thing; as a traditional video shooter I just don't like the top-heavy layout, but I know many shooters who love it, in particular those from a DSLR background.
It won't be available until September and the price is set to be around $16k or £11k. I'm sure it will be very popular, so get your pre-orders in now. I will have to get hold of one to figure out the best way to use the new Canon Log 2 curve and the camera's LUT system, as I'm sure many people will want some in-depth workflow and exposure help with this camera.
I've been running a lot of workshops recently looking at creating LUT's and scene files for the FS7, F5 and F55. One interesting observation is that when creating a stylised look, the way the footage looks before you grade almost always has a very big impact on how far you are prepared to push your grade.
What do I mean by this? Well, if you start off in your grading suite looking at nicely exposed footage with accurate color and a realistic representation of the original scene, then when you start to push and pull the colors the pictures begin to look a little "wrong", and this might restrict how far you are prepared to push things, as it goes against human nature to make things look wrong.
If, on the other hand, you bring all your footage into the grading suite with a highly stylised look straight from the camera, then because it's already unlike the real world you are probably going to be more inclined to stylise it further; you have never seen the material accurately represent the real world, so you don't notice that it doesn't look "right".
An interesting test is to bring some footage into the grade, apply a very stylised look via a LUT, and then grade it. Try to avoid viewing the footage with a vanilla, true-to-life LUT if you can.
Then bring in the same or similar footage with a vanilla, true-to-life LUT and see how far you are prepared to push the material before you start getting concerned that it no longer looks right. You will probably find that you push the stylised footage further than the normal looking material.
As another example, if you take almost any recent blockbuster movie and analyse the look of the images, you will find that most use a very narrow palette of orange skin tones along with blue/green and teal. Imagine what you would think if your TV news was graded this way; I'm sure most people would think the camera was broken. If a movie were to intercut the stylised "look" images with nicely exposed, naturally colored images, I think the stylised images would be the ones most people found objectionable, as they just wouldn't look right. But when you watch a movie and everything has the same coherent stylised look, it works and it can look really great.
In my workshops, when I introduce some of my film style LUT's for the first time (after looking at normal images), sometimes people really don't like them as they look wrong. The colors are off, it's all a bit blue, it's too contrasty: these are all common comments. But if you show someone a video that uses the same stylised look throughout, then most people like it. So when assessing a look or style, try to view it in the right context and without seeing a "normal" picture first. I find it helps to go and make a coffee between viewing the normal footage and viewing the same material with a stylised look.
Another thing that happens is the longer you view a stylised look the more “normal” it becomes as your brain adapts to the new look.
In fact, while typing this I have the TV on. In the commercial break that's just been on, most of the ads used a natural color palette. Then one ad came on that used a film style palette (orange/teal). It looked really, really odd in the middle of the normal looking ads, yet on its own that ad has a very film like quality to it. It's just that when surrounded by normal looking footage it stands out and as a result looks wrong.
I have some more LUT’s to share in the coming days, so check back soon for some film like LUT’s for the FS7/F5/F55 and A7s.
I had the pleasure of listening to Pablo Garcia Soriano, the resident DIT/colorist at the Sony Digital Motion Picture Center at Pinewood Studios, talk last week about grading modern digital cinema cameras during the WTS event.
The thrust of his talk was about exposure and how getting the exposure right during the shoot makes a huge difference in how much you can grade the footage in post. His main observation was that many people are under exposing the camera and this leads to excessive noise which makes the pictures hard to grade.
There isn't really any way to reduce the noise a video camera produces, because nothing you normally do can change the sensitivity of the sensor or the amount of noise it generates. Sure, noise reduction can mask noise, but it doesn't really get rid of it and it often introduces other artefacts. So the only way to change the all-important signal to noise ratio, if you can't change the noise, is to change the signal.
In a video camera that means opening the aperture and letting in more light. More light means a bigger video signal and as the noise remains more or less constant that means a better signal to noise ratio.
If you are shooting log or raw then you do have a fair amount of leeway with your exposure. You can’t go completely crazy with log, but you can often over expose by a stop or two with no major issues. You know, I really don’t like using the term “over-expose” in these situations. But that’s what you might want to do, to let in up to 2 stops more light than you would normally.
In photography, those shooting raw have long used a technique called exposing to the right (ETTR). The term comes from the use of a histogram to gauge exposure: you expose so that the signal goes as far to the right on the histogram as possible (the right being the "bright" end of the scale). If you really wanted the best possible signal to noise ratio you could use this method for video too. But ETTR means setting your exposure based on your brightest highlights, and as highlights differ from shot to shot, the mid range of your shots will go up and down in exposure depending on how bright the highlights are. This is a nightmare for the colorist, as it's the mid-tones and mid range that matter most; they are what the viewer notices more than anything else. If these are all over the place the colorist has to work very hard to normalise the levels, and it can lead to a lot of variability in the footage. So while ETTR might be the best way to get the very best signal to noise ratio (SNR), you still need to be consistent from shot to shot, so really you need to expose for mid range consistency, but shift that mid range a little brighter to get a better SNR.
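The maths behind this is simple: each extra stop of exposure doubles the signal while the noise floor stays roughly constant, so the SNR improves by about 6dB per stop. A quick sketch (this assumes noise is constant, which is only an approximation for a real sensor):

```python
import math

# Each extra stop of exposure doubles the signal; if the sensor noise is
# (to a first approximation) constant, SNR improves by the same factor.
def snr_gain_db(stops):
    return 20 * math.log10(2 ** stops)

print(round(snr_gain_db(1), 1))  # ~6.0 dB better SNR from one stop brighter
print(round(snr_gain_db(2), 1))  # ~12.0 dB from two stops brighter
```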
Pablo told his audience that just about any modern digital cinema camera will happily tolerate at least 3/4 of a stop of over exposure and he would always prefer footage with very slightly clipped highlights rather than deep shadows lost in the noise. He showed a lovely example of a dark red car that was “correctly” exposed. The deep red body panels of the car were full of noise and this made grading the shot really tough even though it had been exposed by the book.
When I shoot with my F5 or FS7 I always rate them a stop slower than the native ISO of 2000, so I set my EI to 1000 or even 800, and this gives me great results. The F55 I rate at 800 or even 640EI, and the F65 at 400EI.
If you ever get offered a chance to see one of Pablo’s demos at the DMPCE go and have a listen. He’s very good.
What do we really mean when we talk about exposure?
If you come from a film background you will know that exposure is the measure of how much light is allowed to fall on the film. This is controlled by two things, the shutter speed and the aperture of the lens. How you set these is determined by how sensitive the film stock is to light.
But what about in the video world? Well, exposure means exactly the same thing: it's how much light we allow our video sensor to capture, controlled by shutter speed and aperture. The amount of light we need to allow to fall on the sensor is dependent on the sensitivity of the sensor, much like film. But with video there is another variable, and that is the gamma curve… or is it?
This is an area where a lot of video camera operators have trouble, especially when dealing with more exotic gamma curves such as log. The problem comes down to the fact that most video camera operators are taught, or have learnt, to expose their footage at specific video levels. For example, if you're shooting for TV it's quite normal to expose so that white is around 90%, skin tones are around 70% and middle grey is somewhere around the 45% mark. That's the way it's been done for decades; it's certainly how I was taught to expose a video camera.
If you have a video camera with different gamma curves, try a simple test. Set the camera to its standard TV gamma (rec-709 or similar). Expose the shot so that it looks right, then change the gamma curve without changing the aperture or shutter speed. What happens? The pictures get brighter or darker; there are brightness differences between the different gamma curves. This isn't an exposure change, after all you haven't changed the amount of light falling on the sensor, it's a change in the gamma curve and the values at which it records different brightnesses.
An example of this would be setting a camera to Rec-709 and exposing white at 90% then switching to S-log3 (keeping the same ISO for both) and white would drop down to 61%. The exposure hasn’t changed, just the recording levels.
It's really important to understand that different gammas are supposed to record at different levels. Rec-709 has a 6 stop dynamic range (without adding a knee), so between 0% and around 100% we fit 6 stops, with white falling at 85-90%. So if we want to record 14 stops, where do we fit the extra 8 stops that S-Log3 offers when we are already using 0 to 100% for 709's 6 stops? The answer is that we shift the range. By putting the 6 stops that 709 can record between around 15% and 68%, with white falling at 61%, we make room above and below the original 709 range to fit in another 8 stops.
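Sony publishes the S-Log3 transfer function, so these levels can be checked directly. Here's a minimal sketch (my own code based on the published formula, with levels expressed as a percentage of the 10-bit legal range, 64-940):

```python
import math

# Sony's published S-Log3 transfer function: linear scene reflectance in,
# 10-bit code value out. The log segment applies above a small linear toe.
def slog3_code(x):
    if x >= 0.01125:
        return 420.0 + math.log10((x + 0.01) / 0.19) * 261.5
    return x * 76.2102946929 / 0.01125 + 95.0

# Express the code value as a percentage of the legal video range (64-940).
def slog3_ire(x):
    return (slog3_code(x) - 64.0) / (940.0 - 64.0) * 100.0

print(round(slog3_ire(0.18)))  # middle grey (18%) records at ~41
print(round(slog3_ire(0.90)))  # 90% white records at ~61
```

So the 61% white figure isn't an arbitrary choice; it falls straight out of the curve, leaving headroom above for the extra highlight stops.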
So a difference in image brightness when changing gamma curves does not represent a change in exposure, it represents a change in recording range. The only way to really change the exposure is to change the aperture and shutter speed. It’s really, really important to understand this.
Furthermore, your exposure will only ever look visibly correct when the gamma curve of the display device matches the capture gamma curve. So if you shoot log and view on a normal TV or viewfinder, which typically has 709 gamma, the picture will not look right. Not only are the levels different from those we are used to with traditional video, the picture looks wrong too.
As more and more exotic (or at least non-standard) gamma curves become commonplace, it's very important that we learn to think about what exposure really is. It isn't how bright the image is (although that is related to exposure); it is about letting the appropriate amount of light fall on the sensor. How do we determine the correct amount of light? We measure it using a waveform scope, zebras etc., BUT you must also know the correct reference levels for a white or middle grey target in the gamma you are using.
When you want two cameras to have matching timecode you need to synchronise not just the timecode but also the frame rates of both cameras. Remember, timecode is a counter that counts the frames the camera is recording. If one camera is recording more frames than the other, then even with a timecode cable between the cameras the timecode will drift during long takes. So for perfect timecode sync you must also ensure the frame rates of both cameras are identical, by using genlock to synchronise them.
Genlock is only going to make a difference if it is always kept connected. As soon as you disconnect the Genlock the cameras will start to drift. If using genlock first connect the Ref output to the Genlock in. Then while this is still connected connect the TC out to TC in. Both cameras should be set to Free-run timecode with the TC on the master camera set to the time of day or whatever time you wish both cameras to have. If you are not going to keep the genlock cable connected for the duration of the shoot, then don’t bother with it at all, as it will make no difference just connecting it for a few minutes while you sync the TC.
In the case of a Sony camera when the TC out is connected to the TC in of the slave camera, the slave camera will normally display EXT-LK when the timecode signals are locked.
Genlock: Synchronises the precise timing of the frame rates of the cameras. Taking a reference out from one camera and feeding it to the genlock input of another will cause both cameras to run precisely in sync while they are connected together. While connected by genlock, the frame counts of both cameras (and the timecode counts) will remain in sync. As soon as you remove the genlock cable the cameras will start to drift apart. The amount of sync (and timecode) drift will depend on many factors, but with a Sony camera it will most likely be on the order of at least a few seconds a day, sometimes as much as a few minutes.
Timecode: Connecting the TC out of one camera to the TC in of another will cause the timecode in the receiving camera to sync to the nearest possible frame number of the sending camera, when the receiving camera is set to free run and is in standby. When the TC is disconnected, both cameras' timecode will continue to count according to the frame rate each camera is running at. If the cameras are genlocked, then as the frame sync and frame count are the same, so too will be the timecode counts. If the cameras are not genlocked, the timecode counts will drift by the same amount as the sync drift.
Timecode sync alone can be problematic on long takes. If the timecodes of two cameras are jam-sync'd but there is no genlock, then on long takes timecode drift may become apparent. When you press the record button the timecodes of both cameras will normally be in sync, forced into sync by the timecode signal. But once the cameras are rolling, the timecode counts the actual frames recorded and ignores the timecode input. So if the cameras are not synchronised via genlock, one camera may be running fractionally faster than the other, and on long clips there may be timecode differences as one camera records more frames than the other in the same period.
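To get a feel for the scale of the drift, here's a rough calculation; the parts-per-million clock offset is an assumed figure for illustration, not a Sony specification:

```python
# Rough drift estimate between two un-genlocked cameras whose internal
# clocks differ by some parts-per-million offset (an assumed figure,
# not a manufacturer specification).
def drift_seconds(ppm_offset, hours):
    return ppm_offset * 1e-6 * hours * 3600

# Two cameras whose clocks differ by 10 parts per million:
print(drift_seconds(10, 24))  # ~0.86 seconds apart after a day
print(drift_seconds(10, 1))   # 0.036s/hour, roughly one 25fps frame per hour
```

Even a tiny clock offset adds up, which is why jam-sync alone isn't enough for long multi-camera takes.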
IMPORTANT UPDATE: There are two different speeds of S series cards, and you should only use the faster E versions. You can tell which is which by the part number: QDS64E and QDS32E are second generation fast S series cards. Any other S series card is a slower first generation card and should be treated as an H series card.
DO NOT USE THE NEWER 2933x CARDS THESE DO NOT WORK WITH THE FS7.
It was brought to my attention at the BVE show last week that Sony XQD cards were in short supply. This is probably due to the runaway success of the PXW-FS7. More cameras sold means more media required.
So I decided to test out a Lexar XQD card in my PXW-FS7 and in my PMW-F5 (via a QDA-EX1 SxS to XQD adapter).
The good news is that it appears to work just fine, which shouldn't really be a surprise as the Lexar cards are bona fide XQD cards. It's also worth noting that you don't have to use the latest, extremely fast "G" series XQD cards from Sony; you can also use the slower H, N and S series cards. Although personally I would stick to the faster G and S series cards, as these can be used for all modes and frame sizes.
G Series – 400MB/S OK All Frame Rates/Modes.
S Series (QDS64E and QDS32E only) – 180MB/S OK All Frame Rates/Modes.
N Series – 125MB/S OK for XAVC-L all frame sizes/rate. XAVC-I HD up to 30fps plus 24fps 3840×2160. OK for Mpeg2, No S&Q, No ProRes or DNxHD.
H Series – 125MB/S OK for XAVC-L HD up to 60fps. No XAVC-I, No 4k/UHD, No S&Q. No ProRes/DNxHD
Lexar 1100x -168MB/S – Not tested, but should be OK, as Sony H series, maybe N series.
Lexar 1333x – 200MB/S – Tested all modes and frame rates.
Lexar have two classes of card: a slower 1100x (168MB/s) card and a faster 1333x (200MB/s) card. For my tests I chose the faster 1333x card, as it wasn't much more expensive than the slower card and on paper at least matches or betters the Sony S series cards, which can be used for all modes and frame rates. The 1100x card should work at least as well as an H series card, maybe an N series, but I have not tested one and would recommend testing before use.
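A simple way to sanity check whether a card can keep up with a codec is to compare the codec bitrate (in megabits) with the card speed (in megabytes), leaving a margin because quoted card speeds are often peak read figures rather than sustained write speeds. The bitrates and margin below are my own illustrative assumptions, not Sony specifications:

```python
# Sanity check: can a card sustain a given codec bitrate?
# Quoted card speeds are often peak *read* figures; sustained write can be
# lower, so require the card to be 'margin' times faster than the stream.
def has_headroom(card_MBps, codec_Mbps, margin=2.0):
    return card_MBps >= (codec_Mbps / 8) * margin   # megabits -> megabytes

# A high frame rate 4K XAVC-I stream of around 600Mb/s is 75MB/s:
print(has_headroom(180, 600))  # S series class (180MB/s): True
print(has_headroom(125, 600))  # N/H series class (125MB/s): False
```

This is only arithmetic; always verify a card against the camera's compatibility list, since controller behaviour matters as much as raw speed.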
I tested the card across a large range of frame rates and resolutions, going all the way up to UHD 60fps on the FS7 and SStP on the F5, as well as S&Q all the way to 180fps. I had no errors or other major problems. I did notice on the F5 that it takes a little longer for the red light above the record slot to return to green at the end of a recording. While the slot light is red you cannot start a new recording, so be aware that there may be a momentary delay before you can record the next clip.
I purchased the Lexar 1333x card via Amazon in the UK and it cost me £127 inc VAT for a 64GB card, which is quite a bit cheaper than a Sony G series card (currently around £220 inc VAT). So the Lexar cards offer a perfectly good alternative to the Sony cards at a lower cost, with only a slight decrease in off-load speed. As well as the PMW-F5/F55 and PXW-FS7, I see no reason why these cards should not also work in the PMW-Z100, FDR-AX1 or, via the QDA-EX1 adapter, in the PMW-200, PXW-X160, X180 and X200, but I have not tested this.
I can't comment on long term reliability as I've only had the card a couple of days, but I see no reason why the Lexar cards shouldn't last as long as the Sony cards. Looking at a Sony G series card and the Lexar card side by side, the materials appear to be identical: it looks to be exactly the same plastic (even the texturing is the same) and the same brushed metal. The Sony card is marked as made in Japan and the Lexar card as a product of Taiwan.
CARD READERS.
The new Sony G series cards have a different interface to the older Sony XQD cards and the Lexar cards. Currently when you buy a G series card it comes with a USB3 card reader. This reader will ONLY read G series XQD cards.
You can buy USB3 card readers for the other XQD cards for around £25.00. These readers will normally work with any XQD card, including the G series. But when you use a G series card in a non G series reader you get a reduced read speed of up to 168MB/s, which is fast, but not as fast as the dedicated G series reader.
One of the great features of the PXW-FS7 is the ability to change the look of the images when shooting in Custom Mode. You can change many settings including the gamma curve, matrix and sharpness settings. The gamma settings change the contrast, the matrix changes the color, and the detail and aperture settings change how sharp the pictures look.
Once you've made some changes you can save them as a Scene File to an SD card using the File menu.
I am a big fan of Sony's Hypergammas; there are 6 in the FS7. Hypergamma 3 is very good for getting a nicer highlight roll-off when shooting in lower light. Hypergamma 4 is good for brighter scenes, and Hypergammas 7 and 8 really extend the camera's dynamic range and handle high contrast scenes very well, but can look a little flat, so will need some tweaking in post production. In fact all the Hypergammas need a bit of a tweak in post: to get the very best from them you should expose your shots about 1 stop darker, to keep skin tones etc. out of the upper compressed part of the curve, and then bring the brightness back up again in post.
Anyway here are some scene files for you to download and install in the camera.
AC-NEUTRAL-HG3 This is for flatter scenes, it provides a natural look with some yellow/green removed to provide a more neutral look.
AC-NEUTRAL-HG4 This is for brighter or high contrast scenes, it provides a natural look with some yellow/green removed to provide a more neutral look.
AC-FILMLIKE1 A high dynamic range film like look.
AC-FILMLIKE2 A high dynamic range film like look with an increased blue and red response with decreased yellow/green. A little more block-buster like.
AC-VIBRANT-HG3 A vivid matrix with good dynamic range. Good for punchy direct to air images where strong colours are wanted.
AC FS7 Scene Files, set of 5.
If you find these scene files useful, please consider buying me a beer or a coffee. All donations are really appreciated and allow me to spend more time on the blog creating new guides and scene files etc.
I'm doing some work on some scene files for the FS7, and one little thing I found is that the default white clip of the camera is set to 105%. If you use HG3, HG4, HG7 or HG8 this means the camera clips before you reach the near-flat top of the hypergamma curves, resulting in hard clipping of highlights rather than a gentle roll-off.
So if using Hypergammas it’s also a good idea to turn off the white clip for the best results.