Category Archives: Shooting Tips

ProRes Raw Webinar – How to use ProRes Raw with the FS5 and FS7

Last week I presented a webinar in conjunction with Visual Impact on how to shoot ProRes Raw with Sony’s FS5 and FS7. The webinar was recorded, so if you missed it you can now watch it online. It’s almost 2 hours long and contains what I hope is a lot of useful information, including what you need, how to expose, and how to get the footage into FCP-X. I tried to structure the FCP-X part of the presentation in such a way that those who don’t normally use FCP-X (like me) will be able to get started quickly and understand what is going on under the hood.

Since the webinar it has been brought to my attention by Felipe Baez (thanks Felipe) that it is possible to add a LUT after the color panel and grading tools by adding the “Custom LUT” effect to your clip. To do this, set the raw input conversion to S-Log3, then add your color correction, then add the Custom LUT effect.

A big thank you to Visual Impact for making this possible, do check them out!

Here is the link to the video of the webinar:

 

When is a viewfinder a viewfinder and when is it just a small monitor?

It’s interesting to see how the term viewfinder is now used for small monitors rather than monocular viewfinders or shrouded dedicated viewfinders. Unless a monitor screen is properly shielded from external light, you can only guess at the contrast and brightness of the images feeding it in anything other than a dim or dark room.

This is one of the key reasons why, for decades, viewfinders have been housed in fully shrouded hoods, snoots or loupes. One of the key roles of a viewfinder is to show how your recordings will look so you can assess exposure, so if it doesn’t have a full shroud then in my opinion it isn’t a viewfinder; it is simply a monitor, and exactly what your images will look like is anyone’s guess depending on the ambient light conditions. Furthermore, even a young person with perfect eyesight can’t focus properly at less than 6″/150mm, and that distance increases with age or in low ambient light. So most people will need a loupe or magnifying lens to make full use of a small HD LCD for critical focus. To be able to see the sharpness of an image you need contrast, so an unshaded LCD screen on a sunny day will be next to useless for focus – perhaps this is why I see so many out-of-focus exterior shots on TV these days?

To be truly useful a viewfinder needs to be viewed in a controlled and dark environment. That’s why for decades it has been normal to use a monocular viewfinder: the eyepiece creates a tightly controlled, nice and dark viewing environment. This isn’t always convenient, and I will often flip up or remove the eyepiece for certain types of shot. But if you don’t have the option to fully shade the viewfinder, how do you work with it on a sunny day? On a camera like the FS5 I often find myself using the small, enclosed viewfinder on the back of the camera when the sun is bright. These tiny built-in viewfinders are not ideal, but I’d rather have that than a totally washed out LCD, or shooting with a jacket over my head as my only option.

So next time you are looking at upgrading the monitor or viewfinder on your camera, do try out a good 3rd party monocular viewfinder such as the Zacuto Gratical or Zacuto Eye, or perhaps consider a Small HD monitor with the Side Finder option, or an add-on monocular for the existing LCD panel. Without that all-important shading and magnification it isn’t really a viewfinder, it’s just a small LCD monitor, and in anything other than a very dim environment it’s always going to be tough to judge focus and exposure.

Beware the LC709 LUT double exposure offset.

The use of the LC709 Type A LUT in Sony’s CineAlta cameras such as the PXW-FS7 or PMW-F55 is very common. This LUT is popular because it was designed to mimic the Arri cameras in their Rec-709 mode. But before rushing out to use this LUT, or any of the other LUTs in the LC709 series, there are some things to consider.

The Arri cameras are rarely used in Rec-709 mode for anything other than quick turnaround TV. You certainly wouldn’t normally record this way for feature or drama productions, and it isn’t the “Arri look”. The Arri look normally comes as a result of shooting with Arri’s LogC and then grading that to get the look you want. The Rec-709 mode exists to provide a viewable image on set. It has more contrast than LogC and uses Rec-709 color primaries so the colors look right, but it isn’t Rec-709: it squeezes almost all of the camera’s capture range into something that can be viewed on a 709 monitor, so it looks quite flat.

Because a very large dynamic range is being squeezed into a range suitable for viewing on a regular, standard dynamic range monitor, the white level is much lower than with regular Rec-709. In fact white (such as a white piece of paper) should be exposed at around 70%, and skin tones at around 55-60%.

If you are shooting S-Log on a Sony camera and using this LUT to monitor, and you expose at conventional levels – white at 85-90%, skin tones at 65-70% – you will be offsetting your exposure by around +1.5 stops. On its own this isn’t typically going to be a problem. In fact I often come across people who tell me that they always shoot at the camera’s native EI using this LUT and get great, low noise pictures. When I dig a little deeper I often find that they are exposing white at 85% via the LC709 LUT, so in reality they are actually shooting the equivalent of +1 to +1.5 stops over the base level.

Where you can really run into problems is when you have already added an exposure offset. Perhaps you are shooting on an FS7, where the native ISO is 2000, using an EI of 800 – a little over a +1 stop exposure offset. If you then use one of the LC709 LUTs and expose the LUT so white is at 90% and skin tones at 70%, you are adding another +1.5 stops, so your total exposure offset is approaching 3 stops. An offset this large is rarely necessary, can be tricky to deal with in post, and will also eat into your highlight range.
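To see how quickly these offsets stack up, here is a quick back-of-the-envelope check in Python. It is just a sketch using the figures from the example above; the +1.5 stop LUT figure is the approximate value quoted earlier, not a measured constant.

```python
import math

def ei_offset_stops(native_iso: float, ei: float) -> float:
    """Exposure offset in stops from rating the camera below its native ISO."""
    return math.log2(native_iso / ei)

ei_stops = ei_offset_stops(2000, 800)   # FS7 at EI 800: ~1.32 stops
lut_stops = 1.5                         # LC709 white exposed at 90% instead of ~70%
print(f"Total offset: {ei_stops + lut_stops:.1f} stops")   # ~2.8 - approaching 3
```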

So just be aware that different LUTs require different white and grey levels, and make sure you are exposing each LUT at its correct level so that you are not adding an unintended offset on top of your desired exposure.

How to get the best White Balance (Push Auto WB).

Getting a good white balance is critical to getting a great image, especially if you are not going to be color correcting or grading your footage. When shooting traditionally – i.e. not with log or raw – a common way to set the camera’s white balance is to use the one-push auto white balance combined with a white target. You point the camera at the white target, then press the WB button (normally found at the front of the camera just under the lens).
The white target needs to occupy a good portion of the shot, but it doesn’t have to fill it completely. It can be a fairly small area; 30% is normally enough. The key is to make sure that the white or middle grey target is obvious enough, and at the right brightness, that the camera uses the right part of the image for the white balance. For example, you could have a white card filling 50% of the screen with a large white car filling the rest of the shot. The camera could be confused by the car if the car’s brightness is closer to what the camera is looking for than the white/grey card’s.
The way it normally works is that the camera looks for a uniformly bright part of the image with very little saturation (color), somewhere between 45 and 90 IRE, and assumes this area to be the white balance target. It then adjusts the gain of the red and blue channels so that the saturation in that part of the image becomes zero, and as a result there is no color cast over the white or grey target.
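As a rough illustration of that last step, here is a minimal sketch in Python of the principle – not Sony’s actual algorithm – showing how scaling R and B to match G drives the target’s saturation to zero:

```python
import numpy as np

def one_push_wb_gains(patch: np.ndarray) -> tuple[float, float]:
    """Given an RGB patch (H x W x 3) assumed to be the white/grey target,
    return (r_gain, b_gain) that neutralise its average color.
    Green is left alone and acts as the reference channel."""
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    return g / r, g / b

# A grey card under warm light: red reads a little high, blue a little low.
patch = np.full((32, 32, 3), [0.52, 0.50, 0.46])
r_gain, b_gain = one_push_wb_gains(patch)
neutral = patch * [r_gain, 1.0, b_gain]   # R == G == B over the card: zero saturation
```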
 
If you fill the frame with your white/grey card then there can be no confusion. But that isn’t always possible or practical, as the card needs to be in the scene and under the light you are balancing for, rather than just directly in front of the lens. The larger your white or grey card, the more likely you are to get a successful and accurate white balance – provided it’s correctly exposed and in the right place.
 
The white target also needs to be in reasonable focus. If it is out of focus there will be a blurred edge, with color from any background objects blending into the blur. Because the camera uses an average value for the whole of the white area, any color bleed at the edges due to defocussing may result in a small color offset.
 
You can use a white card or a grey card (white paper at a push, but most paper is bleached slightly blue to make it look whiter to our eyes, and this will offset the white balance). The best white balance is normally achieved with a good quality photographic grey card. As the grey card sits lower down in the brightness range, any color cast on it will be more saturated, so when the camera offsets the R and B gain to eliminate that color the result will be more accurate.
 
The shiny white plastic cards often sold as white balance cards are often poor choices. They are too bright and too glossy. Reflections off a glossy white card will seriously degrade the camera’s ability to perform an accurate white balance, as the highlights will sit in the camera’s knee or roll-off and so have much reduced saturation, and reduced R and B gain, making it harder for the camera to get a good white balance. In addition, the plastics used tend to yellow with age, so if you do use a plastic white balance card make sure it isn’t past its best.
Don’t try to white balance off clouds or white cars; they tend to introduce offsets into the white balance.
 
Don’t read too much into the Kelvin reading the camera might give. These values are only a guide; different lenses and many other factors will introduce inaccuracies. It is not at all unusual for two identical cameras to give two different Kelvin values even though both are perfectly white balance matched. If you are not sure that your white balance is correct, repeat the process: if you keep getting the same Kelvin number it’s likely you are doing it correctly.

Use the camera’s media check to help ensure you don’t get file problems.

All of the Sony cameras that use SxS or XQD cards include a media check and media restore function designed to detect problems with your recording media or the files stored on it.
However, the media check is normally only performed when you insert a card into the camera. It is not done when you eject a card, as the camera never knows when you are about to do that.
So my advice is: when you want to remove the card to offload your footage, first ensure the light next to the card is green, which means it should be safe to remove. Pop the card out as you would do normally, but then re-insert it and wait for the light to go from red back to green. Check the LCD/VF for any messages; if there are none, take the card out and do your offload as normal.
 
Why? Every time you put an XQD or SxS card into the camera, the card and the files stored on it are checked for any signs of issues. If there is a problem the camera will give you a “Restore Media” warning. If you see this warning, always select OK and allow the camera to repair whatever the problem is. If you don’t restore the media and then make a copy from the card, the copy will also be corrupt and the files may be inaccessible.
Once the files have been copied from the card it is no longer possible to restore them: if there is a problem with the files on the card, the restore can only be done by the camera, before offload. So this simple check, which takes just a few seconds, can save a whole world of hurt. I wish there was a media check button you could press to force the check, but there isn’t. This method, however, works.
It’s also worth knowing that Catalyst Browse and the old Media Browser software perform a data integrity check if you directly attach an SxS or XQD card to the computer and access it from the software. If a problem is found you will get a message telling you to return the media to the camera and perform a media restore. But if this is some time after the shoot and you don’t have the camera to hand, that can be impossible. This is why I like to check my media by re-inserting it into the camera, so that it gets checked for problems before the end of the shoot.

What shutter speed to use if shooting 50p or 60p for 50i/60i conversion.

An interesting question got raised on Facebook today.

What shutter speed should I use if I am shooting at 50p so that my client can later convert the 50p to 50i? Of course this would also apply to shooting at 60p for 60i conversion.

Let’s first of all make sure that we all understand what’s being asked for here: shooting at 50(60) progressive frames per second so that the footage can later be converted to 25(30) frames per second interlaced – which has 50(60) fields.

If we just consider normal 50p or 60p shooting, the shutter speed you choose depends on many factors, including what you are shooting, how much light you have, and personal preference.

1/48th or 1/50th of a second is normally considered the slowest shutter speed at which motion blur in a typical frame no longer significantly softens the image. This is why old point-and-shoot film cameras almost always had a 1/50th shutter: it was the slowest you could get away with.

Shooting with a shutter speed that is half the frame interval is also known as using a 180 degree shutter, a very necessary practice with a film movie camera due to the way the mechanical shutter must be closed while the film is physically advanced to the next frame. But the closed shutter period isn’t essential with an electronic camera, as there is no film to move, so you don’t have to use a 180 degree shutter if you don’t want to.

There is no reason why you can’t use a 1/50th or 1/60th shutter when shooting at 50fps or 60fps, especially if you don’t have a lot of light to work with. 1/50th (1/60th) at 50fps (60fps) will give you the smoothest motion, as there are no breaks in the motion between frames. But many people like to sharpen up the image further by using 1/100th (1/120th) to reduce motion blur, or they prefer the slightly steppy cadence this brings, as it introduces a small jump in motion between each frame. Of course 1/100th needs twice as much light. So there is no hard and fast rule; some shots will work better at 1/50th while others may work better at 1/100th.
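The relationship between frame rate, shutter angle and exposure time is simple enough to write down directly. A small sketch:

```python
def exposure_time(fps: float, shutter_angle: float = 180.0) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / fps

exposure_time(24, 180)   # 1/48 s - the classic film-style shutter
exposure_time(50, 360)   # 1/50 s at 50fps - no gap in motion between frames
exposure_time(50, 180)   # 1/100 s at 50fps - less blur, slightly steppier motion
```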

However, if you are shooting at 50fps or 60fps so that it can be converted to 50i or 60i, with each frame becoming a field, then the “normal” shutter speed to use is 1/50th or 1/60th, to mimic a 25fps-50i or 30fps-60i camera, which would typically have its shutter running at 1/50th or 1/60th. 1/100th (1/120th) at 50i (60i) can look a little over-sharp due to an increase in aliasing, caused by the way an interlaced video field has only half the resolution of the full frame. This is particularly true of 50p converted to 50i, because there is no in-camera anti-aliasing: each frame simply has its vertical resolution divided by 2 to produce the equivalent of a single field. When you shoot with a “real” 50i camera, line pairs on the sensor are combined and read out together as a single field line, and this slightly softens and anti-aliases each of the fields (50i has lower vertical resolution than 25p). With simple software conversions from 50p to 50i this anti-aliasing does not occur. Combine that with a faster-than-typical shutter speed and the interlaced image can start to look over-sharp, and may show jaggies or color moiré not present in the original 50/60p footage.
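To make the difference concrete, here is a minimal Python sketch of the two approaches, assuming greyscale frames stored as 2D numpy arrays: the naive line-skipping a simple software conversion performs, versus the line-pair averaging a real interlaced readout effectively gives you.

```python
import numpy as np

def naive_field(frame: np.ndarray, top: bool) -> np.ndarray:
    """Simple software 50p->50i: keep every other line, discard the rest.
    No anti-aliasing, so fine detail can alias (jaggies, moire)."""
    return frame[0::2] if top else frame[1::2]

def averaged_field(frame: np.ndarray, top: bool) -> np.ndarray:
    """Approximate a 'real' interlaced readout: adjacent line pairs are
    combined into a single field line, softening and anti-aliasing the field."""
    a = frame[0::2] if top else frame[1::2]
    b = frame[1::2] if top else frame[2::2]
    n = min(len(a), len(b))
    return 0.5 * (a[:n] + b[:n])

# 50p -> 50i: successive progressive frames supply alternating fields.
frames = np.random.rand(4, 1080, 1920)
fields = [naive_field(f, top=(i % 2 == 0)) for i, f in enumerate(frames)]
```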

Using LUTs for exposure – choosing the right LUT.

If you are using a LUT to judge the exposure of a camera shooting log or raw, it’s really important that you fully understand how that LUT works.

When a LUT is created it expects a specific input range and converts that input range to a very specific output range. If you change the input range then the output range will be different, and it may not be correct. As an example, a LUT designed and created for use with S-Log2 should not be used with S-Log3 material, as the higher middle grey level used by S-Log3 would make the mid range of the LUT’s output much brighter than it should be.

Another consideration arises when you start offsetting your exposure levels, perhaps to achieve a brighter log exposure so that after grading the footage will have less noise.

Let’s look for a moment at the version of Sony’s 709(800) LUT designed to be used with S-Log3. This LUT expects middle grey to come in at 41% and it will output middle grey at 43%. It expects a white card to be at 61% and it will output that same shade of white at a little over 85%. Anything on the S-Log3 side brighter than 61% (white) is considered a highlight, and the LUT will compress that highlight range (almost 4 stops) into the output range between 85% and 109%, resulting in flat looking highlights. This is all perfectly fine if you expose at the levels suggested by Sony. But what happens if you expose brighter and try to use the same LUT, either in camera or in post production?

Well, if you expose 1.5 stops brighter on the log side, middle grey becomes around 54% and white becomes around 74%. Skin tones, which sit roughly half way between middle grey and white, will be around 64% at the LUT’s input. That’s going to cause a problem! The LUT considers anything brighter than 61% at its input to be a highlight, and it compresses anything brighter than 61%. As a result, at the output of your LUT your skin tones will not only be bright, they will also be compressed and flat looking, which makes them hard to grade. This is why, if you are shooting a bit brighter, it is much, much easier to grade your footage if your LUTs have offsets to allow for the over exposure.
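You can check those numbers against Sony’s published S-Log3 formula. A small sketch (valid above the curve’s linear toe, quoting results as IRE over the legal 10-bit range of 64-940):

```python
import math

def slog3_ire(reflectance: float) -> float:
    """Sony's published S-Log3 curve (for reflectance above the linear toe),
    returned as IRE % over the legal 10-bit range (64-940)."""
    code = 420.0 + 261.5 * math.log10((reflectance + 0.01) / (0.18 + 0.01))
    return 100.0 * (code - 64.0) / (940.0 - 64.0)

for stops in (0.0, 1.5):
    grey, white = 0.18 * 2**stops, 0.90 * 2**stops
    print(f"{stops:+.1f} stops: grey {slog3_ire(grey):.0f}%, white {slog3_ire(white):.0f}%")
# +0.0 stops: grey 41%, white 61%
# +1.5 stops: grey 54%, white 74%
```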

If the camera has an EI mode (like the FS7, F5, F55 etc.), the EI mode offsets the LUT’s input so you don’t see this problem in camera. But there are other problems you can encounter if you are not careful, such as unintentional over exposure when using the Sony LC709 series of LUTs.

Sony’s 709(800) LUT closely matches the gamma of most normal monitors and viewfinders, so 709(800) will deliver the correct contrast, i.e. contrast that matches the scene you are shooting, and it will give conventional TV brightness levels when viewed on standard monitors or viewfinders.

If you use any of the LC709 LUTs you will have a mismatch between the LUT’s gamma and the monitor’s gamma, so the images will show lower contrast and the levels will be lower than conventional TV levels when exposed correctly. LC709 stands for Low Contrast gamma with 709 color primaries – it is not 709 gamma!

Sony’s LC709 Type A LUT is very popular as it mimics the way an Arri Alexa might look. That’s fine but you also need to be aware that the correct exposure levels for this non-standard LC gamma are middle grey at around 41% and white at 70%.

An easy trap to fall into is to set the camera to a low EI to gain a brighter log exposure, and then to use one of the LC709 LUTs and try to eyeball the exposure. Because the LC709 LUTs are darker and flatter it’s harder to eyeball the exposure, and people will often expose them as they would regular 709. This then results in a double over exposure: bright because of the intentional use of the lower EI, and brighter still because the LUT has been exposed at or close to conventional 709 brightness. If you were to mistakenly expose the LC709 Type A LUT with skin tones at 70% and white at 90%, that alone would add almost 2 stops to the log exposure, on top of any EI offset.

Above middle grey with 709(800), a 1 stop exposure change results in around a 20% change in brightness; with LC709 Type A the same exposure change gives just over a 10% change. As a result, over or under exposure is much less obvious, and harder to measure or judge by eye, with LC709. The camera’s default zebra settings, for example, have a 10% window, so with LC709 you could easily be a whole stop out, while with 709(800) only half a stop.
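Put as arithmetic, a quick sketch using the approximate slopes quoted above:

```python
def exposure_uncertainty(zebra_window_pct: float, slope_pct_per_stop: float) -> float:
    """Exposure spread (in stops) hiding inside a zebra window, for a gamma
    with a roughly linear slope above middle grey."""
    return zebra_window_pct / slope_pct_per_stop

exposure_uncertainty(10, 20)   # 709(800): ~0.5 stop
exposure_uncertainty(10, 10)   # LC709:    ~1 stop

# The double exposure trap above, in the same terms: exposing LC709 Type A
# white at 90% instead of ~70% implies (90 - 70) / 10 = ~2 stops over.
```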

Personally, when shooting, I don’t really care too much about how the image looks in terms of brightness and contrast. I’m more interested in using the built-in LUTs to ensure my exposure is where I want it to be. So for exposure assessment I prefer to use the LUT that is going to show the biggest change when my exposure is not where it should be. For the “look” I will feed a separate monitor and apply any stylised looks there. To understand how my highlights and shadows above and below the LUT’s range are being captured I use the Hi/Low Key function.

If you create your own LUTs by shooting test shots and then grading those shots to produce a LUT, it is really, really important that the test shots are very accurately exposed.

You have two choices here: you can either expose at the levels recommended by Sony and then use EI to add any offsets, or you can offset the exposure in camera, not use EI, and instead rely on the offset that will end up baked into the LUT. What is never a good idea is to add an EI offset to a LUT that was also offset.

More on frame rate choices for today’s video productions.

This is another of those frequent questions at workshops and online.
What frame rate is the best one to use?
First – there is no one “best” frame rate. It really depends on how you want your video to look. Do you want the slightly juddery motion of a feature film or do you want silky smooth motion?
You also need to think about and understand how your video will be viewed. Is it going to be watched on a modern TV set or will it be watched on a computer? Will it only be watched in one country or region or will it be viewed globally?
Here are some things to consider:
TV in Europe is normally 50Hz, either 25p or 50i.
TV in North America is 60Hz, either 30p or 60i (both actually 29.97fps).
The majority of computer screens run at 60Hz.
Interlaced footage looks bad on most LCD screens.
Low frame rates like 24p and 25p often exhibit judder.
Most newer, mid-price and above TVs use motion estimation techniques to eliminate judder in low frame rate footage.
If you upload 23.98fps footage to YouTube and it is then viewed on a computer it will most likely be shown at 24p as you can’t show 0.98 of a frame on a 60Hz computer screen.
Let’s look first at 25p, 50i and 50p.
If you live in Europe or another 50Hz/PAL area these are going to be frame rates you are familiar with. But are they the only frame rates you should use? If you are doing a broadcast TV production then there is a high chance that you will need to use one of these standards (please consult whoever you are shooting for). But if your audience is going to watch your content online on a computer screen, tablet or mobile phone, these are not good frame rates to use.

Most computer screens run at 60Hz, and very often this rate can’t be changed. 25p shown on most computer screens requires 15 frames to be shown twice and 10 frames to be shown 3 times to create a total of 60 frames every second. This creates an uneven cadence, and it’s not something you can control, as the actual structure of the cadence depends on the video subsystem of the end user’s computer.

The odd 25p cadence is most noticeable on smooth pans and tilts, where the pan speed will appear to jump slightly as the cadence flips between the x3 and x2 frames. This often makes what would otherwise be smooth motion appear to stutter unevenly. 24p material doesn’t exhibit this same uneven stutter (see the 24p section below). 50p material will exhibit a similar stutter, as again the number of padding frames needed is uneven, although the motion should be a bit more fluid.
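You can work the repeat pattern out for any frame rate and refresh rate pair. A simple sketch of plain frame repetition (no motion interpolation), assuming a free-running 60Hz display:

```python
import math

def repeat_counts(src_fps: int, display_hz: int) -> list:
    """How many display refreshes each source frame occupies when frames
    are simply repeated to fill the display's refresh rate."""
    counts, shown = [], 0
    for n in range(1, src_fps + 1):
        # nearest refresh slot where frame n hands over to frame n+1
        target = math.floor(n * display_hz / src_fps + 0.5)
        counts.append(target - shown)
        shown = target
    return counts

repeat_counts(25, 60)   # uneven mix: 15 frames shown x2, 10 frames shown x3
repeat_counts(24, 60)   # the regular pulldown alternation: 3,2,3,2,...
repeat_counts(30, 60)   # perfectly even: every frame shown exactly twice
```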
So really 25p and 50p are best reserved for material that will only ever be seen on televisions that are running at 50Hz. They are not the best choices for online distribution or viewing on computers etc.
24p, 30p or 60p (23.98p, 29.97p)
If you are doing a broadcast TV show in an NTSC/60Hz area then you will most likely need to use the slightly odd frame rates of 23.98fps or 29.97fps. These are legacy frame rates specifically for broadcast TV; the odd rates came about to avoid problems with the color signal interfering with the luma (brightness) signal in the early days of analog color TV.
If you show 23.98fps or 29.97fps footage on a computer it will normally be shown at the equivalent of 24p or 30p to fit the 60Hz refresh rate of the computer screen. In most cases no one will ever notice any difference.
24p Cadence.
23.98p and 24p, when shown on a 60Hz screen, use 2:3 cadence, where the first frame is shown twice, the next 3 times, then 2, then 3, and so on. This is very similar to the way any other movie or feature film is shown on TV, and it doesn’t look too bad.
30p or 29.97p footage will look smoother than 24p: all you need to do is show each frame twice to get to 60Hz, so there is no odd cadence, and the slightly higher frame rate exhibits a little less judder. 60p will be very smooth and is a really good choice for sports or other fast action. But higher frame rates do require higher data rates to maintain the same image quality; this means larger files and possibly slower downloads, which must be considered. 30p is a reasonable middle ground choice for a lot of productions, not as juddery as 24p but not as smooth as 60p.
24p or 23.98p for “The Film Look”.
Generally, if you want to mimic the look of a feature film, you might choose to use 23.98p or 24p, as films are normally shot at 24fps. If your video is only going to be viewed online then 24p is a good choice. If your footage might get shown on TV then 23.98p may be the better choice, as 23.98fps works well on 29.97fps TVs in 60Hz/NTSC areas.
BUT THERE IS A NEW CATCH!!!
A lot of modern TVs feature motion compensation processing designed to eliminate judder. You might see things in the TV’s literature such as “100Hz smooth motion” or similar. If this function is enabled, the TV will take any low frame rate footage such as 24p or 25p and use software to create new frames, increasing the frame rate and smoothing out any motion judder.
So if you want the motion judder typical of a 24fps movie and you create a 24fps video, you may find that the viewer never sees this juddery, film-like motion, as the TV will do its best to smooth it out! Meanwhile, someone watching the same clip on a computer will see the judder. So the motion in the same clip can look quite different depending on how it’s viewed.
Most TVs that have this feature will disable it when the footage is 60p, as 60p footage should look smooth anyway. So a trick you might want to consider is to shoot at 24p or 30p and then export a 60p file, as this will typically cause the TV to turn off the motion estimation.
In summary, if you are doing a broadcast TV project you should use the frame rate specified by the broadcaster. But for projects that will be distributed via the internet I recommend the use of 23.98p or 24p for film style projects and 30p for most other projects. However if you want very smooth motion you should consider using 60p.

Low Light Performance – It’s all about the lens!

This post follows on from my previous post about sensors and was inspired by one of the questions asked following that post.

While sensor size does have some effect on low light performance, the biggest single factor is really the lens. It isn’t really the bigger sensor that has revolutionised low light performance; it’s the lenses we can now use that have changed our ability to shoot in low light. When we used 1/2″ or 2/3″ 3-chip cameras for most high end video production, the most common lenses were wide range zooms, typically f1.8 – reasonably fast lenses.

But the sensors were really small, so the pixels on those sensors were also relatively small, so having a fast lens was important.

Now we have larger sensors; Super 35mm sensors are commonplace. These larger sensors often have larger pixels than the old 1/2″ or 2/3″ sensors, even though we are now cramming more pixels onto the sensors. Bigger pixels do help increase sensitivity, but really the biggest change has been the types of lenses we use.

Let me explain:

The laws of physics play a large part in all of this.
We start off with the light in our scene which passes through a lens.

Take a zoom lens of a certain physical size, with a fixed size front element and as a result a fixed light gathering ability – for example a typical 2/3″ ENG zoom. You have a certain amount of light coming in to the lens.
When the image projected by the rear of the lens is small it will be relatively bright, and as a result you get a large effective aperture.

Increase the size of the sensor and you have to increase the size of the projected image. So if we were to modify the rear elements of this same lens to create a larger projected image (increase the image circle) so that it covers a Super 35mm sensor, what light we have is spread out “thinner”, and as a result the projected image is dimmer. So the effective aperture of the same lens becomes smaller, and because the image is larger the focus is more critical, making the depth of field narrower.

But if we keep the sensor resolution the same, a bigger sensor will have bigger pixels that can capture more light, and this makes up for the dimmer image coming from the lens.

So where a small sensor camera (1/2″, 2/3″) will typically have an f1.8 zoom lens, when you scale up to a s35mm sensor by altering the projected image from the lens, the same lens becomes the equivalent of around f5.6. But because, for like-for-like resolution, the pixel size is much bigger, the large sensor will be 2 to 3 stops more sensitive. So the low light performance is almost exactly the same, the DoF remains the same, and the field of view remains the same (the sensor is larger, so DoF decreases, but the aperture becomes smaller, so DoF increases again, back to where we started). Basically it’s all governed by how much light the lens can capture and pass through to the sensor.
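The scaling argument can be written down directly. A sketch, assuming a like-for-like pixel count and a rough 2/3″-to-Super 35 linear scale factor of around 2.5x (the exact factor depends on which sensor dimensions you compare):

```python
import math

crop = 2.5                       # approx. linear scale factor, 2/3" -> Super 35

f_zoom_23 = 1.8                  # typical 2/3" ENG zoom
f_equiv_s35 = f_zoom_23 * crop   # ~f4.5-f5.6: the same glass re-projected to s35

aperture_loss = 2 * math.log2(crop)   # stops lost: light spread over c^2 the area
pixel_gain = math.log2(crop ** 2)     # stops gained: same pixel count -> c^2 the pixel area
print(aperture_loss - pixel_gain)     # 0.0 - the two effects cancel exactly
```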

It’s actually the use of prime lenses, which are much more efficient at capturing light, that has revolutionised low light shooting, as the simplicity of a prime compared to a zoom makes fast lenses for large sensors affordable. When we moved to sensors much closer in size to those used in stills cameras, the range and choice of affordable lenses we could use increased dramatically. We were no longer restricted to expensive zooms designed specifically for video cameras.

Going the other way: if you were to take one of today’s fast primes, such as a common and normally quite affordable 50mm f1.4, and build an optical adapter of the “speedbooster” type so you could use it on a 2/3″ sensor, you would end up with the equivalent of an f0.5 10mm lens that would turn that 2/3″ camera into a great low light system, with performance similar to that of a s35mm camera with a 50mm f1.4.

Adjusting the Color Matrix

Every now and again I get asked how to adjust the color matrix in a video camera. Back in 2009 I made a video on how to adjust the color matrix in Sony’s EX series of cameras. That video is just as relevant today as it was then; the basic principles have not changed.

The exact menu settings and menu layout may be a little different in the latest cameras, but the matrix settings (R-G, G-R etc.) have exactly the same effect in the latest cameras that provide matrix adjustments (FS7, F5, F55 and most of the shoulder mount and other broadcast cameras). So if you want a better understanding of how these settings and adjustments work, take a look at the video.
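As a rough illustration of the idea – a hypothetical model, not Sony’s exact implementation – each of the six settings steers an off-diagonal term of a 3×3 RGB matrix, and the diagonal is renormalised so that neutral tones stay neutral:

```python
import numpy as np

def camera_matrix(r_g=0.0, r_b=0.0, g_r=0.0, g_b=0.0, b_r=0.0, b_g=0.0):
    """Hypothetical model: the six user parameters set the off-diagonal
    terms (e.g. r_g = how much green feeds the red output); each diagonal
    term is then set so its row sums to 1, keeping R=G=B inputs neutral."""
    m = np.array([
        [0.0, r_g, r_b],
        [g_r, 0.0, g_b],
        [b_r, b_g, 0.0],
    ])
    np.fill_diagonal(m, 1.0 - m.sum(axis=1))
    return m

m = camera_matrix(r_g=0.15)           # push some green into the red channel
print(m @ np.array([0.5, 0.5, 0.5]))  # grey in, grey out - white balance preserved
print(m @ np.array([0.6, 0.4, 0.3]))  # but non-neutral colors shift
```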

I’ll warn you now that adjusting the color matrix is not easy, as each setting interacts with the others. Creating a specific look via the matrix requires a fair bit of patience, and a lot of fiddling and testing to get it just right.