MTF Services to provide Micro Four Thirds and FZ mount options for Fujinon MK lenses.

In addition to the recent announcement from Duclos, MTF Services in the UK have just announced that they will be introducing a conversion service for the popular new Fujinon MK zoom lenses. Currently these lenses are E-mount only, but MTF will be producing kits to convert the lens mounts so that they can be used with cameras that use the Micro Four Thirds mount as well as the Sony FZ mount. This is great news for owners of Sony’s F5 and F55 cameras.

Here’s the text of their press release:

Since the launch of Fujinon’s MK lenses in February this year, users have continued to request a greater range of mounts beyond the native E-mount that can be found on the 18-55mm and 50-135mm lenses. MTF have now addressed these requests and have designed brand new solutions to convert the lenses to Micro Four Thirds and FZ mount systems.

Mike Tapa, Managing Director at MTF Services, said: “Since these excellent lenses from Fujinon were launched earlier this year, we’ve had more and more of our customers asking us to produce a solution to open up the use of the MK lenses across a broader range of camera bodies. We’re now pleased to announce that we have designed brand-new adapter mounts for both Micro Four Thirds and FZ systems to work seamlessly with both the 18-55mm and 50-135mm lenses.”

Fitting service


Fitting the new mount options for the Fujinon MK lenses will be offered as a service from the team at MTF Services. Simply ship your lens to MTF’s London-based workshop and the team will fit your preferred mount; your lens is then fully tested, cleaned and safely couriered back to your door, ready for use.

Rec2020 Color Clips from the PXW-FS5 Crash Adobe Premiere CC 2017.1

This is not good. Unfortunately any clips recorded in the FS5 using the Rec2020 color option in the new Picture Profile 10 cause Adobe Premiere CC 2017.1.2 to crash as soon as you try to play them back. The clips play back fine in Resolve or in earlier versions of Premiere CC, but with the latest version of Premiere CC you get a near-instant crash no matter what your playback settings.

If you are running an earlier version of CC and want to work with the new HLG clips and 2020 color, stay with that version for now. Rec709 color works just fine, so you can shoot HLG with Rec709 color and edit that in Premiere CC, but HLG + 2020 color will crash Premiere CC 2017.1.2. Hopefully this will get resolved soon by Adobe/Sony.

Want to shoot direct to HDR with the PXW-FS7, PMW-F5 and F55?

Sony will be releasing an update for the firmware in the Sony PXW-FS5 in the next few days. This update, amongst other things, will allow users of the FS5 to shoot HDR directly using the Hybrid Log Gamma HDR gamma curve and Rec2020 color. By doing this you eliminate the need to grade your footage and could plug the camera directly into a compatible HDR TV (the TV must support HLG) and see an HDR image directly on the screen.

But what about FS7 and F5/F55 owners? Well, for most HDR productions I still believe the best workflow is to shoot in S-Log3 and then grade the footage to HDR. However there may be times when you need that direct HDR output. So for the FS7, F5 and F55 I have created a set of Hybrid Log Gamma LUTs that you can use to bake in HLG and Rec2020 while you shoot. This gives you the same capabilities as the FS5 (with the exception of the ability to add HLG metadata to the HDMI output).
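
For reference, the Hybrid Log Gamma curve itself is publicly defined in ITU-R BT.2100 (ARIB STD-B67). Here is a straight Python transcription of the HLG OETF, purely for illustration (this is not one of the LUT files): the square-root segment behaves much like a conventional gamma over the lower part of the range, while the logarithmic segment squeezes the extra highlight stops in above it.

```python
import math

# BT.2100 HLG OETF constants.
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e):
    """Map normalised scene light e (0..1) to an HLG signal value (0..1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)       # square-root segment (lower range)
    return A * math.log(12 * e - B) + C  # log segment (highlights)

print(hlg_oetf(1 / 12))  # 0.5  -> the crossover point of the two segments
print(hlg_oetf(1.0))     # ~1.0 -> peak scene light maps to peak signal
```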

For a video explanation of the process please follow the link to my new Patreon page where you will find the video and the downloadable LUTs.

Why you need to sort out your post production monitoring!

One of THE most common complaints I hear, day in, day out, is: “There is banding in my footage.”

Before you start complaining about banding or other image artefacts ask yourself one very simple but very important question: do I know EXACTLY what is happening to my footage within my computer or playback system? As an example, when editing on a computer your footage will be starting off at its native bit depth. It might then be converted to a different bit depth by the edit or grading software for manipulation. Then that new signal is passed to the computer’s graphics card to be displayed. At this point it will possibly be converted to another bit depth as it passes through the GPU, and then it will be converted to the bit depth of the computer’s desktop display. From there you might be passing it down an HDMI cable, where another bit depth change might be needed before it finally arrives at your monitor at goodness knows what bit depth.
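
To make that concrete, here is a tiny Python sketch (a deliberately simplified illustration, not a model of any particular GPU or driver) of what just one silent 10 bit to 8 bit truncation somewhere in that chain does to a smooth grey ramp:

```python
import numpy as np

ramp_10bit = np.arange(1024, dtype=np.uint16)   # a smooth 10 bit grey ramp
ramp_8bit = (ramp_10bit >> 2).astype(np.uint8)  # truncated to 8 bits en route

print(len(np.unique(ramp_10bit)))  # 1024 distinct levels -> smooth gradient
print(len(np.unique(ramp_8bit)))   # 256 distinct levels  -> 4x wider bands
```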

The two images below are very telling. The first is a photo of a high end TV connected to my MacBook Pro Retina via HDMI playing back a 10 bit ProRes file in HD. The bottom picture is exactly the same file being played back out of an Atomos Shogun via HDMI to exactly the same TV. The difference is striking to say the least. Same file, same TV, same resolution. The only difference is the top one is playing back off the computer, the lower from a proper video player. I also know from experience that if I plug a proper video output device such as a Blackmagic Mini Monitor into the laptop’s Thunderbolt port I will not see the same artefacts as I do when using the computer’s built-in HDMI.

And this is not just a quirk of my laptop; my grading suite is exactly the same. If I use the PC’s built-in HDMI the pictures suck: lots of banding and other unwanted artefacts. Play back the same clip via a dedicated, made-for-video internal PCIe card such as a DeckLink card and almost always all of the problems go away. If you use SDI rather than HDMI things tend to be even better.

So don’t skimp on your monitoring path if you really want to know what your footage looks like. Get a proper video card; don’t rely on the computer’s GPU. Get a decent monitor with an SDI input and try to avoid HDMI for any critical monitoring.

Shot viewed on a good quality TV via HDMI from the computer’s built-in graphics card. Notice all the banding.
Exactly the same shot/clip as above, but this time played back over HDMI from an Atomos Shogun Flame onto the very same TV. Note how all the banding has gone.


Thinking about frame rates.

Once upon a time it was really simple. We made TV programmes and videos that would only ever be seen on TV screens. If you lived and worked in a PAL area you would produce programmes at 25fps. If you lived in an NTSC area, most likely 30fps. But today it’s not that simple. For a start the internet allows us to distribute our content globally, across borders. In addition PAL and NTSC only really apply to standard definition television, as they describe how the SD signal is broadcast: a PAL frame is larger than an NTSC one and both use non-square pixels. With HD, PAL and NTSC do not exist. HD is 1280×720 or 1920×1080 with square pixels, and the only difference between HD in a 50Hz country and a 60Hz country is the frame rate.

Today with HD we have many different frame rates to choose from. For film-like motion we can use 23.98fps or 24fps. For fluid smooth motion we can use 50fps or 60fps. In between sit the familiar 25fps and 30fps (29.97fps) frame rates. Then there is also the option of using interlace or progressive scan. Which do you choose?

If you are producing a show for a broadcaster then normally the broadcaster will tell you which frame rate they need. But what about the rest of us?

There is no single “right” frame rate to use. A lot will depend on your particular application, but there are some things worth considering.

If you are producing content that will be viewed via the internet then you probably want to steer clear of interlace. Most modern TVs and all computer monitors use progressive scan, and the motion in interlaced content does not look good on progressive TVs and monitors. In addition most computer monitors run by default at 60Hz. If you show content shot at 25fps or 50fps on a 60Hz monitor it will stutter slightly, as the computer has to repeat an uneven pattern of frames to make 25fps fit into 60Hz. So you might want to think about shooting at 30fps or 60fps for smoother, less stuttery motion.

24fps or 23.98fps will also stutter slightly on a 60Hz computer screen, but the stutter is very even, as the frames are repeated in a regular alternating cadence (see the sketch below). This is very similar to the 3:2 pulldown that gets added to 24fps movies when shown on 30fps television, so it’s a kind of motion that many viewers are used to seeing anyway. Because it’s a regular stutter pattern it tends to be less noticeable than the irregular conversion from 25fps to 60Hz; 25 just doesn’t fit into 60 in a nice even manner. Which brings me to another consideration: if you are looking for a one-size-fits-all standard then 24 or 23.98fps might be a wise choice. It works reasonably well via the internet on 60Hz monitors. It can easily be converted to 30fps (29.97fps) using pulldown for television, and it’s not too difficult to convert to 25fps simply by speeding it up by 4% (many feature films are shown in 25fps countries simply by being sped up, with a pitch shift added to the audio).
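
If you want to check the arithmetic yourself, this little Python sketch (my own back-of-envelope illustration) counts how many refreshes of a 60Hz display each source frame gets held for. 24fps lands in a perfectly regular 3,2,3,2 rhythm; 25fps cannot, which is exactly the uneven stutter described above.

```python
def cadence(fps, refreshes=60):
    """How many 60Hz screen refreshes each source frame is held for."""
    held = [0] * fps
    for tick in range(refreshes):
        held[tick * fps // refreshes] += 1  # which source frame is on screen
    return held

print(cadence(24))  # [3, 2, 3, 2, ...]    -> regular, even judder
print(cadence(30))  # [2, 2, 2, 2, ...]    -> perfectly smooth
print(cadence(25))  # [3, 2, 3, 2, 2, ...] -> irregular, visible stutter
```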

So, even if you live and work in a 25fps (PAL) area, depending on how your content will be distributed you might actually want to consider 24, 30 or 60fps for your productions. 25fps or 50fps looks great on a 50Hz TV, but with the majority of non-broadcast content being viewed on computers, laptops and tablets, 24/30/60fps may be a better choice.

What about the “film look”? Well I think it’s obvious to say that 24p or 23.98p will be as close as you can get to the typical cadence and motion seen in most movies. But 25p also looks more or less the same. Even 30p has a hint of the judder that we see in a 24p movie, but 30p is a little smoother. 50p and 60p will give very smooth motion, so if you shoot sports or fast action and you want it to be smooth you may need to use 50/60p. But 50/60p files will be twice the size of 24/25 and 30p files in most cases, so then storage and streaming bandwidth have to be considered. It’s much easier to stream 24p than 60p.

For almost all of the things that I do I shoot at 23.98p, even though I live in a 50Hz country. I find this gives me the best overall compatibility. It also means I have the smallest file sizes and the clips will normally stream pretty well. One day I will probably need to consider shooting everything at 60fps, but that seems to be some way off for now; HDR and higher resolutions seem to be what people want right now rather than higher frame rates.

Why do I always shoot at 800 EI (FS7 and F5)?

This is a question that comes up time and time again. I’ve been using the F5 and FS7 for almost 5 years. What I’ve discovered in that time is that the one thing people notice more than anything from these cameras is noise if you get your exposure wrong. In addition it’s much harder to grade a noisy image than a clean one.

Let’s take a look at a few key things about how we expose and how the F5/FS7 works (note the same principle applies to most log based cameras; the FS5 also benefits from being exposed brighter than the suggested base settings).

What in the image is important? What will your audience notice first? Mid-range, shadows or highlights?

I would suggest that most audiences first look at the mid range – faces, skin tones, building walls, plants etc. Next they will notice noise and grain, or perhaps poor, muddy or murky shadows. The last thing they will notice is a few very bright highlights such as specular reflections that might be clipped.

The old notion of protecting the highlights comes from traditional gamma curves with a knee or highlight roll-off where everything brighter than a piece of white paper (90% white) is compressed into a very small recording range. As a result, when shooting with conventional gamma curves ALL of the brighter parts of the image are compromised to some degree, typically showing a lack of contrast and texture, often with some weird monotone colors. Log is not like that: there is no highlight roll-off, so those brighter-than-white highlights are not compromised in the same way.

In the standard gammas at 0dB the PXW-FS7, like the PMW-F5, is rated at 800 ISO. This gives a good balance between noise and sensitivity. Footage shot at 0dB/800 ISO with the standard gammas or Hypergammas generally looks nice and clean with no obvious noise problems. However when we switch to log the native ISO rating of the cameras becomes 2000 ISO, so to expose “correctly” we need to stop the aperture down by 1.3 stops. This means that compared to 709 and HG1 to HG4, the sensor is being under exposed by 1.3 stops. Less light on the sensor will mean more noise in the final image. 1.3 stops is the equivalent of roughly 8dB. Imagine how Rec709 looks if it is under exposed by 1.3 stops or has to have +8dB of gain added in. Well, that’s what log at 2000 ISO will look like.
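
Those figures are easy to verify with a little arithmetic (my own working, using the usual approximation of one stop ≈ 6dB of gain):

```python
from math import log2

STOP_DB = 6.02  # one stop of exposure is very close to 6dB of gain

def stops_between(iso_a, iso_b):
    """Exposure difference, in stops, between two ISO ratings."""
    return log2(iso_a / iso_b)

print(stops_between(2000, 800))            # ~1.3 stops: log base vs 709 base
print(stops_between(2000, 800) * STOP_DB)  # ~8dB equivalent noise penalty
print(stops_between(4000, 800) * STOP_DB)  # ~14dB if you rate it at 4000 EI
```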

However log has lots of spare headroom and no highlight compression, so we can choose to expose brighter than the base ISO, because pushing that white piece of paper brighter in exposure does not cause it to become compressed.

If you open the aperture back up by 1.3 stops you get back to where you would be with 709 in terms of noise and grain. This would be “rating” the camera at 800 ISO, or using 800 EI. Rating the camera at 800 EI you still have 4.7 stops of over-exposure range, so the only things that will be clipped will in most cases be specular reflections or extreme highlights. There is no TV or monitor in existence that can show these properly, so no matter what you do they will never be true to life. So don’t worry if you have some clipped highlights; ignore them. Bringing your exposure down to protect these is going to compromise the mid range, and they will never look great anyway.

You should also be extremely cautious about ever using an EI higher than 2000. The camera is not becoming more sensitive; people are often misled by high EIs into thinking they are somehow capturing more than they really are. If you were to shoot at 4000 EI you would end up with footage around 14dB noisier than if you were shooting the same scene using 709 at 800 ISO. That’s a lot of extra noise, and you won’t necessarily appreciate just how noisy the footage will be while shooting looking at a small monitor or viewfinder.

I’ve been shooting with the F5 and then the FS7 for almost 5 years and I’ve never found a situation where going to an EI higher than 800 would have given a better end result. At the same time I’ve seen a lot of 2000 EI footage where noise in the mid range has been an issue. One particular example springs to mind of a high-end car shoot where 2000 EI was used and the gloss and shine of the car bodywork is spoilt by noise, especially on the darker coloured cars.

Of course this is just my opinion, based on my own experience; others may differ, and the best thing you can do is test for yourself.

Canada Workshops.

I’m running workshops across Canada in the next couple of weeks. Tomorrow’s workshop at Fusion Cine in Vancouver is sold out, but there is space in Calgary on Thursday: https://www.eventbrite.ca/e/pxw-fs5-and-pxw-fs7m2k-tour-with-alister-chapman-calgary-tickets-34889128322

From Calgary I travel to Montreal for a workshop on the 20th: https://www.eventbrite.ca/e/pxw-fs5-and-pxw-fs7m2k-tour-with-alister-chapman-montreal-tickets-34890635831

And finish up in Toronto on June 22nd: https://www.eventbrite.ca/e/pxw-fs5-and-pxw-fs7m2k-tour-with-alister-chapman-toronto-tickets-34891259697

The workshops are sponsored by Sony and are free. I’ll be covering a full end-to-end workflow, from composition and lighting to exposure with different gammas, workflow and post production, as well as HDR.

What’s the difference between raw and S-Log ProRes – Re: FS5 raw output.

This is a question that comes up a lot.

Raw is the unprocessed (or minimally processed) data direct from the sensor. It is just the brightness value for each of the pixels; it is not a color image, but because we know which color filter is above each pixel we are able to work out the color later. In the computer you take that raw data and convert it into a conventional color video signal, defining the gamma curve and colorspace in the computer. This gives you the freedom to choose the gamma and colorspace after the shoot and retains as much of the original sensor information as possible. Of course the captured dynamic and color range is determined by the capabilities of the sensor, and we can’t magically get more than the sensor can “see”. The quality of the final image is also dependent on the quality of the debayer process in the computer, but as you have the raw data you can always go back and re-process the footage with a better quality debayer or encoder at a later date. Raw can be compressed or uncompressed. Sony’s 12 bit FS-raw when recorded on an Odyssey or Atomos recorder is normally uncompressed, so there are no additional artefacts from compression, but the files are large. The 16 bit raw from a Sony F5 or F55 when recorded on an R5 or R7 is made about 3x smaller through a proprietary algorithm.
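
As a purely illustrative sketch of that idea, here is a naive bilinear debayer in Python, assuming an RGGB filter pattern. Real raw converters use far more sophisticated algorithms, so treat this as a toy showing how one brightness value per photosite becomes three colour values per pixel:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (edges wrap around)."""
    h, w = raw.shape
    # Which colour filter sits over each photosite in an RGGB pattern.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)   # samples of this colour only
        count = mask.astype(float)
        total = np.zeros((h, w)); n = np.zeros((h, w))
        # Sum each pixel's 3x3 neighbourhood of known samples of this colour.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(known, (dy, dx), axis=(0, 1))
                n += np.roll(count, (dy, dx), axis=(0, 1))
        rgb[..., ch] = total / n           # average -> interpolated plane
    return rgb

# Example: a 4x4 mosaic of raw brightness values becomes a 4x4x3 colour image.
print(demosaic_bilinear(np.arange(16, dtype=float).reshape(4, 4)).shape)
```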

ProRes is a conventional compressed color video format, so a ProRes file will already have a pre-determined gamma curve and color space; these are set in the camera through a picture profile, scene file or other similar settings at the time of shooting. The quality of the ProRes file is dependent on the quality of the encoder in the camera or recorder at the time of recording, so there is no way to go back and improve on this or change the gamma/colorspace later. In addition ProRes, like most commonly used codecs, is a lossy compressed format, so some (minimal) picture information may be lost in the encoding process and artefacts (again minimal) are added to the image. These cannot easily be removed later, however they should not normally present any serious problems.

It’s important to understand that there are many different types of raw and many different types of ProRes, and not all are equal. The FS-raw from the FS5/FS7 is 12 bit linear, and 12 bits are not really enough for the best possible quality from a 14 stop camera (there are not enough code values, so floating point math and/or data rounding has to take place, and this affects the shadows and low key areas of the image). You really need 16 bit data for 14 stops of dynamic range with linear raw, so if you are really serious about raw you may want to consider a Sony F5 or F55. ProRes is a pretty decent codec, especially if you use ProRes HQ, and 10 bit log approaches the quality of 12 bit linear raw but without the huge file sizes. Incidentally there is very little to be gained by going to ProRes 4444 when recording the 12 bit raw from an FS5/FS7; you’ll just have bigger files and less record time.
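
A little arithmetic (my own illustration) shows why: in a linear encoding each stop down the range gets half the code values of the stop above it, so the shadows run out of values first.

```python
def values_per_stop(bits, stops=14):
    """Code values available to each stop of a linear encoding, brightest first."""
    total = 2 ** bits
    return [total / 2 ** (s + 1) for s in range(stops)]

print(values_per_stop(12))  # 2048, 1024, ... down to 0.25 for the bottom stop
print(values_per_stop(16))  # 32768, 16384, ... still 4 for the bottom stop
```

So the bottom stops of a 12 bit linear recording have only fractions of a code value to work with, which is where the floating point math and rounding come in, while 16 bit still has whole code values left even in the deepest stop.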

Taking the 12 bit raw from an FS5 and converting it to ProRes in an external recorder has potential problems of its own. The quality of the final file will be dependent on the quality of the debayer and encoding process in the recorder, so there may be differences in the end result from different recorders. In addition you have to add a gamma curve at this point, so you must be careful to choose the correct gamma curve to minimise concatenation, where you add the imperfections of 12 bit linear to the imperfections of the 10 bit encoded file (S-Log2 appears to be the best fit to Sony’s 12 bit linear raw).

Despite the limitations of 12 bit linear, it is normally a noticeable improvement over the FS5’s 8 bit internal UHD recordings, but less of a step up from the 10 bit XAVC that an FS7 can record internally. What it won’t do is allow you to capture anything extra. It won’t improve the dynamic range, won’t give you more color and won’t enhance the low light performance (if anything there will be a slight increase in shadow noise and it may be slightly inferior in under exposed shots). You will have the same dynamic and color range, but recorded with more “bits” (code values to be precise). Linear raw excels at capturing highlight information, and what you will find is that compared to log there will be more texture in the highlights and brighter parts of your captured scenes. This will become more and more important as HDR screens become better able to show highlights correctly. Current standard dynamic range displays don’t show highlights well, so often the extra highlight data in raw is of little benefit over log. But that’s going to change in the next few years, so linear recording with its extra highlight information will become more and more important.

Watch your viewfinder in bright sunshine (viewfinders with magnifiers or loupes).

Just a reminder to anyone using a viewfinder fitted with an eyepiece, magnifier or loupe not to leave it pointing up at the sun. Every year I see dozens of examples of burnt and damaged LCD screens and OLED displays caused by sunlight entering the viewfinder eyepiece, getting focussed onto the screen and burning or melting it.

It can only take a few seconds for the damage to occur and it’s normally irreversible. Even walking from shot to shot with the camera viewfinder pointed towards the sky can be enough to do damage if the sun is out.

So be careful, cover or cap the viewfinder when you are not using it. Tilt it down when carrying the camera between locations or shots. Don’t turn to chat to someone else on set and leave the VF pointing at the sun. If you are shooting outside on a bright sunny day consider using a comfort shade such as an umbrella or large flag above your shooting position to keep both you and the camera out of the sun.

Damage to the viewfinder can appear as a smudge or dark patch on the screen that does not wipe off. If the camera was left for a long period it may appear as a dark line across the image. You can also sometimes melt the surround of the LCD or OLED screen.

As well as the viewfinder, don’t point your camera directly into the sun. Even an ND filter may not protect the sensor from damage, as most regular ND filters pass the infra-red wavelengths that do much of the damage straight through. Shutter speed makes no difference to the amount of light hitting the sensor in a video camera, so even at a high shutter speed damage to the camera’s sensor or internal NDs can occur. So be careful when shooting into the sun. Use an IR ND filter and avoid shooting with the aperture wide open, especially with static shots such as time-lapse.