ProResRaw for Windows Released (beta)

Some time ago Atomos indicated that you would be able to use ProResRaw in Adobe Premiere. Well – now you can, on a Windows machine at least.

Apple have just released a beta version of ProResRaw for Windows that is designed to work with Adobe’s Creative Cloud applications, including Premiere Pro, After Effects and Premiere Rush.

I haven’t been able to try it yet as I’m mainly Mac based. But if you want to give it a go you can download the beta from Apple by clicking here.

Edit: It appears that as well as this plugin you may need to be running the latest beta versions of the Adobe applications. Please don’t ask me where to get the beta applications as I have no idea.

Facebook Live – Streaming with the FS5.

Facebook Live stream Thursday 26th March 4pm GMT/UTC on how to stream to Facebook and YouTube with the Sony PXW-FS5 (also applies to many other Sony cameras with similar streaming options).
I will show you how to connect the camera to a network via Wi-Fi and send the stream from the camera to a computer, how to set up VLC to receive the stream from the camera, and then how to use OBS to convert the FS5’s stream (via VLC) and send it to YouTube.
The “How To” live stream will be on my Facebook page: https://www.facebook.com/alister.chapman.9. But if you also have YouTube you will be able to see the stream from the FS5 once it is connected and set up. Links will be shared during the presentation.
If you install VLC (https://www.videolan.org/) ahead of the session, you should be able to set everything up as we go.
Thursday March 26th, 16:00 GMT/UTC, 17:00 CET, 12:00 EDT, 9:00AM PDT.

Live Streaming From an FS5

Do you have an FS5 and want to stream to Facebook or YouTube? It’s actually fairly straightforward and you don’t even need to buy anything extra! You can even connect a couple of FS5’s to a single computer and switch between them.

How do you do it?

First you will need to download and install two pieces of free software on your computer. The first is VLC. VLC is an open source video player, but it can also act as a media server that receives the UDP video stream the FS5 sends and turns it into a live video source on the computer. The computer and the camera both need to be connected to the same Wi-Fi network, and you will need to enter the computer’s IP address into the streaming server settings in the FS5. Once the FS5 is connected to your computer over the network you can use VLC to decode the UDP stream: go to “File”, “Open Network”, click on “Open RTP/UDP Stream”, enter the computer’s IP address and the stream port, and then save the FS5 stream as a playlist in VLC.
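If you prefer to script this rather than click through the VLC menus, here is a minimal sketch that launches VLC listening for the camera’s stream. The port number (1234) and the assumption that the camera is sending an MPEG-TS stream over plain UDP are mine – use whatever port you set in the FS5’s streaming settings, and use an rtp:// address instead if the camera is set to send RTP.

    # Minimal sketch: launch VLC so it listens for the FS5's UDP stream.
    # Assumes VLC is installed and on the system PATH, and that the camera is
    # streaming MPEG-TS over plain UDP to this computer on port 1234.
    import subprocess

    PORT = 1234  # must match the port set in the FS5's streaming menu

    # "udp://@:PORT" tells VLC to listen on that UDP port.
    # Use "rtp://@:PORT" instead if the camera is sending RTP.
    subprocess.run(["vlc", f"udp://@:{PORT}"])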

The next piece of software that you need is OBS – Open Broadcaster Software Studio.

OBS is a clever open source streaming application that can convert any video feed connected to a computer into a web stream. From within OBS you can set the signal source to VLC and then the stream from the FS5 will become one of the “scenes” or inputs that OBS can stream to Facebook, YouTube etc.

For multi-camera work, use a different port for each of the UDP streams and in VLC save each stream as a different playlist. Each playlist can then be attached to a different scene in OBS so that you can switch, cut and mix between them.
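As an illustration of what those playlists can contain, the little sketch below writes one playlist file per camera, each pointing at its own UDP port. The file names and port numbers are examples only, and I haven’t tested loading these particular files into OBS myself, so treat it purely as a starting point.

    # Sketch: write one small playlist file per camera, each listening on
    # its own UDP port. File names and ports are examples only - match the
    # ports to what you set in each FS5.
    cameras = {"camera1.m3u": 1234, "camera2.m3u": 1235}

    for filename, port in cameras.items():
        with open(filename, "w") as f:
            f.write("#EXTM3U\n")
            f.write(f"udp://@:{port}\n")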

 

Streaming and Live Feeds.

With some difficult times ahead and most of us needing to minimise contact with others, there has never been a greater demand for streaming and online video services than now.

I’m setting up some streaming gear in my home office so that I can do some presentations and online workshops over the coming weeks.

I am not an expert on this and although I did recently buy a hardware RTMP streaming encoder, like many of us I didn’t have a good setup for live feeds and streaming.

So like so many people I tried to buy a Blackmagic Design ATEM, which is a low-cost all-in-one switcher and streaming device. But guess what? They are out of stock everywhere with no word on when more will become available. So I have had to look at other options.

The good news is that there are many options. There is always your mobile phone, but I want to be able to feed several sources including camera feeds, the feed from my laptop and the video output from a video card. 

OBS to the rescue!

There is a great piece of open source software called OBS – Open Broadcaster Software – and its OBS Studio streaming application.

Open Broadcast Studio Software.

 

OBS is a great piece of software that can convert almost any video source connected to a computer into a live stream that can be sent to most platforms, including Facebook, YouTube etc. If the computer is powerful enough it can switch between different camera sources and audio sources. If you follow the tutorials on the OBS website it’s pretty quick and easy to get it up and running.

So how am I getting video into the laptop that’s running OBS? I already had a Blackmagic Mini Recorder, which is an HDMI and SDI to Thunderbolt input adapter, and I shall be using this to feed the computer. There are many other options, but the BM Mini Recorders are really cheap and most dealers stock them, as does Amazon. It’s HD only, but for this I really don’t need 4K or UHD.

Blackmagic Mini Recorder HDMI and SDI to Thunderbolt input adapter.

 

Taking things a step further, I also have both an Atomos Sumo and an Atomos Shogun 7. Both of these monitor/recorders have the ability to act as a 4 channel vision switcher. The great thing about these compared to the Blackmagic ATEM is that you can see all your sources on a single screen and you simply touch on the source that you wish to go live. A red box appears around that source and it is then output from the device.

The Atomos Sumo and the Shogun 7 can both act as 4 input vision switchers.

 

So now I have the ability to stream a feed via OBS from the SDI or HDMI input on the Blackmagic Mini Recorder, fed from one of 4 sources switched by the Atomos Sumo or Shogun 7 – a nice little micro studio setup. My sources will be my FS5 and FX9, and I can use my Shogun as a video player. For workflow demos I will use another laptop or my main edit machine, feeding the video output from DaVinci Resolve via a Blackmagic Mini Monitor, which is similar to the Mini Recorder except that it is an output device with SDI and HDMI outputs. The final source will be the HDMI output of the edit computer so you can see the desktop.

Don’t forget audio. You can probably get away with very low quality video to get many messages across. But if the audio is hard to hear or difficult to understand then people won’t want to watch your stream. I’m going to be feeding a lavalier (tie clip) mic directly into the computer and OBS.

My main reason for writing this was to show that many of us probably already have most of the tools needed to put together a small streaming package. Perhaps you can offer this as a service to clients that now need to think about online training or meetings. I was lucky enough to already have all the items listed in this article; the only extra I have had to buy is a second Thunderbolt cable, as I only had one. But even if you don’t have a Sumo or Shogun 7 you can still use OBS to switch between the camera on your laptop and any other external inputs. The OBS software is free and very powerful, and it really is the keystone to making this all work.

I will be starting a number of online seminars and sessions in the coming weeks. I do have some tutorial videos that I need to finish editing first, but once that’s done expect to see lots of interesting online content from me.  Do let me know what topics you would like to see covered and subject to a little bit of sponsorship I’ll see what I can do.

Stay well people. This will pass and then we can all get back on with life again.

Making Sony Log Cameras Behave Like Arri Cameras When Shooting Cine EI and S-Log3.

Arri have a little trick in their cameras when shooting log to ProRes that the Sony Log cameras don’t have. When you change the Exposure Index in an Arri camera they modify the position of the exposure mid point and the shape of the Log-C gamma curve. There is actually a different Log-C curve for each EI. When you take this into post it has the benefit that the brightness at each EI will appear similar. But as the curve changes for each EI a different LUT is needed for each exposure if you want something shot at say 800EI to look the same as something shot at 200EI.

With a Sony camera the same S-Log curve is used for each Exposure Index and the LUT brightness is changed so that you end up altering the mid point of the recording as well as the highlight and shadow range. In post each EI will appear to be a different brightness. You can use the same LUT for each EI provided you do an exposure correction prior to adding the LUT or you can use dedicated offset LUT’s for each exposure.
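As a worked example (my numbers, not Sony’s): on a camera with a base ISO of 2000, rating the scene at 800 EI means the S-Log3 recording ends up log2(2000/800), roughly 1.3 stops, brighter than a base ISO exposure. So in post you either bring the exposure down by about 1.3 stops before applying your normal LUT, or you use a LUT with that 1.3 stop offset already built in.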

But what you need to remember is that you are always working within a restricted recording range with either system. You can’t go darker than the black recording level or brighter than the highest value the codec can record.

If you do it in camera as Arri do and change the log curve, then at a low EI you seriously constrict the recording range (at 200 EI the peak only reaches around 78IRE). This happens because at a low EI you put more light onto the sensor, so to keep the mid range looking a normal brightness in post it must be recorded at a level that is offset downwards compared to normal. With all the levels offset downwards to compensate for the brighter exposure, you end up recording your entire capture range within a reduced or compressed recording range. In addition, to avoid clipping the blacks at a low EI the shadows are rolled off, so you lose some detail and texture in the shadows. You can see the different Log-C curves in this Arri white paper.

Most people choose a low EI for two reasons: better signal to noise ratio and improved shadow range. The Arri method gives you the better SNR, but while the dynamic range is preserved it is recorded using less data, and in particular the shadow data decreases compared to shooting at the base ISO.

Shoot at a high EI and you put less light onto the sensor, so to maintain similar looking mids in post everything has to be recorded at a higher level. Now you have a problem, because the highlights will extend beyond the upper limit of the recording range, so Arri have to add a highlight roll-off at the top of the Log-C curve. This can present some grading challenges as the curve is now very different to regular Log-C, and in addition the highlights are compressed.

Most people choose to shoot at a high EI to extend the highlight range or to work in lower light levels.

The latter is a bit of a pointless exercise with any log camera, as the camera’s sensitivity isn’t actually any different; you are only fooling yourself into thinking it is more sensitive, and this can result in noisy footage. If you are using a high EI to extend the highlight range then really the last thing you want is the extra highlight roll-off that Arri have to add at 3200 EI to fit everything in.

One thing here in Arri’s favour is that they can record 12 bit ProRes 444. 12 bits helps mitigate the compressed recording range of low EI’s provided the post workflow is managed correctly.

The beauty of the Sony method is that the recording range never changes. Low EI’s and brighter recordings deliver better shadow range, with more data in the shadows and mids, while high EI’s and darker recordings deliver better highlight range, with no additional data restrictions or extra roll-offs. This gives the cinematographer more control to choose the exposure mid point without compromising the data at either end.

But it does mean that post needs to be awake and that the shooter needs to communicate with post about the brighter/darker looking images. To be honest, if post don’t understand this and can’t recognise what you have done, either by just looking at the footage or by checking the metadata, what chance is there of them doing a decent job of grading your content? This should be fundamental, basic stuff for a colourist/grader. Not understanding it, or how to work with it, is like hiring a camera operator that doesn’t know what an ND filter is.

The Sony FS7/FX9/F5/F55/Venice cameras can do something similar to an Arri camera by baking in the S-Log3 LUT. Then in post the exposure will look the same at every EI. BUT you will lose some highlight range at low EI’s and some shadow range at high EI’s without gaining any extra range at the opposite end. As a result the total dynamic range does reduce as you move away from the base ISO.

In addition, on the Venice and FS7/F5/F55 (and I suspect, in a future update, the FX9) you can bake a user LUT into the SxS recordings. If you create a set of S-Log3 to S-Log3 LUT’s with the EI offsets included in the LUT, you could replicate what Arri do by having an offset and tweaked S-Log3 user LUT for each EI that you want to shoot at. You would not use the camera’s EI control; you would leave the camera at the base ISO, and the LUT’s themselves include the exposure offset. These will maintain the full dynamic range, but just like Arri they will need to roll off the shadows or highlights within the LUT.
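To give an idea of what building such an offset LUT involves, here is a rough sketch in Python. It decodes S-Log3 to linear using the published S-Log3 formula, applies the exposure offset as a simple gain, re-encodes to S-Log3 and writes a 33 point .cube file. This is only an illustration of the principle and not the LUT’s offered below: it applies no shadow or highlight roll-off, so anything pushed outside the recordable range will simply clip, and you should check the constants and the .cube channel ordering against Sony’s and your grading software’s documentation before relying on it.

    # Rough sketch: generate an S-Log3 to S-Log3 exposure offset LUT as a .cube file.
    # Uses the published S-Log3 transfer function; the constants should be checked
    # against Sony's S-Log3 technical summary. No roll-off is applied, so
    # out-of-range values will simply clip.
    import math

    def slog3_to_linear(code):
        # normalised 0-1 S-Log3 code value -> linear scene reflectance
        if code >= 171.2102946929 / 1023.0:
            return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
        return (code * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

    def linear_to_slog3(lin):
        # linear scene reflectance -> normalised 0-1 S-Log3 code value
        if lin >= 0.01125000:
            return (420.0 + math.log10((lin + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
        return (lin * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

    def offset_slog3(code, stops):
        # apply an exposure offset (in stops) in linear light, then clip to 0-1
        lin = slog3_to_linear(code) * (2.0 ** stops)
        return min(max(linear_to_slog3(lin), 0.0), 1.0)

    def write_cube(filename, stops, size=33):
        # .cube 3D LUT: red varies fastest, then green, then blue
        with open(filename, "w") as f:
            f.write(f"LUT_3D_SIZE {size}\n")
            for b in range(size):
                for g in range(size):
                    for r in range(size):
                        rgb = [offset_slog3(c / (size - 1), stops) for c in (r, g, b)]
                        f.write("{:.6f} {:.6f} {:.6f}\n".format(*rgb))

    # Example: a "1 stop down" style LUT - intended for footage exposed one stop
    # brighter than base ISO, bringing it back down by one stop in the LUT.
    write_cube("1S_Down_SL3C_sketch.cube", -1.0)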

But monitoring will be tricky, as you won’t have the benefit of a 709 type LUT for monitoring, so you may need to use an external monitor or viewfinder that can apply a LUT to its image. The good news is that the same monitoring LUT can be used for every version of the offset S-Log3 LUT that you are baking in, as the exposure brightness levels will be the same for each offset.

So here you are: a set of 4 S-Log3/S-Gamut3.Cine offset LUT’s for those Sony cameras that will take a user LUT. I have named the LUT’s 2S Down SL3C, 1S Down SL3C, 1S UP SL3C and 2S UP SL3C.

The name means (number of stops) (down or up) (S-Log3/S-Gamut3.Cine).

So if the camera’s base ISO is 2000 (F5/FS7 etc.) and you want to shoot at the equivalent of 1000 EI, which is 1 stop down from base, you would use “1S Down SL3C”.

As always (to date at least) I offer these as a free download, available by clicking on the link below. But I always appreciate a contribution if you find them useful. I will let you pay what you feel is fair; all contributions are greatly appreciated and really do help keep this website up and running. If you can’t afford to pay, then just download the LUT’s and enjoy using them. If in the future you use them on a paying project, please remember where you got them and come back and make a contribution. More contributions means more LUT offerings in the future.

Please feel free to share a link to this page if you wish to share these LUT’s with anyone else or anywhere else. But it’s not OK to share or host these on other websites etc.

Here’s the link to download my offset S-Log3 Camera LUTs

To make a contribution please use the drop-down menu here; there are several contribution levels to choose from.


The PXW-FX9 is NOT made out of plastic.

I’ve been quite surprised by the number of people out there that seem to think the PXW-FX9 is made out of plastic. It isn’t. It’s made from an incredibly strong material called magnesium alloy. This metal is stiffer and stronger than aluminium. It’s highly impact and corrosion resistant but still extremely light. There are only a couple of places where plastic is used, one being the cover over the WiFi antenna, where a metal cover would block the wireless signals. Here are some pictures of the FX9’s chassis. Note the box shaped areas used to isolate the electronics from the air that flows through the camera, ensuring the electronics are weather sealed. Also note all the ribbing and reinforcing on the inside at the front of the camera, where the lens mount and sensor block are attached, keeping it all very solid.

To Check Or Not To Check In Your Camera?

You’re going on an overseas shoot and trying to decide whether to check in your camera or take it as carry-on on the flight. What should you do, and which is best?

24.8 million checked bags went missing in 2018, so it’s not a small problem.

Europe is the worst with 7.29 bags per 1,000 passengers annually, then it’s 2.85 in North America and only 1.77 in Asia.
So if you’re in Europe and travelling with, say, 3 bags – camera, tripod, lights – then statistically you’re going to lose a bag around once every 45 flights (22.5 return journeys). The statistics actually fit well with my own experience of a checked-in bag going missing about once every 2 years. Most of the time they do turn up eventually, but if you need the gear for a shoot this can often be too late, especially if the location is remote or a long way from an airport.
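For anyone checking the arithmetic: treating that European figure as roughly 7.29 mishandled bags per 1,000 bags checked, three checked bags per flight gives about 3 x 7.29 / 1000 = 0.022 lost bags per flight, or one loss roughly every 45 flights.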
Some years back I had a huge flightcase with a complete edit system in it disappear on a flight. It didn’t show up again until a couple of years later, found by the airline quite literally on the wrong side of the planet. How you lose something that size for years is beyond me. But stuff does go missing. This case eventually found its way back because my name and address were inside it. And that’s an important point. Make sure your contact details are on your luggage and IN your luggage. On the outside I only put my mobile phone number, as there are criminal gangs that will look for addresses on luggage, knowing that there’s a higher than normal chance that your home or business property may be unattended while you are out of the country.

Another thing to think about is how tags get attached to your luggage. If the bag is a hold-all type bag with two straps, often the check-in agent will put the baggage tag around both carry handles. If a baggage handler then picks the bag up by a single handle this can cause the tag to come off. Baggage tags also have a small additional bar code sticker on the very end of the tag. This is supposed to be stuck onto the luggage so that if the main tag comes off, the bag can still be scanned and tracked. But often the check-in agents don’t bother sticking it on.

If you have ever worked airside at an airport, you’ll often see small piles of luggage stacked in corners where it has fallen off luggage belts, or worse still, bags lying on the outside of bends on the airport service roads, often in the rain or snow, having fallen from luggage bins or luggage trucks. Many airports employ people just to drive around, pick up this stuff, throw it into a truck and then dump it in a central area for sorting. Most of it will eventually find its owners, but plenty won’t, which is why there are now many specialist auction houses that sell off lost luggage on behalf of the airlines and airports.

Also, what happens if you get caught up in an IT failure or baggage handlers’ industrial action? Your valuable kit could end up in limbo for weeks.

So, I recommend that where you can you take your camera as carry-on. Also do remember that any lithium batteries MUST be taken as carry-on. Tripod, lights etc. can go in the hold. If they go missing it is a complete pain, but you can probably still shoot if you have the camera, a lens and a couple of batteries.

Are LUT’s Killing Creativity And Eroding Skills?

I see this all the time: “which LUT should I use to get this look?” or “I like that, which LUT did you use?”. Don’t get me wrong, I use LUT’s and they are a very useful tool, but the now almost default resort to adding a LUT to log and raw material is killing creativity.

In my distant past I worked in and helped run a very well known post production facilities company. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client’s description, create the look they wanted with the controls on the equipment we had. This was a Digibeta tape to tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.

But the thing is we didn’t have LUT’s. We had knobs, dials and switches. We had to understand how to use the tools that we had to get to where the client wanted to be. As a result every project would have a unique look.

Today the software available to us is incredibly powerful and a tiny fraction of the cost of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.

You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows, for example FCPX automatically applying a default LUT to log footage. Another is the belief that LUT’s are how you grade, with everyone then using the same few LUT’s on everything they shoot.

This creates two issues.

1: Everything looks the same – BORING!!!!

2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.

How many of the “slap on a LUT” clan realise that LUT’s are camera and exposure specific? How many realise that LUT’s can introduce banding and other image artefacts into footage that might otherwise be pristine?

If LUT’s didn’t exist people would have to learn how to grade. And when I say “grade” I don’t mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation. For example separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently. People would need to learn how to create colour contrast as well as brightness contrast. How to make highlights roll off in a pleasing way, all those things that go into creating great looking images from log or raw footage.

Then, perhaps, because people are doing their own grading they would start to better understand colour, gamma, contrast etc, etc. Most importantly because the look created will be their look, from scratch, it would be unique. Different projects from different people would actually look different again instead of each being a clone of someone else’s work.

LUT’s are a useful tool, especially on set for an approximation of how something could look. But in post production they restrict creativity and many people have no idea of how to grade and how they can manipulate their material.

Temporal Aliasing – Beware!

As camera resolutions increase and the amount of detail and texture that we can record increases, we need to be more and more mindful of temporal aliasing.

Temporal aliasing occurs when the differences between the frames in a video sequence create undesirable sequences of patterns that move from one frame to the next, often appearing to travel in the opposite direction to any camera movement. The classic example of this is the wagon wheels going backwards effect often seen in old cowboy movies. The camera’s shutter captures the spokes of the wheels in a different position in each frame, but the timing of the shutter relative to the position of the spokes means that the wheels appear to go backwards rather than forwards. This was almost impossible to prevent with film cameras that were stuck with a 180 degree shutter, as there was no way to blur the motion of the spokes so that they were contiguous from one frame to the next. A 360 degree shutter would have prevented this problem in most cases. But it’s also reasonable to note that at 24fps a 360 degree shutter would have introduced an excessive amount of motion blur elsewhere.

Another form of temporal aliasing that often occurs is when you have rapidly moving grass, crops, reeds or fine branches. Let me try to explain:

You are shooting a field of wheat and the stalks are very small in the frame, almost too small to discern individually. As the stalks of wheat move left, perhaps blown by the wind, each stalk will be captured in each frame a little more to the left, perhaps by just a few pixels. But in the video they appear to be going the other way. This is because every stalk looks the same as all the others, and in the following captured frame the original stalk may have moved, say, 6 pixels to the left – but now there is also a different, identical-looking stalk just 2 pixels to the right of where the original was. Because both stalks look the same it appears that the stalk has moved right instead of left. As the wind speed and the movement of the stalks change, they may appear to move randomly left or right, or a combination of both. The image looks very odd, often a jumbled mess, as perhaps the tops of the stalks appear to move one way while the lower parts appear to go the other.
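A quick way to see the numbers behind this, using made up figures: if identical-looking stalks are spaced 8 pixels apart and the real movement is 6 pixels to the left per frame, the nearest identical stalk to the original position is now 2 pixels to the right, and that is the movement your eye (or any motion estimation) latches onto. This little sketch just wraps the true per-frame shift into the range of plus or minus half the pattern spacing to give the apparent shift.

    # Illustration with made-up numbers: the apparent motion of a repeating pattern.
    # spacing    - distance in pixels between identical-looking stalks
    # true_shift - real movement per frame in pixels (negative = left)
    def apparent_shift(true_shift, spacing):
        # wrap the true shift into the range -spacing/2 to +spacing/2
        return ((true_shift + spacing / 2) % spacing) - spacing / 2

    print(apparent_shift(-6, 8))   # real: 6 px left  -> appears to move 2 px right
    print(apparent_shift(-3, 8))   # real: 3 px left  -> still appears 3 px left
    print(apparent_shift(-14, 8))  # real: 14 px left -> also appears 2 px right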

There is a great example of temporal aliasing here in this clip on Pond5 https://www.pond5.com/stock-footage/item/58471251-wagon-wheel-effect-train-tracks-optical-illusion-perception

Notice in the Pond5 clip how it’s not only the railway sleepers that appear to move in the wrong direction or at the wrong speed; the stones between the sleepers also look like some kind of boiling noise.

As with the old movie wagon wheels, one thing that makes this worse is the use of too fast a shutter speed. The more you freeze the motion of the offending objects or textures in each frame, the higher the risk of temporal aliasing with moving textures or patterns. Often a slower shutter speed will introduce enough motion blur that the motion looks normal again. You may need to experiment with different shutter speeds to find the sweet spot where the temporal aliasing goes away or is minimised. If shooting at 50fps or faster, try a 360 degree (1/50th) shutter, as by the time you get to a 1/50th shutter, motion is already as crisp as it needs to be for most types of shots, unless you are intending to do some form of frame-by-frame motion analysis.
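If it helps to switch between thinking in shutter angles and shutter speeds, the conversion is simply exposure time = (angle / 360) / frame rate. A couple of worked examples:

    # Convert a shutter angle and frame rate into an exposure time in seconds.
    def shutter_time(frame_rate, angle_degrees):
        return (angle_degrees / 360.0) / frame_rate

    print(shutter_time(24, 180))  # 24fps, 180 degrees -> 1/48s (about 0.0208s)
    print(shutter_time(50, 360))  # 50fps, 360 degrees -> 1/50s (0.02s)
    print(shutter_time(50, 180))  # 50fps, 180 degrees -> 1/100s (0.01s)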

Using User Files and All Files to Speed Up Switching Modes on the FX9.

Sometimes changing modes or frame rates on the FX9 can involve the need to change several settings. For example if you want to go from shooting Full Frame 6K at 23.98fps to shooting 120fps then you need to change the sensor scan mode before you can change the frame rate. One way to speed up this process is to use User Files or All Files to save your normal operating settings. Then instead of going through pages of menu settings you just load the appropriate file.

All Files save just about every single adjustable setting in the camera – everything from your white balance settings to LUT’s, network settings and any menu customisations. User Files save a bit less; in particular, User Files can be set so that they don’t change the white balance. For this reason, for things like changing the scan mode and frame rate, I prefer to use User Files.

You can add the User File and/or All File menu items to the user menu. If you place them at the top of the user menu, when you enter the camera’s menu system for the first time after powering it on they will be the very first items listed.

Both User Files and All Files are found under the “project” section in the FX9 menu system. The files are saved to an SD card in the SD Card Utility slot. This means you can easily move them from one camera to another.

Before you save a file you first have to give it a name. I recommend that the name includes the scan mode, for example “FF6K” or “2KS35”, the frame rate, and whether it’s CineEI or not.
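As an illustration (these are just my own made up names, not anything built into the camera), a pair of files might be named “FF6K 2398 EI” and “2KS35 120 CUST”. Whatever convention you choose, keep it consistent so you can tell the files apart at a glance when they are listed on the SD card.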

Then save your file to the SD card. When loading a User File, the “load customize data” option determines whether the camera will load any changes you have made to the user menu. “Load white data” determines whether the camera will load and overwrite the current white balance setting with the one saved in the file. When loading an All File, the white balance and any menu customisations are always loaded regardless, so your current white balance setting will be overwritten by whatever is in the All File. You can, however, choose whether to load any network user names and passwords.