Category Archives: Workflow

How To Live Stream With The Sony PXW-Z90 and NX80.

The Sony PXW-Z90 is a real gem of a camcorder. It’s very small yet packs a 1″ sensor, has real built-in ND filters and broadcast codecs, and produces a great image. On top of all that it can also stream live directly to Facebook and other similar platforms. In this video I show you how to set up the Z90 to stream live to YouTube; Facebook is similar. Sony’s NX80 is very similar and can live stream in the same way.

 


ProRes Raw Now In Adobe Creative Cloud Mac Versions!

Hooray! Finally ProRes Raw is supported in both the Mac and Windows versions of Adobe Creative Cloud. I’ve been waiting a long time for this. While the FCP workflow is solid and works, I’m still not the biggest fan of FCP-X. I’ve been a Premiere user for decades, although recently I have switched almost 100% to DaVinci Resolve. What I would really like to see is ProRes Raw in Resolve, but I’m guessing that while Blackmagic continue to push Blackmagic Raw that will perhaps not come. You will need to update your apps to the latest versions to gain the ProRes Raw functionality.

ProRes Raw for Windows Released (beta)

Some time ago Atomos indicated that you would be able to use ProResRaw in Adobe Premiere. Well – now you can, on a Windows machine at least.

Apple have just released a beta version of ProRes Raw for Windows that is designed to work with Adobe’s Creative Cloud applications, including Premiere Pro, After Effects and Premiere Rush.

I haven’t been able to try it yet as I’m mainly Mac based. But if you want to give it a go you can download the beta from Apple by clicking here.

Edit: It appears that as well as this plugin you may need to be running the latest beta versions of the Adobe applications. Please don’t ask me where to get the beta applications as I have no idea.

Live Streaming From an FS5

Do you have an FS5 and want to stream to Facebook or YouTube? It’s actually fairly straightforward and you don’t even need to buy anything extra! You can even connect a couple of FS5s to a single computer and switch between them.

How do you do it?

First you will need to download and install two pieces of free software on your computer. The first is VLC. VLC is an open source video player, but it can also act as a media server that receives the UDP video stream the FS5 sends and turns it into a live video source on the computer. The computer and the camera both need to be connected to the same wifi network, and you will need to enter the IP address of the computer into the streaming server settings in the FS5. With the FS5 connected to your computer over the network you can use VLC to decode the UDP stream. Go to “File”, “Open Network”, click on “Open RTP/UDP stream” and enter the computer’s IP address and the stream port. You should then save the FS5 stream as a playlist in VLC.

The next piece of software that you need is OBS Studio (Open Broadcaster Software).

OBS is a clever open source streaming application that can convert any video feed connected to a computer into a web stream. From within OBS you can set the signal source to VLC and then the stream from the FS5 will become one of the “scenes” or inputs that OBS can stream to Facebook, YouTube etc.

For multi-camera work, use a different port for each of the UDP streams, and then in VLC save each stream as a different playlist. Each playlist can then be attached to a different scene in OBS so that you can switch, cut and mix between them.
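If you would rather script this than save each playlist by hand, the multi-camera setup above can be sketched in a few lines of Python. This is just an illustration: the camera names, ports and file name are placeholders, and VLC will open the resulting M3U file like any other playlist.

```python
def udp_playlist(cameras, path="fs5_streams.m3u"):
    # Build an M3U playlist with one UDP stream entry per camera.
    # VLC's udp://@:PORT form means "listen for UDP on this port".
    lines = ["#EXTM3U"]
    for name, port in cameras:
        lines.append(f"#EXTINF:-1,{name}")
        lines.append(f"udp://@:{port}")
    playlist = "\n".join(lines) + "\n"
    with open(path, "w") as f:
        f.write(playlist)
    return playlist

# Each FS5 must be set to stream to a different port on the computer.
udp_playlist([("FS5 A", 1234), ("FS5 B", 1235)])
```

Open the file in VLC and each entry becomes a source you can hand to a separate OBS scene.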

 

Streaming and Live Feeds.

With some difficult times ahead and the need for most of us to minimise contact with others, there has never been a greater need for streaming and online video services than now.

I’m setting up some streaming gear in my home office so that I can do some presentations and online workshops over the coming weeks.

I am not an expert on this and although I did recently buy a hardware RTMP streaming encoder, like many of us I didn’t have a good setup for live feeds and streaming.

So like so many people I tried to buy a Blackmagic Design ATEM Mini, which is a low cost all-in-one switcher and streaming device. But guess what? They are out of stock everywhere with no word on when more will become available. So I have had to look at other options.

The good news is that there are many options. There is always your mobile phone, but I want to be able to feed several sources including camera feeds, the feed from my laptop and the video output from a video card. 

OBS to the rescue!

The good news is that there is a great piece of open source software available: OBS Studio, from the Open Broadcaster Software project.

Open Broadcast Studio Software.

 

OBS is a great piece of software that can convert almost any video source connected to a computer into a live stream that can be sent to most platforms, including Facebook and YouTube. If the computer is powerful enough it can switch between different camera and audio sources. If you follow the tutorials on the OBS website it’s pretty quick and easy to get up and running.

So how am I getting video into the laptop that’s running OBS? I already had a Blackmagic Mini Recorder, which is an HDMI and SDI to Thunderbolt input adapter, and I shall be using this to feed the computer. There are many other options, but the BM Mini Recorders are really cheap and most dealers stock them, as does Amazon. It’s HD only, but for this I really don’t need 4K or UHD.

Blackmagic Mini Recorder HDMI and SDI to thunderbolt input adapter.

 

Taking things a step further, I also have both an Atomos Sumo and an Atomos Shogun 7. Both of these monitor/recorders can act as a 4 channel vision switcher. The great thing about these compared to the Blackmagic ATEM is that you can see all your sources on a single screen and you simply touch the source that you wish to go live. A red box appears around that source and it is output from the device.

The Atomos Sumo and the Shogun 7 can both act as 4 input vision switchers.

 

So now I have the ability to stream a feed via OBS from the SDI or HDMI input on the Blackmagic Mini Recorder, fed from one of 4 sources switched by the Atomos Sumo or Shogun 7. A nice little micro studio setup. My sources will be my FS5 and FX9, and I can use my Shogun as a video player. For workflow demos I will use another laptop or my main edit machine, feeding the video output from DaVinci Resolve via a Blackmagic Mini Monitor, which is similar to the Mini Recorder except that it is an output device with SDI and HDMI outputs. The final source will be the HDMI output of the edit computer so you can see the desktop.

Don’t forget audio. You can probably get away with very low quality video and still get your message across. But if the audio is hard to hear or difficult to understand, people won’t want to watch your stream. I’m going to be feeding a lavalier (tie clip) mic directly into the computer and OBS.

I think my main reason for writing this was to show that many of us probably already have most of the tools needed to put together a small streaming package. Perhaps you can offer this as a service to clients that now need to think about online training or meetings. I was lucky enough to already have all the items listed in this article; the only extra I have had to buy is a second Thunderbolt cable, as I only had one. But even if you don’t have a Sumo or Shogun 7 you can still use OBS to switch between the camera on your laptop and any other external inputs. The OBS software is free and very powerful, and it really is the keystone to making this all work.

I will be starting a number of online seminars and sessions in the coming weeks. I do have some tutorial videos that I need to finish editing first, but once that’s done expect to see lots of interesting online content from me.  Do let me know what topics you would like to see covered and subject to a little bit of sponsorship I’ll see what I can do.

Stay well people. This will pass and then we can all get back on with life again.

Are LUTs Killing Creativity And Eroding Skills?

I see this all the time: “which LUT should I use to get this look?” or “I like that, which LUT did you use?”. Don’t get me wrong, I use LUTs and they are a very useful tool, but the now almost default resort to adding a LUT to log and raw material is killing creativity.

In my distant past I worked in and helped run a very well known post production facilities company. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client’s description, create the look they wanted with the controls on the equipment we had. This was a Digibeta tape to tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.

But the thing is, we didn’t have LUTs. We had knobs, dials and switches. We had to understand how to use the tools that we had to get to where the client wanted to be. As a result every project would have a unique look.

Today the software available to us is incredibly powerful and a tiny fraction of the cost of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.

You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows, for example FCP-X automatically applying a default LUT to log footage. Another is the belief that LUTs are how you grade, and then everyone using the same few LUTs on everything they shoot.

This creates two issues.

1: Everything looks the same – BORING!!!!

2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.

How many of the “slap on a LUT” clan realise that LUTs are camera and exposure specific? How many realise that LUTs can introduce banding and other image artefacts into footage that might otherwise be pristine?

If LUTs didn’t exist, people would have to learn how to grade. And when I say “grade” I don’t mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation. For example, separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently. People would need to learn how to create colour contrast as well as brightness contrast, how to make highlights roll off in a pleasing way, and all those other things that go into creating great looking images from log or raw footage.

Then, perhaps, because people are doing their own grading they would start to better understand colour, gamma, contrast etc, etc. Most importantly because the look created will be their look, from scratch, it would be unique. Different projects from different people would actually look different again instead of each being a clone of someone else’s work.

LUTs are a useful tool, especially on set for an approximation of how something could look. But in post production they restrict creativity, and many people have no idea how to grade and how they can manipulate their material.

New s709 LUT For The FX9 That’s Less Green Than The Sony LUT.

Many users of the FX9 that have been shooting S-Log3 are finding that when they add the standard Sony version of the s709 LUT, their pictures have a slight green tint. I believe this is because the s709 LUT was originally designed for the Sony Venice camera, and the FX9 is very slightly different. I recently created an experimental LUT to minimise this tint, but some people found it tended to push some images slightly magenta.

So I now have a new version of the LUT which really does help combat the green tint. The difference between this LUT and Sony’s original s709 LUT is very small. The idea isn’t to create a new look, just to help get rid of the tint. So you won’t see a big difference, it’s subtle, but I think it really is better.

Click Here to download the ACs709 For FX9 LUT set.

Note: These LUTs are for S-Log3 and S-Gamut3.cine from the FX9. As usual I have included different versions of the LUT. There are 65x LUTs suitable for grading, as well as 33x LUTs for monitors or grading software that doesn’t support the higher quality 65x LUTs. There are also minus1 and minus2 LUTs with 1 and 2 stop exposure shifts for footage that has been shot brighter than the base exposure. In addition I have included the same LUTs with Legal Range input levels, for use with Atomos and other recorders that record ProRes using Legal Range.
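For anyone curious what a minus1 or minus2 exposure shift LUT is actually doing, here is a rough Python sketch built from Sony’s published S-Log3 encode/decode formulas. The function names are mine, it only builds a 1D tone curve, and a real grading LUT is a 3D .cube that also handles the gamut side, so treat this as an illustration rather than a substitute for the downloadable LUTs.

```python
import math

def slog3_encode(x):
    # Sony S-Log3: scene reflectance in, normalized code value (0..1) out.
    if x >= 0.01125:
        return (420.0 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

def slog3_decode(v):
    # Exact inverse of the encode curve above.
    if v >= 171.2102946929 / 1023.0:
        return 10 ** ((v * 1023.0 - 420.0) / 261.5) * 0.19 - 0.01
    return (v * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def exposure_shift_lut(stops=-1, size=1024):
    # 1D LUT: decode each code value to linear light, apply a gain
    # of 2^stops (minus1 = halve, minus2 = quarter), then re-encode.
    return [slog3_encode(slog3_decode(i / (size - 1)) * 2 ** stops)
            for i in range(size)]
```

Note that middle grey (18% reflectance) encodes to exactly 420/1023, about 41%, which is why S-Log3 grey is exposed at 41 IRE at base exposure.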

Please feel free to share a link to this page if you wish to share these LUTs with anyone else or anywhere else. But please only share via a link to this page.

If you find these LUTs useful please consider buying me a coffee or other drink. To make a contribution please use the drop down menu here; there are several contribution levels to choose from.


 


Why Does S-Log Recorded Internally Look Different To S-Log Recorded On An External Recorder?

I have written about this many times before, but I’ll try to be a bit more concise here. So – you have recorded S-Log2 or S-Log3 on your Sony camera and at the same time recorded on an external ProRes recorder such as an Atomos, Blackmagic or other ProRes recorder. But the pictures look different and they don’t grade in the same way. It’s a common problem. Often the external recording will look more contrasty, and when you add a LUT the blacks and shadow areas come out very differently.

Video signals can be recorded using several different data ranges, and S-Log2 and S-Log3 signals are always Data Range. When you record in the camera, the camera adds information to the recording, called metadata, that tells your editing or grading software that the material is Data Range. This way the edit and grading software knows how to correctly handle the footage and how to apply any LUTs.

However, an external recorder doesn’t add this extra metadata. It records the Data Range signal that comes from the camera, but without the metadata. The ProRes codec is normally used for Legal Range video, and by default, unless there is metadata that says otherwise, edit and grading software will assume any ProRes recording to be Legal Range. So your edit software takes the file, assumes it’s Legal Range and handles it as a Legal Range file, when in fact the data in the file is Data Range. This transposes the recorded levels into incorrect levels for processing. When you add a LUT it will look wrong, perhaps with very dark shadows or very bright, over exposed looking highlights. It can also limit how much you can grade the footage.

What Can We Do About It?

Premiere CC: You don’t need to do anything in Premiere for the internal .mp4 or MXF recordings; they are handled correctly. But Premiere isn’t handling the ProRes files correctly.
My approach for this has always been to use the legacy Fast Color Corrector filter to transform the input range to the required output range. If you apply the Fast Color Corrector filter to a clip you can use the input and output level sliders to set the input and output range. In this case we need to set the output black level to CV16 (as that is Legal Range black) and the output white level to CV235 to match Legal Range white. If you do this you will see that the external recording has almost exactly the same values as the internal recording. However, there is some non-linearity in the transform; it’s not quite perfect.
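The mapping the sliders perform is simple arithmetic. Here is a minimal sketch for 8 bit values (the function name is mine, not anything in Premiere): the full 0–255 range is squeezed into the legal 16–235 range, so that when the software later expands what it believes is a Legal Range file, the levels land back where they should be.

```python
def full_to_legal(v):
    # Map an 8-bit full/data-range code value (0-255) into legal range
    # (black = 16, white = 235), mirroring output level sliders set
    # to 16/235 in the Fast Color Corrector.
    return round(16 + v * (235 - 16) / 255)
```

For example full-range black (0) maps to 16 and full-range white (255) maps to 235.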
Using the legacy “fast color corrector” filter to transform the external recording to the correct range within Premiere.
Now when you apply a LUT, the picture and the levels are more or less what you would expect and almost identical to the internal recordings. I say almost because there is a slight hue shift, and I don’t know where it comes from. In Resolve the internal and external recordings look pretty much identical and there is no hue shift; in Premiere they are not quite the same. My recommendation – use Resolve, it’s so much better for anything that needs any form of grading or colour correction.

DaVinci Resolve: It’s very easy to tell Resolve to treat the clips as Data Range recordings. In the media bin, right click on the clip and under “Clip Attributes” change the input range from “Auto” to “Full”. If you don’t do this, DaVinci Resolve will assume the ProRes file to be Legal Range and will scale the clip incorrectly in the same way as Premiere does. But if you tell Resolve the clip is Full Range then it is handled correctly.

Don’t Convert Raw to ProRes Before You Do Your Recording.

This comes up again and again, hence why I am writing about it once again.
Raw should never be converted to log before recording if you want any benefit from the raw. You may as well just record the 10 bit log that most cameras are capable of internally, or take log and output it via the camera’s 10 bit output (if it has one) and record that directly on the ProRes recorder. If you convert between different recording types you will always reduce the image quality, and this is about as bad a way to do it as you can get. This mainly relates to cameras like the PXW-FS7. The FS5 is different because its internal UHD recordings are only 8 bit, so even though the raw is still compromised by converting it to ProRes log, this can still be better than the internal 8 bit log.
S-Log, like any other log, is a compromise recording format. Log was developed to squash a big dynamic range into the same sized recording bucket as would normally be used for conventional low dynamic range gammas. It does this by discarding a lot of tonal and textural information from everything brighter than 1 stop above middle grey: instead of the amount of data doubling for each stop you go up in exposure, it’s held at a constant amount. Normally this is largely transparent, as human vision is less acute in the highlight range, but it is still a compromise.
The idea behind linear raw is that it should give nothing away: each stop SHOULD contain double the data of the one below. But 12 bit data recorded that way would only allow you to record 11 stops of dynamic range, as you would quickly run out of code values. So Sony have to use floating point math, or something very similar, to reduce the size of each stop by dividing down the number of code values each stop has. This has almost no impact on highlights, where you start off with hundreds or thousands of values, but in the shadows, where a stop may only have 8 or 16 values, dividing by 4 means you now only have 2 or 4 tonal levels. So once again this is a compromise recording format. To record a big dynamic range using linear, what you really need is 16 bit data.
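The shrinking number of code values per stop is easy to see with a little arithmetic. A quick Python sketch (the helper name is mine):

```python
# Code values available per stop in a pure linear encode: each stop
# down from the brightest halves the count. With 12 bits the top stop
# gets 2048 values; eleven stops down only a single value is left.
def values_per_stop(bits, stops):
    return [2 ** (bits - 1 - s) for s in range(stops)]

twelve_bit = values_per_stop(12, 12)   # 2048, 1024, 512, ... 2, 1
# Divide a deep-shadow stop's 8 values by 4 and just 2 tonal levels remain.
```

This is why 12 bit linear on its own tops out around 11 stops, and why the shadows take the hit when the per-stop values are divided down to extend the range.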
In summary so far:
S-Log reduces the number of highlight tonal values to fit a big DR in a normal sized bucket.
Sony’s FS Raw, 12 bit linear, reduces the number of tonal values across the entire range to fit it in a compact 12 bit recording bucket, on the assumption that the recording will stay at least 12 bit. The greatest impact of the reduction is in the shadows.
Convert 12 bit linear to 10 bit S-Log and now you are compromising both the highlight range and the shadow range. You have the worst of both: you have 10 bit S-Log, but with much less shadow data than the S-Log straight from the camera. It’s really not a good thing to do, and the internally generated S-Log won’t have its shadows compromised in the same way.
If you have even the tiniest bit of under exposure, or you attempt to lift the shadows in any way, this will accentuate the reduced shadow data, and banding is highly likely as the values become stretched even further apart when you bring them up the output gamma range.
If you expose brightly and then bring the levels down in the grade, this has the effect of compressing the values closer together, pushing them further down the output curve and closing them together as they go down the output gamma range; this reduces banding. It is one of the reasons why exposing more brightly can often help both log and raw recordings. So a bit of over exposure might help, but any under exposure is really, really going to hurt. Again, you would probably be better off using the internally generated S-Log.
To make matters worse there is also often an issue with S-Log in a ProRes file.
If all that is not enough, there is also a big problem in the way ProRes files record S-Log. S-Log should always be recorded as Full Range data. When you record an internal XAVC file, the metadata in the clip tells the edit or grading software that the file is Full Range. Then when you apply a LUT or do your grading, the correct transforms occur and all shadow textures are preserved. But ProRes files are by default treated as Legal Range files. So when you record Full Range S-Log inside a ProRes file there is a high likelihood that your edit or grading software will handle the data in the clip incorrectly, and this too can lead to problems in the shadows, including truncated data, clipping and banding, even though the actual recorded data may be OK. This is purely a metadata issue; grading software such as DaVinci Resolve can be forced to treat the ProRes files as Full Range.
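To illustrate the damage, this is roughly the expansion an application applies when it assumes an 8 bit clip is Legal Range. Fed Full Range data, it pushes the shadows down and clips anything below code value 16 (the function name is mine, and the S-Log3 black figure is approximate):

```python
def legal_expand(v):
    # What software does to an 8-bit value it believes is legal range:
    # stretch 16-235 out to 0-255, clipping anything outside that range.
    return max(0, min(255, round((v - 16) * 255 / (235 - 16))))

# S-Log3 black sits at roughly code value 24 in 8-bit full range;
# misread as legal range it gets pushed down to 9 - crushed shadows.
```

This is why the misinterpreted external recording looks more contrasty, and why forcing the software to treat the clip as Full Range restores the correct levels.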
 
 
More on S-Log and ProRes files here: http://www.xdcam-user.com/2019/03/sonys-internal-recording-levels-are-correct/

DaVinci Resolve 16.1.2 Released.

Blackmagic Design have just released the latest update to DaVinci Resolve. If you have been experiencing crashes when using XAVC material from the PXW-FX9 I recommend you download and install this update.

If you are not a Resolve user and are struggling with grading or getting the very best from any log or raw camera, then I highly recommend you take a look at DaVinci Resolve. It’s also a very powerful edit package, and the best bit is that the free version supports most cameras. If you need full MXF support you will need to buy the Studio version, but with a one-off cost of only $299 USD it really is a bargain, and gets you away from any horrid subscription services.

https://www.blackmagicdesign.com/support/family/davinci-resolve-and-fusion