I received this timely reminder from the guys at Pag Batteries and it contains important information even if you don’t have one of Pag’s excellent batteries. The main point being that you should not store lithium batteries fully charged.
With some difficult times ahead and the need for most of us to minimise contact with others, there has never been a greater need for streaming and online video services than now.
I’m setting up some streaming gear in my home office so that I can do some presentations and online workshops over the coming weeks.
I am not an expert on this and although I did recently buy a hardware RTMP streaming encoder, like many of us I didn’t have a good setup for live feeds and streaming.
So like so many people I tried to buy a Blackmagic Design ATEM, which is a low cost all-in-one switcher and streaming device. But guess what? They are out of stock everywhere, with no word on when more will become available, so I have had to look at other options.
The good news is that there are many options. There is always your mobile phone, but I want to be able to feed several sources including camera feeds, the feed from my laptop and the video output from a video card.
OBS to the rescue!
The good news is that there is a great piece of open source software called OBS – Open Broadcaster Software – better known as the OBS Studio streaming application.
OBS is a great piece of software that can convert almost any video source connected to a computer into a live stream that can be sent to most platforms, including Facebook, YouTube and others. If the computer is powerful enough it can switch between different camera and audio sources. If you follow the tutorials on the OBS website it’s pretty quick and easy to get up and running.
So how am I getting video into the laptop that’s running OBS? I already had a Blackmagic Mini Recorder, which is an HDMI and SDI to Thunderbolt input adapter, and I shall be using this to feed the computer. There are many other options, but the BM Mini Recorders are really cheap and most dealers stock them, as does Amazon. It’s HD only, but for this I really don’t need 4K or UHD.
Taking things a step further I also have both an Atomos Sumo and an Atomos Shogun 7. Both of these monitor/recorders have the ability to act as a 4 channel vision switcher. The great thing about these compared to the Blackmagic Atem is that you can see all your sources on a single screen and you simply touch on the source that you wish to go live. A red box appears around that source and it’s output from the device.
So now I have the ability to stream a feed via OBS from the SDI or HDMI input on the Blackmagic Mini Recorder, fed from one of 4 sources switched by the Atomos Sumo or Shogun 7 – a nice little micro studio setup. My sources will be my FS5 and FX9. I can use my Shogun as a video player. For workflow demos I will use another laptop or my main edit machine, feeding the video output from DaVinci Resolve via a Blackmagic Mini Monitor, which is similar to the Mini Recorder except that it is an output device with SDI and HDMI outputs. The final source will be the HDMI output of the edit computer so you can see the desktop.
Don’t forget audio. You can probably get away with very low quality video to get many messages across. But if the audio is hard to hear or difficult to understand then people won’t want to watch your stream. I’m going to be feeding a lavalier (tie clip) mic directly into the computer and OBS.
I think my main reason for writing this was really to show that many of us probably already have most of the tools needed to put together a small streaming package. Perhaps you can offer this as a service to clients that now need to think about online training or meetings. I was lucky enough to already have all the items listed in this article; the only extra I have had to buy is a second Thunderbolt cable, as I only had one. But even if you don’t have a Sumo or Shogun 7 you can still use OBS to switch between the camera on your laptop and any other external inputs. The OBS software is free and very powerful and it really is the keystone to making this all work.
I will be starting a number of online seminars and sessions in the coming weeks. I do have some tutorial videos that I need to finish editing first, but once that’s done expect to see lots of interesting online content from me. Do let me know what topics you would like to see covered and subject to a little bit of sponsorship I’ll see what I can do.
Stay well people. This will pass and then we can all get back on with life again.
As camera resolutions increase and the amount of detail and texture that we can record increases, we need to be more and more mindful of temporal aliasing.
Temporal aliasing occurs when the differences between the frames in a video sequence create undesirable sequences of patterns that move from one frame to the next, often appearing to travel in the opposite direction to any camera movement. The classic example of this is the wagon wheels going backwards effect often seen in old cowboy movies. The camera’s shutter captures the spokes of the wheels in a different position in each frame, but the timing of the shutter relative to the position of the spokes means that the wheels appear to go backwards rather than forwards. This was almost impossible to prevent with film cameras that were stuck with a 180 degree shutter, as there was no way to blur the motion of the spokes so that it was contiguous from one frame to the next. A 360 degree shutter would have prevented this problem in most cases, but it’s also reasonable to note that at 24fps a 360 degree shutter would have introduced an excessive amount of motion blur elsewhere.
Another form of temporal aliasing that often occurs is when you have rapidly moving grass, crops, reeds or fine branches. Let me try to explain:
You are shooting a field of wheat and the stalks are very small in the frame, almost too small to discern individually. As the stalks of wheat move left, perhaps blown by the wind, each stalk will be captured in each frame a little more to the left, perhaps by just a few pixels. But in the video they appear to be going the other way. This is because every stalk looks the same as all the others, and in the following captured frame the original stalk may have moved, say, 6 pixels to the left. But now there is also a different stalk just 2 pixels to the right of where the original was. Because both stalks look the same, it appears that the stalk has moved right instead of left. As the wind speed and the movement of the stalks change, they may appear to move randomly left or right or a combination of both. The image looks very odd, often a jumbled mess, as perhaps the tops of the stalks appear to move one way while lower parts appear to go the other.
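The modular arithmetic behind this (and behind the wagon wheel effect) can be sketched in a few lines of Python. This is only an illustration of the numbers in the example above – the 8 pixel stalk spacing is a hypothetical figure chosen to match the 6-pixels-left / 2-pixels-right case:

```python
def apparent_shift(real_shift_px: float, spacing_px: float) -> float:
    """Smallest displacement consistent with a repeating pattern.

    Because every stalk looks identical, a real shift is aliased into
    the range (-spacing/2, +spacing/2] - the eye picks the nearer match.
    """
    shift = real_shift_px % spacing_px      # wrap into one period
    if shift > spacing_px / 2:
        shift -= spacing_px                 # the nearer identical stalk
    return shift

# Stalks every 8 px, a real move of 6 px left (negative = left):
print(apparent_shift(-6, 8))   # 2 -> appears to move 2 px right
# A smaller move stays below the aliasing threshold:
print(apparent_shift(-3, 8))   # -3 -> still looks correct
```

Once the per-frame shift exceeds half the pattern spacing the apparent direction flips, which is the Nyquist limit applied in time rather than space.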
There is a great example of temporal aliasing here in this clip on Pond5 https://www.pond5.com/stock-footage/item/58471251-wagon-wheel-effect-train-tracks-optical-illusion-perception
Notice in the Pond5 clip how it’s not only the railway sleepers that appear to move in the wrong direction or at the wrong speed; notice too how the stones between the sleepers appear to exhibit some kind of boiling noise.
As with the wagon wheels in old movies, one thing that makes this worse is the use of too fast a shutter speed. The more you freeze the motion of the offending objects or textures in each frame, the higher the risk of temporal aliasing with moving textures or patterns. Often a slower shutter speed will introduce enough motion blur that the motion looks normal again. You may need to experiment with different shutter speeds to find the sweet spot where the temporal aliasing goes away or is minimised. If shooting at 50fps or faster, try a 360 degree 1/50th shutter, as by the time you get to a 1/50th shutter motion is already as crisp as it needs to be for most types of shots, unless you are intending to do some form of frame-by-frame motion analysis.
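The relationship between frame rate, shutter angle and shutter speed used above is simple to work out. A minimal sketch using the figures from this section:

```python
def shutter_time(fps: float, shutter_angle_deg: float) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle_deg / 360.0) / fps

# 180 degrees at 24fps is the classic film look, a 1/48th shutter:
print(1 / shutter_time(24, 180))    # 48.0
# 360 degrees at 50fps gives the 1/50th shutter suggested above:
print(1 / shutter_time(50, 360))    # 50.0
```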
Sometimes changing modes or frame rates on the FX9 can involve the need to change several settings. For example if you want to go from shooting Full Frame 6K at 23.98fps to shooting 120fps then you need to change the sensor scan mode before you can change the frame rate. One way to speed up this process is to use User Files or All Files to save your normal operating settings. Then instead of going through pages of menu settings you just load the appropriate file.
All Files save just about every single adjustable setting in the camera, everything from your white balance settings to LUTs to network settings to any menu customisations. User Files save a bit less. In particular, User Files can be set so that they don’t change the white balance. For this reason, for things like changing the scan mode and frame rate I prefer to use User Files.
You can add the User File and/or All File menu items to the user menu. If you place them at the top of the user menu, when you enter the camera’s menu system for the first time after powering it on they will be the very first items listed.
Both User Files and All Files are found under the “project” section in the FX9 menu system. The files are saved to an SD card in the SD Card Utility slot. This means you can easily move them from one camera to another.
Before you save a file you first have to give it a name. I recommend that the name includes the scan mode, for example “FF6K” or “2KS35”, the frame rate, and whether it’s CineEI or not.
Then save your file to the SD card. When loading a User File, the “load customize data” option determines whether the camera will load any changes you have made to the user menu. “Load white data” determines whether the camera will load and overwrite the current white balance setting with the one saved in the file. When loading an All File, the white balance and any menu customizations are always loaded regardless, so your current white balance setting will be overwritten by whatever is in the All File. You can however choose whether to load any network user names and passwords.
This came out of a discussion about viewfinder brightness where the complaint was that the viewfinder on the FX9 was too bright when compared side by side with another monitor. It got me really thinking about how we judge exposure when looking purely at a monitor or viewfinder image.
To start with I think it’s important to understand a couple of things:
1: Our perception of how bright a light source is depends on the ambient light levels. A candle in a dark room looks really bright, but outside on a sunny day it is not perceived as being so bright. But of course we all know that the light being emitted by that candle is exactly the same in both situations.
2: Between the middle grey of a grey card and the white of a white card there are about 2.5 stops. Faces and skin tones fall roughly half way between middle grey and white. Taking that a step further, between what most people will perceive as black (something like a black card or black shirt) and a white card there are around 5 to 6 stops, and faces will always be roughly 3/4 of the way up that brightness range, somewhere around 4 stops above black. It doesn’t matter whether that’s outside on a dazzlingly bright day in a Middle East desert or on a dull overcast winter’s day in the UK; those relative levels never change.
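Since each stop is a doubling of light, those relative levels are easy to check numerically. A small sketch – the 5.5 stop figure is just the midpoint of the 5 to 6 stop range mentioned above:

```python
def stops_to_ratio(stops: float) -> float:
    """Each stop doubles the light, so the linear ratio is 2 raised to the stop count."""
    return 2.0 ** stops

# About 2.5 stops between middle grey and white:
print(round(stops_to_ratio(2.5), 2))   # 5.66 -> white is ~5.7x middle grey
# A face ~4 stops up a ~5.5 stop black-to-white range:
print(round(4 / 5.5, 2))               # 0.73 -> roughly 3/4 of the way up
```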
Now think about this:
If you look at a picture on a screen and the face is significantly brighter than middle grey and much closer to white than middle grey, what will you think? To most it will almost certainly appear overexposed, because we know that in the real world a face sits roughly 3/4 of the way up the relative brightness range and roughly half way between middle grey and white.
What about if the face is much darker than white and close to middle grey? Then it will generally look underexposed, as relative to black, white and middle grey the face is too dark.
The key point here is that we make these exposure judgments based on where faces and other similar things are relative to black and white. We don’t know the actual intensity of the white, but we do know how bright a face should be relative to white and black.
This is why it’s possible to make an accurate exposure assessment using a 100 Nit monitor or a 1000 Nit daylight viewable monitor. Provided the contrast range of the monitor is correct and black looks black, middle grey is in the middle and white looks white then skin tones will be 3/4 of the way up from black and 1/4 down from white when the image is correctly exposed.
But here’s the rub: If you put the 100 Nit monitor next to the 1000 Nit monitor and look at both at the same time, the two will look very, very different. Indoors in a dim room the 1000 Nit monitor will be dazzlingly bright, meanwhile outside on a sunny day the 100 Nit monitor will be barely viewable. So which is right?
The answer is they both are. Indoors, with controlled light levels or when covered with a hood or loupe then the 100 Nit monitor might be preferable. In a grading suite with controlled lighting you would normally use a monitor with white at 100 nits. But outside on a sunny day with no shade or hood the 1000 Nit monitor might be preferable because the 100 nit monitor will be too dim to be of any use.
Think of this another way: take both monitors into a dark room and take a photo of each monitor with your phone. The phone’s camera will adjust its exposure so that both will look the same, and the end result will be two photos where the screens look the same. Our eyes have irises just like a camera’s and do exactly the same thing, adjusting so that the brightness is within the range our eyes can deal with. So the actual brightness is only of concern relative to the ambient light levels.
This presents a challenge to designers of viewfinders that can be used both with and without a loupe or shade, such as the LCD viewfinder on the FX9, which can be used both with the loupe/magnifier and without it. How bright should you make it? Not so bright that it’s dazzling when using the loupe, but bright enough to be useful on a sunny day without it.
The actual brightness isn’t critical (beyond whether it’s bright enough to be seen or not) provided the perceived contrast is right.
When setting up a monitor or viewfinder it’s the adjustment of the black level and black pedestal that alters the contrast of the image (a control confusingly called “brightness”). This brightness control is the critical one, because if it raises the blacks by too much you make the shadows and mids brighter relative to white and the image less contrasty, so you will tend to expose lower in an attempt to have good contrast and a normal looking mid range. Exposing brighter would make the mids look excessively bright relative to where white is and to the black screen surround.
If the brightness is set too low, pulling the blacks and mids down, you will tend to overexpose in an attempt to see details and textures in the shadows and to bring the mids back to normal.
It’s all about the monitor or viewfinder’s contrast and where everything sits between the darkest and brightest parts of the image. The peak brightness (equally confusingly, set by the contrast control) is largely irrelevant, because our perception of how bright it is depends entirely on the ambient light level; just don’t over-drive the display.
We don’t look at a VF and think – “Ah that face is 100 nits”. We think – “that face is 3/4 of the way up between black and white” because that’s exactly how we see faces in all kinds of light conditions – relative levels – not specific brightness.
So far I have been discussing SDR (standard dynamic range) viewfinders. Thankfully I have yet to see an HDR viewfinder, because an HDR viewfinder could actually make judging exposure more difficult. “White”, such as a white card, isn’t very bright in the world of HDR, and an HDR viewfinder would have a far greater contrast range than just the 5 or 6 stops of an SDR finder. The viewfinder’s peak brightness could well be 10 times or more brighter than the white of a white card. That complicates things, as you would first need to judge and assess where white sits within a very big brightness range. But I guess I’ll cross that bridge when I come to it.
So you have just taken delivery of a brand new PXW-FX9. Turned it on and plugged it in to a 4K TV or monitor – and shock horror there are little bright dots in the image – hot pixels.
First of all, don’t be alarmed, this is not unusual, in fact I’d actually be surprised if there weren’t any, especially if the camera has travelled in any airfreight.
Video sensors have millions of pixels and they are prone to disturbance from cosmic rays. It’s not unusual for some to become out of spec. So all modern cameras incorporate various methods of recalibrating or re-mapping those pesky problem pixels. On the Sony professional cameras this is called APR. Owners of the Sony F5, F55, Venice and FX9 will see a “Perform APR” message every couple of weeks as this is a function that needs to be performed regularly to ensure you don’t get any problems.
You should always run the APR function after flying with the camera, especially on routes over the poles, as cosmic ray exposure is greater in these areas. Also, if you intend to shoot at high gain levels it is worth performing an APR run before the shoot.
If your camera doesn’t have a dedicated APR function, typically found in the maintenance section of the camera menu system, then often the black balance function will have a very similar effect. On some Sony cameras, repeatedly performing a black balance will activate the APR function.
If there are a lot of problem pixels then it can take several runs of the APR routine to sort them all out. But don’t worry, it is normal and it is expected. All cameras suffer from it. Even if you have 1000 dead pixels that’s still only a teeny tiny fraction of the 19 million pixels on the sensor.
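To put that fraction in perspective, a quick back-of-the-envelope check using the 19 million pixel figure above:

```python
total_pixels = 19_000_000   # approximate photosite count mentioned above
dead_pixels = 1_000

# Even a thousand remapped pixels is a vanishingly small proportion:
print(f"{dead_pixels / total_pixels:.4%}")   # 0.0053%
```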
APR just takes 30 seconds or so to complete. It’s also good practice to black balance at the beginning of each day to help minimise fixed pattern noise and set the cameras black level correctly. Just remember to ensure there is a cap on the lens or camera body to exclude all outside light when you do it!
It’s a common problem. You are shooting a performance or event where LED lighting has been used to create dramatic coloured lighting effects. The intense blue from many types of LED stage lights can easily overload the sensor and instead of looking like a nice lighting effect the blue light becomes an ugly splodge of intense blue that spoils the footage.
Well there is a tool hidden away in the paint settings of many recent Sony cameras that can help. It’s called “adaptive matrix”.
When adaptive matrix is enabled and the camera sees intense blue light, such as the light from a blue LED, the matrix adapts and reduces the saturation of the blue colour channel in the problem areas of the image. This can greatly improve the way such lights and lighting look. But be aware that if you are shooting objects with very bright blue colours, perhaps even a bright blue sky, the adaptive matrix may desaturate them if it is turned on. Because of this, adaptive matrix is normally off by default.
If you want to turn it on, it’s normally found in the camera’s paint and matrix settings, and it’s simply a case of setting adaptive matrix to on. I recommend that when you don’t actually need it you turn it back off again.
Most of Sony’s broadcast quality cameras produced in the last 5 years have the adaptive matrix function, that includes the FS7, FX9, Z280, Z450, Z750, F5/F55 and many others.
Having shot quite a bit of S-Log3 content on the new Sony PXW-FX9, I thought I would comment on my exposure preferences. When shooting with an FS5, FS7 or F5, which all use the same earlier generation 4K sensor, I find that to get the best results I need to expose between 1 and 2 stops brighter than the 41% for middle grey that Sony recommends. This is because I find my footage to be noisier than I would like if I don’t expose brighter. So when using CineEI on these cameras I use 800EI instead of the base 2000EI.
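The offset between base ISO and EI converts to stops as a base-2 logarithm. A quick sketch of the arithmetic:

```python
import math

def ei_offset_stops(base_iso: float, ei: float) -> float:
    """Stops of extra exposure (positive) when rating the camera below its base ISO."""
    return math.log2(base_iso / ei)

# Rating a 2000 ISO base camera (FS7/F5) at 800EI opens up about 1.3 stops:
print(round(ei_offset_stops(2000, 800), 2))   # 1.32
# Shooting the FX9 at its 800 base with 800EI means no offset at all:
print(ei_offset_stops(800, 800))              # 0.0
```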
However the FX9 uses a newer, state of the art back illuminated sensor. This more sensitive sensor produces less noise, so with the FX9 I no longer feel it is necessary to expose more brightly than the base exposure – at either of the base ISOs. So if I am shooting using CineEI and the 800 base, I use 800EI. When shooting at the 4000 base, I use 4000EI.
This makes life so much easier. It also means that if you are shooting in a mode where LUTs are not available (such as 120fps HD), you can use the included viewfinder gamma assist function instead. Viewfinder gamma assist adds the same 709(800) look to the viewfinder as you would get from using the camera’s built-in 709(800) LUT. You can use VF gamma assist to help judge your exposure just as you would with a LUT. Basically, if it looks right in the viewfinder, it almost certainly is right.
Testing various FX9s against my Sekonic light meter, the camera’s CineEI ISO ratings seem to be spot on, so I would have no concerns about using a light meter to expose. The camera also has a waveform scope and zebras to help guide your exposure.
VF gamma assist is available in all modes on the FX9, including playback. Just be careful that you don’t have both a LUT and gamma assist on at the same time.
From time to time someone will pop up on a forum or user group with tales of fried SDI boards, dead monitors or dead audio devices. Often the reason for the death of these units seems obscure. One day it all works fine, the next time the monitor is plugged in it stops working.
A common cause of these types of issue is the use of individual power supplies for each device. Most modern power supplies use a technology called “switch mode”. Most “wall wart” power supplies are switch mode. Computers use switch mode power supplies, they are probably the most common type of power supply in use today.
The problem with these power supplies is that the voltage they produce is not tied to a common earth or ground connection. A 12 volt power supply may have an output voltage that measures 12 volts across its positive and negative terminals, which is great. But the negative terminal might be many volts above “ground”. Used singly this is not normally a problem, but if you use a couple of different power supplies whose negative terminals are floating at different voltages and you connect them together, current will flow from one to the other as they establish a common base voltage.
As an example, if you have a monitor powered by one power supply and a camera powered by another, when you connect the monitor to the camera, current may flow down the SDI or HDMI cable from one power supply to the other, damaging the chips that process the SDI/HDMI signals.
Even if there is no damage this current can lead to audio hum or other electrical noise.
How can you prevent this?
First use only high quality power supplies. Wherever possible try to run everything off a single power supply. Powering the camera from a high capacity power supply and then feeding any connected accessories via D-Tap or Hirose outputs on the camera is good practice. Also powering everything by batteries helps. If you must use separate power supplies then connect everything together before connecting anything to the mains and before turning anything on. This should ensure that any current runs through the shield and ground paths in the cables rather than possibly travelling down the delicate signal part of a connection as you connect things together.
This has been asked a couple of times: how do I record the slow motion S&Q output of my PXW-FS5 to an external recorder if I don’t have the raw option or don’t want to use raw?
Well it is possible and it’s quite easy to do. You can do it with either an SDI or HDMI recorder, both will work. The example here is for the new Atomos Ninja V recorder, but the basic idea is the same for most recorders.
Just to be absolutely clear this isn’t a magic trick to give you raw with a conventional non raw recorder. But it will allow you to take advantage of the higher quality codec (normally ProRes) in the external recorder.
Oh and by the way – The Ninja V is a great external monitor and recorder if you don’t want raw or you need something smaller than the Inferno.
So here’s how you do it:
In the camera menu, under “Rec Set”, set the file format to XAVC HD and the Rec Format to 1080/50p or 1080/60p. It MUST be 50p or 60p for this to work correctly.
In “Video Out” select HDMI (for the Ninja; if your recorder has SDI then this works with SDI too).
Set the SDI/HDMI output to 1080p/480i or 1080p/576i. It MUST be p, not i.
Set HDMI TC Output to ON
Set SDI/HDMI Rec Control to ON
Connect the Ninja (or other recorder) via HDMI and on the Ninja under the input settings set the record trigger to HDMI – ON. If you are using a recorder with SDI you should have similar options for the SDI input.
So now what will happen is that when you use the S&Q mode at 100fps or higher the camera will act as normal; you will still need an SD card in the camera. But when the camera copies the slow motion footage from the internal buffer to the SD card, the external recorder will automatically go into record at the same time and record the output stream of the buffer. Once the buffer stream stops, the recorder will stop.
The resulting file will be 50p/60p. So if you want to use it in a 24/25/30p project and get the full slow-mo benefit you will need to tell the edit software to treat the file as a 24/25/30p file to match the other clips in your project. Typically this is done by right clicking on the clip and using the “interpret footage” function to set the frame rate to match the frame rate of your project or other footage.
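If you want to know how much slow motion you will end up with after interpreting the footage, the sums are straightforward. A small sketch – the function name is just for illustration:

```python
def slow_motion_factor(capture_fps: float, project_fps: float) -> float:
    """How many times slower than real time the action plays
    once the clip is interpreted at the project frame rate."""
    return capture_fps / project_fps

# 100fps S&Q material interpreted in a 25p project plays 4x slow:
print(slow_motion_factor(100, 25))   # 4.0
# The same material in a 50p project only gives 2x slow motion:
print(slow_motion_factor(100, 50))   # 2.0
```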
And that’s it. It’s pretty simple to do and you can improve the quality of your files over the internal recordings, although I have to say you’ll be hard pushed to see any difference in most cases as the XAVC is already pretty good.