Tag Archives: production

Multifunction, portable 19″ rack for live, DIT and post production applications.

I have a 6U tall, shallow depth 19″ rack that I put together as a general purpose system to cover a wide variety of applications. In its original form the main units in the rack were a Blackmagic Design ATEM 1 M/E Production Studio 4K switcher, a HyperDeck Studio 4K and an UltraStudio 4K.

My original multi function 19″ rack for live production, DIT and post production tasks.



These units worked very well for me, but they are older and less efficient than the latest models, and this leads to two issues: heat and noise. Even after replacing the fans with special very low noise ones, the units could still be quite noisy at times, and they always ran hot, necessitating extra fans in the back of the rack. When using it for DIT jobs the fan noise could be a problem on a quiet set. The lower efficiency of these older units also means they need more power than their more recent equivalents, and one of my applications is portable live switching and DIT work, perhaps based out of my camper van and sometimes “off grid”. Another investment will be a portable power pack or “solar generator” to run the rack when I don’t have mains power, so efficiency is important. And finally, these are 6G units, so they are limited to 4K 30p; for 4K 60p you need the more recent 12G units.

The rear of my original version of the multi function rack.



With a big job at the Glastonbury Festival coming up, where I would be deeply involved with a special circus spectacular opening the festival in front of a crowd of around 70,000, helping with the video feed to the side screens of the main stage and to the BBC, I decided it was time to upgrade the units to the latest versions.

The updated version of my 19″ multi function rack with new Blackmagic Design units.



My 1 M/E Production Studio 4K was replaced with a Blackmagic ATEM Constellation 4K with 10 inputs and 6 outputs. This has the wonderful benefit of being able to map any of the inputs, or the program or preview bus, to any of the 6 outputs. So, as well as being a great switcher, it can also act as a 10 input, 6 output router. My previous 1 M/E switcher had 3 AUX outputs that could be assigned to any input, but having 6 gives me a lot more routing possibilities. The ability to act as a router is really useful in the DIT role where I may have feeds from multiple cameras that need to be routed to different monitors. The Constellation is also smaller than my old 1 M/E: it’s the same 1U high but less wide, and this freed up some space in the rack for a new Thunderbolt 3 hub. The Thunderbolt 3 hub powers my MacBook Pro and, thanks to the TB3 loop through, also connects it to the new UltraStudio 4K Mini. The UltraStudio Mini is half the size of my old UltraStudio, uses a lot less power and generates a lot less heat. It also has a more up to date Thunderbolt 3 interface. The UltraStudio allows me to perform live grades with DaVinci Resolve when I’m working as a DIT. It also gives me that all important calibrated video output when I’m doing normal grading jobs.

One thing I’m not sure many people realise is how important it is to have something like an UltraStudio or DeckLink card to provide an HDMI or SDI output when editing or grading, rather than trying to use a computer’s built in HDMI output. Computers often use colourspaces different from the ones we use in the world of video production. And while computers do a pretty reasonable job of converting between standards like Rec-709 or HDR10 and the computer’s internal colourspace, very often small errors creep in, and all the extra conversions can introduce artefacts such as banding into the monitoring pipeline. It’s only by using a proper video production oriented output device or card that you can be sure you are monitoring your content in the correct colourspace and without any gamma shifts. The UltraStudio is an absolutely essential piece of kit in my workflow.

I also upgraded the HyperDeck from a 6G HyperDeck Studio 4K to the latest HyperDeck Studio 12G. This new 12G HyperDeck can record to SD cards as well as SSDs and adds H.264 and H.265 codec options. For a live switching job the HyperDeck can be used to record the mixer output, or it can act as a playout device for clip playback, with the playback controlled by the switcher, although most of the time I prefer to play out of a laptop via the UltraStudio. In the DIT role the HyperDeck gets used as a backup recorder to record the camera’s output at the DIT station. This allows me to play back clips almost instantly without having to remove any media from the camera. It’s great for checking for problems, and the footage can also be looped into DaVinci Resolve via the UltraStudio for live grading (one thing the old 6G HyperDeck Studio had was the ability to provide an input into Resolve via Thunderbolt 2, but this one can’t do that). Again, this new HyperDeck runs cooler, quieter and uses less power.

What else is in the rack? 

Rear view of my updated multi function rack.



Well… there is a 5 port Ethernet switch and a small router. The router is there to create a network for the times the rack isn’t connected to an external network such as my home office network. The network is necessary to allow a computer and other devices to control the Constellation switcher and the HyperDeck. The router also provides WiFi access so that a tablet can be used to control the switcher (more on that in a bit).
I have also included an SDI to HDMI mini converter to feed one of the switcher’s 6 SDI outputs to a cheap HDMI to USB dongle. The dongle is then connected to the Thunderbolt hub, and this allows me to convert the switcher output to a UVC compatible video input on the connected laptop.
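Incidentally, because the HDMI to USB dongle shows up on the laptop as a standard UVC webcam, any UVC aware software can read it. As a minimal sketch (assuming the dongle enumerates as capture device 0; on a laptop with a built in webcam it may be device 1 or 2), you could preview the feed with OpenCV:

```python
# Minimal sketch: preview the switcher output arriving via a UVC
# HDMI to USB dongle. Device index 0 is an assumption; adjust it if
# the laptop's built in webcam claims that slot.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # request 1080p if the dongle offers it
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Switcher output", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```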

Using the MixEffect app to control the Constellation from an iPad Pro

 

The main way I use this is to take the switcher’s multiview output and feed it into the MixEffect application on the computer. MixEffect is a brilliant way to control a Blackmagic switcher as it is highly configurable and can run on many different types of device, including tablets. On my MacBook I use MixEffect’s ability to overlay the switcher controls on the live multiview feed, so I can monitor and control the switcher from a single screen. This is very handy for portable setups.

I also have a Blackmagic bidirectional HDMI to SDI converter in the rack, as I often need to bring in an HDMI signal, such as the output from a PC, for event productions. Being bidirectional, it can also be used to take any of the SDI outputs in the rack and convert it to HDMI. The old 6G switcher had a couple of HDMI outputs but the Constellation doesn’t have any, which is a shame.



There is also an Accsoon SeeMo Pro. The SeeMo is connected to one of the switcher’s SDI outputs and I can use it to provide a video feed into an iPad. This allows me to use the MixEffect app on an iPad to both monitor and control the switcher. I’ve been using an old 12.9″ iPad Pro as a low power monitor for those times when I’m running the system off grid. The iPad Pro makes a great monitor: it’s colour accurate and has very low latency.

There is a 12V 10 amp power supply. It provides power for the SeeMo (which in turn powers and charges the iPad Pro). It also powers a small 5 output USB power supply that I use to power the SDI and HDMI converters and the router. The power supply is also connected to a 56V converter that provides PoE power to feed an external 5G/4G router for those times when I need to get an internet connection via the cellphone network. And it gives me the ability to power a small 12V monitor or other 12V accessories from the rack.

On average, with everything running and a 16″ MacBook Pro connected (remember the Thunderbolt hub in the rack also powers the MacBook), the updated rack draws between 175W and 200W. So a relatively affordable 1000Wh solar generator should be able to run it and a monitor for around 4 hours. If I’m using it in the camper van, the van has a 1000Wh power system plus up to 300W of solar (on a bright and sunny day). With enough sun I should be able to run all day, but we can’t rely on the sun in the UK. To be honest, I think I will still need “shore power” mains for longer jobs unless I get enough off grid projects to justify the expense of a 3000Wh+ solar generator.
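The battery maths is simple enough to sanity check yourself. A quick sketch, assuming around 85% efficiency for inverter and conversion losses (my working assumption, not a measured figure):

```python
# Rough battery runtime estimate for the rack. The 0.85 efficiency
# factor for inverter/conversion losses is an assumption.
def runtime_hours(capacity_wh, load_w, efficiency=0.85):
    return capacity_wh * efficiency / load_w

print(runtime_hours(1000, 200))  # ~4.3 h at the 200W worst case
print(runtime_hours(1000, 175))  # ~4.9 h at the 175W best case
print(runtime_hours(3000, 200))  # ~12.8 h from a 3000Wh unit
```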

Virtual Production With Venice 2 – Dubai Workshop

I have a crazy few weeks coming up. This week I will be filming at the Glastonbury Festival, then next week I will be in Dubai for a workshop on virtual production with Sony’s Venice 2 camera. This will be a great opportunity for those that have never been to a virtual studio to have a look at how it all works and what’s involved – and to see why Venice 2 is an excellent camera for virtual production thanks to its very fast sensor readout speed, frame size flexibility and wide range of frame rates. To join one of the sessions please RSVP to Omar.Abuaisha@sony.com

Are LUTs Killing Creativity And Eroding Skills?

I see this all the time: “which LUT should I use to get this look?” or “I like that, which LUT did you use?”. Don’t get me wrong, I use LUTs and they are a very useful tool, but the now almost default resort to slapping a LUT on log and raw material is killing creativity.

In my distant past I worked in, and helped run, a very well known post production facilities company. There were two high end editing and grading suites, and many of the clients came to us because we could work to the highest standards of the day and, from the client’s description, create the look they wanted with the controls on the equipment we had. This was a DigiBeta tape to tape facility that also had a Matrox DigiSuite and some other tools, but nothing like what can be done with the free version of DaVinci Resolve today.

But the thing is, we didn’t have LUTs. We had knobs, dials and switches. We had to understand how to use the tools we had to get to where the client wanted to be. As a result every project had a unique look.

Today the software available to us is incredibly powerful and a tiny fraction of the cost of the gear we had back then. What you can do in post today is almost limitless. Cameras are better than ever, so there is no excuse for not being able to create all kinds of different looks across your projects or even within a single project to create different moods for different scenes. But sadly that’s not what is happening.

You have to ask why. Why does every YouTube short look like every other one? A big part is automated workflows, for example FCP X automatically applying a default LUT to log footage. Another is the belief that LUTs are how you grade, with everyone then using the same few LUTs on everything they shoot.

This creates two issues.

1: Everything looks the same – BORING!!!!

2: People are not learning how to grade and don’t understand how to work with colour and contrast – because it’s easier to “slap on a LUT”.

How many of the “slap on a LUT” clan realise that LUTs are camera and exposure specific? How many realise that LUTs can introduce banding and other image artefacts into footage that might otherwise be pristine?

If LUTs didn’t exist people would have to learn how to grade. And when I say “grade” I don’t mean a few tweaks to the contrast, brightness and colour wheels. I mean taking individual hues and tones and changing them in isolation. For example, separating skin tones from the rest of the scene so they can be made to look one way while the rest of the scene is treated differently. People would need to learn how to create colour contrast as well as brightness contrast, how to make highlights roll off in a pleasing way – all those things that go into creating great looking images from log or raw footage.

Then, perhaps, because people are doing their own grading, they would start to better understand colour, gamma, contrast and so on. Most importantly, because the look created will be their own look, built from scratch, it will be unique. Different projects from different people would actually look different again, instead of each being a clone of someone else’s work.

LUTs are a useful tool, especially on set for an approximation of how something could look. But in post production they restrict creativity, and many people have no idea how to grade or how they can manipulate their material.

Can DaVinci Resolve steal the edit market from Adobe and Apple?

I have been editing with Adobe Premiere since around 1994. I took a rather long break from Premiere between 2001 and 2011 and switched over to Apple and Final Cut Pro, which in many ways used to be very similar to Premiere (I believe some of the same developers worked on both). My FCP edit stations were always multi-core Mac towers, the old G5s first and later the Intel towers. Then along came FCP-X. I just didn’t get along with FCP-X when it first came out. I’m still not a huge fan of it now, but I will happily concede that FCP-X is a very capable, professional edit platform.

So in 2011 I switched back to Adobe Premiere as my edit platform of choice. Along the way I have also used various versions of Avid’s software, which is another capable platform.

But right now I’m really not happy with Premiere. Over the last couple of years it has become less stable than it used to be. I run it on a MacBook Pro, which is a well defined hardware platform, yet I still get stability issues. I’m also experiencing problems with gamma and level shifts that just shouldn’t be there. In addition, Premiere is not very good with many long GOP codecs; FCP-X seems to make light work of XAVC-L compared to Premiere. Furthermore, Adobe’s Media Encoder, which once used to be one of the first encoders to get new codecs or features, is now lagging behind. Apple’s Compressor can now output the full range of HDR files, while Media Encoder can only do HDR10. If you don’t know, it is possible to buy Compressor on its own.

Meanwhile DaVinci Resolve has been my grading platform of choice for a few years now. I have always found it much easier to get the results and looks that I want from Resolve than from any edit software – this isn’t really a surprise as after all that’s what Resolve was originally designed for.

DaVinci Resolve is great grading software and its edit capabilities are getting better and better.

The last few versions of Resolve have become much faster thanks to some major processing changes under the hood, and in addition there has been a huge amount of work on Resolve’s edit capabilities. It can now be used as a fully featured edit platform. I recently used Resolve to edit some simpler projects that were going to be graded, as this way I could stay in the same software for both processes, and you know what, it’s a pretty good editor. There are, however, a few things I find a bit funky and frustrating in the edit section of Resolve at the moment. Some of that may simply be because I am less familiar with it for editing than I am with Premiere.

Anyway, on to my point. Resolve is getting to be a pretty good edit platform and it’s only going to get better. We all know that it’s a really good and very powerful grading platform and with the recent inclusion of the Fairlight audio suite within Resolve it’s pretty good at handling audio too. Given that the free version of Resolve can do all of the edit, sound and grading functions that most people need, why continue to subscribe to Adobe or pay for FCP-X?

The cost of the latest generations of Apple computers is expanding the price gap between them and similarly specced Windows machines, and the new MacBooks lack built in ports like HDMI and USB3 that we all use every day (you now have to use adapters and dongles). The Apple ecosystem is just not as attractive as it used to be. Resolve is cross platform, so a Mac user can stay with Apple if they wish, or move over to Windows or Linux whenever they want. You can even switch platforms mid project: I could start an edit on my MacBook and then do the grade on a PC workstation, staying with Resolve through the complete process.

Even if you need the extra features of the full version, like very good noise reduction, facial recognition, 4K DCI output or HDR scopes, it’s still good value as it currently costs only $299/£229, which is less than a year’s subscription to Premiere CC.

But what about the rest of the Adobe Creative suite? Well, you don’t have to subscribe to the whole suite; you can just get Photoshop or After Effects. There are also many alternatives. Blackmagic Design have Fusion 9, a very impressive VFX package used on many Hollywood movies, and like Resolve there is a free version with a very comprehensive toolset, or again for just $299/£229 you get the full version with all its retiming tools and so on.

Blackmagic Design’s Fusion is a very impressive video effects package for Mac and PC.

For a Photoshop replacement you have GIMP, which can do almost everything Photoshop can do. You can even use Photoshop filters within GIMP. The best part is that GIMP is free and works on both Macs and PCs.

So there you have it. It looks like Blackmagic Design are really serious about taking a big chunk of Adobe Premiere’s users. Resolve and Fusion are cross platform so, like Adobe’s products, it doesn’t matter whether you want to use a Mac or a PC. But for me the big thing is that you own the software. You are not going to be paying out rather a lot of money month after month for something that, right now, is in my opinion somewhat flakey.

I’m not quite ready to cut my Creative Cloud subscription yet, maybe on the next version of Resolve. But it won’t be long before I do.

ACES: Try it, it might make your life simpler!

ACES is a workflow for modern digital cinema cameras. It’s designed to act as a common standard that will work with any camera, so that colourists can use the same grades on any camera with the same results.

A by-product of the way ACES works is that it can actually simplify your post production workflow, as ACES takes care of all the necessary conversions to and from different colour spaces and gammas. Without ACES, when working with raw or log footage you will often need to use LUTs to convert your footage to the right output standard, and where you place these LUTs in your workflow can have a big impact on your ability to grade your footage and the quality of your output. ACES takes care of most of this for you, so you don’t need to worry about making sure you are grading “under the LUT” etc.

ACES works on footage in scene referred linear, so on import into an ACES workflow conventional gamma or log footage is either converted on the fly from log or gamma to linear by the IDT (Input Device Transform), or you use something like Sony’s Raw Viewer to pre-convert the footage to ACES EXR. If the camera shoots linear raw, as the F5/F55 can, there is still an IDT to go from Sony’s variation of scene referred linear to the ACES variation, but this is a far simpler conversion with fewer losses or less image degradation as a result.
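To make “scene referred linear” a little more concrete, here is Sony’s published S-Log3 to linear reflectance curve as a small sketch. This is only the tone curve part of what an IDT does; a real IDT also applies a matrix to move from the camera’s gamut (e.g. S-Gamut3) to the ACES primaries, which is omitted here:

```python
# Sony's published S-Log3 -> linear reflectance transfer function.
# Tone curve only: a full ACES IDT would also apply a 3x3 gamut matrix.
def slog3_to_linear(v):
    """v is the S-Log3 code value normalised to 0..1 (10 bit code / 1023)."""
    if v >= 171.2102946929 / 1023.0:
        return (10.0 ** ((v * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    else:
        return (v * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

# 41% S-Log3 (code value 420) corresponds to middle grey:
print(slog3_to_linear(420.0 / 1023.0))  # ~0.18
```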

The IDT is a type of LUT that converts from the camera’s own recording space to ACES linear space. The camera manufacturer has to provide detailed information about the way the camera records so that the IDT can be created. Normally it is the camera manufacturer that creates the IDT, but anyone with access to the manufacturer’s colour science or matrix/gamma tables can create one. In theory, after converting to ACES, all cameras should look very similar, and the same grades and effects can be applied to any camera or gamma with the same end result. In practice, variations in colour filters, dynamic range and so on mean there will still be individual characteristics to each camera, but any such variation is minimised by using ACES.

“Scene referred” means linear light, as per the actual light coming from the scene. No gamma, no colour shifts, no nice looks or anything else. Think of it as an actual measurement of the true light coming from the scene. By converting any camera/gamma/gamut to this we make them as close as possible, as the pictures should now be a true to life linear representation of the scene as it really is. The F5/F55/F65 when shooting raw are already scene referred linear, so they are particularly well suited to an ACES workflow.

Most conventional cameras are “display referred”, where the recordings or output are tailored through the use of gamma curves and looks so that they look nice on a monitor that complies with a particular standard, for example Rec-709. To some degree a display referred camera cares less about what the light from the scene is like and more about what the picture looks like on output, perhaps adding a pleasing warm feel or boosting contrast. These “enhancements” to the image can sometimes make grading harder, as you may need to remove or bypass them. The ACES IDT takes care of this by normalising the pictures and converting to the ACES linear standard.

After application of an IDT and conversion to ACES, different gamma curves such as Sony’s S-Log2 and S-Log3 will behave almost exactly the same. There will still be differences in the data spread due to the different curves used in the camera and due to differences in the recording gamut etc. Despite this, the same grades or corrections can be used on any type of gamma/gamut with very, very similar end results. (According to Sony’s white paper, S-Gamut3 should work better in ACES than S-Gamut. In general though, the same grades should work more or less the same whether the original is S-Log2 or S-Log3.)

In an ACES workflow the grade is performed in linear space, so exposure shifts etc. are much easier to do. You can still use LUTs to apply a common “look” to a project, but you don’t need a LUT within ACES for the grade itself, as ACES takes care of the output transformation from the linear, scene referred grading domain to your chosen display referred output domain. The output process is a two stage conversion. First, from ACES linear to the RRT or Reference Rendering Transform. This is a computationally complex transformation that goes from linear to a “film like” intermediate stage with a very large range, in excess of most final output ranges. The idea is that the RRT is a fixed and well defined standard, and all the complicated maths is done getting to the RRT. After the RRT you then apply a LUT called the ODT or Output Device Transform to convert to your final chosen output type: Rec-709 for TV, DCI-XYZ for cinema DCP and so on. This means you do one grading pass and then simply select the type of output you need for each type of master.

Very often to simplify things the RRT and ODT are rolled into a single process/LUT so you may never see the RRT stage.
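As an illustration of how little of this the colourist has to touch, colour management libraries bundle the whole output transform into a single conversion. Here is a sketch using OpenColorIO with an ACES config; the file path and the colour space names are assumptions based on the naming in the standard ACES OCIO configs, so check the names in the config you actually use:

```python
# Sketch: apply the combined RRT+ODT with OpenColorIO and an ACES config.
# The config path and colour space names are assumptions; they follow the
# naming used in the standard ACES OCIO configs.
import PyOpenColorIO as ocio

config = ocio.Config.CreateFromFile("aces_config.ocio")
processor = config.getProcessor("ACES - ACES2065-1",  # scene linear in
                                "Output - Rec.709")   # RRT+ODT out
cpu = processor.getDefaultCPUProcessor()

# Scene linear middle grey in, display referred Rec.709 out.
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```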

This all sounds very complicated, and to a degree what’s going on under the hood of your software is quite sophisticated. But for the colourist it’s often just as simple as choosing ACES as your grading mode and then selecting your desired output standard: Rec-709, DCI-P3 etc. The software then applies all the necessary LUTs and transforms in all the right places, so you don’t need to worry about them. It also means you can use exactly the same workflow for any camera that has an ACES IDT; you don’t need different LUTs or looks for different cameras. I recommend that you give ACES a try.

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes to the gamma or gain within the image. You are starting with 240 shades of grey (code values 16 to 255, assuming recording to 109%) and encoding to 240 shades, so the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let’s say, level 128 should now be level 128.5, this can’t be done: we can only record whole values, so it’s rounded up or down to the closest one. This rounding reduces the number of shades recorded overall and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth graduation:

128,    129,   130,   131,   132,   133.

Imagine you are doing some grading and you plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5

But we can’t record half values, only whole ones, so for 8 bit these get rounded to the nearest whole value:

129,   129,   129,   132,   132,   134

You can see how easily banding will occur: our smooth gradation now has some marked steps.

If you are rendering to 10 bit you get more in-between steps. When level 128 is determined to be 128.5 by the plugin, it can now be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So, translating approximately to 10 bit, level 128 would be 499 and 128.5 would be 501:
128.5 = 501

129 = 503

129.4 = 505

131.5 = 513

132 = 515

133.5 = 521

So you can see that we now retain in-between steps which are not present when we render to 8 bit, and our gradation remains much smoother.
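If you want to see this effect for yourself, the sketch below pushes a smooth 8 bit ramp through a small gain reduction and counts how many distinct levels survive at 8 bit versus 10 bit. It uses simplified full range values rather than broadcast legal levels, just to show the principle:

```python
# Simulate banding: apply a gentle gain change to a smooth ramp, then
# re-quantise at 8 bit and at 10 bit and count the surviving levels.
# Full range values are used here for simplicity.
ramp = [i / 255.0 for i in range(256)]         # smooth 8 bit ramp, 0..1
graded = [v * 0.9 for v in ramp]               # small contrast/gain change

levels_8 = {round(v * 255) for v in graded}    # re-quantise to 8 bit
levels_10 = {round(v * 1023) for v in graded}  # re-quantise to 10 bit

print(len(levels_8))   # 231 of 256: neighbouring shades have merged
print(len(levels_10))  # 256: every original shade keeps a distinct level
```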