Beware the LC709 LUT double exposure offset.

The LC709 Type A LUT is very commonly used with Sony’s CineAlta cameras such as the PXW-FS7 or PMW-F55. This LUT is popular because it was designed to mimic the look of the Arri cameras in their Rec-709 mode. But before rushing out to use this LUT, or any of the others in the LC709 series, there are some things to consider.

The Arri cameras are rarely used in Rec-709 mode for anything other than quick turnaround TV; you certainly wouldn’t normally record this way for feature or drama productions, and it isn’t the “Arri Look”. The Arri look normally comes as a result of shooting with Arri’s LogC and then grading that to get the look you want. The Rec-709 mode exists to provide a viewable image on set. It has more contrast than LogC and uses Rec-709 color primaries so the colors look right, but it isn’t true Rec-709: it squeezes almost all of the camera’s capture range into something that can be viewed on a 709 monitor, so it looks quite flat.

Because a very large dynamic range is being squeezed into a range suitable to be viewed on a regular, standard dynamic range monitor the white level is much reduced compared to regular Rec-709. In fact, white (such as a white piece of paper) should be exposed at around 70%. Skin tones should be exposed at around 55-60%.

If you are shooting S-Log on a Sony camera and using this LUT to monitor, and you expose using conventional levels – white at 85-90%, skin tones at 65-70% – then you will be offsetting your exposure by around +1.5 stops. On its own this isn’t typically going to be a problem. In fact I often come across people who tell me that they always shoot at the camera’s native EI using this LUT and get great, low noise pictures. When I dig a little deeper I often find that they are exposing white at 85% via the LC709 LUT. So in reality they are actually shooting the equivalent of +1 to +1.5 stops over the base level.

Where you can really run into problems is when you have already added an exposure offset. Perhaps you are shooting on an FS7, where the native ISO is 2000, using an EI of 800. That is a little over a +1 stop exposure offset. If you then use one of the LC709 LUTs and expose the LUT so white is at 90% and skin tones at 70%, you are adding another +1.5 stops, so your total exposure offset is approaching 3 stops. An offset this large is rarely necessary and can be tricky to deal with in post. It’s also going to impact your highlight range.
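As a quick back-of-the-envelope sketch of how the two offsets stack up – using the FS7 numbers from above, and with `offset_stops` being just an illustrative helper, not anything from the camera:

```python
import math

def offset_stops(native_iso, ei, lut_overexposure_stops=0.0):
    """Total exposure offset relative to the camera's base sensitivity.

    Hypothetical helper: a camera with native ISO 2000 rated at EI 800,
    plus monitoring via an LC709-style LUT exposed ~1.5 stops hot.
    """
    ei_offset = math.log2(native_iso / ei)     # stops added by the lower EI
    return ei_offset + lut_overexposure_stops  # plus the LUT exposure error

total = offset_stops(native_iso=2000, ei=800, lut_overexposure_stops=1.5)
print(f"Total offset: +{total:.2f} stops")     # a little under +3 stops
```

The EI offset alone is log2(2000/800) ≈ 1.3 stops; add the 1.5 stop LUT error and you are approaching 3 stops, as described above.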

So just be aware that different LUTs require different white and grey levels, and make sure you are exposing the LUT at its correct level so that you are not adding an additional offset to your desired exposure.

Guide to Hypergammas and Cinegammas Updated

I have just revisited a post I did some years ago about the correct way to expose Hypergammas and Cinegammas. I’ve revised some of it and updated other bits. Hopefully the revisions make it a little easier to understand why the brightness levels should be a little different, and I’ve provided some suggested settings for zebra levels. You will find the revised guide here: Guide To Hypergamma and Cinegamma Exposure.

DaVinci Resolve, ACES and the “Sony Raw” input transform.

A quick heads up for users of Resolve with Sony Raw and X-OCN. Don’t make the same mistake I have been making. For some time I have been unhappy with the way Sony raw looked in DaVinci Resolve and ACES prior to grading. Apparently there used to be a small problem with the raw input transform that could lead to a red/pink hue being added to the footage. This problem was fixed some time ago, so you should no longer use the “Sony Raw” input transform; if you do, it will tint your Raw or X-OCN files slightly pink/red. Instead you should select “no transform”. With no transform selected my images look so much nicer and match Sony’s own Raw Viewer much better. Thanks to Nick Shaw of Antler Post for helping me out on this, and to all on the CML list.

How to get the best White Balance (Push Auto WB).

Getting a good white balance is critical to getting a great image, especially if you are not going to be color correcting or grading your footage. When shooting traditionally – ie not with log or raw – a common way to set the camera’s white balance is to use the one push auto white balance combined with a white target. You point the camera at the white target, then press the WB button (normally found at the front of the camera, just under the lens).
The white target needs to occupy a good portion of the shot but it doesn’t have to completely fill the shot. It can be a pretty small area, 30% is normally enough. The key is to make sure that the white or middle grey target is obvious enough and at the right brightness that the camera uses the right part of the image for the white balance. For example, you could have a white card filling 50% of the screen, but there might be a large white car filling the rest of the shot. The camera could be confused by the car if the brightness of the car is closer to the brightness the camera wants than the white/grey card.
The way it normally works is that the camera looks for a uniformly bright part of the image with very little saturation (color) somewhere between 45 and 90IRE. The camera will then assume this area to be the white balance target. The camera then adjusts the gain of the red and blue channels so that the saturation in that part of the image becomes zero and as a result there is no color over the white or grey target.
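To make the process above concrete, here is a toy sketch of the general idea – this is purely an illustration, not Sony’s (or anyone’s) actual algorithm; the pixel values and thresholds are made up for the example:

```python
def push_auto_wb(pixels):
    """Rough sketch of a one-push auto white balance (illustrative only).

    `pixels` is a list of (r, g, b) values on a 0-100 IRE-like scale.
    We average the near-neutral, mid-bright pixels and derive R and B
    gains that neutralise any colour cast, leaving G untouched."""
    candidates = []
    for r, g, b in pixels:
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
        saturation = max(r, g, b) - min(r, g, b)
        if 45 <= luma <= 90 and saturation < 10:   # bright and near-neutral
            candidates.append((r, g, b))
    if not candidates:
        raise ValueError("no suitable white balance target found")
    avg_r = sum(p[0] for p in candidates) / len(candidates)
    avg_g = sum(p[1] for p in candidates) / len(candidates)
    avg_b = sum(p[2] for p in candidates) / len(candidates)
    # Gains that make the target neutral: R and B are matched to G
    return avg_g / avg_r, 1.0, avg_g / avg_b

# A slightly warm grey card: red reads a touch high, blue a touch low,
# so the camera pulls red down and pushes blue up
r_gain, g_gain, b_gain = push_auto_wb([(72, 70, 66), (71, 70, 67)])
```

After applying these gains, the averaged target area has equal R, G and B values, i.e. zero saturation, which is exactly the end state described above.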
 
If you fill the frame with your white/grey card then there can be no confusion. But that isn’t always possible or practical as the card needs to be in the scene and under the light you are balancing for rather than just directly in front of the lens. The larger your white or grey card is the more likely it is that you will get a successful and accurate white balance – provided it’s correctly exposed and in the right place.
 
The white target needs to be in reasonable focus. If it is out of focus there will be a blurred edge, with color from any background objects blending into the blur. This could upset the white balance, as the camera uses an average value for the whole of the white area, so any color bleed at the edges due to defocussing may result in a small color offset.
 
You can use a white card or grey card (white paper at a push, but most paper is bleached slightly blue to make it look whiter to our eyes and this will offset the white balance). The best white balance is normally achieved by using a good quality photography grey card. As the grey card will be lower down in the brightness range, if there is any color, it will be more saturated. So when the camera offsets the R and B gain to eliminate the color it will be more accurate.
 
The shiny white plastic cards often sold as white balance cards are often not good choices for white balance; they are too bright and shiny. Any reflections off a glossy white card will seriously degrade the camera’s ability to perform an accurate white balance, as the highlights will be in the camera’s knee or roll-off and as a result have much reduced saturation and also reduced R and B gain, making it harder for the camera to get a good white balance. In addition the plastics used tend to yellow with age, so if you do use a plastic white balance card make sure it isn’t past its best.
Don’t try to white balance off clouds or white cars; they tend to introduce offsets into the white balance.
 
Don’t read too much into the Kelvin reading the camera might give. These values are only a guide; different lenses and many other factors will introduce inaccuracies. It is not at all unusual to have two identical cameras give two different Kelvin values even though both are perfectly white balance matched. If you are not sure that your white balance is correct, repeat the process. If you keep getting the same Kelvin number it’s likely you are doing it correctly.

Sony Venice – A close look at the dynamic range and noise.

With Sony Venice X-OCN files to download!

I have been working with Sony’s colour science guru Pablo at the Digital Motion Picture Center at Pinewood, looking at the outer limits of what Sony’s Venice camera can do. A large part of the reason for this is that Pablo is developing some really nice LUTs for use on dailies or even as a grade starting point (Pablo tells me the LUTs are finished but he is waiting for approvals and feedback from Japan).

As part of this process we have shot test footage with the Venice camera for ourselves and also looked long and hard at test shots done by other cinematographers. Last week we were able to preview a beta version of the cameras dual ISO modes. This beta firmware allowed us to shoot tests at both 500 ISO and 2500 ISO and the results of both are equally impressive.

I can’t share any of the test footage shot at 2500 ISO at this stage. The firmware is still in its early stages and the final version may well perform a little differently (probably better). But I can share some of the footage shot at 500 ISO.

Please remember what we were exploring was the extreme ends of the exposure range. So our little test set was set up with some challenges for the camera rather than trying to make a pretty picture.

We have deep, deep shadows on the right behind the couch and we also have strong highlights coming off the guitar, the film can on the shelves and from the practical lamp in the background. The reds of the cushion on the couch look very different with most Rec-709 cameras as the colors are outside the Rec-709 gamut.

Another aspect of the test was to check the exposure rating. For this I used my Sekonic lightmeter to measure both the incident light and the light reflected by the Kodak grey card. My light meter gave me T4 at 1/48th for 500 ISO and this turned out to be pretty much spot on with what the scopes told us. So straight away we were able to establish that the 500 ISO exposure rating appears to be correct. We also found that when we stopped down by 2.3 stops we got the correct exposure at 2500 ISO, so that too appears to be correctly rated.
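The 2.3 stop figure follows directly from the two ISO ratings – a quick sanity check of the arithmetic:

```python
import math

# Stop difference between Venice's two base sensitivities: each stop is
# a doubling of sensitivity, so the difference is log2 of the ISO ratio
stops = math.log2(2500 / 500)
print(f"{stops:.2f} stops")   # ~2.32, matching the 2.3 stops found on set
```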

Once the base exposure was established we shot at 2 stops over and 2 stops under, so from T2 down to T8, using a Sony 35mm PL prime. We used the X-OCN ST codec as we felt this will be the most widely used option. When looking at the files do remember that the 16 bit X-OCN ST files are smaller than 10 bit ProRes HQ, so these are files that are very easy to manage. There is the option to go up in quality to Sony’s linear raw codec or down to X-OCN LT. X-OCN ST sits in the middle and offers a nice balance between file size and image quality; it is very hard to find any visual difference between this and the larger raw files.

The files I’m providing here are single X-OCN frames. They have not been adjusted in any way, they are just as shot (including being perhaps a touch out of focus). You can view them using the latest version of Sony’s raw viewer software or the latest version of DaVinci Resolve. For the best quality preview, at this time I recommend using Sony’s Raw Viewer to view the clips.

Click here to download these Venice Samples

If you find these files useful please consider buying me a coffee or beer.


So what do the files look like? First I recommend you download and play with them for yourself. Anything I do has to have a LUT, grade or other process applied so that the linear data can be viewed on a normal computer screen, so it’s better to take a look at the original files and see what you can do with them rather than just accepting my word. The images here were created in DaVinci Resolve using ACES. ACES adds a film type highlight roll-off and uses film type levels, so the images look a touch dark, as there were a lot of low light level areas in the test shots.

Venice at T4 The base exposure for the test.

Venice at T4 (From ACES). This was the “base” exposure for this test. Click on the image to enlarge.

Venice at T8 – 2 Stops under exposed (As exposed).

Venice at T8 (2 stops under). Click on the image to enlarge.

Venice at T8 – 2 Stops under exposed (Brightness corrected to match base exposure).

Venice at T8 (2 stops under). Brightness match to base exposure via metadata shift. Click on the image to enlarge.

Venice at T5.6 – 1 stop under exposed (brightness corrected to match base exposure).

Venice at T5.6 (1 stop under). Brightness match to base exposure via metadata shift. Click on the image to enlarge.

Venice at T4 The base exposure for the test.

Venice at T4 (From ACES). This was the “base” exposure for this test. Click on the image to enlarge.

Venice at T2.8 – 1 stop over exposed (brightness adjusted to match base exposure).

Venice at T2.8 (1 stop over). Brightness match to base exposure via metadata shift. Click on the image to enlarge.

Venice at T2.0 – 2 stops over exposed (brightness adjusted to match base exposure).

Venice at T2 (2 stops over). Brightness match to base exposure via metadata shift. Click on the image to enlarge.

Venice at T2.0 – 2 stops over exposed (as shot).

Venice at T2.0, 2 stops over, as shot. Click on the image to enlarge.

NOTES AND OBSERVATIONS:

I shouldn’t rush these tests! I should have set the focus at T2, not at T4. Focus is on the chart, not the dummy head. It would have been better if the eyes and chart were at the same distance.

It’s amazing how similar all the shots across this 5 stop range look. Just by adjusting the metadata ISO rating in Resolve I was able to get a near perfect match. There is more noise in the under exposed images and less in the over exposed images – that’s expected. But even the 2 stops under images are still pretty nice.

NOISE:

What noise there is, is very fine in structure. Noise is pretty even across each of the R, G and B channels, so there won’t be a big noise change when skewing the white balance towards blue, as can happen with some other cameras where the blue channel is noisier than red or green. Even at T8 and 2 stops under, the noise is not unacceptable. A touch of post production NR would clean this up nicely. So shooting at the 500 ISO base and rating the camera at 2000 EI would be useable if needed, or perhaps to deliberately add some grain. However, instead of shooting at 500 ISO / 2000 EI you might be better off using the upper 2500 base ISO for low light shoots, because that will give a nice sensitivity increase with no change to the dynamic range and only a touch (and it really is just a touch) more noise.

If shooting something super bright or with lots and lots of very important highlights I would not be concerned about rating the camera at 1000 EI. For most projects I would probably rate the camera at 500 EI. If the scene is generally dark I may choose 400 EI just to be a touch cleaner. With such a clean image and so much dynamic range you really can pick and choose how you wish to rate the camera.

Venice has more dynamic range than an F55 and maybe a bit more than the F65. Most of the extra dynamic range is in the shadows. There is an amazing amount of picture information that can be pulled out of the darker parts of the images. The very low noise floor is a big help here. In the example below I have taken the base exposure sample and brought the metadata ISO up to 2000 ISO. Then I have used a luma curve to pull up the shadows still further. If you look at the shelves on the left, even in the deep shadow areas it’s possible to see the grain effect on the dark wood panels. In addition you can see both the white and black text on the back of the grey book on the bottom shelf. Yes, there is some noise but my meter put these areas at -6 stops, so being able to pull out so much detail from these areas is really impressive.

An amazing amount of information still exists in even the darkest shadow areas. This image adjusted up significantly from base exposure (at least +4 stops).

In the highlights, the way the camera reaches its upper limit is very pleasing. It seems to have a tiny roll off just before it clips and this looks really nice. If you look at the light bulbs in this test at the base exposure and bring the highlights down in post, you can see that not all of the bulb is completely over exposed; it is only over exposed where the element is. Also, the highlights reflecting off the guitar and the film can on the shelf look very “real” and don’t have that hard clipped look that specular highlights on other cameras can sometimes have.

Another thing that is very nice is the colour tracking. As you go up and down in exposure there are no obvious colour shifts. It’s one of the things that really helps make it so easy to make all 5 exposures look the same.

The start up time of the Venice camera is very impressive at around 6 to 8 seconds; it’s up and running very quickly. The one stop steps in the ND filter system are fantastic. The camera is very simple to use and the menu seems logically laid out. It’s surprisingly small – not much bigger than a PMW-F55, just a little taller and a little longer. Battery consumption is lower than most of the competition: the camera appears to consume around 50W, which is half the power consumption of a lot of the competition, and it can be run off either 12v or 24v. So all in all it can be rigged as a very compact camera with a standard V-Lock battery on the back.

Looking forward to shooting more with Venice in the very near future.

 

Banding in your footage. What causes it, and is it even there?

Once again it’s time to put pen to paper or fingers to keyboard as this is a subject that just keeps coming up again and again.

People really seem to have a lot of problems with banding in footage, and I don’t fully understand why, as it’s something I only ever encounter if I’m pushing a piece of material really, really hard in post production. Generally the vast majority of the content I shoot does not exhibit problematic banding, even the footage I shoot with 8 bit cameras.

First things first – don’t blame it on the bits. Even an 8 bit recording (from a good quality camera) shouldn’t exhibit noticeable banding. An 8 bit recording can contain up to 13 million tonal values. It’s extremely rare for us to shoot luma only, but even if you do there will still be 235 shades, and in standard dynamic range these steps are too small for most people to discern, so you shouldn’t ever be able to see them. I think that when most people see banding they are not seeing these teeny, tiny, almost invisible steps; what they are seeing is something much more noticeable – so where is it coming from?

It’s worth considering at this stage that most TVs, monitors and computer screens are only 8 bit, sometimes less! So if you are looking at one camera that is banding free and then you look at another and see banding, in both cases you are probably looking at an 8 bit image, so it can’t just be the capture bit depth that is causing the problem, as you can’t see 10 bit steps on an 8 bit monitor.

So what could it be?

A very common cause of banding is compression. DCT based codecs such as JPEG, MJPEG, H264 etc break the image up into small blocks of pixels called macro blocks. All the pixels in each block are then processed in a similar manner, and as a result there may sometimes be a small step between each block, or between groups of blocks across a gradient. This can show up as banding. We often see this with 8 bit codecs because typically 8 bit codecs use older technology or are more highly compressed – it’s not because there are not enough code values. Decreasing the compression ratio will normally eliminate the stepping.
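If you want to see the stepping effect for yourself, here’s a toy sketch. Real codecs quantise DCT coefficients per macro block rather than pixel values directly, but the banding that heavy quantisation produces is similar in character:

```python
# A smooth 0-100% ramp, 16 samples across
width = 16
gradient = [i * 100.0 / (width - 1) for i in range(width)]

# Coarse quantisation, standing in for heavy compression: every value
# gets snapped to the nearest of only a handful of output levels
step = 12.5
banded = [round(v / step) * step for v in gradient]

print(gradient[:4])  # smoothly increasing values
print(banded[:4])    # values snapped to flat bands with visible steps
```

The heavier the quantisation (the bigger `step`), the fewer distinct output levels survive and the more visible the bands, which is why lowering the compression ratio normally eliminates the stepping.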

Scaling between bit depths or frame sizes is another very common cause of banding. It’s absolutely vital that you ensure that your monitoring system is up to scratch. It’s very common to see banding in video footage on a computer screen, as video data levels are different to computer data levels and there may also be some small gamma differences, so the image has to be scaled on the fly. In addition the computer desktop runs at one bit range and the HDMI output another, so all kinds of conversions are taking place that can lead to all kinds of problems when you go from a video clip, to computer levels, to HDMI levels. See this article to fully understand how important it is to get your monitoring pipeline properly sorted: https://www.xdcam-user.com/2017/06/why-you-need-to-sort-out-your-post-production-monitoring/

Look Up Tables (LUTs) can also introduce banding. LUTs were never really intended to be used as a quick fix grade; the intention was to use them as an on-set reference or guide, not the final output. The 3D LUTs that we typically use for grading break the full video range into bands, and each band will apply a slightly different correction to the footage than the band above or below. These bands can show up as steps in the LUT’s output, especially with the most common 17x17x17 3D LUTs. This problem gets even worse if you apply a LUT and then grade on top – a really bad practice.
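The band width of a 17x17x17 LUT is easy to put a number on – each axis is sampled at only 17 points, so corrections are interpolated across fairly wide bands of the signal:

```python
# Node spacing of a 17x17x17 3D LUT: 17 sample points per axis means
# 16 intervals, so each band covers a sixteenth of the signal range
nodes = 17
band_width_percent = 100 / (nodes - 1)
print(f"{band_width_percent:.2f}% of signal range per band")  # 6.25%
```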

Noise reduction – In camera or post production noise reduction will also often introduce banding. Very often pixel averaging is used to reduce noise. If you have a bunch of pixels that are jittering up and down, taking an average value for all those pixels will reduce the noise, but you can then end up with steps across a gradient as you jump from one average value to the next. If you shoot log it’s really important that you turn off any noise reduction (if you can), because when you grade the footage these steps will get exaggerated. Raising the ISO (gain) in a camera also makes this much worse, as the camera’s built in NR will be working harder, increasing the averaging to compensate for the increased noise.
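Here’s a toy sketch of that averaging effect – purely an illustration, not any camera’s actual NR. Averaging small neighbourhoods of a noisy gradient removes the jitter but replaces it with flat patches that jump from one value to the next:

```python
import random

random.seed(1)
width = 12
# A noisy ramp: rising values with a little jitter on each pixel
noisy = [i * 10 + random.uniform(-2, 2) for i in range(width)]

block = 4  # average in blocks of 4 pixels, like crude spatial NR
denoised = []
for start in range(0, width, block):
    chunk = noisy[start:start + block]
    avg = sum(chunk) / len(chunk)
    denoised.extend([avg] * len(chunk))   # the whole block gets one value

# The smooth noisy ramp has become three flat plateaus with steps between
print(sorted(set(round(v, 1) for v in denoised)))
```

Grading then stretches those plateaus apart, which is why the steps that were invisible in the original footage become obvious banding after a heavy grade.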

Coming back to 8 bit codecs again – Of course a similar quality 10 bit codec will normally give you more picture information than an 8 bit one. But we have been using 8 bits for decades, largely without any problems. So if you can shoot 10 bit you might get a better end result. But also consider all the other factors I’ve mentioned above.

 

Shooting Flat – No it’s not!

I know that many of my readers like to shoot log. One of the most common terms used around shooting log is “shooting flat”. Let’s take a look at that term and think about what it actually means.

One description of a flat image might be – “An image with low contrast”. Certainly an image with low contrast can be considered flat.

Once upon a time shooting flat meant lighting a scene so that there was very little contrast. The background in an interview might be quite well  lit. You would avoid deep shadows or strong highlights. This was done because cameras had very limited dynamic ranges. These flat images of low contrast scenes could then have the contrast boosted in post production to make them look better.

8 years ago, with the advent of DSLR cameras that could shoot with film-like depth of field, it became fashionable to shoot flat, because digital film cameras shooting log produce an image that looks flat when viewed on a conventional TV or monitor.

But let’s think about that for a moment. A typical digital cinema camera can capture 14 stops of dynamic range. A scene with 14 stops of dynamic range contains a huge contrast range – perhaps a brilliant bright sky and deep shadows. Can you really describe the capture of such a scene as “flat”?

The answer is you can’t – or at least you shouldn’t, because the recording isn’t flat. The dynamic range that most digital cinema cameras can capture is not flat, not at all.

The problem is that a normal TV or video monitor can’t show a very big dynamic range. A conventional TV can only show around 6 stops. If you take a log video signal with a 14 stop image and try to show that on a 6 stop screen you will be squashing the highlights and shadows closer together, so the highlight that was at +14 stops in the scene and is recorded at 100%, gets pushed closer to the deepest shadow in the scene that is recorded at 1%.

On a normal 6 stop TV the 100% recording level is shown at +6 stops while the deepest shadow will be at 1%, so now the 14 stop recording is being shown with only 6 stops between the deepest black and the brightest highlight. Instead of the highlight being dazzlingly bright it’s now just a bright white and not all that much brighter than the shadows. As a result the image on the screen looks all wrong, nothing like what you recorded and it appears to be “flat”.

BUT THE DATA IN THE FILE IS NOT FLAT – that recording contains a high contrast, 14 stop image – it’s the inability of the TV or monitor to show it correctly that makes it look wrong, not that you have shot flat.

In the early days of DSLR shooting, many DSLR shooters decided to mimic the way the image from a digital cinema camera looks flat on a normal TV, perhaps in the misguided belief that a flat image must always have a greater dynamic range. This definitely isn’t always the case. I can take any regular dynamic range image and make it look flat by reducing the contrast, raising the blacks a bit, perhaps shifting the gamma – that’s easy. But that doesn’t increase the dynamic range that is captured. Changing the capture range of a camera typically requires fundamental changes to the way it operates rather than simple tweaks to the basic picture settings.

So we went through a period where shooting a flat looking image with a DSLR was the trendy way to shoot, because on a normal TV or monitor the recorded image is reminiscent of the image from a true digital cinema camera shooting log, even though in practice the “flat look” was often damaging the image rather than improving it.

Now there are many digital cinema cameras that can capture a very big dynamic range using log encoding, and these images look washed out and flat on a normal monitor or TV because of the mismatch between the camera and the monitor, not because the captured scene is flat. But we still (wrongly) call this shooting flat!

Why? In many cases people like to leave the image this way as they like this “incorrect” look. Flat is trendy, it’s fashionable, at least to those inside the TV and Video production world. I’m not sure that the wider general audience really understands why their pictures look washed out.

If you have a monitor with high dynamic range display capabilities that can show a large dynamic range, such as an Atomos Shogun Flame or Inferno, then you’ll know that if you feed it log, set the display range to HDR and choose the right gamma curve, the picture on the screen is no longer flat – it’s bright and contrasty. This isn’t a LUT or any other cheat. The monitor is simply showing the image with a range much closer to the capture range, and now it looks right again.

This is a high dynamic range image. View it on an HDR TV set to HDR10 and it will be brilliantly bright, highly colorful and full of contrast. On a regular TV or monitor it looks flat and washed out because the regular TV can’t show it properly.

So next time you use the term “shooting flat”, think very carefully about what it actually means and whether you are really shooting flat, or whether it’s simply a case of using the wrong monitor. Using words or terms like this incorrectly causes all kinds of problems. For example most people think that log footage is flat and that that’s how it’s supposed to look. But it isn’t flat and it’s not supposed to look flat – we are just using the wrong monitors!

Sony Cash-Back Offer Ends Soon (Europe).

Sony are offering up to £220/€250 cash back on accessories purchased with an FS5, and up to £400/€450 cash back on accessories purchased with an FS7 or FS7M2, if you purchase one before the end of March 2018. So there’s only 2 weeks left to take advantage of this offer!

So if you’re looking at investing in a nice camera kit, with perhaps one of the excellent UWP-D radio mic kits that connect directly to the camera’s MI shoe, or some extra batteries, this might be a great way to get some money back from Sony. There are various terms and conditions, so please take a look at the promotion page for the full details. Here’s a link to the promotion page.

 

Skills and knowledge in TV and video production are not keeping up with the technology.

TV and video production, including digital cinema is a highly technical area. Anyone that tells you otherwise is in my opinion mistaken. Many of the key jobs in the industry require an in depth knowledge of not just the artistic aspects but also the technical aspects.
Almost everyone in the camera department, almost everyone in post production and a large portion of the planning and pre-production crew need to know how the kit we use works.
A key area where there is a big knowledge gap is gamma and color. When I was starting out in this business I had a rough idea of what gamma and gamut were all about. But 10 or more years ago you didn’t really need to know or understand it, because up to then we only ever had variations on 2.2/2.4 gamma. There were very few adjustments you could make to a camera yourself, and if you did fiddle you would often create more problems than you solved. So those things were just best left alone.
But now it’s vital that you fully understand gamma – what it does, how it works and what happens if you have a gamma mismatch. But sadly so many camera operators (and post people) like to bury their heads in the sand, using the excuse “I’m an artist – I don’t need to understand the technology”. Worse still are those that think they understand it but in reality do not, mainly, I think, due to the spread of misinformation and bad practices that become normal. As an example, shooting flat seems to mean something very different today to what it meant 10 years ago. 10 years ago it meant shooting with flat lighting so the editor or color grader could adjust the contrast in post production. Now, shooting flat is often incorrectly used to describe shooting with log gamma (shooting with log isn’t flat, it’s a gamma mismatch that might fool the operator into thinking it’s flat). The whole “shooting flat” misconception comes from the overuse and incorrect use of the term on the internet, until it eventually became the accepted term for shooting with log.
 
As only a very small portion of film makers actually have any formal training, and even fewer go back to school to learn about new techniques or technologies properly, this is a situation that isn’t going to get any better. We are moving into an era where, in the short term at least, we will need to deliver multiple versions of productions in both standard dynamic range and several different HDR versions, while also saving the programme master in another intermediate format. Things are only going to get more complicated, and more and more mistakes will be made as technology is applied and used incorrectly.
Most people are quite happy to spend thousands on a new camera, new recorder or new edit computer. But then they won’t spend any money on training to learn how to get the very best from it. Instead they will surf the net for information and guides of unknown quality and accuracy.
When you hire a crew member you have no idea how good their knowledge is. As it’s normal for most not to have attended any formal courses, we don’t ask for certificates and we don’t expect them. But they could be very useful. Most other industries that benefit from a skilled labour force have some form of formal certification process, but ours does not, so hiring crew or booking an editor becomes a bit of a lottery.
Of course it’s not all about technical skills. Creative skills are equally important. But again it’s hard to prove to a new client that you have such skills. Showreels are all too easy to fake.
Guilds and associations are a start. But many of these can be joined simply by paying the joining or membership fee. You could be a member of one of the highly exclusive associations such as the ASC or BSC, but even that doesn’t mean you know about technology “A” or technique “Z”.
We should all take a close look at our current skill sets. What is lacking? Where do I have holes? What could I do better? I’ve been in this business for 30 years and I’m still learning new stuff almost every day. It’s one of the things that keeps life interesting. Workshops and training events can be hugely beneficial and they really can lead to you getting better results. Or it may simply be that a day of training gives you the confidence that you are doing it right. They are also great opportunities to meet like-minded people and network.
Whatever you do, don’t stop learning, but beware the internet: not everything you read is right. The key is to not just read and then do, but to read, understand why, ask questions if necessary, then do. If you don’t understand why, you’ll never be able to adapt the “do” to fit your exact needs.

Should I shoot 8 bit UHD or 10 bit HD?

This comes up so many times, probably because the answer is rarely clear cut.

First, let’s look at exactly what the difference between an 8 bit and a 10 bit recording is.
Both will have the same dynamic range. Both will have the same contrast. Both will have the same color range. One does not necessarily have more color or contrast than the other. The only guaranteed difference is the number of code values. An 8 bit video-range recording has a maximum of around 235 code values per channel, giving roughly 13 million possible tonal values. A 10 bit recording has up to around 970 code values per channel, giving up to roughly 912 million tonal values.
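The arithmetic behind those figures is simple: the tonal value count is the per-channel code value count cubed, because R, G and B each get the same number of steps. A quick sketch (the 235 and 970 figures are the approximate video-range code value counts quoted above):

```python
def tonal_values(code_values_per_channel):
    """Total RGB combinations for a given per-channel step count."""
    return code_values_per_channel ** 3

# 8 bit video-range recording: roughly 235 usable code values per channel.
print(f"8 bit:  {tonal_values(235):,}")   # 12,977,875 – roughly 13 million

# 10 bit video-range recording: roughly 970 usable code values per channel.
print(f"10 bit: {tonal_values(970):,}")   # 912,673,000 – roughly 912 million
```

So each extra pair of bits multiplies the number of possible tonal values by roughly 64, even though the range those values span stays exactly the same.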
 
There is a lot of talk of 8 bit recordings resulting in banding because there are only 235 luma shades. This is a bit of a half truth. It is true that if you have a monochrome image there would only be 235 steps. But we are normally making colour images so we are typically dealing with 13 million tonal values, not simply 235 luma shades. In addition it is worth remembering that the bulk of our current video distribution and display technologies are 8 bit – 8 bit H264, 8 bit screens etc. There are more and more 10 bit codecs coming along as well as more 10 bit screens, but the vast majority are still 8 bit.
Compression artefacts cause far more banding problems than too few steps in the recording codec. Most codecs use some form of noise reduction to help reduce the amount of data that needs to be encoded, and this can result in banding. Many codecs divide the image data into blocks, and the edges of these small blocks can lead to banding and stepping.
 
Of course 10 bit can give you more shades. But then 4K gives you more shades too. So an 8 bit UHD recording can sometimes have more shades than a 10 bit HD recording. How is this possible? In UHD each object in the scene is sampled with twice as many pixels in each direction. Imagine a gradient that spans 4 pixels in UHD. In UHD you will have 4 samples and up to 4 steps. In HD those same pixels cover only 2 samples, so you get at most 2 steps: the HD image might show a single big step while the 4K image may have 4 smaller ones. It all depends on how steep the gradient is and how it falls relative to the pixels. It then also depends on how you will handle the footage in post production.
 
So it is not as clear cut as often made out. For some shots with lots of textures 4K 8 bit might actually give more data for grading than 10 bit HD. In other scenes 10 bit HD might be better.
 
Anyone who is getting “muddy” results in 4K compared to HD is doing something wrong. Going from 8 bit 4K to 10 bit HD should not change the image contrast, brightness or color range. The images shouldn’t really look significantly different. Sure, the 10 bit HD recording might show some subtle textures a little better, but then the 8 bit 4K might have more texture resolution.
 
My experience is that both work and both have pros and cons. I started shooting 8 bit S-Log when the Sony PMW-F3 was introduced 7 years ago and have always been able to get great results, provided it is exposed well. 10 bit UHD would be preferable, I’m not suggesting otherwise (at least 10 GOOD bits are always preferable), but 8 bit works too.