Filmmaking workshops with London Film School, March 2025

London Film School Workshop

I’m running a couple of filmmaking workshops for London Film School in association with Sony. These workshops are aimed at new content creators and anyone who wishes to improve their production skills. They will cover the whole filmmaking process, from concept to storyboard to production, with sessions on project development, camera and lighting techniques, as well as sound. One of the workshops will be held over 3 evenings and the other over a weekend.

For the weekend course (22/23 March) click here.

For the evening course (18 and 25 March, 1 April) click here.

Osee Gostream Duet vision mixer and streaming box.

I’m sure many of my readers will be familiar with the Blackmagic Design ATEM Minis. But how many of you have looked into the alternatives? I’m in the process of putting together a mobile production unit that can be used in my camper van or transported in a few small flight cases. I was going to use one of the ATEM Minis, but they only have a single HDMI output, which means you can’t have both a multiview output and a high quality program output, perhaps for a screen or projector, at the same time. So that led me to look at some of the alternatives, and the one I finally settled on is the Osee Gostream Duet.

Osee Gostream Duet switcher and streaming device.



No one asked me to do this review, and I purchased the device based on the manufacturer’s spec sheet from Amazon, just as anyone else would. This review exists simply because I really like the product.
The Osee Gostream Duet is a 4/5 channel vision mixer designed for live streaming. It supports 4 HDMI and 4 SDI inputs, plus a further external source which can be a UVC camera via USB-C or an NDI camera over the connected network. You have to choose between SDI or HDMI on each individual input; you don’t get 8 inputs at the same time, but you can have a mix of both SDI and HDMI sources. The ease with which the setup menu can be accessed from the Gostream’s front panel means you could quickly change the input settings during a session if you needed access to an extra source or two.

You get a free NDI licence when you update the unit’s firmware, so as well as the 4 main inputs you could have a camera connected over the network attached to the unit’s Ethernet port. Or you can add another camera via UVC, as the device can accept a UVC input on one of the 2 USB-C ports. In total you can have up to 5 external sources at any one time.

It has a built-in video player able to play back H.264 HD files from an SD card. If you use a suitably fast SD card, such as a V60 or V90 card, it can also record to the same SD card used for playback, or you can attach an SSD or thumb drive to one of the USB-C ports. You can connect it to a Mac or PC where it will be seen as a webcam, making streaming very simple, or the device can stream to 3 separate destinations at the same time. There is a companion program for Mac or Windows to control the unit remotely, and you can also control it via a Stream Deck or similar through the companion software.

It has an upstream keyer as well as a downstream keyer. There’s a DVE that can be used to resize and reposition the input sources and still frames. The DVEs and keyers can be used to create a multi-source “supersource” which can be instantly recalled as needed. You can chroma and luma key with ease, and the setup for the keyers and DVEs can be done without a computer, as you can access the menu system from the unit’s front panel.

All in all I’m really impressed by the Osee Gostream Duet. It doesn’t cost much more than an ATEM Mini, yet it offers a lot more flexibility. Please watch the video to learn more.

Pixels should not be confused with resolution.

Let me start with the definition of “resolution” as given by the Oxford English Dictionary:

“The smallest interval measurable by a telescope or other scientific instrument; the resolving power.
  • the degree of detail visible in a photographic or television image.”

OK, so that seems clear enough: a measurable or visible degree of detail.

Expanding on that a little: when we talk about the resolution of an image file such as a JPEG, TIFF etc, or perhaps an RGB or YCbCr* video frame, a 4K image will normally mean an image 4K pixels wide. It will have 4K of red, 4K of green and 4K of blue values across its width, three lots of 4K stacked on top of each other, so it is capable of containing any colour or combination of colours at every one of those 4K points. In effect, a 4K wide image holds 12K of values across each row.

Now that we know what resolution means and how it is normally used when describing an image, what does it mean when we say a camera has an 8K sensor? Generally it means that there are 8K of pixels across the sensor. In the case of a single sensor used to make a colour image, some of these pixels will be for red, some for green and some for blue (or some other arrangement of colour and clear pixels). But does this also mean that an 8K sensor will be able to resolve 8K of measurable or visible detail? No, it does not.



Typically a single sensor that uses a colour filter array (CFA) won’t be able to resolve fine details and textures anywhere close to its horizontal pixel count. So, to say that a camera with a single 8K or 4K colour sensor can resolve an 8K or 4K image will almost certainly be a lie.

Would it be correct to call that 4K colour sensor a 4K resolution sensor? In my opinion, no, because if we use a Bayer sensor as an example, it will only actually have 2K of green, 1K of red and 1K of blue pixels on any one row. Compare that to a 4K image such as a JPEG, which is made up of 4K of green, 4K of red and 4K of blue values per row. The JPEG has the ability to resolve any colour or combination of colours with 4K precision. That 4K Bayer sensor cannot; it simply doesn’t have sufficient pixels to sample each colour at 4K. In fact it doesn’t even get close.

Clever image processing can take the output from a 4K Bayer sensor and use data from the differing pixels to calculate, estimate or guess the brightness and colour at each point across the whole sensor, but the actual measurable luminance resolution will typically come out at around 0.7x the pixel count, and the chroma resolution will be even lower. So, using the dictionary definition of resolution, the measured or visible detail a 4K Bayer sensor can resolve, we can expect a camera with a 4K-pixels-across Bayer sensor to have a resolution of around 2.8K. Your 4K camera is unlikely to be able to create an image that can truly be said to be 4K resolution.
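
To put some rough numbers on all of that, here is a quick back-of-the-envelope sketch of the per-row sample counts discussed above. The 0.7x figure is the rule of thumb quoted in the text, not a precise constant; real cameras vary with their optical filtering and processing.

```python
WIDTH_4K = 4096  # DCI 4K pixels across; use 3840 for UHD

# An RGB image file (JPEG, TIFF etc) samples every colour at every pixel:
rgb = {"R": WIDTH_4K, "G": WIDTH_4K, "B": WIDTH_4K}  # 3 x 4K = 12K values per row

# A Bayer sensor, averaged over its repeating RGGB pattern, is half green,
# a quarter red and a quarter blue:
bayer = {"R": WIDTH_4K // 4, "G": WIDTH_4K // 2, "B": WIDTH_4K // 4}

print(rgb)    # {'R': 4096, 'G': 4096, 'B': 4096}
print(bayer)  # {'R': 1024, 'G': 2048, 'B': 1024}

# Measurable luma resolution after deBayering, using the ~0.7x rule of thumb:
print(f"~{0.7 * WIDTH_4K:.0f} pixels of measurable luma detail")  # ~2867, i.e. ~2.8K
```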

But the camera manufacturers don’t care about this. They want you to believe that your 4K camera is a 4K resolution camera. While most are honest enough not to claim that the camera can resolve 4K, they are also perfectly happy to let everyone assume that this is what the camera can do. It is also fair to say that most 4K Bayer cameras perform similarly, so your 4K camera will resolve broadly the same as every other 4K Bayer camera, and it will be much higher resolution than most HD cameras. But can it resolve 4K? No, it cannot.

The inconvenient truth that Bayer sensors don’t resolve anywhere near their pixel count is why we see 6K and 8K sensors becoming more and more popular: these sensors can deliver visibly sharper, more detailed 4K footage than a camera with a 4K Bayer sensor can. In a 4K project an 8K camera will deliver close to 4K luma resolution, with chroma resolution not far behind, and as a result your 4K film will tend to have finer and more true-to-life textures. Of course, all of this is subject to other factors such as lens choice and how the signal from the camera is processed, but like for like, an 8K-pixel camera can bring real, tangible benefits to a lot of 4K projects compared to a 4K-pixel camera.

At the same time we are seeing the emergence of alternative colour filter patterns to the tried and trusted Bayer pattern, perhaps adding white (or clear) pixels for greater sensitivity, perhaps arranging the pixels in novel ways. This muddies the water still further, as you shouldn’t directly compare sensors with different colour filter arrays based on the specification sheet alone. When you add more alternately coloured pixels into the array, you force the spacing between each individual colour or luma sample to increase. So you can add more pixels but might not actually gain extra resolution; in fact the resolution might go down. As a result, 12K of one pattern type cannot be assumed to be better than 8K of another type, and vice versa. It is only through empirical testing that you can be sure of what any particular CFA layout can actually deliver; it is unsafe to rely on a specification sheet that simply quotes the number of pixels. And it is almost unheard of for camera manufacturers to publish verifiable resolution tests these days... I wonder why that is?


* YCbCr video, or component video, can be recorded in a number of ways. A full 4:4:4 4K YCbCr image will have 4K of Y (luma, or brightness), a full 4K of the blue colour difference (Cb) and a full 4K of the red colour difference (Cr). The colour difference values are a more efficient way to encode the colour data, so the data takes less room, but just like RGB there are 3 samples for each pixel within the image. Within a post production workflow, if you work in YCbCr the image will normally be processed and handled as 4:4:4.

For further space savings, many YCbCr systems can, if desired, subsample the chroma; this is where we see terms such as 4:2:2. The first digit represents the luma, and the 4 means every pixel has a discrete value. The 2:2 means the chroma is only sampled on every other pixel across each line, so the horizontal chroma resolution is halved (4:2:0 goes further, also skipping every other line). This saves space and is generally transparent to the viewer, as our eyes have lower chroma resolution than luma resolution.

But it is important to understand that 4:2:2, 4:2:0 etc are normally only used in recording systems in cameras, where saving storage space is considered paramount, or in broadcast and distribution systems and codecs, where reducing the required bandwidth can be necessary. SDI and HDMI signals are typically passed as 4:2:2. The rest of the time, YCbCr is normally 4:4:4. If we compare 4K 4:2:2 YCbCr, which is 4K Y x 2K Cb x 2K Cr, to a 4K Bayer sensor, which has 2K G, 1K R and 1K B, it should be obvious that even after processing and reconstruction the image derived from a 4K Bayer sensor won’t match or exceed the luma and chroma resolutions that can be passed via 4:2:2 SDI or recorded by a 4:2:2 codec. What you really want is a 6K, or better still an 8K, Bayer sensor.
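
For anyone who wants to see that comparison as numbers, this small sketch counts the horizontal samples per row of a 4K frame under each scheme (4:2:0’s additional vertical subsampling isn’t captured by a per-row count):

```python
WIDTH = 4096  # luma samples across a 4K row

def chroma_samples(j: int, a: int) -> dict:
    """Horizontal samples per row for a J:a:b subsampling scheme.
    (The b digit only affects vertical subsampling, so a per-row
    count like this ignores it.)"""
    return {"Y": WIDTH, "Cb": WIDTH * a // j, "Cr": WIDTH * a // j}

print("4:4:4", chroma_samples(4, 4))  # Y 4096, Cb 4096, Cr 4096
print("4:2:2", chroma_samples(4, 2))  # Y 4096, Cb 2048, Cr 2048

# Raw per-colour counts of a 4K Bayer sensor, for comparison:
print("Bayer", {"G": WIDTH // 2, "R": WIDTH // 4, "B": WIDTH // 4})
```

Even 4:2:2’s halved chroma still carries twice the red and blue samples of the Bayer sensor’s raw counts, which is exactly the point being made above.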

Burano Version 2 Gains More Features

Although it hasn’t been released yet, Sony have made it clear that the version 2 firmware update for Burano will include more additional features than previously stated. Version 2 will be a huge update, in particular adding the ability to shoot at up to 120fps in near full frame 4K.
Here’s the information from Sony, along with some of my own thoughts:


As previously announced, Version 2.0 will include new recording formats, including a new 3.8K Full Frame crop that leverages nearly the entire sensor and can shoot up to 120 fps and a 1.9K mode that can shoot up to 240 fps. These new recording modes allow the filmmaker to prioritize faster sensor performance depending on the needs of their application. Other new recording formats include the addition of 24.00 fps to X-OCN 16:9 imager modes and the following:

From Alister: It’s important to understand that whenever you take a sensor and optical filter system designed for a higher resolution and use it at a lower resolution, you will almost always see an increase in aliasing and moiré. So, with Sony reading out almost the full 8K sensor at 4K, there will likely be a greater risk of aliasing and moiré issues in the 3.8K full frame crop mode. However, Burano likely uses the same sensor as the Sony A1’s 8K sensor. The A1 also has a near full frame 3.8K scan mode and it actually does a very good job of controlling aliasing and moiré; in general the 3.8K full frame footage from the A1 looks very good, and hopefully Burano will be similar. There can be times when you will see more moiré over certain patterns or textures, and aliasing can sometimes be seen on certain hard edges, but it is very well controlled.

Full Frame 3.8K 16:9 mode: up to 120 fps, XAVC and X-OCN

Super 35 4.3K 4:3 mode (for anamorphic): up to 60 fps, X-OCN only

Super 35 1.9K 16:9 mode: up to 240 fps, XAVC only

BURANO Version 2.0 will also add a 1.8x de-squeeze setting as well as additional high frame rate (S & Q) modes, including up to 66, 72, 75, 88, 90, 96, and 110 fps. It will also add proxy recording for 24.00 fps recording formats.

BURANO Version 2.0: Monitoring, SDI, and Metadata Improvements

From Alister: I don’t know what these improvements are yet; it will be interesting to see how they are implemented.

In addition to the new recording formats, Version 2.0 offers various monitoring and metadata improvements, including standardized SDI video output for monitoring across X-OCN and XAVC. It adds breathing compensation and image stabilization metadata in X-OCN, and time code and clip name metadata to the SDI output.

Based on feedback from BURANO users, Version 2.0 will offer an improved on-screen display that places camera status information outside of the image and also includes View Finder Gamma Display Assist while using S-Log3 for monitoring.

Version 2.0 will add 24V output to the PL Mount Voltage menu. In addition, it adds compatibility with Focus/Iris/Zoom control for PL Mount lenses while using the BURANO’s optional GP-VR100 handgrip.

BURANO Version 2.0: Improved Image Output and Added Exposure Tools

BURANO Version 2.0 will also include several image output improvements, including enhanced image output when using the preset S-Log3 look or 3D User LUTs. Additionally, Version 2.0 will enhance Auto Focus performance when recording with the following frame rates: 23.98, 24, 25, and 29.97.

Version 2.0 also includes additional exposure tools (High/Low Key) derived from the flagship VENICE camera system. It will also expand white balance memory presets from 3 to 8 and support Active/High Image Stabilization in Full-Frame crop 6K and Super 35 1.9K 16:9 imager modes.

From Alister: High/Low Key is such a useful tool for checking what is going on in the shadows and highlights of a shot; it’s a shame more of Sony’s cameras don’t have it.

In addition, BURANO Version 2.0 will improve ease of use functionality with the ability to format media from the status screen as well as set CAM ID and Reel Number, which is standard for documentary and reality TV applications.

Version 2.0 will also change the factory default frequency setting from 59.94 to 23.98p and will add a “reset to factory defaults” setting.

Finally, BURANO Version 2.0 will add live event and multicam functionality, including variable ND control from RCPs, improved camera control from Camera Remote SDK, and tally control for devices connected via LAN.

Availability

The new BURANO Version 2.0 is planned for release in March 2025. Filmmakers can easily download the update directly to their camera using a Mac or PC. For more information, visit sonycine.com or follow @sonycine on Instagram.

Shooting Against Windows

This came up as a question on one of the user groups I follow: how do you shoot something happening inside a room when you also want to retain the view outside the room, and how do you track the changing exterior light level and colour temperature?

I’m going to assume that this isn’t a scenario where you can add silks or flags to reduce the light on the exterior elements that need to be seen. This type of shoot is always challenging. Generally the amount of light outside during the day will be very high, especially in summer or when the sky is clear. If the windows are relatively small then they won’t let much light into the interior, so we are going to have to do something to address the imbalance between the interior and exterior light.
If the space has large picture windows, then more natural light will come into the room, but with so much glass it may be difficult to prevent interior elements such as the cast or performers being silhouetted against the bright windows and exterior. So, what can we do?
A high dynamic range camera does help, especially if the window is small and can be isolated in post production for separate treatment from the rest of the shot. But often even a camera with a huge range doesn’t really help, because the contrast in the shot is just too extreme to look nice. For a small window, adding a little diffusion via a filter such as a 1/4 or 1/8 Black Pro-Mist or Supermist can help soften the edges of the otherwise bright window, making the image look less forced and less digital.

One approach is to add ND film to the windows to reduce the amount of light coming through them. This is very effective and reduces the amount of light that needs to be added to the interior, but it can be expensive or impractical with very large windows. You can be super fancy and add a polariser film to the window and then another polariser on the camera; by turning the polariser on the camera you can quickly adjust how much of the exterior is visible! Instead of ND film you can also add black scrim (a mesh-like material with lots of holes in it), which has some advantages: it tends to be a bit cheaper than ND gel and it isn’t shiny like ND.
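
As an aside, the reason the stacked polariser trick works is Malus’s law: the exterior light, polarised by the window film, is passed in proportion to cos²θ as you rotate the camera-side polariser, while the unpolarised interior light just takes a roughly constant cut. A minimal sketch of the ideal values (real filter materials absorb a little extra light):

```python
import math

def exterior_transmission(angle_degrees: float) -> float:
    """Malus's law: fraction of the window-film-polarised light passed
    by the camera polariser rotated by the given angle (ideal filters)."""
    return math.cos(math.radians(angle_degrees)) ** 2

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:2d} deg: {exterior_transmission(angle):.2f} of the exterior light")
# 0 deg passes everything, 90 deg (crossed) passes almost nothing, so the
# camera polariser acts like a variable ND on the windows only.
```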

Even with large wrap-around windows, without ND or mesh over the glass the exterior light levels are likely to be significantly higher than the interior levels, and it typically takes a significant amount of additional light to balance this. If you don’t achieve a good balance then the exterior will be overexposed or the interior in silhouette no matter how much dynamic range the camera has, and simply stopping down or adding ND won’t help.

The exterior colour temperature probably won’t vary that much during the day, other than in the early morning or late afternoon, unless the weather changes significantly. The angles of the exterior shadows will shift as the sun moves across the sky, as will the amount of sunlight entering the space, but there is nothing you can do about that if you need to see the exterior. Dimmable lights will allow you to control the interior/exterior brightness balance, and bi-colour lights should be sufficient to track the exterior colour temperature changes throughout the day. If you are using multiple lights that can be remotely controlled and there is an app for them, you can probably create a group for all the lights, allowing quick global changes of colour temperature and intensity. Or perhaps you can get them all connected via DMX.

You can use a colour meter to measure the colour temperature of the light coming through the windows, compare this to the colour temperature of the lights, and adjust the lights to match the exterior. You should then re-white balance the camera off a grey card or white card whenever you make a colour temperature change to the lights.

You can also use a vectorscope to help get the colour temperature balance correct. White balance the camera off a white card illuminated only by the light coming through the window, so the camera is matched to the exterior light. Then, with the same white card now facing the interior lights, adjust the lights until the white point (the blob in the middle of the vectorscope) is back in the centre, matching where it was when you white balanced for the exterior. Use the waveform scope to keep an eye on your exposure throughout the day and keep it consistent, but bear in mind that the angle of the sun will change, so the exterior contrast, and possibly the interior contrast, will likely change too.

This type of shot will always be challenging to pull off well. I normally use large soft lights to lift the overall interior light level, something like the Nanlux Dynos or perhaps some Nanlux Evokes with softboxes. On a budget you might be able to get away with some Forza 720s or FC-500Bs with softboxes. The number of lights you will need will depend on the size of the space, the size of the windows and the time of year you are shooting.

Great deals on Sony Buranos – Z-Systems

Z-Systems of Minneapolis in the USA have some great deals on the Burano camera. They have used, ex-demo and A-Grade cameras in stock and ready to ship, with prices starting at $21,000 USD.

I know the guys at Z-Systems very well, and one of the really great things about buying from them is that they really do understand the products they sell. Their in-house engineer Keith Mullen is a full-on, very knowledgeable camera geek and he’ll be able to help get you up and running if necessary.

For used Buranos from $21,000.00 click here.

For an ex-demo Burano at $22,499.00 click here.

And for brand new units at $24,999.00 click here.


Earth Ritual with the GFX100 II, One Year On

This time last year I was preparing to shoot the amazing “Earth Ritual” performance by the Of The Wild Ecological Circus Collective. I have been incredibly fortunate over the last few years to have been involved with various circus acts and performers, from Glastonbury to traditional travelling circus. I filmed this performance using the then new Fujifilm GFX100 II. The GFX100 II is a large format camera with a 102 megapixel sensor that has an area around 1.7x greater than a full frame sensor (approx 1.3x wider/taller). The sensor can be used in many different ways, for example using its full width to shoot 4K, or shooting 8K with a frame size just fractionally smaller than full frame. For Earth Ritual I used the large format 4K mode with the large format Fujifilm GF 55mm f/1.7 lens for an extremely shallow depth of field.
I really like the way this camera looks. I shot using F-Log, but it also has a number of different film simulations built in. Since I shot the film there have also been firmware updates to improve its autofocus. There is a more in-depth write-up about the shoot here.

Fujifilm Eterna digital cinema camera

Since I shot this, Fujifilm have also announced that this year they will be releasing a large format cinema camera based on the GFX100 II. The Fujifilm GFX Eterna looks like it will be a really interesting camera, especially if you are interested in larger than full frame formats. You can find more information about the Eterna here.

I used Nanlite and Nanlux lights to light this, mostly a Forza 720B with a projector lens and gobo, plus a number of Nanlite PavoTube lights for the background.

One thing about all of the circus people I know is that they are passionate about what they do and put a huge amount of effort into delivering entertaining performances. Yet circus is often seen as something seedy or second rate; I can assure you that most contemporary circus is hugely entertaining, whether that’s a beautifully artistic performance, a funny comedy sketch or a show just for grown-ups. So, next time a circus comes to your town, go and see a show. And I hope to bring you more circus later in the year.

Colour Management In Adobe Premiere Pro

30 Years of Rec-709.

Most of us are familiar with Rec-709. It is the colourspace for standard dynamic range television, introduced in the early 1990s. Most of us have been working with and delivering Rec-709 content for the past 30 years.

Because Rec-709 has been around for so long, the majority of the editing and grading software that we use defaults to Rec-709. Rec-709 has a limited dynamic range and a limited colourspace. It was designed around the TV technologies available in the 1990s, when cathode ray tube TVs were still dominant and LCD screens were nowhere near as good as they are today.

But things are changing. 

Most good quality phones, tablets, laptops and TVs on sale today support bigger colourspaces than Rec-709, and most can now display high dynamic range (HDR) images. Streaming services such as Netflix, Amazon and Hulu are all streaming HDR content. Even YouTube and Vimeo fully support HDR. Eventually HDR will become completely normal and Rec-709 will fade into obscurity.
But right now we are in a transition period where a lot of content is still mastered and delivered in SDR while, at the same time, there is a need to deliver more and more HDR content.

Colour Management.

One way to make this easy is to use colour management in your editing and grading software. In a colour managed workflow the editing/grading software will try to determine the colourspace of your source material by reading the metadata contained in the file. Then it will determine the required delivery colourspace, usually based on the setup of your monitor, or you can tell the software which colourspace you want to deliver your files in.

The software then converts from the source colourspace to the final output colourspace. The end result is that, regardless of the colourspace of your source footage, it will look correct on your monitor, with the correct contrast and colour saturation. All your grading adjustments are applied to your footage in its original state, which maximises the final image quality. This also makes it possible to change from one output colourspace to another without having to change the grade; one grade can be used for many different output colourspaces. All you need to do is change the output colourspace setting.
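
As a purely conceptual sketch of that chain (the function names here are hypothetical, not Premiere’s internals): the grade sits in the middle, so changing the output transform never touches it.

```python
# Hypothetical colour managed pipeline; all names are illustrative only.

def to_working_space(frames, source_space):
    """Convert footage from its detected source colourspace (read from
    the file's metadata) into the working colourspace."""
    ...

def apply_grade(frames, grade):
    """All creative adjustments happen here, independent of the output."""
    ...

def from_working_space(frames, output_space):
    """Convert the graded image to the chosen delivery colourspace."""
    ...

def render(clip, grade, output_space):
    frames = to_working_space(clip.frames, clip.source_space)
    frames = apply_grade(frames, grade)
    return from_working_space(frames, output_space)

# One grade, many deliverables - only the output transform changes:
# sdr_master = render(clip, my_grade, "Rec-709")
# hdr_master = render(clip, my_grade, "Rec-2100 HLG")
```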

OK, it isn’t always quite as simple as that: you might need to make some fine-tuning adjustments to your grade if you switch your output between HDR and SDR. But overall, if you grade something correctly for HDR and then switch to an SDR output, it should still look pretty good, and this greatly simplifies your workflow if you need to deliver in both HDR and SDR.

S-Log3 is not, and never has been, flat!
 
S-Log3 and other log formats should never look flat. The only time they look flat is when there is a mismatch between the colourspace of the material and the colourspace it is being viewed in.
 
S-Log3 in a Rec-709 project looks flat because of this mismatch; this is NOT how it is supposed to look. But because in the early days most software only supported Rec-709, S-Log3 always looked wrong, and as a result looked flat. In legacy workflows where there is no colour management, the most commonly used solution to make S-Log3 look right is to add a LUT. The LUT transforms the S-Log3 to Rec-709 (perhaps adding a creative look at the same time) so that the contrast and colour look correct.
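
To make that transform concrete, here is a minimal sketch of just the two transfer curves involved, using Sony’s published S-Log3 formula and the standard Rec-709 OETF. A real S-Log3 to Rec-709 LUT also performs the gamut conversion (e.g. S-Gamut3.Cine to Rec-709 primaries) and usually some highlight roll-off, both deliberately omitted here:

```python
def slog3_to_linear(code: float) -> float:
    """Sony's published S-Log3 to linear reflection formula;
    'code' is the normalised 0-1 signal level."""
    if code >= 171.2102946929 / 1023.0:
        return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    return (code * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def linear_to_rec709(lin: float) -> float:
    """Standard Rec-709 OETF, with no knee or creative look applied."""
    lin = max(0.0, min(1.0, lin))
    return 4.5 * lin if lin < 0.018 else 1.099 * lin ** 0.45 - 0.099

# Middle grey (18% reflectance) sits at around 41% in S-Log3, which is
# why ungraded S-Log3 looks washed out on a Rec-709 display:
mid_grey_code = 420.0 / 1023.0
print(slog3_to_linear(mid_grey_code))                    # ~0.18
print(linear_to_rec709(slog3_to_linear(mid_grey_code)))  # ~0.41 in Rec-709
```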

Need To Deliver In HDR.
 
But we are now starting to need to deliver content in a lot more standards than just Rec-709. HDR10, HLG, Rec-2020 etc are now common, and LUTs designed to go from S-Log3 to Rec-709 are no use for these alternative colourspace outputs.
 
So, most of the editing and grading software platforms we use are moving to colour managed workflows, where the software detects the colourspace of the source material as well as the colourspace you are monitoring in, and then automatically transforms the source material to the correct output colourspace. This eliminates the need for LUTs, and your footage will always look correct. All your grading adjustments are applied directly to the source material without being restricted to any particular colourspace, and you can change the output colourspace depending on what you need to deliver, without changing the grade for each alternative output.
 
In the latest version of Premiere Pro colour management is normally disabled by default, but some earlier versions enabled it by default.

Enabling Colour Management.
 
To enable colour management in Premiere Pro 2025 (version 25.0 and later), go to the Lumetri panel and, under “Settings” > “Project”, enable (tick) the “log auto detect colour space” setting. Now Premiere Pro will detect the colourspace your footage was shot in and transform it to your project’s output colourspace. It is also worth noting that there is an additional setting that allows you to select the correct viewing gamma for the desired output, whether you are making content for the web or for broadcast TV.
 


If you want to deliver in HDR, a bit further down in the settings you will also find the ability to change the timeline’s working colourspace. Here you can quickly change the timeline from Rec-709 to Rec-2100 PQ (HDR10) or Rec-2100 HLG for HDR. Changing this setting will change the working and output colourspace. Note that if the colourspace of your monitor does not match these settings the image will not be correct; for example, if you are using a Rec-709 monitor and you select Rec-2100 PQ, the viewer images will appear flat.
 

If you are using a computer with a colour managed HDR screen, for example a MacBook, you will also need to allow Premiere to manage the computer’s Display Colour Management so that the viewers are displayed in the correct colourspace, and then, if the monitor is capable of displaying HDR, enable Extended Dynamic Range monitoring in the Lumetri settings under “Preferences”.


Be aware that if you allow Premiere to manage your colourspace this way, you will no longer be able to use the majority of the LUTs designed for S-Log3, as these LUTs include a colourspace transformation that you no longer need or want. But now you are free to deliver in both SDR and HDR without having to create different grades for each.

The colour management in Premiere is still somewhat basic, but it does work. It is, however, very difficult to use the colour management and LUTs at the same time. Personally I much prefer the colour management in DaVinci Resolve, which has a lot more options, including the ability to add additional colourspace transformations as part of the individual grade applied to each clip. This lets you use LUTs designed for a huge range of different colourspaces within an overall colour managed project.


Copying a LUT from a Sony FX camera into DaVinci Resolve.

A question that comes up quite a bit is: how do I get the LUT I have been using in the camera into DaVinci Resolve?

There are two parts to this. The first is getting the LUT you are using in the camera out of the camera. Perhaps you want to export the s709 LUT, or perhaps some other LUT.

To export a LUT from the camera you can use the embedded LUT option that is available when using the Cine EI mode. If you turn on “Embedded LUT” and record a clip, the camera will save the LUT on the SD card under:

FX3/FX30 – private – M4ROOT – GENERAL – LUT folder.

FX6/FX9 – private – XDROOT – GENERAL – LUT folder.

Then, to get a LUT into DaVinci Resolve, the easy way is to go to the colour management page of the Resolve preferences and scroll down to the “Open LUT Folder” button, which will open the LUT folder. Copy your LUT into this folder, then click on the “Update Lists” button. Your LUT will now be available to use in Resolve.
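
If you’d rather script the copy, a minimal sketch along these lines would do it. Treat the paths as assumptions to adapt: the card path follows the folder structure above, and the Resolve folder shown is the macOS default (on Windows it is normally C:\ProgramData\Blackmagic Design\DaVinci Resolve\Support\LUT).

```python
import shutil
from pathlib import Path

# Adjust both paths for your own card reader and OS.
card_lut_dir = Path("/Volumes/Untitled/private/M4ROOT/GENERAL/LUT")  # FX3/FX30 card
resolve_lut_dir = Path("/Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT")

for lut in card_lut_dir.glob("*.cube"):
    shutil.copy2(lut, resolve_lut_dir / lut.name)
    print(f"Copied {lut.name}")

# Remember to click "Update Lists" in Resolve afterwards so it rescans the folder.
```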

CVP Brussels Technology Showcase


I’ll be in Brussels for CVP’s Technology Showcase event on the 11th of December. I’ll be presenting a session on monitoring, delving into what type of monitor you might need for shooting, as well as for post production, in both SDR and HDR workflows.
There are also sessions on lighting, hybrid production, audio and broadcast production. As well as the workshop sessions, there will be a large selection of some of the very latest equipment from a wide range of manufacturers on show.

For more details and to register for this free event (which includes a social evening with drinks and food) please click here.