Those of you who follow me on Facebook will know that recently I have been travelling a lot. A couple of days ago I arrived in Dubai and I have been staying on a pretty high floor of the Dusit Thani hotel. I didn’t ask for a room with a view, but I got one. From my bedroom window I could see the iconic Burj Khalifa tower and parts of one of Dubai’s major roads. I also had my FX30 with me, so I felt I should take advantage of this view and shoot a time-lapse going from day to night.
Easy Peasy Lemon Squeezy!
Fortunately this is a pretty easy thing to do with the FX30. I didn’t use the camera’s video modes; instead I used it in the “P” program auto photo mode. In this mode the camera automatically sets the aperture and shutter speed to suit the available light levels. As the light level decreases the aperture opens up until it can’t open any more, and then the shutter speed becomes longer.
So, all YOU need to do is determine the ISO at which you want to shoot. I chose 125 ISO (I used picture profile 11 – S-Cinetone) as this gives the lowest possible noise level, and for shots at night it forces the shutter speed to become quite long as the light levels fall. The longer shutter then causes the lights of any cars on the roads to blur and form pleasing trails.
To shoot the sequence of still frames that would ultimately be turned into a video clip I used the FX30’s built in time-lapse photo mode (Menu – Shooting – Drive Mode – Interval Shoot Function). I set the start time to 1 sec, which is the minimum and means the camera will start shooting the sequence 1 second after you press the shutter release. I set the shooting interval to 3 seconds and the number of shots to 3000 as this would cover the full duration of the day to night shot that I wanted (about 2.5 hours).
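If you want to sanity check the interval settings before committing to a multi-hour shoot, the arithmetic is straightforward: the interval multiplied by the number of shots gives the capture time, and the number of shots divided by the playback frame rate gives the length of the finished clip. A quick sketch using my settings (the 25 fps playback rate is just an example, use whatever frame rate you will deliver at):

```python
# Rough time-lapse planning: how long the capture takes and how long the clip will be.
interval_s = 3         # seconds between shots
num_shots = 3000       # total frames to capture
playback_fps = 25      # example delivery frame rate (an assumption, not a camera setting)

capture_hours = (interval_s * num_shots) / 3600
clip_seconds = num_shots / playback_fps

print(f"Capture time: {capture_hours:.1f} hours")      # 2.5 hours
print(f"Finished clip: {clip_seconds:.0f} seconds")    # 120 seconds at 25 fps
```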
To power the camera for a couple of hours I used my Macbook Pro’s power supply with a USB-C cable going to the FX30’s USB-C port. As an alternative you could also use a powerbank that has a USB-C PD (USB-C Power Delivery) port.
To position the camera I used a soft pillow (I didn’t have a tripod with me). I used manual focus and double and triple checked the focus with the lens wide open to ensure it was sharp.
A common issue when shooting through a window is reflections of objects inside the room, or light from within the room falling on the often dirty window. Unless the room’s curtains are black, closing the curtains doesn’t help as the outside light tends to reflect back off the curtains onto the window. To prevent this I wrapped a couple of black T-shirts around the camera and lens to block any light from reflecting off the window, and kept the room lights off.
All that was then left was to press the shutter release and allow the camera to take the images that would make up the sequence. I shot both raw and jpeg. The jpegs would allow me to very quickly preview the end result (and in fact the jpegs were used for the video linked here). The raw frames can be used when you need the very highest quality and will give you greater grading flexibility compared to the 8 bit jpegs.
Once the sequence was shot I then dropped the jpegs into a DaVinci Resolve project. Resolve will bring in sequentially numbered jpeg and tiff files as a single video clip, so editing and grading is easy. I haven’t yet worked on the raw files, but my workflow with these normally involves using Photoshop to adjust and grade a single frame and then using Adobe Bridge to batch process and export all the frames as tiff files with the same grading settings.
All in all it took me about 15 to 20 minutes to set the camera up. Most of that was time spent figuring out how best to place the black shirts to prevent reflections. Then I went out for dinner while the camera shot the sequence over a couple of hours, and finally I spent about 45 minutes doing a bit of an animation and a few colour tweaks in Resolve. Because the FX30 still frames are 6.2K x 4.1K there is plenty of resolution to crop in a bit and create a move within the image, even when delivering in 4K. So, for very little actual time spent, I got a quite nice little time-lapse sequence.
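As a rough illustration of the crop headroom, here is a tiny sketch using the approximate 6.2K still width above and a 3840 pixel wide UHD delivery (the exact pixel counts will vary slightly):

```python
# Approximate push-in available when cropping a 6.2K-wide still to a UHD timeline.
still_width = 6200        # approx. width of the FX30 stills, from the 6.2K figure above
delivery_width = 3840     # UHD delivery width

print(f"Maximum punch-in before upscaling: {still_width / delivery_width:.2f}x")  # about 1.6x
```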
The Sony FX30 is really growing on me. I also own the FX3, the FX6 and the FX9. But when I am travelling the FX30 is now my go-to camera. When combined with the 18-105 power zoom lens you have a low cost and lightweight package that really does deliver great looking images. The 6K oversampled to 4K recordings have a texture and quality to them that I find really pleasing. In the Venice workshop we did here in Dubai we put my FX30 side by side with the Venice and the audience members were quite shocked by how close they are. But then this is the whole point of the cinema line – to provide a range of cameras to suit all budgets and a vast range of applications that all look more or less the same.
Of course the Venice image is that bit better, the 16 bit encoding and X-OCN make the footage a delight to grade and the textures in the deepest shadows are clearer and finer. The way Venice handles highlights is just that little bit better. All around there are very subtle things about the Venice image that are better. But the FX30 really does produce a remarkably good image for very little money.
Over the last 2 weeks I have been shooting some tests for a major feature film. The tests involved a special process that uses infrared light and shooting outdoors.
On the test day we had some fairly bright light levels to deal with. So, as you would normally do, we added some ND filtration to reduce the light levels. Most of the equipment for the shoot was on hire from Panavision, the main cameras being Panavised Sony Venices with PV70 mounts and Panavision lenses. But for reasons I can’t go into yet, we were unable to use the Venice internal ND filters, so we had to use external NDs.
The first NDs we used were circular Tiffen IRNDs that were the correct size for the PV lenses. But much to my surprise these made very little difference to the amount of IR reaching the camera. For our application they were absolutely no good. Fortunately, I had a set of Formatt Hitech IRNDs in my camera bag and when we tried these we got an equal visible and infrared cut. So, the Tiffens were put back in their boxes and the Formatt filters used instead.
Back at Panavision we did some further testing and found that both the Tiffen and Schneider IRNDs that we tested had very little IR cut. But the Formatt Hitech and Panavision IRNDs that we tested cut the IR by a very similar amount to the visible light. In addition we were able to test the Venice’s built in ND filters and found that these too did a very good job of cutting both IR and visible light by similar amounts.
So, my recommendation is – if you are ever concerned about infrared light contaminating your images, use a Venice 2 with its built in NDs, or Panavision or Formatt Hitech IRNDs.
If you are using Zebras to measure the exposure of a log gamma curve you should consider using a narrower Zebra window.
Why?
From middle grey to white (50% to 90%) in the world of standard dynamic range Rec-709 each stop occupies approximately 16% of the recording range. Typically the default zebra window or zebra range used by most cameras is 10% (often +/- 5%). So, when Zebras are set to 70% they will appear at 65% and go away at 75%. For Rec-709 and most conventional SDR gammas this window or range is around 3/4 of a stop, so less than 1 full stop and generally reasonably accurate.
But with most Cineon based log curves, such as Sony’s S-Log3, between middle grey and white (41% to 61%) each stop only occupies around 8% of the recording range, half the range used by Rec-709. As a result, if you use a default 10% zebra window the zebras will appear over a roughly 1.2 stop range. This is excessive and introduces a large margin of exposure error; compared to Rec-709 the zebras will only be half as precise, especially if you are trying to measure the brightness of a grey card or white card.
I recommend reducing the width of the Zebra window to 6% when using Zebras to measure skin tones within the S-Log3 image (if measuring a Rec-709 LUT there is no need to change the window). This will then give a similar range and accuracy to a 10% window in 709. If you are using zebras to measure a white card or grey card then consider bringing the zebra window down to 2% to gain a more accurate reading of the white/grey card.
FX6 (left) and FX3 (right) zebras set to measure S-Log3 white card exposure.
The zebra window or range can normally be adjusted in the camera’s menu under the zebra settings. On the Sony Alphas and the FX3/FX30 you can adjust the range of the C1 and C2 custom zebras.
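To put some numbers on why the window needs to be narrower for log, here is a small sketch using the approximate figures above (roughly 16% of the recording range per stop around the mid tones in Rec-709 and roughly 8% per stop for S-Log3 – ballpark values, not exact curve measurements):

```python
# Approximate width of a zebra window expressed in stops of exposure.
def window_in_stops(window_percent, percent_per_stop):
    return window_percent / percent_per_stop

# Ballpark figures from the text above: ~16%/stop for Rec-709 mid tones, ~8%/stop for S-Log3.
print(window_in_stops(10, 16))   # Rec-709, default 10% window -> ~0.6 stops
print(window_in_stops(10, 8))    # S-Log3, default 10% window  -> ~1.25 stops
print(window_in_stops(6, 8))     # S-Log3, 6% window           -> 0.75 stops
print(window_in_stops(2, 8))     # S-Log3, 2% window           -> 0.25 stops
```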
This tripped me up recently and I really should know better.
Don’t mix wireless and cabled microphones with differing amounts of latency, because if you do you may end up with a nasty, difficult to remove echo or phase issues in your audio.
Digital + Analog don’t mix well.
In my particular case I was using a couple of Sony UWP-D wireless microphones to mic up two of the three members of a discussion panel. For the third member I had planned to use another UWP-D, but that microphone became unavailable at the last minute, so instead I used a lower cost digital microphone that works on the 2.4GHz band. There is absolutely nothing fundamentally wrong with this lower cost microphone, but the digital processing and transmission adds a very slight delay to the audio.
The Sony UWP-Ds are extremely low latency (delay) microphones and the audio arrives at the camera almost instantly. However most of the lower cost digital microphones have a very slight delay. That delay may be 1 frame or less, but there is still a delay. So the audio from the digital microphone arrives at the camera slightly late. If this is the only microphone you are using this isn’t an issue. But if you mix a very low latency microphone with one that has a very slight delay, and both mics pick up any of the same sounds in the background, there will be an echo or possibly a phase issue.
As the delay is almost never exactly 1 frame this can be difficult to resolve in most normal video post production suites where you can only shift things in 1 frame increments.
Phase Issues:
Phase issues occur when the audio from one source arrives very slightly out of sync with the other, so that one source cancels out certain frequencies of the other when the two are mixed together. This can make the audio sound thin or have a reduced frequency response.
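As an illustration of why even a tiny delay matters: mixing a signal with a delayed copy of itself cancels the frequencies where the delay equals half a wavelength (and its odd multiples). The 5 ms delay in the sketch below is an arbitrary example, not a measurement of any particular microphone:

```python
# Comb filtering illustration: frequencies cancelled when a signal is mixed
# with a copy of itself delayed by a few milliseconds.
delay_ms = 5.0                       # example delay only, not a measured value
delay_s = delay_ms / 1000.0

# Notches occur at f = (2k + 1) / (2 * delay)
notches = [(2 * k + 1) / (2 * delay_s) for k in range(5)]
print([f"{f:.0f} Hz" for f in notches])   # 100, 300, 500, 700, 900 Hz for a 5 ms delay
```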
So… don’t mix different types of digital wireless microphones and don’t mix lower cost digital microphones with more expensive low latency microphones. And when you are checking and monitoring your audio listen to a full mix of all your audio channels. If you monitor the channels separately the echo or any phasing issues might not be heard.
A fundamental aspect of electronic cameras is that the bulk of the noise comes from the sensor. So the amount of noise in the final image is mostly a function of the amount of light you put on to the sensor versus the noise the sensor produces (which is more or less constant). This is known as the signal to noise ratio, often abbreviated to SNR.
Whether you use S-Log3 or S-Cinetone, even though the base ISO number the camera displays changes, the sensitivity of the camera is actually the same – after all, we are not changing the sensor when we change modes. In fact, if you set the camera to dB you will see that in custom mode the base for both S-Cinetone and S-Log3 (and every other gamma curve) is always 0dB.
All we are changing when we switch between S-Cinetone and S-Log3 is the gamma curve – which is a form of gain curve. The base ISO number changes between S-Log3 and S-Cinetone because if you were using an external light meter this would be the number to put into the meter to get the “correct” exposure, but the actual sensitivity of the camera remains the same.
First let’s think about what is happening at the base ISO of each if we were to use an external light meter to set the exposure…..
If we shoot at S-Cinetone and use the 320 ISO value in the light meter the aperture will be a little over a stop more open than if you shoot with S-Log3 and use 800 ISO for the light meter. So when using S-Cinetone at the base ISO there is a little over twice as much light going on to the sensor compared to S-Log3 at the base ISO and as a result the S-Cinetone will be much less noisy than the S-Log3. Not because of a sensitivity or noise performance difference but simply because you are exposing the sensor more brightly.
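The “little over a stop” figure comes straight from the ratio of the two ISO numbers:

```python
import math

# Stops of difference between the two metering ISO values used above.
s_cinetone_iso = 320
s_log3_iso = 800

stops = math.log2(s_log3_iso / s_cinetone_iso)
print(f"{stops:.2f} stops")   # about 1.3 stops, i.e. a little over one stop
```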
And if we use the SAME ISO value for S-Cinetone and S-Log3?
So now think about what might happen if you were to put 400 ISO into your light meter, use the shutter and aperture values the meter gives, and shoot with either S-Cinetone or S-Log3 using those very same aperture and shutter settings so that the same amount of light is hitting the sensor for both. The result will be that the amount of noise in the resulting image will be broadly similar for both, and the same would happen if you were to use, let’s say, 4000 ISO (assuming you switch to the high base for both).
There will tend to be a bit more noise in S-Log3 shot in CineEI at the default settings, because by default noise reduction is turned off in CineEI. But with the same in camera NR settings, both the S-Log3 and the S-Cinetone will again have very, very similar noise levels when the sensor receives the same amount of light.
What about when there isn’t enough light?
So – when you are struggling for light, both will perform similarly from a noise point of view. BUT where there may be a difference is that with S-Cinetone all your image processing is done before the image is compressed by the codec, and what you see in the viewfinder is what you get. With S-Log3 the “underexposed” image gets compressed and then has to be processed in post, and any post corrections are applied to the recorded image plus its compression artefacts, so there will always be some uncertainty as to how the final image will come out.
Personally I tend to favour S-Cinetone for under exposed situations. Generally, if it’s under exposed, dynamic range isn’t going to be an issue. S-Cinetone also spreads what image information you do have over a greater range of code values than S-Log3, and this may also help a little. But there is no right or wrong way and any differences will be small.
There seems to be a huge misunderstanding about what timecode is and what timecode can do. I lay most of the blame for this on manufacturers that make claims such as “Our Timecode Gadget Will Keep Your Cameras in Sync” or “by connecting our wireless time code device to both your audio recorder and camera everything will remain in perfect sync”. These claims are almost never actually true.
What is “Sync”.
First we have to consider what we mean when we talk about “sync” or synchronisation. A dictionary definition would be something like “the operation or activity of two or more things at the same time or rate.” For film and video applications if we are talking about 2 cameras they would be said to be in sync when both start recording each frame that they record at exactly the same moment in time and then over any period of time they record exactly the same number of frames, each frame starting and ending at precisely the same moment.
What is “Timecode”.
Next we have to consider what timecode is. Timecode is a numerical value that is attached to each frame of a video recording (or laid alongside an audio recording in an audio device) to give it a time value in hours, minutes, seconds and frames. It is used to identify individual frames and each frame must have a unique numerical value. Each successive frame’s timecode value MUST be “1” greater than the frame before (I’m ignoring drop frame for the sake of clarity here). A normal timecode stream does not feature any form of sync pulse or sync control, it is just a number value.
Controlling the “Frame Rate”.
And now we have to consider what controls the frame rate that a camera or recorder records at. The frame rate the camera records at is governed by the camera’s internal sync or frame clock. This is normally a circuit controlled by a crystal oscillator. It’s worth noting that these circuits can be affected by heat, and at different temperatures there may be very slight variations in the frequency of the sync clock. Also, this clock starts when you turn the camera on, so the exact starting moment of the sync clock depends on the exact moment the camera is switched on. If you were to randomly turn on a bunch of cameras their sync clocks would all be running out of sync. Even if you could press the record button on each camera at exactly the same moment, each would start recording the first frame at a very slightly different moment in time depending on where in the frame rate cycle the sync clock of each camera is. In higher end cameras there is often a way to externally control the sync clock via an input called “Genlock”. Applying a synchronisation signal to the camera’s Genlock input will pull the camera’s sync clock into precise sync with the sync signal and then hold it in sync.
And the issue is………..
Timecode doesn’t perform a sync function. To SYNCHRONISE two cameras or a camera and audio recorder you need a genlock sync signal and timecode isn’t a sync signal, timecode is just a frame count number. So timecode cannot synchronise 2 devices. The camera’s sync/frame clock might be running at a very slightly different frame rate to the clock of the source of the time code. When feeding timecode to a camera the camera might already be part way through a frame when the timecode value for that frame arrives, making it too late to be added, so there will be an unavoidable offset. Across multiple cameras this offset will vary, so it is completely normal to have a +/- 2 frame (sometimes more) offset amongst several cameras at the start of each recording.
And once you start to record the problems can get even worse…
If the camera’s frame clock is running slightly faster than the clock of the TC source then perhaps the camera might record 500 frames but only receive 498 timecode values – So what happens for the 2 extra frames the camera records in this time? The answer is the camera will give each frame in the sequence a unique numerical value that increments by 1, so the extra frames will have the necessary 2 additional TC values. And as a result the TC in the camera at the end of the clip will be an additional 2 frames different to that of the TC source. The TC from the source and the TC from the camera won’t exactly match, they won’t be in sync or “two or more things at the same time or rate”, they will be different.
The longer the clip that you record, the greater these errors become as the camera and TC source drift further apart.
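To get a feel for the size of these errors, you can estimate the drift from the tiny difference between the camera’s frame clock and the TC source’s clock. The clock error in the sketch below is an assumed, illustrative value – real crystal oscillators are typically specified in tens of parts per million:

```python
# Rough estimate of timecode drift between a camera and an external TC source
# whose frame clocks run at very slightly different rates.
nominal_fps = 25.0
clock_error_ppm = 20          # assumed camera clock error, purely illustrative
record_seconds = 60 * 60      # one hour of continuous recording

camera_fps = nominal_fps * (1 + clock_error_ppm / 1_000_000)
drift_frames = (camera_fps - nominal_fps) * record_seconds
print(f"Drift after one hour: {drift_frames:.1f} frames")   # about 1.8 frames
```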
Before you press record on the camera, the camera’s TC clock will follow the external TC input. But as soon as you press record, every recorded frame MUST have a unique new numerical value 1 greater than the previous frame, regardless of what value is on the external TC input. So the camera’s TC clock will count the frames recorded. And the number of frames recorded is governed by the camera’s sync/frame clock, NOT the external TC.
So in reality the ONLY way to truly synchronise the timecode across multiple cameras or audio devices is to use a sync clock connected to the GENLOCK input of each device.
Connecting an external TC source to a camera’s TC input is likely to result in much closer TC values for both the audio recorder and camera(s) than no connection at all. But don’t be surprised if you see small 1 or 2 frame errors at the start of clips, due to the exact timing of when the TC number arrives at the camera relative to when the camera starts to record the first frame, and then possibly much larger errors at the ends of clips; these errors are expected and normal. If you can’t genlock everything with a proper sync signal, a better way to do it is to use the camera as the TC source and feed the TC from the camera to the audio recorder. Audio recorders don’t record in frames, they just lay the TC values alongside the audio. As an audio recorder doesn’t need to count frames, the TC values will always be in the right place in the audio file to match the camera’s TC frame count.
CineEI is different to conventional shooting and you will need to think differently.
Shooting using CineEI is a very different process to conventional shooting. The first thing to understand about CineEI and log is that the number one objective is to get the best possible image quality with the greatest possible dynamic range, and this can only be achieved by recording at the camera’s base sensitivity. If you add in-camera gain you add noise and reduce the dynamic range that can be recorded, so ideally you always want to record at the camera’s base sensitivity for the best possible captured image.
Sony call their system CineEI. On an Arri camera the only way to shoot log or raw is using Exposure Indexes, and it’s the same with Red, Canon and almost every other digital cinema camera when shooting log. You always record at the camera’s base sensitivity because this will deliver the greatest dynamic range.
Post Production.
A key part of any log workflow is the post production. Without a really good post production workflow you will never see the best possible results from shooting log. An important part of the post production workflow will be correcting for any exposure offsets used when shooting. If something has been exposed very brightly, then in post you will bring that exposure down to a normal level. Bringing the levels down in post will decrease noise. The flip side to this is that if the exposure is very dark then you will need to raise the levels in post and this will make them noisier.
Exposure and Light Levels.
It is assumed that when using CineEI and shooting with log you will control the light levels in your shots, using combinations of aperture, ND and shutter speed to get levels suitable for the recording ISO (base ISO) of the camera – again, it’s all about getting the best possible image quality. If lighting a scene you will light for the base ISO of the camera you are using.
Here’s the bit that’s different:
Changing the EI (Exposure Index) allows you to tailor where the middle of your exposure range is. It allows you to alter the balance between more highlight range with less shadows or less highlight range with more shadow information in the captured image. On a bright high contrast exterior you might want more highlight range, while for a dark moody night scene you might want more shadow range. Exposing brighter puts more light on to the sensor. More light on the sensor will extend the shadow range but decrease the highlight range. Exposing darker will decrease the shadow range but also allow brighter highlights to be captured without clipping.
IMPORTANT: EI is NOT the same as ISO.
ISO is a measure of a film stock’s or camera sensor’s SENSITIVITY to light. It is the measure of how strongly the camera’s sensor responds to light.
Exposure Index is a camera setting that determines how bright the image will become for a given EXPOSURE. While it is related to sensitivity it is NOT the same thing and should always be kept distinct from sensitivity.
ISO = Sensitivity and a measure of the sensor’s response to light.
EI = Exposure Index – how bright the image seen in the viewfinder will be.
The important bit to understand is that EI is an exposure rating, not a sensitivity rating. The EI is the number you would put into a light meter for the optimum EXPOSURE for the type of scene you are shooting. The EI that you use depends on your desired shadow and highlight ranges as well as how much noise you feel is acceptable.
What Actually happens when I change the EI value on a Sony camera?
On a Sony camera the only things that change when you alter the EI value are the brightness of any Look Up Tables (LUTs) being used, the EI value indicated in the viewfinder and the EI value recorded in the metadata that is attached to your clip.
Importantly – to actually see a change in the viewfinder image or the image on an external monitor you must be viewing your images via a LUT, as the EI changes the LUT brightness; changing the EI does not on its own change the way the S-Log3 is recorded or the sensitivity of the camera. If you are not viewing via a LUT you won’t see any changes when you change the EI values, so for CineEI to work you must be monitoring via a LUT.
Raising and Lowering the EI value:
When you raise the EI value the LUT will become brighter. When you lower the EI value the LUT will become darker.
If we were to take a camera with a base ISO of 800 then a nominal “normal” exposure would result from using 800 EI. When the base ISO value and the EI value are matched, then we can expect to get a “normal” exposure.
The S-Log3 levels that you will get when exposed correctly and the EI value matches the camera’s base ISO value. Note you will have 6 stops of range above middle grey and 8+ stops below middle grey.
Let’s now look at what happens when we use EI values higher or lower than the base ISO value.
(Note: One extra stop of exposure is the equivalent of doubling the ISO or EI. One less stop of exposure is the equivalent of halving the ISO or EI. So if you double 800 EI to get 1600 EI, that is 1 stop higher. If you double 1600 EI again to get 3200 EI, that is one further stop higher. So 800 EI to 3200 EI is 2 stops higher.)
If you were to use a higher EI, let’s say 3200 EI, two stops higher than the base 800 EI, then the LUT will become 2 stops brighter.
If you were using a light meter you would enter 3200 into the light meter.
When looking at this now 2 stops brighter viewfinder image you would be inclined to close the aperture by 2 stops (or add ND/shorter shutter) to bring the brightness of the viewfinder image back to normal. The light meter would also recommend an exposure that is 2 stops darker.
Because the recording sensitivity or base ISO remains the same no matter what the EI, the fact that you have reduced your exposure by 2 stops means that the sensor is now receiving 2 stops less light, even though the recording sensitivity has not changed.
Shooting like this, using a higher EI than the base ISO will result in less light hitting the sensor which will result in images with less shadow range and more noise but at the same time a greater highlight range.
The S-Log3 levels that you will get when the EI value is 2 stops higher than the camera’s base ISO value and you have exposed 2 stops darker to compensate for the brighter viewfinder image. Note how you now have 8 stops above middle grey and 6+ stops below. The final image will also have more noise.
A very important thing to consider here is that this is not what you normally want when shooting darker scenes – you normally want less noise and more shadow range. So with CineEI you would normally try to shoot a darker, moody scene with an EI lower than the base ISO.
In this chart we can see how at 800 EI there are 6 stops of over exposure range and 9 stops of under. At 1600 EI there will be 7 stops of over range and 8 stops of under, and the image will also be twice as noisy. At 400 EI there are 5 stops over and 10 stops under and the noise will be halved.
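Using the figures in that chart (6 stops over and 9 stops under at the 800 base), the split at any other EI follows directly from how many stops the EI is above or below the base ISO. A quick sketch, assuming those base figures:

```python
import math

# Over/under exposure range at a given EI, relative to an 800 base ISO.
# The 6-over / 9-under split at base is taken from the chart above.
BASE_ISO = 800
OVER_AT_BASE = 6
UNDER_AT_BASE = 9

def exposure_range(ei):
    offset = math.log2(ei / BASE_ISO)          # stops above (+) or below (-) the base
    return OVER_AT_BASE + offset, UNDER_AT_BASE - offset

for ei in (400, 800, 1600, 3200):
    over, under = exposure_range(ei)
    print(f"{ei} EI: {over:.0f} stops over, {under:.0f} stops under")
```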
This goes completely against most people’s conventional exposure thinking.
For a darker scene or a scene with large shadow areas you actually want to use a low EI value. So if the base ISO is 800 then you might want to consider using 400 EI. 400 EI will make the LUT 1 stop darker. Enter 400 EI into a light meter and compared to 800 the light meter will recommend an exposure that is 1 stop brighter. When seeing an image in the viewfinder that is 1 stop darker you will be inclined to open the aperture or reduce the ND to bring the brightness back to a normal level.
This now brighter exposure means you are putting more light on to the sensor, more light on the sensor means less noise in the final image and an increased shadow range. But, that comes at the loss of some of the highlight range.
The S-Log3 levels that you will get when the EI value is 2 stops lower than the camera’s base ISO value and you have exposed more brightly to compensate for the darker viewfinder image. Note how you now have 4 stops above middle grey and 10+ stops below. The final image will have less noise.
Need to think differently.
The CineEI mode and log are not the same as conventional “what you see is what you get” shooting methods. CineEI requires a completely different approach if you really want to achieve the best possible results.
If you find the images are too dark when the EI value matches the recording base ISO, then you need to open the aperture, add light or use a faster lens. Raising the EI to compensate for a dark scene is likely to create more problems than it will fix. It might brighten the image in the viewfinder, making you think all is OK, but on your small viewfinder screen you won’t see the extra noise and grain that will be in the final images once you have raised your levels in post production. Using a higher EI and not paying attention could result in you stopping down a touch to protect some blown out highlight or to tweak the exposure when this is probably the last thing you actually want to do.
I’ve lost count of the number of times I have seen people cranking up the EI to a high value thinking this is how you should shoot a darker scene only to discover they can’t then make it look good in post production. The CineEI mode on these cameras is deliberately kept separate from the conventional “custom” or “SDR” mode to help people understand that this is something different. And it really does need to be treated differently and you really do need to re-learn how you think about exposure.
For dark scenes you almost never want to use an EI value higher than the base ISO value, and often it is better to use a lower EI value as this will help ensure you expose any shadow areas sufficiently brightly.
The CineEI mode in some regards emulates how you would shoot with a film camera. You have a single film stock with a fixed sensitivity (the base ISO). Then you have the option to expose that stock brighter (using a lower EI) for less grain, more shadow detail and less highlight range, or to expose it darker (using a higher EI) for more grain, less shadow detail and more highlight range. Just as you would do with a film camera.
Sony’s CineEI mode is not significantly different from the way you shoot log or raw with an Arri camera. Nor is it significantly different to how you shoot raw on a Red camera – the camera shoots at a fixed sensitivity and any changes to the ISO value you make in camera are only actually changing the monitoring brightness and the clips metadata.
Exposing more brightly on purpose to achieve a better end result is not “over exposure”. It is simply brighter exposure. Over exposure is generally considered to be a mistake or undesirable, but exposing more brightly on purpose is not a mistake.
Sony rate the ND filters in most of their cameras using a fractional value such as 1/4, 1/16, 1/64 etc.
These values represent the amount of light that can pass through the filter, so a 1/4 ND lets 1/4 of the light through. 1/4 is the equivalent of 2 stops (1 stop = 1/2, 2 stops = 1/4, 3 stops = 1/8, 4 stops = 1/16, 5 stops = 1/32, 6 stops = 1/64, 7 stops = 1/128).
These fractional values are actually quite easy to work with in conjunction with the camera’s ISO rating.
If you want to quickly figure out what ISO value to put into a light meter to discover the aperture/shutter needed when using the camera with the built in ND filters, simply take the camera’s ISO rating and multiply it by the ND value. So 800 ISO with 1/4 ND becomes 800 x 1/4 = 200 (or you can do the maths as 800 ÷ 4). Put 200 in the light meter and it will tell you what aperture to use for your chosen shutter speed.
If you want to figure out how much ND to use to get an equivalent overall ISO rating (camera ISO and ND combined), you take the ISO of the camera and divide it by the ISO you want, and this gives you a value “x” which is the fraction in 1/x. So if you want 3200 ISO then take the base ISO of 12800 and divide by 3200, which gives 4, so you want 1/4 ND at 12800.
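Both calculations are easy to wrap up in a couple of lines; the function names and the example values below are just for illustration:

```python
import math

# Effective ISO to put in the light meter when using a fractional ND: ISO x fraction.
def metering_iso(camera_iso, nd_fraction):
    return camera_iso * nd_fraction

# Fraction of light an ND must pass to bring a base ISO down to a target rating.
def nd_fraction_needed(base_iso, target_iso):
    return target_iso / base_iso

print(metering_iso(800, 1/4))            # 200 -> the value to put in the light meter
print(nd_fraction_needed(12800, 3200))   # 0.25, i.e. a 1/4 ND
print(math.log2(4))                      # 2.0 -> a 1/4 ND is 2 stops
```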
This is a common problem and something people often complain about. It may be that the LCD screen of their camera and the brightness of the image on their monitor don’t ever seem to quite match. Or after the shoot and once in the grading suite the pictures look brighter or darker than they did at the time of shooting.
A little bit of background info: most of the small LCD screens used on video cameras are SDR Rec-709 devices. If you were to calibrate the screen correctly the brightness of white on the screen would be 100 nits. It’s also important to note that this is the same level used for monitors designed to be viewed in dimly lit rooms such as edit or grading suites, as well as TVs at home.
The issue with uncovered LCD screens and monitors is that your perception of brightness changes according to the ambient viewing light levels. Indoors in a dark room the image will appear to be quite bright. Outside on a sunny day it will appear to be much darker. It’s why all high end viewfinders have enclosed eyepieces – not just to help you focus on a small screen, but also because that way you are always viewing the screen under the very same, always dark, viewing conditions. It’s why a video village on a film set will be in a dark tent. This allows you to calibrate the viewfinder with white at the correct 100 nit level, and then when viewed in a dark environment your images will look correct.
If you are trying to use an unshaded LCD screen on a bright sunny day you may find you end up over exposing as you compensate for the brighter viewing conditions. Or if you also have an extra monitor that is either brighter or darker you may become confused as to which is the right one to base your exposure assessments on. Pick the wrong one and your exposure may be off. My recommendation is to get a loupe for the LCD, then your exposure assessment will be much more consistent as you will then always be viewing the screen under the same near ideal conditions.
It’s also been suggested that perhaps the camera and monitor manufacturers should make more small, properly calibrated monitors. But I think a lot of people would be very disappointed with a properly calibrated but uncovered display where white would be 100 nits, as it would be too dim for most outside shoots. Great indoors in a dim room such as an edit or grading suite, but unusably dim outside on a sunny day. Most smaller camera monitors are uncalibrated and place white 3 or 4 times brighter, at 300 nits or so, to make them more easily viewable outside. But because there is no standard for this there can be great variation between different monitors, making it hard to know which one to trust depending on the ambient light levels.
Sadly this is not an uncommon problem. Suddenly, and seemingly for no apparent reason, the SDI output on your camera stops working. And this isn’t a new problem either – SDI ports have been failing ever since they were first introduced. This issue affects all types of SDI ports, but it is more likely with higher speed SDI ports such as 6G or 12G as they operate at higher frequencies, and as a result the components used are more easily damaged as it is harder to protect them without degrading the high frequency performance.
Probably the most common cause of an SDI port failure is the use of the now near ubiquitous D-Tap cable to power accessories connected to the camera. The D-Tap connector is sadly shockingly crudely designed. Not only is it possible to plug many of the cheaper ones in the wrong way around, but with a standard D-Tap plug there is no mechanism to ensure that the negative or “ground” connection of the D-Tap cable makes or breaks before the live connection. There is, however, a special but much more expensive D-Tap connector available that includes electronic protection against this very issue – see: https://lentequip.com/products/safetap
Imagine for a moment you are using a monitor that’s connected to your camera’s SDI port. You are powering the monitor via the D-Tap on the camera’s battery as you always do and everything is working just fine. Then the battery has to be changed. To change the battery you have to unplug the D-Tap cable, and as you pull the D-Tap out the ground connection disconnects fractionally before the live connection. During that moment there is still positive power going to the monitor, but because the ground on the D-Tap is now disconnected the only ground route back to the battery is via the SDI cable through the camera. For a fraction of a second the SDI cable becomes the power cable and that power surge blows the SDI driver chip.
After you have completed the battery swap, you turn everything back on and at first all appears good, but now you can’t get the SDI output to work. There’s no smoke, no burning smells, no obvious damage as it all happened in a tiny fraction of a second. The only symptom is a dead SDI.
And it’s not only D-Tap cables that can cause problems. A lot of the cheap DC barrel connectors have a center positive terminal that can connect before the outer barrel makes a good connection. There are many connectors where the positive can make before the negative.
It can also happen when powering the camera and monitor (or other SDI connected devices like a video transmitter) via separate mains adapters. The power outputs of most of the small, modern, generally plastic bodied switch mode power adapters and chargers are not connected to ground. They have a positive and negative terminal that “floats” above ground at some unknown voltage. Each power supply’s negative rail may be at a completely different voltage compared to ground. So again an SDI cable connected between two devices powered by different power supplies will act as the ground between them, and power may briefly flow down the SDI cable as the SDI cable’s ground brings both power supply negative rails to the same common voltage. Failures this way are less common, but do still occur.
For these reasons you should always connect all your power supplies, power cables and especially D-Tap or other DC power cables first. Then, while everything remains switched off, connect the SDI cables. Only when everything is connected should you turn anything on. If unplugging or re-plugging a monitor (or anything else for that matter) turn everything off first. Do not connect or disconnect anything while any of the equipment is on. To be honest, the greatest risk is at the moment you connect or disconnect any power cables, such as when swapping a battery where you are using the D-Tap to power any accessories. So if changing batteries, switch EVERYTHING off first, then disconnect your SDI cables before disconnecting the D-Tap or other power cables.
(NOTE: It’s been brought to my attention that Red recommend that after connecting the power, but before connecting any SDI cables you should turn on any monitors etc. If the monitor comes on OK, this is evidence that the power is correctly connected. There is certainly some merit to this. However this only indicates that there is some power to the monitor, it does not ensure that the ground connection is 100% OK or that the ground voltages at the camera and monitor are the same. By all means power the monitor up to check it has power, then I still recommend that you turn it off again before connecting the SDI).
The reason Arri talk about shielded power cables is because most shielded power cables use connectors such as Lemo or Hirose where the body of the connector is grounded to the cable shield. This helps ensure that when plugging the power cable in it is the ground connection that is made first and the power connection after. Then when unplugging the power breaks first and ground after. When using properly constructed shielded power cables with Lemo or Hirose connectors it is much less likely that these issues will occur (but not impossible).
Is this an SDI fault? No, not really. The fault lies in the choice of power cables that allow the power to make before the ground, or the ground to break before the power. Or the fault is with power supplies that have poor or no ground connection. Additionally you can put it down to user error. I know I’m guilty of rushing to change a battery and pulling a D-Tap connector without first disconnecting the SDI on many occasions, but so far I’ve mostly gotten away with it (I have blown an SDI on one of my Convergent Design Odysseys).
If you are working with an assistant or as part of a larger crew, do make sure that everyone on set knows not to plug or unplug power cables or SDI cables without checking that it’s OK to do so. How many of us have set up a camera, powered it up, got a picture in the viewfinder and then plugged an SDI cable between the camera and a monitor that doesn’t yet have a power connection, or is already on and plugged into some other power supply? Don’t do it! Plug and unplug in the right order – ALL power cables and power supplies first, check power is going to the camera, check power is going to the monitor, then turn it all off, and finally plug in the SDI.