Tag Archives: recording

SD Cards – how long do they last?

This came up on Facebook the other day: how long do SD cards last?

First of all – I have found SD cards to be pretty reliable overall. Not as reliable as SxS cards or XQD cards, but pretty good generally. The physical construction of SD cards has let me down a few times, the little plastic fins between the contacts breaking off. I’ve had a couple of cards that have just died, but I didn’t lose any content as the camera wouldn’t let me record to them. I have also had SD cards that have given me a lot of trouble getting content and files off them. But compared to tape, I’ve had far fewer problems with solid state media.

But something that I don’t think most people realise is that a lot of solid state media ages the more you use it. In effect it wears out.

There are a couple of different types of memory cell that can be used in solid state media. High end professional media will often use single level cells that are either on or off. Each of these cells can only store a single value, but they tend to be fast and extremely reliable due to their simplicity, although you need a lot of them to make a big memory card. The other type of cell, found in most lower cost media, is the multi-level cell. A multi-level cell stores a voltage, and the level of that voltage represents one of many different values, so each cell can store more than a single value. The memory cells are insulated to prevent the stored charge leaking away, but each time you write to a cell that insulation is eroded slightly. Over time the cell can become leaky, allowing the voltage it holds to drift and with it the data it represents. This can lead to data corruption.
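To make that a little more concrete, here is a tiny, purely illustrative Python sketch. The normalised voltage range and the bits-per-cell figures are assumptions for illustration, not any manufacturer’s real specification, but it shows why packing more bits into each cell leaves a smaller margin between charge levels, and so less tolerance for leakage:

```python
# Purely illustrative: more bits per cell means more charge levels must fit
# into the same voltage range, so the gap between adjacent levels shrinks
# and a small amount of leakage is more likely to change the stored value.

def levels_and_margin(bits_per_cell, voltage_range=1.0):
    levels = 2 ** bits_per_cell             # distinct charge levels the cell must hold
    margin = voltage_range / (levels - 1)   # gap between adjacent levels
    return levels, margin

for bits in (1, 2, 3):                      # single-level, 2-bit and 3-bit cells
    levels, margin = levels_and_margin(bits)
    print(f"{bits} bit(s) per cell: {levels} levels, margin {margin:.2f} of the range")
```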

So multi-level cards that get used a lot may develop leaky cells. But if the card is read reasonably soon after it was written to (days, weeks, perhaps a month) then it is unlikely that the user will experience any problems. The cards include circuitry designed to detect problem cells and then avoid them, but over time the card can reach a point where it no longer has enough spare memory to keep mapping out damaged cells, or the cells lose their charge so quickly that the data becomes corrupt.
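As a very rough mental model of that remapping process (a toy example only, not how any real SD card controller actually works), something like this sketch captures the idea of the spare capacity running out:

```python
# Toy model of the idea described above: the card quietly remaps worn-out
# cells to spares, but once the spares are gone there is nowhere left to
# move the data.

class ToyCard:
    def __init__(self, spare_blocks):
        self.spares = spare_blocks
        self.remapped = 0

    def block_wore_out(self):
        """Called when the controller detects a failing block."""
        if self.spares > 0:
            self.spares -= 1
            self.remapped += 1
            return "remapped to a spare block, user never notices"
        return "no spares left, data in this block is at risk"

card = ToyCard(spare_blocks=3)
for event in range(5):
    print(f"wear event {event + 1}: {card.block_wore_out()}")
```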

Raspberry Pi computers, which use an SD card as their main storage, can kill SD cards in a matter of days because of the extremely high number of times the card may be written to.

With a video camera it will depend on how often you use the cards. If you only have one or two cards and you shoot a lot, I would recommend replacing the cards yearly. If you have lots of cards, either use one or two and replace them regularly, or cycle through all the cards you have to spread the wear and avoid subjecting any one card to so much use that it becomes less reliable than the rest.

One thing regular SD cards are not good for is long term storage (more than a year, and never more than 5 years) as the charge in the cells will leak away over time. There are special write-once SD cards designed for archival purposes where each cell is permanently fused to either on or off. Most standard SD cards, no matter how many times they have been used, won’t hold data reliably beyond 5 years.


Raw and the PXW-FS5

This isn’t a “how to” guide. There are many different recorders that can be used to record raw from the FS5 and each would need its own user guide. This is an overview of what raw is and how raw recording works, to help those that are a bit confused or not getting the best results.

First of all – you need to have the raw upgrade installed on the FS5 and it must be set to output raw. Then you need a suitable raw recorder. Just taking the regular SDI or HDMI output and recording it on an external recorder is not raw.

Raw is data taken directly from the camera’s sensor with very little image processing. It isn’t even a color image: it won’t become color until some external processing, often called “de-Bayer”, converts the raw data to a color image.
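If it helps to picture what “de-Bayer” means, here is a deliberately crude Python/numpy sketch. The RGGB pattern and the 2x2 averaging are illustrative assumptions; the de-Bayer processing in real raw recorders and grading software is far more sophisticated:

```python
import numpy as np

def naive_debayer_rggb(bayer):
    """Turn a single-channel RGGB sensor mosaic into a half-resolution RGB image.
    The crudest possible de-Bayer, purely to show that raw sensor data has no
    color until this step is performed."""
    r  = bayer[0::2, 0::2]            # red photosites
    g1 = bayer[0::2, 1::2]            # green photosites (odd rows)
    g2 = bayer[1::2, 0::2]            # green photosites (even rows)
    b  = bayer[1::2, 1::2]            # blue photosites
    g  = (g1 + g2) / 2.0
    return np.dstack([r, g, b])

mosaic = np.random.rand(8, 8)             # stand-in for raw sensor data, one value per photosite
print(naive_debayer_rggb(mosaic).shape)   # (4, 4, 3) - only now does it have color channels
```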

For raw to work correctly the camera has to be set up just right. On the FS5 you should use Picture Profile 7. Don’t try to use any other profile, and don’t try to shoot without a profile. You must use Picture Profile 7 at its factory default settings. In addition, don’t add any gain or change the ISO from 3200 (2000 ISO from version 4.02 firmware). Even if the scene is a dark one, adding gain will not help and may in fact degrade the recorded image.

White balance is set using the appropriate SGamut + color temperature preset chosen from within Picture Profile 7. There are only three to choose from for S-Gamut, but with a raw workflow you will normally fine tune the white balance in post. No other color matrix or white balance method should be used. Trying to white balance any other way may result in the sensor data being skewed or shifted in a way that makes it hard to deal with later on.

All of the above is done to get the best possible, full dynamic range data off the sensor and out of the camera.

If you are viewing the S-Log2 (i.e. you don’t have viewfinder gamma assist enabled) then the exposure level Sony recommend is to have a white card at 60%, so consider setting the zebras to 60%. Don’t worry that this may look a bit dark or appear to be a low level; that’s the level you should start with. More about exposure later on.

This raw data is then passed down the SDI cable to the external recorder. The external recorder will then process it, turn it into a color signal (de-Bayer) and add a gamma curve so that it can be viewed on the recorder’s screen. Exactly what it will look like on the monitor screen will depend on how the recorder is set up. If the recorder is set to show S-Log2, then the recorder’s screen and the FS5’s LCD should look similar. However, you might find that it looks very different to what you are seeing on the FS5’s LCD screen; this is not unexpected. If the recorder is set up to convert the raw to Rec-709 for display then the image on the recorder will be brighter and show more contrast, in fact it should look “normal”.

Under the surface, however, the external raw recorder is (normally at least) going to be doing one of two things. It’s either going to be recording the raw data coming from the camera as it is, in other words as raw, or it will be converting the raw data to S-Log2 and recording it as a conventional ProRes or DNxHR video file. Either way, when you bring this footage into post production it will normally appear as a flat, low contrast S-Log2 image rather than a bright, contrasty Rec-709 image. So understand that the footage will normally need to be graded or have some other changes made to it to look nice.

Recording the actual raw data will give you the best possible information that you can get from the FS5 to work with in post production. The downside is that the files will be huge and will take a fair amount of processing power to work with. Recording a ProRes or DNxHR video file with S-Log2 gamma is second best. You are throwing away a little image quality (going from 12 bit linear down to 10 bit log), but the files should still be superior to the 8 bit UHD internal recordings, or even an external recording done via HDMI, which is also limited to 8 bit in UHD.
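As a rough illustration of why the raw files are so big, here is a back-of-envelope sketch. The frame size, bit depth and frame rate are assumptions chosen for illustration rather than the exact specification of any particular raw format:

```python
# Rough back-of-envelope data rate for uncompressed raw sensor data.
# Illustrative figures only: UHD frame, 12 bits per photosite, 25 frames/s.

def raw_data_rate_mbps(width, height, bits_per_photosite, fps):
    return width * height * bits_per_photosite * fps / 1_000_000

print(f"~{raw_data_rate_mbps(3840, 2160, 12, 25):.0f} Mb/s before any compression")
# Compare that with a compressed 10 bit log recording of a few hundred Mb/s
# and it is easy to see why raw eats storage and processing power.
```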

Most raw recorders have the ability to apply a LUT – Look Up Table – to the image viewed on the screen. The purpose of the LUT is to convert the S-Log2/raw to a conventional gamma such as Rec-709 so that the picture looks normal. If you are using a LUT then the normal way to do things is to view the normal looking picture on the recorder’s screen while the recorder continues to record S-Log2 or raw. This is useful as the image on the screen looks normal, so it is easier to judge exposure. With a 709 LUT you would expose so that the image on the recorder’s screen looks as bright as normal: skin tones would be the usual 70% (ish) and white would be 90%.
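Conceptually a LUT is nothing more than a table mapping input levels to output levels. The tiny five-point table in this sketch is made up purely for illustration (real viewing LUTs have hundreds or thousands of entries, or are 3D cubes), but it shows the basic operation of lifting a flat log image towards a normal-contrast picture:

```python
import numpy as np

# Made-up 5-point 1D LUT: flat, log-style input levels mapped to brighter,
# more contrasty output levels (normalised 0-1).
lut_in  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
lut_out = np.array([0.0, 0.10, 0.45, 0.85, 1.0])

def apply_lut(image, lut_in, lut_out):
    """Map each pixel value through the table, interpolating between entries."""
    return np.interp(image, lut_in, lut_out)

flat_log_gradient = np.linspace(0.3, 0.7, 5)          # a low-contrast, log-ish ramp
print(apply_lut(flat_log_gradient, lut_in, lut_out))  # stretched towards "normal" contrast
```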

There is a further option and that is to “bake in the LUT”. This means that instead of just using the LUT to help with monitoring and exposure, you actually record the image that you see on the recorder’s screen. This might be useful if you don’t have any time for grading, but… and it’s a big BUT…. you are now no longer recording S-Log2 or raw. You will no longer have the post production grading flexibility that raw or S-Log2 provide, and for me at least this really does defeat the whole point of recording raw.

Exposure: raw will not help you in low light. Raw needs to be exposed brightly (there are some data limitations in the shadows with 12 bit linear raw compared to 16 bit raw and possibly even 10 bit log). If viewing S-Log2 then Sony’s recommendation is to have a white card or white piece of paper at 60%. I consider that to be the absolute minimum level you can get away with. The best results will normally be achieved if you can expose that white card or piece of paper at around 70% to 75% (when looking at an S-Log2 image); skin tones would be around 55%. If you expose like this you may need to use a different LUT on the recorder to ensure the picture doesn’t look over exposed on the recorder’s monitor screen. Most of the recorders include LUTs with offsets for brighter exposures to allow for this. Then in post production you will also want a LUT with an exposure offset to apply to the S-Log2 recordings. You can use the search function (top right) to find my free LUT sets and download them. Exposing that bit brighter helps get around the shadow data limitations of 12 bit linear raw and pushes the image up into the highlights where there is more data.

SEE ALSO: https://www.sony.co.uk/pro/article/broadcast-products-FS5-raw-shooting-tips

 

Can I use 8 bit to record S-Log?

My opinion is that while 8 bit 422 can be used for S-Log, it is not something I would recommend. I’d rather use a cinegamma with 8 bit recording where possible. 10 bit 422 S-Log is another matter altogether; this is well worth using and works very well indeed. It’s not so much whether you use 444, 422 or maybe even 420, but the number of bits that you use to record your output.

What you have to consider is this. With 8 bit you have 240 shades of grey from black to super white. Of the 256 code values available, the first 16 are reserved, white is at 235 and super white at 255, so black to 100% white is a range of only 219. With Rec-709, standard gamma, on an F3 and most other cameras you get about an 8 stop range, so each stop of exposure has about 30 shades of grey. The stops above middle grey, where faces and skin tones sit, have the most shades, often around 50 or more. Then you hit the knee at 90% and each stop only has a handful of shades (which is why over exposure looks bad).

When you go to S-Log you now have around 13 stops of DR (14 with S-Log2 and S-Log3), so each stop above middle grey only has approximately 18 shades. So even before you start to grade, using 8 bit for S-Log can leave your image seriously degraded if there are any flat or near flat surfaces like walls or the sky in your scene.

Now think about how you expose S-Log. Mid grey sits at 38% when you shoot. If you then grade this to Rec-709 for display on a normal TV you are going to stretch the lower end of your image by approximately 30%, so when you stretch your 18 shades of S-Log grey to get to Rec-709 you end up with the equivalent of only around 12 shades of grey for each stop; that’s less than half of what you would have if you had originally shot using Rec-709. I’m sure most of us have at some point seen banding on walls or the sky with standard gammas and 8 bit, so just imagine what might happen if you effectively halve the number of grey shades you have.

By way of contrast, consider that 10 bit has 956 grey shades from black to super white. The first 64 code values are used for sync and other data, 100% white is at code value 940 and super white at 1019. So when shooting S-Log using 10 bit you have about 73 grey shades per stop, a four fold improvement over 8 bit S-Log. Even after shooting S-Log and grading to Rec-709 there are still almost twice as many grey shades as if you had originally shot 8 bit Rec-709.
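Putting the arithmetic from the last few paragraphs in one place (the stop counts are the same approximate figures used above):

```python
# Approximate grey shades per stop, using the standard 8 bit and 10 bit
# video code value ranges and the rough stop counts quoted above.

codes_8bit  = 255 - 16 + 1     # 240 usable values, black (16) to super white (255)
codes_10bit = 1019 - 64 + 1    # 956 usable values, black (64) to super white (1019)

print("8 bit Rec-709, ~8 stops :", codes_8bit // 8, "shades per stop")    # ~30
print("8 bit S-Log,  ~13 stops :", codes_8bit // 13, "shades per stop")   # ~18
print("10 bit S-Log, ~13 stops :", codes_10bit // 13, "shades per stop")  # ~73
```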

This is a bit of an over simplification: during the grading process, if your workflow is fully optimised, you would be grading from 8 bit into a 10 bit project, and there are ways of taking your original 8 bit master and extrapolating additional grey shades from that signal through smoothing or other calculations. But the reality is that 8 bits for a 13 stop dynamic range is really not enough.

The whole reason for S-Log is to give us a way to take the 14 stop range of a high end camera sensor and squeeze as much of that signal as possible into a recording that remains usable and will pass through existing editing and post production workflows without the need for extensive processing such as de-Bayering or RAW conversion. This isn’t too much of a problem if you have a 10 bit recording, but with an 8 bit recording making it work well is challenging. It can be done, but it is not ideal.

Sonnet SDHC to SxS Adapter Review.

I recently reviewed the rather excellent Sonnet QIO I/O device that allows you to very quickly ingest material from SxS cards, P2 cards as well as SD cards to your computer. Along with the QIO I was sent a Sonnet SDHC to SxS card adapter to take a look at. Now I’m going to lay my cards on the table here and say that I strongly believe that if you’re going to shoot with an XDCAM EX camera you should be using SxS cards in order to get the best possible reliability. However, as we all know, SxS cards are expensive, although a lot cheaper now than they used to be; I remember paying £600 for an 8GB card only 4 years ago!

So ever since the launch of the XDCAM EX cameras, users including me have been trying to find alternative recording solutions. I found that it was possible to use an off-the-shelf SD card to ExpressCard adapter (the original Kensington adapter) to record standard frame rates on class 6 SD cards in the EX cameras. However the SDHC cards stick out of the end of the generic adapters so you can’t close the doors that cover the card slots in the cameras. Following that initial discovery various companies have brought out flush fitting adapters that allow the use of SDHC cards. Then about two years ago Sony openly admitted it was possible to use an adapter in the cameras and released their own adapters (MEAD-SD01 and MEAD-MS01), as well as making some firmware changes that made using adapters more reliable.

The key point to consider when using an SxS adapter and SD cards is that the media, the SD cards, are consumer media. They are produced in vast quantities and the quality can be quite variable. They are not made to the same standards as SxS cards. So I choose to shoot on SxS whenever possible and I’ve never had a single failure or unexplained footage loss. BUT I do carry a couple of adapters and some SD cards in my camera kit for emergencies. You never know when you might run out of media or find yourself in a situation where you have to hand over your media to a third party at the end of a shoot. SDHC cards are cheap and readily available; you can buy an SDHC card just about anywhere. I’d rather switch to SDHC cards than try to do a panic off-load to a backup device mid-shoot: that’s a recipe for disaster!

Sonnet SDHC adapter for SxS Camera Slot

Anyway… on to the Sonnet SDHC to SxS adapter. It feels as well built as any other adapter on the market. It is mostly metal with plastic end pieces made from a nice high quality plastic. I have other adapters that use a very brittle plastic and these can break quite easily, but this one appears to be well made. The SDHC card slides into a spring loaded slot in the end of the adapter, making a reassuringly positive sounding click when it’s latched in place. Once inserted, the SDHC card is slightly recessed into the adapter. This is good as it helps prevent the SDHC card from being released from the adapter as you put the adapter into the camera. It means that as you push the adapter into the camera you are pushing on the end of the adapter and not on the SDHC card, as with some other adapters I have used. To remove the SDHC card you simply push it quite firmly further into the adapter until you hear another click, and it then pops out far enough to be pulled out. This is certainly one of the better made adapters that I have come across.

To test the adapter I used some Transcend class 6 SDHC cards as well as some Integral Ultima Pro class 10 SDHC cards. I used the adapter in my PMW-F3 with firmware version 1.10, as some users have reported problems with other adapters and this firmware revision. I was able to completely fill the cards shooting with S&Q motion at 50fps or 60fps, using long and short clips with lots of motion. This is, I believe, the toughest test for these adapters as the recording bit rate is close to 70Mb/s. I had no issues at all with either type of SDHC card and there was very little delay between finishing a recording and being able to start the next, a good indicator of the cards’ high performance. I also tested recording very long clips to ensure that there would be no issues when the camera breaks the recording into 4GB chunks. Again, no problem.

So if you are going to use SDHC cards and an SxS adapter I would suggest you consider the Sonnet SxS adapter. It’s certainly cheaper than the Sony adapter. Sonnet are a large business with a wide range of products and a global distributor and dealer network, so you should have no problem finding a local supplier.

The 8 bit or 10 bit debate.

Over the years there have been many, often heated debates over the differences between 8 bit and 10 bit codecs. This is my take on the situation, from the acquisition point of view.

The first thing to consider is that a 10 bit codec requires a 30% higher bitrate to achieve the same compression ratio as the equivalent 8 bit codec. So recording 10 bit needs bigger files for the same quality. The EBU recently evaluated several different 8 bit and 10 bit acquisition codecs and their conclusion was that for acquisition there was little to be gained by using any of the commonly available 10 bit codecs over 8 bit because of the data overheads.

My experience in post production has been that what limits what you can do with your footage, more than anything else, is noise. If you have a noisy image and you start to push and pull it, the noise tends to limit what you can get away with. If you take two recordings, one at a nominal 100Mb/s and another at say 50Mb/s, you will be able to do more with the 100Mb/s material because there will be less noise. Encoding and compressing material introduces noise, often in the form of mosquito noise as well as general image blockiness. The more highly compressed the image, the more noise and the more blockiness. It’s this noise and blockiness that will limit what you can do with your footage in post production, not whether it is 10 bit or 8 bit. If you have a 100Mb/s 10 bit compressed HD recording and a comparable 100Mb/s 8 bit recording, you will be able to do more with the 8 bit recording because it will in effect be 30% less compressed, which gives a reduction in noise.

Now if you have a 100Mb/s 8 bit recording and a 130Mb/s 10 bit recording things are more evenly matched, and possibly the 10 bit recording, if it comes from a very clean, noise free source, will have a very small edge. But in reality all cameras produce some noise and it’s likely to be the camera noise that limits what you can do with the images, so the 10 bit codec has little advantage for acquisition, if any.

I often hear people complaining about the codec they are using, citing banding across gradients such as white walls or the sky. Very often this is nothing to do with the codec; very often it is being caused by the display they are using. Computers seem to be the worst culprits. Often you are taking an 8 bit YUV codec, crudely converting that to 8 bit RGB and then further converting it to 24 bit VGA or DVI, which then gets converted back down to 16 bit by the monitor. It’s very often all these conversions between YUV and RGB that cause banding on the monitor, not the fact that you shot at 8 bit.
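Here is a small sketch of the kind of code value merging that happens in these conversion chains. It only models a single full-range to legal-range scaling step and back, which is just one of the conversions a computer display chain may perform, but the principle is the same for the YUV/RGB steps:

```python
import numpy as np

# A smooth 8 bit ramp with every code value present.
gradient = np.arange(0, 256, dtype=np.uint8)

# Scale full range (0-255) into legal video range (16-235) and back again,
# rounding to 8 bit each time, as a display chain might.
legal = np.round(16 + gradient / 255.0 * 219).astype(np.uint8)
back  = np.round((legal - 16) / 219.0 * 255).astype(np.uint8)

print("distinct levels before    :", len(np.unique(gradient)))  # 256
print("distinct levels after     :", len(np.unique(legal)))     # ~220, adjacent codes have merged
print("after converting back     :", len(np.unique(back)))      # still ~220, the lost shades never return
```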

There is certainly an advantage to be had by using 10 bit in post production for any renders, grading or effects. Once in the edit suite you can afford to use larger codecs running at higher bit rates. ProRes HQ or DNxHD at 185Mb/s or 220Mb/s are good choices, but these often wouldn’t be practical as shooting codecs, eating through memory cards at over 2GB per minute. It should also be remembered that these are “I” frame only codecs, so they are not as efficient as long GoP codecs. From my point of view I believe that to get something equivalent to 8 bit MPEG-2 at 50Mb/s you would need a 10 bit I frame codec running at over 160Mb/s. How do I work that out? Well, if we consider that long GoP MPEG-2 is 2.5x more efficient than I frame only then we get to 125Mb/s (50 x 2.5). Next we add the required 30% overhead for 10 bit (125 x 1.3), which gives 162.5Mb/s. This assumes the minimum long GoP efficiency of x2.5; very often the long GoP advantage is closer to x3.
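For what it’s worth, here is that sum written out. The x2.5 long GoP factor and the 30% 10 bit overhead are the same assumptions used in the paragraph above:

```python
# Back-of-envelope: what 10 bit I-frame bitrate is roughly equivalent to
# 8 bit long GoP MPEG-2 at 50Mb/s, using the assumptions stated above.

base_8bit_long_gop  = 50     # Mb/s, 8 bit long GoP MPEG-2
long_gop_efficiency = 2.5    # I-frame-only needs ~2.5x the bitrate for similar quality
ten_bit_overhead    = 1.3    # 10 bit needs ~30% more bitrate than 8 bit

equivalent_10bit_iframe = base_8bit_long_gop * long_gop_efficiency * ten_bit_overhead
print(f"~{equivalent_10bit_iframe:.1f} Mb/s")   # 162.5 Mb/s
```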

So I hope you can see that 8 bit still makes sense for acquisition. In the future as cameras get less noisy, storage gets cheaper and codecs get better the situation will change. Also if you are studio based and can record uncompressed 10 bit then why not? Do though consider how you are going to store your media in the long term and consider the overheads needed to throw large files over networks or even the extra time it takes to copy big files compared to small files.