SSD as archiving medium
For a few years now I've been archiving projects and media to Blu-Ray. We used to employ DLT/LTO, but we had some issues with it and are "once bitten, twice (or maybe three times) shy" about it now. Blu-Ray has worked well enough (though we had one batch of discs early on that burned and verified just fine but are now completely unreadable), but with the spread of 4K, project sizes are getting to the point where it's less feasible to spend a whole day archiving everything twice. I know a lot of folks archive to raw SATA drives, a prospect that scares me given all the horror stories I've heard about drives that won't spin up after sitting on a shelf for a year or so. I've been looking into SSDs now that prices have dropped a bit, and the idea is really intriguing. Even though there are no moving parts, I've read that SSDs left unpowered can develop similar problems to standard hard drives after a year or more on the shelf. The idea of pulling every drive in the library and exercising it periodically seems to render the whole approach unsustainable.
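If you do end up exercising shelved drives, the dull part is re-verifying the data, and that at least scripts well. Here's a minimal sketch in Python (the function and file names are my own, not from any particular tool): write a SHA-256 manifest when you archive, then re-hash against it whenever a drive comes off the shelf.

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so large media files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def write_manifest(root, manifest="MANIFEST.sha256"):
    """Record a checksum line for every file under root."""
    root = Path(root)
    with open(root / manifest, "w") as out:
        for p in sorted(root.rglob("*")):
            if p.is_file() and p.name != manifest:
                out.write(f"{sha256_of(p)}  {p.relative_to(root)}\n")

def verify_manifest(root, manifest="MANIFEST.sha256"):
    """Re-hash every listed file; return the paths that no longer match."""
    root = Path(root)
    bad = []
    for line in (root / manifest).read_text().splitlines():
        digest, rel = line.split("  ", 1)
        p = root / rel
        if not p.is_file() or sha256_of(p) != digest:
            bad.append(rel)
    return bad
```

An empty list back from `verify_manifest` means the drive still reads clean; anything it returns is a file that silently rotted since the manifest was written.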
I'm sympathetic; we've tried DLT solutions and had a miserable time. We went to gold archival DVD for our long-term SD programming storage. Now everybody is HD and DVDs are too small unless you use tighter codecs. H.265 is coming and may make DVD-R useful again for a while for HD archival, and it will effectively stretch Blu-Ray capacity even further than it goes now. But your mechanical storage is always going to have two points of failure: the media itself might deteriorate, and access to a working device to read/write it may not last. Anybody here still have a working Iomega Jaz drive and disks? Hell, those were hard to keep functioning when they were NEW! :-)
So we have Blu-Ray in our shop now, for HD things that may have to go out as a physical dub at short notice, but we haven't used it much yet, because we're basically storing the most valuable archival material in "cloud" storage now. It's a private system owned by our agency, and we pay a monthly "rental" for the drive space we take up, which is a teensy tiny piece of a very vast system. Material that doesn't see any action past a certain time goes into "deep" storage, which is a redundant DLT/LTO-type deal, but a massive one, part of the same cloud enterprise. Deep storage rents a little cheaper but takes longer to retrieve. I tried a deep retrieval of a 30-minute HD file and it took under two hours, mostly due to relatively slow networks rather than the storage itself, so, not that bad.
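For scale, "under two hours" is roughly what the network alone would predict. A back-of-the-envelope estimate, with the bitrate and link speed as assumptions of mine rather than figures from the actual system:

```python
# Rough transfer-time estimate for a deep-storage retrieval.
# The bitrate and link speed below are assumptions, not measured figures.
program_minutes = 30
video_mbps = 50       # assumed HD mezzanine bitrate, megabits/s
link_mbps = 20        # assumed effective network throughput, megabits/s

file_megabits = program_minutes * 60 * video_mbps   # 90,000 Mb, about 11 GB
transfer_hours = file_megabits / link_mbps / 3600
print(f"~{file_megabits / 8 / 1000:.1f} GB, ~{transfer_hours:.2f} h to pull")
```

At those assumed numbers the wire time alone is about an hour and a quarter, so a sub-two-hour retrieval really is network-bound, not storage-bound.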
My opinion on this issue keeps evolving. The thrifty cheapskate in me balks at putting SSDs on the shelf at their current prices. If you're serious about data integrity, you're putting TWO of them on shelves in different facilities, and that, as you point out, adds up.
While working on some scripts for a tech show, I really got a slap in the face regarding where things are going.
Suffice it to say, in this decade we're going to have a huge leap in processing and storage technology. The impetus comes from two "big science" projects: the Human Brain Project and the Square Kilometre Array. (I recommend you look both of these up; they are fascinating projects.)
These two projects will, in a few short years, be generating more fresh data each day, by themselves, than the entire archived content of today's internet, worldwide. Right now IBM and a consortium of computer experts are working out how to capture, manipulate, transport, and store these zettabytes of data.
You and I will easily live to see this happen, and the spin-offs of these developments are going to rock our industry and our world, just as HTML and the Web as we know it were spun off from CERN's particle-physics work in Switzerland. Data storage is going to become so capacious, and thus so cheap, that nobody will ever have to worry about erasing anything in a lifetime, with redundancy and ubiquity on a level we can't really imagine today.
All this is going to happen in what they now call "the cloud", but it's going to be much more dense than any cloud: it's going to become a data Ocean, and we'll all swim in it like fish. Data won't need to be "portable" in any sense of the word we know today, because the entire planetary network of distributed storage will contain it all. What *will* be "portable" and "private" will be the access codes and encryption keys that give you "ownership" of "your" data to the exclusion of others, plus an access device you use for the interface. Your edits will happen in the data Ocean, and your editing tool will be a smartphone, or something like Google Glass.
I know, this sounds like science-fiction ganja dreaming. Today. But I've had a peek at the infrastructure being built for it, today: the multi-gigabit networks, the processors... and there is no question of "if", only "when", measured in handfuls of years.
So, in ten years, this problem of yours will, in a semantic sense, not exist.
Yeah, that sounds great, Mark. But what do we do next week, right?
SSDs, in my opinion, are going to continue to get cheaper, eventually as "throw-away" as the thumb drives we use today... but not as cheap as today's early-stage "cloud" storage already is. I think what makes the accountants happy is that cloud storage "virtualizes" the job and gives you a low monthly cost, with no hardware and facilities of your own that need amortization, maintenance, updates, and staff to oversee them. No need to try to guess what hardware to jump to next. Renting virtualized "cloud" space from Amazon or Google or some place like that is what I see as the financially prudent interim solution, until the glorious future I predicted turns into hard fact.
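A quick sanity check on that economics argument. Every price in this sketch is a placeholder assumption for illustration (roughly current-era SSD pricing and a cold-tier cloud rate), not a quote from any vendor:

```python
# Shelf SSDs vs. rented cloud storage over a retention window.
# All prices below are placeholder assumptions, not vendor quotes.
project_tb = 2.0
years = 5
copies = 2                      # two shelves in two facilities, per above
ssd_cost_per_tb = 500.0         # assumed $/TB for SSDs at current prices
cloud_cost_per_tb_month = 4.0   # assumed $/TB-month for a "deep" tier

ssd_total = project_tb * copies * ssd_cost_per_tb
cloud_total = project_tb * cloud_cost_per_tb_month * 12 * years
print(f"SSD x{copies}: ${ssd_total:.0f}   cloud, {years} yr: ${cloud_total:.0f}")
```

At these assumed prices the rented space wins by a wide margin, and that's before counting the hardware, facilities, and staff the monthly fee makes someone else's problem. The break-even obviously shifts as SSD prices fall, which is exactly the "continue to get cheaper" trajectory above.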
We live in a science fiction world, today.