RAID Setup with DAS
Here's my predicament. I just purchased/rebuilt a fairly high-end laptop which will allow me to work on projects while out and about. Alongside this is a high-end, self-built, rack-mounted production computer.
So, here's the goal:
I want to build a Direct Attached Storage system (probably rack-mounted) that can be moved between the two computers and also serve as my main production storage while working on projects. The DAS will likely use port multipliers in some fashion to take 4 drives through one eSATA cable, connecting to an eSATA port on the laptop or production machine. Simple enough, eh?
It used to be like this for me:
After I worked on an A/V project, I would label and store all raw media (DV tapes, DATs, whatever), output the master to a few tapes (archive and backup), and then delete all of the imported AVI footage and just save to DVD the Premiere project files, AE project files, etc. If I ever needed to re-edit, I would have to re-import all of the footage again (talk about downtime :) ).
So, my thoroughly thought out plan is this:
Have 2 HDDs per major project (they're cheap, have huge storage potential, and will not suffer from degradation as much as media). One will be on-site and labeled; the second will be off-site. They will be archived and can hold ALL data (especially since I'm going tapeless here real soon) without the need for media. Need another copy of a DVD? Open up the file and re-burn. Re-edit? No issues -- all the imported and cached data is still there.
So, I am thinking of building a DAS w/ 4 drive slots. I'm thinking of setting up RAID 1+0 across the 4 drives. Here's the question. How can I build this w/o needing 4 drives per project? I just want 2 drives, one main and one off-site backup. But, I also would like the performance of RAID 1+0 over straight RAID 1.
The only permanent fixture would be the 4-drive DAS. I would like to swap out the two project drives every time I work on a different project. Can I do this? Can I swap out just 2 of the 4 drives for a new project and have the 2 "striping" drives rebuild to the 2 "mirror" drives for each project?
Am I nutty? :)
Perhaps a better explanation of how RAID 1+0 works is in order? I am semi-new to this and am just looking for some expertise and experience.
Any other thoughts or suggestions on this, good or bad, are greatly appreciated.
Don't both Sonnet and CalDigit sell 2-port Slot34 eSATA cards? Plug in two 4-bay chassis, and you have what you want. What is the problem?
Perhaps I was not clear enough. The issue isn't with the input into the laptop, but rather the 4-bay chassis. I'm trying to figure out if a RAID 1+0 setup would allow me to remove 2 of the 4 disks and still be able to retrieve usable data from them.
As far as I understand, RAID 1+0 is a mirror, then a stripe across those mirrors. Thus, in a real-world application, can 2 drives (project drives, one main and one mirror of the main, as in a RAID 1 setup) be removed from a RAID 1+0 setup and work independently, without the other 2 of the 4 drives, which I assume are striping drives?
Again, this is mainly a question of real world application of RAID 1+0.
I am trying to avoid having to store 3-4 HDDs per project (the minimum for RAID 1+0), as I would have to with, say, a RAID 5 setup, and still be able to plug in the drives and retrieve data.
I basically just want a RAID 1 setup with the performance of RAID 1+0. If I'm still not clear, please let me know.
Dustin, I can tell that you really want a RAID 0+1 (or "1+0" -- same thing) drive setup. Basically, you can't have the head-arm swing speed and the RAID 1 protection at the same time on too few drives -- two. For example, a friend of a friend buys pairs of large-capacity hard drives, partitions each one just before the halfway point, then uses them as a RAID 5 (pretty similar to RAID 0+1) to make a server. Others partition their pairs of drives two thirds of the way through in order to have three equal work partitions in total -- a similar idea. But do you know what? These are server storage setups for slow jobs like bank archiving, other database work, or maybe audio production and storage. They're really trading off speed they don't need. If you were to build a storage system that had barely more than enough speed to do the job, that itself would contribute to production or storage mistakes. Got the idea? There are some extra design items you can reach for, like a controller with an extra dose of memory and a RAID battery in case the drives take a hiccup or power glitch that's a bit bigger than normal, but I see no real shortcuts. If you want video-style speeds with RAID 1-style protection, you'll need a regular array. Permanently storing projects on removed hard drives is mostly economical and practical these days. If you remove two (caught-up secondary) drives from a 4-drive array in order to drop in blank ones and archive the safety drives, you can make the system work resourcefully some of the time, but not all of the time. There will be times when you've guessed wrong, the secondary drives won't be caught up, and you'll accidentally snip off the end of a project. Also, rebuilding (regenerating) part of a RAID, while stable and routine, is slow. Make unloading for storage a discrete (that is, visible and regular) part of the operation.
If you can find a way to have your drive enclosure do its job on a regular but decent controller with the laptop, then switch to a controller with maintenance utilities built right into it when the enclosure moves to the non-portable computer, you'll be using one of my pet methods.
There is a difference between RAID 0+1 and RAID 1+0. Wikipedia has a decent write up about RAID levels that is probably worth looking at: http://en.wikipedia.org/wiki/RAID Also good info on the differences between nested RAID levels: http://en.wikipedia.org/wiki/Nested_RAID_levels
If you want to build a RAID 1+0 (also called RAID 10), Dustin, it sounds like you've got a pretty good handle on how it technically works. You start with two drives and mirror them (RAID 1). Then, you get another two drives and mirror them. Then, you take those two mirrored sets and stripe them together (RAID 0). And, you're technically correct that you could remove one drive from each of those mirrored sets and still have all your data (you just can't lose both drives from either of the mirrored sets). Performance of a RAID 10 is better than RAID 1 but still comes with a pretty high level of protection.
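To make the mirror-then-stripe layout concrete, here's a toy Python sketch of the idea (purely hypothetical code for illustration -- real controllers stripe fixed-size blocks in hardware, and these chunk sizes and function names are made up):

```python
STRIPE = 4  # bytes per stripe chunk (tiny, purely for illustration)

def raid10_write(data):
    """Alternate chunks of data between two sets (the RAID 0 stripe),
    then store each set twice (the RAID 1 mirror)."""
    pair_a, pair_b = [], []
    for i, start in enumerate(range(0, len(data), STRIPE)):
        chunk = data[start:start + STRIPE]
        (pair_a if i % 2 == 0 else pair_b).append(chunk)
    # Four "drives": drives 0/1 mirror pair A, drives 2/3 mirror pair B.
    return [list(pair_a), list(pair_a), list(pair_b), list(pair_b)]

def raid10_read(drives):
    """Reassemble data; tolerates one failed (None) drive per mirrored pair."""
    pair_a = drives[0] if drives[0] is not None else drives[1]
    pair_b = drives[2] if drives[2] is not None else drives[3]
    out = b""
    for a, b in zip(pair_a, pair_b + [b""]):
        out += a + b
    return out

# Lose one drive from EACH mirrored pair -- the data still survives:
drives = raid10_write(b"ABCDEFGHIJKL")
drives[0] = None
drives[3] = None
assert raid10_read(drives) == b"ABCDEFGHIJKL"
```

Note that if both drives of the same pair fail (say, drives 0 and 1), half the stripes are gone and nothing is recoverable -- which is exactly why pulling two drives for the shelf only works if they come from different pairs.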
However, I'd recommend against this setup for a few reasons.
First off, RAID 10 is not the best choice for video work. You'll find a surprising number of people willing to take on a battle of RAID 5 vs. RAID 10, but in the end, I believe RAID 10 tends to work best for small files and in write-intensive situations (specifically databases). With video, you're dealing with large files and care more about read speed than write speed. For my money, RAID 5 is really a better choice.
Second, hard drives on shelves make poor long-term storage devices. They fail unpredictably when sitting on the shelf. You need to spin them up from time to time just to make sure they're happy and keep all the parts moving, which is easy to forget to do. All hard drives eventually die, and sitting on a shelf, you'll have no warning as to when it's about to happen.
Third, archiving to a RAID 0 is a doubly-bad long-term storage choice. If one hard drive is prone to fail, archiving to a two-disk RAID 0 is twice as prone to failure (because if you lose either drive out of the set, all your data is lost). If I'm following your plan right, you'll be removing two disks (one from each mirrored set) to save long term, which essentially gives you a RAID 0 where you need both those disks to stick around and work forever.
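The "twice as prone to failure" point can be sketched with some back-of-envelope arithmetic (the 5% figure below is an assumed number for illustration, not a real drive statistic):

```python
# If each archived drive independently has probability p of dying on the
# shelf, a two-disk RAID 0 archive is lost if EITHER drive dies, while a
# RAID 1 pair is lost only if BOTH die.
p = 0.05                        # assumed per-drive failure chance (illustrative)
single_loss = p                 # 0.05: one plain archive drive
raid0_loss = 1 - (1 - p) ** 2   # ~0.0975: nearly double the single-drive risk
raid1_loss = p ** 2             # ~0.0025: far safer for shelf storage
```

So the two pulled stripe halves behave like the RAID 0 row here: both must survive, which roughly doubles the exposure instead of reducing it.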
Fourth, constantly matching drives and reformatting a new RAID set will be a challenge over time. Ideally, all the drives in your RAID 10 set should match -- so what happens in a couple years when you can't find the same drives easily anymore or want to move up to bigger drives? Not only will keeping track be a headache, but you'll probably run into a few times where you just need to start over with a new RAID 10 to match up drives.
I know a few people who have chased this dream -- editing a project on a hard drive, sticking it on the shelf and restoring the project instantly when changes come down in the future. In fact, I know at least one person who swears by it and I believe uses a series of LaCie external drives (rather than bare drives in a RAID 10 like you're suggesting). But, I'm not convinced it's a good long-term model.
Instead, I'd recommend you develop two systems -- one to serve your editing needs, and one to serve your archiving needs. Editing and archiving are such different tasks with competing priorities. It sounds like you're trying to get everything at once -- both the safety and flexibility of a long-term archive right along with the speed and performance of an edit system -- in the same hard drives. It's a nice dream, but I think you'll be happier (and actually spend less money long term) by maintaining two separate systems.
For your editing system, I'd recommend a good, hardware-based RAID 5. Lots of vendors on the COW sell them and even offer discounts. My current favorite is the G-Speed ES series from G-Technology. http://www.g-technology.com/products/g-speed-es.cfm
For an archive system, I'm currently a big fan of the DroboPro. http://www.drobo.com/products/drobopro/ Basically, if you put some green (slow) 2TB hard drives in that, you can grow it over time, and keep it on with e-mail notifications so you can find out if a drive is failing. It's slow, but safe, reliable and flexible.
I'd recommend editing on your fast RAID 5 device, and when you're done with a project, take the time to move the files you'd need to work on the project again off to your archive device and delete those files from the RAID 5. With some software (like AE) you can pack up and archive only the finalized compositions you actually used. With others, you'll need to manually move files and folders. This takes time, but I've found gives me a much more usable archive.
In the future, you can do some things (like burn another DVD of a project) just by connecting to your archive device -- no need to recopy the files. However, if you need to re-edit a project, you can copy all its files back to your RAID 5 with less time than recapturing from tape. (Sad as it is to say, though, tape is still an extremely cost-effective and fairly long-term storage medium. I would trust many tapes longer than almost any hard drive. Depending on what you've got and how much money you have to spend, tape archiving shouldn't be overlooked.)
I doubt this was the good explanation of RAID 1+0 you were looking for, but hopefully it'll get you thinking about some different options. Good luck and please post back if questions come up!
Fred and Dave,
Just wanted to start off by saying thanks for both of your inputs. It really, really is appreciated.
I think you both are understanding where I am coming from. Dave, you really hit it on the nose. Seriously. I've had the hardest time explaining my ideas to people about this, but I think you hit it.
I didn't fully grasp the actual specifics of RAID 10. I was thinking that 2 of the drives would be "whole" drives, not stripes -- so that I wouldn't need to archive them together and wouldn't need both of them to rebuild (double fault exposure).
I think the idea of an editing setup and an archive setup is really, really perfect. I'm coming to the realization that the time needed just to rebuild a RAID every time I want to re-edit a project would be far, far too cumbersome -- much more than just moving the files back to an editing drive. The only way I could do what I want would be to go single drive or RAID 1, but the speed (unless I went SSD, which is WAY TOO EXPENSIVE right now) would be a huge problem.
Doing everything this way would allow me to build a very, very fast editing setup (going as large as 6-8 drives, or more) in a RAID 5 or 6 configuration, while still having the archive capabilities of modern storage space.
As far as HDDs for archiving, I'm not the biggest fan either. However, investing in a (IMO) dying LTO system is far too expensive an initial proposition, with real limitations of usable space (unless I went LTO-5, which is over $100 per tape at this point -- too new).
Here's the good news. I'm planning, if nothing else (aka, if I don't use a NAS/SAN setup), to RAID 1 the archive drives. Thus, there is a little fault protection against HDDs going bad. Plus, all projects are always mastered out to some form of media, whether it be DV or whatever, so at least some type of master exists if the worst happens. Don't get me wrong, I would love to always keep the raw footage. However, if all hell breaks loose, at least I'd have the masters.
Here's a last quick question for you. If I was to say, build something on my own, what are your thoughts on SATA port multipliers v. a multilane bridge v. individual eSATA cables? Is there a tremendous bottleneck even with a really smart controller?
Hey Dustin, glad you're thinking about some other options, and happy to help if I can.
I'm not a serious DIY SATA guy, honestly. I did a few on my own a few years ago and can give you some basics, but you'll find a lot more people with really solid opinions and experience on which controller cards and enclosures they like for which purposes.
Here's the SATA DIY technology breakdown as I understand it:
Building a SATA RAID with a controller card in your computer and individual cables to each hard drive generally gives you full speed access to each drive in the RAID. Best performance with a good controller card, fine for video. Cabling can be messy and only so many SATA connections fit on a card.
A SATA Multilane or Infiniband is basically just a cleaner way of cabling things but still gives you full-speed access to each drive in the RAID. One SATA Multilane or Infiniband cable usually connects four drives at full speed. Again, you'll need a good controller card (with SATA Multilane connections), and it's fine for video.
Finally, Port Multipliers give you roughly the convenience of a Multilane connection, but slow you down in order to pull it off. A Port Multiplier allows you to split the bandwidth of one 6/3/1.5Gb/s SATA connection, usually connecting four or five drives over a single SATA or eSATA cable. That means each drive isn't necessarily connected at full speed, but you get the capability to connect multiple drives over a single, cheaper cable. A great controller card here won't necessarily make a difference -- the bottleneck is the connection itself. I believe they're used most often where large amounts of storage are needed and speed isn't necessarily the priority (not as much for video editing).
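Here's a rough bandwidth sketch of why the port multiplier itself becomes the bottleneck (all numbers below are assumed round figures for illustration, not measurements of any particular hardware):

```python
# One ~3 Gb/s SATA link carries roughly 300 MB/s of payload after encoding
# overhead. A port multiplier shares that single link among all its drives;
# multilane gives every drive its own link.
link_mb_s = 300    # assumed usable throughput of one SATA link
drive_mb_s = 120   # assumed sustained throughput of one drive
drives = 4

per_drive_pm = min(drive_mb_s, link_mb_s / drives)  # port multiplier: shared link
per_drive_ml = min(drive_mb_s, link_mb_s)           # multilane: dedicated links

array_pm = per_drive_pm * drives   # total capped at the one link's 300 MB/s
array_ml = per_drive_ml * drives   # total capped only by the drives themselves
```

With these assumed numbers, each drive behind the multiplier gets about 75 MB/s instead of its full 120 MB/s -- and no controller card can buy that bandwidth back, since the single cable is the limit.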
So, if I were building a serious DIY video editing RAID on my own, I'd go with either individual connections or Multilane connections with a really nice controller card that gave me a HARDWARE RAID-5 (or 6 if you want). A couple years ago, I used one that had two Multilane connections on the card, each breaking out to four individual SATA connectors (so the card could RAID up to eight drives). Just don't believe people who offer a cheaper, software-based RAID-5 solution. It's all about a good controller card that takes care of all the heavy-duty RAID work on its own hardware without stealing CPU cycles. Also, I have yet to see a software RAID rebuild go smoothly. Hardware RAID is key.
Also, if I were working on my own, I'd stick with 1TB SATA drives for a while. 1.5TB and 2TB drives have seen issues in the recent past, and while I'm sure people would verify they have been fixed for many models and manufacturers, I'm still a little nervous to build a high-performance DIY system with anything other than proven and reliable 1TB drives. Plus you can pick them up pretty cheap.
However, if we're going to talk true reality, I'd still say buying a good off-the-shelf RAID from one of the top companies advertising on the COW is a smart investment. You can find some decent deals right now; and if you really want your RAID to be highly available, it's nice to have a company to call for support.
No matter what you do, keeping those masters and footage on video tape is something you won't be sorry for -- a small investment for big-time peace of mind.
Good luck with everything, and be sure to post back if other things come up. I'll be interested to hear how it all goes!