Multiple questions about SAN/Archiving for small non-profit
Hello there everyone,
First - thank you for all the threads created and discussed over the years; I constantly refer to these forums whenever I need guidance on something media-related. Being the go-to video guy, and having to learn and recommend technology with no hands-on experience with any of the stuff I might buy, this community has been invaluable in helping me avoid totally destructive decisions. Hahaha.
Anyway, I work as a videographer/asset manager/IT guy at a small (10-person) non-profit. So far we have been fortunate to have funding to secure various software and hardware, so we actually have quite the one-person video setup for a company of our size and budget. I'll break down what I currently have, and then go into the next steps/questions.
The current setup is very piecemeal, dictated primarily by budget and the time I have available to learn the different systems/hardware:
2009 Mac Pro edit station with Final Cut Studio 2
2010 Mac Pro edit station with Final Cut Studio 2
2010 Mac Pro server with Final Cut Server 1.5 (running standard Leopard, not the Server edition)
G-SPEED eS 8TB RAID, configured as RAID 5 (6TB usable), as asset storage for Final Cut Server
cheapo TRENDnet unmanaged gigabit switch connecting the 3 computers (jumbo frames enabled)
Drobo with 2TB (purchased before I started; I would never get this) for archiving OLD projects from FCS (what I can fit; the thing is almost full)
raw archival done on dual-layer DVD (again, gasp), as I shoot on a Sony EX-1R and/or Nikon D7000
Performance of this system has been OK for ME. I am the primary editor, and sometimes have the other editor come in and help on the other station; the system hiccups here and there, as we are editing in the Sony XDCAM EX codec.
Moving forward, my org is getting more and more video work, and I am getting nervous about this setup for two reasons: it can't really scale in terms of storage if we add another editor, and it might be painfully slow if we do. Analyze/transcode/encode could be faster in FCS, but we don't have the budget to add machines for a Compressor farm, etc.
I am thinking that we might need some kind of expandable SAN in the future to accommodate multiple producers working on different projects and pulling assets from Final Cut Server at the same time. Or maybe I just add another G-Tech eS to get by? I have only filled 2.5TB over two years, and really only actively use about 500GB during project edits. In other words, I do shoot a lot of footage over a year, but I don't see myself using up the remaining 3.5TB until maybe the end of NEXT year.
Do I try to increase the speed of the network to the asset server through better NICs and link aggregation?
Do I solve the Drobo's archival problem by going LTO-4/5 and directly backing up the Final Cut Server disk? Or should I just drop massive hard drives into the Drobo, use that as a ~10TB archival device, and dump backups to LTO from that? Or completely remove this quasi-archive device in favor of a giant RAID direct-attached to FCS?
So to consolidate:
2 editors working simultaneously through the Final Cut Server-managed database
need to solve the potential storage scale/capacity problem before it's too late
need to sort out an archival/backup/offsite(?) strategy
Any feedback would be great, even if it's to laugh at my hack-job setup. It has worked so far, but I would rather invest some $$ up front and avoid headaches than invest $$$$$$$ and be buried. I can also run speed tests if you all would like more granular information.
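If granular numbers would help people here, a quick-and-dirty sequential throughput check with dd is one way to get them. This is just a sketch: /Volumes/GSPEED is a hypothetical mount point (adjust to wherever your G-SPEED eS actually mounts), and bs=1m is the OS X dd syntax (GNU dd on Linux wants bs=1M).

```shell
# Rough sequential write, then read, of a 1 GB test file on the RAID.
# /Volumes/GSPEED is a placeholder path; point it at your actual volume.
dd if=/dev/zero of=/Volumes/GSPEED/ddtest bs=1m count=1024
dd if=/Volumes/GSPEED/ddtest of=/dev/null bs=1m
rm /Volumes/GSPEED/ddtest
```

The reported MB/s gives a ballpark for single-client sequential performance, which is the number that matters most for long-GOP codec playback.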
There are a few things I can see that would probably benefit you. You could trunk NIC ports together to get a bit more throughput on your network, but that still won't solve the latency coming from your storage. The storage can only push as fast as the components inside it, and without an optimized storage operating system there isn't much you can do to push it harder.
A great and free option is ZFS (http://openindiana.org/ or http://www.napp-it.org/). If your machine can support and run this platform, you will be able to take advantage of ZFS's hybrid storage: ZFS combines pools of different storage media to improve read/write performance. For example, it uses RAM and SSDs as a read cache (covering roughly 80% of your transactions) so data is immediately available to send back to your editing station, and it uses copy-on-write to commit those transactions down to hard disk.
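To make the hybrid-storage idea concrete, here is a minimal sketch of how such a pool might be laid out on an OpenIndiana-style box. The pool name "tank" and the device names are placeholders (real device paths depend on your controller and OS); the point is that the read cache and write log are just extra devices attached to an ordinary pool.

```shell
# Hypothetical hybrid pool: six SATA disks for capacity,
# one SSD as L2ARC read cache, two mirrored SSDs as a write log (SLOG).
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zpool add tank cache c3t0d0               # SSD read cache (L2ARC)
zpool add tank log mirror c3t1d0 c3t2d0   # mirrored SSD log for sync writes
```

Note these commands need root and real devices; they are configuration, not something to paste blindly.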
If you combine ZFS with 10GbE NICs in your editing stations, your performance will skyrocket: sustained transfers of at least 300MB/s, or higher depending on how much RAM and SSD you have in your storage array.
As for the LTO question, it's always nice to have your data on tape, but most people are moving to some form of cloud- or disk-based backup. Tape isn't really dead, but disk-based backup has been replacing it for a while now. You may see more benefit from simply adding larger SATA drives to what you have: as long as you have a primary copy of your data and use rsync (or some other backup mechanism) to push the primary over to secondary storage, your data should be secure. The nice thing about disk-based backup is that if your primary ever failed, you could mount production directly off the backup; getting back online from tape would be much slower and more time-consuming.
We have been working with ZFS for over three years now and have seen its advantages over traditional storage. We've built and sold many solutions to video editors, have seen over 40 streams at once coming from a ZFS array, and have saturated a 10GbE network with 8 editors. The storage had to wait for the network, which has not typically been the case; with older OS platforms, disk was always the slowest medium. So if you can invest in a new storage array that can properly run ZFS (48GB of RAM or more, 8-12 120GB SSDs, and SATA drives to meet your capacity needs and growth), I would recommend it. You would not only meet your production needs today, but also have room to expand, with at least 5-7 years of growth protection.
That's my $.02
[Aaron Vaughn] "We have been working with ZFS for over 3 years now, and have seen the advantages of ZFS over traditional storage. We've built and sold many solutions to video editors and have seen over 40 streams at once coming from a ZFS array, and have saturated a 10gbe network with 8 editors."
Curious: are you sharing ZFS via NFS or SMB? Or something else?
Thank you so much for your response, Aaron. It looks like an incredibly powerful system can be built on that technology.
Something that might be important that I forgot to mention: I am also using Final Cut Server as an edit-in-place device. So not only is it storing and cataloging my assets, I am also working directly from the G-SPEED eS as I edit.
Andrew - ZFS can be shared over all the common protocols: CIFS/SMB, NFS, AFP, iSCSI, FC, and (once standardized) FCoE, all from the same box, without additional licensing (depending on the vendor; Nexenta charges for FC and HA, as do its resellers). For video we typically recommend 10GbE in order to sustain the transfer rates and stream counts.
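For the file protocols, sharing in ZFS is just a property set on the dataset itself rather than a separate daemon config. A minimal sketch, assuming an illumos-based box and a placeholder dataset name "tank/media":

```shell
# Export a ZFS dataset over NFS and SMB/CIFS by flipping dataset properties.
zfs set sharenfs=on tank/media
zfs set sharesmb=on tank/media

# Block protocols (iSCSI/FC) are served from zvols instead:
zfs create -V 2T tank/editvol   # a 2TB block volume to export as a LUN
```

AFP is the exception: it needs a separate service (e.g. netatalk) on top, and the iSCSI/FC target setup varies by platform, so treat these lines as the shape of the idea rather than a complete config.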
James, I'd have to ask one of our video customers for specifics around video editing software (I'm more of a storage geek). Shoot an email over to firstname.lastname@example.org, and my CTO Jason would probably know more on that front.
With our version of ZFS (ArcOS), I know that my CTO tuned our shArc product specifically for video, so it has no issues with reads/writes and storing data to disk.
When editing, the data resides on SSD with ~1ms latency. You'll pick up a bit of latency depending on the client/server/switch interconnect, but given the way ZFS's L2ARC technology controls datasets and handles transactions, the storage array itself (ours, or anyone's running current ZFS code) adds very little.
Why isn't ARC Storage going to the NAB show, if you are in the professional video business? I just checked http://www.nabshow.com, and I don't see your booth listed.
We've been planning for NAB and are still working on getting a booth lined up for the show. We are running on a limited budget, and a proper presence means a sizeable booth and a nice demo unit. If we can't get those finalized, we probably won't have a booth, but we'll still attend the show.
To answer your LTO question, I would say no. I only recommend LTO to clients that are putting away 20+ TB a year; below that, I think hard drives are the better option price-wise. Just make sure you have multiple copies in multiple locations.
Eric Hansen - http://www.erichansen.tv