Any suggestions on SAN set-ups post NAB?
Hi Gang, I posted this in the FCP forum but they suggested I try here.
This is for those smarter than me when it comes to hardware which is probably all of you.
I have had my face in the computer for so long that I have fallen behind on all things technical.
Well I pulled my face away long enough to realize that my company has fallen into a major content bottleneck. Things were fine when it was just me trying to access footage, but now I have 3 people trying to use the same footage and it is starting to get ugly. And it will be twice as bad next month when we double our 4,000 clips of B-roll to 8,000, with 5 people trying to access the same material. So I have come to the pasture for help.
I did do a day and a half of research on my own before I put my problems on you all, but all that did was confuse me more. So I humbly ask for people's suggestions on the best system/workflow I should incorporate. FC Server looks interesting and like it could help, but I want to make sure it is a good, viable option before I head down that path. And if it is a good option, what kind of system should I set it up on? Will I be able to store all content in one central location and have individual edit stations access it as needed? Any help is greatly appreciated, and I understand this won't be cheap, but if it helps me save money and time in the future then it will be worth it.
Here is my current workflow.
All content is shot on P2 cards and transferred into FCP with the DVCPROHD 720p codec. We capture all the footage onto FireWire drives and then clone the data to other FireWire drives. This has been a quick and cheap fix, but annoying because of all the darn FireWire drives I have, and now with over 4,000 clips and 6 terabytes of source, render, and export files it has started to become a burden. I know, a horrible workflow, but we didn't expect this little thing to grow so fast so we didn't plan ahead. But I can catch it now before it gets too ugly.
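For planning purposes, those numbers work out roughly like this (a back-of-the-envelope sketch; only the clip count and the 6 TB figure come from the post, the rest is simple arithmetic):

```python
# Back-of-the-envelope storage projection. The 4,000 clips and 6 TB
# (source + render + export files) come from the post; everything else
# is derived, and assumes storage scales linearly with clip count.
current_clips = 4000
current_tb = 6.0

avg_gb_per_clip = current_tb * 1000 / current_clips  # average footprint per clip
projected_clips = 8000
projected_tb = projected_clips * avg_gb_per_clip / 1000

print(f"avg per clip: {avg_gb_per_clip:.1f} GB")  # 1.5 GB
print(f"projected:    {projected_tb:.0f} TB")     # 12 TB
```

So doubling the library means budgeting for roughly 12 TB before any headroom, which is worth knowing before pricing shared storage.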
We also never leave the digital realm so I don't have to worry about tape output or changing the native codec.
We just edit the short segments, compress them into DIVX files and then send them off via FTP.
So once again I thank you for your knowledge and any advice you wish to bestow upon me.
We experienced something very similar when we landed a client whose workload grew faster than we ever imagined. I researched various shared solutions off and on for nearly 6 weeks.
We explored iSCSI, ATA over Ethernet (AoE), NAS-based solutions, and traditional SANs. We talked to companies including EditShare, Ciprico, Tiger Technology, Aberdeen, Inc., Winchester Systems, Celeros, Digilant, Studio Network Solutions, 2 Degrees Frost, Apace, and several others I've forgotten.
I was totally confused by just about every company's description of how the systems worked, and found most of their solutions were either very complicated or didn't really meet our need of being able to share a common pool of media among several edit workstations.
The only two that made sense to me were EditShare and Apace's vStor. We also found Tiger Technology's software solution coupled with 3rd-party hardware (either NAS- or SAN-based) to be appealing. Apace and Tiger were by far the most service-oriented of all the companies we spoke to. A few of the companies took days or even weeks to respond to initial inquiries...which left me wondering, "how long would I have to wait if I had a technical issue with ALL my footage on their system??"
We didn't like the way SANs forced you to work, which, by my understanding, means setting up separate volumes for each client to edit from. This sort of defeated the purpose of what we were trying to achieve, which was the ability for multiple editors to share content from one pool of data. Plus, all the SAN solutions we explored were just way too expensive. They would've required us to buy new Fibre Channel HBAs for each PC, a new Fibre Channel switch, and new Fibre Channel cabling, not to mention the expense of paying someone to install and run all this.
We decided the only way we could afford it was to use our existing gigabit Ethernet network and go the NAS route. This offered several advantages. We didn't have to buy the file management software that all the SAN solutions required (unless we configured a system using Tiger Technology's MetaLan Server/Client software, which manages and helps speed throughput on do-it-yourself NAS builds).
In the end it came down to EditShare and Apace vStor. Their prices were similar, with vStor being a little lower for entry level systems, but about equal as you moved up in storage size and speed.
After that the decision came down to service. Apace and their vendor IEEE, Inc. answered every question we threw at them, usually within hours. They spent hours on the phone with us in conference calls and online demos. They guaranteed their product in writing and told us we could send it back for a full refund after 30 days if it didn't work the way they advertised...or if we had problems with our edit systems (VelocityQ).
EditShare took days to return phone calls, never returned emails, and when it came down to the final decision, couldn't give us any guarantee or assurance that the system would work properly with the VelocityQ, and had no answer for my query about what our options would be if we bought it and it didn't work correctly.
I'm sure they have a great product and good people, but we didn't feel much love from them during all this.
So we chose the vStor. We've had it up and running for a little over 3 weeks now and it works exactly like they advertise. We're editing among 4 edit stations (typically only 2 or 3 are accessing video/audio simultaneously) and getting the exact same performance we were getting with direct attached U320, 15k, 8 drive SCSI arrays. We opted for their lowest end system because we just do SD work. But the system we have tests out consistently at about 80MB/sec sustained for reads/writes on each system...that's even if there are a couple of edit stations accessing it at the same time.
We get this performance because we have 4 GigE ports coming off the vStor into a managed switch, and each of the ports is isolated from the others and assigned a specific amount of bandwidth (200MB/sec I think). We paid an IT company to install the new switch ($800) and help us setup the vStor, although with Apace's technical support we could've probably done it ourselves (Apace spent hours with us on the phone using an online meeting application that allowed them to control all our edit PC's and set them up).
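The per-station numbers above pass a rough sanity check (my arithmetic, not Apace's spec; the overhead factor and per-stream rate are assumptions): gigabit Ethernet tops out at 125 MB/s raw per port, so with each edit station effectively getting its own port, ~80 MB/s sustained is well within what one link can carry.

```python
# Rough link-budget check for a NAS serving editors over gigabit Ethernet.
# 1 Gb/s = 1000 Mb/s; divide by 8 for bytes, then knock off ~12% for
# TCP/IP and protocol overhead (an assumed figure, not a measurement).
gige_raw_MBps = 1000 / 8             # 125 MB/s theoretical per port
usable_MBps = gige_raw_MBps * 0.88   # ~110 MB/s after overhead (assumed)

# DVCPROHD 720p is a ~100 Mb/s codec, i.e. roughly 14 MB/s per stream.
dvcprohd_MBps = 14
streams_per_port = usable_MBps // dvcprohd_MBps

print(f"usable per port: {usable_MBps:.0f} MB/s")
print(f"DVCPROHD streams per port: {streams_per_port:.0f}")
```

The point is simply that one dedicated GigE port per station leaves plenty of margin over an 80 MB/s sustained workload, which matches the experience described.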
After we experienced some initial playback problems, they helped us trace it to bad cabling, which fixed it lickety split. The ONLY drawback so far is the unit (4TB) is REALLY LOUD. I'm talking jet-on-the-tarmac levels. We have it in a very large, well-ventilated closet with the rest of our IT stuff (switches, phone system, other network storage devices), and even with the doors closed it's still audible enough to be annoying. I will say it stays cool as a cucumber though, which IS important considering what we paid, which was around $9,200 for the unit (including a backup drive in case one fails, in which case the system will rebuild your data). We also had to buy the new switch ($800), a backup for the vStor (LaCie 4TB network drive at about $1,800), and we added shared storage for projects and graphic files (LaCie 2TB network drive at about $850). We also paid the IT company about $1,500 for installation, setup, and troubleshooting. So the total spent was a little over $14,000.
Apace also has units for HD stuff, so you'd probably spend more than we did regardless of which route you go. You could also work with companies like Aberdeen, Winchester, Celeros, and Digilant to build your own system. Aberdeen was especially helpful on that side and had built shared storage systems for a few broadcast production facilities. They just couldn't or wouldn't give me guarantees on data rates or video streams the way Apace would. So although a similar storage unit in terms of technical specs from Aberdeen cost about half what the Apace vStor cost, we just felt like the vStor was built to be a video editing device, and believe they've actually written Linux code into the server side of things that's designed to facilitate streaming broadcast video data.
Naturally you'd want to look into Apple LanShare and Apple Xsan systems (I think that's what they're called), which are specifically built for the Mac. Their prices look very affordable but I don't know how well they work for video editing. I'm sure there are folks on the Final Cut list that could help on that front.
Sorry for the long answer, but hopefully you'll find the info helpful. Again...I'm in NO way pitching Apace's products. As a 24-year production veteran and 12-year facility owner, I just found it refreshing to deal with a group of people that weren't full of crap, whose product does what they say it does, and whose service is almost unbelievable. The vendor we used was IEEE, Inc.
One last thing I'll mention is I wasn't very impressed by either company's website in terms of design, layout etc. when I first started doing research. So don't let that throw you when you visit them. EditShare, Studio Network Solutions, Ciprico and others' websites are much more impressive. But looks can be deceiving. Ciprico's shared solutions are outrageously priced too compared to all their competitors.
Magnetic Image, Inc.
401 E. Indiana
Evansville, IN 47711
I would also take a look at the LairdShareHD; we looked at all of them and these guys get it!! Below is a post from the designing engineer on the FCP-L... FWIW
Dan Hatch - President / CEO
ProMax Systems, Inc.
Direct: (949) 861-2710 - Fax: (949) 727-3546
http://www.promax.com | Media Workflow Solutions since 1994
LairdShareHD is not using SMART to predict
drive failure. The data that is sent back to Laird comes from its
RAID hardware. We watch for recoverable errors, which do not cause the
RAID to drop a drive but do make it wait for data. If these
recoverable errors happen too often on a given disk, Laird will get
notified and instruct the dealer to proactively ship out a drive. If
the RAID drops a drive due to failure, that of course also triggers a
notification and triggers a replacement to be shipped.
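The proactive-replacement logic described above can be sketched roughly like this (a toy illustration only; the real LairdShareHD telemetry format and thresholds are not public, so the window and limit here are made up):

```python
# Toy sketch of threshold-based predictive replacement: tally recoverable
# errors per disk and flag any disk that crosses a limit, before the RAID
# ever drops it. The threshold is an assumed value for illustration.
from collections import Counter

ERROR_THRESHOLD = 5  # recoverable errors before a disk is flagged (assumed)

def disks_to_replace(error_log, threshold=ERROR_THRESHOLD):
    """error_log is a list of disk IDs, one entry per recoverable error."""
    counts = Counter(error_log)
    return sorted(disk for disk, n in counts.items() if n >= threshold)

log = ["disk2"] * 6 + ["disk0"] * 2 + ["disk5"] * 5
print(disks_to_replace(log))  # ['disk2', 'disk5']
```

The design choice being described is that recoverable errors are a leading indicator: they stall reads without failing the array, so counting them lets a replacement drive ship before an actual drop.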
Users can also support the LairdShare locally. If a disk fails, a red
warning light will light up indicating which disk is having a problem
and the unit will beep. Customers will have the option to buy a
"cold-spare" drive to keep near the system, and the system supports
hot-swapping of disks. Still, running in degraded mode on a RAID6 is
reasonable to do while waiting for a disk since a RAID6 is comparable
to a RAID5 when it has one disk down - one more disk can still fail
without losing data. If a customer chooses not to maintain a local
cold-spare, their dealer can silence the alarm for them while they
wait for it.
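The point about degraded mode can be put in numbers: RAID6 carries two redundancy blocks per stripe, so it survives two concurrent disk failures; with one disk already down it still survives one more, which is exactly the position of a healthy RAID5. A minimal sketch of that counting argument (not a RAID implementation):

```python
# Failure tolerance by RAID level = number of redundancy (parity) blocks
# per stripe. This only encodes the counting argument from the post.
PARITY_BLOCKS = {"RAID5": 1, "RAID6": 2}

def failures_survivable(level, disks_already_failed=0):
    """How many MORE disks can fail before data is lost."""
    remaining = PARITY_BLOCKS[level] - disks_already_failed
    return max(remaining, 0)

print(failures_survivable("RAID6"))                          # 2
print(failures_survivable("RAID6", disks_already_failed=1))  # 1
print(failures_survivable("RAID5"))                          # 1, same as degraded RAID6
```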
What is perhaps more important here is the effort we have made to
avoid failure in the first place. The coolest aspect of the LairdShare
is its thermal profile. As has been mentioned elsewhere on this
thread, it is cold to the touch even under heavy load. The base server
system pulls less than 300W under load! We find thermal has the
biggest impact on RAID failure, not to mention impact on anything else
in the environment the system is deployed in.
Designer of the LairdShareHD
Senior V.P. of Operations
ProMax Systems, Inc.
The Leader in Digital Video Pre-Configured Systems Since 1994
16 Technology Drive #106 http://www.promax.com
Irvine, CA. 92618
800-977-6629 x110 949-727-3977 949-727-925
I am looking to buy a vStor from Apace also. What has been your experience with them? Does digitizing slow down your system or stutter/drop frames? Do you have a sealed closet door, or is it just that noisy from outside your closet? Is it the fans or the drives? Sorry to be so obvious, but noise could be a problem for us also, and as you know it's a big decision.
We're also looking at Terrablock 8X but wondering if its stream count will be enough....
To help with the noise we suggest rack enclosures from XRackPro to our customers. They are at http://www.xrackpro.com.