Apace vStor System
I've browsed this forum since last fall when we decided to move to shared storage. Thought I'd share our experiences with the Apace vStor system, which we bought from http://www.ieei.com.
I should point out that I'm not trying to promote Apace or IEEI's products; it's just nice to do business with companies that provide great service and a product that actually does what it's supposed to do right out of the box.
We researched a bunch of options, including EditShare, various SAN solutions, Studio Network Solutions, ISCSI appliances, AoE solutions, as well as build it yourself solutions coupled with SAN/LAN software.
We chose the Apace vStor for two reasons: price and service, both of which have been excellent from IEEI and Apace.
We've been using the system for about three weeks now and it's pretty much transparent. We have 3 VelocityQ edit systems and a Blackmagic Decklink Extreme HD with Premiere CS2/After Effects CS3.
We're using the proprietary VelocityQ .dps and .dva files on it, as well as Blackmagic .avi and QuickTime files. 99% of what we do is SD and we rarely use uncompressed, typically capturing and editing at data rates between 7 and 14MB/sec, roughly DVCPro50 quality on most stuff.
The system we have is a 4TB unit, with 3.2TB available for projects. We created a separate volume for video and audio, and they show up on the 4 edit stations as local drives. We're also using a Lacie 2TB networked storage appliance for projects, which is also working fine and tests out at about 25MB/sec on reads per edit system, which is plenty for project and graphic files.
The whole setup has been totally transparent to our editors. Performance has been exactly the same as what we got with direct attached SCSI arrays for video/audio files.
Drive performance benchmark software puts the sustained data rate on the vStor at about 80MB/sec, which is a little higher than Apace told us to expect, and it varies slightly from edit system to edit system. But our actual editing performance has somehow exceeded this benchmark, which goes to show you can't judge performance on raw data rates alone.
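If you want a rough sanity check of your own sustained read rate without dedicated benchmark software, a sketch like the one below works in a pinch. The path here is hypothetical (point it at your shared volume), and note that OS read caching can inflate the number; purpose-built tools that bypass the cache give more realistic figures for network storage.

```python
import os
import time

# Hypothetical location: substitute the mount point of your shared volume.
TEST_FILE = "throughput_test.tmp"
SIZE_MB = 32
CHUNK = 1024 * 1024  # read/write in 1MB chunks

# Create a test file of SIZE_MB megabytes of random data
# (random so filesystem compression can't skew the result).
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(os.urandom(CHUNK))

# Time a sequential read of the whole file.
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
elapsed = time.perf_counter() - start

rate = SIZE_MB / elapsed
print(f"Sequential read: {rate:.1f} MB/sec")
os.remove(TEST_FILE)
```

Run it once per edit station against the shared volume to see the kind of per-system variation mentioned above.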
As an example of the performance we're getting: I captured 4 uncompressed clips (about 20 seconds each), stacked those 4 clips directly on top of each other in a timeline, and sized each clip with a DVE move so I could see them. I then copied and pasted that 20-second segment 10 times and set the timeline to loop. I played the timeline on each edit system for 10 minutes and it never hiccupped, never stuttered, never paused...all in real-time. Pretty amazing. The drives didn't have much on them at the time, so I'm not sure I'd expect that performance on a drive that's 70% full, but it's still impressive considering we'd never build a timeline that taxing.
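The "exceeded the benchmark" claim checks out on paper. Assuming 8-bit 4:2:2 uncompressed NTSC SD at roughly 21MB/sec per stream (an assumption about the capture settings; 10-bit would be closer to 28MB/sec), four simultaneous streams need slightly more than the 80MB/sec the benchmark measured:

```python
# Back-of-envelope check, assuming ~21 MB/sec per stream for
# 8-bit 4:2:2 uncompressed NTSC SD (10-bit would be ~28 MB/sec).
STREAM_MB_SEC = 21
STREAMS = 4
BENCHMARKED_MB_SEC = 80

required = STREAM_MB_SEC * STREAMS
print(f"{STREAMS} uncompressed streams need ~{required} MB/sec")  # ~84 MB/sec

# Real-time playback of this timeline slightly exceeds what the
# synthetic benchmark measured as the sustained rate.
print(required > BENCHMARKED_MB_SEC)  # True
```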
We've been editing real projects on it among the 4 edit stations, with typically 2, and often 3, systems running simultaneously. Only once have we seen a video clip not load correctly, and that was when we were editing timelines on 3 systems and capturing a clip on the 4th. It wasn't a problem, as the clip just stuttered slightly. We just make sure that if we need to output a project to tape, the other edit systems aren't doing anything really taxing at the time.
The only difference from using direct attached storage is when we go to render a file, there is a slight pause before the render starts, but it renders fine.
Capturing works great too, and you can capture from two edit stations at the same time as long as both editors are using different directories and file names to avoid corruption or file conflicts. We haven't tried capturing from 3 at the same time, but I imagine it would work.
As stated, we're editing SD only. But while researching the product we talked with several companies who are using the vStor with Final Cut and the ProRes 422 HD codec among 2 or 3 edit systems...and their experiences have mirrored ours. They reported getting about 8-10 streams of HD video using that codec (I believe its data rate is similar to DVCPro50) across several edit stations. Plus, the more drives in your array, or the more arrays you stack, the more real-time streams you can get. We're using their lowest-end and cheapest system ($9,200), and getting the performance noted above.
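Those stream counts line up with our benchmark numbers. Dividing the measured sustained rate by a DVCPro50-class per-stream rate (~7MB/sec is my assumption, based on DVCPro50's 50Mbit/sec video payload plus audio) lands right around the 8-10 streams they reported:

```python
# Rough stream-count estimate, assuming ~7 MB/sec per stream for a
# DVCPro50-class codec (50 Mbit/sec video plus audio overhead).
SUSTAINED_MB_SEC = 80  # benchmarked sustained rate on our array
STREAM_MB_SEC = 7

streams = SUSTAINED_MB_SEC // STREAM_MB_SEC
print(f"~{streams} real-time streams")  # ~11, in line with the 8-10 reported
```

Real-world numbers come in a bit under the ideal figure because multiple stations contend for the same array, which is consistent with what those shops saw.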
Some people on other lists I belong to were interested in our experience, and I thought people on this list would be too. I can report the results so far have been very good and exactly what Apace advertises. Plus, their solutions are a little cheaper than their competitors' and their customer service has been astonishingly good.
They (IEEI and Apace) spent hours before the sale answering our questions, both by email and in phone conferences. Their responses were almost immediate. After the sale, they likewise spent several hours helping us set the system up...and again, responses were nearly immediate when we needed help.
Conversely, it took the folks at the other companies we contacted days to respond to phone calls and emails (if at all). I will say that Bernard (I think that's his name) at Tiger Technology was great in our research phase, and we came close to building a system with Aberdeen, Inc. hardware and Tiger Technology software. But in the end it was going to cost about $3500 more than what we paid for the vStor, which uses native OS (Windows, Mac, or Linux) file sharing.
Again...I'm not trying to push their products, but it's refreshing in this industry to find a company and a product that actually work the way they say they will.
Magnetic Image, Inc.
401 E. Indiana
Evansville, IN 47711