SAN software for every machine? Or just host?
I'm evaluating our options as we consider rebuilding our studio's storage. If we were to go the InfiniBand or Fibre Channel route, would we need a $1k copy of MetaSAN for every single workstation, or just the server?
Also, from a little shopping, it seems you can get 8 Gbps InfiniBand for about $1,500 for an 8-port switch and $200 per PCI-E card, which works out to about $3,000 for a small studio. By comparison, both 10GbE and Fibre Channel seem to run 2-3x that amount, and Fibre Channel is rated even slower than 8 Gbps. Is there something wrong with InfiniBand that would make it less desirable?
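For reference, here is the back-of-envelope math behind that $3,000 figure, as a quick sketch (the machine count is an assumption for illustration, not something specified in the thread):

```python
# Rough parts-only cost for the quoted InfiniBand setup.
switch_cost = 1500            # 8-port InfiniBand switch, quoted price
card_cost = 200               # PCI-E HCA per machine, quoted price
machines = 7                  # e.g. server + 6 workstations (assumed)

total = switch_cost + card_cost * machines
print(f"Parts total: ${total}")   # -> Parts total: $2900
```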
What hardware are you talking about putting InfiniBand onto?
Are you talking Macs? (If yes, there is unfortunately no current InfiniBand market for the Mac.)
It does, however, exist on Windows and Linux, to my knowledge.
But I would count on $100k as the rock-bottom minimum.
Usually, once you add up all the hardware you actually need to do the job, the price of the required items climbs quickly. It's similar with Fibre Channel and other fabric-heavy environments.
The future of Ethernet is in the 40Gb and 100Gb products now coming to market.
The next 10 years, especially in the Mac world, will bring great things, I suspect.
Let us know what you're putting together.
Maybe we can help further.
(Video Networking Solutions Expert / Creative Design Workflow Consultant / Social Media Networks Consultant / Technical Video Industry Sales Consultant)
$100k would be well outside of our budget. Our studio is 6 workstations and 20 render nodes. We already have one file server, dual-bonded GigE into the switch that feeds everything. This works well enough for 3D animation projects and serves the farm fine, but we're looking for something to boost performance for the two compositing workstations. I was thinking about running a small, pocket-sized SAN between the file server and the two compositing workstations so that they wouldn't need to sync renders to and from their local RAIDs.
We work on short-form commercial work, so we generally don't have more than 3TB active at any one time; the plan is to roll out an 8-bay server in RAID 10 for 8 effective TB of storage. We have a second server that will be lower-performance, deeper storage for archived projects. The current plan is to simply run quad-bonded Ethernet to each of the compositing workstations, but we'd need another switch for all the extra cabling, and I figure if we're already going through the hassle, why not consider a SAN for the two systems that need high-performance access.
I don't see how it would end up being $100k for those needs unless I'm massively overlooking a bunch of hidden costs in setting up a SAN. Which I very well may be, hence the nature of my inquiry. :D
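A quick sketch of the capacity math implied here (the drive size is inferred from the "8 effective TB" figure, not stated in the post):

```python
# Usable capacity of the proposed 8-bay RAID 10 array.
drives = 8
drive_tb = 2                          # inferred, not stated
usable_tb = drives * drive_tb // 2    # RAID 10 mirrors halve raw capacity
print(f"RAID 10 usable: {usable_tb} TB of {drives * drive_tb} TB raw")
# -> RAID 10 usable: 8 TB of 16 TB raw
```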
Gavin, give me a second to channel Zelin here for you:
You're doing it wrong. You are massively overlooking a bunch of hidden costs. Let me spell it out for you...
The hardware is only a small portion of the actual cost of a production storage system.
You mentioned Infiniband. You know who uses that? Isilon. Do you know what they charge? Well, let me put it this way. To renew basic level support on my 20TB cluster, the cost was nearly $15k. That $15k is JUST FOR SUPPORT. Do you really need support on a system like that? YES YOU DO. Why? Well, if you have an NVRAM battery go bad (I had 3), or a motherboard failure (I had 1), YOU NEED SUPPORT.
You mentioned RAID 10. To me, that is a sign that you don't know what you're doing at all. Almost all media RAIDs are RAID 5 or RAID 6.
You mentioned "quad-bonded Ethernet to each client" -- again, you do not know what you're doing. You can't bond clients for more bandwidth like that (at least not for single transfers), but you CAN bond the server for more total connections.
You mentioned 8TB of storage, but what about bandwidth needs? I don't think 8 disks (in RAID 10?) will provide the bandwidth you need.
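As a rough way to frame that question, here is a toy ceiling calculation (the per-disk number is an assumption for illustration, not a measurement; real multi-client random access lands far below these figures):

```python
# Idealized sequential-throughput ceiling for an 8-disk RAID 10.
per_disk = 150                       # MB/s per 7,200 rpm disk (assumed)
disks = 8
stripe_members = disks // 2          # RAID 10: 4 mirrored pairs

write_ceiling = stripe_members * per_disk   # mirror copies written in parallel
read_ceiling = disks * per_disk             # reads can hit both sides of a mirror
print(f"Writes: up to ~{write_ceiling} MB/s, reads: up to ~{read_ceiling} MB/s")
# -> Writes: up to ~600 MB/s, reads: up to ~1200 MB/s
```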
If you only want to spend, say, $3k, forget shared storage. Just buy everyone a small Thunderbolt or USB 3 RAID.
If you want shared storage, you're going to have to spend at least $15k, if not $30-40k.
As Zelin would say, "You don't know what you're doing, call someone who does."
Re: RAID 10. I agree it's a weird choice. But our system builders have a new option they want to test, and since a standard RAID 5/6 would only give us more storage, I like to err on the side of conservative estimates of available capacity when laying out our needs. I'm more than happy to entertain their curiosity with a demo system to see how it performs under testing.
So what you're saying is, "Don't buy InfiniBand because the hardware is horrifically unreliable"? Good to know.
As for link aggregation not working between clients: are these people crazy? http://www.thetechrepo.com/main-articles/569-link-aggregation-aka-trunking-...
Again, we don't need editing-system levels of bandwidth. We're working as-is. I'm just trying to find a way to get a little more oomph for not too much more money. SuperShare seemed like the perfect solution, but CalDigit doesn't seem interested in pushing it anymore, so I'm waiting to hear back from their sales rep on what the state of the union is there.
Thousands of Tiger users have successfully integrated metaSAN and metaLAN in their facilities, so it can definitely be done.
While Fibre remains strong, we are increasingly seeing people who want to take advantage of the simplicity of 1GbE and 10GbE. They purchase a good Windows or Mac file server onto which they install metaLAN Server (MSRP $595 USD), and they connect clients over metaLAN (MSRP $295 USD each, or a 20-seat pack for MSRP $2,950 USD). They use either 4-port bonding or 10GbE (in this case the bonding makes sense because it feeds multiple clients -- and as David alluded to, a single process will not take advantage of bonding).
Why metaLAN? First, because it helps overcome the shortcomings of standard file-server technologies, which were designed for IT and not for video. Instead of trying to get SMB or AFP to work right in a video environment, metaLAN works straight on the TCP/IP layer and offers performance comparable to iSCSI. With metaLAN you also get the added benefit of block-level access; more stable and sustained throughput; bandwidth control; virtual volume creation; and project-based management with Avid bin locking support. I personally believe that metaLAN is the best way to turn a regular Windows or Mac server into a souped-up, cross-platform server for Mac, PC, and Linux video editing.
Certainly, depending on how technically savvy you are, you can save money by integrating your own solution. However, the amount of time and energy required to achieve adequate, reliable performance and to work out all the quirks is not predictable. In the end, you may end up with lots of frustration and a half-baked solution. David nailed it right on the head by highlighting the most common issues you could run into... Matt is also right about the lack of InfiniBand penetration in the Mac market. Even on the PC side, it is not widely used in video environments (it's mostly used in IT). Isilon is all proprietary, which is why they can afford to use InfiniBand.
For the above reasons, many resellers and end users prefer a more integrated solution that works out of the box and delivers predictable performance.
If you go Gb Ethernet, which is fine for low and moderate bit rates, then a simple NAS or file server optimized for video editing, such as the Small Tree Titanium, can do a great job for you (http://www.small-tree.com/GraniteSTOR_Shared_Storage_s/94.htm).
You can also benefit from metaLAN and metaSAN value-add in pre-configured solutions:
One such box (based on metaLAN technology), is available from Z Systems (http://shop.zsyst.com/Z-Systems-ZShare-16TB-ZShare-16TB.htm). This is a great solution for Gb Ethernet-based editing.
For Fibre Channel connectivity, Sonnet offers the Vfibre (based on metaSAN technology): http://www.sonnettech.com/news/pr2010/pr091010_fusionrx1600vfibre.html
Finally, I believe that Tiger has just taken NAS and SAN integration up another notch with the introduction of a brand-new integrated shared-storage appliance called "Tbox" (http://www.tiger-technology.com/Tbox). Instead of putting the standard metaSAN/metaLAN software into our own chassis, we decided to develop new software specifically for Tbox. We believe Tbox currently offers the very best of NAS and SAN: it is competitively priced; there is no need for software licenses; you can mix GbE, 10GbE, and FC connections on the same chassis with 16, 32, 48, or 64TB of storage; it is designed to deliver superb performance; and it requires virtually no maintenance (web UI, greatly simplified admin, automatic configuration, auto-defrag during idle times, etc.). You can direct-connect up to 16 clients to Tbox, or add your own Gb or FC switch to extend to an unlimited number of clients (of course, the performance of the storage has its limits). Tbox also provides project-based management and Avid bin locking support, which makes it perfect for video editing.
Re: RAID 10. Please fire your "system builders" who think that RAID 10 is good for media applications.
Re: InfiniBand. It's not unreliable, it's just complicated (especially when working with clusters like Isilon), and when you're dealing with complicated solutions, you want dedicated (trained and paid) support people standing behind them.
Re: Link aggregation. As far as I understand, it works like this: 2x 1Gb links give you 2Gb total, but only 1Gb per socket at a time. That means one file transfer or stream can only run at 1Gb, but two or more together can saturate 2Gb. So whether it helps depends on your application. Read this: http://blog.open-e.com/bonding-versus-mpio-explained/
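Here is a toy sketch of that per-flow behavior (illustrative only; real bonding hash policies differ in detail):

```python
# Why bonding doesn't speed up a single transfer: each flow is
# hashed onto ONE member link, so one file copy is capped at one
# link's speed, while many flows together can fill the bond.
NUM_LINKS = 2
LINK_GBPS = 1

def link_for_flow(src: str, dst: str) -> int:
    # Simplified stand-in for a layer-2 style bonding hash policy
    return hash((src, dst)) % NUM_LINKS

flows = [("workstation-1", "server"), ("workstation-2", "server")]
for src, dst in flows:
    print(f"{src} -> {dst}: link {link_for_flow(src, dst)}, max {LINK_GBPS} Gb/s")
```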
Here's what good, responsible "system builders" do: they detail their exact requirements (bandwidth per client, number of clients, amount of storage, software connectivity requirements, reliability requirements), and then they shop for RETAIL solutions that meet those needs.
Build-it-yourself stuff almost never works reliably in production environments unless it's something super simple and your requirements are super low. You will cost your company money through downtime, lack of performance, or even lost projects.
Gavin writes -
If we were to go the infiniband or fiberchannel route would we need a $1k copy of MetaSAN for every single workstation or just the server?
REPLY from Bob -
Simple answer: YES.
So, since this is a ridiculous post, let me get right to it: you are not going to build a professional shared storage system of any type for $1,500 with parts from Best Buy and CompUSA. You are wasting your time, and when you get done and it all fails, your boss will fire you. Comprende?
You are not going to run InfiniBand with MetaSAN. You will run Fibre Channel -- using ATTO FC41ES cards or similar, which all costs lots of money -- PLUS a $1,000 license of MetaSAN in every computer, PLUS a QLogic Fibre switch, plus a big, fast, expensive drive array, to accomplish what you want.
You want InfiniBand? You are NOT going to build this by yourself. PERIOD. Understand? You will buy CalDigit SuperShare (where you can use Tiger Technology MetaSAN on each client) or get the OEM version, which is Accusys ExaSAN (InfiniBand shared storage), and yes, still spend $1,000 per client seat of MetaSAN.
Also, from a little shopping, it seems you can get 8 Gbps InfiniBand for about $1,500 for an 8-port switch and $200 per PCI-E card, which works out to about $3,000 for a small studio.
REPLY from Bob -
You are on drugs. This is not going to work.
By comparison, both 10GbE and Fibre Channel seem to run 2-3x that amount, and Fibre Channel is rated even slower than 8 Gbps. Is there something wrong with InfiniBand that would make it less desirable?
REPLY - again, if you want an InfiniBand system that will actually work, you will purchase a system from CalDigit or Accusys, and you will spend money similar to what "we" charge for a Fibre system, a 10Gig system, or even a 1GbE copper Ethernet-based system. And it's not $3,000.
You will fail unless you follow what I say.
works wonderfully with MetaSAN as the management software -
while you are on the CalDigit site, look at the opening page (at least this weekend) - happy client with SuperShare
also works wonderfully with MetaSAN.
BUY ONE OF THESE, and stop this nonsense.
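To put a number on the per-seat licensing Bob describes, here is a quick sketch (the seat count is taken from Gavin's earlier description of his setup, not from any vendor quote):

```python
# MetaSAN licensing alone, at the ~$1,000/seat figure quoted in
# this thread, for 2 compositing workstations plus the file server.
seat_price = 1000
seats = 3                          # 2 clients + 1 server (assumed)
print(f"Licenses alone: ${seat_price * seats}")  # -> Licenses alone: $3000
```

In other words, the software licenses alone would already match the entire $3,000 parts budget from the original post, before any switch, HBA, or array is purchased.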
Your setup sounds a lot like ours.
We have about the same number of workstations and a few more render nodes. Plus, like you, we need fast compositing.
Here's my 2 pence. You need...
A self-contained RAID that is 30% bigger than the largest amount of storage you anticipate needing at one time. You'll probably need something with 24 drives for speed. This should be your primary storage. You want a 10Gb Ethernet port out of the back.
Something like the systems being suggested above.
A backup RAID that can be lower performance but the same size.
This is a mirror of your primary storage, also on 10Gb Ethernet. It needs to be backing up your primary server on an hourly or nightly basis.
A 10Gb Ethernet switch. Go for fibre, not copper, connections.
An Edgecore Ethernet switch with a 10Gb uplink port, so you can serve 1Gb Ethernet out to all your animation workstations.
Another Ethernet switch to go out to your render nodes.
These connections will give you about 90 MB/s of I/O (see the quick calculation after this post). Easily enough for animation.
For your faster finishing machines, install 10Gb cards and connect them directly to the 10Gb switch.
Buy a multi-changer tape backup system and backup software such as PresSTORE.
Get the whole lot: Sync, Archive, and Backup.
Back up everything from your mirror drive nightly.
Archive at the end of each job.
You will need someone to put all this together for you.
Preferably a reseller who can be at your premises within 4 hours if there is a problem.
Allow some time to get it working properly. It's a big job.
You will also need someone to do about a day a week (at least) of IT support to keep it all working healthily.
We have been down this road for some time now.
It's all about redundancy. You need a setup where, if your primary server goes down, you can get working again as soon as possible. 10Gb is good because it just works.
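The quick calculation behind the ~90 MB/s-per-client figure mentioned above (the overhead percentage is an assumed ballpark, not a measured value):

```python
# 1 Gb/s is 125 MB/s on the wire; protocol overhead (TCP/IP plus
# SMB/AFP/NFS) typically eats a chunk of that in practice.
line_rate = 1e9 / 8 / 1e6       # 125.0 MB/s theoretical
overhead = 0.25                 # assumed 25% protocol overhead
print(f"Usable: ~{line_rate * (1 - overhead):.0f} MB/s")  # -> Usable: ~94 MB/s
```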
Just a follow-up: I ignored everyone's advice, bought two 20Gb InfiniBand cards for $30, a $30 cable, and am getting about 300-400 MB/s with IPoIB using the OpenFabrics drivers.
Problem solved. So far, all of my performance tests in Nuke and Windows seem to be working exactly as expected.
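For anyone wanting to reproduce that kind of number, a minimal TCP throughput probe along these lines would do it. The host, port, and listener command are placeholders (assumed for illustration), not Gavin's actual test; run any TCP sink on the far end first, e.g. "nc -l 5001 > /dev/null".

```python
# Push zeros at a listening peer over the IPoIB interface for a few
# seconds and report the average send rate.
import socket
import time

HOST, PORT = "10.0.10.1", 5001      # IPoIB address of the peer (assumed)
CHUNK = b"\x00" * (1 << 20)         # 1 MiB payload
DURATION = 5                        # seconds to run the test

with socket.create_connection((HOST, PORT)) as sock:
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < DURATION:
        sock.sendall(CHUNK)
        sent += len(CHUNK)
    elapsed = time.monotonic() - start

print(f"~{sent / elapsed / 1e6:.0f} MB/s over {elapsed:.1f}s")
```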
Do you still see the SuperShare, HDPro-24, and MetaSAN combo as a viable option?
I have to suggest something to my client, and on paper it seemed like a good option.
4 to 5 seats. Mostly Premiere on Windows and one Mac with Resolve.
Maybe one Avid on a PC.
The workflow is 1080p, from DPX to compressed formats.
DaVinci Resolve 10, OS X 10.8.5
Mac Pro 5,1, 2x 2.93 GHz, 24 GB
GUI 4000 / GPU GTX 780
Full LightSpace CMS