Question regarding performance...
Hi to whomever can give me some input,
I am looking to build a storage array and server for a video/photo company here in Toronto. The system I have designed will be about 50TB to start, with easy expandability. It will be a FreeBSD-based server array, connected through SAS to a 32-bay enclosure (SAS expanders allow the easy expandability), which I will populate with 2TB WD Black drives (chosen for performance). This system will be connected to the switch by dual 10Gbps links.
The workstations will all be Macs, so I have chosen AFP as the file-sharing protocol for the storage. They will connect through gigabit links back to the switch. Now, the primary codec this office uses for its edits is ProRes LT (about 100Mbps).
I believe the math is there, and based on this we should be able to have the editors edit off the network, but I have not built a system like this yet. I have built similar, smaller systems, but I was looking for feedback from anyone who has built (or used) a similar system about any problems they've had. They are looking at about 4-5 editors using the system, plus some admin people and stills/graphics production.
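The bandwidth math above can be sketched quickly. All the figures below are assumptions for illustration (two streams per editor, ~80% usable link efficiency), not measurements from the actual system:

```python
# Back-of-the-envelope bandwidth check for the proposed setup.
# Every constant here is an assumption, not a measured value.
PRORES_LT_MBPS = 100        # ProRes LT runs at roughly 100 Mbit/s (from the post)
STREAMS_PER_EDITOR = 2      # assume a two-stream timeline per editor
EDITORS = 5                 # 4-5 editors, per the post

GIGABIT_USABLE_MBPS = 800   # assume ~80% of 1GbE after protocol overhead
TEN_GIG_USABLE_MBPS = 8000  # same assumption applied to 10GbE

per_editor = PRORES_LT_MBPS * STREAMS_PER_EDITOR   # demand on each client link
total = per_editor * EDITORS                       # aggregate demand at the server

print(f"per-editor demand: {per_editor} Mbit/s vs ~{GIGABIT_USABLE_MBPS} Mbit/s usable GigE")
print(f"aggregate demand:  {total} Mbit/s vs ~{TEN_GIG_USABLE_MBPS} Mbit/s usable 10GbE")
```

On these assumed numbers the raw throughput fits comfortably, which is consistent with the "the math is there" claim; the thread's pushback is about everything the raw math doesn't capture (latency, tuning, protocol quirks).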
Any input is appreciated, so that I may calm the client's worries.
My suggestion is to get one of the established shared storage solutions that you see discussed on this forum every day.
Why are you going to try to reinvent the wheel, when you can just go out and buy it?
Agree with Bob. If you're building this for a client, especially a creative company, it needs to be a known solution. With support. By a manufacturer that will be available for your client after you're gone. Something like this...
If you decide you'd still like to build something, at least base it on a Mac Pro or Xserve. It will be a solution your client has a reasonable chance of managing on their own, and that you can have up and running in under a week.
I chose to design (build) this instead of buying turnkey because of scalability and cost. A SAS system allows cheap expansion of storage, and this house has a data production rate of about 2TB per month. The cost of turnkey was almost double what this system came to, once expansion was factored in.

I know support is what will be needed once this system is built (management too), as they won't have in-house personnel to deal with this, or the means to self-manage. But I failed to mention that I have ties to the company, and that I'm not just installing and then leaving. The topology is really easy to understand, if it needs to be taught. I must say that if we all just took what was given, then innovation, and growth toward better solutions, would never happen.
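The 2TB/month figure also implies a rough expansion timeline. A quick sketch, where the 20% free-space headroom is my assumption (arrays are usually kept from filling completely for performance), not something stated in the post:

```python
# Rough capacity-runway estimate from the figures in the post.
RAW_TB = 50                 # starting capacity (from the post)
GROWTH_TB_PER_MONTH = 2     # stated data production rate
HEADROOM = 0.20             # assumed free-space reserve for performance

usable_tb = RAW_TB * (1 - HEADROOM)          # capacity before hitting the reserve
months = usable_tb / GROWTH_TB_PER_MONTH     # months until expansion is needed

print(f"~{months:.0f} months before the first expansion")
```

On those assumptions the initial build buys well under two years before the "easy expandability" actually gets exercised, so the expansion path is not hypothetical; it is part of the design from day one.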
I'm sure this system will work (at least for the storage aspect) as it's the same as is used in the enterprise. I guess I should have been clearer about what I was asking for. I was really just wondering what people's experiences have been editing over a network system. What problems have they encountered?
Based on this design, all stations would have a gigabit link to the server, with no bottleneck from increased access (as the storage is connected to the switch through the 10Gbps links). Kind of the same principle as in your "Build your own affordable SAN" article, but using 10Gbps links (instead of aggregated 4x1Gbps links) and a custom server instead of a Mac Pro. So in theory it should be possible. What I'm wondering is whether you've come across anything that would hinder this theoretical plausibility?
Based on a ProRes LT edit codec, do you foresee (or have you experienced) any problems in the systems you've built with EditShare, Bob or Jason?
Thanks, I appreciate the help, guys.
I'm sure this system will work (at least for the storage aspect) as it's the same as is used in the enterprise.
It's not. And that's an assumption the multitude of enterprise storage vendors pouring into the media space are making right now.
Kind of the same principle as in your "Build your own affordable SAN" article, but using 10Gbps links (instead of aggregated 4x1Gbps links) and a custom server instead of a Mac Pro.
Bob used OS X, and you are attempting to use FreeBSD. FreeBSD is a great OS, but not for this. Why forgo arguably the best AFP implementation in the world when you can have everything you need, with a great GUI, great administration tools, and editor familiarity, for free? The cost you save in your custom server will be far outweighed by the hours you put into this, and all for a lower quality result.
So in theory it should be possible.
In theory, we all came from an explosion, but not everyone agrees on that yet either :)
The problem is the task: the perfect, frame-by-frame delivery of multiple streams of video to several editors at once. Bob's solution is popular because it comes as close as you can to this goal, while being easy to configure, reliable, affordable and complete.
No one is saying you shouldn't develop your own solution, but it should happen on your own time, and not on your customer's.
You asked for advice. The advice is to use a known solution, or hang your storage off of a Mac Pro or Xserve. If you do that, your editors will be working in a week. If you attempt to develop something based on a *NIX variant that's actually complete (fast, affordable, shared storage for several editors, manageable by an assistant), you may spend several months getting it working, and still not have as nice a solution as you'd have if you just used OS X.
Mark, you say that you've designed similar systems before. Did those include a FreeBSD server with Mac clients sharing video over AFP? If so, how was the performance and reliability? If not, I'd definitely go with OS X Server as Jason was saying.
Mark writes -
Kind of the same principle as in your "Build your own affordable SAN" article, but using 10Gbps links (instead of aggregated 4x1Gbps links) and a custom server instead of a Mac Pro.
You are very innocent, and I can see that you want to be the big hero of this company: "look how smart I am." I wrote that article after I read a post on Creative Cow about Tiger Technology MetaLAN, about a week after we got it working. That was in 2008. Boy, have we learned a lot since then. That system stopped working, and we suffered, and suffered, to get it working properly. It's a similar story to what Gary Holiday told me about how he started Studio Network Solutions: he bought an audio SAN, it never worked right, and he suffered to figure out how to do it.
And Mark, we STILL suffer with new systems. Every computer change is a challenge, every host adaptor is a challenge. We use SPECIFIC disk host adaptors, and specific drives, because we have SUFFERED to learn what works and what doesn't. We make mistakes EVERY WEEK with new gear, with new computers, with new applications. But this is what we do - WE GET PAID to do this, we make LOTS of mistakes, and we figure them out. And every time there is new gear, there are NEW MISTAKES, and we have to figure them out.
If you do this yourself, YOU WILL MAKE MISTAKES, and you won't be able to come to Creative Cow to figure them out. I can tell you that almost every manufacturer is SICK AND TIRED of me calling them and saying "hey, we are doing this, and it's not working any more". This is what manufacturing is all about: figuring crap out so that your clients don't suffer.
I recently "saved" a Creative Cow poster who hired someone to build his own 10Gig shared storage system with Myricom cards and an HP ProCurve switch - well, I saved him, but there are STILL PROBLEMS, because I can't devote weeks of research into getting that exact hardware to work 100%. Every switch is different, every NIC is different, every drive and every host adaptor is different, and they all have "tweaks" to get them working correctly, without issue. And you know what - we STILL have ongoing problems, and that is why we (and everyone else) come out with NEW PRODUCTS - to address these issues.
So, go ahead and build your own system - maybe you will get lucky, and maybe you will enjoy the process, and start your own shared storage firm, and compete with everyone else on this list. I can tell you, you have a long road ahead of you. If this stuff just plugged in and worked, we would all be "on the beach". But it doesn't work that way in real life.
Hi Mark -
I want to torture you a little more.
Do your clients use Sony XDCAM EX or Panasonic P2 cameras, which create MXF or MP4 files? Do you know that if you use Apple AFP as your network protocol for Final Cut Pro, you will get errors when you try to Log and Transfer a single file greater than 4 Gigs across an AFP network? Do you know how you will resolve this issue? You haven't thought about this yet, because you haven't even built the system yet, so you don't know to think about stuff like this yet.
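One practical way to gauge exposure to that 4-Gig issue is to audit existing media for files over the threshold before committing to a protocol. A minimal sketch; the mount point `/Volumes/Media` is a hypothetical example path, and the 4 GiB cutoff is taken from the problem described above:

```python
# Audit a directory tree for files over 4 GiB before moving media onto a share.
# The path used at the bottom is illustrative; point it at real media storage.
import os

LIMIT = 4 * 1024**3   # 4 GiB in bytes

def files_over_limit(root):
    """Yield (path, size) for every file under `root` larger than LIMIT."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip unreadable or vanished files
            if size > LIMIT:
                yield path, size

for path, size in files_over_limit("/Volumes/Media"):  # hypothetical mount point
    print(f"{size / 1024**3:.1f} GiB  {path}")
```

If long-GOP camera masters regularly exceed the limit, that's a concrete argument for testing Log and Transfer over the chosen protocol before the editors ever touch the system.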
You will see soon. There are many wonderful manufacturers on this list - they all work. I suggest you call some of these companies.
I'm going to be a nice guy and do something nobody else has done yet on this forum posting... you mention using ProRes LT. I'm assuming you're not intending to do anything with Multi Video Threads per client, so I would just tell you that what you want to do is entirely OVERKILL and EXPENSIVE (considering you want to save money). Don't use a switch at all - just go direct to the server using Gigabit from the client editor stations. The real problem you'll have is making sure that whatever storage you use, plus server tuning options, wire tuning options, and whatever else you want to invoke, will all work the way they should, so the editing stays real time when all 5, 6, etc. people hit the storage network at once.
I recommend calling Small Tree. WHY? No, it's not because I'm working for them - it's because they are real-time video experts from SGI and Cray, and they know what you'll need to make your shared storage work right. They can solve ANY problem you'll find yourself having with it in terms of video performance, OS performance, shared network performance, etc.
Here's a few additional perks because I'm a nice guy ----
Using six 1Gb ports (about 600MB/sec) link-aggregated to your switch is about the same as what a single 10Gb Ethernet link will deliver under AFP... that's just how it is.
Your better bet here would be to stick a Small Tree PE2G6i card into a Mac Pro (making sure the Mac Pro is the best of all the systems connecting to it in terms of generation, Ethernet chip controller, etc.), connect Small Tree single-port cards (PEG1) to each editor station (or opt to use their built-in ports), and hook them directly to the server.
The rest will be up to you to resolve and take care of, unless you put the money saved from NOT using 10Gb toward a storage and support services investment, so you KNOW 100% that what you get will work and can be supported directly by whoever is selling it to you.
Let me know if you'd like to talk. I'm sure with some Google searching you can find your way around. You may even want to swing on over to the Small Tree forum to find some contacts that would be happy to have a phone conversation with you!
Matt G (Small Tree)