DIY SAN / Fileserver
We are in the process of starting a new company and need a fileserver for our first project. There will be 6-8 artists (Windows machines) working on the same footage in After Effects/Nuke.
The raw material will be 4-5 layers of 2K TGAs, but we are also planning to use EXRs and standard video material on this project.
With 10GBase-T becoming affordable, we decided to invest in something future-proof (server and client side) :)
Obviously we don't have a chance to test this setup before buying and assembling it, and it is quite a big investment for us. That's why I am asking you guys for your feedback.
This is our current setup:
8x Seagate Video 3.5 HDD 4TB, SATA 6Gb/s
4x Samsung SSD 840 Evo Series 500GB -cache
2x Intel Xeon E5-2620 v2, 6x 2.10GHz
Kingston ValueRAM Intel DIMM Kit 64GB, DDR3-1333, CL9, reg ECC
Samsung SSD 840 Evo Series 250GB, 2.5", SATA 6Gb/s
Adaptec RAID 81605ZQ
Supermicro X9DRH-iTF retail
Supermicro 836TQ-R800B black, 3U, 800W redundant
Eaton Ellipse ECO 800 DIN UPS
We are planning to run Windows Server 2008 R2 on this machine. After reading about the caching abilities of the newly released Adaptec cards, we also decided to switch from 16x HDD to 8x HDD + 4x SSD. The plan is to increase IOPS and transfer speeds through the caching ability of the RAID controller. We also decided to go with non-enterprise disks. We are aware that they will fail from time to time, but a RAID 5 with a spare disk should leave us with enough safety (hopefully).
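The RAID 5 + hot spare trade-off above is easy to put rough numbers on. A back-of-envelope sketch, assuming the listed 8x 4 TB drives and a hypothetical ~100 MB/s sustained rebuild rate (an assumption, not a measured figure):

```python
# Rough capacity/availability math for the proposed array:
# 8 x 4 TB SATA drives, RAID 5 with one hot spare.

DRIVES = 8
DRIVE_TB = 4
SPARES = 1
PARITY = 1  # RAID 5 costs one drive's worth of capacity for parity

usable_tb = (DRIVES - SPARES - PARITY) * DRIVE_TB
print(f"usable capacity: {usable_tb} TB")

# Rebuild exposure: while a failed 4 TB drive rebuilds onto the spare,
# a RAID 5 array has NO remaining redundancy. At an assumed ~100 MB/s
# sustained rebuild rate, that window lasts many hours.
rebuild_hours = DRIVE_TB * 1e12 / 100e6 / 3600
print(f"rebuild time at 100 MB/s: ~{rebuild_hours:.0f} hours")
```

That unprotected rebuild window is the usual argument for RAID 6 (or ZFS raidz2) with large consumer SATA drives.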
A daily backup will go to a Synology DiskStation DS1813+.
I forgot to mention that there is not only no budget for outsourcing this, but we are also geeky enough to look forward to building this beast ourselves :)
- How many MB/s do you think this baby will be able to read/write?
- Should we be scared about the file sequences (10 MB per frame TGAs)?
- Any bottlenecks that we didn't see?
- Does anyone have experience with a similar setup or the new Adaptec RAID controllers?
thank you for your help
cheers from Austria,
[Valentin Struklec] "How many MB/s do you think this baby will be able to read/write?"
This begs a question: if you were sold on Adaptec's caching performance, why not ask Adaptec? (Don't take it the wrong way - who else would we ask?)
I'd worry less about the memory, CPUs and power supplies in your system, and much more about these three:
(1) the disk I/O subsystem (drives, controller, caching),
(2) 10GbE cards and their performance,
(3) OS and its ability to pump the I/O from storage to NICs, with good MPIO support and low latencies.
[Valentin Struklec] "WinServer 2008 r2"
Have you talked to anyone using 08r2 to serve multiple streams of 2K TGAs?
[Valentin Struklec] "- Should we be scared about the file sequences (10 MB per frame TGAs)?"
4-5 streams of them? I'd be scared. :) That's 1.5GB/s assuming 30fps - an impossible feat for a single 10GbE link and a very tall order even for an MPIO 10GbE configuration. I can pretty much guarantee the configuration you described is nowhere near that in performance.
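The arithmetic behind that estimate is easy to sanity-check. A minimal sketch, assuming the thread's numbers (~10 MB per 2K TGA frame, 30 fps) and ~1250 MB/s raw for a single 10GbE link (before protocol overhead):

```python
# Can N parallel 2K TGA streams fit through one 10GbE link?
FRAME_MB = 10   # ~10 MB per 2K TGA frame (figure from the thread)
FPS = 30        # playback rate assumed in the estimate

def required_mb_per_s(streams):
    """Aggregate read bandwidth needed for `streams` real-time streams."""
    return streams * FRAME_MB * FPS

LINK_MB_PER_S = 1250  # 10 Gbit/s raw; real-world SMB throughput is lower

for n in (3, 4, 5):
    need = required_mb_per_s(n)
    verdict = "fits" if need <= LINK_MB_PER_S else "exceeds"
    print(f"{n} streams -> {need} MB/s ({verdict} one 10GbE link)")
```

Even on paper, 5 streams (1500 MB/s) exceed the raw link rate, and protocol overhead pushes the practical break-even lower still.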
I'd first see if you can bring your performance requirements down to perhaps 2-3 streams, ensure the storage subsystem can serve double that on its own, and then take a look at what OS and networking configuration can do what you need. Windows Server 2012 R2 may do that over SMB3 with Win8 clients, but I'd perhaps look into Linux storage servers.
-- Alex Gerulaitis | Systems Engineer | DV411 - Los Angeles, CA
Thank you for your thoughts.
[Alex Gerulaitis] "That's 1.5GB/s assuming 30fps"
I think I forgot to clarify that we are doing compositing, not editing. It will be a hand-drawn cartoon feature layered in After Effects, which means we won't have to stream the raw material in real time. Nevertheless, performance is key to getting the work done. With anything around 1000 MB/s I would be very happy, and I am hoping we will be in that region. I have had problems with performance and file sequences in the past, though never on a Windows system.
[Alex Gerulaitis] "why not ask Adaptec?"
We have also asked Adaptec about their new controller series. I will let you know as soon as they respond.
Unfortunately there are not a lot of people in Austria with experience in this, so I was not able to find someone with the expertise to advise me :) We do have companies that specialise in editing workflows, but no infrastructure providers for the VFX/compositing area.
[Alex Gerulaitis] "(2) 10GbE cards and their performance,"
I have not played with 10GbE yet, but from what I have read it seems to be doing not too bad.
Honestly I am more worried about IOPS and the performance of the system going down with more than 2-3 users constantly streaming data.
Linux is an option. I believe I would have to invest a lot more time in that case, but it may be the smarter choice ;)
Anyways, thx for the feedback and we will keep you guys updated :)
[Valentin Struklec] "I have not played with 10GbE yet, but from what I have read it seems to be doing not too bad.
Honestly I am more worried about IOPS and the performance of the system going down with more than 2-3 users constantly streaming data."
I hear you. In that case, CPUs, OS overhead, the number of spindles and caching do make an impact.
Did you already buy the config above, or is it still in planning stages?
Don't think you need someone locally to consult you on what to buy, or how to configure an OS. That can be done from anywhere.
Thanks for the replies. The system needs to be up and running in March, so we will start buying the parts in January. We haven't bought anything yet. Luckily we have enough time to set it up properly and do some tweaking before production starts.
Here are some of the responses from studio sysadmins who walk the talk as far as VFX storage, and who I learn a lot from, every day.
Re: Windows Server
- "No way I'd use win server (any) for dedicated storage"
- "I don't think anyone is using M$ products for file serving needs"
Re: what OS:
- from Greg: "If I were to find myself in this position, I'd be looking at a solution which is more focused on storage as opposed to a 'i can do it all' solution, such as a Nexenta product, or if license costs take it out of the running, some other ZFS product. The caching is a lot more intelligent and you can augment it in a few different areas to meet your needs. Support and comfort levels will play into that decision. If they do go ZFS I'd not do it on linux but rather a solaris based OS, a second choice would be *BSD, with linux being the last choice."
- from Julian: "FreeNAS/ZFS would be cheaper, more reliable and perform better"
- from Brian: "FreeNAS (9.2 RC is out)
OWC SSD Pro Enterprise for ZIL
Samsung Pro or OWC Pro (non Ent) for L2ARC
Nearline SAS for spinning rust (SATA with SAS controller).
Or, you can rent storage if this is a temp thing."
I'll also reiterate my earlier sentiment that this project will likely benefit greatly from hiring someone (not me - not for this project) to help you through buying choices, and perhaps with ongoing support as well. I don't think that it has to be local.
[Alex Gerulaitis] "This begs a question: if you were sold on Adaptec's caching performance, why not ask Adaptec? (Don't take it the wrong way - who else would we ask?)"
Luckily Adaptec support are pretty helpful.
Ultimately +1 what Alex said.
You also need to assess whether the working set will actually fit on the SSD cache. I doubt Adaptec's cache policy is anywhere near as intelligent as L2ARC (for example). Big ZFS love going on here..
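That working-set question is worth putting rough numbers on. A sketch using the thread's figures (4-5 layers of ~10 MB TGA frames) against the 4x 500 GB cache SSDs from the original parts list; the 2000 "hot" frames is a purely hypothetical shot length, not a number from the thread:

```python
# Does the working set fit in the SSD cache? Hypothetical project numbers.
LAYERS = 5            # 4-5 TGA layers per comp (from the thread)
FRAME_MB = 10         # ~10 MB per 2K TGA frame (from the thread)
HOT_FRAMES = 2000     # assumed: frames actively touched at any one time

working_set_gb = LAYERS * FRAME_MB * HOT_FRAMES / 1000
cache_gb = 4 * 500    # 4x 500 GB Samsung 840 Evo cache SSDs

print(f"working set: ~{working_set_gb:.0f} GB, cache: {cache_gb} GB")
# Fitting is necessary but not sufficient: whether the controller keeps
# the *right* frames resident depends on its eviction policy -- which is
# exactly the L2ARC-vs-controller-cache point above.
```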
Personally used Adaptec cards in a number of indiestor rigs and they perform very well. We don't need anywhere near the same kind of performance as you though.. your IO requirements are considerable.
Also, Adaptec 'MaxView' is not as flawless out of the box as I would have liked (on Linux). Hopefully it works better on Windows.
Overall, you should consider budgeting to hire a specialist in your country, or at least consultation from an expert on this board (Alex Gerulaitis, Bob). It will save you money and grey hair in the long run.
I've no doubt you'll get something up and running, but it might not satisfy your performance needs. DIY is always a question of suck and see, make some adjustments, see what you learn. This can be liberating.. but there is also a lot of risk involved.
Power to ya either way :)
I think that you are going to have to better define your requirements. Are you in need of a SAN or a NAS? I ask because they are both very different and serve two completely different functions. With the hardware list & drive count you posted, I am afraid that it is not going to take you very far. For what you are trying to do, it all comes down to LUNs and wide-striping. Don't confuse bandwidth with caching as caching mainly helps with non-sequential workloads, which will probably only constitute a small part of your requirements.
Also, Windows makes a pretty bad file server for a whole host of reasons. If you have to use Windows (and you don't), then don't use their server product, use their storage gateway solution instead. Personally, I would not use either.
BTW, it looks like we are both RTT alum., but I don't think we ever met. I have only made it out to the Munich office twice.
First of all:
Thank you guys for all the responses. They have been read multiple times and have already helped us a lot. After doing some more research and testing, we changed the setup quite a bit..
Adaptec also told us that their caching ability would not help in our case, since our workload is primarily sequential reading :(
Our current setup uses 24x 2 TB HDDs with the Adaptec RAID controller plus an expander.
In theory this should give us more performance than dual 10GbE can handle. Nevertheless, tests like the one from Tom's Hardware keep us worried:
It's in German, but the third picture would represent our setup..
In their test setup, 8 HDDs deliver 300-500 MB/s. Not very satisfying :)
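A naive linear extrapolation from those 8-drive numbers gives a rough ceiling for the revised 24-drive array - real-world results will land below it, since the expander, controller and parity overhead all shave throughput:

```python
# Scale the measured 8-drive throughput (300-500 MB/s, per the cited
# review) to the planned 24-drive array. Assumes perfectly linear
# scaling, which is optimistic -- treat these as upper bounds.
baseline_low, baseline_high = 300, 500   # MB/s measured with 8 drives
drives_measured, drives_planned = 8, 24

scale = drives_planned / drives_measured
low = baseline_low * scale
high = baseline_high * scale
print(f"optimistic linear estimate: {low:.0f}-{high:.0f} MB/s")
```

Even the optimistic upper bound only just covers dual 10GbE, so benchmarking the array (e.g. before going live) rather than trusting the extrapolation seems prudent.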
We also decided to go with Windows Server 2012 R2 and Win 8.1, as suggested by a couple of people on StudioSysAdmins.
Again: Any Input is highly appreciated :)
Many thanks from Austria