
ZFS experiments.

COW Forums : NAS - Network Attached Storage

Karel Voners
ZFS experiments.
on Sep 10, 2014 at 7:29:40 pm


I'm currently an editing assistant and, as an experiment, I built a ZFS server and tried to use it in an editing environment. After some tweaking everything runs well. Its main purpose is a secure onsite backup, and to give me my own editing copy of all the footage. Right now my zpool consists of 2 vdevs, each containing 5 disks of 4TB in RaidZ1, so 10 disks in total. I also added a 256GB SSD as cache.

  raidz1-0  ONLINE       0     0     0
    da0     ONLINE       0     0     0
    da1     ONLINE       0     0     0
    da2     ONLINE       0     0     0
    da3     ONLINE       0     0     0
    ada0    ONLINE       0     0     0
  raidz1-1  ONLINE       0     0     0
    da4     ONLINE       0     0     0
    da5     ONLINE       0     0     0
    da6     ONLINE       0     0     0
    da7     ONLINE       0     0     0
    ada1    ONLINE       0     0     0
cache
  ada2      ONLINE       0     0     0
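For reference, a layout like the one above could be created with something along these lines. This is a sketch only: the pool name "tank" is an assumption, the device names are taken from the listing, and on a live NAS4FREE box you'd normally do this through the WebGUI.

```shell
# Two 5-disk raidz1 vdevs striped together, then the SSD as L2ARC.
# WARNING: zpool create destroys any existing data on these devices.
zpool create tank \
  raidz1 da0 da1 da2 da3 ada0 \
  raidz1 da4 da5 da6 da7 ada1

# Add the 256GB SSD as a cache (L2ARC) device
zpool add tank cache ada2
```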

I get about 300MB/s in reads, so it's quite good. The IOPS are the biggest issue: with this config I can only really manage to edit with 1 client. The limited IOPS also have a big influence on opening large projects in FCP7 (120-160MB); it can take 2-3 minutes. After adding an SSD L2ARC this goes down to about a minute or less. As this is a test setup, that's okay.
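If you want to see how much the L2ARC is actually helping, FreeBSD (which NAS4FREE runs on) exposes the ARC counters via sysctl. A quick check might look like this (counter names assume a ZFS-enabled FreeBSD kernel):

```shell
# L2ARC hit/miss counters and current size; the hit/miss ratio tells you
# whether the cache is warm enough to matter for your project-open times
sysctl kstat.zfs.misc.arcstats.l2_hits \
       kstat.zfs.misc.arcstats.l2_misses \
       kstat.zfs.misc.arcstats.l2_size
```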

But I was wondering if there are alternative ways to serve ZFS to OS X clients? Right now I use NAS4FREE and run everything over CIFS on a 1Gb connection. AFP on NAS4FREE is too slow and too outdated, and NAS4FREE is not compatible with fibre channel. But I could, for example, turn the server into a hackintosh and have native fibre channel support on both ends. (I've had good experiences with hackintoshes over the past 4 years; super stable if you get the right parts.) Or just go the 10GbE route and keep NAS4FREE. Or go for Linux and keep all options open? I also read a thread about Oracle, but I don't know much about that. Would that be an option?
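One way to make CIFS friendlier for OS X clients, as an alternative to the hackintosh route, is a newer Samba build with its Apple-compatibility module. A minimal share sketch, assuming Samba 4.2+ with vfs_fruit available (share name and path are hypothetical):

```ini
[media]
   path = /tank/media
   read only = no
   ; vfs_fruit adds Apple SMB extensions plus resource-fork and
   ; extended-attribute handling that Finder expects
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:locking = netatalk
```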

If anyone has questions or needs recommendations, feel free to ask. I'm using it in an FCP7 environment. It took some time to get things right (like the insane amount of RAM you really need!), but now it hums along quite fine.

I think if you really want those IOPS you need to work with mirrors. (And you don't want to use 5000RPM drives like I'm using, only do this for slow backup stuff).
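For comparison, the same 10 disks laid out as striped mirrors (the RAID10-style layout I mean here) would look roughly like this sketch; you trade half the capacity for much better random IOPS, since IOPS scale with the number of vdevs and reads can be served from either side of each mirror:

```shell
# Hypothetical: 5 mirror vdevs instead of 2 raidz1 vdevs.
# Usable capacity drops from ~32TB to ~20TB, but random IOPS go way up.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror ada0 ada1
```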


This is my hardware setup:

Supermicro X7DWN+ - Dual Xeon 5130 - 48GB Ram - 8x 4TB Seagate ST4000DM000 - 2x 4TB HGST HDN724040ALE640 - 1x 256GB Crucial_CT256MX100SSD1 - TDK LoR TF30 USB 3.0 PMAP (boot) - Dell H310 SAS/SATA Controller - 2x HP360T Gb NIC - Supermicro SC825 2U Chassis 920Watt Platinum PSU.


Bob Zelin
Re: ZFS experiments.
on Sep 10, 2014 at 8:06:53 pm

Hi Karel -
at the risk of getting thrown off Creative Cow, I will answer your question in a way that you will not like.

There are two well-known companies that offer wonderful ZFS solutions:
Small Tree Communications
LumaForge

both owners of these companies are very active on Creative Cow, and if you simply purchase their systems, you will have a wonderful working ZFS Shared Storage system.

Creative Cow is not an engineering forum. No one is going to tell you how to "build it yourself" on this forum so you can save ALL THAT MONEY that those other guys charge. And even on actual engineering forums, they won't tell you all the secrets to get things to work. If you want a ZFS shared storage environment, buy a Small Tree or LumaForge shared storage system, and you will have a working system with wonderful support. If you want to build it yourself, you will not get your answers here.

sorry for being rude.

Bob Zelin
Rescue 1, Inc.


Karel Voners
Re: ZFS experiments.
on Sep 10, 2014 at 9:12:32 pm

No worries, I just wanted to check whether people had recommendations or other ZFS experiences they wanted to share. I like implementing it myself and want to keep doing it. My goal is to implement systems for other people and post houses, to give them some data security at an affordable cost, without having to rely on other companies. The beauty of ZFS is that it's highly secure, fast, and open source, and you don't need overpriced RAID disks.

Guess I will go on experimenting myself then...



alex gardiner
Re: ZFS experiments.
on Sep 10, 2014 at 10:28:39 pm

We have been working with ZFS (on Linux) on the indiestor project for some time.

I would recommend using the dedicated ZFS mailing lists because they move quickly and are run by really good folks + the actual dev team.

Beyond that you don't provide much analytic data, other than the use case.

I find Bonnie++ very handy for digging deeper into what your pools are actually doing, as well as some of the scripts that can show you how effectively you are using the L2ARC. Alignment/ashift is also very important, but I suppose you already know that.

In my circle lots of people rate Illumos (which you could look at). That said, I work with Debian (because that's what indiestor is packaged for) and get very good results using ZoL 0.6.3.
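For anyone following along, a typical Bonnie++ run and an ashift check might look like this (mount point and pool name are assumptions; size the test file at roughly 2x RAM so the ARC can't absorb it):

```shell
# Sequential throughput / seek benchmark on the pool's mount point.
# -s ~2x the 48GB of RAM in this box so results aren't just cache hits.
bonnie++ -d /tank/bench -s 96g -n 0 -u nobody

# Verify vdev alignment: ashift=12 means 4K-aligned writes,
# which is what you want on modern 4K-sector drives
zdb -C tank | grep ashift
```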

Not sure how much help I've been, but I wanted to chip in as I drop past here every so often.

Have fun!

+447961751453 - "Avid project sharing, shared!"


John Heagy
Re: ZFS experiments.
on Sep 16, 2014 at 11:20:00 pm

We have a Coraid system that uses ZFS straight from the source, in the form of a Sun X3-2L server running Solaris with 256GB of RAM (half populated). It consists of six 36-drive SRX 6300 chassis, each half populated with 18 drives, and can be scaled up as granularly as one drive per chassis. The pool is 18 vdevs of 6 drives each in raidz2, with each vdev taking one drive from each chassis. No L2ARC or ZIL. Because every vdev spans all six chassis, it's effectively a RAIN 6 system: we can lose two whole chassis without data loss.
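To make the "one drive per chassis" idea concrete, each raidz2 vdev takes exactly one disk from each of the six shelves, something like this sketch (device names purely hypothetical):

```shell
# Two of the 18 vdevs shown; each vdev spans all six chassis, so any
# two whole chassis can fail and every vdev still has raidz2 redundancy
zpool create tank \
  raidz2 c1d0 c2d0 c3d0 c4d0 c5d0 c6d0 \
  raidz2 c1d1 c2d1 c3d1 c4d1 c5d1 c6d1
```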

Other ZFS-based systems are Nexsan's NST (though you'll find no mention of ZFS on their site) and OWA's Jupiter Callisto NAS.


