ADVICE PLEASE: On an annoying RAID issue.
I do a lot of multiclip sequence editing; that's pretty much all I do. I tend to run 8 Atomos Ninjas shooting 1920x1080 ProRes LT, and anywhere between 1 and 4 GoPros, whose footage I convert to ProRes LT. The Ninjas record in 4GB 6-minute files, so I drop these into timelines along with the converted GoPro footage and make reference movies (not self-contained) with the Make QuickTime Movie facility. The resulting clips are about 15 minutes long, so anywhere from 8 to 12 streams of 15-minute ProRes LT. It's a fairly old Mac Pro 1,1 I'm using: dual-core 3GHz, 12GB RAM.
Now here's the thing. My RAID is a Proavio box with 8 x 1TB WD Caviar Green drives. About 18 months ago, preceded by a strong smell of burning, my Highpoint 4322 card failed and the RAID disappeared off my desktop, never to return. I got a second Highpoint 4322 card and got back to work fine. Then, 15 months later, a similar thing happened again: this time the card didn't disappear and I could still access files on the RAID, but it would no longer push the streams it had before. It became erratic and the dropped-frame warning kept coming up.
Having learnt my lesson, I purchased an Areca ARC-1882x, and it's up and running again. However, and here is the nub of my post, the RAID is struggling to push 12 streams of ProRes LT in multiclips. Whether it's set to dynamic, safe or unlimited RT, I get 15-20 seconds of playback, then the warnings start again about a slow system, or close open sequences, etc., etc.
I've regularly run the AJA System Test, and while messing about trying to get the RAID working again, I temporarily put 3 Seagate 2TB drives in bays 2, 3 & 4 of the Mac Pro and software-RAIDed them internally as RAID 0. The system test gave me write and read speeds around 350MB/s, and even now, half full, the graph shows a fairly consistent line. That's good for maybe 8 or 9 streams of video.
The RAID 5 enclosure, on the other hand, running the same 4GB 1920x1080 10-bit test file, gives an up-and-down trace over the test: read varying from less than 100 to over 1000, and write a steady line over 1100 for the first third of the 4GB file before falling to around 600-700. The average write speed is about 700MB/s, and the read is 450MB/s. That, along with the inconsistent graph line, puzzles me, as I remember that when I first got the RAID it was in excess of 700MB/s on both write and read, and I feel sure it had no problem at all with 12 streams of ProRes (I used to convert HDV tape before getting the Ninjas); the trace when the RAID was new was fairly flat and consistent, 750 write and read. It now has 2TB of ProRes onboard, and when pushing a 15-minute, 12-stream ProRes multiclip, the frame rate looks very choppy and low, like hand-cranked (I've disabled the dropped frame/slow HDD/slow system warnings). Collapse the multiclip and it's fine.
Before buying the new RAID card I tested each (now reformatted) drive from the RAID individually and got write/read speeds of 65-110MB/s. The 3 slowest drives I replaced with new ones, just for peace of mind, when I rebuilt the RAID with the new card, which left the slowest at around 95MB/s.
Now here's my simple maths, forgive me if I'm too simple: 12 streams of ProRes LT at ~85Mb/s (10.6MB/s) each comes to 1020Mb/s, or about 128MB/s. The AJA System Test is giving me average write speeds around 700MB/s and read speeds around 420MB/s, so the pipe, if my simple maths/understanding is right, ought to be wide enough. Which leaves me puzzling over the inconsistent speed over the course of the AJA test.
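For what it's worth, that back-of-the-envelope maths can be sketched as a few lines of Python. The 85Mb/s per-stream figure is an approximation for ProRes LT at 1920x1080 (the real rate varies a little with frame rate and picture content):

```python
# Rough aggregate-bandwidth check for a multiclip timeline.
# Assumption: ProRes 422 LT at 1920x1080 runs at roughly 85 Mb/s per stream.

MBIT_PER_STREAM = 85          # Mb/s per ProRes LT stream (approximate)
STREAMS = 12                  # streams playing back in the multiclip

total_mbit = MBIT_PER_STREAM * STREAMS   # megabits per second
total_mbyte = total_mbit / 8             # megabytes per second

print(f"{STREAMS} streams x {MBIT_PER_STREAM} Mb/s = {total_mbit} Mb/s "
      f"(~{total_mbyte:.1f} MB/s)")
# 12 x 85 = 1020 Mb/s, i.e. ~127.5 MB/s of sustained read
```

On that arithmetic the required sustained read is well under the array's measured averages, which is exactly why the inconsistency of the trace, rather than the headline number, is the interesting part.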
I'm now wondering what to do. I can't decide if I need to replace the other drives in the RAID, or whether there's something wrong internally with the Mac, although if there is, why does the internal RAID work so well (well, up to 350MB/s)? Or whether I need a new Mac Pro. Or did I imagine I could ever run 12 streams of ProRes LT? I've checked the Expansion Slot Utility: I have an ATI HD4870 in slot 1 (x16) and the RAID card in slot 4 (x8).
I've read on here that the AJA test should give a fairly straight-line graph, which has got me thinking: I have a wide enough pipe through which to pour the video streams, but either the pump (the Mac, in some odd way) or the tanks (the RAID drives) are not feeding the pipe consistently enough. Excuse the dodgy plumbing metaphors. The RAID itself says it's running fine on all its diagnostics. Activity Monitor shows no signs of a resource hog, and the Mac's RAM all shows up in the System Profiler. I don't have a second Mac Pro, although I'm toying with hiring one for a month and shifting the card and FCS3 over, to rule my trusty Mac Pro in or out.
Has any Jedi RAID Master here got any ideas of the what, the how to diagnose, and indeed the fix? It was suggested the Caviar Greens are too slow a drive for RAID use, and I'll consider replacing all 8 RAID drives, which is expensive. I'm tempted by, but baulking at, installing WD RE4s, and I can't help thinking there's more to it than a suspicion of slow Caviar Green drives.
(If you are wondering why anyone needs 12 streams of video - I do motorsport event coverage, circuit racing mostly)
[Chris Simpson] "Now here's my simple maths, forgive me if I'm too simple: 12 streams of ProRes LT at ~85Mb/s (10.6MB/s) each comes to 1020Mb/s, or about 128MB/s. The AJA System Test is giving me average write speeds around 700MB/s and read speeds around 420MB/s, so the pipe, if my simple maths/understanding is right, ought to be wide enough. Which leaves me puzzling over the inconsistent speed over the course of the AJA test."
This sort of math doesn't hold up with RAID controllers. Their performance on a drag-race test like AJA doesn't equate to their performance when running 10 streams of something low-bandwidth. It's also true that you need fairly reliable, stable, low-latency response through the entire run, not just an overall number that gives you good bandwidth. For example, your RAID could glitch for half a second during the test and still pump out 750MB/sec, but you would certainly drop frames during that glitch. (We've profiled older variants of Areca and seen this behavior. I've not looked at the newer cards.)
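The glitch point can be made concrete with some illustrative numbers (these are made up for the sake of the example, not measurements of any particular card): a half-second complete stall barely dents the average, but it wrecks playback.

```python
# Why an average-throughput number hides glitches: model a benchmark run
# where the array stalls completely for half a second.
# All figures here are illustrative, not measurements of a specific card.

test_seconds = 60.0
steady_rate = 750.0          # MB/s while the array is behaving
stall_seconds = 0.5          # one complete stall during the run

data_moved = steady_rate * (test_seconds - stall_seconds)
average_rate = data_moved / test_seconds
print(f"average over the run: {average_rate:.2f} MB/s")   # ~743.75 MB/s

# But playback cares about latency, not averages: at 25 fps the player
# needs a new frame every 40 ms, so a 0.5 s stall drops about:
fps = 25
dropped = stall_seconds * fps
print(f"frames dropped during the stall: {dropped:.1f}")  # ~12.5 frames
```

So an array can report a near-identical average to a healthy one and still be unwatchable in a multiclip; it's the worst-case latency across the whole run that matters.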
Have you worked with Areca support to see if they can help improve this number? There are a number of vendors (like Small Tree and Maxx) that sell systems that have been profiled and tuned to do this kind of work off the shelf.
CTO, Small Tree Communications
Thanks for the reply, Steve. I didn't really think my simple maths would be how it works, although I did feel the higher the number, the more indicative it would be of its capability, in much the same way as racecar horsepower is not the whole story on top speed, but it's fair to say you'd expect a Corvette motor to give a car a higher top speed than a motor out of a Hyundai.
And moreover, historically this Mac Pro and enclosure have given 750+ on both write and read, and a nearly flat trace.
I was hoping that someone might recognise this trace for what it is; it's distinctive and nearly always the same: the earthquake on read speeds and a distinct kink in write. Although (maybe being too simple again) I feel the earthquake on the read trace is where the heart of the problem is.
As to vendors, I'm based in the UK. The specialist vendor I have worked with is very good: he replaced the first Highpoint card at his own expense, is always available at the end of a phone, and is in every way superb, but I feel he has run out of ideas on this one.
I need a sort of diagnostic service, which would need to work via email and/or something like TeamViewer. While more IT-savvy than most people I know, I'm an editor, and by no measure an IT expert.
[Chris Simpson] "I did feel the higher the number, the more indicative it would be of its capability"
This is usually not the case. In fact, I've found that some of the fastest drag-race cards are the worst multistream performers. Engineers optimize for the big marketing number at the expense of a more random-looking load. When we first got into the storage space, we were testing one vendor's card (the fastest we were aware of) and it could not push one ProRes stream reliably. It glitched too often.
I would call Areca on this. I know they have tuning options for their cards, and I know they've been working on that sort of performance. They should be able to help.
CTO, Small Tree Communications
I'm with Proavio/Enhance Technologies. While it's good to see that you went with the Areca 1882x card, we do not recommend using WD Caviar Green drives in our arrays; using them lends itself to poor performance. I'm going to check with our lead technician, Luis Rodriguez, and have him follow up. You are of course free to call or email us as well.
Vertical Sales Manager
12221 Florence Ave.
Santa Fe Springs, CA 90670
Main: 562-777-3488 X106
See our forum: http://forums.creativecow.net/proavio
I was surprised when I took the drives out to find they were Caviar Greens. I knew they were WD, from the RAID web GUI or Disk Utility, but was surprised, as it was not my decision (I don't put a lot of stock in the green version of anything, be it wind farms that don't work or electric cars: how green is that charge generated, or those exotic battery materials?). It was the RAID vendor's decision, and to be fair they have worked well for a couple of years, until about 4-5 months ago, when the 2nd Highpoint card was removed (and I'm beginning to wonder if that card may in fact be fine). So it begs the question: which drives would you be happy to see in a Proavio RAID? I may as well upgrade to 2TB drives, so Caviar Black (as Shane Ross suggested in the FCP forum)? But I've been reading around today, and read that there may or may not be, depending on who you read on the web, a TLER issue with consumer WD drives, so I should go RE4 but not RE4-GP, or Hitachi Ultrastar, or maybe, maybe not, Deskstar, or Seagate, not Barracuda but enterprise Constellation, etc. I read and read, and there's a plethora of recommendations, negative reports and technical jargon that washes back and forth over the plain "I'm the driver, not the mechanic" in me.
So if you have an opinion, as a rep of the enclosure manufacturer (and I appreciate other drives are available), what drives would you recommend from experience and customer feedback? (I'll email if it's at all tricky.)
Areca has a US office for its US customers and, according to the website, a worldwide one in Taiwan for the rest. While I'm happy to contact either, timezones (12 hours from the UK) and potentially (although I doubt it) a language barrier with Taiwan at least make it difficult to call Areca. I'll email, though.
Moreover thank you both for your time.
I'm feeling it isn't the card, as the problems pre-date it, although I appreciate tuning it might help. I'm still thinking it's either drive-related or (at a stretch) Mac Pro-related (although the internal RAID seems fairly quick and fairly consistent on test). It's just annoying; I'm sure there's a rapid and reliable RAID trapped in there trying to get back out for business, but it's proving an expensive issue to work through!
Chris, that does not look good. Apart from using Green drives, which are not recommended for RAID use, there could be something else causing the performance issues.
Which PCI-e slot is the Areca card installed in?
Drop me an email, I have a few tweaks that may improve your situation. For the moment I would recommend creating two 4-drive RAID sets and volumes. Compare the performance between the two volumes and see if the numbers are consistent. From there we can try to isolate the issue and determine if we have a potential problem with one or more drives.
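As a rough way to compare the two 4-drive volumes between AJA runs, a simple sequential write/read timing script can give ballpark numbers. This is a sketch, not a calibrated benchmark; the `/Volumes/...` paths are placeholders to be replaced with your actual mount points, and the read figure can be inflated by the OS cache unless the test file is much larger than installed RAM:

```python
# Crude sequential-throughput check for comparing two volumes.
# Sketch only: paths below are placeholders, and small test files
# may be served from the OS cache on the read pass.

import os
import time

def throughput_mb_s(path, size_mb=1024, chunk_mb=8):
    """Write then read a test file on `path`; return (write, read) in MB/s."""
    test_file = os.path.join(path, "speedtest.tmp")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    n_chunks = size_mb // chunk_mb

    t0 = time.perf_counter()
    with open(test_file, "wb") as f:
        for _ in range(n_chunks):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # force the data out to the disk
    write_speed = size_mb / (time.perf_counter() - t0)

    t0 = time.perf_counter()
    with open(test_file, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    read_speed = size_mb / (time.perf_counter() - t0)

    os.remove(test_file)
    return write_speed, read_speed

if __name__ == "__main__":
    # Placeholder mount points -- substitute the names of your two RAID sets.
    for volume in ("/Volumes/RAIDSET-A", "/Volumes/RAIDSET-B"):
        if os.path.isdir(volume):
            w, r = throughput_mb_s(volume)
            print(f"{volume}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```

If one 4-drive set is markedly slower or more erratic than the other, that points at a drive in that set rather than the card or the Mac.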
12221 Florence Ave.
Santa Fe Springs, CA 90670
Main: 562-777-3488 X109
That's very kind of you; I will be in touch. I'm happy to replace these green drives should they prove to be the issue; I was intending to upgrade from 1TB drives to 2TB soon anyway.
It's in slot 4. This is an old-ish Mac Pro 1,1, and to run x16 graphics in slot 1 and the RAID card at full speed at x8, the only profile that fit is the alternate PCI Express profile of 16, 1, 1, 8. My vendor was always keener to run the RAID controller in slot 2, but there just wasn't the lane capacity. I'd be happier with that myself, as slot 4 is under internal drive bays 3 & 4, which are often quite warm.
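For reassurance on the slot choice: the Mac Pro 1,1 slots are first-generation PCI Express, which carries roughly 250MB/s per lane per direction after encoding overhead, so even the x8 slot has far more headroom than twelve ProRes LT streams need:

```python
# Sanity check on slot bandwidth (PCIe 1.x: ~250 MB/s per lane
# per direction after 8b/10b encoding overhead).
lanes = 8
per_lane_mb_s = 250
slot_bandwidth = lanes * per_lane_mb_s
print(f"x{lanes} PCIe 1.x slot: ~{slot_bandwidth} MB/s each way")
# ~2000 MB/s -- well above the ~128 MB/s twelve ProRes LT streams need,
# so the slot itself shouldn't be the bottleneck.
```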
It's a very long public holiday weekend in the UK (our Queen's Diamond Jubilee), not that it matters, save that (unfortunately) I have a project too large to move that needs to be completed, and of course it's sat on the RAID. Although I'll have to soldier on, it should be complete by Sunday night. If it's alright, I'll email on Monday, if you don't mind.
Thanks again, and to Steve & Jon. A great place, The Cow!