G-Speed Q RAID config software
Just wondering if anyone has been able to find software for the G Speed Q to change the RAID from 5 to 0?
I tried the config software from their site, but as it says, it's for 10.6 and earlier only, and it didn't work when I tried it on Mountain Lion as a shot in the dark, since I couldn't find any other software to use.
So I thought I'd put it out here to see if anyone knows of a solution... Disk Utility considers the RAID 5 one drive and doesn't currently see it as a RAID.
Brand New Website UP! - scottkdouglas.com
[Scott Douglas] "I attempted to use the config software from their site, but as it say it's only for 10.6 and earlier, which didn't work when I attempted on Mt. Lion"
What was the error message if any?
Because the software is only a "messenger" that communicates with the "brain" on the G-Speed Q via USB, it may not be available from anywhere else. What I'd do is take the Q to any other system (10.6 or Windows) and run the software on it.
Or, run Windows trial in Bootcamp on your Mac.
I am sure a local computer shop would be willing to do it for you for next to no money. Once the Q is in RAID 0, it'll stay that way, and you can reconnect it to your system.
DV411 - Los Angeles, CA
Why on earth do you want to change from RAID 5 to 0?
RAID 0 is hara-kiri in case of a drive failure.
Mac Pro 8-core
several RAID systems
With high-density consumer drives, even RAID 5 is at least flirting with disaster. Either the drives don't support configurable error timeouts, or most RAID systems don't set the drive error timeout to an appropriately low value, thereby ensuring an accumulation of uncorrected bad sectors and UREs. When one drive dies, any URE encountered during the rebuild will cause the array to collapse, and this is now quite common.
In fact with enough drives, RAID 0 can be more reliable statistically than RAID 5 when the arrays are the same usable size. That one additional disk required for a RAID 5 tips the probability against RAID 5. Really it should only be used with smaller density nearline and enterprise drives, and only with a consistent backup strategy. It's why RAID 6 and 10 have become so much more popular with big data applications.
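The claim above can be sketched with a toy probability model. Every number here is an assumption for illustration (a 5% annual per-disk failure probability, 8 TB drives, and the common consumer-drive spec of one URE per 10^14 bits read), and the model ignores second-disk failures during rebuild, which would only make RAID 5 look worse:

```python
def p_any_fail(n, p):
    """Probability that at least one of n independent disks fails."""
    return 1 - (1 - p) ** n

def p_ure_on_rebuild(surviving_disks, tb_per_disk, ure_per_bit=1e-14):
    """Probability of hitting at least one URE while reading every surviving
    disk end-to-end during a RAID 5 rebuild. 1e-14 errors/bit is a common
    consumer-drive spec-sheet figure -- an assumption here."""
    bits_read = surviving_disks * tb_per_disk * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

p = 0.05   # assumed annual failure probability of one disk
tb = 8     # assumed drive size in TB

# 5-disk RAID 0 collapses if any disk fails.
raid0_loss = p_any_fail(5, p)
# 6-disk RAID 5 (same usable size) is lost if a disk fails AND the rebuild
# then hits a URE on one of the 5 surviving disks.
raid5_loss = p_any_fail(6, p) * p_ure_on_rebuild(5, tb)

print(f"RAID 0, 5 disks: {raid0_loss:.3f}")   # ~0.23
print(f"RAID 5, 6 disks: {raid5_loss:.3f}")   # ~0.25
```

With these assumed numbers the RAID 5 loses data slightly more often than the same-capacity RAID 0; shrink the drives to 2 TB and the comparison flips back in RAID 5's favor, which is the density point being made here.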
Most likely, for the OP, his application involves lots of small to medium file writes, which are penalized by read-modify-write (RMW) on RAID 5/6 but not on RAID 0. Or maybe he's instituting daily (or hourly) replication, so that the RAID 0 can be used locally with minimal data loss should the array collapse.
[Chris Murphy] "In fact with enough drives, RAID 0 can be more reliable statistically than RAID 5 when the arrays are the same usable size"
Chris - any studies in favor of that statement?
It's a pretty straightforward probability estimate; I don't think a study is needed. Most people familiar with RAID 0 know that the probability of total array collapse increases with the number of disks.
Assume identical model drives for the following:
The probability of a disk failure in a 6-disk array is greater than in a 5-disk array. The more drives in an array, the greater the chance of a disk failure. This is an unremarkable observation.
Even comparing a 6-disk RAID 5 with a 5-disk RAID 0, the per-disk failure probability is unchanged, but the RAID 5 array has a higher probability of encountering a one-disk failure because of the additional drive. Technically the disks in the RAID 5 work harder than those in the RAID 0, due to RMW, but whether that has a significant statistical impact on reliability would be up to a study to demonstrate. It's a fair comparison because the usable storage is the same for both arrays.
Now compare a degraded 6-disk RAID 5 with a healthy 5-disk RAID 0. The RAID 5 is at a distinct disadvantage now if a bad sector is encountered. A URE on RAID 0 is a near non-event: maybe a file is corrupted, or the file system needs repair. On a degraded RAID 5, a URE during rebuild means the array collapses. Even if you have the specialized knowledge to recover the data, the purpose of RAID 5, uptime, is compromised in this scenario.
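The rebuild arithmetic behind "a URE means collapse" can be sketched, assuming the common consumer spec of one URE per 10^14 bits read; the drive sizes are illustrative:

```python
URE_PER_BIT = 1e-14  # assumed consumer-drive spec: one URE per 10^14 bits read

def p_clean_rebuild(surviving_disks, tb_per_disk):
    """Chance a degraded RAID 5 rebuild reads every surviving disk
    end-to-end without a single URE (one URE aborts the rebuild)."""
    bits_read = surviving_disks * tb_per_disk * 1e12 * 8
    return (1 - URE_PER_BIT) ** bits_read

# A 6-disk array rebuilding onto a replacement reads 5 surviving disks in full.
for tb in (1, 2, 4, 8):
    print(f"{tb} TB drives: {p_clean_rebuild(5, tb):.0%} chance of a clean rebuild")
```

Under these assumptions the odds of a clean rebuild fall from roughly two in three with 1 TB drives to about one in twenty-five with 8 TB drives, which is why a degraded high-density consumer array so often never comes back.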
This is why so many data storage companies proscribe RAID 5 with consumer drives.
Of course, for the OP, he's presumably converting from RAID 5 to RAID 0 with the same number of disks, increasing usable capacity, and increasing the probability of array failure. If there are other ways to mitigate this, it may still be an acceptable trade off.
Too many assumptions for my taste:
- where the significance of a URE is concerned (I don't think there is any difference between a degraded RAID5 or RAID0 in that respect),
- assigning a lot of weight to the probability of RAID5 going (and staying) degraded.
You either need to make bullet-proof assumptions and draw conclusions from those, or actually find a study from a reputable source comparing RAID 5 and RAID 0 reliability based on sound statistics.
My hunch? Unless individual disks' reliability is horrendous, sending RAID 5 into degraded mode all the time, RAID 5 is vastly more reliable per GB of space per hour of service than RAID 0.
And if you have a hot spare? Almost as good as RAID6.
DV411 - Los Angeles, CA
RAID 5 offers redundancy, RAID 0 does not. My assertion isn't that RAID 0 is always more reliable than RAID 5; the assertion is that it can be. The assumptions I've offered for that case are reasonable ones, and they also aren't the ones you're complaining about. E.g., I'm assuming high-density consumer drives, which have orders of magnitude higher URE rates than enterprise SAS drives, and you haven't complained about that.
The assumptions you have a problem with:
There is a significant difference between RAID 5 and RAID 0 with URE. A degraded RAID 5 array will collapse, while a RAID 0 array won't.
I've made no statement that inflates the weighting of RAID 5 going degraded, or that assumes it would stay degraded. It's a fact that a 6-disk RAID 5 has a greater probability of going degraded due to a drive failure than a 5-disk RAID 0 has of collapsing. Naturally, because of that higher probability, I'm exploring the outcome of a degraded RAID 5 versus a normally functioning RAID 0 in the face of a URE. That's highly relevant to understanding the differences between the two.
If you look at the overall probabilities, RAID 5 isn't hugely better or more reliable than RAID 0, when used with high density consumer HDDs, and can actually be less reliable. The point being, people who need redundancy should look to RAID 6 or 10 with conventional file sharing. A distributed file system implementing synchronous replication between RAID 0 arrays is also reasonable.
As for RAID 5 with a hot spare being almost as good as RAID 6, I think that's absurd. A degraded RAID 5 is not URE tolerant; a single-disk-degraded RAID 6 is. They're as similar as a single-engine airplane and a twin in the face of an engine failure.
Already six years ago, NetApp asserted that "protecting online data only via RAID 5 today verges on professional malpractice." I'm amused just imagining what they'd call it today.
I was not questioning the disadvantages or benefits of RAID 5 on high-density consumer drives; there are enough turf wars about that out there already.
All I asked was to support your statement of:
[Chris Murphy] "In fact with enough drives, RAID 0 can be more reliable statistically than RAID 5 when the arrays are the same usable size."
...yet all you offered was more assumptions that you consider reasonable.
So I'm gonna bow out of this conversation.
It's an annoying debate style to throw rocks, claim a miss is a hit, and then duck for cover so you can't be asked questions any more.
I didn't add more assumptions. I did support the statement with a more detailed explanation. I also refuted your assumption that RAID 5 and RAID 0 behave the same in the face of a URE, a significant factor supporting my statement, as well as the words you put in my mouth that I assumed the array would stay degraded. (My argument is weakened if the array is allowed to stay degraded; the likelihood of collapse increases while the array is being rebuilt.)
You've essentially asked for a study that supports 2+2=4, or that "the more drives added to a RAID 0 array, the higher the probability the array will collapse." For one, such a study would be expensive and take a long time; real drives of the same model, even from the same batch, are not identical enough for repeatable experimentation. You'd end up with noisy data, inaccurate results, and misleading conclusions. That's why there's statistical analysis.
The flawed assumption here is embedded in the most common comparative description of RAID 5 and RAID 0: "If a drive fails, RAID 5 survives, while RAID 0 collapses." It's a true statement, but it contains a false assumption: the two arrays, of equal usable size of course, do not have an equal chance of one drive failing.
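The arithmetic behind that unequal chance is short; the 5% annual per-disk failure probability below is purely an illustrative assumption:

```python
p = 0.05                    # assumed annual failure probability of a single disk
p_raid0 = 1 - (1 - p) ** 5  # at least one of 5 disks fails -> RAID 0 collapses
p_raid5 = 1 - (1 - p) ** 6  # at least one of 6 disks fails -> RAID 5 degrades

# The 6-disk array loses a drive more often than the 5-disk one.
print(f"{p_raid0:.3f} vs {p_raid5:.3f}")  # ~0.226 vs ~0.265
```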
Perversely, you allow the common and incorrect assumption to pass by without scrutiny, while denying me the exploration of the statistically more likely scenario by way of demanding a study, and opining that only I find exploration of this scenario reasonable. Why do you find it unreasonable?
Alas, Google demonstrates yet again that my assertion isn't an original one. This is not a study.
There are ways to mitigate the likelihood of a URE with RAID 5 and improve its reliability considerably, even when using consumer SATA drives. But that means having compatible error timeouts among the drive, the controller, and the block-device layer in the OS. This is non-obvious, which is why it's simply better not to use consumer drives in RAID 5 (or 1, 6, or 10, for that matter).