G-SPEED eS Pro troubles - not rebuilding
I have the following RAID drive...
G-Speed eS Pro (G-Tech) RAID (mini SAS)
8 TB (4x2TB Hitachi Deskstar 7200rpm)
set up as RAID 5
RocketRAID 4322 PCI-E to SAS host adapter
I used the HighPoint Web RAID Management software to set up and monitor this drive.
Running this on a Mac Pro 5,1 (Quad Core Intel Xeon 2.4 GHz), running OS X 10.6.8.
I've been using this RAID for video editing for many years now, with little or no trouble. Although I do have all of the same video data - spread out across numerous backup discs and SD cards - I do not have a proper backup of the full RAID drive itself, including all my FCP projects, folders, etc.
Stupid. Yes, I know.
Recently, I started having trouble. Drive speed was way low. Crashes, dropped frames, and other playback issues.
I ran Disk Utility on it, but that didn't fix the problem. And the computer crashed trying to do AJA speed tests. And then things got much worse.
At the moment, the Configuration Utility shows a "critical" symbol next to Drive #1 and says it "needs to rebuild". Drive #1 is trying to rebuild, but it sits at "Rebuilding 0%" forever, with no progress.
The SMART values for that drive, Drive #1, are all "OK".
However, the SMART values for Drive #4 show a "FAILED" notice for "Reallocated Sector Count".
The only other issues I can see are that Drive #3 and Drive #4 have 247 and 202 "bad sectors found and repaired", respectively.
All four lights on the RAID enclosure are blue, indicating they're working normally. Clearly, they aren't.
I've tried to copy files to a different RAID (I have an empty 16TB eS Pro RAID, as well), but progress slows to a stop while files are being copied, and then it crashes the computer.
I've never had a drive in this array die on me before, so I've never done a hot swap to correct a dead drive. However, I have a second set of drives that used to live in this enclosure and were used for a project I no longer need.
Should I pull Drive #1, the drive that is trying to rebuild, and replace with a new one and let it try to rebuild the array with that new drive?
And if I do try that, and it fails to rebuild from the replacement drive, should I then put the original drive back in that bay, while I'm considering other ideas?
Do I need to format the replacement drive before swapping it?
Thanks in advance for any assistance. The data is very important to me. I've been in contact with a data recovery outfit. The quote for an attempt to fix things was far more than I have at the moment. I'm just trying to make sure I've done as much as I can before I have to go down that road.
Well, unless I am misreading what you wrote -
you state that drive #4 has failed, and that drive #1 is trying to rebuild.
This is never going to happen.
In a RAID 5 configuration, you can only have ONE drive fail before you lose your data. So if drive #4 failed, you should have replaced drive #4, and it would have rebuilt the RAID 5 array. But from your description it appears that drive #1 is trying to go into rebuild mode, which means drive #1 has also failed. It's not going to rebuild, because this is not a RAID 6 array (where you can survive two drive failures) - it's RAID 5, which means that if one drive fails (drive #4) and then a second drive fails, you are screwed, and you lose all your data. You should have replaced drive #4 when it failed.
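To make the fault-tolerance point concrete: RAID 5 stores one XOR parity block per stripe, so any single missing block can be reconstructed from the survivors, but two missing blocks cannot. A minimal Python sketch of the idea (illustrative only, with made-up block contents, not an actual RAID implementation):

```python
# RAID 5 in miniature: N-1 data blocks plus one XOR parity block per
# stripe. Losing one block is recoverable; losing two is not, because
# XOR of the survivors then contains two unknowns mixed together.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on drives 1-3
parity = xor_blocks(data)            # parity block on drive 4

# One drive lost: XOR the surviving data blocks with parity rebuilds it.
rebuilt = xor_blocks([data[1], data[2], parity])
assert rebuilt == data[0]            # single failure is recoverable

# Two drives lost: only two of the four blocks remain, and their XOR
# mixes the two missing blocks together - the stripe is gone for good.
```

That is why the rebuild stalls when a second drive is dying mid-rebuild: every stripe read from the sick drive is needed to reconstruct the replacement.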
Since you have had this array for a while with no issues, these drives are all the same age - and probably all at the end of their life, because they spin at 7200 RPM day after day, year after year, and ALL drives eventually fail - enterprise or not.
Having a data recovery company recover a RAID array will cost you more than an entire new RAID array. I fully understand you need your data.
Rescue 1, Inc.
Thanks for your response, Bob. I didn't explain that very well, sorry.
I don't know that Drive #4 ever failed. I'll include some pictures of what I'm seeing. Drive #4 has "FAILED" next to "Reallocated Sector Count" in the SMART window, but it's still mounted. My guess is it's in trouble, though.
Drive #1 is dead though, I think. As of this morning, it's no longer trying to rebuild. The GUI now shows it's offline. And for the first time today, the red warning light on the exterior case (for the entire RAID, not just the Drive #1 bay) was blinking and beeping.
I have a brand new, unformatted disk of the exact same make, size, and speed sitting here. I also have a set of four drives that were previously used as an array in this same box, but I don't need that data any longer. I've never hot swapped a dead drive before, so I don't know if it requires any formatting in advance, or if there's a benefit to using one over the other for hot swapping.
And as I've never done this before, I'm a little freaked out, and worried that I could somehow do more damage by trying to hot swap a new drive (in the #1 bay) if the rebuild doesn't work.
The issue with Drive #4 is that it shows 200+ "bad sectors found and repaired" (Drive #3 also has this problem), as well as the "Reallocated Sector Count" issue in the SMART window. But I don't know if that means the drive is dead. Yet.
I am confused -
you write -
I also have a set of four drives that were previously used as an array in this same box, but I don't need that data any longer.
So you don't care about the lost data? Then just throw out drive #1, put in the replacement drives, create a new RAID, and you are back in business.
Rescue 1, Inc.
No, no. I'm saying that, in addition to a single brand new drive I have on hand to replace any dead drives, I also have four spares that have already been used in another RAID. I was wondering if it matters whether or not the replacement drive is formatted in any particular way before hot swapping, or if a drive previously used in a RAID might confuse the controller.
In the meantime... I did a hot swap this morning. I pulled the bad drive out and replaced it with a brand new drive (never previously used or formatted). The light on the enclosure is blue, but the GUI says "Rebuilding 0%" and has made no progress for many hours.
Then you are screwed. It's not going to rebuild.
Rescue 1, Inc.
How did it go?
Mac pro 8core
several raid systems
The short story... the RAID is rebuilt and working.
The long story is beyond my ability to explain, but I'll see if my friend (who fixed it) can do so in this forum.
Basically, he used software to clone each of the four 2TB drives individually, bit by bit. Two of the drives cloned quickly and without trouble. A third was basically fried. And the fourth had serious issues, but after (literally) weeks of progress creeping along (and noting bad sectors), it managed to complete the clone.
With three fresh, cloned drives, the controller did the rebuild.
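The clone-then-rebuild approach above is the same idea behind dedicated rescue tools like GNU ddrescue: read the sick drive sector by sector, and when a sector won't read, log it and move on instead of aborting the whole copy. A rough Python sketch of that read-and-skip loop (the paths and sector size are placeholders, not the actual tool or devices used):

```python
# Sketch of sector-by-sector cloning that skips unreadable regions
# instead of aborting - the core idea behind rescue tools like GNU
# ddrescue. Real tools also retry, split bad regions, and keep a map.
SECTOR = 4096

def clone_with_skips(src_path, dst_path, size):
    """Copy src to dst sector by sector; zero-fill sectors that fail
    to read and return the list of bad offsets found."""
    bad = []
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        for offset in range(0, size, SECTOR):
            src.seek(offset)
            dst.seek(offset)
            try:
                chunk = src.read(SECTOR)
            except OSError:              # unreadable sector: log and skip
                bad.append(offset)
                chunk = b"\x00" * SECTOR
            dst.write(chunk)
    return bad
```

Once every drive has a clean clone, the controller sees healthy disks and the normal rebuild can run to completion - which is exactly what happened here.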
I'm scouring my FCP projects now to locate the bad data/corrupted files. There are some (including the VERY last file I was working with before the crash), but not much. The files that are turning up glitchy are easy enough for me to replace with my backups. As mentioned, I have all raw footage and other elements, but no backups of edited projects.
I also have a SECOND set of drives with a full backup of the RAID array now. In all likelihood, I'll set up the new array as RAID 6, in addition to backing up regularly.
I learned this lesson the hardest way possible.
It's good for everyone to read your story!
It's good you've learned your lesson.
Mac pro 8core
several raid systems