Recent Posts

Pages: [1] 2 3 ... 10
1
General Discussion / Re: Advice: 2 weeks and no data, what now?
« Last post by stevatron on April 24, 2017, 10:33:14 pm »
@Brahim
This thread clearly outlines my issues, and I believe I've posted all of the necessary details including screenshots
http://forum.flexraid.com/index.php/topic,49250.0.html

I most certainly wasn't "holding off pertinent details" - and would be delighted to share any details required to get my array back up and running.

More specifically - I have asked a very specific question, twice, in the mentioned post (i.e., should I restore my drive with "restore into current drive"?), and have yet to receive an answer.  This scenario is not documented, so there is no indication of what is being restored onto the current drive.

The only mention of the "restore into current drive" scenario is this post (http://forum.flexraid.com/index.php?topic=5004.0) - although it doesn't assist in describing this scenario.

I would appreciate it if you, or any other community members, could clarify this scenario for me.
(It could even be a great article for the Knowledge Base)
2
That would probably explain it then, as I only had an Update operation and a Verify operation scheduled to run regularly.  I must have misinterpreted some configuration guides when I first set up the volume.  I'll have to go through it again and make sure everything is properly scheduled so that I don't have an issue if this happens again.  Thanks so much for your guidance! The data wasn't critical and I mostly have the parity drive in there for restore convenience, but next time it should come in handy if I have a failure. Much appreciated.
3
The Validate task is there to validate the state of the data and parity files (checksum) while the Verify task is there to ensure the RAID is in sync (bit for bit verification).

With the parity not being valid, the restored files would mostly be corrupted.
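As a rough mental model of that distinction (an illustrative Python sketch, not FlexRAID's actual implementation; the function names and the use of SHA-256 and XOR here are assumptions):

```python
import hashlib

def validate(chunk: bytes, stored_checksum: str) -> bool:
    # Validate-style check: compare against a stored checksum
    # (cheap; detects that a data or parity block has changed).
    return hashlib.sha256(chunk).hexdigest() == stored_checksum

def verify(data_chunks: list, parity_chunk: bytes) -> bool:
    # Verify-style check: recompute parity from all the data and
    # compare bit for bit (expensive; proves the RAID is in sync).
    recomputed = bytes(len(parity_chunk))
    for chunk in data_chunks:
        recomputed = bytes(a ^ b for a, b in zip(recomputed, chunk))
    return recomputed == parity_chunk
```

Validate answers "do my blocks still match their recorded checksums?", while Verify answers "does the parity actually reflect the current data?", which is why restoring from out-of-sync parity yields corrupted files.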
4
General Discussion / Need to add new hard drive
« Last post by ks-man on April 24, 2017, 07:54:42 pm »
I've been using my media server successfully for the past 3-4 years without issue.  It currently uses four 3tb WD Red drives (3 for data and 1 for parity).  I only have about 700gb of the 9tb remaining, so I'm going to add another hard drive to my pool soon.  At this point I see there are 4tb, 6tb, and 8tb drives.  The 4tb drive is only slightly higher in price than the 3tb drives on Amazon, and then there is a bit of a jump to 6tb and 8tb.

I thought I read somewhere a couple of years ago that once you start using higher capacity drives (above 2-3tb) the failure rate goes up significantly.  Is this still true?  Should I just add a 3tb drive and stay consistent?  I can go up to 4tb drives for just a few bucks but I wouldn't want to do it if the reliability isn't as good.  I also am considering paying for either a 6 or 8tb as the premium isn't that large compared to the added storage.  I'm not sure how much more room my server has for drives so I might only be able to put in another 2 or 3 at most.

I realize that the largest drive will become the parity drive, so I'll only gain 3tb if I buy just one new HD.  However, I might buy two, or I may want to future proof so that the next time I buy a new drive I'm actually adding more capacity, assuming the reliability isn't any different.

Also, since I haven't messed with this in a long time can somebody direct me to a guide or forum post on the process to add a new drive?  I also need to know the process of changing the parity drive assuming I go with a 4tb or larger drive.

Thanks a lot!
5
Not that I'm aware of, but honestly I wouldn't know what the possible actions might be.  I did try to move the parity drive from its USB enclosure to a native SATA connection when I thought I would have to do the restore, thinking it would be faster, but when I did so it showed as missing right away so I simply put it back to USB.  Other than that I suppose an Update task could have been interrupted at some point when I had a power outage a couple of months ago--but other than those instances I'm not sure what else could have happened.

If I cannot perform the restore, will I need to do something going forward to make sure the parity is correct afterward--perhaps rebuild the parity somehow?

Thank you so much for the assistance thus far.
6
Then it is failing on the parity data being potentially invalid.
Is there any action you might have taken that would explain the changes to the parity data?
7
It appears to be; does this look correct?

And I don't see any other drives listed as missing/failed (DRU1 is now the replaced good disk)

Do I have any recourse at this point?  Thanks for the reply!
8
1. Navigate to C:\FlexRAID-Managed-Pool\class1_0\{a3247b83-2034-40fe-875b-ae2c30092bfd} and confirm that it is mounted to a disk.

2. It appears that both disk 1 and disk 5 have failed.

3. Your parity data is also failing checksum.
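For step 1, the mount check can be scripted as well as done by eye. A minimal sketch (assumes Python is available on the affected machine; `os.path.ismount` recognizes NTFS volume mount points on Windows):

```python
import os

def check_pool_mount(path: str) -> bool:
    # True only if the directory exists and a volume is mounted on it.
    return os.path.isdir(path) and os.path.ismount(path)

# Hypothetical usage with the pool path from step 1:
# check_pool_mount(r"C:\FlexRAID-Managed-Pool\class1_0\{a3247b83-2034-40fe-875b-ae2c30092bfd}")
```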
9
General Discussion / Re: Advice: 2 weeks and no data, what now?
« Last post by Brahim on April 24, 2017, 10:40:33 am »
@stevatron
You've still posted nothing relevant to your issues. How do you expect to get any help?
The restoring process is well documented. Anytime I read of someone being confused, it is always tied to them holding off pertinent details.

How confusing is it to click on a button that says "restore" and select the drive to restore to?  ???
Another option you have is paid premium support.
10
I had a single drive failure in my 5 disk Raid-F Snapshot array.

I performed a swap and restore under the Drive Manager; the new replacement disk was swapped, but the restore failed with the message "Too many failed devices! Failed=2" even though only a single disk had failed.  I then re-attempted the restore under the Command Execution Center by choosing the Restore section and the new empty DRU1, and I received the same error ("Too many failed devices! Failed=2").

Attached is the log with TRACE enabled.  Any suggestions as to how to proceed? I would very much like to perform this restore if possible, even if it is only partial.
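For context on that error: with a single parity drive there is one XOR equation per byte position, so the engine can solve for exactly one missing drive. Two failed devices means two unknowns, and the restore has to bail out. A simplified sketch (plain XOR parity is an assumption here; RAID-F's actual on-disk math may differ):

```python
from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length byte blocks together, column by column.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def restore_missing(surviving_data, parity):
    # Rebuild the single missing data block: XOR of the parity and
    # every surviving data block.  With two blocks missing, this is
    # one equation with two unknowns, hence "Too many failed devices".
    return xor_blocks(surviving_data + [parity])
```

This is also why a clean restore depends on the parity being up to date: the reconstruction blindly trusts whatever is in the parity block.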