Author Topic: Another Upgrade another configuration fail  (Read 4142 times)

Offline MasterCATZ

  • Jr. Member
  • **
  • Posts: 64
  • Karma: +0/-0
    • View Profile
Another Upgrade another configuration fail
« on: April 20, 2016, 01:33:17 am »
Without starting from scratch again, any idea how to repair the configuration?

There are a few things that just aren't making any sense.

The HDDs should be mounted at:
/mnt/3tbHDD1DRU1
/mnt/3tbHDD2DRU2
/mnt/3tbHDD3DRU3
/mnt/3tbHDD4DRU4
/mnt/3tbHDD5DRU5
/mnt/3tbHDD6DRU6
/mnt/3tbHDD7DRU7
/mnt/3tbHDD8DRU8
/mnt/3tbHDD9PPU1
/mnt/3tbHDD10PPU2

Not only is FlexRAID picking them up as missing, it also picks up the same drive multiple times in the not-pooled section.

And if I try to import from the PPU, that does not work either.

However, I had a little success: I finally got ZFS RAID-Z2 running
and retrieved some files from my 2015 HDD image backup.
Some of the drives are now picked up after copying
/root/FlexRAID-Managed-Pool/class1_0/

Unfortunately, I literally wiped my backup boot drive a few weeks ago,
though I do believe I have its image on my ZFS RAID-Z drive, which is currently broken as well... which is why I installed Ubuntu 16.04 over my 14.04.


If you could guide me to any files I could retrieve from the backup ISO, I will try it (if I can get ZFS running again...).


« Last Edit: April 20, 2016, 02:38:55 am by MasterCATZ »

Offline Brahim

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 8,547
  • Karma: +204/-16
    • View Profile

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #2 on: April 20, 2016, 10:14:53 pm »
I cannot even try that, because it's not even picking up all the drives,

even though all are mounted and accessible, and some are picked up in the not-pooled section... but not showing in the swap-out section...

Offline Brahim
Re: Another Upgrade another configuration fail
« Reply #3 on: April 20, 2016, 11:25:07 pm »
You need to mount all disks you wish to use.
If you look at your screenshots carefully, you will see that it is picking exactly those disks that are mounted.

Do not mount them permanently in fstab. Just mount them for the session.
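A session-only mount, as suggested, might look like this (a sketch only; the device name and mount point are placeholders, not your actual layout):

```shell
# Mount a data drive for the current session only (no fstab entry).
# /dev/sdb1 and /mnt/3tbHDD1DRU1 are placeholders.
sudo mkdir -p /mnt/3tbHDD1DRU1
sudo mount /dev/sdb1 /mnt/3tbHDD1DRU1

# Mounting by filesystem UUID avoids /dev/sdX names shuffling between boots:
sudo blkid -s UUID -o value /dev/sdb1   # prints the UUID
sudo mount UUID=<uuid-from-above> /mnt/3tbHDD1DRU1
```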

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #4 on: April 21, 2016, 01:10:17 am »
Cheers for the reply, slowly making progress.

It worked for some drives and not for others.

Going to try again after uninstalling FlexRAID so I can remove the nuked /root/FlexRAID-Managed-Pool/class1_0 folder.
(I am just hoping I did not delete any files on the HDDs. When I last tried deleting the folder, I noticed my HDD lights light up like a Christmas tree, whoops.) Any way to use a PPU drive to compare file contents to replace missing files?

When I removed drives from fstab, it was mounting them there, but FlexRAID was still not picking them up. However, if I unmounted and remounted somewhere I wanted, it picked some of them up.

Wish I knew why it's only some drives when they all have the same settings...

I know it worked with drives in fstab, but those entries might have been added after it was all set up last time.
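For checking which of the expected mount points are actually active (independent of what FlexRAID reports), a minimal sketch that only assumes Linux's /proc/mounts:

```shell
#!/bin/sh
# Report whether each directory is an active mount point by scanning
# /proc/mounts, which lists one "device mountpoint fstype options" line
# per mount. The /mnt paths below are the ones from this thread.
is_mounted() {
    awk -v target="$1" '$2 == target { found = 1 } END { exit !found }' /proc/mounts
}

for m in /mnt/3tbHDD1DRU1 /mnt/3tbHDD2DRU2 /mnt/3tbHDD9PPU1; do
    if is_mounted "$m"; then
        echo "$m: mounted"
    else
        echo "$m: NOT mounted"
    fi
done
```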


Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #5 on: April 21, 2016, 01:40:18 am »
Nope, also a no-go.


The drives are mounted, all with the same manual configuration under /mnt/path... only a few are picked up, and it's random drives...

If I have them set to auto-mount under /media/username/path, FlexRAID picks up none of them (assuming it's because they are mounted for a non-root user?),
yet it picks up other drives mounted there.


Any ideas on how to find the cause?


« Last Edit: April 21, 2016, 02:28:56 am by MasterCATZ »

Offline Brahim
Re: Another Upgrade another configuration fail
« Reply #6 on: April 21, 2016, 05:44:01 am »
Which version of Ubuntu is this?

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #7 on: April 21, 2016, 11:09:21 am »
Ubuntu 16.04 LTS
however, it was an upgrade, not a fresh install


What's got me is why only some are getting picked up as unpooled; the best I can get is 5 out of the 10...

It definitely only picks up drives in fstab. I am pretty sure this is what I discovered last time as well.

Giving up... fresh OS install and try again (hopefully I can still use my old home partition).

If not, I'll roll back to an old backup.


Testing on a live CD first:

 Unable to locate package sysv-rc-conf



But yet again, none of the FlexRAID drives are showing, yet other mounts are showing.
Could there be something stored on the partition causing it?

It seems like "GUID Partition Table" is the issue, because "Master Boot Record" drives are showing  :-\
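To compare what the system itself detects per drive (partition-table type vs. filesystem type), something like this could help; the blkid device name is a placeholder:

```shell
# PTTYPE shows "gpt" or "dos" (MBR); FSTYPE is what libblkid detects,
# which is what most tools trust when deciding how to treat a drive.
lsblk -o NAME,PTTYPE,FSTYPE,SIZE,MOUNTPOINT

# Per-device view of the same detection (placeholder device):
sudo blkid /dev/sdX
```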


Still going to push on with a fresh OS install.



« Last Edit: April 21, 2016, 10:50:55 pm by MasterCATZ »

Offline Brahim
Re: Another Upgrade another configuration fail
« Reply #8 on: April 21, 2016, 02:11:41 pm »
Quote from: MasterCATZ
Ubuntu 16.04 LTS

What's got me is why only some are getting picked up as unpooled; the best I can get is 5 out of the 10...
Hum... I personally have not tested 16.04 LTS.
Please add an entry to bug.flexraid.com and I will take a look. There might be something specific to the new OS that needs a different way to handle things.

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #9 on: April 22, 2016, 12:41:24 am »
It must be a "GUID Partition Table" / "Ubuntu 16.04" issue.

Fresh OS, and now it cannot pick up ANY of them; with the upgraded OS, at least some of them were picked up.


I have less than an hour to get our multimedia running again for the weekly shows, so I'd better get stuck into that.
At least we can still browse the files manually.

When I have time, I will whip up a virtual machine, do some tests, and then log the bug.


Any way to manually add them via folder path, like you can do when importing an existing configuration by providing the PPU1 path?
So you don't have to rely on FlexRAID seeking out mounted drives?


*edit*

Still no idea what's going on; I cannot replicate it.

I created a virtual machine, used GPT partition tables, and mounted the drives.

FlexRAID picked these up.

So why can I not get it to pick up the pre-existing drives that FlexRAID used to use?

*edit* Made an interesting discovery:

Half of the drives are being picked up as btrfs in GParted when they are ext4.
From memory, I first tried btrfs as I wanted better bit-rot detection and file tools,
but went back to ext4 due to the no-free-space bug and poor speed.

DRU2 was the only drive the live CD picked up.

Meanwhile, GParted reports these drives as btrfs with no partition tables:
DRU5
DRU6
DRU7
DRU9

I just noticed the live CD had btrfs tools installed while the installed OS did not, so we'll see what happens.
...
But even that makes no sense, as even one of the so-called btrfs drives, DRU5, was picked up this time.

However, the DRU5 drive is also missing the _flxr_ folder that the other drives have?
Sadly, I guess this means I did in fact lose 6% of my data from that drive when I went to remove the FlexRAID mount points (it is at 90% and the rest of the used drives are at 96%).

Which has me really needing to get this sorted so I can attempt to recover data from the PPU.

I am wondering if there is anything FlexRAID looks for that would keep the drives from showing in the unpooled section?


« Last Edit: April 22, 2016, 05:59:40 pm by MasterCATZ »

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #10 on: April 23, 2016, 04:42:58 pm »
Still got me beat. I even used 14.04 and 15.x live boot disks, and they would not pick up my FlexRAID drives at all,

yet any other drive I mounted after FlexRAID had been started worked.

For now, Ubuntu 16 has been randomly picking up drives mounted at boot.
After 100+ reboots I only had one left to mount / swap out, DRU4,
but on reboot the ones that were swapped out were back to showing missing.

Whatever it is, it's something on the partitions preventing FlexRAID from detecting these drives.

I recovered the files I accidentally deleted with R-Studio.


root@aio:~# fsck /dev/sde
fsck from util-linux 2.27.1
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/sde

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>


root@aio:~# fsck /dev/sdh
fsck from util-linux 2.27.1
If you wish to check the consistency of a BTRFS filesystem or
repair a damaged filesystem, see btrfs(8) subcommand 'check'.


Still no idea why the partitions are reporting two different types when they should all be ext4. I might just have to start them again from scratch, but I would like to know why a perfectly working system stopped working after an OS update...

I might try matching them up in fstab as ext4 / btrfs and see what happens.
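One known cause of this symptom (an assumption about this setup, not something confirmed here) is stale filesystem signatures: older mkfs.ext4 versions did not erase a previous btrfs superblock, whose magic sits at a 64 KiB offset, so libblkid-based tools (blkid, GParted, fsck's type detection) can report a drive as btrfs even after it was reformatted as ext4. wipefs can list every signature without touching the drive:

```shell
# List (but do not erase) every filesystem signature libblkid can find
# on a drive. /dev/sdX is a placeholder for each suspect DRU.
sudo wipefs -n /dev/sdX

# If a stale btrfs signature shows up alongside ext4, it can be erased by
# the offset wipefs reports (the btrfs magic is normally at 0x10040).
# Double-check the printed offset before erasing anything:
# sudo wipefs -o 0x10040 /dev/sdX
```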

*edit*

The only thing left is to see if FlexRAID-2.0-Final_u11b.bin works,

as I know FlexRAID-2.0-Final_u12b.bin gave me problems last time.

*edit*

Now I cannot get any old FlexRAID drive to show when mounted.

I am at the stage where I think my only solution is to blank out a PPU drive, copy DRU content onto it, rinse, repeat, and start again.

I really do wish I knew what FlexRAID writes to the meta / MBR / GPT / partition tables to cause these issues...

This would make it the third time I have had to start from scratch after an OS update.
« Last Edit: April 24, 2016, 02:00:12 am by MasterCATZ »

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #11 on: April 24, 2016, 03:32:16 pm »
Wiped the PPU drives.

Now all the DRUs are showing up,

and the old PPU drives are being constantly seeked. I am assuming FlexRAID is looking for something on them; I have not changed their UUIDs, yet I zeroed out every byte on the drives.
« Last Edit: April 24, 2016, 03:40:12 pm by MasterCATZ »

Offline Brahim
Re: Another Upgrade another configuration fail
« Reply #12 on: April 24, 2016, 08:31:11 pm »
You need to investigate what process is accessing those drives. If it is FlexRAID, check that you don't have a scheduled task that is trying to run.

Strange about your drive formatting issues, but good that it is not an Ubuntu 16 LTS issue.
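To see which process is holding the old PPU drives open, something along these lines could work (device and mount-point names are placeholders):

```shell
# Who has the raw device node open (relevant for unmounted, zeroed drives):
sudo lsof /dev/sdX

# Who is using a mounted filesystem (placeholder mount point):
sudo fuser -vm /mnt/Raid-F.HDD7.DRU7
```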

Offline MasterCATZ
Re: Another Upgrade another configuration fail
« Reply #13 on: April 25, 2016, 10:53:05 pm »
On to more strangeness.

Just when I thought all was going well:

The old PPU drives were both DD-wiped with 100% zeros, partitioned as GPT, formatted ext4, and the data rsynced back from the DRU drives, and now they will not get detected.

One drive only gets picked up if mounted via the GUI ( /media/aio/Raid-F.HDD8.DRU8 ),
and the other only gets picked up if mounted via fstab ( /mnt/Raid-F.HDD7.DRU7 )
(as long as x-gvfs-show is NOT used).
If I use the same settings, it's either one way or the other, but never with both drives.

*edit*
Now they are both picked up via fstab... tried another reboot and I'm back to square one...
I really need to verify they are going to work as intended before I wipe the next 2x DRU drives.


One thing I did notice: when I tried unmounting the drive that FlexRAID did not pick up, it said it was mounted by another user, so I will try to force-mount them as root. I am assuming this would be the account FlexRAID would be using?
*edit* But the other drive, which FlexRAID was seeing, said the same thing...


Any way to arrange a TeamViewer session?

What's bugging me is that all my small SSD drives are always showing.
« Last Edit: April 26, 2016, 03:21:38 am by MasterCATZ »

Offline Brahim
Re: Another Upgrade another configuration fail
« Reply #14 on: April 26, 2016, 01:52:51 pm »
What changed for the drives that used to be picked up to no longer be so?

Also, different drives being picked up only when mounted one way and not the other tells me many things are odd with your system. I even suspect some disks might be formatted with one filesystem and you are force-mounting them as another.

All your disks should be GUI mountable and should show up if you do: mount -l.