Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - MasterCATZ

Pages: [1]
1
RAID-F on Linux / Another Upgrade another configuration fail
« on: April 20, 2016, 01:33:17 am »
Without starting from scratch again, any idea how to repair the configuration?

There are a few things that just aren't making any sense.

The HDDs should be mounted at:
/mnt/3tbHDD1DRU1
/mnt/3tbHDD2DRU2
/mnt/3tbHDD3DRU3
/mnt/3tbHDD4DRU4
/mnt/3tbHDD5DRU5
/mnt/3tbHDD6DRU6
/mnt/3tbHDD7DRU7
/mnt/3tbHDD8DRU8
/mnt/3tbHDD9PPU1
/mnt/3tbHDD10PPU2
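A quick way to see which of those mount points are actually mounted (the paths are from my list above; checking `/proc/mounts` directly is just a sketch):

```shell
# collect any of the expected DRU/PPU mount points that are NOT in /proc/mounts
missing=""
for m in /mnt/3tbHDD1DRU1 /mnt/3tbHDD2DRU2 /mnt/3tbHDD3DRU3 /mnt/3tbHDD4DRU4 \
         /mnt/3tbHDD5DRU5 /mnt/3tbHDD6DRU6 /mnt/3tbHDD7DRU7 /mnt/3tbHDD8DRU8 \
         /mnt/3tbHDD9PPU1 /mnt/3tbHDD10PPU2; do
  # the surrounding spaces make the match exact (DRU1 won't match DRU10's line)
  grep -qs " $m " /proc/mounts || missing="$missing $m"
done
echo "not mounted:${missing:-\" (none)\"}"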

But not only is FlexRAID picking them up as missing,

it also picks up the same drive multiple times in the "not pooled" section,

and if I try to import from the PPU, that does not work either.

However, I had a little success: I finally got ZFS RAID-Z2 running
and retrieved some files from my 2015 HDD image backup.
Some of the drives are now picked up after copying back
/root/FlexRAID-Managed-Pool/class1_0/

Unfortunately, I literally wiped my backup boot drive a few weeks ago,
though I do believe I have its image on my ZFS RAID-Z drive, which is currently broken as well ... which is why I installed Ubuntu 16.04 over my 14.04.


If you could guide me to any files I could retrieve from the backup ISO, I will try it (if I can get ZFS running again ...).



2
root@aio:/# ./traid-install-mgr.sh
Unsupported kernel version detected! Detected 3.16.0-31-generic
Ubuntu/Debian supported kernels are 3.13.0-XX-generic where XX is 27 or greater
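A quick sanity check against the pattern the installer complains about (the 3.13.0-XX-generic requirement is taken from the message above; this snippet is just a sketch):

```shell
# does the running kernel match the supported Ubuntu/Debian pattern?
k=$(uname -r)
case "$k" in
  3.13.0-*-generic) status=supported ;;
  *)                status=unsupported ;;
esac
echo "$status kernel: $k"
```

Since 3.16.0-31-generic is what `uname` reports, the fix is presumably to boot back into an installed 3.13.0 kernel from the GRUB menu, or hold off the newer kernel until the installer supports it.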

3
I was running 14.04, but something broke during my last upgrade and I had to remove FlexRAID to finish upgrading, because it said FlexRAID was going to cause a loop with GRUB for some reason.

Another error was:
insserv: warning: script 'FlexRAID' missing LSB tags and overrides


... now I cannot get FlexRAID running again.
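For reference, that insserv warning usually means the /etc/init.d/FlexRAID script is missing the LSB header block insserv expects. A minimal sketch of such a header (the Required-Start/Stop dependencies here are my assumption, not FlexRAID's documented ones) is:

```shell
### BEGIN INIT INFO
# Provides:          FlexRAID
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: FlexRAID storage pool service
### END INIT INFO
```

With a header like this at the top of the script, insserv stops complaining about missing LSB tags.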



root@aio:/var/lib/flexraid# uname -a
Linux aio 3.16.0-31-generic #41-Ubuntu SMP Tue Feb 10 15:24:04 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 
*edit* sorry, forgot to put it in the Linux section

*edit*
currently forcing all packages to reinstall
if that fails
fresh install, then re-add all my packages ...
*edit*
it seems my "FlexRAID-Config.db" must have gotten corrupted at some stage
is there any way to manually edit it to be correct again?
I am currently using an old "FlexRAID-Config.db" from an old SSD that was upgraded last year; it has kinda got the pool online

4
*edit*

OK, I completely forgot about Windows having a built-in firewall  :o

My first attempt on the weekend was port forwarding 8080; when I got home I realized it is not actually running a web server on that port and can only be managed locally by accessing it on port 8080.
(I even tried a PC on the local LAN today and it could not connect to port 8080 either, which is a shame; it means no mobile phone access for now?)
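A quick reachability probe for the port would have saved me the trip (the host here is a placeholder for the box being tested; 8080 is the port from the attempts above):

```shell
# probe whether anything is accepting TCP connections on the management port
host=127.0.0.1   # placeholder: substitute the target machine's address
port=8080
if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
  reach="open"
else
  reach="closed or filtered"
fi
echo "port $port on $host: $reach"
```

The `/dev/tcp` redirection is a bash feature, so this needs bash rather than plain sh; "closed or filtered" from another LAN machine points at a firewall or a service bound only to localhost.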

Plan B:

so today I opened up port 9696 on dad's box,
so I could try and access it through the "NZFS Web Client.html" client here

I tried using both IP and DNS.


*edit* actually, does it even work?

I cannot even get locally connected PCs to connect via the client???
Client on Ubuntu, host on Win8.



http://gyazo.com/c63219b43752a936682c3edef5f78cd1

5
I spent all day sending data onto a t-RAID to fill up the PPU drives for redundancy testing. Well, there was no redundancy once the PPU drives were filled; why are the DRUs not being used for duplication to store redundancy?

ie) DRU1, DRU2, DRU3, PPU1, all the same size

I filled the t-RAID volume, which started populating DRU1, then watched until DRU2 started filling up
(this for me meant PPU1 was finally almost full and just needed to fill the remaining 50 GB reserve).
I then watched DRU2 fill up, but no files were going into DRU3 .. how is DRU2 going to have any redundancy if its files are not being backed up somewhere? Pulled DRU2 .. no redundancy, like I thought.

This is one thing I wish got more attention than speed tests
(of which RAID-F is by far still the fastest FlexRAID solution).

On a typical RAID, every drive is normally part of the array's redundancy. With FlexRAID wanting to be power efficient, I would assume the PPU drive is just used as the first backup drive, with the next empty drive next in line for holding the duplicated files .. apparently not the case.

6
General Discussion / Failure past the tolerance level has been detected
« on: September 05, 2014, 09:50:30 pm »
Failure past the tolerance level has been detected. This RAID cannot be restored without an override!

OK, I am testing with 3x DRU and 1x PPU.

I pulled DRU1; the array kept going. I pulled PPU1; the array finally died (YAY!)

Now I put DRU1 back ... mmm, no option to un-fail.

*edit* the un-fail option popped up while I was writing this post

Advanced operations
Enabled Configuration Override

Un-Fail Device now unlocked :P

Now for my main concern with t-RAID: if the failed drive is the PPU drive, you have no redundancy until it is replaced; the other RAID solutions out there still use the remaining disks for redundancy.

However, I guess this is what makes FlexRAID energy efficient.

I was hoping t-RAID had extra features over RAID-F, but it seems to still be a mixed bag between the two.


7
General Discussion / How to hide the t-RAID drive letters ?
« on: September 05, 2014, 08:52:53 pm »
With RAID-F you could mount the HDDs to a folder location and keep the drives all hidden away,

but whenever t-RAID takes over, it just creates drive letters, which populates My Computer pretty fast.

Like in Computer Management, it marks the drive as offline, but it also gives the drive a letter.






Or do I just change the paths/drive letters of the virtual drives that t-RAID seems to have created?




8
RAID-F Bug Reports / ia32-libs deprecated
« on: January 24, 2014, 02:34:30 am »
For no reason, today after a reboot FlexRAID stopped working.
Looks like some update removed deprecated software.


ERROR: UnsatisfiedLinkError: Can't load library: /var/lib/flexraid/ext/libNativeOS.so

aio@aio:~$ sudo apt-get install ia32
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package ia32
aio@aio:~$ sudo apt-get install ia32-libs
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package ia32-libs is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  lib32asound2 lib32z1 lib32ncurses5 lib32bz2-1.0

E: Package 'ia32-libs' has no installation candidate




The only solution is:



sudo apt-get install lib32z1 lib32ncurses5 lib32bz2-1.0 lib32stdc++6

9
Would it be possible to output all the files/paths that were located on a drive that has dropped off the array?

ie) worst case scenario, more drives die than the PPUs can recover; it would be nice to know what to re-download ...
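In the meantime, one workaround is keeping your own per-drive manifest, so the list of lost files survives the drive. A sketch (using a temp dir as a stand-in for a real DRU mount like /mnt/FlexRaidDRU1; the filenames are made up):

```shell
# snapshot a file manifest for one data drive so you know what to
# re-download if it drops past the PPU tolerance
dru=$(mktemp -d)                                  # stand-in for /mnt/FlexRaidDRU1
touch "$dru/show.s01e01.mkv" "$dru/show.s01e02.mkv"   # stand-in files
manifest=$(mktemp)
find "$dru" -type f | sort > "$manifest"          # one path per line
echo "$(wc -l < "$manifest") files recorded in $manifest"
```

Run against each DRU mount on a schedule, the manifests stay useful even when the logs have been cleared by a reboot.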

My replacement WD disks arrived, and I am just running FlexRAID through its final paces before I swap ZFS RAID-Z out.
I can live with less redundancy on my multimedia, and FlexRAID storing at a file level is a big bonus; at least when you go past your redundancy levels you don't lose everything.


Though it's annoying trawling through the massive logs (which seem to clear on reboots ...).


One thing I have noticed from my last test:

when I had 8x DRU and 1x PPU, the array went offline when 1x drive died

when I had 8x DRU and 2x PPU the array stayed online when 1x drive dropped off
( minus the data that was on the drive that dropped offline )

when I had 8x DRU and 2x PPU, with 1x failed PPU and 1x failed DRU, the array dropped offline??

Is that normal? I would have thought with 2x PPU the array should still be able to stay online with 2x failed disks?



Another thing I noticed (on Ubuntu): if I disconnected SATA drive connections from a pool, FlexRAID would not notice them missing no matter how many times I refreshed or stop/started the pool; only after restarting the service would it discover the drives were no longer there.


10
RAID-F Bug Reports / Strange CRC tests
« on: June 04, 2013, 12:41:44 am »
I am trying to get FlexRAID to update its snapshot on Ubuntu,

which keeps failing with unexpected end-of-file errors using ext4
(trust me, after many rsync redos this is really getting to me; 3 days per pass).

Now this is where it gets interesting:

If I use TeraCopy via Wine to CRC-check both files, they come back in perfect condition.
I can use torrent programs to verify the data; it comes up fine.

However, reboot the PC and try using the torrent programs, and they fail.

If I use Deluge / Transmission, the files stop getting checked at particular points.

If I use TeraCopy to re-verify the files, they come back clear.

Try using the torrent programs again and they come up clear.

When using TeraCopy, I notice the tests hang for a little bit around where the torrent programs just stop.


Using Vuze ... it also hangs but keeps going, and the files then seem to come out clean until the next reboot.

Ideas??

logs show nothing


Tried enabling retry on read errors, changing write modes, etc.

Now, I know the particular HDD that is getting written to does have around 1000 bad sectors (gains about 100 a day)
(awaiting WD RMAs to come back so I can swap out the rest of the failed new 3 TB drives) ... but what's got me stumped is how a few programs manage to read all the data fine, and then other programs that previously couldn't read the data suddenly can.
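One way to take the individual programs out of the equation is a checksum manifest: record checksums once, re-verify after a reboot, and let the disk speak for itself. A minimal sketch (a temp dir stands in for the real data drive):

```shell
# record checksums once, re-verify later; if the disk is silently corrupting
# reads, re-verification should disagree no matter which program reads it
dir=$(mktemp -d)
echo "payload" > "$dir/file.bin"                 # stand-in for real data
( cd "$dir" && md5sum -- *.bin > manifest.md5 )  # record
( cd "$dir" && md5sum -c manifest.md5 )          # verify; prints "file.bin: OK"
verify_rc=$?
```

On a drive gaining ~100 bad sectors a day, running the verify step before and after a reboot should show whether the data or the readers are at fault.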

11
RAID-F on Linux / FlexRAID keeps going into read-only mode
« on: May 11, 2013, 04:18:20 pm »
Code: [Select]
LABEL=FlexRaidPPU1 /mnt/FlexRaidPPU1 btrfs defaults 0 0
LABEL=FlexRaidDRU1 /mnt/FlexRaidDRU1 btrfs defaults 0 0
LABEL=FlexRaidDRU2 /mnt/FlexRaidDRU2 btrfs defaults 0 0
LABEL=FlexRaidDRU3 /mnt/FlexRaidDRU3 btrfs defaults 0 0
LABEL=FlexRaidDRU4 /mnt/FlexRaidDRU4 btrfs defaults 0 0
LABEL=FlexRaidDRU5 /mnt/FlexRaidDRU5 btrfs defaults 0 0
LABEL=FlexRaidDRU6 /mnt/FlexRaidDRU6 btrfs defaults 0 0
LABEL=FlexRaidDRU7 /mnt/FlexRaidDRU7 btrfs defaults 0 0
LABEL=FlexRaidDRU8 /mnt/FlexRaidDRU8 btrfs defaults 0 0

This is how the drives are mounted for FlexRAID on startup.

Code: [Select]
root@aio:~/FlexRAID-Managed-Pool# ls
class1_0
root@aio:~/FlexRAID-Managed-Pool# cd class1_0/
root@aio:~/FlexRAID-Managed-Pool/class1_0# ls
{004f3a72-1b8d-4855-9f24-4838f5355bcd}  {48a5abe3-8cb4-4dde-8254-a874a3284b52}
{023eb29c-eca9-4081-abe1-6a28d969c9a0}  {6f4d93c3-81cc-4416-8a77-bf59d6bfee9b}
{06556123-bddc-41bb-b291-97470eaccda7}  {814a2ae7-0da9-4c69-8774-78d14b17160a}
{25d44a05-bdbe-49b3-8554-edd16f5caf65}  {a45d4ebd-fe9a-4a4f-8c5c-fe221944b653}
{46e726e7-13d5-432b-801e-ad280b4e962b}  {efe0c4bd-1a11-4341-aa1c-5377b6ebf89b}
root@aio:~/FlexRAID-Managed-Pool/class1_0#

This is what FlexRAID creates.

I then have FlexRAID mount into the /mnt folder:

Code: [Select]
FlexRaidDRU1  FlexRaidDRU4  FlexRaidDRU7  FlexRAID_POOL
FlexRaidDRU2  FlexRaidDRU5  FlexRaidDRU8  FlexRaidPPU1
FlexRaidDRU3  FlexRaidDRU6  FlexRaidDRU9
root@aio:/mnt#

All those mounts are accessible.


If I copy data directly to an HDD I seem to have no problems, but if I copy data into
FlexRAID_POOL, it will randomly lock the HDD mount of the drive the data is going to, and lock out FlexRAID_POOL,

needing a reboot to go back into write mode.

Using Ubuntu and btrfs.
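When the pool locks up, it's worth checking whether the kernel has quietly remounted one of the members read-only, since btrfs flips a filesystem to read-only when it hits errors. A sketch that lists any read-only mounts from /proc/mounts:

```shell
# list mount points whose options include "ro"
# (/proc/mounts fields: device, mount point, fstype, options, dump, pass)
ro_mounts=$(awk '$4 ~ /(^|,)ro(,|$)/ {print $2}' /proc/mounts)
echo "read-only mounts: ${ro_mounts:-(none)}"
```

If a FlexRaidDRU mount shows up here after the lockout, the root cause is a btrfs error on that member rather than FlexRAID itself, and `dmesg` should say why.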

12
General Discussion / Just a few raid config questions
« on: April 19, 2013, 09:56:13 pm »
I am leaning towards using FlexRAID over my ZFS system (using 16x HDDs as a test run for media streaming).

My current system consists of 6x HDD ZFS RAID-Z2 vdevs with multiple pools tacked onto it (4x data, 2x redundancy per pool).

Because less data is lost if there is a total HDD failure with FlexRAID, should I just use, say, 13x data HDDs and 3x redundancy,

or should I just make some smaller pools of 3x data HDDs and 1x redundancy?

Does FlexRAID allow new pools to be tacked on?

I am assuming smaller pools will be quicker for snapshots to update?


The other question is about expanding when upgrading to larger drives: how many drives would have to be replaced to allow the volume to expand?

Or would it be a matter of expanding before running out of a certain % of space, to allow for a nice steady slow expansion?

Currently swapping over my 2 TB drives, and getting 3 TB drives costs around $2k for 16x HDDs.
It would be great to know if I could just slowly replace them with 5 TB drives later on, ie not have to buy bulk 6x HDD lots :P


And how does FlexRAID play with btrfs? (I am hoping using that as the filesystem will help out with data rot / bit rot.)

13
RAID-F Feature Requests / Cache drive possible?
« on: April 19, 2013, 09:41:11 pm »
I am planning on moving from ZFS RAID-Z to FlexRAID.

Mostly because if there is a massive failure I can still recover files; the second reason being I like the sound of LESS HDDs needing to run when only accessing 1 file.

My current ZFS system consists of 2x 24-HDD racks and 1x 16-HDD rack.

I am planning on giving FlexRAID a go with my 16x HDD system, as I was about to upgrade it from 2 TB to 3 TB HDDs.

Because of the way ZFS worked, I used a 4 GB DDR RAM drive as a log drive that I would still like to make use of (apart from that, I might just make it a swap drive, as it's too small to use for anything else).

Or is it possible to still use my SSD cache drives to speed things up?
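If the RAM drive does end up as swap, the setup is just mkswap plus swapon. A sketch, using a small file as a stand-in for the 4 GB RAM drive (the device path and size are assumptions; activation needs root, so it is left commented out):

```shell
# format a small file as swap; the real target would be the RAM drive device
swapfile=$(mktemp)                         # mktemp creates it with 0600 perms
dd if=/dev/zero of="$swapfile" bs=1M count=16 status=none
mkswap "$swapfile" > /dev/null
mkswap_rc=$?
echo "swap signature written to $swapfile (rc=$mkswap_rc)"
# sudo swapon "$swapfile"                  # root-only, not run here
```

For the real RAM drive you would point mkswap at its device node instead of a file and add a matching swap line to fstab.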


 
