Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Quaraxkad

1. Pub / Forum suggestion
« on: June 16, 2014, 08:23:43 pm »
I have a suggestion that is not specific to either RAID-F or tRAID, but rather for the forum itself, so I'm not posting this in the Feature Requests subsection. I see a lot of questions that have wildly different answers depending on whether the person asking has a Cruise Control or Expert mode configuration, so I think the RAID-F section should be split into those two categories, because essentially *nobody* mentions which they use in their initial question. Personally I found Cruise Control considerably more confusing and complicated to maintain, yet I suspect new users will just start off with it since it sounds easier.

2. General Discussion / Expression Scripts
« on: January 04, 2014, 11:38:42 am »
I have a couple of expression scripts that I would like to run manually. The wiki suggests they can be scheduled to run "now"; however, I don't see how that is possible in the scheduler.

Quote
Note: expression scripts can only be scheduled. However, you can schedule them to run “now”.

The scheduler only offers 15-minute intervals, so I often have to sit and wait up to 15 minutes for a single script to run. Am I missing something in the scheduler?



Also, I have one script that runs an update; if the update is successful, it runs another script via @success. Again, the wiki suggests that @success will only run if the task described in @execute succeeds; however, it runs regardless of the result of that task. I have had failed updates that still ran the verify task anyway. Is the wiki wrong here too?
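For what it's worth, the behavior I expected from @execute/@success is plain conditional chaining, something like this (a minimal Python sketch of the semantics as I read the wiki, not FlexRAID's actual implementation; the commands are placeholders):

```python
import subprocess

def run_chain(execute_cmd, success_cmd):
    """Run execute_cmd; run success_cmd only if execute_cmd exited cleanly.
    This mirrors what I understood the @execute/@success docs to promise."""
    result = subprocess.run(execute_cmd, shell=True)
    if result.returncode != 0:
        return False          # task failed: follow-up is skipped
    subprocess.run(success_cmd, shell=True)
    return True               # task succeeded: follow-up ran

print(run_chain("exit 0", "echo verify"))  # True: verify runs
print(run_chain("exit 1", "echo verify"))  # False: verify skipped
```

What I'm actually seeing is the second case behaving like the first: the verify step runs even when the update fails.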

3. RAID-F Bug Reports / Renaming folders
« on: December 07, 2013, 02:55:19 pm »
When renaming a folder immediately after moving files into it, sometimes the old folder remains along with a folder of the new name, with the files split across the two folders. To be a bit more specific, say I have a folder on the pool called "Stuff", inside it is "Folder1" and some files. I select the files, drag+drop them into "Folder1" and then rename "Folder1" to "Folder2". The next time I look inside the "Stuff" folder, I now have a "Folder1" and a "Folder2", with the files I had moved into "Folder1" split between the two.

I am guessing what's happening is that the files are being moved from one DRU to another within the pool (due to using auto-space-priority), and the move operations have not yet completed when I rename the folder, which interrupts the remaining moves. I've encountered this at least twice, and possibly many more times that have gone undiscovered.
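As a toy model of the race I'm suspecting (purely my guess at the mechanism, not FlexRAID's actual code): per-file moves are queued against the old path, and a rename that lands mid-queue leaves the files split across the old and new names:

```python
def simulate_split(files, rename_at):
    """Move files one at a time into 'Folder1'; a rename to 'Folder2'
    fires after rename_at moves have completed (hypothetical mechanism)."""
    dirs = {"Folder1": []}
    for i, f in enumerate(files):
        if i == rename_at:
            dirs["Folder2"] = dirs.pop("Folder1")   # rename Folder1 -> Folder2
        # the still-pending move targets the old path, recreating it
        dirs.setdefault("Folder1", []).append(f)
    return dirs

print(simulate_split(["a", "b", "c", "d"], rename_at=2))
# {'Folder2': ['a', 'b'], 'Folder1': ['c', 'd']} -- files split, as observed
```

That split result is exactly the symptom I'm seeing in the pool.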

4. General Discussion / Parity Calculation Performance
« on: November 06, 2013, 08:11:10 pm »
I've been playing around with RAID-F Snapshot for the last couple of weeks. I initially had very slow parity calculation speed (6MBps) using 12 data drives and 4 parity drives. At first I assumed my SATA controllers couldn't keep up with so much simultaneous throughput, so I ran some tests today to compare performance in various scenarios and find my biggest bottleneck.

First I created a new Expert Snapshot array using one 1TB DRU holding a single 10GB test file, plus one PPU. The creation speed was 87.74MBps. Definitely bottlenecked by the slow data drive, which benchmarked in HD Tune at only 73.7MBps average. Using the same DRU and PPU, I added one placeholder DRU (no disk, no data), repeated the parity creation, and got the same result, 87.72MBps. I continued adding placeholder DRUs and real PPUs one at a time and found that with 4 PPUs, 1 DRU, and 3 DRU placeholders, the speed drops to 47.54MBps. I read a post by Brahim in another thread suggesting that placeholders should have practically no discernible effect on parity creation speed, but that doesn't seem to be the case here.

Then I put in a new pair of CPUs. Previously I was using two Opteron 2212 HE, 2GHz dual-core; I installed two Opteron 2384, 2.7GHz quad-core. I repeated all the same tests as before (after increasing the Processes setting to 8 to make sure it uses all the newly available cores). The first tests are all still at around 87MBps, no surprise there because they are limited by the speed of the DRU. Then I get to 4 PPUs, 1 DRU, 3 DRU placeholders, and my creation speed is 74.28MBps (up from 47.54MBps on the old CPUs). A nice improvement.

But this demonstrates that my CPU is the limiting factor here, even after the upgrade, much more so than my SATA controllers. I could settle for 74MBps, but I have 20 DRUs and 4 PPUs and room for future expansion beyond that. A test using the same DRU as before, 19 DRU placeholders, and my 4 real PPUs results in 8.1MBps parity creation.
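To put that in perspective, some rough arithmetic (assuming 1TB per DRU to match my test drive; creation time scales with the largest DRU, so real numbers will differ):

```python
# Rough parity-creation time at the measured rates.
# 1TB drive size is an assumption based on my test DRU.
MB_PER_TB = 1_000_000  # decimal MB, as drive vendors count

def creation_days(size_mb, rate_mbps):
    return size_mb / rate_mbps / 86400

print(f"{creation_days(MB_PER_TB, 74.28):.2f} days")  # ~0.16 days at 74.28MBps
print(f"{creation_days(MB_PER_TB, 8.1):.2f} days")    # ~1.43 days per TB at 8.1MBps
```

So at 8.1MBps, every terabyte of DRU capacity costs roughly a day and a half of parity creation.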

So who else is running this many drives? What CPU are you using? How's your parity creation performance? Can I get better performance without replacing every piece of hardware? This is just about the fastest CPU I can put in this motherboard.

For reference:
SuperMicro H8DME-2 motherboard
Two Opteron 2384 2.7GHz quad-core (eight cores total)
16GB DDR2 ECC RAM
Four SuperMicro SAT2-MV8 SATA controllers

5. Snapshot RAID / Migrating to FlexRAID
« on: October 17, 2013, 09:39:49 am »
I'm putting together a new file server over the next week or so, and in researching all my options I settled on FlexRAID. All other options had too many drawbacks and just left me with too many uncertainties.

My server currently has 22 data drives all set up as independent NTFS volumes (about half of them are nightly backups of the others), and they are all about 99.9% full. Once the new server is set up, I plan on using 4 PPUs and 20 DRUs, reusing the same drives. What is the best/safest/quickest way to migrate these drives over to FlexRAID? I thought the safest way would be to add one drive at a time and update the parity after each drive, but that would probably take weeks, wouldn't it? I also don't think I want to throw them all in at once and create the array using my old backup drives as PPUs: if one of the data drives fails during parity creation, I will have lost that data, since my backups were overwritten with parity. What's my best choice here?
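A back-of-envelope check on the "weeks" worry (drive size and sustained rate are guesses on my part, just to get an order of magnitude, and I'm assuming each incremental update costs roughly a full pass over a drive):

```python
# Hypothetical numbers: 2TB per data drive, 50MBps sustained parity rate.
DRIVE_MB = 2_000_000
RATE_MBPS = 50
N_DRIVES = 20

def days(size_mb, rate_mbps):
    return size_mb / rate_mbps / 86400

one_at_a_time = N_DRIVES * days(DRIVE_MB, RATE_MBPS)  # one pass per added drive
all_at_once = days(DRIVE_MB, RATE_MBPS)               # a single creation pass
print(f"{one_at_a_time:.1f} days vs {all_at_once:.1f} days")
```

So one-at-a-time looks more like a week and a half than weeks under these guesses, but still an order of magnitude slower than a single creation pass.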
