I am testing tRAID from several points of view:
- User interface: is it user-friendly, complete, consistent and coherent?
- Resilience: what happens in case of a disk/server failure while reading/writing in the pool?
- Expansion (Contraction not yet tested).
- Initialize RAID versus "Do Nothing" + "Verify Sync".
- Verify / Verify Sync / Recreate. I have tested them all many times, both online and offline.
None of these tests was done with an "automatic start" of the pool!
1) Regarding the UI, I have already reported a few notes (bugs or suggestions), but I have more and will post them soon.
2) Regarding resilience, I am testing with VMs and have no means to simulate a disk failure while the VM is up and running. (Do you know how to do that? I mean: without using the FlexRAID feature to fail a disk. See also the sketch at the end of this post.)
3) Regarding Expansion: works fine so far... except that I could expand with a placeholder instead of a "physical" UoR. Weird.

4) Regarding Initialization versus "Do Nothing" + "Verify Sync": I did notice (as reported in other posts and analyzed by Brahim) that within a VM the throughput is sometimes really weird, but as performance is not the main topic for now, I will come back to those tests later.
5) Works fine. I.e., "Verify" always succeeds immediately after "Initialize" or after "Verify Sync" (even if there were errors before the "Verify Sync").
About 2 - Resilience: so far, I can only simulate a BSOD, a power failure, etc. while reading or writing data in the pool. (Hereafter, all "Verify" and "Verify Sync" operations are done offline.) To generate the write load, I remotely write and delete files in the pool, roughly as in the sketch below.
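For anyone who wants to reproduce that load, here is a minimal Python sketch of a write/delete loop. The pool path and file sizes are just examples (adjust to your setup); the point is to keep sustained writes going when the crash hits:

    import os, time

    POOL_DIR = r"V:\stress"          # hypothetical pool folder, adjust to your setup
    CHUNK = os.urandom(1024 * 1024)  # 1 MiB of random data

    i = 0
    while True:  # run until the server crashes or you press Ctrl+C
        path = os.path.join(POOL_DIR, "testfile_%04d.bin" % i)
        # write a ~64 MiB file in 1 MiB chunks, flushed to disk
        with open(path, "wb") as f:
            for _ in range(64):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())
        # delete older files so the loop keeps both writing and deleting
        if i >= 10:
            os.remove(os.path.join(POOL_DIR, "testfile_%04d.bin" % (i - 10)))
        i += 1
        time.sleep(0.1)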
A) BSOD: always results in disk corruption if I was writing data in the pool (i.e., a "Verify" fails). It's OK if data were only being read when the BSOD occurred. But as soon as a disk is corrupted, I cannot fix it anymore: I do a "Verify Sync", reboot, run "Verify" again, and this last one fails (disk access error). As explained to me by Brahim, this is most probably related to VMware and could/should be different on real hardware. (That being said, all the data in the pool can still be accessed, but I am not sure "where" the disk error is.)
B) A normal/clean reboot also results in disk corruption if data were being written. For that test, I used the Windows restart menu while remotely writing and deleting files in the pool.
B.1) In most cases, the writing of the data stops because the server is rebooting, but after the reboot everything is fine, i.e., a "Verify" succeeds. Of course, the data are only partially written and thus "corrupted" (they cannot be read, being incomplete)... The error message received client-side can be weird, e.g. "Disk are write protected".
B.2) Twice, the server crashed with a real BSOD at the very end of the shutdown process (the BSOD message was something like NO_REFERENCES_POINTER or THREAD_EXCEPTION_NOT_HANDLED; I didn't analyze the minidumps). After rebooting, a "Verify" failed. I did a "Verify Sync" and rebooted, but was NOT in the same situation as above: the "Verify" succeeded after the last reboot and the disks did not appear to be corrupted. So clearly, there is a difference between a real BSOD and simulating a BSOD with a VM "hard" reset.
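Since a real BSOD behaves differently from a VM hard reset, it may be worth triggering a genuine BSOD on demand instead of waiting for a random shutdown crash. Windows has a documented "crash on Ctrl+Scroll Lock" debugging feature for exactly this. A minimal sketch to enable it, assuming a PS/2 keyboard as VMware typically emulates (for USB keyboards the value goes under kbdhid\Parameters instead); a reboot is required before it takes effect:

    import winreg

    # Enable "crash on Ctrl+Scroll Lock" (documented Windows debugging feature).
    # i8042prt = PS/2 keyboard driver; use kbdhid\Parameters for USB keyboards.
    key_path = r"SYSTEM\CurrentControlSet\Services\i8042prt\Parameters"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "CrashOnCtrlScroll", 0, winreg.REG_DWORD, 1)

    print("Reboot, then hold RIGHT CTRL and press SCROLL LOCK twice.")

After the reboot, holding the right Ctrl key and pressing Scroll Lock twice forces a MANUALLY_INITIATED_CRASH bugcheck, i.e. a real BSOD, which should make the B.2 scenario reproducible at will.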

B.3) In some cases, after the reboot, a "Verify" fails. In such cases, I do a "Verify Sync", reboot, and redo a "Verify". As in B.2), it then succeeds.
To be complete, I really need to find a way to cause a disk failure while the VM is running, without using tRAID. (I want to do it while writing in the pool; as I have created one folder per DRU in the pool, I can write to a specific drive for testing purposes.)
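One approach I could try, although it is not a true hardware failure: taking the disk offline from inside the guest with diskpart, while the write load is running against that DRU's folder. A minimal sketch (the disk number 2 is just an example, find the right one with "list disk" first; run elevated, and only against a data disk, never the system disk):

    import subprocess, tempfile, os

    DISK_NUMBER = 2  # example only: check with "list disk" in diskpart first

    # diskpart /s runs the commands from a script file (needs an elevated prompt)
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("select disk %d\noffline disk\n" % DISK_NUMBER)
        script = f.name
    try:
        subprocess.run(["diskpart", "/s", script], check=True)
    finally:
        os.remove(script)

    # To undo the simulated failure later, run the same script with "online disk".

Alternatively, if the VM host allows it, hot-removing the virtual disk from the VM settings while it runs would be closer to a real failure, but I have not verified whether my VMware setup supports that.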