I am quite lost... I created an array two days ago and the mean throughput while creating the parity was about 240 MB/s.
Concretely, the speed was around 30 MB/s during the first hour, then increased slowly and finally completed at 250 MB/s (33 hours total)...
In that array, I used to have 8 DRUs on one controller and 2 PPUs on another controller (4x2TB + 1TB + 2x3TB of DRU + 2x3TB of PPU).
Both controllers are equivalent LSI cards with the same features/characteristics/... except that one has 16 ports (with the DRUs) and the other has 8 ports (with the PPUs).
Yesterday, I deleted my initial configuration and moved two disks (DRUs) from their controller to the controller with the PPUs. Then I created a new array and started the parity creation.
Now the throughput is about 20 MB/s and it has been running for 19 hours... Only 4% has completed ?!?!?
This is incredible ?! I only moved two DRUs from one controller to the other ?!
All 10 disks are 100% OK from a SMART point of view...
I benchmarked the disks before adding them to the array, with Windows caching disabled. I got the same mean throughput for all disks intended to be used as DRUs, whether they were on the first controller or the second one.
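For what it's worth, the kind of sequential benchmark described above can be reproduced with a small script. This is only a minimal sketch (the test file path and sizes are placeholders, not the tool actually used); it measures a sequential write pass with an fsync to avoid write-back caching skewing the result, and a sequential read pass:

```python
import os
import time
import tempfile

def sequential_throughput_mb_s(path, size_mb=64, block=1024 * 1024):
    """Write then read `size_mb` MiB sequentially; return (write, read) MB/s."""
    data = os.urandom(block)
    # Sequential write pass
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force data to disk, similar to disabling Windows caching
    write_mb_s = size_mb / (time.perf_counter() - t0)
    # Sequential read pass (note: the OS page cache may still inflate this number
    # since the file was just written; a raw-device read would avoid that)
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    read_mb_s = size_mb / (time.perf_counter() - t0)
    return write_mb_s, read_mb_s

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        w, r = sequential_throughput_mb_s(path)
        print(f"write: {w:.0f} MB/s, read: {r:.0f} MB/s")
    finally:
        os.remove(path)
```

To compare two controllers, you would point `path` at a file on a disk attached to each controller in turn and compare the numbers.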
BUT for my ST3000DM01 disks intended to be used as PPUs, I got >300 MB/s read and ~70 MB/s write...
While for my ST3000DM01 disks intended to be used as DRUs, I got ~200 MB/s read and ~50 MB/s write...
The only difference: the disks about to become DRUs are >90% full, while those to be used as PPUs are empty (empty volume).
I will abort the parity creation and move the disks used as DRUs back to the first controller.
But is there anything else I should do/try before re-creating the parity?