Author Topic: Transparent RAID Performance Thread - Part 4/4 (Storage Accelerators - SSD Cache)  (Read 16019 times)

Offline pclausen

  • Jr. Member
  • Posts: 96
  • Karma: +0/-0
The biggest improvement you should make is to queue the files to be copied rather than doing many parallel operations.

All I did was highlight my music folder from the NAS and drag and drop it into the root of tRAID1.  I was surprised to see all those parallel copy operations myself.  Not sure what would have caused that...

Quote
The simplest thing to do is to have an exclusion folder for the landing disk. When copying a large amount of data, copy it to the excluded folder first and then rename/move the data out of that folder.

Is this covered in the wiki?  I don't recall seeing that, but I'll go dig it up as long as it's there.

Thanks

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,341
  • Karma: +199/-15
The next build will have an option to trigger a bypass of the landing disk for cases like this.

Offline pclausen

  • Jr. Member
  • Posts: 96
  • Karma: +0/-0
Excellent!

What I'm doing in the meantime to get through the current copy task is to kill all other programs writing to the arrays, pause the NAS file copy until I see no disk write activity in Resource Monitor, un-pause the copy until it slows down from the ~100MB/s rate, and then pause again until the landing disk is flushed out.  It works but is very tedious.
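The manual pause/resume routine above can be approximated in a script: queue the files and copy them strictly one at a time (the advice from earlier in the thread), with an optional pause between files so the landing disk has time to drain. This is a minimal sketch, not a FlexRAID tool; the `flush_pause` value is an assumption you would tune by watching Resource Monitor.

```python
import shutil
import time
from pathlib import Path

def queued_copy(sources, dest_dir, flush_pause=0.0):
    """Copy files one at a time (no parallel streams), optionally
    pausing between files so a landing disk can flush to the array."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for src in sources:
        src = Path(src)
        # One sequential copy at a time, preserving metadata.
        shutil.copy2(src, dest / src.name)
        if flush_pause:
            # Give the landing disk time to drain before the next file.
            time.sleep(flush_pause)
```

A single sequential stream is gentler on the landing disk than many parallel drag-and-drop operations, which is why Explorer's parallel copies overwhelmed it here.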

Offline jamohamo

  • Jr. Member
  • Posts: 63
  • Karma: +0/-0
I tried creating a folder to bypass the landing disk (I've got an SSD too) and I couldn't get it to work. If my pool drive is Z: with a folder called "bypassLandingDisk", is the regular expression "Z:\bypassLandingDisk"? An example of what regular expression to put in there would be good.

By the way, I initially tried folder names with underscores in them (Z:\_bypass_landing_disk), but that prevented the array from starting up again, and I had to change the name and restart the Broker Service (it had stopped) to get the pool up again.
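One likely gotcha, regardless of FlexRAID's exact exclusion syntax (which isn't documented in this thread): in a regular expression, `\` is a metacharacter, so a literal Windows path separator must be escaped as `\\`. The pattern below is hypothetical; it only illustrates how an escaped path regex would match the folder and everything under it.

```python
import re

# Hypothetical pattern; FlexRAID's actual exclusion syntax may differ.
# Backslash is a regex metacharacter, so literal "\" is written "\\".
pattern = re.compile(r"^Z:\\bypassLandingDisk(\\.*)?$", re.IGNORECASE)

for path in (r"Z:\bypassLandingDisk",
             r"Z:\bypassLandingDisk\album\track.flac",
             r"Z:\Music\track.flac"):
    print(path, "->", bool(pattern.match(path)))
```

An unescaped `Z:\bypassLandingDisk` would instead be parsed as `Z:` followed by the escape sequence `\b` (a word boundary), which never matches the intended folder.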

Offline linds

  • Newbie
  • Posts: 8
  • Karma: +0/-0
Hi Guys,

On the topic of an SSD as a landing disk, I'm gathering the gear needed to get a t-FlexRAID setup going; however, I can't decide between:

- having a WD Black as the PPU drive (with no landing disk)

vs

- dedicating an SSD as the landing disk. In this scenario, I take it a slower drive (such as a WD Red or even a Green) as the PPU will not be noticed at all, as long as writes don't exceed the SSD size. The advantage of a fast PPU drive in this scenario would only show when verifying the array or creating the initial parity...?

If the landing disk (using an SSD) is enabled in the array, should either OS or storage pool caching be turned off? I take it OS caching would effectively become redundant, and leaving it on would just use up OS memory unnecessarily? Would the best mode in this case be Energy Efficient mode?

Thanks for your help!

Lindsay

Offline pooler1

  • Jr. Member
  • Posts: 73
  • Karma: +0/-0
I'm not exactly sure how everything works, but I've configured my setup with most of the recommendations here, including an SSD landing disk.

My speeds currently average right around 52-55 MB/s reliably.  It used to be about 75, but then I upgraded some computers from Windows 8 to 8.1 and the speeds dropped.  I have no idea why yet.

I came here to post that I just saw something I had never seen before.  I was doing a file copy from the tRAID pool onto an SSD.  It's not a RAIDed SSD, just a standalone drive.  The transfer was about 30GB, and it transferred at a little over 300 MB/s.  I was blown away; I have never seen these speeds on the tRAID.

So now I am wondering if I can achieve these speeds all the time, and what is really limiting me in other situations.  Again, for about a year now, I've been used to speeds of 50-70 MB/s; that's all I expect.  And before tRAID, my max speed for JBOD was always about 70 anyway.

So here are my experiences so far:
- File transfer over the network, source desktop, destination tRAID pool: averages about 50 MB/s
- File transfer locally, source SSD, destination tRAID: 300 MB/s!
- File transfer locally, source mechanical drive, destination tRAID: max 70 MB/s
- File transfer entirely within tRAID: still about 50 MB/s. I'm guessing this is because the members are all mechanical drives, plus some tRAID overhead.
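A rough sanity check on those numbers: gigabit Ethernet has a raw ceiling of 125 MB/s (typically ~110-115 MB/s in practice), so the ~50 MB/s network figure points at the disks or SMB tuning rather than the wire. And a copy entirely within one pool reads and writes the same set of spindles, so throughput is at best about half a single drive's sequential rate. The per-disk figure below is an assumed value for illustration.

```python
# Back-of-envelope checks on the reported transfer rates.
gigabit_bits_per_s = 1_000_000_000
wire_limit_mb_s = gigabit_bits_per_s / 8 / 1_000_000   # raw gigabit ceiling in MB/s

# A copy within one pool of mechanical disks reads and writes the
# same spindles, so throughput is at best roughly halved.
single_disk_mb_s = 110          # assumed sequential rate for a 7200 rpm drive
intra_pool_estimate_mb_s = single_disk_mb_s / 2

print(f"Gigabit raw ceiling: {wire_limit_mb_s:.0f} MB/s")
print(f"Intra-pool copy estimate: {intra_pool_estimate_mb_s:.0f} MB/s")
```

That halved estimate lines up with the ~50 MB/s observed for pool-internal copies, while the 300 MB/s SSD-sourced read shows what the pool can deliver when the source isn't the bottleneck.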

Offline jkirkcaldy

  • Newbie
  • Posts: 4
  • Karma: +0/-0
Is there any update on when we can expect to see anything on the ssd cache?


Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,341
  • Karma: +199/-15
Quote
Is there any update on when we can expect to see anything on the ssd cache?
It has been implemented. However, this feature will target a different audience (only for advanced and business users).

Offline golf7

  • Newbie
  • Posts: 6
  • Karma: +0/-0
Quote
It has been implemented. However, this feature will target a different audience (only for advanced and business users).

Are we going to see this in the next release, and any indication of when that might be?  I was hoping to utilize this feature.

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,341
  • Karma: +199/-15
Quote
Are we going to see this in the next release, and any indication of when that might be?  I was hoping to utilize this feature.
This will not be enabled in the next release yet. The most likely case would be the end of the year.
This feature also requires a fairly stable system on a good UPS, so I am not too sure whether I want to release the feature at all, since many people's concept of a "stable system" seems to stretch quite a bit.

Offline b-earl

  • Hero Member
  • Posts: 651
  • Karma: +13/-1
Brahim, couldn't you specify here or in the wiki what you consider a stable system, so we know? But yes, I am with you that a UPS has to be there.
Server HW: Chenbro RM41416 case | Supermicro X10SLM-F + LSI SAS 9305-16i | Xeon E3-1231 v3 | 16 GB DDR3 ECC Ram
Server OS:   Windows Server 2016 (UEFI) on 250 GB Samsung 850 Evo ssd
Transparent RAID 1.1.0 2017.02.11
Backupserver: Supermicro X9SCM-F UEFI + LSI SAS9211-8i IT FW
Server OS: Win 2016

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,341
  • Karma: +199/-15
Quote
Brahim couldn't you specify here or in a wiki what you see as a stable system so we know. But yes I am with you that a UPS has to be there.
There should be no room for interpretation in what a stable system is. If a system shows anomalies under load, then it is not stable.

The usual suspects are:
1. Inadequate power supply
2. Driver issues
3. Overloaded systems (controllers overheating and flaking under load)
4. Other software

In almost every case, there will be entries in the OS system logs.

I remember the days when running PC burn-in tests was the norm on DIY builds. Heck, even off the shelf builds were tested for stress and load.
It seems like these days people just run CPU tests and call it a day.

FlexRAID products are I/O intensive. So, tests that take into account I/O loads are important.

In all cases, the best test is the FlexRAID product itself. RAID creation, validation, and/or verification must succeed without system issues.
If these fail with system errors, then the system is not stable for the purpose.

As far as RAID with SSD caching enabled, no system being used as a desktop is suitable. A headless system running with minimal other software is more appropriate.
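In the spirit of the burn-in tests mentioned above, one simple way to put sustained I/O load on an array while also checking data integrity is to write files of random data, record their hashes, then read everything back and compare. This is a generic sketch, not a FlexRAID tool; sizes and counts are placeholders you would scale up for a real stress run.

```python
import hashlib
import os
from pathlib import Path

def io_burn_in(target_dir, file_count=4, file_size=16 * 1024 * 1024):
    """Write random-data files, then read them back and verify their
    SHA-256 hashes. Returns the list of mismatched paths (empty = pass).
    Any mismatch indicates an unstable I/O path, per the usual suspects
    above (PSU, drivers, overheating controllers, other software)."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    expected = {}
    for i in range(file_count):
        data = os.urandom(file_size)
        path = target / f"burnin_{i:03d}.bin"
        path.write_bytes(data)          # sustained write load
        expected[path] = hashlib.sha256(data).hexdigest()
    # Read phase: re-hash every file and compare against what was written.
    return [p for p, digest in expected.items()
            if hashlib.sha256(p.read_bytes()).hexdigest() != digest]
```

If a run like this fails, or the OS system logs fill with storage errors while it runs, the system is not stable enough for features like SSD caching.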

Offline b-earl

  • Hero Member
  • Posts: 651
  • Karma: +13/-1
Thanks for the info Brahim!