Author Topic: FlexRAID Standards Hybrid RAID  (Read 12264 times)

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
FlexRAID Standards Hybrid RAID
« on: November 28, 2015, 12:39:50 am »
Cool. So I've got it installed and am about to play around.  I'm wondering whether, via the wiki/help or directly in the different RAID descriptions, you might want to indicate drive requirements (i.e. RAID 5 drives must all be the same size to get the right-sized array, JBOD can be any size, etc.) and perhaps also indicate read/write speed, e.g. RAID 5 is generally the max speed of all disks together (assuming the bandwidth is there on the host motherboard or expansion cards) and JBOD is something...

By the way, what would JBOD in Standards be?  More like Linux's mdadm/LVM2, so max disk speed, or tRAID speed?  Can parity be split across the drives, similar to mdadm?

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #1 on: November 28, 2015, 01:53:12 am »
Is the software likely to expire after the 21 days that activation says you have?

I've got two arrays running currently: one JBOD + parity and one RAID 5 array.  Both are doing what they should in terms of usable space.  What speeds should we be getting in a JBOD array with a group of normal modern 7200 RPM drives?  mdadm/LVM2 on Linux nets me full drive speed (with striped parity), and I'm curious to know what the tests have shown during development... Also, are you able to set up a JBOD array with parity spread across the disks rather than on a dedicated disk?

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #2 on: November 28, 2015, 03:04:20 am »
Hi, sorry to spam :-)

I've now tried to create iSCSI targets using both a RAID 5 array and a JBOD/parity array.  They both suffer from the same issue I had with tRAID: error 0x80070570, which, when I googled it, pointed to a drive locking or access issue.  The error presents itself on either array when trying to create either a dynamic or static iSCSI image.  Have you tried to create any VHDXs using Standards so far?

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,537
  • Karma: +204/-16
Re: FlexRAID Standards Hybrid RAID
« Reply #3 on: November 28, 2015, 05:47:01 am »
@Benoire
I started a FAQ in the first post to answer some of the common questions to come.
I will be working on documentation next as a higher priority, since many important points need to be discussed. The only reason for releasing without documentation was to first test how intuitive the software is.

Now, to your specific questions. Each RAID type has its description provided when you go to create a new RAID configuration. You can go over each by selecting it and reading the description.

JBOD
JBOD span is just that: a span of disks without parity. The disks are simply concatenated to create one large disk.
JBOD span with parity is as above, but with dedicated parity.
JBOD spans, with or without parity, have limited usefulness. Mostly, they are more energy efficient, as only one data disk plus the parity disk(s) need to spin during writes. The trade-off is speed.
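For those coming from Linux, a JBOD span without parity is roughly analogous to an mdadm linear array. A minimal sketch of that analogy (device names are hypothetical, and sRAID uses its own engine, so this only illustrates the concept):

# Concatenate two disks into one large volume: no striping, no parity.
# Capacity is the sum of the members; speed is that of whichever single
# disk holds the data currently being accessed.
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/span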

Speed
There are many performance options to play with, and we will discuss them as we go. The parity distribution and rotation configuration parameters are very important in striped RAID with distributed parity (RAID 5/6/X), depending on the targeted workload.
Most importantly, though, ensure you have a proper setup with independent disks. That is, if you set up a VM with virtual disks all on the same physical disk, don't expect good performance.
Additionally, this is software RAID without battery backup units. So, don't expect the performance of high-end hardware RAID with a BBU and write-back caching enabled.
FlexRAID Standards will have SSD caching instead (to be enabled later) in order to compete at that level.
Caching is everything in RAID, which is why hardware RAID with no BBU sucks and gets slapped around by any decent software RAID.

Quote
Have you tried to create any VHDXs using Standards so far?
I just did. I created a RAID 6 and then created a bunch of VHDs on it. They all mounted and formatted just fine. I even wrote some data to the mounted VHD disks.
Post full details of your RAID configuration and a screenshot of the RAID options panel.

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #4 on: November 28, 2015, 02:08:13 pm »
Morning @Brahim,

Thanks for the reply. I appreciate that each RAID type has a description; I just wondered whether there might be value in describing some of the important functions or requirements, i.e. RAID 5 size will be the combined size of all the drives, pegged to the smallest drive size, while JBOD is any size, with the largest drive taken by parity, BUT it has speed limitations... Synology do this quite well, and as that is where I am coming from, I just wondered about the value.

If you have a bunch of disks (not in a VM, but bare metal) that each transfer at 125 MB/s, are you likely to see those speeds in a JBOD/parity configuration, or will the overall write speed be lower due to the use of a dedicated parity disk?  For reference, Synology's Hybrid RAID design (JBOD with parity using mdadm/LVM2) writes at full disk speed at all times, i.e. 125 MB/s.

Are you able to expand a RAID 5 array in Standards, or do you need to kill the array and start again?  My main reason for the JBOD approach was different drive sizes, but I might pick up a third drive, which makes RAID 5 more attainable; however, I see no 'add additional units of risk' button... Really, what I am looking for is a Windows-based software RAID that can provide parity protection, read/write at at least full individual disk speed, and can be added to with new disks (either of the same size or any size).  Synology DSM can do this with their software RAID implementation (RAID 5 or SHR), but a) it is not Windows-based, and b) I use a non-official way of running the software and would prefer a supported, paid method on custom hardware.  I'm willing to give Standards a go on my bare metal and change all my drives, but I'd like to understand it more before I take the plunge, hence my questions and testing in a VM.

I'll try a new iSCSI target later; not sure what is wrong... What OS and iSCSI software did you use?  I'm on Server 2012 R2 with the MS iSCSI target software.

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,537
  • Karma: +204/-16
Re: FlexRAID Standards Hybrid RAID
« Reply #5 on: November 28, 2015, 02:50:56 pm »
Quote
Morning @Brahim,

Thanks for the reply. I appreciate that each RAID type has a description; I just wondered whether there might be value in describing some of the important functions or requirements, i.e. RAID 5 size will be the combined size of all the drives, pegged to the smallest drive size, while JBOD is any size, with the largest drive taken by parity, BUT it has speed limitations... Synology do this quite well, and as that is where I am coming from, I just wondered about the value.
Good point. We indeed cannot assume that the end user understands the differences and various aspects of these different RAID types. I will keep this in mind while working on the documentation.

Quote
If you have a bunch of disks (not in a VM, but bare metal) that each transfer at 125 MB/s, are you likely to see those speeds in a JBOD/parity configuration, or will the overall write speed be lower due to the use of a dedicated parity disk?  For reference, Synology's Hybrid RAID design (JBOD with parity using mdadm/LVM2) writes at full disk speed at all times, i.e. 125 MB/s.
You are always limited by the effect of parity. There is no magic way around it outside of using caching to fake the true speed. Striped RAID negates some of that effect through striping; non-striped RAID does not have a way out.
Note that you might be confusing hybrid RAID with a pure JBOD span. Most hybrid RAID implementations combine RAID 5 and RAID 1, so you don't have a JBOD span.
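To put rough numbers on the parity effect (back-of-the-envelope only, not sRAID benchmarks; the 125 MB/s figure is assumed from the discussion above):

# In a non-striped span with dedicated parity, a small write becomes:
# read(old data) + read(old parity) + write(data) + write(parity),
# i.e. 2 I/Os on the data disk and 2 on the parity disk.
raw=125                                      # MB/s single-disk speed (assumed)
echo "small-write ceiling ~ $(( raw / 2 )) MB/s"
# Large sequential writes can avoid the reads, but the dedicated parity
# disk still caps sustained throughput at roughly one disk's speed.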

Quote
Are you able to expand a RAID 5 array in Standards, or do you need to kill the array and start again?  My main reason for the JBOD approach was different drive sizes, but I might pick up a third drive, which makes RAID 5 more attainable; however, I see no 'add additional units of risk' button... Really, what I am looking for is a Windows-based software RAID that can provide parity protection, read/write at at least full individual disk speed, and can be added to with new disks (either of the same size or any size).  Synology DSM can do this with their software RAID implementation (RAID 5 or SHR), but a) it is not Windows-based, and b) I use a non-official way of running the software and would prefer a supported, paid method on custom hardware.  I'm willing to give Standards a go on my bare metal and change all my drives, but I'd like to understand it more before I take the plunge, hence my questions and testing in a VM.
Striped RAIDs (RAID 5/6/X) expand through in-situ migrations. This is a feature unique to sRAID, which we will discuss later.


You need to separate hybrid RAID from discussions about specific RAID types.
The way hybrid RAID works is by slicing the disks, applying different RAID types, and then wrapping the various RAID volumes into a span.
Picture this: let's say you have the following disks:
- 2x 2TB disks
- 2x 4TB disks


A hybrid RAID config will give you 4x 2TB in a RAID 5 (using 2TB from each of the four disks) and 2x 2TB in a RAID 1 (using the 2TB left over on each 4TB disk).
This would give you 6TB in RAID 5 and 2TB in RAID 1. Both RAIDs could then be spanned (using a JBOD span) to create an 8TB RAID volume, which is finally presented to you.
If you were to buy another 4TB disk to expand your array, the RAID 5 could expand to an 8TB RAID 5 and the RAID 1 could convert to a 4TB RAID 5, which would grow the span to 12TB.
All this can be done in sRAID as released, but it requires that you specifically configure things as such. There will be an automated (think Cruise Control) option to do this for those who don't want any control. More on this later.
If you need help configuring hybrid RAID in sRAID, post your full disk details and what you wish for, and I will guide you through it.
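For readers who know the Linux tooling that SHR itself is built on, here is a rough mdadm sketch of the same slicing idea (device names and partitioning are hypothetical; sRAID implements this internally with its own engine, so treat it as an analogy only):

# sdb/sdc = the 2TB disks, sdd/sde = the 4TB disks (hypothetical names).
# Slice each 4TB disk into two 2TB halves.
parted -s /dev/sdd mklabel gpt mkpart slice1 0% 50% mkpart slice2 50% 100%
parted -s /dev/sde mklabel gpt mkpart slice1 0% 50% mkpart slice2 50% 100%
# RAID 5 across four 2TB members -> 6TB usable.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd1 /dev/sde1
# RAID 1 across the two leftover 2TB halves -> 2TB usable.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd2 /dev/sde2
# Concatenate both RAID volumes into the single 8TB volume the user sees.
mdadm --create /dev/md2 --level=linear --raid-devices=2 /dev/md0 /dev/md1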

Quote
I'll try a new iscsi target later, not sure what is wrong... What OS and iSCSI software did you use?  I'm on Server 2012 R2 with the MS iSCSI target software.
I used StarWind for iSCSI.
For VHD, I just tested using Win7.

« Last Edit: November 28, 2015, 02:52:43 pm by Brahim »

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #6 on: November 28, 2015, 06:01:08 pm »
Hi Brahim,

You are right, I have been confusing hybrid RAIDs, so cheers for that... I will add that while I 'kinda' understand Synology's hybrid RAID approach, I certainly wouldn't be able to create one outside of their GUI using LVM and mdadm as they do.

In your testing, how fast has a JBOD span + parity performed for consistent write speeds once you've exhausted the host memory cache?

My current storage device has the following drives:

2 x 3TB HDD
1 x 1TB HDD
1 x 500GB HDD
1 x 80GB Drive

And this is how SHR has split my current drives:

storage> mdstat
-ash: mdstat: not found
storage> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sda7[0] sdc7[2] sdb7[1]
      976733568 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md5 : active raid1 sda8[0] sdb8[1]
      1953494784 blocks super 1.2 [2/2] [UU]

md2 : active raid5 sda5[0] sde5[4] sdd5[3] sdc5[2] sdb5[1]
      293277952 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md3 : active raid5 sda6[0] sdd6[3] sdc6[2] sdb6[1]
      1230655680 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4]
      2097088 blocks [12/5] [UUUUU_______]

md0 : active raid1 sda1[0] sdb1[4] sdc1[1] sdd1[3] sde1[2]
      2490176 blocks [12/5] [UUUUU_______]

unused devices: <none>

md0 is the DSM 5.2 OS, as that is redundant across all the drives.

What I would like to do is try and replicate that using sRAID if possible. In the SHR world, only one 'drive' is lost to parity despite the RAID 5 and RAID 1 use, and every time you add a new drive to the array, it adds the full size of the disk (assuming it is no larger than the largest disk). It appears from your example above that it would work the same...

For iSCSI, did you try dynamic disks or just static?

I can confirm that NFS works fine out of the box, and a number of the ESXi functions are supported, including thin provisioning.

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #7 on: November 28, 2015, 06:22:39 pm »
Hi Brahim,

Sorry, you're going to get a number of these as I think about using it, or indeed actually use it!

Had a thought for a potential improvement to how you presently display data for the HDDs. Currently your tab bar goes: Dashboard, Scheduler, SMART...

Perhaps you could use a similar layout to the attached image and go: Dashboard, Scheduler, HDD/SSD, SMART.

That way, you could provide a place for concentrated information on the HDDs that have been registered for sRAID (including ones that may not be in use yet).

Just a thought to provide more clarity to those wanting to see what the HDDs are doing at a glance.


Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #8 on: November 28, 2015, 07:46:12 pm »
Right, re: the iSCSI issue.  It's MS VHDX files causing the trouble.

If I create a VHDX using the iSCSI target within Server 2012 R2, it fails with the error I listed earlier... If I try to create one from Disk Management, it crashes the OS.

If I create a VHD file using Disk Management, it creates fine and can be imported into the iSCSI management console, which can then be used to create an iSCSI target that can be written to without issue.

Easy to replicate, but if you want logs and data, you'll have to tell me what dumps you need so I can sort them out.

VHDX is the newest format, but it is only available on Windows Server 2012 R2.

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #9 on: November 28, 2015, 07:50:40 pm »
Ok, more things... happy to fully test this out for you... I will purchase a new 3TB drive and back up all my stuff, then reinstall Server 2012 R2 and run Standards as my only storage system (once you've explained how to set up a hybrid RAID).  Then I can do lots of testing for you, as I run NFS, iSCSI, and game storage and serving, along with media etc.  It will also be my TV server, so it will get lots of reads/writes.

This has the potential to do what I want and I want to test it as much as I can for you.

Have you got a UAT plan or something that I can follow? That is, if you need one.

Chris

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,537
  • Karma: +204/-16
Re: FlexRAID Standards Hybrid RAID
« Reply #10 on: November 28, 2015, 08:59:15 pm »
Please get off the JBOD spans topic :). We have long established that hybrid RAID is what you want. A JBOD span without parity has the performance of a single disk; with parity, performance drops further. JBOD spans are used simply to merge volumes into one larger volume. See below.

Your SHR array combines both RAID 1 and RAID 5. What is the size of the volume(s) presented to you?

I suspect it is doing:
RAID 5: 3x 1TB
RAID 1: 2x 2TB
RAID 1: 2x 80GB
With 420GB of unused disk space.

In sRAID, what you need to do is:
1. Start with disks with no data (back up, then delete all volumes and partitions from all disks)
2. Register the 2x 3TB and the 500GB disks as raw disks (not passthrough)
3. Register the 1TB and 80GB disks as passthrough (we could also register them as raw disks and create raw slices, but that's not necessary)
4. Create two raw disk slices, of 1TB and ~2TB (the remainder), off each of the 3TB disks
5. Create an 80GB raw disk slice from the 500GB disk
6. Create a RAID 5 config using the two 1TB raw slices + the 1TB passthrough
7. Create a RAID 1 config using the two ~2TB raw slices
8. Create a RAID 1 config using the 80GB raw slice + 80GB passthrough

You will effectively have a 2TB usable RAID 5, a 2TB usable RAID 1, and an 80GB usable RAID 1.
You can finally wrap all of them in a JBOD span (without parity) to end up with a single volume of 4.08TB.
Basically, the UoRs you will be adding to the JBOD span configuration are the RAID configurations themselves (the RAID 5 and the two RAID 1s). It is best to add the 2TB RAID 1 to the JBOD span first, then the 80GB RAID 1, and finally the RAID 5.
That will give you the best write performance on the first 2.08TB of the JBOD span.
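As a quick sanity check on the capacity math (plain shell arithmetic, nothing sRAID-specific):

# Usable space per RAID set, in GB: RAID 5 loses one member to parity,
# RAID 1 yields the size of a single member.
raid5=$(( (3 - 1) * 1000 ))   # 3x 1TB in RAID 5  -> 2000 GB usable
raid1a=2000                   # 2x ~2TB in RAID 1 -> 2000 GB usable
raid1b=80                     # 2x 80GB in RAID 1 ->   80 GB usable
echo "JBOD span total: $(( raid5 + raid1a + raid1b )) GB"   # 4080 GB ~= 4.08TB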

The alternative to using a JBOD span to combine the RAIDs is to use Storage Pooling. The RAID 5 and RAID 1s would be fully independent but merged through the pool, like in tRAID.

Head spinning? :)
« Last Edit: November 28, 2015, 09:09:22 pm by Brahim »

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #11 on: November 28, 2015, 09:26:08 pm »
I'm off the JBOD topic now, don't worry :-)

SHR doesn't actually present the volumes to you; it does it all in the background... I've run parted -l on the array and attached the output as partitions.txt.

It does appear to make a 420GB partition, so it uses all the space.

If I then bought a new drive, say 3TB, how would I manually add it to the array?  With SHR, you add the disk, and it will go off and add it to the array, expand the array to fill, and then off you go... Is this something that you are looking at adding as part of Cruise Control? I presume, in your example, you'd end up with a 4x 1TB RAID 5 and then a 3x 2TB RAID 5 (converted from the 2x 2TB RAID 1)?

My concern with pure RAID 5 is that eventually I'll have filled my entire 12-bay array, and then I won't be able to use any new HDD space until I've migrated all drives to a new, larger size or created a new array outside the original... but RAID 5 off the bat would be a much easier solution than a hybrid system.

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,537
  • Karma: +204/-16
Re: FlexRAID Standards Hybrid RAID
« Reply #12 on: November 29, 2015, 06:17:47 am »
Before we go any further, there are a few things we need to establish.
  • Transparent RAID is a much better solution than hybrid RAID for almost all deployments dealing with ad hoc disks
  • The exception might be your case, where you are trying to run VMs and iSCSI images off the array
  • In that case, it is better to use the various RAID type volumes directly rather than through an LVM or JBOD span. That is:
    1. Run your iSCSI disk images and VMs off the 2TB RAID 1
    2. Use the 2TB RAID 5 as slower write-access storage (slower VMs, iSCSI disks, or general storage)
    3. Use the 80GB RAID 1 for whatever other purpose fits

Quote
I'm off the JBOD topic now, don't worry :-)
Good.

Quote
SHR doesn't actually present the volumes to you; it does it all in the background... I've run parted -l on the array and attached the output as partitions.txt.

It does appear to make a 420GB partition, so it uses all the space.
I meant the RAID volumes (md devices; in your case there are six [0-5]). Sizes can be read straight from mdstat's 1K block counts (see the quick check below the list):
- md0 and md1 look like RAID 1E
- md2 (5x 75GB in RAID 5 => 300GB)
- md3 (4x 420GB in RAID 5 => 1.26TB)
- md4 (3x 500GB in RAID 5 => 1TB)
- md5 (3x 2TB in RAID 1 ??? => 2TB)
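As a quick check, mdstat reports sizes in 1K blocks, so the usable sizes can be read straight off your output; for md4, for example:

# md4's usable size from its mdstat block count:
echo $(( 976733568 / 1024 / 1024 )) GiB    # -> 931 GiB (~1TB): 2 data slices x ~500GB in RAID 5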

md5 is what I don't get. Why use 3x 2TB in RAID 1? That is kind of a poor choice, as it wastes space. I think SHR makes poor use of your disks. It uses them in a very linear fashion. Too many disks with different performance profiles are part of the same RAID. Additionally, your disks' heads will get stressed moving across the different RAID sets. This is not a big deal in a single-user environment where each RAID set is used in sequence. However, performance will drop badly if there is parallel use of the different RAID sets.
Note that you also have 3x 3TB disks and not 2.

Quote
If I then bought a new drive, say 3TB, how would I then manually add that in to the array?  With SHR, you add the disk and it will go off and add to the array, expand the array to fill and then of you go... Is this something that you are looking at adding as part of cruise control? I presume in your example, you'd end up with a 4 x 1TB RAID 5 and then a 3 x 2TB Raid 5 (converted from 2 x 2 TB Raid 1?).

My concern with pure raid 5 is that eventually I'll have filled my entire 12 bay array and then I won't be able to use any new HDD space until I've migrated all drives to the new larger size, or created a new array outside of the original array... but Raid 5 of the bat would be a much easier solution than a hybrid system.
1. Baby steps. Everything can be done in sRAID. Let's first establish your sRAID configuration. ;)
2. I suspect SHR only creates new RAID devices with newly added disks. Then, LVM makes use of these new devices to expand the existing volumes. I suspect the existing RAID devices are not expanded. If you have a disk lying around, we can test that and know for sure what SHR does (see the quick check below).
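If you do test it, something like the following would show what actually happened (run from the Synology shell; md4 comes from your earlier output, and the availability of the mdadm and lvm tools on DSM is an assumption):

cat /proc/mdstat                                              # did a new mdX appear, or did an existing one grow?
mdadm --detail /dev/md4 | grep -E 'Array Size|Raid Devices'   # compare before/after adding the disk
lvm pvs; lvm vgs; lvm lvs                                     # did LVM absorb a new physical volume?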
« Last Edit: November 29, 2015, 06:22:52 am by Brahim »

Offline Benoire

  • Full Member
  • Posts: 112
  • Karma: +0/-0
Re: FlexRAID Standards Hybrid RAID
« Reply #13 on: November 29, 2015, 11:38:21 am »
I only have two 3TB drives; look at the HDD/SSD reporting grab above. It lists my five disks, so I have no idea why parted is showing a third 3TB drive!

Would knowing what SHR does be helpful for your work on sRAID?  If so, I'm happy to give it a go.

Should a JBOD span + parity be of similar performance to tRAID with single parity? If a tRAID drive fails or you pull the array, you can still grab the data yourself; if a JBOD span + parity has an issue whereby you kill the array, I presume the data is lost, as each drive will hold data that may not be complete?  I presume moving ALL the HDDs from one PC to another with sRAID installed will still allow the array to be accessed, as is the case with Linux's software RAID stack, or do you need to back up the config first?  Would you need the same drive order, or would it not matter?

Hybrid RAID is going to be quicker than JBOD span / tRAID and also better for those that need iSCSI/NFS; what is the compromise compared to a full RAID 5 array?  Synology seem to suggest that Hybrid RAID has all the benefits of RAID 5 but also the benefits of JBOD (on initial build only, as every time you add a disk it must be larger than the last disk added; so if you create an SHR array with five disks including an 80GB drive, the next drive must be 80GB or more).  What is your view?

Finally, what does RAID 5 bring to the table?  Over hybrid RAID, I can see it providing higher speeds due to striping across the disks in a consistent manner rather than a mixture of RAID 1 and 5, plus more IOPS... What else? The downside is that it requires all disks to be the same size or bigger than the smallest, the available size per disk will be that of the smallest drive, and you can only expand using similarly sized disks... Migrating to more space means either adding more disks of the same size or upgrading all the disks to a larger size.

What, in your opinion, is the preferred option?
« Last Edit: November 29, 2015, 07:06:48 pm by Brahim »

Offline Brahim

  • Global Moderator
  • Hero Member
  • Posts: 8,537
  • Karma: +204/-16
Re: FlexRAID Standards Hybrid RAID
« Reply #14 on: November 29, 2015, 08:08:08 pm »
Quote
I only have two 3TB drives; look at the HDD/SSD reporting grab above. It lists my five disks, so I have no idea why parted is showing a third 3TB drive!
Ok.

Quote
Would knowing what SHR does be helpful for your work on sRAID?  If so, I'm happy to give it a go.
Knowing whether you like what it does is the key here. So, we first need to know what it does and then poll on whether that is desired behavior.
This is similar to how it does the partitioning. I don't like it, but it seems users of SHR are happy with the simplicity, which I am fine with (less work for me ;)).

Quote
Should a JBOD span + parity be of similar performance to tRAID with single parity? If a tRAID drive fails or you pull the array, you can still grab the data yourself; if a JBOD span + parity has an issue whereby you kill the array, I presume the data is lost, as each drive will hold data that may not be complete?  I presume moving ALL the HDDs from one PC to another with sRAID installed will still allow the array to be accessed, as is the case with Linux's software RAID stack, or do you need to back up the config first?  Would you need the same drive order, or would it not matter?
JBOD span + parity is greatly inferior to tRAID in all aspects (performance, reliability, and flexibility). JBOD span is provided for completeness, and it does have usefulness in being able to concatenate other volumes. Think of JBOD span as block-level Storage Pooling, whereas standard Storage Pooling pools at the file system level.

Quote
Hybrid RAID is going to be quicker than JBOD span / tRAID and also better for those that need iSCSI/NFS; what is the compromise compared to a full RAID 5 array?
It all depends on the scenario. For most users, tRAID with an SSD as a landing disk is the better setup.
For those like you who want better iSCSI and block-level performance, RAID 0 is best, but with no fault tolerance. The best compromise is RAID 10. RAID 1 works well too.
Hybrid RAID is kind of a bad idea in that you have different RAIDs with different performance characteristics being aggregated and abstracted. If you have VMs running on the LVM volume, then different VMs will have very different performance depending on which RAID they really sit on, and you have no way of controlling that.
My advice is to manually slice your disks and keep the resulting RAIDs separate rather than wrapping them in a JBOD span or LVM volume.

Quote
Synology seem to suggest that Hybrid RAID has all the benefits of RAID 5 but also the benefits of JBOD (on initial build only, as every time you add a disk it must be larger than the last disk added; so if you create an SHR array with five disks including an 80GB drive, the next drive must be 80GB or more).  What is your view?
The only thing you get is simplicity. It has the benefits of RAID 5 and RAID 1 because it uses them both - not for the benefits, but purely based on the number of disk slices.  :P
The issue is, you have no control over what data goes to the RAID 1s and what goes to the RAID 5s. So, performance is going to be inconsistent and out of your control.

Quote
Finally, what does RAID 5 bring to the table?  Over hybrid RAID, I can see it providing higher speeds due to striping across the disks in a consistent manner rather than a mixture of RAID 1 and 5, plus more IOPS... What else? The downside is that it requires all disks to be the same size or bigger than the smallest, the available size per disk will be that of the smallest drive, and you can only expand using similarly sized disks... Migrating to more space means either adding more disks of the same size or upgrading all the disks to a larger size.

What, in your opinion, is the preferred option?
Hybrid RAID isn't RAID; it is just a RAID manager. As explained above, it uses RAID 1 and RAID 5/6 depending on the number of disk slices and then wraps everything in a JBOD span or LVM volume.
All you get is simplicity, which of course should not be underestimated - often, that's all you want.
What you lose is consistent performance and control over where your data goes and how to make the best use of your disks.