Add drives to an existing array, then extend the existing partition/file system?

    • Add drives to an existing array, then extend the existing partition/file system?

      I'm new to OMV and just started playing with it under VirtualBox, running OMV 3.0.94.

      I created a RAID5 Array with 4 disks and used EXT4 for the file system. Created 1 share.

      I attempted to add 2 additional drives to the existing array from the GUI, trying to grow/extend it (add to the existing size of the array, however you want to refer to it), but the GUI simply adds them as spares.

      I've searched, before posting, and all I've found are references to people giving up, backing up their data, and recreating a new array including the new drives.

      This is a simple task under most operating systems (OK, I'll say it: a Windows server). I can't believe this is a difficult task under OMV.

      Any takers?

      **Update** - I found a workaround from the CLI; however, I'd still like to know if there's a way to do this from the GUI.

      zackreed.me/adding-an-extra-disk-to-an-mdadm-array/
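      For anyone who finds this later, the gist of that workaround was roughly the following (the device names /dev/md0, /dev/sde and /dev/sdf are just examples - adapt them to your own array and disks):

      mdadm --add /dev/md0 /dev/sde /dev/sdf   (adds the new disks; they show up as spares)
      mdadm --grow /dev/md0 --raid-devices=6   (reshapes the array to use them as members)
      cat /proc/mdstat   (check this until the reshape finishes)
      resize2fs /dev/md0   (then grow the EXT4 file system to fill the array)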

    • Roosey wrote:

      I just did something similar, where I replaced my current 4TB drives with bigger 6TB drives and let the GUI rebuild my array after each drive replacement. Then I resized the array with mdadm --grow /dev/md127 --size=max

      I hope this helps

      Growing a mdadm RAID by replacing disks

      Let me clarify: I am not replacing existing disks with larger disks. I have 4 drives configured as RAID5. I want to add 2 additional drives to the same array as additional members to expand the array, and then expand the partition (file system).

      At present the GUI has added the two additional disks as spares, not additional members for expansion.
    • I just went over this issue with a user who actually grew his array. (And it's probably still in progress.) This is the -> post
      It's one thing to set up an array in a VM; it's another to grow an array with data on it and risk losing everything. (As was stressed in the post.)

      And while it has nothing to do with the GUI, how it implements RAID, etc., there have been more than a few disasters where users have killed their arrays by clicking around in the GUI. After the event, they come to the forum for help where, in some cases, nothing can be done.

      So, in this one instance, OMV developers didn't make the RAID "grow" operation easy. Forcing users to do the "grow" operation on the command line (just 2 lines, as it is in this -> post) gets them to do a bit of research, so they understand the risk they're taking.
      ______________________________________

      Along other lines, I know you're testing, but a realistic real-world limit for software RAID5 is 5 disks max. If I used a RAID5 equivalent (I don't), I wouldn't put more than 4 disks in the array, and I'd use ZFS (raidz1).

      The safest possible array would be a ZFS pool of mirrors. It scales easily, without the need to restripe drives (which tends to kill them). Simply add another mirror to the pool.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • Re,

      The OMV web GUI does not (currently) support the "grow" command from mdadm - it just "adds" a drive to an array, with two possible behaviors:
      - array is missing a member -> adds it as a spare and starts the rebuild immediately
      - array is not missing a drive -> adds the new drive as a spare
      (seen from RAID's point of view, these are the same thing ...)

      You can grow your RAID array only via the console/shell; after growing the array, you can grow your FS ...

      EDIT:

      flmaxey wrote:

      Along other lines, I know you're testing but a realistic real world limit for software RAID5 is 5 disks max.
      Where did you get that? I would say it highly depends on the use case ...

      Sc0rp
    • Sc0rp wrote:

      EDIT:

      flmaxey wrote:

      Along other lines, I know you're testing but a realistic real world limit for software RAID5 is 5 disks max.
      Where did you get that? I would say it highly depends on the use case ...
      Sc0rp

      I admit it's my opinion, but it's based on what I believe to be the use case of the "typical" OMV user:
      That would be: "set up an array", "collect a lot of data over a span of years", "ignore the age of the drives in the array", and "grow a geriatric array when they run out of space". All of this is done, in many cases (most?), without backup.
      ((We could go down a rabbit hole regarding "use cases" but I tend to believe that the average OMV user has never been a site admin and has never dealt with RAID in a production environment.))

      Looking at the statistical end of it, the average life of a drive is 4 to 5 years, give or take. As more drives are added to an array, the probability of a single drive failure in the array increases significantly, which means the array is less likely to last 4-5 years without a single drive failure.

      Hence, the more drives in or added to an array, the greater the probability of rebuilds/restripes, which is one of the primary reasons arrays fail catastrophically. Most users believe, incorrectly, that RAID makes their data "safe". I'd argue that traditional RAID exposes the typical OMV user to an even greater risk of losing everything, due to false assumptions of safety. Without backup, their data is not safe. So, I'm targeting the "worst case" use case.
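      As a rough back-of-the-envelope illustration (assuming independent failures and, say, a 5% chance of any single drive failing in a given year): the chance of at least one failure in the array is 1 - (0.95)^n per year, which works out to roughly 19% for 4 drives and 26% for 6 drives. The exact figures aren't the point - the trend is.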

      (Users with full backup, like yourself, can take additional risk.)
      __________________________________

      When it comes to the Grow button, "Grow" means to enlarge. Adding a drive that becomes a spare, or is used in recovery, is not "growing" an array.

      But, Sc0rp, I'm not trying to split hairs with you over the meaning of the word "grow". :)
      In this case, I believe OMV Dev's are doing the right thing for the majority of OMV users, so someone on the forum can caution them about growing an array without backup.

    • This box is for backups only and is NOT my only backup. My critical data is stored offsite.
      I'm currently using a QNAP TS-212 with 2 x 3TB drives which I'll reuse for the new box along with at least 2 more I'll purchase.
      The NAS is used to store backup images of a Windows server and my desktop PC (StorageCraft continuous snapshots). If I lose them I can start the backups over after rebuilding; I'll just lose my snapshot history. If the house burns down, I have my offsite backup.

      My primary server is a Windows 2016 server acting as a file/media server. OS = Raid 1, Data = Raid 5 (4x3TB)

      I have the following older, but fully functional, parts at my disposal.

      2 x SuperMicro X8STE Rev. 2.00 server motherboards (1 to use, 1 for a spare)
      It has 2x Intel NICs and an integrated 6-port Intel ICH10R SATA controller
      Intel Xeon L5609 1.87Ghz processor
      4 x 4GB of Hynix DDR3 Unbuffered ECC memory (HMT351U7BFR8C)
      In addition to the integrated SATA it also has an add-on LSI MegaRAID controller (w/battery) that supports 6 drives. I have a spare for this too.

      These were 2x SuperMicro 2U servers that I'm gutting; the fans and power supply are just too loud for a home environment.

      I plan to re-use the existing 2x3TB drives from my QNAP and buy at least 2 more. They're WD Red WD30EFRX drives.
      I'll be buying a Thermaltake Commander MS-I Mid Tower case which will support 10 3.5" SATA drives with plenty of airflow, along with a 700 Watt power supply and an extra case fan. My Windows server is setup the same way and I love the case.

      With that said, I know enough about Linux to follow directions, and I have zero experience with ZFS. I've read up a little on it and watched some videos; it sounds like the end-all be-all of data integrity, but I question whether it's necessary in my scenario.

      I have 1TB drives I can mirror for the OS, unless this is overkill. Can OMV be run on a single drive, with the config backed up and easily rebuilt and re-attached to the data volume(s)? Or am I better off with the OS mirror?

      As for the data volume(s): I'm trying to squeeze as much storage as I can out of the fewest drives, due to a limited budget. I planned on 2 additional drives but might be able to fit a 3rd in. A 4 x 3TB RAID5 array gives me around 8TB of usable space.

      Where would I be with ZFS? If I understand it, RAID-Z1 is a waste of time, and RAID-Z2 would leave me with 4.7TB of usable space? That's not enough.

      I'm up for suggestions/recommendations.

      Thanks,

      Dave
      Your primary server is Windows..??? For the love of... Why..?? Just kidding... :D

      I dumped Windows Home Server after M$ abandoned it but, during the time I had it, it seemed that compatible add-ons were exorbitantly priced. With that in mind, I think you'll find that OMV is very reliable and, more importantly, "extensible".
      With OMV, there's a world of server software add-ons, plugins, and Dockers that are free to try and use.
      ((Nothing is actually free. I make a point of donating at least $10 to a project, if I use their software long term.))
      Further, if you've ever dealt with commercial product support, I think you'll find the open source community is far more responsive.
      ________________________________________________

      While you'd need to test it (compatibility is never guaranteed), your 64-bit hardware is more than what you'd need for an OMV home system. In your case, your hardware would be fine for a small business. Unlike Windows, OMV is brutally efficient, so its hardware requirements are very modest. And you'll have ECC RAM - excellent.

      On the PS for a file server:
      A file server should be fine with a 300W PS. Your two Xeon processors dissipate 80W (2x40W) in heat, so I'd be surprised if the procs and 4 hard drives would consume more than 250 watts, even at start-up. (When drive motors spin up, current is higher.) Even if you wanted to go with a healthy pad, a 400-450W PS should be fine.
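      (Rough numbers, purely as an illustration: ~80W for the procs, 4 drives at roughly 25W each during spin-up, and maybe another 50W for the board, RAM and fans comes to around 230W at the worst moment, and far less once everything is idling.)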

      When it comes to cases, I spent the big bucks on this one -> Diablotek EVO III ATX. It supports 5 3.5" drives, complete with rails (and a freebie 2.5" to 3.5" drive adaptor), and has room for at least 4 more 3.5" drives in open bays. It was $32.99 (USD). Maybe that was a bit on the high side, but I didn't want to wait for a sale. :D
      ________________________________________________

      On the boot drive.
      I use good quality 32GB USB3.0 thumb drives for booting, for the following reasons. -> Post
      In the bottom line, for servers, backup is far more important than boot speed and cloning USB drives is easy to do.
      (Again, RAID1 is not backup. If something gets corrupted on one drive, it's corrupted on the mirror as well.)
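      (As a sketch of what "cloning" means here, assuming the stick shows up as /dev/sdX and you have somewhere to store the image - the paths and device letters are placeholders:

      dd if=/dev/sdX of=/path/to/omv-boot.img bs=4M   (image the boot stick)
      dd if=/path/to/omv-boot.img of=/dev/sdY bs=4M   (write it to a spare stick)

      Double check the device names before pressing enter - dd doesn't ask questions.)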
      ________________________________________________

      Regarding data storage:
      While there are other ways to create a common mount point for multiple drives, it appears that you've decided on RAID. With solid backup, there's nothing wrong with that choice. However, the "type" of RAID you go with can make a big difference.

      The problem with mdadm RAID (software RAID) or hardware adapters is that they divorce the file system from the underlying hardware. It's possible for these implementations of RAID to write errors to storage that the top-level file system is completely unaware of, which slowly corrupt data. For these reasons, and the well-known "write hole", traditional RAID is no longer considered to be best practice. (In the bottom line, if my only choices were software RAID or RAID adapters, I wouldn't use RAID at all.)

      ZFS and BTRFS are today's answer to the issues mentioned above. They feature a combination of file system, logical volume management, and intelligent RAID that's fully integrated. With the right implementations, they're capable of detecting "bitrot" and file errors, and correcting these issues on the fly. (I.e., they have self-healing properties for data integrity and preservation.) BTRFS would have been my first choice because it's really flexible. Unfortunately, for your purposes (a RAID5 equivalent), it's not stable yet. -> BTRFS Status

      That leaves ZFS. ZFS would work for your scenario and it's easy to implement on OMV. There's a plugin for it, and setting up a RAID-Z1 array is easy enough.
      __________________________________

      You've taken the first step with a VM. If you decide to test a ZFS RAID-Z1 array, after you create it, copy and paste the following lines into the command line before creating shares and copying data onto the array. You could do the same thing in the GUI by clicking on the pool and the "edit" button, but it's easier on the command line.

      - For consistency, consider setting your mount point as /srv/yourpoolname. (This is OMV's default location for "disks by label".)

      (In the following, yourpoolname is whatever you decide to name the pool when you create it.)

      Source Code

      1. zfs set aclinherit=passthrough yourpoolname
      2. zfs set acltype=posixacl yourpoolname
      3. zfs set xattr=sa yourpoolname
      4. zfs set compression=lz4 yourpoolname
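
      (If you'd rather create the pool on the command line instead of through the plugin, a minimal sketch would be the line below - the disk names are placeholders, and ashift=12 is the usual setting for 4K-sector drives. The plugin's GUI does the equivalent for you, so this is only for reference.)

      zpool create -o ashift=12 -m /srv/yourpoolname yourpoolname raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde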

      If you don't have them already, you'll find the following utilities very useful: PuTTY and WinSCP.
      Install them on a Windows client.

    • Thanks for taking the time with the detailed response.

      Note: my reference to "2 x SuperMicro" above was actually 2 motherboards, not 2 processors. Nonetheless, I have 2 complete boards should one go down. From what I've read, you can't find anything other than a server board to support ECC these days, and it's a requirement for ZFS.

      Prior to your instructions last night, I had already stumbled upon a walk-through of installing OMV-Extras, so I simply installed the ZFS plugin. It took a bit, and at one point I thought it hung, but it finally finished. Now I have a nice shiny ZFS menu item under Storage!

      I proceeded with creating my one and only pool, named Pool1. Is there a best practice for naming? I probably should have made it all lower case?

      I then added 5 VHDs to it using the mount point /Pool1.

      I saw reference to RAID-Z1 being geared to an odd number of drives and RAID-Z2 to an even number of drives. Is this correct? Or was this specific to FreeNAS?

      I created a shared folder /backups and an SMB share named backups.

      What's the difference between Shared Folders under Access Rights Management and the SMB share?

      How do I automate a Scrub?

      I found an OpenZFS Bootcamp video on YouTube; it's a few years old, but I figured it was a start. Unless you know of a newer one?



      I was looking at the Diablotek case you referenced, does it really come with 4 120mm fans? The Thermaltake only includes the rear, I have to buy one for the front.

      The Thermaltake I like has one less 3.5" slot, however it includes a removable filter over the bottom facing power supply fan inlet and filter membranes on the inside of the face plate to filter out dust. They really keep the interior clean.

      The only thing I don't like about the Thermaltake is how they have the front USB 3.0 port set up. It's a standard external USB cable that you have to route out the back of the case and plug into one of the USB 3.0 ports on the back of the motherboard. Why, I don't know; someone just wasn't thinking. I bought a converter so I could plug it directly into the motherboard.

      amazon.com/gp/product/B0056IZD…_detailpage?ie=UTF8&psc=1

      You were right on about the power supply. I re-ran the parts through a couple of other power calculators, and they both came up with a load of 164W and a recommended 214W PS. I can get a quiet 500W with a silent 135mm fan for cheap.

      If I can get hold of more than a total of 5 drives, how many and what configuration would you recommend to maximize future upgrades?

      I've seen references to additional drives being added and just showing up in the pool. I've also heard of people replacing, say, 3TB drives with 6TB drives one at a time, and after the last swap-out it auto-expands?

      Thanks for your time.

      Dave
    • As you test this:
      If you want to replicate data from Windows server network shares, onto OMV, I've done it before. And in one case, with regular replication from my Theater PC (Win 7) to OMV, I'm still doing it.

      If you're interested in a "how to":
      When you're ready, I can guide you through my method. (As it is with most things, there's more than one way.)
    • Re,

      flmaxey wrote:

      When it comes to the Grow button, "Grow" means to enlarge. Adding a drive that becomes a spare, or is used in recovery, is not "growing" an array.
      In terms of RAID, "growing" means only growing the/an array with additional disks (members) - no use case is implied here.
      RAID does not aim at "maximizing the usable space" by default, because it is made for maximum redundancy and safety, which means:
      - if the array is "clean", just add the new member as a spare
      - if the array is "degraded", take the new member as a (hot) spare

      But I think the button should really be named "Add drive" instead of "Grow" ... most people will struggle with RAID internals vs. personal expectations ...

      Anyway, the time for RAID is over. My opinion here is to remove it from OMV and use ZFS/BTRFS instead - best in conjunction with a script that asks for the use case(s) of the storage pool ... for media archives you should use SnapRAID/mergerfs.

      Sc0rp
    • flmaxey wrote:

      As you test this:
      If you want to replicate data from Windows server network shares, onto OMV, I've done it before. And in one case, with regular replication from my Theater PC (Win 7) to OMV, I'm still doing it.

      If you're interested in a "how to":
      When you're ready, I can guide you through my method. (As it is with most things, there's more than one way.)
      I wish I had spotted the bootcamp I referenced above, and OMV, a long time ago. ZFS, here I come!

      I'm always interested to see how others are handling their environment. For now I'll be working on collecting some drives and building my OMV server out. As mentioned above I'm currently taking images of the entire Windows server and storing them on my NAS.

      Did you get a chance to review my previous post? I responded to some of your questions and had others of my own...

      I did find the Guides section on this board to review, but any best practices you can share would be greatly appreciated.

      Thanks,

      Dave

      P.S. Every time I submit a reply I get an error similar to the following. Any idea what's up?

      The server encountered an unresolvable problem, please try again later.

      Exception ID: 9d04e0108d0ba5acdf8e204edc41007f8da30b06
    • daveinfla wrote:

      Note: my reference to "2 x SuperMicro" above was actually 2 motherboards, not 2 processors. Nonetheless, I have 2 complete boards should one go down. From what I've read, you can't find anything other than a server board to support ECC these days, and it's a requirement for ZFS.
      That's not really true, but you'd have to look to find one. I did a new (older tech) build just last month with an ASUS motherboard (for AMD procs) that will support either unregistered or registered RAM (ECC), in a handful of different speeds. The change was a simple BIOS selection. You do have to look at the specs, and details of this type may only be available at the mobo maker's site.

      daveinfla wrote:

      1. I proceeded with creating my one and only pool, named Pool1. Is there a best practice for naming?

      I then added 5 VHDs to it using the mount point /Pool1.

      2. I saw reference to RAID-Z1 being geared to an odd number of drives and RAID-Z2 to an even number of drives. Is this correct? Or was this specific to FreeNAS?

      3. What's the difference between Shared Folders under Access Rights Management and the SMB share?

      4. How do I automate a Scrub?

      5. I was looking at the Diablotek case you referenced, does it really come with 4 120mm fans?

      6.
      The Thermaltake I like has one less 3.5" slot, however it includes a removable filter over the bottom facing power supply fan inlet and filter membranes on the inside of the face plate to filter out dust.


      7. The only thing I don't like about the Thermaltake is how they have the front USB 3.0 port set up. It's a standard external USB cable that you have to route out the back of the case and plug into one of the USB 3.0 ports on the back of the motherboard.

      8. If I can get hold of more than a total of 5 drives, how many and what configuration would you recommend to maximize future upgrades?

      9. I've seen references to additional drives being added and just showing up in the pool. I've also heard of people replacing, say, 3TB drives with 6TB drives one at a time, and after the last swap-out it auto-expands?
      Keyed to the above:

      1. There's no best practice per se - whatever works for you. For uniformity with OMV's standard, the location should be in /srv. So, using your pool name, the mount point would be /srv/Pool1. (You'd have to set the mount point when creating the pool.) I didn't follow this convention. My mount point is at the root of the drive, at /ZFS1.

      Also, those 4 lines from the last post need to be applied before copying data onto the ZFS array. They're important for Linux permissions. Otherwise you'll have Solaris permissions on your zpool, which can cause problems.
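      (If you wanted to move an existing pool's mount point to /srv later, rather than recreating the pool, a one-line sketch would be: zfs set mountpoint=/srv/Pool1 Pool1 - just be aware that any shared folders pointing at the old path would need to be re-pointed afterward.)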

      2. RAID-Z1 = RAID5, with one parity disk. RAID-Z2 = RAID6 with two parity disks. The difference is the number of disk failures the array can deal with, without dying altogether. RAID5 allows for 1 disk failure, RAID 6 allows for two failed disks.
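      (Roughly, usable space is (number of disks - parity disks) x disk size, before ZFS overhead. So 5 x 3TB in RAID-Z1 gives you about 4 x 3TB = 12TB of raw usable space, while the same 5 disks in RAID-Z2 give about 9TB.)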

      3. In the Linux world, (outside of a user home directory) most of the file structure is set with permissions for the root user and the root group. Among other things, a "shared folder" sets folder access for the "users" group. (BTW: That's the actual name of the group, "users". If you create a new user, it goes in the users group by default.)
      There are 3 primary permissions: the Owner (typically root), the Group (usually root or users), and Other. It's important to note that "Other" means any user or any group that is NOT the Owner or in the Group specified.

      If you want to do anything with a network share, it must start with a "shared folder". I call it, the "base share".

      SMB (Samba) is layered on top of a base share. Meaning, if you don't have a shared folder as the base, you can't create an SMB share. SMB makes a shared folder visible or browsable on the network. Samba permissions can be more restrictive than the permissions on the base share. However, Samba cannot override the permissions set at the base share. What does that mean? If the permission on a base share is Owner:root Group:root Others:none, it doesn't matter if the Samba permission is set to Public "guests allowed" (the equivalent of Others). In this case, only the user or group "root" can get into it.
      On the other hand, if Others has read and write on the base share, and SMB is set to Public "no", only the Owner and the permitted Group will be able to access the share. (Samba becomes more restrictive than the base share.)
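      To illustrate the layering with a sketch (OMV writes smb.conf for you from the GUI settings, so this is only to show the idea, not something to hand-edit):

      chmod 770 /srv/Pool1/backups
      # base share: Owner and Group have full access, Others have none

      [backups]
         path = /srv/Pool1/backups
         guest ok = yes
         # even with guests allowed at the Samba layer, the 770 above still keeps "Others" out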

      4. You don't need to automate a scrub. Twice-monthly scrubs are already set up by the plugin. If you wanted to do one for some reason, the following commands would be useful.

      zpool scrub Pool1 Starts a scrub for your Pool1
      zpool scrub -s Pool1 Stops a scrub that's in progress
      zpool status -v Pool1 Shows the status of the pool

      (These can be copied into System, Scheduled Jobs if you wanted to automate them.)

      Depending on how much data is on the pool, a scrub may take a while. The status command will give an indication of progress. Starting a scrub and checking the pool status are also available in the GUI.
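      (If you ever did want your own schedule, the equivalent of a root cron entry like the one below would do it - just a sketch, since the plugin's built-in schedule already covers the normal case:

      0 3 1,15 * * /sbin/zpool scrub Pool1   (scrub at 3am on the 1st and 15th of each month) )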

      5. Yep, it has one in the front, (right in front of the lower hard drive bays), one in the rear, and 2 in the top (just above the Mobo and memory modules). They're all 120mm. The one in the front has a blue circular light (if you go for that sort of thing), and they all have the 4 pin molex (single speed). They all have "T" male/female connections so they don't monopolize a PS connector.
      I can say, with the fan blowing directly onto the drives (it's one inch away), their running temperature is just a couple of degrees above room temp. Because the case breathes well and all temps are low, right now I'm only using two of the four fans.

      6. This case has a bottom mounted PS as well, and there's a screen/filter in the bottom for the PS intake. Still, I turned the PS over to put the intake on the inside, because if the case is sitting on carpet of any thickness (it is) the bottom intake would be effectively blocked.

      7. There's only 3 USB3 ports up front (enough for my purposes) and there's a cable plug for the USB Mobo connection.

      8. (Setting aside case accommodation restrictions and the number of SATA ports on a given mobo.)
      I can't really answer this one - it's what you'd want to do. You could add a single disk and another file system. (I have a similar arrangement - a Z-mirror for data and an ext4 drive for client backups.)

      Where ZFS is concerned: What you did, when you created a pool is:
      A. Create a vdev. B. Put that vdev in a pool.
      With ZFS, you can grow a pool by adding another vdev. And while I've never done it, I understand that it's possible to mix vdevs in a pool, meaning you could add a mirror (vdev) to a RAID-Z1 (vdev) in the same pool. But here's the kicker: if you lose one of your vdevs, you lose the entire pool. There is no fault tolerance at the pool level. Fault tolerance must be taken into account at the vdev level.
      If it was me, since it's the safest way to go, I'd go with vdevs of mirrors, adding two drives at a time, but that comes at the steep cost of losing 50% of disk space. In any case, on a single host, under no circumstances would I ever put more than 5 disks in any kind of RAID5 array, even in ZFS. (Again, this is my opinion.)
      I looked this up for a rule of thumb. What I found was: RAID-Z1 (1 parity disk) - 3 to 5 disks; RAID-Z2 (2 parity disks) - 6 to 8 disks; RAID-Z3 (3 parity disks) - 9+ disks.
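      As a sketch of what "adding another vdev" looks like on the command line (device names are placeholders), growing your RAID-Z1 pool with a second RAID-Z1 vdev would be:

      zpool add Pool1 raidz1 /dev/sdf /dev/sdg /dev/sdh

      ZFS then stripes new writes across both vdevs. (Mixing vdev types in one pool makes zpool complain and ask for -f, which is usually a hint to think twice.)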

      9. I haven't heard or read anything about "auto expand" in ZFS. If it worked, it sounds like an odd way to "back" into an expansion. With solid backup, such a technique would not be needed. If a rebuild was in order, that's what backup is for. Otherwise, I'd add another vdev and call it done.
      In the bottom line, I'd provision for what I wanted up front. After that, ANYTHING I do that involves my array, I'd have already tested in a VM before attempting it. In the initial setup, my rule of thumb for capacity is to have 4 times the actual need (25% fill). Thereafter, I prepare to expand at 75% fill.

      ((On the server error when posting, it happens all the time. In my experience, the post is usually there.))

    • Sc0rp wrote:

      Re,

      flmaxey wrote:

      When it comes to the Grow button, "Grow" means to enlarge. Adding a drive that becomes a spare, or is used in recovery, is not "growing" an array.
      In terms of RAID, "growing" means only growing the/an array with additional disks (members) - no use case is implied here. RAID does not aim at "maximizing the usable space" by default, because it is made for maximum redundancy and safety, which means:
      - if the array is "clean", just add the new member as a spare
      - if the array is "degraded", take the new member as a (hot) spare

      But I think the button should really be named "Add drive" instead of "Grow" ... most people will struggle with RAID internals vs. personal expectations ...

      Anyway, the time for RAID is over. My opinion here is to remove it from OMV and use ZFS/BTRFS instead - best in conjunction with a script that asks for the use case(s) of the storage pool ... for media archives you should use SnapRAID/mergerfs.

      Sc0rp
      Well, at one time (in OMV 2.x) the "grow" button did just that - it would grow an mdadm array. Something changed in OMV 3.x, and I've posted in two different threads on the subject. In one thread I asked the Devs "why the change?", without getting an answer. (At least I didn't get a written answer.. :) ) In any case, I agree with you on two different accounts. "Grow" should do exactly that with the selected drive, and another button, "Add drive", could be added to install a drive as a hot spare. (But if they make it too easy, the forum might be flooded with array expansion horror stories.)

      I also agree that mdadm RAID has seen its day, and that day has passed. It wasn't designed for today's enormous multi-terabyte drives, so it's not really appropriate for even home use cases. One would be better off with ext4 on single drives.
      Of the two more modern file systems (ZFS/BTRFS), I'd prefer using BTRFS because it has a ton of flexible features. Unfortunately, the versions out in userland just aren't stable for RAID5 or 6. And for a mirror, I don't like the idea of losing a drive and having the last one standing locked into a permanent read-only mode. So, until BTRFS fixes make it out to us in new kernel releases, that leaves ZFS. So far, I'm very happy with it.
    • daveinfla wrote:

      Server board to support ECC these days and it's a requirement for ZFS
      No, it's not. This is one of the most fundamental misunderstandings around ZFS. ECC DRAM is NOT a requirement for ZFS.

      It's as easy as this:
      • if you love your data and hate bit rot, you spend the extra money on ECC DRAM
      • if you love your data and hate bit rot, you use modern filesystems that allow for data integrity checking and self-healing (on Linux that's either ZFS or btrfs)
      • if you don't want to spend the money on ECC DRAM, ZFS will be an even better choice, since bit rot is more likely to happen and ZFS can protect you
      • the scrub of death is a myth and does not exist. ECC DRAM is NOT a requirement to use ZFS
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • Re,

      daveinfla wrote:

      From what I've read, you can't find anything other than a server board to support ECC these days, and it's a requirement for ZFS.
      That is wrong on both counts:
      - ECC is not a requirement; it is only highly recommended in a working environment (even SOHO) ... even by the Devs
      - ECC is not bound to "server boards" but to "server-grade chipsets" - you have to search and read a lot more! (I use the ASRock E3V5 WS for my NAS (with ECC RAM, of course, because I love my data :D))

      Sc0rp
    • Sc0rp wrote:

      ECC is not bound to "server boards" but to "server-grade chipsets"
      And sometimes those server-grade chipsets appear on hardware that looks like a toy: forum.openmediavault.org/index.php/Thread/18597 ;)

      In my personal opinion, ECC DRAM, when used, should always be monitored (we have quite a bunch of servers in our monitoring, and it's interesting that single bit flips -- correctable -- happen here and there, though we have never had an indication of a memory module starting to show more and more errors over time and therefore needing to be replaced. At least that's the sole reason for monitoring this stuff: an early-warning system for dying memory modules). I'm always surprised by people blindly trusting the technology and not checking stuff like EDAC logs (automatically, of course).
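      (For anyone who wants to check this by hand on a Linux box, the corrected-error counters are exposed through EDAC - for example with the edac-utils package, edac-util --report=full, or by reading the counters straight out of /sys/devices/system/edac/mc/. Feeding that into whatever monitoring you already run is the part that matters.)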
    • tkaiser wrote:

      In my personal opinion, ECC DRAM, when used, should always be monitored (we have quite a bunch of servers in our monitoring, and it's interesting that single bit flips -- correctable -- happen here and there, though we have never had an indication of a memory module starting to show more and more errors over time and therefore needing to be replaced.
      Roughly, how often do you see a "flipped bit"? The stories out there about "cosmic rays" flipping RAM module bits are believable, but one would think that such an event would be rare.
      On the other hand errors happen and, even at extremely low rates, they corrupt data. If for no other reason but correcting those errors, ECC is a good idea.
    • flmaxey wrote:

      Roughly, how often do you see a "flipped bit"?
      Not relevant, since the sample size is too small (fewer than 30 servers, and some of them are ECC DRAM equipped but not able to monitor potential problems since they're running macOS). The only interesting observation is that these servers show single bit flips in production while surviving a 72-hour memtester burn-in test without an error.

      These are the interesting numbers: cs.toronto.edu/~bianca/papers/sigmetrics09.pdf (fairly old, so with the further process/die shrinks of the last few years, the necessity to compensate for bit flips has risen even further. And it's no wonder that we now see on-die ECC specifications even for mobile devices -- a quick web search for e.g. 'ecc lpddr4' should give the idea)
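      (For reference, a burn-in like that can be run with the memtester utility, e.g. memtester 4G 10 to hammer 4 GB of RAM for 10 loops - those figures are just placeholders, you'd size it to the machine.)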
    • tkaiser wrote:

      ... The only interesting observation is that these servers show single bit flips in production while surviving a 72-hour memtester burn-in test without an error.
      These are the interesting numbers: cs.toronto.edu/~bianca/papers/sigmetrics09.pdf (fairly old, so with the further process/die shrinks of the last few years, the necessity to compensate for bit flips has risen even further. And it's no wonder that we now see on-die ECC specifications even for mobile devices -- a quick web search for e.g. 'ecc lpddr4' should give the idea)
      Wow. That is interesting...

      And just the abstract on the white paper indicates it will be an interesting read.

      Thanks