Add drives to an existing array, then extend the existing partition/file system?

  • I'm new to OMV and just started playing with it under VirtualBox, running OMV 3.0.94.


    I created a RAID5 Array with 4 disks and used EXT4 for the file system. Created 1 share.


    I attempted to add 2 additional drives to the existing array from the GUI, hoping to grow/extend it (add to the existing size of the array), however you want to refer to it, but the GUI simply added them as spares.


    I searched before posting, and all I've found are references to people giving up, backing up their data, and recreating the array with the new drives included.


    This is a simple task under most operating systems (OK, I'll say it: a Windows server). I can't believe this is a difficult task under OMV?


    Any takers?


    **Update** - I found a workaround from the CLI; however, I'd still like to know if there's a way to do this from the GUI?


    https://zackreed.me/adding-an-extra-disk-to-an-mdadm-array/
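    For anyone who finds this later, the workaround from that link boils down to roughly the following (device and array names are examples only; check yours with cat /proc/mdstat first):

        # Add the new disks; mdadm/the GUI will initially list them as spares
        mdadm --add /dev/md0 /dev/sde /dev/sdf

        # Reshape so the spares become active members (here: 4 -> 6 devices);
        # some mdadm versions may also ask for a --backup-file
        mdadm --grow /dev/md0 --raid-devices=6

        # Watch the reshape progress
        cat /proc/mdstat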

  • I just did something similar, where I replaced my current 4TB drives with bigger 6TB drives and let the GUI rebuild my array after each drive replacement. Then I resized the array with mdadm --grow /dev/md127 --size=max


    I hope this helps


    Growing a mdadm RAID by replacing disks
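    For reference, the same swap can also be done entirely from the CLI; a rough sketch only (device names are examples, and I did the rebuild part through the GUI instead):

        # Repeat for each disk: fail/remove the old one, add the bigger one, wait for the rebuild
        mdadm --fail /dev/md127 /dev/sda
        mdadm --remove /dev/md127 /dev/sda
        mdadm --add /dev/md127 /dev/sde
        cat /proc/mdstat          # wait until the resync has finished before the next disk

        # After the last disk has been swapped, let the array use the extra capacity
        mdadm --grow /dev/md127 --size=max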

    Let me clarify: I am not replacing existing disks with larger ones. I have 4 drives configured as RAID5; I want to add 2 more drives to the same array as additional members, expand the array, and then expand the partition (file system).


    At present the GUI has added the two additional disks as spares, not additional members for expansion.

  • Re,


    The OMV web GUI does not (currently) support mdadm's "grow" operation - it just "adds" a drive to an array, with two possible behaviours:
    - the array is missing a member -> the drive is added as a spare and the rebuild starts immediately
    - the array is not missing a member -> the drive is added as a spare
    (seen from RAID's point of view, these are the same thing ...)


    You can grow your RAID array only via the console/shell; after growing the array, you can grow your FS ...
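    For example, once the reshape has finished, growing an ext4 file system that sits directly on the md device (the usual OMV layout) is a one-liner; the device name below is just an example:

        cat /proc/mdstat        # make sure the reshape/resync is complete first
        resize2fs /dev/md0      # grows the mounted ext4 FS to fill the enlarged array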


    EDIT:

    Along other lines, I know you're testing, but a realistic real-world limit for software RAID5 is 5 disks max.

    Where did you get that? I would say it highly depends on the use case ...


    Sc0rp

  • This box is for backups only and is NOT my only backup. My critical data is stored offsite.
    I'm currently using a QNAP TS-212 with 2 x 3TB drives which I'll reuse for the new box along with at least 2 more I'll purchase.
    The NAS is used to store backup images of a Windows server and my desktop PC (StorageCraft continuous snapshots). If I lose them I can start the backups over after rebuilding; I'll just lose my snapshot history. If the house burns down I have my offsite backup.


    My primary server is a Windows 2016 server acting as a file/media server. OS = Raid 1, Data = Raid 5 (4x3TB)


    I have the following older, but fully functional, parts at my disposal.


    2 x SuperMicro X8STE Rev. 2.00 server motherboards (1 to use, 1 for a spare)
    It has 2x Intel NICs and an integrated 6-port Intel ICH10R SATA controller
    Intel Xeon L5609 1.87GHz processor
    4 x 4GB of Hynix DDR3 Unbuffered ECC memory (HMT351U7BFR8C)
    In addition to the integrated SATA, it also has an add-on LSI MegaRAID controller (w/ battery) that supports 6 drives. I have a spare for this too.


    These were two SuperMicro 2U servers that I'm gutting; the fans and power supplies are just too loud for a home environment.


    I plan to re-use the existing 2x3TB drives from my QNAP and buy at least 2 more. They're WD Red WD30EFRX drives.
    I'll be buying a Thermaltake Commander MS-I mid-tower case, which supports 10 3.5" SATA drives with plenty of airflow, along with a 700 Watt power supply and an extra case fan. My Windows server is set up the same way and I love the case.


    With that said, I know enough about Linux to follow directions, and I have zero experience with ZFS. I've read up a little on it and watched some videos; it sounds like the be-all and end-all of data integrity, but I question whether it's necessary in my scenario?


    I have 1TB drives I can mirror for the OS, unless this is overkill. Can OMV be run on a single drive, with the config backed up, and easily rebuilt and re-attached to the data volume(s)? Or am I better off with the OS mirror?


    As for the data volume(s): I'm trying to squeeze as much storage as possible out of the fewest drives due to a limited budget. I planned on 2 additional drives but might be able to fit in a 3rd. A 4 x 3TB RAID5 array gives me around 8TB of usable space.


    Where would I be with ZFS? If I understand it correctly, RAID-Z1 is a waste of time and RAID-Z2 would leave me with 4.7TB of usable space, which isn't enough.


    I'm up for suggestions/recommendations.


    Thanks,


    Dave

  • Thanks for taking the time with the detailed response.


    Note: my reference to "2 x SuperMicro" above was actually 2 motherboards, not 2 processors. Nonetheless, I have 2 complete boards should one go down. From what I've read you can't find anything other than a Server board to support ECC these days and it's a requirement for ZFS.


    Prior to your instructions last night, I had already stumbled upon a walk-through of installing OMV-Extras, so I simply installed the ZFS plugin. It took a bit, and at one point I thought it had hung, but it finally finished. Now I have a nice shiny ZFS menu item under Storage!


    I proceeded with creating my one and only pool, named Pool1. Is there a best practice for naming? Should I probably have made it all lower case?


    I then added 5 VHDs to it, using the mount point /Pool1.
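    (If I understood the plugin correctly, the CLI equivalent would be something like the command below, assuming a RAID-Z1 layout; the disk names are just placeholders for my five VHDs, so please correct me if that's not what it does:)

        # Create a RAID-Z1 pool named Pool1 from five disks, mounted at /Pool1
        zpool create -m /Pool1 Pool1 raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
        zpool status Pool1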


    I saw a reference to RAID-Z1 being geared to an odd number of drives and RAID-Z2 to an even number of drives; is this correct? Or was this specific to FreeNAS?


    I created a shared folder /backups and an SMB share named backups.


    What's the difference between Shared Folders under Access Rights Management and the SMB share?


    How do I automate a Scrub?
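    (From the little I've read, a plain cron entry would do it if it ever has to be scheduled manually; the pool name below is my Pool1, and I gather Debian's zfsutils package may already ship a monthly scrub job - please correct me if I'm off base:)

        # /etc/cron.d/zfs-scrub - scrub Pool1 on the 1st of every month at 02:00
        0 2 1 * * root /sbin/zpool scrub Pool1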


    I found an Open-ZFS Bootcamp video on YouTube; it's a few years old, but I figured it was a start. Unless you know of a newer one?


    [Embedded YouTube video: Open-ZFS Bootcamp]


    I was looking at the Diablotek case you referenced; does it really come with 4 120mm fans? The Thermaltake only includes the rear one; I have to buy one for the front.


    The Thermaltake I like has one less 3.5" slot; however, it includes a removable filter over the bottom-facing power supply fan inlet and filter membranes on the inside of the face plate to keep out dust. They really keep the interior clean.


    The only thing I don't like about the Thermaltake is how they have the front USB 3.0 port set up. It's a standard external USB cable that you have to route out the back of the case and plug into one of the USB 3.0 ports on the back of the motherboard. Why, I don't know; someone just wasn't thinking. I bought a converter so I could plug it directly into the motherboard.


    https://amazon.com/gp/product/…_detailpage?ie=UTF8&psc=1


    You were right about the power supply; I re-ran the parts through a couple of other power calculators and they both came up with a load of 164W and a recommended 214W PSU. I can get a quiet 500W unit with a silent 135mm fan for cheap.


    If I can get hold of more than a total of 5 drives, how many, and in what configuration, would you recommend to maximize future upgrades?


    I've seen references to additional drives being added and just showing up in the pool. I've also heard of people replacing, say, 3TB drives with 6TB drives one at a time, and after the last swap-out it auto-expands?
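    (If I've understood the docs correctly, that swap-and-grow trick relies on the pool's autoexpand property, roughly like this - pool and disk names are only examples, so please correct me if I have it wrong:)

        # Let the pool grow automatically once every member has been enlarged
        zpool set autoexpand=on Pool1

        # For each disk: replace the 3TB with a 6TB and wait for the resilver
        zpool replace Pool1 /dev/sdb /dev/sdg
        zpool status Pool1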


    Thanks for your time.


    Dave

  • Re,

    When it comes to the Grow button, "Grow" means to enlarge. Adding a drive that becomes a spare, or is used in recovery, is not "growing" an array.

    In RAID terms, "growing" means only enlarging the/an array with additional disks (members) - no particular use case is implied here.
    RAID is not meant to "maximize the usable space" by default, because it is made for maximum redundancy and safety, which implies:
    - if the array is "clean", just add the new member as a spare
    - if the array is "degraded", use the new member as a (hot) spare


    But I think the button really should be named "Add drive" instead of "Grow" ... most people will struggle with RAID internals vs. personal expectations ...


    Anyway, the time for RAID is over. My opinion here is to remove it from OMV and use ZFS/BTRFS instead - ideally in conjunction with a script which asks for the use case(s) of the storage pool ... for media archives you should use SnapRAID/mergerfs.


    Sc0rp

  • As you test this:
    If you want to replicate data from Windows server network shares onto OMV, I've done it before. And in one case, with regular replication from my Theater PC (Win 7) to OMV, I'm still doing it.


    If you're interested in a "how to":
    When you're ready, I can guide you through my method. (As it is with most things, there's more than one way.)

    I wish I had spotted the bootcamp I referenced above, and OMV, a long time ago. ZFS here I come!


    I'm always interested to see how others are handling their environments. For now I'll be working on collecting some drives and building out my OMV server. As mentioned above, I'm currently taking images of the entire Windows server and storing them on my NAS.


    Did you get a chance to review my previous post? I responded to some of your questions and had others of my own...


    I did find the Guides section on this board to review, but any best practices you can share would be greatly appreciated.


    Thanks,


    Dave


    P.S. Every time I submit a reply I get an error similar to the following; any ideas what's up?


    The server encountered an unresolvable problem, please try again later.


    Exception ID: 9d04e0108d0ba5acdf8e204edc41007f8da30b06

  • Server board to support ECC these days and it's a requirement for ZFS

    No, it's not. This is one of the most fundamental misunderstandings around ZFS. ECC DRAM is NOT a requirement for ZFS.


    It's as easy as this:

    • if you love your data and hate bit rot, you spend the extra money on ECC DRAM
    • if you love your data and hate bit rot, you use modern file systems that allow for data integrity checking and self-healing (on Linux that's either ZFS or btrfs)
    • if you don't want to spend the money on ECC DRAM, ZFS is an even better choice, since bit rot is more likely to happen and ZFS can protect you from it
    • the "scrub of death" is a myth and does not exist. ECC DRAM is NOT a requirement to use ZFS
  • Re,

    From what I've read you can't find anything other than a Server board to support ECC these days and it's a requirement for ZFS.

    That is wrong on both counts:
    - ECC is not a requirement; it is only highly recommended in a working environment (even SOHO) ... even by the devs
    - ECC is not bound to "server boards" but to "server-grade chipsets" - you have to search and read much more! (I use the ASRock E3V5 WS for my NAS (with ECC RAM, of course, because I love my data :D))


    Sc0rp

  • ECC is not only bound to "serverboards" - but to "servergrade chipsets"

    And sometimes those servergrade chipsets appear on hardware that looks like a toy: https://forum.openmediavault.org/index.php/Thread/18597 ;)


    In my personal opinion, ECC DRAM, when used, should always be monitored. We have quite a bunch of servers in our monitoring, and it's interesting that single bit flips (correctable) happen here and there, though we have never had an indication of a memory module starting to show more and more errors over time and therefore needing to be replaced. At least that's the sole reason for monitoring this stuff: an early-warning system for dying memory modules. I'm always surprised by people blindly trusting in technology and not checking stuff like EDAC logs (automatically, of course).
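    For reference, "checking EDAC logs" on Linux mostly means reading the corrected/uncorrected error counters the kernel exposes; tools and paths vary a bit by distro, but roughly:

        # Per-memory-controller corrected (ce) and uncorrected (ue) error counts
        grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count

        # Or, with the edac-utils package installed, a verbose report
        edac-util -v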

  • Roughly, how often do you see a "flipped bit"?

    Not relevant, since the sample size is too small (less than 30 servers, some of them equipped with ECC DRAM but unable to monitor potential problems since they run macOS). The only interesting observation is that these servers show single bit flips in production while surviving a 72-hour memtester burn-in test without an error.


    These are the interesting numbers: http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf (fairly old, so with the further process/die shrinks of the last few years the necessity to compensate for bit flips has only risen. And it's no wonder that we now see on-die ECC specifications even for mobile devices -- a quick web search for e.g. 'ecc lpddr4' should give the idea)

  • you've been testing ZFS on non-ECC ram

    Sorry, but there is nothing to test (besides Rowhammer and friends). It's only about expectations.


    One basic misunderstanding about hardware is that 'hardware works as it's supposed to'. No, it doesn't; hardware fails pretty often. The idea that 0 is 0 and 1 is 1 is nice, but sometimes it takes a lot of effort for that to really be the case. The most basic lesson here is that DRAM can fail, and does fail from time to time, even with some added redundancy like simple ECC (there's more advanced stuff like IBM's Chipkill too).


    Then there's this other misunderstanding, that ZFS, due to its implementation of data integrity checking and self-healing capabilities, would cause great harm when used on systems lacking ECC memory. That's just the 'scrub of death' myth, which unfortunately an awful lot of people believe in (me, for example, being among them for quite some time).


    This needs no testing, just some understanding: ECC DRAM and ZFS are two different things; the former is not a requirement for the latter. If you love your data you choose both, and if you can't or don't want to afford ECC DRAM there are even more reasons to choose ZFS (or btrfs or ReFS, not talking about Apple's APFS yet).

  • If you want to do anything with a network share, it must start with a "shared folder". I call it, the "base share".

    I knew this; I think it was the way it was presented in the GUI that confused me. It's the same thing with NTFS permissions and shares.



    4. You don't need to automate a scrub.

    Good to know. I'm sure it's mentioned somewhere if I dig far enough?



    I use 32GB USB 3.0 thumb drives

    USB drives are easy to clone

    I spotted a mention of the thumb drive plug-in elsewhere. Can you explain how you clone your thumb drives? Can it be done while the production one is in use, or do you have to do this on, say, a workstation using a clone utility?


    What's involved in recovery if the flash drive fails?
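    (The only method I've come across so far is dd from another machine with both sticks offline - is that roughly what you do, and is recovery then just a matter of booting from the clone? The device names below are placeholders, to be double-checked with lsblk first:)

        # Clone the OMV thumb drive (/dev/sdX) onto a spare stick (/dev/sdY), both unmounted
        dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync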


    Along the same lines, I've seen mention of using SSDs as a cache. Is this overkill for, say, a 4-5 drive (RAID-Z1) system running off of a USB flash drive?



    ECC is not only bound to "serverboards" - but to "servergrade chipsets"

    if you love your data and hate bit rot you spend the extra money on ECC DRAM

    I misspoke; what I meant to say is exactly what you both mentioned about server chipsets and ECC, other than maybe an actual workstation motherboard and of course the ASRock, which I did entertain until I found the 2 Supermicro boards lying around with 16GB of ECC memory in one of them :). To me, using ECC is a no-brainer; however, I can see the added cost of a server-grade board, processor, or the ASRock putting some people off.


    I think the "ECC required" rumor may have started with FreeNAS; they simply insist you use it.


    Someone mentioned AMD. I'm personally not a fan of AMD, and I've read elsewhere that their support for ECC isn't fully implemented. I tried to find the original article but found the following instead:


    http://www.hardwarecanucks.com…amds-ryzen-deep-dive.html



    I have a couple of new questions, just trying to plan ahead.


    I understand that ZFS wants direct control of the drives, so using the on-board SATA controller, in JBOD mode, is preferred over a hardware RAID controller. The Supermicro has 6 connectors, which will be plenty for now. However, if I ever need to upgrade in the future, is there a preferred add-on JBOD card anyone can recommend?


    Could someone please explain the versions listed on SourceForge? I totally missed the link "Looking for the latest version?" with 3.0.86 listed, and downloaded 3.0.94 at the top of the list. Is 3.0.94 a beta? What about 4.0.14? Neither the download page nor the SourceForge page is very clear on what's the latest release vs. a beta...


    How about UPS support? Is there a plug-in for monitoring and automatic shutdown?


    I have an APC 1500VA on my primary server; however, it's USB based. Is there a Windows utility that would report to the OMV box, allowing it to auto-shutdown? Or should I just use a separate UPS for the OMV server?


    Thanks,


    Dave

  • However, if I ever need to upgrade in the future, is there a preferred add-on JBOD card anyone can recommend?

    If it's about 4 more SATA ports combined with spinning rust and almost all topologies, IMO a cheap Marvell 88SE9215 is sufficient (there is a slightly more expensive version using 2 PCIe 2.x lanes that is a better idea when accessing SSDs or using performant storage topologies, though). Personally, I also prefer that these things aren't reflashed RAID controllers, since those might adjust drive geometry when operated in the wrong modes (most probably one of the reasons ZFS stores its own metadata at the end of the disks?).


    Wrt the ECC DRAM 'requirement': IMO postulating it makes some sense from a support perspective (you don't want to deal with average Joe every other day playing RAID-Z on crap, failing due to crappy hardware and blaming your software as the reason; it's boring, annoying and just a waste of time). And while I'm a strong supporter of ECC DRAM, I'm not happy with this wrong conclusion that got spread. Especially if your system is not equipped with ECC DRAM, using ZFS, btrfs, ReFS (and hopefully soon APFS) is even more important: you have increased the risk of bit rot, so you deserve to be informed easily and early when it happens. That's what this whole checksumming stuff is all about.


    OMV 3 is stable since at least 3.0.8x and for OMV 4 you just need to wait for an official announcement.

  • How about UPS support? Is there a plug-in for monitoring and automatic shutdown?


    I have an APC 1500VA on my primary server; however, it's USB based. Is there a Windows utility that would report to the OMV box, allowing it to auto-shutdown? Or should I just use a separate UPS for the OMV server?

    For OMV there is the NUT plugin available, which provides monitoring and automatic shutdown of the OMV box. To use the UPS for both servers you have to do a few things:

    • Connect the power lines of both servers to the UPS
    • Remove the USB connection of the UPS from your Windows server and connect it to the OMV box
    • Enable and configure remote monitoring in the NUT plugin of OMV.
    • Use a NUT client (e.g. WinNUT) on the Windows server to watch the UPS state reported by OMV

    Maybe it is also possible to do it the other way around, with a NUT client on OMV. I've never looked into that.
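    Roughly, the configuration that ends up on disk looks like the following sketch (the plugin writes these files for you; the UPS name "apc", the host name and the password are placeholders, and upsd also has to listen on the LAN, which the plugin's remote-monitoring/netserver option takes care of):

        # /etc/nut/ups.conf on the OMV box (UPS connected via USB)
        [apc]
            driver = usbhid-ups
            port = auto

        # MONITOR line used by a remote client such as WinNUT
        # (syntax: MONITOR <ups>@<host> <powervalue> <user> <password> <type>)
        MONITOR apc@omv-box 1 monuser secret slave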


    In this thread another user had a similar question: UPS Nut remote monitoring, where he wrote how to configure it.


  • 3.0.86 vs. 3.0.94

    There's no such thing. OMV 3 is stable, so simply use whatever installation method you want, apply the latest upgrades, and you're at 3.0.94 anyway.


    I don't try to do one-to-one consultations here (I'm just trying to clarify some stuff from time to time in the hope that some ideas get spread), so it's totally up to you to decide whether you start with OMV 3 or (soon to be ready) OMV 4.


    IMO it's very important to understand what you're doing when choosing your NAS setup, and that's also my main point of criticism when we're talking about OMV in general. The ease of use lets users do stupid things way too often (talking about this RAID5 madness here)

  • The intent of the post was to point out that ZFS works on non-ecc systems and that (even if unnoticed) Dave verified that for himself.

    'Works' means nothing.


    You need data corruption caused by bit flips in memory to get an idea of how ZFS copes with that. Funnily enough, with ARM boards this is pretty easy, since for whatever reasons even mainline u-boot maintainers usually don't give a shit about reliability and commit wrong DRAM clockspeeds upstream.


    So if you really want to test for this specific issue, it's easiest with a cheap ARM board like the NanoPi NEO2. Simply overclock the DRAM to 696, 720 or 744 MHz and you get bit flips for free.

  • for whatever reasons even mainline u-boot maintainers usually don't give a shit about reliability and commit wrong DRAM clockspeeds upstream.

    Just as a reference (for Google or whoever), this is how Armbian (and so OMV too) tries to prevent this: https://github.com/armbian/bui…lt-dram-clockspeeds.patch


    The majority of these lowered clockspeeds are confirmed to work stably by at least some users. If I roll out tiny NAS devices to do this and that, I usually apply patches that lower the DRAM speed by another 48 MHz just to get some safety headroom.


    But with this in mind it's really easy to test ZFS data integrity behaviour. Simply use a platform that allows you to turn your memory into something producing bit flips every now and then (as soon as you do, you'll also understand a certain amount of the Windows BSODs happening out there)
