Posts by Sc0rp

    Re,

    4x read and 2x write speed gain

    only in theory ...


    When you create an array, it mostly ends up in one of two states: clean/working ...

    or broken/dead/failed ...


    What does degraded mean?

    That one or more drives are missing. In your mdstat output, only two of four drives are listed ...


    You should fix that first - look in the logs for information about the failing drives!


    Sc0rp

    Hi,


    right, raid-devices must be the full number of all drives, including the new one(s) ... so, if you upgrade from 5 to 6:
    mdadm --grow --raid-devices=6 /dev/mdX
    and if you upgrade from 5 to 8:
    mdadm --grow --raid-devices=8 /dev/mdX


    The -v switch turns on verbose output ...


    After finishing the reshape, you have to "resize" your filesystem too.
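
    For example (just a sketch - assuming the array is /dev/md0 and mounted at /srv/data; which command you need depends on the filesystem on it):
    resize2fs /dev/md0          # ext4: grow the filesystem to the new array size
    xfs_growfs /srv/data        # XFS: grow via the mount point of the mounted filesystem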


    Sc0rp

    Hi,

    I have a 6 disk RAID 6

    Why use RAID / why choose RAID6?


    so I suspect it is recoverable

    Nope, R6 with three missing disks is not recoverable ... but maybe you'll get one more member running.


    but have no idea where to begin.

    List your hardware (some details about the mainboard and/or the SATA controller, and of course the HDDs used) ... then look into the log messages of your system, maybe you'll find some hints about the errors ...


    The only chance you have (short of a backup :P) is to recover one or more superblocks from the RAID - it is a common bug that these superblocks are not written "physically" to the media and get lost during a reboot (or power loss), but this will be very difficult ...


    Did you read https://raid.wiki.kernel.org/index.php/RAID_Recovery ? Or any other RAID-recovery article that deals with losing more drives than the redundancy allows? (which ones?)


    Start by playing around with:
    mdadm --examine /dev/sd[...]
    maybe you'll find some backup superblocks ... otherwise your data is gone
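
    A rough sketch for that (assuming the former members are sdb through sdg - adjust to your drive letters):
    for d in /dev/sd[b-g]; do
        echo "== $d =="               # which member we are looking at
        mdadm --examine "$d"          # dump whatever superblock is still readable on it
    done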



    Sc0rp

    Re,

    Which possibilities are now valid?

    Always use a backup ... then you are safe! Especially when using R1 ...


    Next, use the switch "--backup-file=" and point it to a real HDD directory (not tmpfs!!!), then you can migrate and grow from R1 to R5 in one step!
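
    Roughly like this (only a sketch - assuming the mirror is /dev/md0, the new disk is /dev/sdc and /root lives on a real disk):
    mdadm --add /dev/md0 /dev/sdc     # the new disk goes in as a spare first
    mdadm --grow /dev/md0 --level=5 --raid-devices=3 --backup-file=/root/md0-grow.bak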


    but this RAID-5 is not the same as a real RAID-5

    Where did you get that? If you migrate from R1 to R5, mdadm reorders the data, just like any RAID controller would, so after this process finishes you'll have a "real" R5 ...


    Sc0rp

    Re,

    Yes, there was something about moving partitions "to the front".

    That depends entirely on how OMV created the swap partition ... and it only matters if you go by the old /dev/sdX numbering! OMV3 uses the partitions' UUIDs, so nothing has to be moved!


    Simply shrink the first partition and create a new (primary) partition in the freed-up space ... the rest (formatting and mounting) is then done via the OMV WebGUI again.
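
    Roughly like this from a live system (only a sketch - assuming an ext4 system partition /dev/sda1 shrunk to 20 GiB; double-check the numbers against your own layout first):
    e2fsck -f /dev/sda1                              # the filesystem must be clean before shrinking
    resize2fs /dev/sda1 20G                          # shrink the filesystem first
    parted /dev/sda resizepart 1 20GiB               # then shrink the partition to match
    parted /dev/sda mkpart primary ext4 20GiB 100%   # new primary partition in the freed space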


    Sc0rp

    Re,

    during setup it ALWAYS took the entire disk... I don't know why

    That is the default behavior of the OMV ISO; you can't change anything about it during the installation.


    If you take the NetInstall route instead, you can adjust the partitioning manually during the installation ... and then install OMV afterwards as described in the above-mentioned thread ...


    Sc0rp

    Re,

    the funniest part is in my console I have this message :
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays

    That's normal, because OMV uses pure superblock autodetection ... no need for static configuration ... normally.


    This situation came back again

    Because one of the drives of array "md2" is causing problems ... just check the logs and SMART data on both members AFTER the resync is finished:

    Update Time : Wed Dec 6 13:34:43 2017
    State : clean, resyncing
    [...] Resync Status : 7% complete

    you can check the ongoing resync with:
    cat /proc/mdstat
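
    And for the log/SMART check afterwards, something like this (assuming the md2 members are sda and sdb):
    smartctl -a /dev/sda                   # SMART health and error counters of the first member
    smartctl -a /dev/sdb                   # same for the second member
    dmesg | grep -i -E 'md2|ata|error'     # kernel messages about the array and the drives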



    Btw. ... may I ask why you use this layout:

    Basically I have 3 RAID1
    md0 for boot
    md1 for LVM
    md2 for data

    RAID1 only protects against a drive failure, but that will usually occur much later than any data corruption (silent or accidental).


    Sc0rp

    Re,

    From what I've read you can't find anything other than a Server board to support ECC these days and it's a requirement for ZFS.

    That is wrong on both points:
    - ECC is not a requirement, it is only highly recommended in working environments (even SOHO) ... even by the devs
    - ECC is not bound only to "server boards" but to "server-grade chipsets" - you have to search and read a lot more! (I use the ASRock E3V5 WS for my NAS (with ECC RAM, of course, because I love my data :D))


    Sc0rp

    Re,

    When it comes to the Grow button, "Grow" means to enlarge. Adding a drive that becomes a spare, or is used in recovery, is not "growing" an array.

    In RAID terms, "growing" only means enlarging an array with additional disks (members) - no particular use case is implied here.
    RAID is not about "maximizing the usable space" by default, because it is made for maximum redundancy and safety, which implies (see the sketch below):
    - if the array is "clean", the new member is simply added as a spare
    - if the array is "degraded", the new member is taken as a (hot) spare
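
    In mdadm terms that is the difference between (a sketch, assuming /dev/md0 and a new disk /dev/sde):
    mdadm --add /dev/md0 /dev/sde             # just adds the disk: spare, or rebuild target if degraded
    mdadm --grow /dev/md0 --raid-devices=5    # actually grows the array onto the additional member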


    But I think the button should really be named "Add drive" instead of "Grow" ... most people will struggle with RAID internals vs. their personal expectations ...


    Anyway, the time of RAID is over. My opinion here is to remove it from OMV and use ZFS/BTRFS instead - ideally in conjunction with a script that asks for the use case(s) of the storage pool ... for media archives you should use SnapRAID/mergerfs.


    Sc0rp

    Re,

    I would use RAID-Z2 for nearline storage where downtime could hurt. But nobody needs that at home (unless you also rely on snapshots and want to save yourself the backup -- then it looks completely different again)

    I would rather make that depend on the number of disks in the pool - my gut feeling is to go with Z2 from about 6 HDDs onwards, since the probability of errors rises not only with the base size of ONE disk, but also with the number of disks joined together ... but I am not a ZFS pro (yet).
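
    For reference, a 6-disk Z2 pool would look roughly like this (just a sketch, assuming the pool name "tank" and whole-disk members sdb through sdg):
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg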


    Sc0rp

    Re,

    I guess if the RAID has been rebuilt after disks had dropped out there's going to be inconsistencies all over the place.

    From my experience: the inconsistencies occurred WHILE the drive was dying ... but the result remains the same ...


    Will swap out the other 2 discs when they arrive, and then maybe run xfs-repair and see what's salvageable but I'm thinking this might just have to be given up as a bad job, and rebuild my media collection from scratch (and of course, create a back-up next time!).

    Yeah, maybe you can finally "force" xfs_repair into a clean state (at the very least you can zero the journal ...) - just search the internet for "man xfs_repair" :D
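
    Something along these lines (a sketch, assuming the filesystem is on /dev/md0 and unmounted):
    xfs_repair -n /dev/md0      # dry run first, only reports what it would fix
    xfs_repair -L /dev/md0      # last resort: zeroes the log/journal, then repairs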


    Btw. I also had many problems using XFS on top of an old Areca HW RAID controller, due to a bug in the driver (kernel module), but I never lost data ...


    If you ever have the chance to rebuild your current array from scratch, consider using ZFS-Z1 or ZFS-Z2 instead; it's more convenient nowadays ... and as a special benefit 4 me, @tkaiser is then in charge :P ... uhm, just kidding ... a bit.


    Sc0rp

    Re,


    yeah, it's degraded, because drive sdd is lost ... check the logs and the SMART data for that drive to get the root cause.


    I currently have too little time to dig deeper here, but there are many posts from me in other threads covering this kind of problem, please use the search function.


    Note: your blkid output shows that your USB drive is now sdd! (so sdf is missing ... was it renumbered?)


    Sc0rp

    Re,


    The OMV WebGUI does not (currently) support mdadm's "grow" command - it just "adds" a drive to an array, with "two" possible behaviors:
    - the array is missing a member -> the drive is added as a spare and the rebuild starts immediately
    - the array is not missing a drive -> the new drive is added as a spare
    (seen from RAID's point of view, these are the same thing ...)


    You can grow your RAID array only via the console/shell; after growing the array, you can grow your FS ...


    EDIT:

    Along other lines, I know you're testing but a realistic real world limit for software RAID5 is 5 disks max.

    Where did you get that? I would say it highly depends on the use case ...


    Sc0rp

    Re,

    Is it possible in OMV to set up another RAID separately, i.e. combine e.g. 2 or 3 HDDs into one RAID array?

    That is a bit hard to understand - but on Linux pretty much anything is possible in principle (even ZFS!).
    The important thing is that you work out a concept - just buying disks and then somehow cobbling them together afterwards is a bit ... shallow. :D


    From experience, I would rather not want to use variant one.

    What experience? Variant 1 would, after all, be the native way ... and quite easy to accomplish.


    If you want something for the future, ZFS on new 12TB disks would certainly be the best/better way!


    @tkaiser: can you "level up" from ZFS-Z1 to ZFS-Z2 (inline, like you can go from RAID level 5 to level 6)?
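
    (For reference, the mdadm side of that would be roughly - a sketch, assuming /dev/md0 goes from 4 to 5 devices with the extra disk already added:)
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-r5to6.bak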


    Sc0rp

    Re,

    What are these anecdotes actually based on?

    On my own measurements with HW RAID controllers and MDRAID, although they are already a few years old (back then with EXT3 and various "stride" and "stripe-width" combinations, tested with iozone).


    ... so at least as far as EXT & XFS are concerned, these stripe and sw/su settings can influence performance quite noticeably - of course you shouldn't expect miracles from correct values (they don't ignite a turbo), but the losses from a misconfiguration are clearly measurable.
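
    For example, for ext4 on a 4-disk RAID5 with 512 KiB chunks (a sketch: 3 data disks, 4 KiB blocks, so stride = 512/4 = 128 and stripe-width = 128 * 3 = 384):
    mkfs.ext4 -E stride=128,stripe-width=384 /dev/md0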


    and then, in all seriousness, to also mix LVM into it, which many people like to do

    The approach of running LVM on top of an MDRAID (or HW RAID) came from O+P Insights back then (if I remember correctly). BACK THEN it definitely had technical relevance (there was no ZFS/BTRFS for Linux yet) - today it is obsolete.


    is this all just a joke?

    Regarding that article ... yes, skimming it quickly made the hair on the back of my neck stand up too ... you just can't believe everything :D


    Sc0rp