Replacing a defective disk in a RAID5 array

    • Official Post

    I have just made a backup.

    :thumbup:


    Well, there are two options, because at present I can't work out why the RAID will not stop;


    1. From Storage -> Disks, wipe /dev/sdd, then go to File Systems and format that drive with the same filesystem as the RAID (ext4), then try the recovery option again (a rough command-line equivalent is sketched below).


    2. Remove all SMB shares, then remove all shared folders, remove one of the drives from the array so that the array can be deleted, wipe the drives, recreate the array, recreate your shared folders and SMB shares, then restore your data.
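
    For reference, here is a rough command-line sketch of option 1, assuming the array is /dev/md127 and the replacement disk is /dev/sdd (adjust the device names to your system); the Recover button in the GUI performs much the same add step:

    # clear any leftover signatures from the replacement disk
    wipefs -a /dev/sdd
    # add the disk back into the degraded array and start the rebuild
    mdadm --manage /dev/md127 --add /dev/sdd
    # watch the rebuild progress
    cat /proc/mdstat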


    What version of OMV are you on?

    • Official Post

    I mounted the disk, was this correct?

    No, there is no need to mount the disk; selecting the RAID and then Recover in the RAID menu should display that disk so it can be added to the array.


    What's the output of wipefs -n /dev/sdd, and the same for wipefs -n /dev/md127?

  • No, it is not displayed, as you can see in the second picture in post 47.


    root@OMVneu:~# wipefs -n /dev/sdd
    offset type
    ----------------------------------------------------------------
    0x200 gpt [partition table]




    root@OMVneu:~# wipefs -n /dev/md127
    offset type
    ----------------------------------------------------------------
    0x438 ext4 [filesystem]
    LABEL: homeSAVE
    UUID: 2f22f941-e6a7-424e-8ce2-da43bc4a56e3

    • Official Post

    No, it is not displayed, as you can see in the second picture in post 47.

    To be honest, I don't know. This has been done so many times here, from the GUI and from the command line, and it simply works; this is the first time I have come across a RAID that will not stop.


    OK, because I have four monitors I've been going back over some of the outputs, and something has just hit me (I think the penny has just dropped).


    Post the output of wipefs -n /dev/md127, mdadm --detail /dev/md127 and cat /etc/fstab, plus a screenshot from RAID Management showing the full column width under Name.

  • root@OMVneu:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sda1 during installation
    UUID=22af3500-51d1-4eda-ad95-1106caa7e01b / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=c611ba4f-b7bc-4fc0-9b3a-b00eae14443f none swap sw 0 0
    tmpfs /tmp tmpfs defaults 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/homeSAVE /srv/dev-disk-by-label-homeSAVE ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3HSCNJH-part1 /srv/dev-disk-by-id-ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3HSCNJH-part1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]






    root@OMVneu:~# mdadm --detail /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Fri Sep 16 05:50:43 2016
    Raid Level : raid5
    Array Size : 5860270080 (5588.79 GiB 6000.92 GB)
    Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
    Raid Devices : 3
    Total Devices : 2
    Persistence : Superblock is persistent



    Update Time : Sun Jan 19 15:59:07 2020
    State : clean, degraded
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0



    Layout : left-symmetric
    Chunk Size : 512K



    Name : openmediavault:raid5home
    UUID : 69bd812a:cd81f419:8a590c2f:87234e77
    Events : 61359



    Number   Major   Minor   RaidDevice   State
       3       8       0        0         active sync   /dev/sda
       1       8      16        1         active sync   /dev/sdb
       -       0       0        2         removed





    root@OMVneu:~# wipefs -n /dev/md127
    offset type
    ----------------------------------------------------------------
    0x438 ext4 [filesystem]
    LABEL: homeSAVE
    UUID: 2f22f941-e6a7-424e-8ce2-da43bc4a56e3

    • Official Post

    OK, are you sitting comfortably? :)


    In English: it's f*cked. In German (according to deepl.com): Es ist beschissen.


    Why;


    This is from wipefs output;
    LABEL: homeSAVE
    UUID: 2f22f941-e6a7-424e-8ce2-da43bc4a56e3


    This is from mdadm detail;
    Name : openmediavault:raid5home
    UUID : 69bd812a:cd81f419:8a590c2f:87234e77


    This from fstab;
    /dev/disk/by-label/homeSAVE /srv/dev-disk-by-label-homeSAVE ext4


    This is from blkid from your first post;
    /dev/md127: LABEL =" homeSAV9e22U-8U-8E-2 "U2 da43bc4a56e3 "TYPE =" ext4 "


    All of the above should be the same, at least that's my understanding, and that mismatch would explain why a drive can't be added and why the RAID cannot be stopped.
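
    For anyone following along, a quick way to pull each of those identifiers straight from the command line (using the device names and label from this thread):

    # filesystem label and UUID as the kernel sees them
    blkid /dev/md127
    # array name and array UUID stored in the mdadm superblocks
    mdadm --detail /dev/md127 | grep -E 'Name|UUID'
    # what fstab actually uses to mount the filesystem
    grep homeSAVE /etc/fstab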

    • Official Post

    So would it be better/easier to create a new system?

    Yes. I can't resolve this and no one else could; I'm actually amazed the system has been working, at least the RAID side of it.


    If you are running OMV3, then do a clean install of OMV4. You could try OMV5, but that would be a steeper learning curve, although in my opinion worth the effort.


    Take screenshots of your shares and any settings, back up your data, do a clean install (with no data drives connected), make sure it's fully up to date before connecting the data drives, and wipe the drives before creating your RAID (a rough command-line sketch follows below).
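
    If the GUI gives you trouble at the wipe/create step, this is a sketch of roughly what it does underneath, assuming the three data disks show up as /dev/sdb, /dev/sdc and /dev/sdd after the reinstall (check with lsblk first, and be aware this destroys everything on those disks):

    # confirm device names and look for leftover signatures
    lsblk
    wipefs -n /dev/sdb /dev/sdc /dev/sdd
    # wipe old filesystem/RAID signatures from each data disk
    wipefs -a /dev/sdb /dev/sdc /dev/sdd
    # create the new 3-disk RAID5 and put an ext4 filesystem on it
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mkfs.ext4 -L homeSAVE /dev/md0

    The homeSAVE label is only an example carried over from this thread; use whatever label you want the new filesystem to have.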


    If you get stuck, post on here or drop me a PM. Good luck! :thumbup:

    • Official Post

    If you are running OMV3, then do a clean install of OMV4. You could try OMV5, but that would be a steeper learning curve, although in my opinion worth the effort.

    I would go with OMV5. Just reinstalled both my systems with OMV5. No issues.

  • I'm sorry for the mass of work I have caused you. Thanks for your time and help.


    I'll leave it as it is for the week (I hope it keeps working - music and movies :-)). Next weekend I will try to build a new and smarter setup :)


    I'll give you feedback. If you come to Berlin, let's have a beer!!!

    • Official Post

    I'll leave it as it is for the week (I hope it keeps working - music and movies :-)). Next weekend I will try to build a new and smarter setup


    Consider using a new drive for the OS. If something doesn't work out, you can use the old one until you have time to fix it.

    • Official Post

    I'm sorry for the mass of work I have caused you. Thanks for your time and help.

    That's OK, at least I finally found the issue. It's a pity it wasn't a simple 'do this, do that' and everything is all singing and dancing again.

  • I'm back - and so are my problems :)
    I would have liked to post that everything went well after the rebuild, but it didn't.


    Could you help me with setting up the RAID?


    I cleaned all the drives with GParted and installed OMV 4 (the latest version). It runs well.


    Since I couldn't see the disks to select them in RAID Management, nor in "File Systems", only under "Disks", I tried troubleshooting by creating filesystems on them under "File Systems". I mounted and unmounted them, and I deleted and recreated them under File Systems.


    I found in the posts that we should use clean disks, without anything on them - just as they were after wiping with GParted.


    I tried everything, just so I wouldn't have to ask again here and take up your time... but I failed.


    Why am I not able to select the disks in any way to build the RAID?


    See the latest settings.




    I'm sure it must be easy for you...


    Thanks for the information.
