RAID issues after noexec Plex fix

  • Hi all,


    I'm still relatively new to OMV/Linux as a whole, so hopefully somebody can help. I'd like to understand what I broke and whether I can fix it short of recreating the RAID (not a big problem, as this is a pretty new install of OMV5 and there's not a lot to lose).


    I'm just using RAID 0 with one 2 TB drive and one 6 TB drive. I had an issue with playback in Plex (a transcoder error every time I tried to play something) and came across this potential fix from TDL (https://www.youtube.com/watch?v=f1aOiFtBG3Q). I tried to reverse the change by adding noexec back, but I'm still missing a RAID!
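
    For context, the change in that video boils down to removing noexec from the mount options of the drive Plex uses. A rough before/after of the kind of fstab line involved (the device, mount point and remaining options here are purely illustrative, not my actual line):

    Code
    # before: noexec stops Plex's transcoder binaries from running on this mount
    # /dev/md0  /srv/data  ext4  defaults,nofail,noexec  0 2
    # after: noexec removed
    # /dev/md0  /srv/data  ext4  defaults,nofail  0 2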


    If I stop the array using mdadm --stop /dev/md0 and then force an assemble (mdadm --assemble --force), I can mount the drive from the CLI, but I can't use it as expected; my shared drives don't work, etc.
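
    Roughly the sequence I'm running is something like the below (the member devices are taken from the outputs further down; the mount point is just an example, not the OMV-managed one):

    Code
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc
    mkdir -p /mnt/data          # example mount point for testing only
    mount /dev/md0 /mnt/data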


    Following this guide and rebooting, the RAID now shows as 'Missing' under file systems. Is there any way I can get this back?


    Thanks in advance and Merry Christmas!


    cat /proc/mdstat

    Code
    root@OMV5:~# cat /proc/mdstat
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdc[1] sdb[0]
          7813772800 blocks super 1.2
    
    
    unused devices: <none>

    blkid


    Code
    root@OMV5:~# blkid
    /dev/sdc: UUID="b754e39e-5878-4e66-d3dd-9019bfbec6a0" UUID_SUB="ca8387a0-efb4-75c7-fbd2-bcb2b9820337" LABEL="OMV5:Data" TYPE="linux_raid_member"
    /dev/sdb: UUID="b754e39e-5878-4e66-d3dd-9019bfbec6a0" UUID_SUB="63f658c8-92fb-a2b9-9f34-05cb77e12535" LABEL="OMV5:Data" TYPE="linux_raid_member"
    /dev/sda1: UUID="96C4-B874" TYPE="vfat" PARTUUID="7ec3139b-9e5d-4604-89f3-3f9e23fd3acd"
    /dev/sda2: UUID="108bde8c-9b99-4dab-b28b-47b613558d5a" TYPE="ext4" PARTUUID="309cbc0e-b3d9-4b8a-91a3-8a70cbddb714"
    /dev/sda3: UUID="4cc8e552-1869-4691-afbf-62085006a79b" TYPE="swap" PARTUUID="4e0eeeee-6cad-498b-b943-c3f3fdfe77c5"
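
    Since mdstat shows the array as inactive while blkid still sees both members, a sanity check worth running (a standard mdadm diagnostic, not something from the posts above) is to compare the superblocks on the two disks; the array name and UUID should match the blkid output:

    Code
    # print the RAID superblock of each member; name (OMV5:Data) and UUID should agree
    mdadm --examine /dev/sdb /dev/sdc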

    fdisk -l | grep "Disk "



    Code
    root@OMV5:~# fdisk -l | grep "Disk "
    Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
    Disk model: WDC WD60EFAX-68S
    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: WDC WD20EURS-63S
    Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
    Disk model: KINGSTON SV300S3
    Disk identifier: F0121397-D55E-4A79-946B-1A9D7C388B06


    cat /etc/mdadm/mdadm.conf



    • Official post

    came across this potential fix from TDL

    From memory, that video references the one drive that Plex uses; in your case I would assume you would have to change the actual RAID device, /dev/md0.


    Does the video also tell you to run omv-mkconf fstab? If it does, the video relates to OMV4; for OMV5 you would need to run omv-salt deploy run fstab.
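
    For reference, the two commands in question (the first is the OMV4 tool, the second its OMV5 replacement):

    Code
    # OMV4
    omv-mkconf fstab
    # OMV5
    omv-salt deploy run fstab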


    As the drives are clearly there in blkid, you could try:
    mdadm --stop /dev/md0
    mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]


    However, that may or may not work. I take it you do realise that if one of those drives fails you lose everything?

  • Hi Geaves,


    Thanks for the reply!


    I did indeed use omv-salt deploy run fstab, and I have tried the two commands you suggested (stop and then assemble).


    I went to follow your commands again and realised I had missed a step in my original post.


    If i run the two commands:


    mdadm --stop /dev/md0
    mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]


    I get the below:


    Code
    root@OMV5:~# mdadm --stop /dev/md0
    mdadm: stopped /dev/md0
    root@OMV5:~# mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
    mdadm: added /dev/sdc to /dev/md0 as 1
    mdadm: added /dev/sdb to /dev/md0 as 0
    mdadm: failed to RUN_ARRAY /dev/md0: Unknown error 524


    Googling the error code brought me to https://www.linuxquestions.org…ot-assembling-4175662774/


    Running echo 2 > /sys/module/raid0/parameters/default_layout first and then the two commands, I am able to assemble the array:


    Code
    root@OMV5:~# echo 2 > /sys/module/raid0/parameters/default_layout
    root@OMV5:~# mdadm --stop /dev/md0
    mdadm: stopped /dev/md0
    root@OMV5:~# mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
    mdadm: added /dev/sdc to /dev/md0 as 1
    mdadm: added /dev/sdb to /dev/md0 as 0
    mdadm: /dev/md0 has been started with 2 drives.
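
    Worth noting: echoing into /sys only changes the setting for the running kernel, so it would need repeating after every reboot. If this layout setting turns out to be what the array needs, one way to make it persistent (an assumption on my part, not something tested here) is a module option or a kernel parameter:

    Code
    # option 1: module parameter, then rebuild the initramfs so it applies at boot
    echo "options raid0 default_layout=2" > /etc/modprobe.d/raid0.conf
    update-initramfs -u
    # option 2: if raid0 is built into the kernel, add raid0.default_layout=2 to
    # GRUB_CMDLINE_LINUX in /etc/default/grub, then:
    update-grub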

    It then shows in OMV as /dev/md0, but when I mount it in the WebGUI I get the error below:



    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-id/md-name-OMV5:Data' 2>&1' with exit code '1': mount: /dev/disk/by-id/md-name-OMV5:Data: can't find mount source /dev/disk/by-id/md-name-OMV5:Data in /etc/fstab.

    The name of the drive when I set it up in OMV was /dev/disk/by-id/md-name-OMV5:Data, but it changes to /dev/md0 with the above commands, which seems to be the problem; I'm not sure how to fix this!
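
    One way to see where the mismatch sits (just a diagnostic sketch) is to compare the device link fstab expects with what udev actually created after the manual assemble:

    Code
    # does the by-id link that fstab refers to exist at all?
    ls -l /dev/disk/by-id/ | grep md-name
    # what does the running array call itself?
    mdadm --detail /dev/md0 | grep -i name
    # what is fstab trying to mount?
    grep md-name /etc/fstab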


    Thanks


    EDIT: I'm fully aware of the complete loss of data if one of these drives fails, but it's nothing really important. I'm only using RAID instead of UnionFS in this case because the Radarr Docker had issues when I used UnionFS.

    • Official post

    Just to answer this, indeed it does.

    I take it that is the current setting? What's the output of cat /etc/fstab? One thing you should always do when changing that file is to make a backup copy before making changes.
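
    For example, something as simple as a dated copy before touching it:

    Code
    cp -a /etc/fstab /etc/fstab.bak.$(date +%Y%m%d)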

  • I take it that is the current setting? What's the output of cat /etc/fstab? One thing you should always do when changing that file is to make a backup copy before making changes.


    If there's one thing I will learn from this, it's to make backups before changes like this!


    cat /etc/fstab:


    • Official post

    Apologies, I have missed some of your posts. Re-reading through, you could try omv-salt deploy run fstab, which should recreate the fstab, then reboot.


    Regarding the MergerFS alternative, you have to change one of the options: replace direct_io, as this is the cause of the Docker config errors.
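
    For anyone following along, a sketch of the kind of change meant here, in the pool's mount options (the surrounding options are illustrative and depend on your mergerfs version and how the pool was created):

    Code
    # before: direct_io disables page caching/mmap, which upsets SQLite-backed apps such as Radarr
    # defaults,allow_other,direct_io,use_ino
    # after: on newer mergerfs releases, cache.files=partial restores mmap support
    # defaults,allow_other,cache.files=partial,use_ino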

  • Recreated the fstab and rebooted.


    This was the output from the command to recreate:


    cat /etc/fstab after reboot:



    Still missing!

    • Official post

    The name of the drive when I set it up in OMV was /dev/disk/by-id/md-name-OMV5:Data

    Just going back over this, and the above makes no sense. Under RAID Management the name, I'm sure, should display as /dev/md0, with the device showing the label given to the array when it was created. The mdadm conf file clearly shows an md0 array. What also does not show is a reference to the RAID in blkid, which it should, and neither does fstab. Something has gone wrong with the config, but I don't know where to start.
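
    A quick way to check that (standard commands, not output posted in this thread so far) would be to ask blkid about the assembled device directly and compare what the kernel reports with what mdadm.conf declares:

    Code
    # an assembled array carrying a filesystem should show up here
    blkid /dev/md0
    # compare the live view with the config file
    mdadm --detail --scan
    cat /etc/mdadm/mdadm.conf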

  • Just going back over this, and the above makes no sense. Under RAID Management the name, I'm sure, should display as /dev/md0, with the device showing the label given to the array when it was created. The mdadm conf file clearly shows an md0 array. What also does not show is a reference to the RAID in blkid, which it should, and neither does fstab. Something has gone wrong with the config, but I don't know where to start.


    Just to resolve the thread and say thanks, Geaves. I didn't manage to fix the issue, but I appreciate the help!


    I ended up using your suggestion for the MergerFS fix instead, and I'm up and running again. Thank you!
