How to re-add a drive that didn't show up when OMV first started

  • I have two external USB drives on an RPi running OMV. On a recent startup, only one of the two drives (sdb) showed up after OMV started. I found the cause of the problem and restarted OMV. I can now see both drives under the Storage, Disks option, but the array hasn't started a rebuild as I would have expected.


    When I look at the array status in the OMV GUI, it only shows sdb1, not sda1. Given the situation, I wasn't sure how to re-add the drive to the RAID 1 array.
    Would appreciate any suggestions.


    Ron

  • USB RAID is no longer supported via the OMV 4 user interface, and it is not recommended (too many problems; on RPis these are often related to the power supply of the connected disks).


    Therefore you will have to do the job from the CLI.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • That is what I am looking for help with. I have both USB drives connected via a powered USB hub so that I don't overload the Raspberry Pi by trying to power more than one drive.


    I am trying to learn how to deal with OMV at this level before putting money into a proper system.

    • Official Post

    That is what I am looking for help with.

    TBH, without wanting to appear condescending, you won't get any: using a RAID option with USB drives is a bad idea, creating one on a Pi even more so, and using a powered USB hub does not negate the problem of under-powering.
    If using a Pi, the drives should be independently powered; if you are using a powered hub, I'm guessing you are using 2.5" drives.


    Granted, there are 'YouTube gurus' out there who set up a RAID option on a Pi, but the question is why? There is no benefit to using a RAID option in a home environment, least of all on a Pi.

    • Official Post

    As you already have a RAID, you need to get it working again. After that you should really read a bit about RAID and whether you need it at all. Most likely not.


    OMV uses mdadm for the RAID. You will find a lot on the internet about how to assemble an existing RAID.


    You need something like


    mdadm --assemble /dev/md0 /dev/sda /dev/sdb --force


    You need to adjust md0, sda, sdb according to your setup.
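
    To find the right names before running that, it helps to check what mdadm and the kernel currently see. A minimal sketch (the device names below are only examples, adjust them to your drives):

    # see which md arrays the kernel currently knows about
    cat /proc/mdstat
    # list block devices, their filesystems and RAID membership
    blkid
    # inspect the RAID superblock on each suspected member
    mdadm --examine /dev/sda
    mdadm --examine /dev/sdb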


    Or you can get rid of the RAID right away:


    • mount the filesystem that is visible and make sure you have access to all your data
    • wipe the other drive and create a new ext4 filesystem
    • copy the data from the visible filesystem to the other drive
    • make sure all your data are on the other drive
    • wipe the original drive
    • create a new filesystem on the original drive
    • set up an rsync or rsnapshot scheduled job to back up the data from the "other drive" to the original drive (see the sketch after this list)
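
    A minimal sketch of such a job, assuming the data drive is mounted at /srv/dev-disk-by-label-data and the backup drive at /srv/dev-disk-by-label-backup (hypothetical mount points, adjust to your own); this is the command an OMV scheduled job could run:

    # mirror the data drive onto the backup drive, removing files that no longer exist on the source
    rsync -a --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/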
  • I am having a similar issue. I built a NAS box using a retired Barracuda backup appliance. I loaded OMV on a 128 GB SSD, used two additional SSDs and created a mirror. Works great..


    During a power-off session, one of the drives became disconnected; upon power-up, I could no longer access the SMB shared folder.
    So, I thought, what good is the array if I can't see any data after a drive failure..
    With a little searching, I found out I need to mount the drive at the console, outside the GUI.. no problem.. except I can't seem to get it done..
    https://forum.openmediavault.o…disk-from-a-failed-Raid1/
    Help please..


    Attached is a photo of the error message..
    Of course, if I power up with both drives, everything is fine.. But I would like/need to know how to handle a drive failure..
    thanks in advance
    Scott

    • Official Post

    I am having a similar issue.

    What are you using, USB drives?


    Your screenshot is wrong: mdadm --create? Why are you attempting to create an array?


    Power up with both drives and leave it alone; if a drive fails on its own, the raid will still function until you get another drive.

  • What are you using, USB drives? Nope; as my post states, I "used two additional SSDs and created a mirror."


    Your screenshot is wrong: mdadm --create? Why are you attempting to create an array? According to the link provided, forum.openmediavault.org/index…disk-from-a-failed-Raid1/, the OMV UI doesn't allow building a degraded array; however, you can use the mdadm tools to rebuild your RAID1 as a degraded array.
    You can construct a mirror with one disk missing (sda available and sde missing); after that OMV should be able to see and mount the array, and from there you can move your files to md0.
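
    For reference, what the linked guide describes looks roughly like this. It is only a sketch, assuming the surviving disk is /dev/sda; note that --create writes new RAID metadata to the device, which is why the later replies in this thread use --assemble --force on the existing member instead:

    # build a RAID1 with one member deliberately missing
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda missing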


    Power up with both drives and leave it alone; if a drive fails on its own, the raid will still function until you get another drive. With one drive failed (disconnected to simulate), I cannot read/access the SMB shared folder(s).


    "Of course if I power up with both drives, everything is fine .. But I would/need to know how to handle a drive failure.. (ie... immediate access to files)
    thanks in advance"

    • Official Post

    I'm sorry, I thought an inaccessible/dead/non-powered/disconnected drive would simulate a failure.

    :) Nothing new there. With the drive disconnected, post the output of each of the following (cat /proc/mdstat, blkid, fdisk -l | grep "Disk ", cat /etc/mdadm/mdadm.conf, mdadm --detail --scan --verbose) using the </> button on the menu bar, which allows for easy reading, and I'll explain.

  • root@OMV:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[1](S)
    249928024 blocks super 1.2


    root@OMV:~# blkid
    /dev/sda1: UUID="b0da7f4e-85e4-4ba4-8154-7486c668bfc9" TYPE="ext4" PARTUUID="6de55d8d-01"
    /dev/sda5: UUID="dec21aca-b6b8-4946-96d3-8e6933f57d5d" TYPE="swap" PARTUUID="6de55d8d-05"
    /dev/sdb: UUID="8094f6c0-cb72-1e55-f65a-5aa978e7dc1f" UUID_SUB="55e3b8dc-3491-569d-6a23-2c955bcd3af2" LABEL="OMV:RAD1" TYPE="linux_raid_member"
    root@OMV:~#


    root@OMV:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 119.2 GiB, 128035676160 bytes, 250069680 sectors
    Disk identifier: 0x6de55d8d
    Disk /dev/sdb: 238.5 GiB, 256060514304 bytes, 500118192 sectors
    root@OMV:~#



    root@OMV:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=OMV:RAD1 UUID=8094f6c0:cb721e55:f65a5aa9:78e7dc1f
    root@OMV:~#


    root@OMV:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=OMV:RAD1 UUID=8094f6c0:cb721e55:f65a5aa9:78e7dc1f
    devices=/dev/sdb
    root@OMV:~#



    thanks in advance..
    PS: this is with one SSD drive with NO POWER.

    • Official Post

    cat /proc/mdstat


    This tells you that the raid is inactive: the array still exists but reports as inactive (dead). It can be brought back up, but this is not the way it should behave.


    blkid


    This is helpful inasmuch as it gives information regarding each drive that it finds: its UUID, label, type, etc.


    fdisk


    Self-explanatory: it gives you information about the drive sizes, etc.


    mdadm.conf


    This is mdadm's configuration file; it gives information about the arrays on the system, including the array reference /dev/md0 (this could also be /dev/md127 or /dev/md1).


    mdadm --detail


    Gives you information about the array and the devices it finds in that array; as yours is a mirror, one is missing.


    OMV uses mdadm, which is software raid; unlike a hardware raid, where there is a chipset looking after it, mdadm has to be 'told' what to do, and OMV's raid GUI does that for you in the background.


    If a drive fails whilst the raid is running, it will appear as clean/degraded; mdadm knows a drive has failed. You remove the failed drive using the GUI, then remove the drive from the machine, add a new drive, wipe it (sometimes it has to be formatted), then, using the GUI, add (recover) the new drive to the array. The array will begin to rebuild from the other drive/s.
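
    The same workflow can be done from the CLI; a rough sketch, where /dev/sdc stands in for the failed member and /dev/sdd for its replacement (hypothetical names, adjust to your system):

    # mark the failed member as faulty, then remove it from the array
    mdadm /dev/md0 --fail /dev/sdc
    mdadm /dev/md0 --remove /dev/sdc
    # after swapping in the new disk, add it; the mirror starts rebuilding automatically
    mdadm /dev/md0 --add /dev/sdd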


    So to simulate a drive failure: select the array, then Delete; from the drive box that appears, select the drive to remove. The raid will then appear as clean/degraded.


    Likewise, if you want to increase the size of the array, you can do it all from the GUI: remove one drive, add the larger drive, and the raid will rebuild; when done, do the same with the other drive. Once complete, you then grow the array from the GUI so that your larger drives are used to their maximum, and finally you have to change the file system size to get the whole thing working at the new size.
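
    From the CLI, the grow step would look something like this (a sketch, assuming the array is /dev/md0 with an ext4 filesystem directly on the md device):

    # let the mirror use the full capacity of the new, larger drives
    mdadm --grow /dev/md0 --size=max
    # then enlarge the ext4 filesystem to fill the grown array
    resize2fs /dev/md0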


    Simples :D


    I'm signing off now any questions post them and I'll answer tomorrow.

  • I've been in the IT "Windows" business for 20 years.. just dabbled in the Linux environment for recovery/clone/virus issues.. I love the fact that basically there are NO boundaries and it MIGHT ask you ONCE.. ARE YOU SURE..


    I'm gathering from your responses that, since I brought OMV "UP" with only one drive attached.. that's why I can't see the shares? And if the drive went "offline" while OMV was UP, I would still be able to see the data and copy it?


    If that's correct, let's assume the power went out, the UPS fell on its face.. OMV goes down... Upon power UP, drive A or B goes tits up and dies... not recognized or seen by the BIOS or OMV... how does one access the data on the operable drive?


    thanks
    Scott {beginner}

    • Official Post

    I'm gathering from your responses that, since I brought OMV "UP" with only one drive attached.. that's why I can't see the shares?

    Yes, the raid is present but inactive, so nothing is available; you can bring that raid up with just one drive and it will appear as clean/degraded.


    And if the drive went "offline" while OMV was UP, I would still be able to see the data and copy it?

    Yes again, because mdadm has handled this for you; it knows the drive has gone offline, it just doesn't know why.


    If that's correct, let's assume the power went out, the UPS fell on its face.. OMV goes down... Upon power UP, drive A or B goes tits up and dies... not recognized or seen by the BIOS or OMV... how does one access the data on the operable drive?

    You then come to the options from that link, but here cat /proc/mdstat will tell you the raid is inactive, and in the GUI (if memory serves me correctly) there is no raid showing in Raid Management. So using the GUI you can locate/identify the missing drive. Using your scenario, I'm guessing your missing drive was /dev/sdc.


    So we know that the raid's reference is md0 and that there is one drive, /dev/sdb, active. The following should bring the raid up in a clean/degraded state:
    mdadm --stop /dev/md0, followed by mdadm --assemble --verbose --force /dev/md0 /dev/sdb; if that fails, then mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]. Either one should tell you that the array is starting with just one drive.
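
    Put together, and using the device names from the output above (adjust them if yours differ), the sequence is:

    # stop the inactive array
    mdadm --stop /dev/md0
    # force-assemble it from the remaining member
    mdadm --assemble --verbose --force /dev/md0 /dev/sdb
    # confirm it came up; it should now show as active (clean/degraded)
    cat /proc/mdstat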


    None of the above is normally necessary, as most options can be completed in the GUI, but where a raid disappears the CLI is the only option.


    This is where SMART can help you out, as it monitors the state of the drives and can report errors during tests via email; this gives you an early warning of a potential drive failure, so you can replace the drive before it dies.
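
    A quick sketch of checking a drive with smartmontools from the CLI (adjust the device name; OMV also shows this data under Storage, S.M.A.R.T.):

    # print the drive's SMART health summary and attributes
    smartctl -a /dev/sdb
    # start a short self-test; re-run the command above later to read the result
    smartctl -t short /dev/sdb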


    Another way of setting up your server is to have one drive for data and the second running rsync or rsnapshot; if one drive does fail, you still have one working. For information on this, check out the guide.


    If all else fails, you obviously have a backup of your data :)

  • That worked perfectly.. ((**THANKS FOR YOUR HELP**)) after running mdadm --stop /dev/md0 followed by mdadm --assemble --verbose --force /dev/md0 /dev/sdb.


    I was able to see the SMB shares again.. I ran a shutdown command and reconnected the other drive..


    The OMV GUI once again sees the complete RAID and reports "clean, degraded". Can I assume this will shake itself out? Or is there a command (or commands) that needs to be run to "straighten" things out? If so, can this be done from the GUI, or does it require more CLI?


    I will take a look at rsync and rsnapshot..


    Thanks again..
    Scott

    • Official Post

    The OMV GUI once again sees the complete RAID and reports "clean, degraded". Can I assume this will shake itself out?

    Neither Linux nor mdadm will shake itself out :) but a :thumbup: for the concept.


    You may have to re-add the drive you pulled if the raid is showing as clean/degraded. Go through the link again; if the pulled drive is there, you may have to re-assemble the array with the pulled drive, so option two in my post above.

  • I used mdadm --add /dev/md0 /dev/sdb


    root@OMV:~# mdadm --add /dev/md0 /dev/sdb
    mdadm: added /dev/sdb
    root@OMV:~# mdadm
    Usage: mdadm --help
    for help
    root@OMV:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 spares=1 name=OMV:RAD1 UUID=8094f6c0:cb721e55:f65a5aa9:78e7dc1f
    devices=/dev/sdb,/dev/sdc


    OMV GUI now reports "recovering"
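
    If you want to watch the rebuild from the CLI as well, this only reads the status and changes nothing:

    # shows the resync progress and an estimated finish time
    cat /proc/mdstat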


    Adding more to my notes..


    Mission complete.. thanks for your time.
