RAID 1 Mirror Showing as clean, degraded but no RAID name given

  • This setup has been running for probably 5 years without an issue; today I'm suddenly seeing this. I cannot use the Recover dialog, as it appears a valid RAID name is required?


    • When I went to list the disks, one of them was indeed not listed; when I scanned, the drive reappeared in the list.
    • It appears that the RAID configuration believes one of the drives has failed, though.
    • Is there a good way to verify the drive health with OMV?
    • Is it possible the SATA controller (I'm using a separate board for this) somehow lost connection to the drive briefly, or possibly the drive lost power long enough to do something to the array?
    • I only noticed this because the data was not presenting in my shares. (I figured the RAID1 configuration would still present the data even in a degraded state?)
    • I humbly submit myself here; I'm nowhere near as competent as the rest of you folks.
    • It could be the case that I just need to present a new drive to the device so I can rebuild the RAID.

    thank you very much!
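    For reference, the array state and member health can be checked from the command line. This is a sketch: /dev/md0 and /dev/sda are assumed names (substitute your own from lsblk), and the helper function is purely an illustration of how to read the [UU]/[U_] field in /proc/mdstat, not an OMV command.

```shell
# Usual checks (device names are assumptions):
#   cat /proc/mdstat              # "[2/1] [U_]" = 2-device array, 1 active
#   sudo mdadm --detail /dev/md0  # shows which member failed or was removed
#   sudo smartctl -a /dev/sda     # SMART health of an individual member
# Illustrative helper: reads an mdstat-style file and prints "degraded"
# if a member slot is missing (an underscore in the [UU] pattern).
mdstat_state() {
  grep -oE '\[U*_+U*\]|\[U+\]' "$1" | grep -q '_' && echo degraded || echo clean
}
```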


    Off-Grid Home Server Project:

    ROCKPro64 4GB (Rockchip RK3399 Hexa-Core)

    OMV 5.5.11 on Armbian Buster Server

    SD boot | 128GB SSD Docker Storage | 2x 1TB SSD RAID1 Storage

    Edited 4 times, last by BackWoodsTech: added RAID info.

  • Attempting to gather some more information based on the post requirements; I may not be able to provide it all.


    Drives: 2x 1TB Samsung 860 EVOs (man, where do 5 years go!?)

    To my knowledge, nothing has happened. I've not power cycled nor updated for a long, long time. From the OMV logs I can't pinpoint when the issue occurred. My Plex media was working as recently as a week ago, so sometime between then and now is the best I can place this.



    I guess if this isn't worth it, I may just get rid of the RAID, copy the data from the healthy drive (presuming the other drive is indeed toast) to a new SSD, forget the RAID 1, and just use rsync to back up from one SSD to the other. Hope I got all the info you are looking for!


    Edited 2 times, last by BackWoodsTech: I was dumb and didn't read the post submission requirements for degraded or missing RAID arrays.

  • Update:

    So after a reboot there are a few new observations:

    1. OMV has a name for the RAID array now
    2. It still shows the array as clean but degraded
    3. I can access media from PMS (which pulls data from the same drive)
    4. I still cannot access data from my shares (again, the same data PMS pulls from)


    • Official Post

    I guess if this isn't worth it I may just get rid of the RAID and copy the data from the healthy drive (presuming the other drive is indeed toast) to a new SSD and forget the RAID 1 and just use RSync to backup from one SSD to another.

    In all honesty this is the most sensible way forward. Looking at the output, mdadm has removed the drive for whatever reason; you can't add that drive back to the array unless it can be repaired.


    Looking at the information you have provided, OMV is running on an SBC, and your 'SATA controller' is simply a USB-to-SATA bridge: whilst the connections are SATA, the underlying hardware is still USB. The creation of RAID using USB was removed in OMV4 (I think), after some users experienced data loss when using USB for RAID creation.


    At this moment in time you have no idea if this is power related, the 'SATA controller', the drive itself, or the filesystem on that drive. Is the system recoverable by replacing that drive and rebuilding the array? Possibly, but there would be no guarantee. Your best option is to go down the rsync route as you've suggested and hope that the existing drive continues to function.


    You could run fsck on the failed drive (fsck /dev/sdb); I would hope that would check the filesystem and ask you if you want to repair it.
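    One caveat on that: a removed mirror half usually still carries mdadm metadata, so fsck on the raw device may report 'linux_raid_member' rather than find the filesystem. A hedged sketch, assuming /dev/sdb is the removed member and /dev/md127 is a free md node (verify both with lsblk first); with DRY_RUN=1, the default, the commands are only printed:

```shell
# Sketch, not a recipe: verify device names with lsblk before running.
# Assembling the member as a one-disk degraded array exposes the filesystem
# that sits behind the mdadm metadata on the raw device.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mdadm --assemble --run /dev/md127 /dev/sdb
run fsck -n /dev/md127   # -n: report problems only, change nothing yet
```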


    As you have not added an OMV tag I'm guessing you are running an EOL version; I would suggest you get the data off the working drive and deploy OMV6, re-creating your shares etc. Whilst this might seem extreme, it would take about a day to get this set up.

    Raid is not a backup! Would you go skydiving without a parachute?


    OMV 6x amd64 running on an HP N54L Microserver

  • geaves, much appreciate the thorough response!


    I failed to mention a few other specifics (a couple of which are in my signature, but I should have included them in the details I provided; apologies!): the device is indeed an SBC, but it does have a PCIe slot, which allowed me to use a PCIe 2.0 to SATA III adapter (but yes, not a controller proper). And you are also correct that I'm using an older version, 5.x.

    To be honest, for my purposes, this OMV (I have run 4.x before) has been so damn stable that I haven't even thought about upgrading, but I guess this will be the impetus for that change.


    I'm sure it will completely shock you to learn that Linux isn't my forte (although I'm not a complete dummy about basic server management). I'm going to proceed with the plan of attempting to recover the healthy drive.


    • If I were to use the OMV GUI, I would first delete the RAID configuration, correct?
      • This will allow me to access the individual member drive. From there I can inspect the drive health.
    • Why might PMS allow media access & playback (albeit intermittent) when the file shares will not allow any file exploration?
    • How is the array not accessible at all in a degraded state? I've worked with RAID 1, 5, and 10 configurations in a production environment (all different flavors of Windows, though: physical RAIDs, Storage Spaces, etc.), and the file system was always still accessible even in a degraded state. So this has me a bit nervous.

    Thank you so very much; hopefully I'll be off to fix this soon!


    Edited once, last by BackWoodsTech.

    • Official Post

    If I were to use the OMV GUI, I would first delete the RAID configuration, correct?

    No, that should not be necessary. What should be possible is to use rsync to create a backup of the complete drive to a new drive; have a look here, then before doing anything else confirm the backup, using WinSCP on Windows or Midnight Commander on Linux, to ensure the data has been copied.
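    As a rough sketch of that drive-to-drive copy: the source path is the label seen in this thread, the destination label NewSSD is hypothetical, and the command is only printed unless DRY_RUN=0.

```shell
# Sketch with placeholder paths; both drives must be mounted first.
SRC=${SRC:-/srv/dev-disk-by-label-Storage/}   # trailing slash: copy contents
DST=${DST:-/srv/dev-disk-by-label-NewSSD/}    # hypothetical new drive label
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# -a archive mode (permissions, ownership, timestamps, symlinks),
# -H preserve hard links, -X preserve extended attributes
run rsync -aHX "$SRC" "$DST"
```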


    Then proceed with a clean install of OMV6; look in the guides section, where there are some comprehensive install guides written by a user, so they are easy to follow.

    How is the array not accessible at all in a degraded state?

    It should be, and I can't offer an explanation, simply because I don't use SBCs other than for standalone specific tasks. I could emulate what has happened to you in a VM on a Windows machine (test rig) and the data would be accessible, as the array is in a clean/degraded state.


    But WinSCP would allow you to explore the degraded array and check the file content; the fact that PMS sees your media, albeit intermittently, could suggest there is a hardware issue, something that is degrading or failing.

  • No, that should not be necessary,

    My problem there is that I cannot seem to mount & navigate to the individual drive while it is a RAID member:

    mount: /mnt/temp: unknown filesystem type 'linux_raid_member'.

    So, this is why I was thinking I'd need to break/delete the current array. Again, not my forte... If you could point me in that direction, I should be out of your hair shortly!

    the fact that PMS sees your media albeit intermittent could suggest there is a hardware issue, something that is degrading or failing.

    Makes me wonder if it's not the cheapo SATA card itself...lots of speculation on my part.


    New drive & card are on order. I'd sure like to be able to directly access the supposedly healthy drive in the meantime to confirm a few things. I do have a near-full backup in cold storage on an external drive. But I'm also trying to treat this as a learning experience, which is why I'm still pursuing work with the existing drive.


  • mofo@Muninn:/mnt/temp$ sudo mount /dev/sdd /mnt/temp

    mount: /mnt/temp: unknown filesystem type 'linux_raid_member'.

    mofo@Muninn:/mnt/temp$ sudo mount -a /dev/sdd /mnt/temp

    mofo@Muninn:/mnt/temp$ ls

    mofo@Muninn:/mnt/temp$ sudo umount /mnt/temp

    umount: /mnt/temp: not mounted.

    mofo@Muninn:/mnt/temp$

    I must be doing something dumb. Perhaps I cannot access it due to the degraded state of the RAID.


    Is there any reason I shouldn't just unmount the RAID array at this point if that's possibly what's preventing me from accessing the member drive?


    I'm a little cautious about trying to repair the file system with fsck with the current RAID in place (but perhaps that's completely unwarranted).
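    For what it's worth, the two attempts in the transcript above fail for different reasons, and a possible way in looks like the sketch below. The device names are assumptions, and this applies ONLY if the drive is no longer part of the running array (check /proc/mdstat first); by default the commands are only printed.

```shell
# Why the attempts behave as they did:
# - 'mount /dev/sdd' fails: the member still carries mdadm metadata, so the
#   kernel sees 'linux_raid_member', not a mountable filesystem.
# - 'mount -a /dev/sdd /mnt/temp' appears to succeed because -a simply
#   mounts the fstab entries; the extra arguments are not used.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mdadm --assemble --run /dev/md127 /dev/sdd
run mount -o ro /dev/md127 /mnt/temp   # read-only: no writes to a suspect disk
```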


    Edited once, last by BackWoodsTech.

    Right, I see what you mean; I didn't realize they were listed under /srv (I'm an idiot). Curiously, if I navigate there I'm only presented with the array, not the drives. Trying to access the array gives I/O errors.


    mofo@Muninn:/$ sudo mount -a

    [sudo] password for mofo:

    mofo@Muninn:/$ ls

    bin dev export lib media opt root sbin sharedfolders svr tmp var

    boot etc home lost+found mnt proc run selinux srv sys usr

    mofo@Muninn:/$ cd srv

    mofo@Muninn:/srv$ ls

    dev-disk-by-label-LookingGlass dev-disk-by-label-Storage ftp pillar salt

    mofo@Muninn:/srv$ cd dev-disk-by-label-Storage

    mofo@Muninn:/srv/dev-disk-by-label-Storage$ ls

    ls: cannot access 'lost+found': Input/output error

    ls: cannot access 'Music': Input/output error

    ls: cannot access 'Users': Input/output error

    ls: cannot access 'plexmediaserver': Input/output error

    ls: cannot access 'Pics': Input/output error

    Music Pics Users aquota.group aquota.user lost+found music pics plexmediaserver vids

    mofo@Muninn:/srv/dev-disk-by-label-Storage$ cd pics

    mofo@Muninn:/srv/dev-disk-by-label-Storage/pics$ ls

    ls: reading directory '.': Input/output error

    mofo@Muninn:/srv/dev-disk-by-label-Storage/pics$


    Is there any reason I cannot/should not unmount the array?


    • Official Post

    if I navigate there I'm only presented with the array, not the drives

    That's because the two drives are linked into a RAID1; OMV therefore presents the array to the user as a workable storage unit, in this case a RAID array.

    Trying to access the array give i/o errors

    That points to the drive or your SATA controller; google 'i/o errors linux'.
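    A quick way to see which layer is complaining is the kernel log. This is a small illustrative helper (not an OMV tool) for filtering the lines that usually accompany disk or controller trouble:

```shell
# On a live system, save the log first, then filter it:
#   dmesg > /tmp/kern.log && io_errors /tmp/kern.log
# ata link resets/failures point at the drive or controller; 'I/O error'
# on the md device or a partition points further up the stack.
io_errors() {
  grep -iE 'i/o error|ata[0-9]+.*(reset|failed)|end_request' "$1"
}
```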

    is there any reason I cannot/should not unmount the array

    No, but to what end?


    If this was me, I would be using a live Linux distro and a simple USB-to-SATA adaptor to see if the drive would mount.


    At this present moment in time this is not looking good. Is there any way you can connect the surviving drive to the USB3 port, with an external USB case or USB-to-SATA adaptor?
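    If the array is to be taken offline before moving the drive, a hedged sketch of doing that cleanly (the md device name is an assumption; check /proc/mdstat for yours, and the commands are only printed unless DRY_RUN=0):

```shell
# Sketch: unmount, then disassemble the degraded array. Nothing here
# writes to the disk contents; 'mdadm --stop' only stops the array.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run umount /srv/dev-disk-by-label-Storage   # mount point seen in this thread
run mdadm --stop /dev/md0                   # assumed md name; check /proc/mdstat
```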
