Recovering from two damaged disks with UnionFS and SnapRAID - Can't Mount New Drives

  • Had a quick power surge that didn't affect my computers but cut off the cooling to my computer closet. By the time I noticed it, two of the hard drives in my 24-drive array had failed. OMV wouldn't start and dropped into emergency mode. I had seen this before and knew to comment out the mergerfs lines in the fstab file. Once that was done I could get into the OMV dashboard.
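

    In case it helps anyone else, the lines I commented out looked roughly like this (the branch paths and pool mountpoint here are made up for illustration, not my real entry):

        # /etc/fstab -- OMV's mergerfs pool line, commented out to get past emergency mode
        # (example only: branch paths and pool mountpoint are illustrative)
        #/srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /srv/mergerfs-pool fuse.mergerfs defaults,allow_other 0 0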


    Once in, the two damaged disks showed as missing under Filesystems. I tried to delete them, but the delete button was greyed out. I knew I had to remove the dependencies first, so I went to SnapRAID and removed them there with no problem. In UnionFS, though, I could not remove them from the pool they were in; they showed as n/a. I still couldn't delete the filesystems.


    I pulled the two damaged drives, added two new drives, and rebooted. Nothing changed. Filesystems still showed the two old filesystems as "missing" and not removable. UnionFS still showed them as n/a and not removable from the pool. I tried to add one of the new disks as a filesystem and got the error notice that I'll put at the bottom of this post. I can't add the new disks, so I can't add new filesystems to SnapRAID, so I can't recover the disks, so I can't bring back my pools and get my files. Ugh.


    So I get cute. I go into fstab and comment out the UUIDs of the two dead disks from both the filesystem and mergerfs entries. I do the same anywhere I can find them in /etc/openmediavault/config.xml, including the snapraid and unionfs sections. I uncomment the mergerfs lines in fstab and reboot. Back to the emergency prompt. I re-comment the mergerfs lines in fstab and reboot. Back to the OMV dashboard. Filesystems no longer shows the missing filesystems, and UnionFS no longer shows the n/a's. Great, I think, let's add those two new disks and let SnapRAID do its thing.
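

    (If anyone wants to retrace that, this is roughly how I hunted down the references; the UUID below is a placeholder, not one of my real ones:)

        # find every mention of a dead disk's UUID before commenting it out
        DEAD_UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder, substitute the real dead-disk UUID
        grep -n "$DEAD_UUID" /etc/fstab /etc/openmediavault/config.xml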


    No such luck. Same error message, which I don't understand. If anyone can give me a clue, I would really appreciate it. What I'm thinking of doing next is to reinstall OMV from scratch and set up UnionFS on the drives from the old pools. Would that give me access to my old files, such as they are without the two disks?
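

    If I understand mergerfs right, the branches are just ordinary filesystems, so even without reinstalling, something like this should show whatever survives on the remaining disks (the branch paths below are guesses, not my real ones):

        # build a temporary read-only pool out of the surviving branches
        # (illustrative branch paths; use the real /srv/dev-disk-by-* paths)
        mkdir -p /mnt/oldpool
        mergerfs -o ro,allow_other /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /mnt/oldpool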


    Error #0: OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color fstab 2>&1' with exit code '1': debian:


    {this goes on and on through each of my filesystem mountpoints, each time with forced remounts saying the target was already mounted, but still seemingly OK until ...}


              ID: mount_filesystem_mountpoint_26a44533-650c-4657-846e-2a685c2f8286
        Function: mount.mounted
            Name: /srv/2e5eb5fd-d840-4e59-b7bc-531fd272c557
          Result: False
         Comment: fuse: mountpoint is not empty
                  fuse: if you are sure this is safe, use the 'nonempty' mount option
         Started: 19:38:54.889842
        Duration: 103.351 ms
         Changes:


    {a bunch more stuff until it is summed up with}


    <<< [openmediavault] Summary for debian
    -------------
    Succeeded: 48 (changed=24)
    Failed:     1

  • The message says that the directory you are using as a mount point for mergerfs is not empty. Probably something wrote to the mergerfs location while the mergerfs was not working / mounted.
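

    You can check that from a shell before changing anything, using the mountpoint from your error message:

        # nothing mounted there? then look at what is filling the directory
        findmnt /srv/2e5eb5fd-d840-4e59-b7bc-531fd272c557   # no output means not mounted
        ls -la /srv/2e5eb5fd-d840-4e59-b7bc-531fd272c557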


    Boot to recovery, cd to that directory / mergerfs mountpoint, and move all files to a different location or delete them (make sure the mergerfs is really not mounted before deleting anything). A rough sketch is below.
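

    Only a sketch, with the mountpoint taken from the error above; check the mount state first:

        #!/bin/sh
        # move stray files out of a mergerfs mountpoint that is "not empty"
        MP=/srv/2e5eb5fd-d840-4e59-b7bc-531fd272c557   # pool mountpoint as reported in the error message

        if mountpoint -q "$MP"; then
            echo "$MP is still mounted -- unmount it first" >&2
            exit 1
        fi

        mkdir -p /root/stray-files
        mv "$MP"/* /root/stray-files/    # or inspect the files and delete them instead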


    How to fix it depends on what has been written there. Someone will be missing the files when the mergerfs is mounted again, or some updates will be lost.
