Posts by Dejh0n

    Hey guys,


    Just a quick update.


    After your advice, gderf, I went into fstab and commented out the failed drive in the top section.

    I also found it referenced twice in the MergerFS pool section of fstab. I deleted BOTH of these references, and after a reboot I was able to 'remove' the old, faulty, already-removed drive from the MergerFS pool without errors!
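    For reference, the fstab edits looked something like this (UUIDs and paths below are placeholders, not my actual ones):

        # top section: comment out the failed drive's own mount entry
        # /dev/disk/by-uuid/aaaa-bbbb /srv/dev-disk-by-uuid-aaaa-bbbb ext4 defaults,nofail 0 2

        # MergerFS pool section: drop the failed branch from the colon-separated list,
        # i.e. change
        # /srv/dev-disk-by-uuid-aaaa-bbbb:/srv/dev-disk-by-uuid-cccc-dddd /srv/pool fuse.mergerfs defaults,allow_other 0 0
        # to
        # /srv/dev-disk-by-uuid-cccc-dddd /srv/pool fuse.mergerfs defaults,allow_other 0 0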


    I then went on to mount Drive12, the replacement for Drive2, and it mounted with no problems this time!
    I was also able to go into the SnapRAID service and swap the data drive entry from Drive2 to the newly added and mounted Drive12.
    I then ran the SnapRAID fix via the GUI, left it for a few days, and a lot of data has been put back onto the replacement Drive12.

    I will investigate the SnapRAID commands via a screen session as instructed shortly. I was just VERY HAPPY to be able to get the volume online, even if only partially, thanks to both of you.


    So thank you both!


    I will post any potential updates after I see how the SnapRAID fixes go - I am less hopeful there, as I don't believe I set this up correctly initially or maintained it as I think I was supposed to.

    Any service that references that drive will prevent the GUI from allowing you to unmount it. It can be mergerfs or any other. You need to remove all references to be able to unmount that disk.

    Your first priority should be to recover that data from the pool and SnapRAID. Check the SnapRAID documentation for how to do this.

    This is the issue I seem to be having. I am unable to mount the replacement drive to the OS to then be able to add it into the SnapRAID setup.

    According to the documentation here, I need to:

    add the replacement drive (now DATA12),

    modify the config to change my SnapRAID 'DATA2' entry from the DATA2 disk to the DATA12 disk (see the sketch below),

    and then I can fix the drive and begin restoring the data.
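    A minimal sketch of what that config change amounts to, assuming the usual snapraid.conf layout (mount paths below are placeholders for my actual ones):

        # /etc/snapraid.conf - before: the DATA2 entry points at the dead disk
        disk DATA2 /srv/dev-disk-by-label-DATA2/
        # after: same SnapRAID disk name, new disk's mount point
        disk DATA2 /srv/dev-disk-by-label-DATA12/

        # then rebuild only that disk from parity
        snapraid -d DATA2 -l fix.log fix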

    If I navigate to the SnapRAID service and attempt to make those changes I am hit with the following screen:



    I don't even know what that text string in the drive box is referring to. None of my disks have that UUID.
    If I click the drop-down box I can see all the other drives, just not the recently re-added 'DATA12' drive.

    The resetperms plugin worked the same in OMV5 as it did in OMV6.

    Check the services, it could be some rsync job or something else. Also check the SMART section.

    There were a couple of old SMART tests with an N/A device, which I removed.
    I tried deleting the shared folders from NFS, but it would not let me and threw up big errors.


    I feel like it is the MergerFS/UnionFS folder causing these issues, as I cannot reset any of the permissions because that volume is not mounted and seemingly cannot be.


    I need to try and restore as much data onto the replacement disk from my SnapRAID config.

    Would I still be able to do this if I were to do a fresh install of OMV?

    I have a few docker containers I use (via Portainer) which I really don't want to lose data from, and I have briefly read that there is a compatibility issue when upgrading 5 > 6.

    Thanks for coming back to me.


    I am on OMV 5.10 by the way - so the readme linked wasn't super clear to me.
    I have reverted both fstab and the snapraid.conf file to include the original disk UUID now.

    I am still having the same issue as described. When I go to remove the N/A drive I can't un-select DATA2 at all to remove it from the pool.

    If I click 'Save' it appears to remove the N/A drive, but when I hit Apply it throws a large error and will not let me apply the changes.

    Looking through the 'File Systems' tab, I can see the old DATA2 drive (by its UUID) still shows as referenced.

    I did go to my shared folders, where there was a folder which referenced the failed disk - I have changed it to DISK4 now. There was also a folder in there called Volume1, which I deleted as well.

    Unfortunately, I am still a bit lost at this point in how to find where this failed disk is still referenced.
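    One thing I can still try is grepping the usual config spots for the old disk's UUID to hunt down leftover references (the UUID below is a placeholder):

        grep "aaaa-bbbb-cccc" /etc/fstab /etc/openmediavault/config.xml /etc/snapraid.conf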

    Any further guidance would be appreciated.

    Hi hive mind,


    I need some assistance on this one please.


    I was in the process of backing up my data to a newly acquired NAS for backup purposes when I had a HDD fail on me.

    Great.
    Tried connecting to it; OMV showed me there were I/O errors - I was unable to read anything from the drive. Bugger. Dead drive.

    Luckily the drive was still under warranty, so I got it replaced with a brand-new drive.


    Plugged in the new drive and was able to mount it under 'File Systems' - formatted it and gave it the exact same name as the old drive - "DATA2".


    Oddly enough, the failed drive still turns up and I can't seem to remove it either.

    I did try replacing the UUID in fstab with the new drive's UUID, and still no luck!
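    For context, the swap I attempted was along these lines (device name and UUIDs are placeholders):

        # find the replacement drive's UUID
        blkid /dev/sdX1
        # /dev/sdX1: LABEL="DATA2" UUID="cccc-dddd" TYPE="ext4"

        # then, in /etc/fstab, point the old entry at the new UUID
        /dev/disk/by-uuid/cccc-dddd /srv/dev-disk-by-uuid-cccc-dddd ext4 defaults,nofail 0 2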


    When going to 'Union Filesystems' - I can see the main storage volume (Volume1) with all the drives in there and one showing N/A - obviously for the old DATA2 drive.


    When I try to edit Volume1 to remove the old DATA2 drive, it does not show up in the list for me to remove.
    When I try and make a new volume, the replacement DATA2 drive does not show up as an option.


    Hopefully, someone can help me and let me know what I've done wrong in this process!

    So following up - I gave this a go.


    Disconnected all drives except my boot drive, turned the server back on and navigated to that folder.

    It showed quite a few errors (obviously) during boot up.


    So I navigated to the folder in question which was there.


    There were quite a few folders in there (with names and structures I was familiar with); however, there appeared to be 0 files in them.

    I used the ls -a command and there seemed to be nothing in any of the folders and subfolders.


    I navigated to the root folder again, /srv/94e87264-55f0-49c3-b699-0efa2dbff704, and did an rm -r of the 2 subfolders in that directory,

    rebooted the server, and it came online!
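    Roughly the commands involved, plus a safety check I'd suggest doing first (the two subfolder names are placeholders for mine):

        # double-check the stale pool mount point really holds no files first
        find /srv/94e87264-55f0-49c3-b699-0efa2dbff704 -type f | head

        # then remove the two leftover directory trees and reboot
        rm -r /srv/94e87264-55f0-49c3-b699-0efa2dbff704/subfolder1
        rm -r /srv/94e87264-55f0-49c3-b699-0efa2dbff704/subfolder2
        reboot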

    I will have to continue to monitor for any oddities that may be in place - I did notice that during this boot there was an error about being unable to rotate the log files(?)

    Hi OMV team,


    I have an issue I have been trying to troubleshoot; however, I am fairly useless at Debian and cannot for the life of me figure out what is breaking and how to fix it.


    Last Friday I found I had no access to my data - my MergerFS volume appeared to be down.
    So I rebooted the server, and it did not come back up like it usually does when I have an issue.


    I've got it hooked up to a screen and keyboard, and it appears that OMV cannot mount the MergerFS volume (I'm guessing).


    Here is the boot screen and result of the command it recommended I type to see what the error is:



    I could really use some help to get my server back up and running again, please!

    Yes, mergerfs just combines the contents of the filesystems you put together.

    Okay. Well when I browse the folder it's completely empty, save for 2 .DS_Store files.


    What setting should I have used for setting up the pool? I used MFS but should I have used EPMFS?
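    For anyone else wondering: MFS and EPMFS are mergerfs create policies, set via category.create on the pool's fstab line. mfs ("most free space") writes new files to whichever branch has the most free space, while epmfs ("existing path, most free space") only considers branches where the parent directory already exists. A placeholder example of where the option lives:

        /srv/disk1:/srv/disk2 /srv/pool fuse.mergerfs defaults,allow_other,category.create=epmfs 0 0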

    Okay then. I have re-done those commands with permissions and I have mounted all the drives.

    What else do I need to configure in OMV? Do I just mount the drives and add them all to a pool as I had before?

    OK, great. You can follow the steps I described above. Start with docker or the data disks, as you like.

    If you now have an additional disk for the docker stuff, start with the data disks and we'll move docker off the root partition later.

    Alright, so I started with docker. I did those 2 folders as you said with a simple "cp -r /mnt/folder/* /folder". The containers all show up. Great!
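    (If I were redoing this, I'd probably use cp -a rather than cp -r so ownership and permissions survive the copy, since docker configs tend to care about them - paths below are placeholders:)

        # -a preserves ownership, permissions and timestamps; the trailing /. copies
        # the folder's contents, including dotfiles
        cp -a /mnt/folder/. /folder/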


    However somehow in my infinite wisdom (/s) I put the /config folder on my DATA1 drive...



    So I've managed to get all my disks plugged in and I can see them all under Storage > Disks. Should I just mount them?


    I'm going to need the SnapRAID and MergerFS config files from the original OS drive, right?

    Sorry. To be clear: I have a fresh installation of OMV with all updates applied on a new 500GB SSD (it was an old 3TB HDD before).

    I would like to have everything running on the SSD with all my previous data and docker containers.


    Yeah, I’m not entirely sure as it has been running for 18 months and continuously online for about 35 days since the last reboot.


    Great, I had a feeling you may have asked me to do that which I’ve gone ahead and done.

    Latest version downloaded and installed; I have checked and installed all the latest updates, as well as OMV-Extras, SnapRAID, and UnionFileSystems, which I know I was using before.

    I have all my data and parity drives still disconnected.

    Alright, I have made a clone of the OS HDD and verified the clone by checking that it shows all the same GRUB boot options as the source HDD.


    Now what?


    As an aside, I would actually like to move the OS to a 500GB SSD which I have on hand - if this makes things any different.

    OK, I am currently booting into Clonezilla Live and will copy the OS drive to another HDD of the exact same size.

    I will post again when it's all complete

    No, I didn't. I remember a guide I used told me to keep it off the MergerFS pool, and the only drive available was the boot drive.


    I do not have a backup of a working boot drive.

    It was unfortunately on my list of things to try and figure out (I was planning on moving it to a 500GB SSD).