Posts by theedo

    OK... my bad.


    I had too many of the boxes checked.
    If I leave it set to just archive mode, it works even without the protocol option.


    I had to turn off "extended attributes", "preserve ACLs", and "preserve modification times".


    Since archive mode already preserves modification times, and the Iomega's ancient rsync can't handle ACLs or xattrs anyway, no biggie.
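    For anyone who finds this later: per the rsync man page, archive mode is shorthand for a bundle of flags, and ACLs/xattrs are not part of it:

    Code
    # -a (archive) equals -rlptgoD (no -H, -A, -X):
    #   recursive, symlinks, perms, times, group, owner, devices/specials
    # -A (ACLs) and -X (xattrs) are separate flags, and as far as I can
    # tell the rsync 2.6.9 on the Iomega predates -X entirely
    rsync -a /source/ /destination/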


    Consider this PEBKAC issue solved.


    Just in case it's unfamiliar: PEBKAC = problem exists between keyboard and chair.

    I can't seem to get OMV 4's rsync plugin to push a backup to a server on the same network.


    If I run the command manually in a shell, it works just fine.
    I had to use --protocol=29 for backwards compatibility with the rsync daemon on an Iomega StorCenter ix2-200 on the same network. The Iomega is EOL, so I can't expect an update.


    Running rsync --version on both reveals the following:

    Code
    OMV 4  = rsync version 3.1.2, protocol version 31
    Iomega = rsync version 2.6.9, protocol version 29


    As I said previously, I can SSH into OMV and run the following, and it works just fine:

    Code
    rsync -avz --protocol=29 /srv/dev-disk-by-label-somelabel/someshare/ user@host::module



    I can't seem to get this to work in the plugin.


    Under extra options I have tried putting in --protocol=29, which didn't work, and --protocol='29', which also didn't work.



    I could set up a cron job via Webmin to run it manually, but I would really like the convenience of the plugin.
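    For the record, the cron entry would look something like this (just a sketch using the command from above; the schedule is arbitrary):

    Code
    # /etc/cron.d/iomega-backup -- nightly push at 02:00 (sketch)
    0 2 * * * root rsync -avz --protocol=29 /srv/dev-disk-by-label-somelabel/someshare/ user@host::module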



    Can anyone point me in the right direction?



    Thanks in advance.

    Here's my issue:


    My system disk died. I was still on OMV 2, so I decided it was time to upgrade to OMV 4. All my data was backed up, so data loss was not an issue.


    While I was at it, I changed my two RAID 6 arrays to SnapRAID and mergerfs. Other than the time it took to copy data back from my backup after flattening one of the RAID arrays, and the time the initial sync took, everything went fine.


    I added the plugins I wanted (MySQL, autoshutdown, and connect-to-VPN-on-boot), configured them, and everything is running as it should.


    Now I need to get my media apps going, whose plugins are deprecated in OMV 4, so it's time to install Docker. I enable the repo, install the plugin, and get errors. I've installed, reinstalled, and searched for solutions, all to no avail.


    I figured I would try a clean OMV 4 install again and see if that helps. The issue? The fresh install will rename my drives, and I will have to do a forced sync again (SnapRAID noob here). I just don't want to wait that long again.
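    (By forced sync I mean something like the following, if I'm reading the SnapRAID manual right; it recomputes all the parity from scratch:)

    Code
    # sketch -- full parity rebuild after the disks get new names/UUIDs
    snapraid -F sync    # -F / --force-full; very slow on 8 x 2 TB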


    Is there a way to back up my SnapRAID config, mergerfs config, etc., and copy the info back after a fresh install?
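    Something like this is what I have in mind, assuming the standard file locations (I'm not sure which of these the plugins actually regenerate on their own):

    Code
    # sketch -- stash the configs off the system disk before reinstalling
    mkdir -p /backup/omv-config                           # /backup is just an example path
    cp /etc/snapraid.conf /backup/omv-config/             # SnapRAID config, if the plugin keeps it here
    cp /etc/openmediavault/config.xml /backup/omv-config/ # OMV config database (plugin settings live in it)
    cp /etc/fstab /backup/omv-config/                     # mounts, incl. the mergerfs pool line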


    My system is an older Supermicro rack server.
    My SnapRAID is 8 x 2 TB content/data and 3 x 3 TB parity. I have taken the second RAID 6 array's 8 x 1 TB drives offline and will decommission them once I get OMV 4 set up to my liking.


    Any advice/suggestions?


    I'm open to trying to get Docker going, but at this point I'm tired of trying, at least until I do a fresh OMV install to see if that helps. I'm tired of the "Failed to execute XPath query '/config/services/docker'" error.
    I found several threads that talk about it, but none of the suggested fixes worked for me.
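    The error reads like the docker node is simply missing from OMV's config database; this is how I checked (the path is the OMV default, the command is just a sketch):

    Code
    # sketch -- prints the node if it exists, nothing if the XPath has no match
    xmlstarlet sel -t -c "/config/services/docker" /etc/openmediavault/config.xml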


    Thanks in advance.

    OK... let me start by saying that although I have been using Linux for more than a decade, I have learned just enough to make me dangerous.
    I love OMV and have been using it as a media storage server for a couple of years now.


    Until this past weekend (May 10 & 11) I had an 8 x 1 TB RAID 6 array that was functioning just fine: 5.46 TB formatted, EXT4 over LVM.
    I say "had" because over the weekend I attempted to grow the array by 2 x 1 TB drives.


    During the grow process something happened that caused the whole array to disappear from the OMV web interface. I know the array is still there, because when I SSH in and run some commands it shows up, but it's not active. Having read many different posts over the last few hours, I thought I should ask some direct questions, because, being an idiot, I don't have a backup, and I'm hoping I can bring up even a degraded array at least long enough to copy off the data so I don't lose it. The array only held about 1.5 TB of data out of its 5.46 TB.


    Like I said... just enough to be dangerous. Anyway, here goes:


    One drive developed issues during the grow.


    There was no md127 before the grow. SMART shows sdf with 230 errors that developed during the grow, and predicts imminent failure.

    Code
    md127 : inactive sdf[8](S)
          976761560 blocks super 1.2
    
    md0 : inactive sdb[0] sdg[9] sdk[7] sdj[6] sdh[5] sdi[4] sde[3] sdd[2] sdc[1]
          8790854040 blocks super 1.2
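
    From the threads I've read, the usual next step seems to be a forced assemble without the failing drive, something like this. I have NOT run it yet; the device names are from my mdstat above, and I gather --force on a half-grown array is risky, so please correct me if this would make things worse:

    Code
    # sketch only -- stop the half-assembled arrays, then try a forced
    # assemble from the nine healthy members (everything except sdf)
    mdadm --stop /dev/md0
    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md0 /dev/sd[b-e] /dev/sd[g-k]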


    The UUID of the array seems to have changed. I believe OMV generates the media directory entry from the UUID, and it no longer matches what's in fstab.

    Code
    root@nas:~# mdadm --examine --scan
    ARRAY /dev/md/md0 metadata=1.2 UUID=69098bc2:b0c27cc8:97388544:52bf6a73 name=nas:md0
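
    While comparing, I realized (if I understand correctly) that the mdadm array UUID and the filesystem UUID that fstab references are two different things; these are the commands I used to look at both (sketch):

    Code
    # sketch -- the array UUID (mdadm) is not the filesystem UUID (blkid/fstab)
    mdadm --examine /dev/sdb | grep -i uuid   # array UUID, read from a member disk
    blkid                                     # filesystem UUIDs -- what fstab matches on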



    fstab


    If anyone can point me in the right direction, I'd greatly appreciate it. And I promise, from this day forward, I will keep backups. Why? Because RAID (even RAID 6) is no substitute for proper backups. Lesson learned.