Posts by Wek

    ryecoaaron yep, already doing it :) Actually, while I was writing the post I just bit the bullet and tried it, so I'll report back in case anyone else finds themselves in the same situation.

    So I launched it, and everything seems to be running without issue, at least from the plugin's point of view, but now I have another, more important question.

    How can I check the integrity of the backup? What I mean is: is there something in the rsnapshot plugin I can use to be sure the files are correct and the base is solid, i.e. no hard links destroyed, no backup files missing, etc.?

    I'm checking some files manually, but there are 3 TB worth of data and it would be insane to do it by hand. Is there a simpler way?
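
    So far the only quick sanity checks I came up with on my own are these (a rough sketch; /oldbackup, /newbackup and the snapshot names are placeholders for my paths):

    Code
    # If the hard links survived, listing two adjacent snapshots together should be
    # barely bigger than one alone, since du counts hard-linked files only once.
    du -sh /newbackup/daily.0
    du -sh /newbackup/daily.0 /newbackup/daily.1

    # Files with a link count of 1 are not hard-linked into any other snapshot;
    # a flood of these in an older snapshot would suggest the links got broken.
    find /newbackup/daily.1 -type f -links 1 | head

    # Dry-run checksum comparison of one snapshot against the old copy.
    rsync -aHcn --delete /oldbackup/daily.0/ /newbackup/daily.0/

    But if the plugin has something built in for this, even better.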

    Hi everyone, I have a small question about migrating rsnapshot backups from one hard disk to another, and even across different filesystems.

    I have had an rsnapshot backup running for years, but now I have to migrate everything to a bigger hard disk, since the old one is running out of space.
    Looking through various forums and wikis, I concluded that running:

    rsync -azhvHAE /oldbackup /newbackup should do the trick and preserve the hard links.

    And then I can restart the rsnapshot plugin in OMV to keep backing up from where it left off, on the new bigger hard drive.
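
    For the record, this is how I read those flags (my understanding, not gospel):

    Code
    # Copy the whole snapshot tree; -H is the important flag for rsnapshot,
    # since it preserves the hard links between snapshots.
    #   -a  archive (permissions, ownership, timestamps, symlinks)
    #   -z  compress in transit (pointless for a local copy, but harmless)
    #   -h  human-readable numbers, -v verbose
    #   -H  preserve hard links
    #   -A  preserve ACLs
    #   -E  preserve executability (note: on macOS rsync -E means xattrs instead; -X is the explicit xattr flag)
    # Mind the trailing slashes: without them the tree ends up in /newbackup/oldbackup/.
    rsync -azhvHAE /oldbackup/ /newbackup/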


    My question is: what about when you migrate to a different filesystem?


    Let's say the main hard disk where the original data lives was formatted in ext4, and I need to migrate all the original NAS files to a ZFS filesystem. Would reactivating the rsnapshot backup still work if I manually change the "root folder" of the rsnapshot backup to the proper pool name?

    Or does that not make sense, because rsnapshot will not recognize that these are basically the same files with only the root directory changed?
    So basically: is there a way to migrate an rsnapshot backup in this situation, or is it better to forget about the old backup and start a new one from scratch on the new filesystem?
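
    What I imagine the change would look like, if it works at all (just a sketch; the mount point of the new pool is a placeholder):

    Code
    # /etc/rsnapshot.conf (or wherever the OMV plugin keeps its config)
    # As far as I understand, rsnapshot only looks at snapshot_root and hard-links
    # each new snapshot against the most recent one via rsync --link-dest, so in
    # theory pointing it at the copied tree on the new pool should be enough.
    # (note: rsnapshot.conf wants tabs, not spaces, between parameter and value)
    snapshot_root	/tank/backup/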

    Yep, unfortunately I have to give access to external collaborators, and they are not techy enough to set up a VPN on their own, which would make it painful :D Moreover, I don't want most of them to have access to the whole network; that's the only reason I went the Nextcloud route, otherwise I would already have done it through a VPN.

    Well, thanks anyway, at least now I know all my options. At this point I have to think about whether it's better to use something proprietary like Google Drive or OneDrive instead, share just that stuff in the cloud, and bypass the issue altogether :)

    I will think through the risks and benefits of a proprietary cloud versus exposing my LAN to the internet.

    Thank you very much, you have been really helpful.

    Yep, I was using NPM as well, then I tried Cloudflare Tunnel, and it sucks, so I'm back to the idea of using NPM and opening ports. I'm not that fond of putting it out publicly though, since I've seen the Nextcloud issues on GitHub and the team seems really slow to patch bugs; I don't really know if it is worth it.

    Is there a way to mask the IP, instead of putting a service on the open internet just like that, with an open port on the router?
    I was looking at people using a VPS etc., but it seems a mess, and accessing it through Tailscale or something like that is not really doable for the average user.

    So I guess the only way here is to expose the real IP to the internet, unless I'm missing some other way to do it properly.

    chente ok, I've hit a roadblock with this setup. I've tried so many things so far, ending up with Cloudflare Tunnels or a reverse proxy through nginx with Cloudflare DNS, but both of these scenarios have a HUGE problem: Cloudflare's 100 MB upload limit, especially through the Nextcloud desktop app, and I think the web interface has the same issue.

    So how did you solve it? Just using port forwarding and exposing port 443 of nginx-proxy to the public with a real IP doesn't seem that great from a security standpoint.

    But the 100 MB limitation of the Cloudflare Tunnel or DNS proxy makes Nextcloud useless... is there any other way?

    crashtest nope, this would be overkill for the use case at the moment.


    It's fine, they have plenty of offsite backups of the important data, so if the server dies for any reason they can keep working.

    In the future I'm planning to experiment with building another machine with Proxmox as a base; in that case I can think about clusters and so on, but not for the moment :)

    You put into words exactly my experience, crashtest. When I used SnapRAID back then I wasn't impressed; I had multiple errors on syncs and the like.

    It's always better to ask though, as I used it many years ago and you never know, but yes, I had already ruled it out. The part on mergerfs was especially interesting, as I didn't know anything about it; I'm always happy to learn new stuff :)

    I will go for the full ZFS experience then, hoping the server will not crap itself :D

    Yep, maybe I didn't explain myself correctly: the SMR disks were from a previous owner and I'm about to replace them all with CMR disks; that's why I was asking about a more "friendly" migration.


    About the ECC RAM, the server already has it, so my point was that, since that RAM is already there, it could be even more worthwhile to switch to ZFS as well. The system has 32 GB of RAM, which is plenty for the number of users with ZFS and the few services active on the server, so that shouldn't be a problem.

    I guess I'll go the ZFS route then, test it out for a few weeks to better understand the ZFS tools, and then migrate everything.
    Plus I like the fact that, from what I see, I can easily manage and check ZFS integrity, instead of using something like mergerfs+SnapRAID.


    I don't know why, but SnapRAID never really convinced me when I tried it in a previous build.

    So my real question right now is just whether to go with a mergerfs+SnapRAID configuration or take the ZFS route altogether.


    I guess for something work-related (not home usage) ZFS would be better. I was a bit worried about it being part of a plugin system, as you said (for maintenance longevity), but I guess it's something that will keep being ported to new OMV versions.

    Krisbee this is exactly why I'm asking :) So thank you for your input, really useful. I also hope the testing phase will help me figure out whether it's worth it. To be honest I lean towards ZFS, since data integrity and a more robust, "easier" way to manage a ZFS pool for restores appeals to me more than Btrfs; being able to pool different hard disks doesn't really make much difference for me.

    I'm not worried about recreating arrays through the CLI if something fails in ZFS, as long as I at least get a notification from OMV about it. If I'm not mistaken, you are saying monitoring and notifications are supported in the GUI for ZFS, am I right?

    So it seems the way to go. The migration is what concerns me the most; in fact, I had hoped there was a simpler way. It also needs to be fast, since I don't have many days to do the process, only a weekend.


    From your message, ZFS still seems the better option: more features for data integrity, especially with the ECC RAM, notification and monitoring of issues from OMV if I'm not mistaken, and recreating the ZFS array from the CLI seems acceptable.


    I nonetheless have to replace the ext4 RAID, as I discovered the previous admin used SMR disks for it, which is not great for RAID... So I guess in the end an easy migration wouldn't be possible even if I kept the old system intact, since the UUIDs of the new disks would be different anyway, and a "forced fail of the RAID" to rebuild it onto new disks, with the SMR ones still involved, is not that great a solution either...

    So I guess PAIN is the only doable way :D


    mmmmmh

    Thank you Quacksalber, I will look into this as well since it seems cool, although the problem here was precisely that OMV doesn't support snapshots and RAID management through the GUI, so I guess I will go the ZFS way.
    My last concern about all of it:

    Is there a way to "migrate" the fstab configuration to the new ZFS pool, or to the new Btrfs filesystem, without recreating every link or mount inside the GUI for the applications/services/plugins that were using a specific UUID?

    What I mean is: after all the tests I will do, how can I switch smoothly to the new filesystem without making OMV lose its mind?

    • As of now I have my main OS on a separate NVMe; it also contains the users' home folders in an ad hoc partition and Docker in another partition, all formatted in ext4. That will remain the same and I will keep it this way, so no problem here.
    The problem arises when I have to migrate all the users'/NAS data to the new pool and reconnect everything to the new UUID path.

    • The files themselves will of course be copied with rsync to preserve ownership etc.; that will be easy (is there any faster way that I'm not thinking of?)
    • Docker compose/container wise, it should be straightforward to migrate to the new pool, as I use the "global.env" file, so I guess I will just change the UUID in the variables to the newly created pool and everything should be back online out of the box (see the sketch right after this list).
    • What about the shared folders of the Samba service, and all the other plugins? How can I reconnect them to the new pool/UUID smoothly? Is the only way to recreate all the shares from scratch on the new pool, or is there a faster way, maybe by modifying fstab?
    • Basically, what would be the fastest way to migrate Samba shares/plugins/everything that points/maps to the specific old ext4 UUIDs over to the new pool that will be created?
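
    For the Docker part, what I had in mind is something like this (just a sketch; the pool name, mount paths and the old UUID are placeholders from my setup):

    Code
    # find the old mount point and the new pool's mount point
    blkid
    zfs list -o name,mountpoint        # once the new pool exists

    # in my global.env the data paths hang off a single base-path variable, so in
    # theory swapping the old ext4 mount path for the new pool's mount point is enough
    sed -i 's|/srv/dev-disk-by-uuid-OLD_UUID|/tank/nasdata|g' global.env
    docker compose up -d               # recreate the containers with the new paths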

    crashtest definitely, I get you, the ZFS snapshot feature seems perfect, but again I would use it just for myself, as it involves going into the shell, making the .zfs folder visible, retrieving the file and then hiding everything again; that would require my intervention.
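
    (For reference, the shell steps I mean are roughly these; the dataset and snapshot names are placeholders:)

    Code
    zfs set snapdir=visible tank/nasdata    # show /tank/nasdata/.zfs/snapshot/ in listings
    cp -a /tank/nasdata/.zfs/snapshot/SNAPNAME/path/to/file /tank/nasdata/path/to/file
    zfs set snapdir=hidden tank/nasdata     # hide it again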

    The 4th-drive backup solution, instead, will basically be set up already with the File Browser plugin, so they can recover whatever they want without my help, ever.

    And of course I will keep the ZFS snapshots for myself, to recover from a more serious disaster if I need to :)

    Seems doable

    To be honest, my backup concern was more from the perspective of the "non-techy" users, exactly the ones who will use the NAS.


    Some people on the team have a tendency to delete files they shouldn't touch, and every time they call me to make their files reappear; not even the Samba share's recycle bin is enough to mitigate their armageddon.

    ...so my idea, to get rid of the whole hassle of managing every single cry and act of destruction, is to create a rotating backup on another hard disk and, with the File Browser plugin, give them practical access to the backup so they can retrieve whatever they manage to destroy.
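
    Roughly what I have in mind, if I stick with the rsnapshot plugin for this (a sketch; the paths and retain counts are placeholders, not a final config):

    Code
    # back up the pool's data to the dedicated backup disk, keeping a rotation
    # of snapshots the users can then browse via the File Browser plugin
    snapshot_root	/srv/backupdisk/rsnapshot/
    retain	daily	7
    retain	weekly	4
    backup	/tank/nasdata/	nasdata/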

    All of this without giving them any access to the OMV GUI or, even worse, to the shell to manage ZFS :°D

    I guess this way it would be almost idiot-proof, so they can't cause too many issues, plus I keep the ZFS snapshots for myself for when they really, really, really mess up... other than that... only GOD can save them :D

    Great, Krisbee, thank you very much for the heads-up, which also answered my other question; sounds like a good plan then. ACLs are not a problem in this case since I tend not to use them: even though I know my way around them, I prefer to keep it simple, so I just use plain permissions on the shared folders for the users, and so far that's been sufficient for what production needs, but good to know :)

    I really appreciate all your input, it sounds like I will give it a whirl then.
    Also thanks for the explanation of the Proxmox kernel, I was wondering what that was all about.
    From your comments it seems like a nice addition for managing ZFS.

    Thank you crashtest for the detailed explanation. The ZFS approach seems really good and interesting; I will fire up a test machine and study it before deploying, but if it's as well integrated in OMV as you say, it seems a great solution.

    Last question: from what I understood from you, using ZFS snapshots makes it redundant to set up a hard disk dedicated solely to backups with the rsnapshot plugin (or borgbackup), or am I misreading your words? So the setup from my OP, with a 4 TB hard disk whose only function is backing up the files in the pool, would be redundant in this case if I use ZFS snapshots?

    Thank you very much Quacksalber. I already run Btrfs on my Linux workstation, so I'm somewhat familiar with it; I just didn't know if it was now mature enough to be integrated into OMV. From what I remember from a while ago, Btrfs wasn't fully supported in the GUI yet for things like RAID construction/reconstruction and snapshots, and I had to do that through the shell instead.

    Has this situation changed in OMV 7?

    And thanks also to crashtest. I was evaluating ZFS as well, especially in conjunction with ECC RAM, to build something solid for data integrity.
    My main concern is that, from what I can see, ZFS in OMV is only supported through an omv-extras plugin for OpenZFS, is that right?

    How is your experience with it? I'm a bit concerned about the future development and stability of something built on an external plugin; can you tell me about your experience?

    Also, what is the support situation? Are the RAID/restoration/snapshot features manageable from the GUI, or is that something that can only be done through the shell (I saw your documentation on automatic snapshots, thanks)?

    Kind regards, really useful food for thought :)

    chente I was following the documentation you gave me, especially the Nextcloud one, but I'm a bit stuck, as the nginx + nextcloud-aio containers are not working properly.

    I'm looking into it to understand what I'm doing wrong.
    I have a couple of questions though, since the guide doesn't really explain this part well; could you give me any hints?

    1. I tried to launch the nginx-proxy-manager container with an ad hoc nginx user as PUID and PGID, so it doesn't run with root privileges. Could that be a problem? Is it better to leave it alone and run it as root?


    2. In the nginx configuration itself, to make it work with nextcloud-aio, it says to configure it this way:

    Code
    Adjust localhost or 127.0.0.1 to point to the Nextcloud server IP or domain depending on where the reverse proxy is running. See the following options.
    
    On the same server in a Docker container
    For this setup, you can use as target host.docker.internal:$APACHE_PORT instead of localhost:$APACHE_PORT. ⚠️ Important: In order to make this work on Docker for Linux, you need to add --add-host=host.docker.internal:host-gateway to the docker run command of your reverse proxy container or extra_hosts: ["host.docker.internal:host-gateway"] in docker compose (it works on Docker Desktop by default).
    Another option and actually the recommended way in this case is to use --network host option (or network_mode: host for docker-compose) as setting for the reverse proxy container to connect it to the host network. If you are using a firewall on the server, you need to open ports 80 and 443 for the reverse proxy manually. By doing so, the default sample configurations that point at localhost:$APACHE_PORT should work without having to modify them.

    I think this is a big roadblock for me: I tried entering what it says, but apparently it gives me an error unless I just put localhost. But then, when I tried to connect to the server over the internet, it gave me a 502 error page.

    So I'm definitely doing something wrong.
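
    If I understand the docs correctly, the compose file for the proxy would need something like this (my attempt; the image tag, volume paths and admin port mapping are my assumptions, not from the guide):

    Code
    services:
      nginx-proxy-manager:
        image: jc21/nginx-proxy-manager:latest
        restart: unless-stopped
        ports:
          - "80:80"
          - "443:443"
          - "81:81"                               # NPM admin UI
        extra_hosts:
          - "host.docker.internal:host-gateway"   # so the proxy can reach the host
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt

    and then, inside NPM, the forward host would be host.docker.internal with the forward port $APACHE_PORT, as in the quoted docs. Is that right?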

    3. Does the router need to have ports 80 and 443 open for TCP only, or UDP as well?

    4. For the Nextcloud container, instead, I don't see any options to make it run unprivileged. Does it need to run as root as well? I don't see any PUID/PGID variables for that container.

    Hi, I'm about to build a new NAS for the first time in a long while; my last OMV configuration has basically been going strong for 6 years, so it's been some time since I built one from scratch.

    It basically consists of 3 hard drives in RAID 1 (1+1 and one hot spare, 1 TB each) for the users' data and documents, plus another 2 TB hard drive for backups through rsnapshot, all of them formatted in ext4.

    As of now I think this configuration is not the best, and I wanted to ask for some suggestions for a build that would make sense for my use case.

    What I want to achieve is enough space for the users to store documents and files (no video, no Jellyfin, nothing like that; it will mostly be used as a NAS to serve files internally for work, plus a couple of services like DuckDNS, nginx and Nextcloud for remote collaborators).

    I was wondering what, as of today, is the best configuration I can achieve with the hardware I have.

    My constraints are:

    3x WD Red hard disks, 2 TB each (which I would like to use for data and/or redundancy; I might get a 4th one if absolutely necessary, but I would rather not)

    1x 4 TB IronWolf hard disk (which I would like to use to back up all the data from the pool)


    • My questions are: which technology should I use?

      For the data: RAID? Which kind? Or mergerfs+SnapRAID? Or even ZFS?! And which format: ext4, Btrfs, something else?

      For the backup on the biggest drive, is rsnapshot still the best way to go, or is there something better?


      Thank you in advance for any suggestions :)