Posts by TijuanaKez

    So I'd been rocking this setup (subfolder) for years on my ISP-supplied router and everything was great.


    I recently switched to pfSense, copied over the same port forwarding rule and corresponding firewall rule, and I just get this.

    Nothing else has changed.

    I have a bunch of other port forwarding rules in the NAT and they all work fine.


    I don't know where to start troubleshooting because Nextcloud isn't generating any log errors, and nginx/access.log and nginx/error.log remain empty after I cleared them. The only thing I have to go on is this error in the firewall logs.

    Code
    "/usr/local/www/nextcloud/index.php/204" failed (2: No such file or directory), client: 10.0.0.x, server: , request: "GET /nextcloud/index.php/204 HTTP/1.1", host: "abcdefg.duckdns.org"

    The Docker container logs for nextcloud, nextclouddb, and swag all report normal operation and no errors.

    Anyone care to point me at what to investigate next?


    Note: I tagged onto this thread for context, but I'm on the latest OMV6 update.


    UPDATE:


    Solved. NAT reflection has to be set to Enabled (NAT + Proxy) for it to work on the private LAN.

    Okay, for anyone else needing to do this: if you follow those instructions you'll likely end up with a bunch of write-permission and lock-file errors, because cp -R won't preserve permissions.


    So don't do this.

    Code
    cp -R /var/lib/docker /srv/some-uuid/docker

    but this instead.

    Code
    sudo rsync -avzh /var/lib/docker /srv/some-uuid/docker
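
    For reference, the -a (archive) flag is what does the heavy lifting here: it preserves permissions, ownership, timestamps and symlinks, which plain cp -R does not. If I were doing it again I'd probably also add -H and -X, since Docker's storage backend makes use of hard links and extended attributes, and drop -z, which buys nothing on a local copy. A sketch, same paths as above:

    Code
    # -a keeps permissions/ownership/timestamps/symlinks,
    # -H keeps hard links, -X keeps extended attributes,
    # -z (compression) is pointless for a disk-to-disk copy
    sudo rsync -aHXvh /var/lib/docker /srv/some-uuid/docker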

    Needed to do this just now, and it mostly worked, though there is no "Click Install Docker" for my version of OMV6 (latest).

    Only a "Reinstall Docker".


    All the Docker containers are running and their paths point to the new drive, which is good.


    But now the mariadb container is complaining it doesn't have permission to write to /var/tmp:


    Code
    2024-05-08 [ERROR] mariadbd: Can't create/write to file '/var/tmp/ibXXXXXX' (Errcode: 13 "Permission denied")
    2024-05-08 [ERROR] InnoDB: Unable to create temporary file; errno: 13
    2024-05-08 [ERROR] mariadbd: Can't create/write to file '/var/tmp/ibXXXXXX' (Errcode: 13 "Permission denied")
    2024-05-08 [ERROR] InnoDB: Unable to create temporary file; errno: 13

    I manually ran chmod 777 /var/tmp from within the container's bash terminal, and it fixed it for now, but from what others have said this might break again on the next boot.
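
    One idea (untested, so treat the compose snippet as a sketch) is to declare /var/tmp as a tmpfs mount for the MariaDB service, so it comes up fresh and world-writable on every container start instead of relying on a manual chmod surviving a recreate:

    Code
    # snippet for docker-compose.yml; the service name is a placeholder,
    # and Docker's tmpfs mounts default to mode 1777 (world-writable)
      mariadb:
        ...
        tmpfs:
          - /var/tmp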


    Any ideas?

    Can't figure out why accessing OMV6 SMB shares isn't working from an Ubuntu 22.04.2 LTS machine.


    • The NAS shares show up in Nautilus; the SMB, NFS, AFP and SSHFS versions all appear.
    • I can access the shares via SSHFS (root really)
    • I can access the shares via AFP from the Ubuntu machine just fine.
    • I can access the shares via SMB on the Mac systems on my network just fine
    • I can access SMB shares on another NAS on my network running OMV5 just fine
    • I can access SMB shares on a MacBook on my network just fine.
    • The latest cifs-utils is installed
    • The shares just have the standard permissions they get when creating a new shared folder and adding it as a Samba share in OMV


    Interestingly, I also get connectivity issues trying to access the NFS shares: "Unable to access location, mount point doesn't exist".

    Just trying to remove the NFS shares and turn it off, but I get a dependency error.


    A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
    [ERROR ] retcode: 1
    [ERROR ] A dependency job for nfs-server.service failed. See 'journalctl -xe' for details. in /usr/share/php/openmediavault/system/process.inc:220


    Both sudo journalctl -u nfs-server and systemctl status nfs-server.service


    give the same incredibly helpful cyclical error


    Code
    systemd[1]: Stopping NFS server and services...
    systemd[1]: nfs-server.service: Succeeded.
    systemd[1]: Stopped NFS server and services.
    systemd[1]: Dependency failed for NFS server and services.
    systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
    systemd[1]: Dependency failed for NFS server and services.
    systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
    systemd[1]: Dependency failed for NFS server and services.
    systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
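
    What I'm trying next is to work out which dependency actually failed, since nfs-server.service itself only reports 'dependency'. Just diagnostics, nothing destructive:

    Code
    # list failed units and nfs-server's dependency tree
    systemctl --failed
    systemctl list-dependencies nfs-server.service
    # likely suspects on OMV seem to be rpcbind, nfs-mountd,
    # proc-fs-nfsd.mount, or an /export bind mount whose source
    # shared folder no longer exists
    journalctl -b -u nfs-mountd -u rpcbind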

    Just a question out of curiosity really.

    But say, e.g., I have a drive with the wrong partition type or filesystem, and it needs a complete reformat: a fresh partition table and a quick format of an ext4 partition covering the whole disk.

    If I do that in GParted, or let the installer do it as part of an Ubuntu install, the format happens more or less instantaneously.

    I could then mount that ext4 partition in OMV and be good to go.


    Conversely, if I put that drive in my NAS and do a Wipe, then a Filesystem Create, I end up watching inode counts tick by for a good 15-20 minutes on a 4TB drive.


    What's happening, and is it really necessary?
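
    My own guess (and it is only a guess, I don't know what options OMV actually passes) is lazy inode table initialisation: desktop tools create ext4 with lazy init enabled, so the inode tables are zeroed in the background after the first mount, whereas a mkfs run with lazy init disabled writes them all up front before returning. Something like the difference between these two (placeholder device name):

    Code
    # fast: inode tables and journal initialised lazily after mounting (default)
    mkfs.ext4 /dev/sdX1
    # slow: everything written out up front before the command returns
    mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX1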

    Trying to figure out some stability issues with my NAS, and I need to monitor uptime reliably to see whether changes are having a positive or negative effect on how long it stays up before it KPs (kernel panics).


    Problem is, the OMV uptime seems to be getting cleared whenever I have to manually reset the NAS.


    E.g. if I'm monitoring system uptime per day/week, I can see up to 9 hrs. I go to bed, the system KPs some time during the night, so I reset it.


    I go back to OMV6 Performance Statistics > Uptime and the Max Uptime for the day and week only reports 3 hrs, where it said 9 hrs the night before.
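
    In the meantime I'm thinking of logging uptime myself, outside of OMV's graphs, so a reset can't wipe the history. Something as simple as a cron entry appending to a file (paths are just an example):

    Code
    # /etc/cron.d/uptime-log: record uptime every 5 minutes
    */5 * * * * root /usr/bin/uptime >> /var/log/uptime-history.log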


    Any ideas?

    So S.M.A.R.T. monitoring is telling me a 4TB disk is about to go. The raw reallocated sector count is at about 200, but slowly climbing every day.


    I bought a new NAS drive to replace it, have mounted it, and am currently using "rsync -avxHAX" to copy the folder structure over.


    The drive in question has all the appData folders for the Docker containers, associated volumes, etc., so I need everything to stay put during the transition.
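
    My rough plan (happy to be corrected) is to let that first pass run while everything is still up, then stop the containers and do a final catch-up pass with --delete so the copy is exact before swapping the mounts over. A sketch, with placeholder paths:

    Code
    # first pass can run while services are live; then, once it finishes:
    docker stop $(docker ps -q)                                # stop everything touching the data
    sudo rsync -avxHAX --delete /srv/old-disk/ /srv/new-disk/  # final exact sync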


    What's the recommended way to make this transition as painless as possible?


    Thanks.

    Is there a recommended way to transition a data drive to a larger one without messing up mount paths, etc.?


    I have a 2TB drive that contains all the shared folders etc. that the Docker containers use. (OMV itself is on MMC.)

    I just want to swap that out for a 4TB.

    I could just rsync all the files over, but how would I then have the new drive mount in place of the old one without hiccups?


    The box is headless (no display output), so Clonezilla would require SSH and a bit of a learning curve (for me).

    I'd rather rsync the files over; I just need to know how to tell the system to mount the new drive in place of the old one and keep the same mount points.


    Just from seeing disk/by-uuid a lot in various config files, I think this might not be straightforward.
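
    One idea I had (no idea if it's the sanctioned way, so treat it as a sketch) is to copy the data over and then stamp the old filesystem's UUID onto the new ext4 filesystem, since the mounts reference /dev/disk/by-uuid. Placeholder device names:

    Code
    # note the old filesystem's UUID
    blkid /dev/sdOLD1
    # after copying the data, give the new (unmounted) filesystem the same UUID;
    # only do this with the old drive removed, otherwise the UUIDs collide
    sudo tune2fs -U <old-uuid> /dev/sdNEW1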


    Thanks.

    do you have the directory /srv/ftp?

    No I don't; the only things in /srv/ are the disk-by-uuid drives.

    Not sure why it's set to that; the only thing I've ever done with FTP is add one share.


    Should I manually change this to a directory that does exist? What should that directory be on a standard OMV5 installation?


    EDIT:


    OK, I just created that folder and everything appears to work now. Strange that it's the default setting, yet OMV doesn't appear to create the folder automatically, at least not for me.
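
    For anyone else hitting this, the fix is just to create the directory the default DefaultRoot points at, e.g.:

    Code
    sudo mkdir -p /srv/ftp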

    Everything fails with this...


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color proftpd 2>&1' with exit code '1': hell6:
    ----------
    ID: configure_proftpd_mod_core
    Function: file.managed
    Name: /etc/proftpd/proftpd.conf
    Result: True
    Comment: File /etc/proftpd/proftpd.conf updated
    Started: 16:05:43.955376
    Duration: 283.115 ms
    Changes:
    ----------
    diff:
    ---
    +++
    @@ -43,56 +43,3 @@
    </Limit>
    </Directory>

    -<IfModule mod_auth.c>
    - DefaultRoot /srv/ftp
    - MaxClients 5
    - MaxLoginAttempts 1
    - RequireValidShell on
    - # This option is useless because this is handled via the PAM
    - # pam_listfile.so module, so set it to 'off' by default.
    - UseFtpUsers off
    -</IfModule>
    -<IfModule mod_auth_pam.c>
    - AuthPAM on
    - AuthPAMConfig proftpd
    -</IfModule>
    -<IfModule mod_ban.c>
    - BanEngine off
    - BanControlsACLs all allow user root
    - BanLog /var/log/proftpd/ban.log
    - BanMessage Host %a has been banned
    - BanTable /run/proftpd/ban.tab
    -</IfModule>
    -<IfModule mod_ctrls.c>
    - ControlsEngine on
    - ControlsMaxClients 2
    - ControlsLog /var/log/proftpd/controls.log
    - ControlsInterval 5
    - ControlsSocket /run/proftpd/proftpd.sock
    -</IfModule>
    -<IfModule mod_ctrls_admin.c>
    - AdminControlsEngine off
    -</IfModule>
    -<IfModule mod_delay.c>
    - DelayEngine on
    -</IfModule>
    -<IfModule mod_facl.c>
    - FACLEngine on
    -</IfModule>
    -<IfModule mod_quotatab.c>
    - QuotaEngine off
    -</IfModule>
    -<IfModule mod_ratio.c>
    - Ratios off
    -</IfModule>
    -LoadModule mod_vroot.c
    -<IfModule mod_vroot.c>
    - VRootEngine on
    - VRootLog /var/log/proftpd/vroot.log
    - VRootAlias "/srv/dev-disk-by-uuid-a188658e-405e-4eee-8ffd-5f8e37dc68e7/Storage/" "Storage"
    -</IfModule>
    -<IfModule mod_wrap.c>
    - TCPAccessFiles /etc/hosts.allow /etc/hosts.deny
    - TCPAccessSyslogLevels info warn
    - TCPServiceName ftpd
    -</IfModule>


    error.txt

    So my RAID went down, but instead of rebuilding I figure I might take this opportunity to expand it.


    But in the meantime, while I'm waiting for new drives, how can I mount a single-drive backup of the RAID filesystem in place of the actual RAID in OMV?


    As in, I have a 6TB drive that holds an exact clone of the entire filesystem, made via rsync. I'd like to mount it so that all the shared folders connect to the right place, etc.
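
    What I'm tempted to try (just a sketch, and I'm not sure how OMV's database feels about it) is mounting the backup drive at the path the RAID filesystem used under /srv, so the shared folder paths keep resolving. Placeholder device name and mount path:

    Code
    # identify the backup drive's partition and filesystem
    blkid /dev/sdX1
    # mount it where the RAID filesystem used to live
    sudo mount /dev/sdX1 /srv/dev-disk-by-uuid-<old-raid-uuid>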


    Thanks.

    run mdadm --readwrite /dev/md0


    However I would run a long smart test on that /dev/sdc just in case it's breaking down

    That doesn't do much; it goes into read-write on the first write anyway.


    But none of this restores the superblock on that drive; it still says it's missing, and the array won't assemble or mount on reboot.


    The self-test says the drive is OK.


    They are aging drives, but they are quality WD Gold and still have good SMART reports.


    Is there not a way to force a rebuild of the superblock on one drive?


    And if the mdadm -D /dev/md0 report is as pasted (4 working drives), I'm definitely using all 4 drives and not falling back on parity, right?
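
    From what I've read (and I'm happy to be told this is wrong), the usual way to get fresh metadata written onto a single member is to remove it from the array and add it back, which makes mdadm write a new superblock and resync onto it. I'm not running this until I understand why the disk looks so odd, but for the record, something like:

    Code
    # remove the suspect member and re-add it so mdadm rewrites its
    # metadata and resyncs; this triggers a full rebuild of that member,
    # so double-check device names and have a backup first
    mdadm --manage /dev/md0 --fail /dev/sdc
    mdadm --manage /dev/md0 --remove /dev/sdc
    mdadm --manage /dev/md0 --add /dev/sdc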

    So after first thinking there was something wrong with SMB, I realised the RAID filesystem had mysteriously gone offline after a shutdown and power-up.


    On a fresh boot, the RAID filesystem doesn't mount, and various commands tell me that /dev/sdc is missing its superblock.


    On fresh reboot:


    So /dev/sdc is missing its superblock.

    But if I now run mdadm --assemble --scan -v



    It seems to assemble fine, with no complaints. It says all drives are online. I can mount it and all my files are there.


    Yet still,

    mdadm --examine /dev/sdc

    mdadm: No md superblock detected on /dev/sdc.


    root@h4:~# systemctl status mdadm
    ● mdadm.service
    Loaded: masked (Reason: Unit mdadm.service is masked.)
    Active: inactive (dead)


    Running fdisk -l does not list /dev/sdc.
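
    For reference, the next things I plan to check (purely diagnostic) are whether the kernel sees /dev/sdc at all and whether the array member is actually the whole disk or a partition on it:

    Code
    cat /proc/mdstat                # which devices md is currently using
    lsblk -o NAME,SIZE,TYPE,FSTYPE  # does the kernel list sdc and any partitions?
    mdadm --detail /dev/md0         # member list after the manual assemble
    mdadm --examine /dev/sdc*       # whole disk plus any partitions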



    Is there a way to restore the superblock on sdc and have it mount at boot as before?


    Thanks.