Posts by DaveOB


    Thanks a lot. Turns out in Scheduled Tasks, I can change the '20' to 15, 35 and 55, as each minute has a tick box to select.
    So it's still 1 scheduled task, set to run on the minutes of 15, 35 and 55.
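
    For anyone doing this outside the OMV tick boxes, the equivalent crontab entry would presumably look something like this ( using my existing command ) :

    Code
    # run at 15, 35 and 55 minutes past every hour
    15,35,55 * * * * docker exec nginx php /config/www/MYdata/MYfile.php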

    I have an existing Scheduled Task that executes a php script :

    Code
    docker exec nginx php /config/www/MYdata/MYfile.php


    The Scheduled Task is set to run 'every N minute' = 20, so it runs :

    on the hour

    20 past the hour

    40 past the hour


    Problem is our country has scheduled power outages that start exactly on the hour at pre-scheduled times.


    So I want to run the task, still at 20 minute intervals, but not starting 'on the hour', so at :

    15 past the hour

    35 past the hour

    55 past the hour


    Is it possible to do this ?

    update :

    Found I was missing the autoload.php line at the start - but still not working :


    I'd just add the following to your Dockerfile:

    Thanks neubert

    I did ( to a point ) get it working late last night, and I think I have recalled the steps correctly and listed them below :



    I now have, in my MYdata folder ( where all my php script files reside that I can run from my browser ), the following :


    Now it breaks my test php script, which uploads the file to the SFTP server ( all previously working with phpseclib v1 ) :

    It seems to be so close, but unless I can find more brain cells, it seems the solution is constantly just out of reach.
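
    For reference, this is the sort of thing I am trying to end up with - a minimal phpseclib v3 upload sketch, assuming Composer's vendor/autoload.php sits alongside the script ( the host, login and file paths are placeholders ) :

    Code
    <?php
    // load the Composer autoloader ( generated by composer in this same folder )
    require __DIR__ . '/vendor/autoload.php';

    use phpseclib3\Net\SFTP;

    $sftp = new SFTP('sftp.example.com', 22);        // placeholder host
    if (!$sftp->login('myuser', 'mypassword')) {     // placeholder credentials
        exit('SFTP login failed');
    }

    // upload the local data file to the remote server ( placeholder paths )
    $sftp->put('/remote/path/MYdata.csv', '/config/www/MYdata/MYdata.csv', SFTP::SOURCE_LOCAL_FILE);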

    pi > docker > nginx > php > install Composer for phpseclib


    Really need some expert guidance here. I have been researching and reading and I just can't see this clearly with my limited knowledge and understanding.


    I have a Pi4 with OMV6 and Docker, and Portainer in my browser.


    As far as I understand it, the Docker setup consists of Containers, and each Container operates as a separate entity.


    One of the Containers is for nginx. nginx runs a web server and uses php.


    I have a shared folder on the pi ssd, at /srv/dev-disk-by-label-NAShd1/www

    This directory contains the php scripts that I can access from my browser, like : http://192.123.1.123/MYdata/phpinfo.php


    All working great so far.


    One of my php scripts creates a data file ( 1 - 4 MB ) that I need to upload to 2 remote FTP servers.

    The 2 remote FTP servers recently changed so that uploads can only be done using SFTP.


    To do the SFTP upload, I need to use the phpseclib library.

    Version 3 of the phpseclib library requires 'Composer' to install and use it.


    So I am completely lost on where or how to install 'Composer'.

    Do I ssh to the pi IP and do it there ?

    or in a different directory ?

    or do I need to Console in to the nginx Docker Container and install there ? which directory ?
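
    In case it helps frame the question, this is my rough guess at the steps, done from a console inside the nginx Container and in the folder that holds the php scripts ( a stripped-down version of the installer steps from getcomposer.org, skipping the checksum verification - I don't know if this is the right place to run them ) :

    Code
    # open a shell inside the nginx Container ( use sh if bash is not present )
    docker exec -it nginx bash

    # inside the container, go to the folder holding the php scripts
    cd /config/www/MYdata

    # download and run the Composer installer, then pull in phpseclib v3
    php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    php composer-setup.php
    php composer.phar require "phpseclib/phpseclib:~3.0"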

    Using Pi4 ( on LAN ), OMV6, Docker with nginx. Also have Portainer running.


    I am using nginx as a php server for a php script ( upload a local file to a remote FTP server ) that I need to run on a regular basis ( using OMV6 > Scheduled Tasks ). It's all been running great for a long time.


    Now the server that I need to upload to has changed, and FTP is not working - I have to change to SFTP on port 22.


    I can connect and upload from my win PC using Filezilla, so I know the user / pass and paths are all correct.


    Trying to modify the php script code to work with SFTP, but research tells me I need to

    Code
    install php-ssh2


    Where / how do I do this ?

    Do I need to use the Console in the nginx Container, Putty, or something else ?


    I am a novice and don't have a lot of experience with Linux.


    Many Thanks for any guidance.
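
    For context, this is roughly the kind of code the php-ssh2 extension would let the script use once it is installed ( the host, login and remote path below are placeholders ) :

    Code
    <?php
    // connect and authenticate over SSH ( needs the ssh2 PECL extension )
    $conn = ssh2_connect('sftp.example.com', 22);          // placeholder host
    if (!$conn || !ssh2_auth_password($conn, 'myuser', 'mypassword')) {
        exit('SFTP connection / login failed');
    }

    // open the SFTP subsystem and write the local file to the remote path
    $sftp = ssh2_sftp($conn);
    $remote = 'ssh2.sftp://' . intval($sftp) . '/remote/path/MYdata.csv';
    file_put_contents($remote, file_get_contents('/config/www/MYdata/MYdata.csv'));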

    I have a Pi4 with OMV6

    Operating System: Raspbian GNU/Linux 11 (bullseye)

    Kernel: Linux 5.10.103-v7l+


    The OS is on the SD card, and ( I assume ) this includes all the docker containers, etc.


    The data ( shared folders, SMB, etc ) are on a USB connected SSD ( in an external case ).



    The SSD ( when plugged in to my win pc and viewed in Disk Manager ) shows only a single partition of 447 GB ( 480 GB SSD ).


    Using Pi > OMV > Storage > File Systems, it shows /dev/sda1 as type : EXT4


    using : sudo fdisk -l

    Code
    Disk /dev/sda: 447.13 GiB, 480103981056 bytes, 937703088 sectors
    Disk model: nal USB 3.0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: **************

    Device     Start       End   Sectors   Size Type
    /dev/sda1   2048 937703054 937701007 447.1G Linux filesystem


    I want to keep all the existing data on the SSD and not have to go thru the process of setting everything up again. Is this possible ?


    If I look in Cloud Commander, the shared folders are all listed on the SSD here :

    /mnt/fs/srv/dev-disk-by-label-NAShd1/


    Had a discussion on Discord in the general Pi group, and it seems that the best approach would be :

    step 1 - clone the existing SD card to a new SSD.

    step 2 - copy the data from the old SSD to the new SSD.


    But I don't understand WHERE to copy the data to.
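
    For the copy in step 2, I gather it would be something along these lines ( the mount points here are placeholders for wherever the old and new data partitions end up mounted, not a recommendation on where the data should live ) :

    Code
    # copy everything from the old SSD's data partition to the new one,
    # preserving ownership, permissions, ACLs, extended attributes and timestamps
    sudo rsync -aAXv /mnt/old-ssd/ /mnt/new-ssd/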


    Ideally, I don't want to have to redo the OMV setup, and want OMV to still see the files just like it did when the OS was on the SD card and the data files / folders were on the old SSD.


    Is this possible ?


    I have nginx, duplicati, and a few other docker containers that all work with those data files.

    Using ACL permissions in general causes problems. You should only use them in certain situations where there is no alternative to achieve what you need. In most cases you don't need them.

    I have published the long explanation of this several times in different threads. I guess if you do a search you should find one of those threads.

    Thank You. I'm just going to run with your statement "In most cases you don't need them.", as it appears the Permissions on the Privileges screen do the job.

    Thank You for the link. It was a very interesting and informative read.


    Unfortunately, I am still left with the basic question of WHY we have Permissions ( R/W, R, No Access ) on both the Privileges screen AND the ACL screen. From what I understand of the linked explanation, Privileges should be used and ACL should not, yet the examples in the document show the ACL permissions being used, without mentioning whether the exact same thing can be done using the Privileges Permissions.

    Is there a simple man's explanation for this ?

    The more I read, the less it makes any sense.


    I have OMV6 with a number of Shared Folders.


    Under the USERS section of OMV6 :

    I have Users for 4 family members ( parent1, parent2, child1, child2 - let's call them P1, P2, C1, C2 )

    I have 2 Groups - GroupParents ( GP ) ( with P1 and P2 ) and GroupKids ( GK ) ( with C1 and C2 )


    The Storage > Shared Folders > ACL page has the settings for Admin / Users / Other.

    If I understand correctly, these are 'general overall default' settings. So if the Folder ACL is set to 'Users = Read/Write/Execute' ( R/W/X ), then all users ( P1, P2, C1, C2 ) will have R/W/X for this Folder, UNLESS the Privileges for the User are set otherwise.


    Through experimentation, it appears that any changes I make to the "R/W - W - No Access" settings in OMV6 > Users > Users > C1 ( or any specific user ) are also shown in OMV6 > Shared Folders > Privileges for the Folder.


    So my understanding (?) is that :

    if I'm adding a new User, and want to set Privileges for each folder, it would be easiest to do this using the OMV6 > Users > Users > 'new user name'.
    or

    if adding a new Folder, then use OMV6 > Storage > Shared Folders > Privileges to set access for all the known users for the new folder.


    I still don't understand the point of having all the "R/W - W - No Access" settings in the ACL of each Shared Folder. Isn't this going to contradict / clash / override the settings already created in the Privileges ?


    Then with Groups : if a Group is set to 'No Access' for a Folder, but one of the Group members ( as a User ) is set to 'Read/Write' for the same folder, which setting takes precedence ?

    Likewise, if a Group is set to 'Read/Write' but a group member ( as a user ) is set to 'No Access', does that user get access to that folder ?
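
    From my reading so far, the ACL screen seems to map to filesystem-level POSIX ACLs on the folder itself - the kind of thing set with setfacl - whereas Privileges seem to control access at the share / service level. An illustrative example of what I mean ( the user, group and folder name are made up, not my real setup ) :

    Code
    # give user C1 read/write and group GK read-only on one shared folder
    setfacl -m u:C1:rwx,g:GK:r-x /srv/dev-disk-by-label-NAShd1/SomeFolder
    # list the resulting entries
    getfacl /srv/dev-disk-by-label-NAShd1/SomeFolder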

    Update :
    Couldn't get it working, even though the ISP had allocated me a Static IP the week before.
    It took them a while to confirm that CGNAT was not enabled.

    Eventually removed the mikrotik hap lite router from the setup, as I no longer needed it for the purpose it was originally installed for.

    Forwarded port 51820 to the IP for the Pi on the Zyxel router, and everything worked perfectly.

    Side note : I used a data connection on my android phone to test, and ran WireGuard on the phone to connect to the VPN on the Pi.
    At first I couldn't see any files in the SMB shares, then I found X-plore File Explorer for android - an awesome app that allowed me to set up a LAN connection to the Pi Shared folders, as well as FTP connections to my other domains.

    Also found that if I enabled Hotspot on the phone, my laptop could connect to the phone over wifi. If I also enabled WireGuard on the laptop, I could not access the files on the Pi. I think having WireGuard active on the phone and the laptop at the same time was conflicting. Turned off WireGuard on the laptop and then I was able to easily see the files on the Pi ( from the laptop ).

    On the laptop, I couldn't see the Pi's files on the mapped network drives that I have set up. I had to go to File Explorer > Network > Pi to see the files.

    Is WireGuard the right solution for this ?


    I have a Pi4 running OMV6 and docker, with Containers for Nginx and Portainer, and Shared Folders which I can access on my LAN using SMB and FTP ( Filezilla ).


    Very happy and it's very stable.


    The Pi has a Static IP address allocated by the network Mikrotik HAP Lite router. My Fixed Fiber internet connection has a Static IP allocated by my ISP.


    I use a different port number on the Pi IP for OMV6, Portainer, FTP, SSH ( Putty ), etc


    I want to be able to access all of these items when away from home, either from my Laptop, or my Android phone.


    Any pointers on how to set this up ? If I activate a WireGuard Client on my laptop or phone, does that mean that all internet comms for those devices will be channeled thru the vpn ?
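
    From what I have read so far, whether all traffic goes through the vpn seems to be controlled by the AllowedIPs line in the client config - a sketch of what I mean ( the keys, addresses and LAN subnet are placeholders ) :

    Code
    [Interface]
    PrivateKey = <client private key>
    Address = 10.10.10.2/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = my.static.ip:51820
    # AllowedIPs = 0.0.0.0/0 would send ALL traffic through the tunnel;
    # listing only the home LAN subnet sends just that traffic through it
    AllowedIPs = 192.168.1.0/24
    PersistentKeepalive = 25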

    I think it's solved.

    As a precaution, I exported each of the duplicati backup profiles that I had to my local pc.

    Based on the info in this page :

    Problems with duplicati in docker on raspi
    forum.duplicati.com


    I changed the stack editor to :

    image: duplicati/duplicati # ghcr.io/linuxserver/duplicati


    Did the Update and it pulled the new image, updated, and left me with Duplicati with zero profiles in the list.


    Busy importing the profiles from the backups I saved before starting the process.
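
    For reference, the relevant part of the stack now looks roughly like this ( only the image line actually changed ; the ports and volume paths here are illustrative, not my exact setup ) :

    Code
    version: "3"
    services:
      duplicati:
        image: duplicati/duplicati        # was: ghcr.io/linuxserver/duplicati
        container_name: duplicati
        ports:
          - 8200:8200                     # Duplicati web UI
        volumes:
          - /srv/dev-disk-by-label-NAShd1/appdata/duplicati:/data   # illustrative config path
          - /srv/dev-disk-by-label-NAShd1:/source:ro                # illustrative backup source
        restart: unless-stopped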

    Have you tried to redeploy the stack? You don't have to change anything, just go to the stack and redeploy it. When you redeploy, a popup will come up asking you if you want to repull the image and redeploy. Click the trigger on that, and I would think that would upgrade you.

    Went in to Portainer > Stacks > Duplicati > Editor, and clicked on Actions : Update the Stack.
    A popup asked 'Do you want to force an update of the stack' and I set that to On.

    A few seconds later an error appeared :


    Failure

    Failed to pull images of the stack: duplicati Pulled no matching manifest for linux/arm/v7 in the manifest list entries.