Posts by DaveOB

    I'd just add the following to your Dockerfile:

    Thanks neubert

    I did ( to a point ) get it working late last night, and I think I have recalled the steps correctly and listed them below :



    I now have, in my MYdata folder ( where all my php script files reside that I can run from my browser ), the following :


    now it breaks my test php script, which uploads the file to the SFTP server ( all previously working with phpseclib v1 ) :

    It seems to be so close, but unless I can find more brain cells, it seems the solution is constantly just out of reach.
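For comparison, a minimal phpseclib v3 upload looks roughly like the sketch below. The host, credentials, and file paths are placeholders, and it assumes Composer's autoloader ( the vendor folder ) sits next to the script:

```php
<?php
// minimal phpseclib v3 SFTP upload sketch - host, user, pass and paths are placeholders
require __DIR__ . '/vendor/autoload.php';

use phpseclib3\Net\SFTP;

$sftp = new SFTP('sftp.example.com', 22);
if (!$sftp->login('username', 'password')) {
    exit('SFTP login failed');
}

// upload a local file to the remote server
$sftp->put('/remote/path/data.csv', '/local/path/data.csv', SFTP::SOURCE_LOCAL_FILE);
```

The main change from v1 is the namespaced class ( phpseclib3\Net\SFTP ) and loading it via Composer's autoloader instead of an include path.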

    pi > docker > nginx > php > install Composer for phpseclib


    Really need some expert guidance here. I have been researching and reading and I just can't see this clearly with my limited knowledge and understanding.


    I have a Pi4, OMV6, with Docker. Portainer in my browser.


    As far as I understand it, Docker consists of Containers, and each Container operates as a separate entity.


    One of the Containers is for nginx. nginx runs a web server and uses php.


    I have a shared folder on the pi ssd, at /srv/dev-disk-by-label-NAShd1/www

    This directory contains the php scripts that I can access from my browser, like : http://192.123.1.123/MYdata/phpinfo.php


    All working great so far.


    One of my php scripts creates a data file ( 1 - 4 mb ) that I need to upload to 2 remote FTP servers.

    The 2 remote FTP servers recently changed so that uploads can only be done using SFTP.


    To do the SFTP upload, I need to use the phpseclib library.

    Version 3 of the phpseclib library requires 'Composer' to install and use it.


    So I am completely lost on where or how to install 'Composer'.

    Do I ssh to the pi IP and do it there ?

    or in a different directory ?

    or do I need to Console in to the nginx Docker Container and install there ? which directory ?
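For what it's worth, Composer is normally run inside the container that executes the PHP, in the folder where the scripts live, so the vendor/ directory ends up next to them. A rough sketch of the steps ( the container name "nginx" and the web-root path are assumptions - adjust to your setup ):

```shell
# open a shell inside the container that runs php ( container name is an assumption )
docker exec -it nginx sh

# inside the container: go to the folder that holds the php scripts
cd /config/www/MYdata

# download Composer and pull in phpseclib v3
curl -sS https://getcomposer.org/installer | php
php composer.phar require phpseclib/phpseclib:~3.0
```

Because the folder is bind-mounted from the Pi, the resulting vendor/ directory also appears under the shared folder on the SSD.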

    Using Pi4 ( on LAN ), OMV6, Docker with nginx. Also have Portainer running.


    I am using the nginx as a php server for some php script ( upload a local file to a remote FTP server ) that I need to run on a regular basis ( using the OMV6 > Scheduled Tasks ). All been running great for a long time.


    Now the server that I need to upload to has changed, and FTP is not working - I have to change to SFTP on port 22.


    I can connect and upload from my win PC using Filezilla, so I know the user / pass and paths are all correct.


    Trying to modify the php script code to work with SFTP, but research tells me I need to

    Code
    install php-ssh2


    Where / how do I do this ?

    Do I need to use the Console in the nginx Container, Putty, other ?


    I am a novice and do not have a lot of experience with linux.


    Many Thanks for any guidance.
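If the php-ssh2 extension route is the one taken, it has to be installed inside the container that runs PHP, not on the Pi itself. A hedged sketch, assuming an Alpine-based linuxserver.io image ( the package name varies with the image and PHP version - check with `apk search ssh2` first ); note that phpseclib is a pure-PHP alternative that needs no extension at all:

```shell
# open a shell in the container ( name is an assumption )
docker exec -it nginx sh

# Alpine-based images ( package name is an assumption, verify with: apk search ssh2 )
apk add php81-pecl-ssh2

# Debian-based images would instead use:
# apt-get update && apt-get install -y php-ssh2
```

After installing, restart the container so PHP picks up the new extension.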

    I have a Pi4 with OMV6

    Operating System: Raspbian GNU/Linux 11 (bullseye)

    Kernel: Linux 5.10.103-v7l+


    The OS is on the SD card, and ( I assume ) this includes all the docker containers, etc.


    The data ( shared folders, SMB, etc ) are on a USB connected SSD ( in an external case ).



    The SSD ( when plugging in to my win pc and viewed in Disk Manager ) shows only a single partition 447 Gb ( 480 Gb SSD )


    Using Pi > OMV > Storage > File Systems, it shows /dev/sda1 as type : EXT4


    using : sudo fdisk -l


    Code
    Disk /dev/sda: 447.13 GiB, 480103981056 bytes, 937703088 sectors
    Disk model: nal USB 3.0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: **************

    Device     Start        End    Sectors   Size    Type
    /dev/sda1   2048  937703054  937701007  447.1G   Linux filesystem


    I want to keep all the existing data on the SSD and not have to go thru the process of setting everything up again. Is this possible ?


    If I look in Cloud Commander, the shared folders are all listed on the SSD here :

    /mnt/fs/svr/dev-disk-by-label-NAShd1/


    Had a discussion on Discord in the general Pi group, and it seems that the best would be :

    step 1 - clone the existing SD card to a new SSD.

    step 2 - copy the data from the old SSD to the new SSD.


    But I don't understand WHERE to copy the data to.


    Ideally, I don't want to have to redo the OMV setup, and want the OMV OS to still see the files just like it did when the OS was on the SD card and the data files / folders were on the old SSD.


    Is this possible ?


    I have nginx, duplicati, and a few other docker containers that all work with those data files.

    Using ACL permissions in general causes problems. You should only use them in certain situations where there is no alternative to achieve what you need. In most cases you don't need them.

    I have published the long explanation of this several times in different threads. I guess if you do a search you should find one of those threads.

    Thank You. I'm just going to run with your statement "In most cases you don't need them." as it appears the Permissions on the Privileges screen do the job.

    Thank You for the link. It was a very interesting and informative read.


    Unfortunately, I am still left with the basic question of WHY do we have Permissions ( R/W, R, No Access ) in both the Privileges screen AND on the ACL screen. From what I understand in the linked explanation, Privileges should be used, ACL should not be used, yet the examples in the document show the use of the ACL permissions, without mentioning if the exact same thing can be done using the Privileges Permissions.

    Is there a simple man's explanation for this ?

    The more I read, the less it makes any sense.


    I have OMV6 with a number of Shared Folders.


    Under the USERS section of OMV6 :

    I have Users for 4 family members ( parent1, parent2, child1, child2 - let's call them P1, P2, C1, C2 )

    I have 2 Groups - GroupParents ( GP ) ( with P1 and P2 ) and GroupKids ( GK ) ( with C1 and C2 )


    The Storage > Shared Folders > ACL page has the settings for Admin / Users / Other.

    If I understand correctly, these are 'general overall default' settings. So if the Folder ACL is set to 'Users = Read/Write/Execute' ( R/W/X ) then all users ( P1, P2, C1, C2 ) will have R/W/X for this Folder, UNLESS the Privileges for the User is set otherwise.


    Through experimentation, it appears that any changes I make to the "R/W - R - No Access" settings in OMV6 > Users > Users > C1 ( or any specific user ) are also shown in OMV6 > Shared Folders > Privileges for the Folder.


    So my understanding (?) is that :

    if I'm adding a new User, and want to set Privileges for each folder, it would be easiest to do this using the OMV6 > Users > Users > 'new user name'.
    or

    if adding a new Folder, then use OMV6 > Storage > Shared Folders > Privileges to set access for all the known users for the new folder.


    I still don't understand the point of having all the "R/W - R - No Access" settings in the ACL of each Shared Folder ? Isn't this going to contradict / clash / override the settings already created in the Privileges ?


    Then with Groups, if a Group is set to 'no access' to a Folder, but one of the Group members ( as a User ) is set to "Read/Write' for the same folder, which setting takes preference ?

    Likewise if a Group is set to 'Read/Write' but a group member is ( as a user ) set to 'No Access' then does that user get access to that folder ?

    Update :
    Couldn't get it working, even though the ISP had allocated me a Static IP the week before.
    Took them a while to confirm that cgnat was not enabled.

    Eventually removed the mikrotik hap lite router from the setup as I no longer needed that for the purpose that it was originally installed.

    Forwarded port 51820 to the IP for the Pi on the Zyxel router, and everything worked perfectly.

    Side note : I used a data connection on my android phone to test, and ran WireGuard on the phone to connect to the VPN on the Pi.
    At first I couldn't see any files in the SMB shares, then found X-plore File Explorer for android - awesome app that allowed me to setup LAN connection to the Pi Shared folders, as well as FTP connections to my other domains.

    Also found that if I enabled Hotspot on the phone, my laptop could connect to the phone over wifi. If I also enabled WireGuard on the laptop, I could not access the files on the Pi. I think having WireGuard active on the phone and the laptop at the same time was conflicting. Turned off WireGuard on the laptop and then I was able to easily see the files on the Pi ( from the laptop ).

    On the laptop, it wouldn't see the Pi's files on the mapped network drives that I have set up. I had to go to File Explorer > Network > Pi, so I could see the files.

    Is WireGuard the right solution for this ?


    I have a Pi4 running OMV6, docker, Containers for Nginx, Portainer, and Shared Folders which I can access on my LAN using SMB and FTP ( Filezilla ).


    Very happy and it's very stable.


    The Pi has a Static IP address allocated by the network Mikrotik HAP Lite router. My Fixed Fiber internet connection has a Static IP allocated by my ISP.


    I use a different port number on the Pi IP for OMV6, Portainer, FTP, SSH ( Putty ), etc


    I want to be able to access all of these items when away from home, either from my Laptop, or my Android phone.


    Any pointers on how to set this up ? If I activate a WireGuard Client on my laptop or phone, does that mean that all internet comms for those devices will be channeled thru the vpn ?
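On the last question: it depends on the client's AllowedIPs setting. 0.0.0.0/0 routes all of the device's traffic through the tunnel; listing only the home LAN subnet gives a split tunnel where normal internet traffic bypasses the VPN. A sketch of the relevant part of a client config ( keys, addresses, and the subnet are placeholders ):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.10.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = <your-static-ip>:51820
# all traffic through the VPN:
# AllowedIPs = 0.0.0.0/0
# only traffic for the home LAN ( split tunnel ):
AllowedIPs = 192.168.1.0/24
```

With the split-tunnel setting, only requests to the Pi and other LAN devices go through WireGuard.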

    I think it's solved.

    As a precaution, I exported each of the duplicati backup profiles that I had to my local pc.

    Based on the info in this page :

    Problems with duplicati in docker on raspi
    I am just starting off on my journey through realms of Openmediavault, Docker, Portainer, etc. I have always used images from linuxserver image simply because…
    forum.duplicati.com


    I changed the stack editor to :

    image: duplicati/duplicati # ghcr.io/linuxserver/duplicati


    Did the Update and it pulled the new image, updated, and left me with Duplicati with zero profiles in the list.


    Busy importing the profiles from the backups I saved before starting the process.
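For anyone following along, the relevant part of the stack / compose file then looks something like the sketch below. The volume path, PUID/PGID, and port are placeholders from a typical setup - only the image line is the actual change:

```yaml
services:
  duplicati:
    image: duplicati/duplicati   # was: ghcr.io/linuxserver/duplicati
    environment:
      - PUID=1000
      - PGID=100
    volumes:
      - /srv/dev-disk-by-label-NAShd1/appdata/duplicati:/data
    ports:
      - 8200:8200
```

The official duplicati/duplicati image stores its profiles under /data, which is why the list came up empty after the switch and the exported profiles had to be re-imported.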

    Have you tried to redeploy the stack? You don't have to change anything, just go to the stack and redeploy it. When you redeploy, a popup will come up asking you if you want to repull the image and redeploy. Click the trigger on that, and I would think that would upgrade you.

    went in to Portainer > Stacks > Duplicati > Editor, and Clicked on Actions : Update the Stack.
    Popup asked 'Do you want to force an update of the stack' and set that On.

    Few seconds and an error appeared :


    Failure

    Failed to pull images of the stack: duplicati Pulled no matching manifest for linux/arm/v7 in the manifest list entries.

    I have OMV6 on a Pi4, Portainer, Docker


    In Portainer there's a Stack for Duplicati as well as a Container


    When I browse ( from pc ) to the Duplicati IP, I see the Duplicati UI.


    On the 'About' screen, I see :


    Code
    You are currently running Duplicati - 2.0.5.1_beta_2020-01-18
    Update 2.0.6.3_beta_2021-06-17 is available, download now
    Check for updates now

    Tried both links and neither Updates the version.


    What's the correct procedure to update this app running in a Docker Container ?

    Soma, again Thank You for the input and guidance


    If it helps anyone in the future :

    I have a Pi400 4gb with the Raspberry Pi OS ( with desktop ) running.

    My source drive was 500Gb

    Target drive was 480 Gb ( SSD in a small USB 3 case )


    On the Pi, I used System > Add / Remove Programs

    Searched for, and Added gparted to the Pi

    Searched for, and Added Clonezilla to the Pi


    Used gParted to reduce the source drive partition to 460 Gb

    Opened Terminal, and ran : sudo clonezilla


    I first tried the 'Local Drive -to- Local Drive' option, but it failed as the target drive was smaller.


    Then used the 'Local Part -to- Local Part', to clone the main partition from the source drive to the target drive.


    Process took around 3 hours to complete ( maybe should have been faster - see problem below about drive connection )


    I then tried a reboot, and could not get a reliable connection to the new drive

    Connection was erratic. It would appear, then disappear.

    Tried using a powered hub for the drive - same problems.

    Swapped USB cables out with another identical case - same problems.

    Swapped the SSD to another identical case - same problems.


    Did some google-fu and found that the cases / adaptors that use the JMS578 chips are problematic when used on a Pi. All my cases are identical and all have these chips in the adaptor.


    Eventually stumbled on this page :
    https://forums.raspberrypi.com/viewtopic.php?t=245931


    Did the changes in the article, and 'bingo' - drives load, fast to navigate and read / write, etc.


    I understand that the changes made do have an effect on drive speeds, but considering I have 4 of these enclosures, and am not looking to buy more ( different brands may also very well have the same chips in the adaptors ), I am going to go with this solution.
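For anyone hitting the same thing: the fix in that thread boils down to forcing the kernel's usb-storage driver ( instead of UAS ) for the enclosure, by prepending a quirks entry to the single line in /boot/cmdline.txt. The VID:PID below is the one commonly reported for the JMS578 - treat it as an assumption and confirm yours with lsusb first:

```
usb-storage.quirks=152d:0578:u
```

A reboot is needed after editing cmdline.txt, and the speed penalty mentioned above comes from losing UAS.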

    OMV6 running on Pi4 with external USB 512Gb HD for data storage


    Any step-by-step instructions out there for this ?


    Looking for a way to :


    Clone the USB 512Gb HD ( ext4, around 200Gb used ) to a smaller new 256Gb SSD ( also USB ). Both ext4


    I have both drives mounted in OMV6.

    I have ssh access from my windows laptop using Putty


    I have a load of Shared folders for different family members on the drive, and would really like to not have to set those all up again.

    Also have NGINX running in a Docker Container, and data for that is also on the drive.

    I explained all that in #13 above. Did I waste my time?

    you did not waste your time. I tried your suggestion and it did not work in this case.

    Setting the uid=1000,gid=100,file_mode=0644 had no effect whatsoever. It still mounted as uid 0 / gid 0 and made the nas folder read only.

    I don't know if it was user error, or something else ( Pi400, 64bit OS ? ) that didn't like it, but at least I now have a setup that is working as needed.

    Thank you for your input

    I decided to change direction slightly with this, and instead of mounting the NAS shared SMB folder at bootup, I created a Pi Desktop shortcut to a script to connect ( mount ) to the NAS, and another to disconnect ( unmount ) from the NAS.


    Created a file gonas.sh with the contents :

    Bash
    #!/bin/bash
    sudo mount.cifs //192.168.1.136/LauraOldPC /home/Laura/NASdrive -o credentials=/home/Laura/.smbcreds,noperm

    and a file nasOff.sh containing :

    Bash
    #!/bin/bash
    sudo umount -a -t cifs -l


    Set both files as executable :

    Code
    chmod +x gonas.sh
    chmod +x nasOff.sh


    Also a credentials file ( so the password is hidden ) : .smbcreds with mode 600, containing :


    Code
    user=Laura
    pass=lTY7567GH
    domain=WORKGROUP


    I must add that I struggled with this for hours, as the mount seems to be created as the root user, so the current user ( 'Laura' in the example ) had no permission to copy files to the NAS folder, and NAS based files would open as 'read only'.


    Everything changed and started working perfectly as needed as soon as I added the 'noperm' at the end of the mount line.
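Putting the pieces above together, the whole setup can be scripted in one go. The password is a placeholder ( put your real one in the credentials file ), and the demo writes into /tmp/nas-demo rather than the home directory so it can be tried safely:

```shell
# one-shot version of the setup above; password is a placeholder
DEMO=/tmp/nas-demo
mkdir -p "$DEMO"

# credentials file, readable only by its owner
cat > "$DEMO/.smbcreds" <<'EOF'
user=Laura
pass=CHANGEME
domain=WORKGROUP
EOF
chmod 600 "$DEMO/.smbcreds"

# mount helper ( the noperm option avoids the root-owned-mount problem )
cat > "$DEMO/gonas.sh" <<'EOF'
#!/bin/bash
sudo mount.cifs //192.168.1.136/LauraOldPC /home/Laura/NASdrive -o credentials=/home/Laura/.smbcreds,noperm
EOF

# unmount helper
cat > "$DEMO/nasOff.sh" <<'EOF'
#!/bin/bash
sudo umount -a -t cifs -l
EOF

chmod +x "$DEMO/gonas.sh" "$DEMO/nasOff.sh"
```

For real use, move the two scripts and the credentials file into the user's home directory, as in the post.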


    No, when the user needs something from the NAS ( maybe once a week ), she can Mount, do what she needs, and then unmount at any time.