Posts by kattivius

    "compose file version 3 is not supported"

    I get the above error when I try to deploy the stack.

    ... and above the editor this is shown (among other info):
    "Only Compose file format version 2 is supported at the moment."

    Greetings,

    since this thread already exists, I figure it is better to continue here.
    If you consider it off-topic, I will create a new one.

    I am also interested in Pi-hole.

    I noticed the latest docker-compose file: https://hub.docker.com/r/pihole/pihole

    However, it is version 3.

    I understand (please correct me if I am wrong) that to run a version 3 docker-compose file, I need to install Swarm.

    But I have also read that it can break previously installed version 2 images... or have similar side effects.

    Question:

    1. Is it true?

    2. Is there a safe way to install docker-compose version 3?
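    In the meantime, a version 3 compose file can often be adapted to the version 2 format that Portainer's stack editor accepts, mainly by changing the version key (version 3-only keys such as deploy would have to be dropped). A minimal sketch for Pi-hole; the ports and variables follow the commonly published example, and the host paths and password are illustrative placeholders, so check everything against the Hub page linked above:

```yaml
# Sketch of a version-2 stack for Pi-hole (placeholders, verify before use)
version: '2'
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
    environment:
      - TZ=Europe/Berlin
      - WEBPASSWORD=changeme   # placeholder, set your own
    volumes:
      # host paths are examples; put them on your data disk
      - /srv/dev-disk-by-label-DATA/pihole/etc:/etc/pihole
      - /srv/dev-disk-by-label-DATA/pihole/dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```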

    Thanks

    so, here comes my report! :)

    I installed an X825 for the Pi (http://www.suptronics.com/mini…/images15/X825_1200p1.JPG). I reinstalled OMV with the same procedure.
    Installed the OMV system backup plugin, and at the moment I am copying the image to my local drive.

    I am at 76% and going strong!

    I think we can conclude that the actual issue I had was the SATA case.

    Finally, I am happy to have found the issue.
    In the previous installation I also had other issues with Docker stability... Portainer kept disappearing, and therefore I could not rely on any Docker container.

    I hope and think it is all related to the same SATA enclosure issue.
    I will continue the installation with Docker and Portainer in this new installation and see if it stays stable.

    In that case, all I have to do is trash the case or use it as a local USB drive for my computer.

    I am going to set this as a solved issue.
    If something goes wrong, I will post again and re-open it if possible.

    PS

    Can you tell me how to flag this as solved?
    Can't find any checkbox for it!

    That would suggest a power problem with the Pi. There was another thread on here, linked from Reddit, where a user was running a Pi4 with five 2.5" drives attached to a USB hub. It was some sort of charging hub, but its output was sufficient to power both the drives and the Pi via the Pi's USB-C. I found the thread on Reddit; he was using this hub.

    Well... I don't have 5 drives, just one!
    Nevertheless, I will try the powered USB hub.
    I still consider that 1 SSD drive should be supported by the Pi (5 VDC, 1.6 A requested).
    As I mentioned, on my Nextcloud I have the exact same configuration; however, the storage is not an SSD but an old-type 2.5" HD.
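    One way to confirm or rule out under-voltage on the Pi itself is the firmware's throttling flags. `vcgencmd get_throttled` reports a bitmask; below is a small sketch that decodes the two under-voltage bits. Since `vcgencmd` only exists on the Pi, the example feeds the function a sample value:

```shell
# Decode the bitmask printed by `vcgencmd get_throttled` on a Raspberry Pi.
# Bit 0 = under-voltage right now; bit 16 = under-voltage occurred since boot.
decode_throttled() {
    val=$(( $1 ))
    [ $(( val & 0x1 )) -ne 0 ]     && echo "under-voltage detected now"
    [ $(( val & 0x10000 )) -ne 0 ] && echo "under-voltage has occurred since boot"
    [ $(( val & 0x10001 )) -eq 0 ] && echo "no under-voltage flags set"
    return 0
}

# On the Pi itself you would run:
#   decode_throttled "$(vcgencmd get_throttled | cut -d= -f2)"
decode_throttled 0x50005   # sample value with both under-voltage bits set
```

    If either flag is set while the drive is attached, the power theory is confirmed without any guesswork.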

    Reading what you have just posted, it would 'suggest' there could be an issue with the connectivity of the SSD to the USB. Are you using a SATA-to-USB bridge adaptor?

    I use a SATA 3 case.
    I am also thinking it might be a case issue.
    The label on the SSD disk reads: 5 VDC, 1.6 A.

    That should be OK for a Raspberry Pi!

    I will soon receive a plain SATA 3 to USB 3 cable.

    I will test that as well.

    PS

    I did take the USB case off the Pi and connect it directly to a computer. No issue transferring from the same folder...

    Also some info re drive power consumption

    Reading these posts, it looks like they refer to non-SSD disks.

    For a 2.5" SSD drive it would be quite annoying to need extra power on a Pi, especially when it is the only USB device attached.
    Let's hope that is not the case...
    There is one thing that actually bugs me: I also had Nextcloud with a 1 TB external drive attached, and I have not had these issues for years.
    I would assume that the power consumption is equal if not higher (as that drive is not an SSD). Yet I have no transfer or power issues (also with the same original power supply).

    Shouldn't I have the same power issue with each comparable device?


    The inconsistency of this issue worries me a little!

    I will continue researching and testing.

    The downside to USB storage is that it relies on the USB chipset in the enclosure. In my 'spares' cupboard I have 4 external USB cases; the drives are fine, but they will not work in the case, so my assumption is that the chipset has died. I have a USB docking station that I use primarily for hard drive upgrades, so I can test the drives in that, and 99 times out of 100 the drives themselves are fine.


    EDIT: that's a good point that raulfg3 makes. I'm guessing your WD is 2.5", therefore it would draw power from the USB.

    It's an SSD and should use less power.
    Still, I will make a new installation and use a powered USB hub to check.
    This should cover both a possible Pi power supply issue and the drive's power requirements.

    I also have a second Pi and a second identical drive, plus an expansion board for attaching a drive.
    I will give that a try and see if I have the same problems.

    PS

    I am also using the official Raspberry Pi power supply.

    Hi geaves.

    It's wired (Ethernet).

    But I have noticed a far more disturbing situation.
    I noticed it during previous tests, but I thought it was just a glitch...

    Suddenly my external storage is gone!

    The HD case LED flashes, but OMV does not see the drive.
    I left it for 2 days, 1 of them completely untouched...

    I can't run the test you requested.

    I can re-install everything and do the test before the HD disappears again.

    However, that is a serious issue!
    Any suggestion is welcome.

    I have a WD Blue 3D NAND drive (1 TB).
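    When the drive next disappears, it is worth checking whether the kernel still sees the device at all: that separates a dead USB bridge from an OMV or mount problem. A sketch, run over SSH on the Pi; the label DATA is an example taken from the paths mentioned later in this thread:

```shell
# Commands worth running on the Pi when the storage vanishes:
#   lsblk -o NAME,SIZE,LABEL,MOUNTPOINT      # does the kernel list the disk?
#   dmesg | grep -iE 'usb|sd[a-z]' | tail    # USB resets / disconnect errors?
#   ls -l /dev/disk/by-label/                # is the filesystem label visible?

# Tiny helper: report whether a disk with a given filesystem label is present.
disk_present() {
    if [ -e "/dev/disk/by-label/$1" ]; then
        echo "label $1: device present"
    else
        echo "label $1: device missing"
    fi
}

disk_present DATA
```

    If `lsblk` no longer lists the disk while the enclosure LED still flashes, the problem is below OMV, in the enclosure or its USB bridge.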

    Thanks

    Greetings,

    I am facing a new challenge!

    The situation:

    Unable to transfer 12 GB from a shared folder to a local (Windows) folder.

    Configuration:

    - OMV installed on a Raspberry Pi 4B, 4 GB

    - External 1 TB SSD formatted as EXT4

    - Shared folder OMVBACKUP with Guest Allowed access rights

    Description:

    Over about a week, I made about 3 clean installs. In each of them I wiped (twice) and formatted the disk,
    then created and mounted the filesystem.

    Created a shared folder and an SMB share, Guest Allowed.


    Added the OMV system backup plugin.
    Created a dd full-disk backup job with OMVBACKUP as the target folder.

    Ran the backup.
    Backup successfully created (6 small files and one large 12 GB file).


    Back on the Windows machine, I needed a local copy of the disk image. Started the copy. All the small files copy in the blink of an eye.
    The larger file starts well but does not even reach 1% before the transfer rate quickly drops until it hangs and fails.


    Tests:

    - Upload works fine. I tried uploading a similar or larger file and the upload does not break.

    - Opened my Linux laptop, connected to the share, and tried to download the 12 GB file. Same thing: within seconds the transfer failed.

    - Attached the HD locally and could transfer the file very fast.

    - Changed the network cable; it did not solve the issue.
    - Tested transferring large files from another SMB share on another server in my LAN and had no problem (to rule out a network issue).
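    Two more tests can narrow this down to the disk path on the Pi itself, independent of SMB, the network and Windows. The file name backup.img and the mount path below are placeholders for the actual backup image:

```shell
# 1) Raw read speed on the Pi (bypasses SMB and the network entirely):
#      dd if=/srv/dev-disk-by-label-DATA/OMVBACKUP/backup.img of=/dev/null bs=1M
#    If this also stalls, the USB-SATA path is at fault, not Samba.
# 2) Watch the kernel log during the transfer; a failing USB-SATA bridge
#    usually shows up as device resets:
#      dmesg --follow | grep -iE 'usb|sd[a-z]'

# Helper to turn dd's byte count and elapsed seconds into whole MB/s:
rate_mb_s() {
    echo $(( $1 / $2 / 1024 / 1024 ))
}

rate_mb_s 12884901888 120   # 12 GiB in 120 s -> prints 102
```

    A sustained local read that collapses the same way the SMB copy does would point at the enclosure rather than the share configuration.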


    HELP!

    As it is, I cannot rely on the OMV NAS for backups or any sort of storage.

    Can someone help me identify the problem?

    Thanks

    I must say, I am having quite a hard time making UrBackup work properly.
    I am adding a new client and now I get a storage space error.
    Since I point to /srv/dev-disk-by-label.xxxx/yyyy and that is a new 1 TB SSD, there are no space issues.
    HOWEVER, I have the feeling that I am not setting something up right...
    The following is the stack script, modified to change the database and storage location to the shared folder:
    ===
    version: '2'
    services:
      urbackup:
        image: uroni/urbackup-server:latest
        container_name: urbackup
        restart: unless-stopped
        environment:
          - PUID=1000 # Enter the UID of the user who should own the files here
          - PGID=1000 # Enter the GID of the user who should own the files here
          - TZ=Europe/Berlin # Enter your timezone
        volumes:
          - /path/to/your/database/folder:/srv/dev-disk-by-label-DATA/urbackup-db
          - /path/to/your/backup/folder:/srv/dev-disk-by-label-DATA/backupPC
          # Uncomment the next line if you want to bind-mount the www-folder
          #- /path/to/wwwfolder:/usr/share/urbackup
        network_mode: "host"
        # Activate the following two lines for BTRFS support
        #cap_add:
        #  - SYS_ADMIN
    ===
    In Portainer --> Containers --> Inspect, I see a few lines I am not sure are correct:


    "Mounts": [
      {
        "Destination": "/srv/dev-disk-by-label-DATA/urbackup-db",
        "Mode": "",
        "Propagation": "rprivate",
        "RW": true,
        "Source": "/path/to/your/database/folder",
        "Type": "bind"
      },
    ===
    In the above, the source seems a vague destination. Moreover:
    ===
      {
        "Destination": "/backups",
        "Driver": "local",
        "Mode": "",
        "Name": "b259e5d9b915b242e5659ad728561947dc9956d39407e830f9630951cd107d96",
        "Propagation": "",
        "RW": true,
        "Source": "/srv/dev-disk-by-label-DATA/dockers/volumes/b259e5d9b915b242e5659ad728561947dc9956d39407e830f9630951cd107d96/_data",
        "Type": "volume"
      },
    ===
    Here I notice that the destination is /backups, which I have manually changed in the server as well. A little after this part of the output, the correct folder is named!
    What is really happening?
    Where did I fail to point to the correct storage folder?
    Looking forward to some suggestions...
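    One thing worth checking in the volume lines: in a Docker bind mount the host path goes on the left of the colon and the container path on the right, and in the stack quoted above the real folders under /srv appear on the right side. A corrected sketch, assuming the container-side paths documented for the uroni/urbackup-server image (/var/urbackup for the database, /backups for backup storage; verify against the image's own docs):

```yaml
    volumes:
      # host path (your SSD)            : container path (fixed by the image)
      - /srv/dev-disk-by-label-DATA/urbackup-db:/var/urbackup
      - /srv/dev-disk-by-label-DATA/backupPC:/backups
```

    Read that way, the inspect output also makes sense: with the sides reversed, Docker bind-mounts the literal folder /path/to/your/database/folder as the source, and the image's /backups falls back to an anonymous volume under .../dockers/volumes/, which is exactly what the second mount entry shows.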
    kattivius
    PS
    Apologies for the formatting... I tried to fix it manually; for some reason, copy and paste was stripping the code formatting.

    Don't use the /sharedfolders/ path in Docker. Always use /srv/dev-disk-by-label.xxxx/yyyy
    EDIT: OK, you mentioned that you tried the absolute path as well ;)

    @macom
    Actually, you are correct.
    Now that I have a connection between client and server, I noticed that if I use the SMB shared path, it simply does not work.
    When using the absolute path, it does:
    /srv/dev-disk-by-label.xxxx/yyyy


    But I cannot see any files in the SMB share.

    Yes, sorry! I forgot to delete the -multiarch. I did it in a previous draft of the post :|



    Do I have to modify the path to the database folder?
    I did not modify the database path at all. If I leave it as is, will it get the correct path?
    If not, where is it in OMV?
    Thanks

    You have to specify a path which is valid on your system. If you are running OMV from an SD card, you should choose a folder on your hard drive to reduce writes to the SD card. For example, put it in the folder where your other Docker config folders are located.

    OK.
    UPDATE:
    It seems that my mistake was to accept the default PUID and PGID.
    By default:
    PUID: 1000
    PGID: 100
    It turns out both have to be 1000 (that's for me).


    I also changed the backup path to the absolute path of my /srv/dev-disk-by-label-DATA/backupPC shared folder.
    I did not modify the path to the DB (a question on this at the end of the update [1]).
    Communication and backup are now successful. :thumbup:
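    The right PUID/PGID values can be read straight from the system rather than guessed. A quick sketch; the user name is whatever account owns the shared folder on your OMV box (an assumption, substitute your own):

```shell
# Print the numeric UID and GID of a user; use these as PUID/PGID.
uid_gid() {
    echo "PUID=$(id -u "$1") PGID=$(id -g "$1")"
}

# e.g.  uid_gid youruser
uid_gid root   # root always exists; prints PUID=0 PGID=0
```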


    Strange:
    The backup is successful and I can see it in the logs. HOWEVER... when I open the shared folder in Windows, it is empty.
    When I look at the shared folder in WinSCP, the folder is empty.
    When I check with PuTTY using the absolute path, the files ARE there!
    If I check the shared folder using the SMB shared path, the folder is EMPTY.
    Shouldn't all those paths point to the same location?
    The absolute path and the SMB path are the same folder in the end, correct?
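    Whether two paths really are the same directory can be checked by comparing device and inode numbers; if they differ, the share is exporting a different folder than the one the container writes to. A sketch (the two commented paths are examples based on this setup):

```shell
# Two paths name the same directory only if device and inode numbers match.
same_dir() {
    if [ "$(stat -c '%d:%i' "$1")" = "$(stat -c '%d:%i' "$2")" ]; then
        echo "same directory"
    else
        echo "different directories"
    fi
}

# e.g.  same_dir /sharedfolders/backupPC /srv/dev-disk-by-label-DATA/backupPC
same_dir /tmp /tmp   # prints: same directory
```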


    =====
    [1]
    Regarding the DB path: since I configured all Docker images to live under a docker folder located on my 1 TB external SSD, shouldn't the UrBackup DB also be stored in that Docker folder?
    Or would it still be written to the SD card?
    =====
    regards
