Posts by RPMan


    - Is the system enough to run Proxmox + OMV virtualized with good performance? And another server OS (like Ubuntu Server) just to run Docker or small apps like Transmission independently?

    Yes, it's enough. You can even create an LXC container with Docker in it instead of a full VM.
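    A minimal sketch of that approach from the Proxmox shell, assuming a Debian template; the VMID, template file and storage names are only examples. The nesting and keyctl features are what Docker needs inside an unprivileged container:

        pct create 110 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
          --hostname docker-lxc --cores 2 --memory 2048 \
          --rootfs local-lvm:8 \
          --net0 name=eth0,bridge=vmbr0,ip=dhcp \
          --features nesting=1,keyctl=1 --unprivileged 1
        pct start 110

    After that you install Docker inside the container the same way you would on any Debian host.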



    - Is it possible to mount my RAID1 data disk in OMV? I see that it is possible to "pass" the disk to a VM, but will the OMV VM be able to "see" the RAID without loss of data? Any other solution (without building a new array)?

    Proxmox can see your disks "as is" and you can pass them through to your OMV VM. After that you need to reassemble the RAID, if you know how to do it. I think it can work, but I'm not sure; I don't use RAID.
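    A rough sketch of what that looks like, assuming VM 100 is the OMV VM; the disk IDs are placeholders you would replace with your own entries from /dev/disk/by-id:

        # On the Proxmox host: attach both RAID members to the OMV VM
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_1
        qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DISK_2

        # Inside the OMV VM: re-assemble the existing mdadm RAID1 (no rebuild)
        mdadm --assemble --scan
        cat /proc/mdstat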

    You can also add an HBA card. Proxmox can pass the card through, your disks will be managed directly by your VM, and Proxmox will not see them anymore. I'm pretty sure this solution will work.
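    As a hedged sketch, passing the whole card looks roughly like this (IOMMU/VT-d has to be enabled in the BIOS and on the kernel command line first; the PCI address is an example):

        # Find the HBA's PCI address on the Proxmox host
        lspci | grep -i -e sas -e sata
        # Pass the whole card to VM 100
        qm set 100 -hostpci0 0000:03:00.0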

    There is no performance hit on the OMV VM. It's easier to back up a VM or LXC container than Proxmox itself with Radarr etc. in Docker containers.
    My strategy is to keep Proxmox clean and install everything in VMs or LXC containers. The day my Proxmox HDD fails, I only have to reinstall Proxmox and restore the VMs and containers.
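    That strategy pairs well with vzdump; a minimal example, assuming VM 100, container 110 and a backup storage named "local":

        # Snapshot-mode backup while the guests keep running
        vzdump 100 110 --mode snapshot --compress lzo --storage local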

    Hi RPMan, and thanks for the info. A short question though: since your configuration now uses the bridged local IP and port 7000, does that mean you are accessing the Seafile server on the same IP as the OMV host, but on port 7000?

    Yes.



    which, by my understanding, means the Docker container was trying to make itself available on the same IP as the OMV host on port 80, thus conflicting with OMV's nginx...

    Yes.



    this is one way to do it, but I think I will move to a solution where the container gets its own local IP and I can access it directly on port 80... it seems a bit cleaner and leaner...

    It's totally fine to access Seafile with the OMV IP and port forwarding (8080:80). It's not worse than a dedicated IP...
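    For illustration, that kind of port forwarding is just the -p flag on docker run; the image name and data path here are only examples:

        docker run -d --name seafile \
          -p 8080:80 \
          -v /srv/dev-disk-by-label-data/seafile:/shared \
          seafileltd/seafile

    The container still listens on port 80 internally; you reach it on the OMV host IP at port 8080.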

    People can either use docker-compose or write a script or manually start the containers or something else.

    Agreed (my first post).


    If you set the container to start automatically and reboot the system, the plugin has nothing to do with how the containers start. Just trying to make it clear that the plugin is not causing this problem. You would have this same problem if you configured everything from the command line.

    Agreed too.


    I'm only pointing out that TechnoDadLife's first post was not entirely accurate as written, and I added some additional information.


    ps1: I work as a DevOps Engineer, I'm a Docker Certified Professional, and I give Docker courses for my company. I have deployed thousands of Docker containers on large-scale systems (OpenStack, GCP, AWS) --> I understand what we are talking about.


    ps2: Do you know why this option is called "volumes_from" in the OMV plugin instead of "depends_on"? With Docker, "volumes_from" tells Docker to use a volume that is already defined/used by another container.

    It does exactly what the docker run command does. This isn't a problem with the plugin. This is easy with docker-compose.

    I'm not saying the plugin has a problem, but it cannot implement real dependencies between services. It's the same behaviour as depends_on in docker-compose v2: you can order container startup (no longer true with compose v3), but you cannot be sure the application inside a container has truly finished starting unless you use healthchecks or scripts (which the OMV docker plugin does not offer).
    A container is in state 'running' even if the application inside it has not finished booting.


    @TechnoDadLife: The only reason you think it works is that your database comes up quickly and/or Nextcloud (maybe) retries the DB connection for X seconds.
    The day your database takes longer to boot than Nextcloud, you will see: your Nextcloud container will stop with errors (or keep restarting with restart=always). It is not the right way to do it, especially with more complex dependencies.
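    A minimal docker-compose sketch of handling this properly, assuming a Postgres-backed Nextcloud; "condition: service_healthy" works with compose file format 2.1, and the credentials are placeholders:

        version: "2.1"
        services:
          db:
            image: postgres:11
            environment:
              POSTGRES_PASSWORD: changeme
            healthcheck:
              test: ["CMD-SHELL", "pg_isready -U postgres"]
              interval: 5s
              timeout: 3s
              retries: 10
          nextcloud:
            image: nextcloud
            depends_on:
              db:
                condition: service_healthy   # waits for the healthcheck, not just "running"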



    "volumes from" in omv plugin or depends_on (without healthchecks or custom srcipt) in compose can only start containers in the order you define but there is nothing that can guarantee application startup time.

    It will restart the chronograf container after the influxdb container, but there is no guarantee that the application inside influxdb has started before the chronograf application.
    A container is considered started even if the application inside it has not finished starting.
    If the influxdb application (not the container) takes more time to start than the chronograf application ---> error, even though the influxdb container started before chronograf.
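    The other workaround is a small wait script wrapped around the dependent container; a sketch, assuming curl is available in the image, influxdb is reachable under the hostname "influxdb", and it answers on its default /ping endpoint on port 8086:

        #!/bin/sh
        # Block until the influxdb API actually responds, then start chronograf
        until curl -sf http://influxdb:8086/ping; do
          echo "waiting for influxdb..."
          sleep 2
        done
        exec chronograf "$@"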


    More info :
    https://stackoverflow.com/ques…ainer-x-before-starting-y
    https://docs.docker.com/compose/startup-order/


    The Docker OMV plugin does not implement real dependencies between services.

    You can change the docker folder and use a shared folder of your choice in the docker OMV plugin.


    By default, your data will not be saved on your data drive.
    If you create a named volume and use it for postgres, it will store the data in /var/lib/docker on your host OR in the shared folder of your choice (if you configure the plugin to change the default docker folder).
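    You can check where a named volume really lives, for example:

        docker volume create pgdata
        docker volume inspect --format '{{ .Mountpoint }}' pgdata
        # -> /var/lib/docker/volumes/pgdata/_data (or under the shared folder if you changed the docker root)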


    What you should do:


    Bind mounts (also sometimes called volumes) -> map an OMV host folder from your data drive into the container.


    "The -v /my/own/datadir:/var/lib/postgresql/data part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/postgresql/data inside the container, where PostgreSQL by default will write its data files."


    Just read postgres docker image documentation: https://hub.docker.com/_/postgres