Posts by tux1337

I run OMV on Proxmox with physical disks mapped to the OMV VM. I cannot notice any performance difference compared to a bare-metal installation, if that is what you mean by experience.
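In case it helps, this is roughly how the mapping is done on the Proxmox side; the VM ID and the disk ID below are placeholders, adjust them to your setup:

    Code
    # find the stable by-id path of the physical disk
    ls -l /dev/disk/by-id/
    # attach it to VM 100 as an additional SCSI disk
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL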


Before I used Proxmox I had a Debian KVM/libvirt hypervisor. That was also fine, but not as comfortable as Proxmox from a hypervisor point of view.

What are you hoping to accomplish? It doesn't do anything that Samba doesn't. It is just faster (probably) while being less configurable. Unless you have some very fast networks, I don't see much reason to use it. And since it would require the Proxmox kernel or a custom kernel (no 5.15 in backports yet), it wouldn't make sense to have a plugin yet. Most ARM boards don't have a 5.15 kernel either.

From my perspective it would be interesting to see whether the in-kernel implementation can achieve more throughput on old x86 hardware than Samba does nowadays.
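For anyone who wants to experiment anyway, a rough sketch of a minimal ksmbd setup on a 5.15+ kernel; the share name and path are assumptions, not anything OMV would generate:

    Code
    # assumes the ksmbd-tools package is installed and the kernel ships the ksmbd module
    modprobe ksmbd
    cat > /etc/ksmbd/ksmbd.conf <<'EOF'
    [global]
            ; defaults are fine for a quick test
    [data]
            path = /srv/data
            read only = no
    EOF
    ksmbd.adduser -a smbuser   # create a ksmbd user (prompts for a password)
    ksmbd.mountd               # start the userspace daemon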

    Hello all,


I have two systems upgraded from OMV4 to OMV5. Both systems used the backports kernel with OMV4.

I manually removed the backports kernel on both systems.


One system generates the file /etc/apt/preferences.d/openmediavault-kernel-backports.pref when the configuration is applied via the GUI; the other does not.

    I've set the following on both systems:

    Code
    cat /etc/default/openmediavault | grep KERNEL
    OMV_APT_USE_KERNEL_BACKPORTS="NO"
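For completeness, a sketch of forcing the setting and removing the stale file; the sed pattern assumes the variable line already exists in /etc/default/openmediavault:

    Code
    # force the variable to NO in the environment file
    sed -i 's/^OMV_APT_USE_KERNEL_BACKPORTS=.*/OMV_APT_USE_KERNEL_BACKPORTS="NO"/' /etc/default/openmediavault
    # remove the pinning file; the open question is what regenerates it on apply
    rm -f /etc/apt/preferences.d/openmediavault-kernel-backports.pref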


I haven't found the old kernel selection page in the GUI that we had in OMV4.

    Did I miss something? How can I disable the generation of the backports pinning file?


    Thanks a lot.

I had the same issue during the downgrade to kernel 4.14.
I can confirm that the UUIDs of the ZFS filesystems changed.


I fixed the issue by editing the config file stored at /etc/openmediavault/config.xml.


I created a test shared folder in each filesystem to get the new UUID. blkid does not show the ZFS UUIDs, so I decided to go this way.


Search for the tag "sharedfolder"; you will find a construct like this:

    Code
    <sharedfolder>
            <uuid>beefbae3-4fb8-6a47-bac9-64aed8bf57f7</uuid>
            <name>GOLD</name>
            <comment/>
            <mntentref>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</mntentref>
            <reldirpath>/</reldirpath>
            [...]
          </sharedfolder>


Search for the new shared folder and copy the mntentref value; this is the UUID of your ZFS filesystem.
After that, search for the existing old folders and replace their mntentref with the copied new value (a sketch follows).
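A hedged sketch of that replacement step; both UUIDs are placeholders, and note that a global replace also touches the old mntent entry itself, so back up the file first and check the result:

    Code
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
    # swap the old mntentref for the new one everywhere it occurs
    sed -i 's/OLD-MNTENTREF-UUID/NEW-MNTENTREF-UUID/g' /etc/openmediavault/config.xml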


Save the file and confirm the config change in the web interface. (I had an outstanding confirmation.)




There is also a section "mntent" with all your mountpoints and the corresponding UUIDs. Maybe you can get the UUID from this section without adding a new shared folder.


    Good Luck!


    Code
    <mntent>
            <uuid>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</uuid>
            <fsname>data/GOLD</fsname>
            <dir>/data/GOLD</dir>
            <type>zfs</type>
            <opts>rw,relatime,xattr,noacl</opts>
            <freq>0</freq>
            <passno>0</passno>
            <hidden>1</hidden>
          </mntent>
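If xmlstarlet is available, the lookup from the mntent section can also be scripted; a sketch using the dataset name from the example above:

    Code
    # print the mntent UUID for the ZFS dataset data/GOLD
    xmlstarlet sel -t -v "//mntent[fsname='data/GOLD']/uuid" -n /etc/openmediavault/config.xml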

    I just submitted the bug report to Debian. Just have to wait now.

Do you have a link to your bug report in the Debian tracker?


I encountered the same issue on Saturday; I did a downgrade to the 4.14 kernel.


Can anybody answer the following question:
Is there any improvement for the ZFS plugin when running the newer backports kernel instead of the stable kernel?
I would say no.


I was thinking about using the stable kernel instead of the backports one. I have no hardware issues with the stable kernel. Are there any side effects with OMV if I use the stable kernel?

I have the same issue after upgrading from OMV3 to OMV4.


    When I execute
    apt-get update


I receive the same message at the bottom of the command output.


There are now two Python versions installed on my system:

    Code
    # python3.5 -V
    Python 3.5.3
    # python -V
    Python 2.7.13

Does anybody know if it is safe to remove Python 2.7.13 and point the python command to the python3.5 executable? Does OMV4 still need Python 2?
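Not an answer, but a sketch of how to check beforehand which installed packages still depend on Python 2:

    Code
    # list installed reverse dependencies of the Python 2 runtime
    apt-cache rdepends --installed python2.7
    # see which package owns the /usr/bin/python symlink
    dpkg -S /usr/bin/python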

You probably won't get around trying things out yourself; it will be hard for anyone here to tell you exactly what you need.


Just install OMV in a virtual machine (e.g. VirtualBox). Then you can simply try out the plugins and roll back to the previous state via snapshot.
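The snapshot round trip can also be scripted; a sketch assuming the VM is named "omv-test":

    Code
    VBoxManage snapshot "omv-test" take "clean-install"     # save the current state
    # ... try out plugins in the VM ...
    VBoxManage controlvm "omv-test" poweroff                # stop the VM
    VBoxManage snapshot "omv-test" restore "clean-install"  # roll back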

It is the notation that cron jobs use.


The German Wikipedia has a really useful explanation with examples; unfortunately the English one does not.
But maybe you will find another one if you search for crontab or cron jobs.


In your specific configuration, for example the first line,
the job will start on the 1st day of the month (for example 1 December) at 11:26 am; cron uses the 24-hour format.
With the day-of-week field you can do something like "every Sunday", as in the sketch below.
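For illustration, the five fields with two hypothetical entries (the script paths are placeholders):

    Code
    # m   h   dom  mon  dow   command
    26    11  1    *    *     /path/to/job.sh   # 11:26 on the 1st of every month
    0     3   *    *    0     /path/to/job.sh   # 03:00 every Sunday (0 = Sunday)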

I fixed it for myself by changing
/usr/share/openmediavault/mkconf/rsync, line 117:

    Quote from /usr/share/openmediavault/mkconf/rsync Line 117

    -o "echo \"Please wait, syncing <${srcuri}> to <${desturi}> ...\n\"" -n \

    to


    Quote

    -o "echo \"Please wait, syncing <${srcuri}> to <${desturi}> ...\n\" > /dev/null" -n \


This is a dirty hack, not really nice. Hopefully there is a better solution.

    Hello,


I have several rsync jobs with the --quiet option set in the interface. I have also chosen to have the output sent to me via mail whenever any output is generated.


I expected to get a mail only when an error occurs.
Unfortunately, there is a hard-coded output in OMV's rsync script:


    Quote

    echo "Please wait, syncing <rsync://omv@storage.lan:/media/> to </media/e57601e6-f19b-427d-8ffb-88bd0ab135ed/media> ...\n"


Because of this, every run of rsync sends me a mail with this message inside.



Is it possible to suppress this message and achieve the goal of getting only error mails from rsync?
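In the meantime, a possible workaround sketch: a wrapper script that captures all output and mails it only on a non-zero exit code (the mail address is a placeholder; source and destination are taken from the message above):

    Code
    #!/bin/sh
    # run rsync quietly, keep the output, and mail it only on failure
    LOG=$(mktemp)
    if ! rsync --quiet -a rsync://omv@storage.lan:/media/ /media/e57601e6-f19b-427d-8ffb-88bd0ab135ed/media >"$LOG" 2>&1; then
        mail -s "rsync job failed" admin@example.com <"$LOG"
    fi
    rm -f "$LOG"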



    Thank you very much for your reply.