Posts by tux1337

I run OMV on Proxmox with physical disks mapped to the OMV VM; I can't notice any performance difference compared to a bare-metal installation, if that is what you mean by experience.


Before I used Proxmox I had a Debian KVM/libvirt hypervisor. That was also fine, but not as comfortable as Proxmox from a hypervisor point of view.

What are you hoping to accomplish? It doesn't do anything that Samba doesn't. It is just faster (probably) while being less configurable. Unless you have some very fast networks, I don't see much reason to use it. And since it would require the Proxmox kernel or a custom kernel (no 5.15 in backports yet), it wouldn't make sense to have a plugin yet. Most ARM boards don't have a 5.15 kernel either.

From my perspective it would be interesting whether the kernel implementation can achieve more throughput on old x86 hardware than Samba does nowadays.

    Hello all,


I have two systems upgraded from OMV4 to OMV5. Both systems used the backports kernel with OMV4.

I removed the backports kernel manually on both systems.


One system generates the file /etc/apt/preferences.d/openmediavault-kernel-backports.pref when the config is applied in the GUI; the other does not.

    I've set the following on both systems:

    Code
    cat /etc/default/openmediavault | grep KERNEL
    OMV_APT_USE_KERNEL_BACKPORTS="NO"


I haven't found the old kernel selection page in the GUI that we had in OMV4.

    Did I miss something? How can I disable the generation of the backports pinning file?
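For anyone comparing the two systems, a minimal sketch of the check I run on both boxes (the file paths and the variable name are taken from the post above; whether a manually removed pinning file survives the next config apply is exactly the open question):

```shell
#!/bin/sh
# Sketch: verify the backports override and the generated pinning file.
cfg=/etc/default/openmediavault
pref=/etc/apt/preferences.d/openmediavault-kernel-backports.pref

# Is the override set? (grep -qs exits non-zero if file or line is missing)
if grep -qs '^OMV_APT_USE_KERNEL_BACKPORTS="NO"' "$cfg"; then
    override=set
else
    override=missing
fi

# Was the pinning file (re)generated despite the override?
if [ -e "$pref" ]; then state=present; else state=absent; fi
echo "override: $override, pinning file: $state"
```

On the misbehaving system this reports the file as present again right after a config apply.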


    Thanks a lot.

I had the same issue during the downgrade to 4.14.
I can confirm that the UUIDs of the ZFS filesystems changed.


I fixed the issue by editing the config file stored at /etc/openmediavault/config.xml.


I created a test shared folder in each filesystem to get the new UUID. blkid does not show the ZFS UUIDs, so I decided to go this way.


Search for the tag "sharedfolder"; you will find a construct like this:

    Code
    <sharedfolder>
            <uuid>beefbae3-4fb8-6a47-bac9-64aed8bf57f7</uuid>
            <name>GOLD</name>
            <comment/>
            <mntentref>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</mntentref>
            <reldirpath>/</reldirpath>
            [...]
          </sharedfolder>


Search for the new shared folder and copy the mntentref value. This is the UUID of your ZFS filesystem.
After that, search for the existing old folders and replace their mntentref with the copied new value.


Save the file and confirm the config change in the web interface. (I had a pending confirmation.)




There is also a section "mntent" with all your mountpoints and the corresponding UUIDs. Maybe you can get the UUID from this section without adding a new shared folder.


    Good Luck!


    Code
    <mntent>
            <uuid>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</uuid>
            <fsname>data/GOLD</fsname>
            <dir>/data/GOLD</dir>
            <type>zfs</type>
            <opts>rw,relatime,xattr,noacl</opts>
            <freq>0</freq>
            <passno>0</passno>
            <hidden>1</hidden>
          </mntent>
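To save others the manual scrolling, here is a small sketch of that lookup; it just greps the two interesting tags. It is shown here against an inline sample built from the snippet above, not against the live file; on a real system point it at /etc/openmediavault/config.xml (and back that up first):

```shell
#!/bin/sh
# Sketch: pull the <uuid>/<fsname> pairs out of the mntent entries so the
# new ZFS UUID can be looked up without creating a test shared folder.
sample=/tmp/config-sample.xml
cat > "$sample" <<'EOF'
<config>
  <mntent>
    <uuid>2fb14b0b-3961-4d9f-b6f8-52ed1f8e3c1a</uuid>
    <fsname>data/GOLD</fsname>
    <dir>/data/GOLD</dir>
    <type>zfs</type>
  </mntent>
</config>
EOF
# Quick-and-dirty extraction, good enough for a manual lookup:
pairs=$(grep -E '<(uuid|fsname)>' "$sample" | sed 's/^[[:space:]]*//')
echo "$pairs"
```

This is just for reading the pairs; I would still do the actual mntentref replacement by hand in an editor.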

    I just submitted the bug report to Debian. Just have to wait now.

Do you have a link to your bug report in the Debian tracker?


I encountered the same issue on Saturday; I've done a downgrade to the 4.14 kernel.


Is there anybody who can answer the following question:
Is there an improvement for the ZFS plugin with the newer backports kernel instead of the stable kernel?
I would say no.


I was thinking about using the stable kernel instead of the backports one. I have no hardware issues with the stable kernel. Are there any side effects with OMV if I use the stable kernel?

    I have the same issue after upgrading from OMV 3 to OMV4.


When I execute
apt-get update


I receive the same message at the bottom of the output.


There are now two Python versions installed on my system:

    Code
    # python3.5 -V
    Python 3.5.3
    # python -V
    Python 2.7.13

Does anybody know if it is safe to remove Python 2.7.13 and point the python command to the python3.5 executable? Does OMV4 need Python 2?
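One way to check before removing anything, assuming apt is available: ask the package manager which installed packages still depend on the python (2.x) package. This is my own suggestion, not an official OMV procedure; the block is guarded so it also runs where apt-cache is absent or the package is unknown.

```shell
#!/bin/sh
# Sketch: list installed reverse dependencies of the python package
# before removing it. If nothing depends on it, removal is likely safe.
if command -v apt-cache >/dev/null 2>&1; then
    rdeps=$(apt-cache rdepends --installed python 2>/dev/null) \
        || rdeps="package python not found in the package lists"
else
    rdeps="apt-cache not available on this system"
fi
echo "$rdeps"
```

If the output lists openmediavault or one of its plugins, I would leave Python 2 alone.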

You probably won't get around trying it out yourself; it will be hard for anyone here to tell you exactly what you need.


Just install OMV in a virtual machine (e.g. VirtualBox). Then you can simply try out the plugins and roll back to the previous state with a snapshot.

It is the notation that cron jobs use.


The German Wikipedia has a really useful explanation with examples; unfortunately it is not available in the English one.
But maybe you will find another one when you search for crontab or cron jobs.


In your specific configuration, for example the first line,
it will start on the 1st day of the month (for example 1 December) at 11:26 am; crontab uses the 24-hour format.
With the day-of-week field you can do something like "every Sunday".
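To illustrate with concrete crontab lines (the field order is minute, hour, day of month, month, day of week; the script path is just a placeholder):

```text
# m   h   dom mon dow  command
26   11   1   *   *    /path/to/job.sh   # 11:26 on the 1st of every month
0    3    *   *   0    /path/to/job.sh   # 03:00 every Sunday (0 = Sunday)
```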

I fixed it for myself by changing line 117 of
/usr/share/openmediavault/mkconf/rsync:

    Quote from /usr/share/openmediavault/mkconf/rsync Line 117

    -o "echo \"Please wait, syncing <${srcuri}> to <${desturi}> ...\n\"" -n \

    to


    Quote

    -o "echo \"Please wait, syncing <${srcuri}> to <${desturi}> ...\n\" > /dev/null" -n \


This is a dirty hack, not really nice. Hopefully there is a better solution.

    Hello,


I have several rsync jobs with the --quiet option set in the interface. I've also chosen to have the output sent to me via mail whenever some output is generated.


I expected to get a mail only when an error occurs.
Unfortunately, there is a hard-coded output in the rsync script of OMV:


    Quote

    echo "Please wait, syncing <rsync://omv@storage.lan:/media/> to </media/e57601e6-f19b-427d-8ffb-88bd0ab135ed/media> ...\n"


Because of this I get a mail on every rsync run, with this message inside.



Is it possible to suppress this message to achieve the goal of getting only error mails from rsync?



    Thank you very much for your reply.

    Just out of curiosity, why are you encrypting the OS?

On my root drive there are key files for the data encryption that is used to make an encrypted backup "to the cloud", so it makes sense to encrypt the root drive in this case.



    May I suggest a solution?


It would solve the problem for the various configurations if you could add a new field to the openmediavault-backup interface in which the administrator can choose the drive for the MBR.
The field could be prefilled by your existing algorithm for detecting the root drive, but could be overridden by the administrator if it doesn't fit.


    What do you mean?

    Hi guys,


I have the same issue with another configuration.
My system is fully encrypted with LUKS (root partition included).


When I try to make a backup with the openmediavault-backup extension, the rsync part finishes successfully. Then dd of the root partition throws the error:

    Code
    Root drive: dd: failed to open '': No such file or directory


The issue is maybe the same: the root partition is not recognized correctly.



    Configuration:


MBR: /dev/vda
Encryption partition (LUKS) for root: /dev/vda1
Root partition: /dev/mapper/vda1-crypt
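For what it's worth, a sketch of the mapping the plugin would have to do on my layout. The lsblk calls in the comments are one way it could be done live; the string handling below only demonstrates the last step on the device names from this post:

```shell
#!/bin/sh
# Hypothetical sketch: walk from the dm-crypt root device back to the
# disk that holds the MBR. Device names mirror the layout above.
root_mapper=/dev/mapper/vda1-crypt

# On a live system the parent chain could be queried, for example:
#   lsblk -n -o PKNAME "$root_mapper"   # -> vda1 (the LUKS partition)
#   lsblk -n -o PKNAME /dev/vda1        # -> vda  (the disk for dd)
# Here we only show the final step on the known names. Stripping the
# trailing digits is naive (it would break on nvme0n1p1); real code
# should rely on lsblk's parent information instead.
luks_part=vda1
mbr_disk=$(printf '%s' "$luks_part" | sed 's/[0-9]*$//')
echo "dd target would be: /dev/$mbr_disk"
```

The point is simply that the plugin cannot take the root filesystem's device as the MBR disk when root lives on a mapper device.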


    blkid:


    Code
    /dev/mapper/vda1-crypt: UUID="f674036b-aa8e-49c5-8912-87dfdf35e448" TYPE="ext4"
    /dev/vda1: UUID="f4c663a2-1f50-48b8-99bb-a187c8fce390" TYPE="crypto_LUKS" PARTUUID="c0e211b5-01"
    /dev/vda5: UUID="4302a071-6ba4-4ab8-83c6-a08d680c00c8" TYPE="ext4" PARTUUID="c0e211b5-05"
    /dev/vdb: UUID="fa683cb3-523a-4c78-a509-035e74581fcd" TYPE="crypto_LUKS"
    /dev/sr0: UUID="2017-10-07-13-20-04-00" LABEL="d-live 9.2.0 gn amd64" TYPE="iso9660" PTUUID="10969f64" PTTYPE="dos"
    /dev/vdc: UUID="b52e55e9-a17b-49eb-9d1e-8772c4ab7e95" TYPE="crypto_LUKS"
    /dev/sda: UUID="047f12f2-5b37-4366-8bf4-0712c02e9d44" TYPE="crypto_LUKS"
    /dev/mapper/sda-crypt: LABEL="backup" UUID="5844538795724956145" UUID_SUB="7077405565985440678" TYPE="zfs_member"
    /dev/mapper/vdb-crypt: LABEL="data" UUID="15441237811974705261" UUID_SUB="12513600626665205360" TYPE="zfs_member"
    /dev/mapper/vdc-crypt: LABEL="data" UUID="15441237811974705261" UUID_SUB="9309010814859507645" TYPE="zfs_member"



Is this a valid configuration you want to support? It would be nice if you could support encrypted root partitions.


    Thank you for your support.