Posts by ala.frosty

    The "current stable" version of OMV is 6.0.24, but the newest release is actually 6.3.0-2 and the improvements in the GUI speed that I've experienced are simply stunning! I have a LOT of drives and shares and the GUI used to take a lot of time to build pages but not any more. I'm not sure whether smaller installations will notice the change quite as much, but for me, I'd say that the GUI interface is 3x - 4x faster than it was.


    I'm just a user, and not a dev, but I felt like this sort of dramatic improvement merited an announcement, so I decided to post one.

    NTFS disks are not welcome on Linux; please consider using an ext4 disk instead.

    I've used ZFS to create block devices without file systems on them, then used iSCSI to share the dataset with Windows. Windows sees the iSCSI device as a blank drive and can format it as an NTFS drive. ZFS doesn't see the NTFS data structures created inside the dataset, and it doesn't care.
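    For anyone who wants to try it, the ZFS half is a one-liner; the pool name (tank) and volume name (win-lun) below are just placeholders for whatever you use:

    Code
    # Create a 100 GiB block device (zvol) with no filesystem on it
    zfs create -V 100G tank/win-lun
    # The raw device appears under /dev/zvol/ and can be handed to an iSCSI target
    ls -l /dev/zvol/tank/win-lun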

    Run systemctl unmask proftpd.service before installing the plugin. This is strange, because openmediavault 6.2.0 is doing that during installation.

    I suspect that OMV 6.2.0 is *supposed* to be doing that during installation, but after running the aforementioned unmask, omv-upgrade worked properly, so something is awry.

    Also, at the end of the upgrade, my system is still at 6.2.0-2.
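    For anyone hitting the same wall, the workaround boiled down to something like this (the plugin package name here is my assumption; install whatever the System-Plugins page actually offers):

    Code
    systemctl unmask proftpd.service     # the unit arrives masked, so unmask it first
    apt-get install openmediavault-ftp   # assumed plugin package name
    omv-upgrade                          # the upgrade then completes normally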

    Addendum: After some clickety-clickety in the GUI, everything feels considerably faster. I don't have a clue whether I'm just imagining it or not, but there seems to be a performance improvement.

    FTP disappeared from my GUI today after an apt update, and ProFTP now shows up as an available System-Plugin. The bottom line is: FTP under OMV6 is not working right now. I have rebooted several times to no avail.

    Oddly, my OMV6 version is 6.2.0-2 (Shaitan), and I don't think that I can get to 6.3 because the ProFTP install component is broken.


    Pro-FTP will not install and omv-upgrade generates these errors:


    After a few hours fuming at the back of the struggle bus, I was able to get iSCSI shares working using ZFS datasets by following this guide.

    Once it's configured, it's terrific. While the currently implemented OMV LVM solution for tgt targets works, ZFS provides a lot more features, including snapshots, RAID-Z, on-the-fly compression and encryption. The ZFS target that I created does not appear in the web GUI but does appear in the CLI with # tgtadm. I think it would be pretty great to be able to add ZFS iSCSI shares right from the web GUI, although I suppose it would be useful to know how many others would find this feature helpful.
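    For the record, the CLI side with tgt looks roughly like this; the IQN, target ID and zvol path are placeholders rather than my exact config, and plain tgtadm changes don't survive a reboot on their own:

    Code
    tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2023-01.local.omv:zfs-lun
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/zvol/tank/win-lun
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
    tgtadm --lld iscsi --mode target --op show    # this is where the ZFS-backed target shows up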

    In my use case, I'm running OMV in a VM on Proxmox. I have Proxmox hand a SAS HBA controller over to the OMV VM and run all my network shares through it. I have other Proxmox VMs and CTs for various other tasks. In this configuration, I can (theoretically) lift the HBA controller and drives out of the Proxmox box and use them in either a bare-metal OMV box or another Proxmox box, or, for testing purposes, shut down one VM and pass the HBA through to another VM, with the data configurations all readable directly from the disks with ZFS.
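    The passthrough itself is a one-liner on the Proxmox host; the VM ID and PCI address below are examples, so look yours up first:

    Code
    lspci | grep -i -E 'sas|lsi'         # find the HBA's PCI address
    qm set 100 --hostpci0 0000:03:00.0   # hand the whole controller to VM 100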

    I'm having almost exactly the same trouble, except that for me, after I do the undo, even the web GUI won't let me make changes, throwing a 500 - internal server error. And the error message is massive! I'm running a Proxmox kernel. The error message is too large to include here due to the 10,000 character limit!



    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color samba 2>&1' with exit code '1': debian:

    Portability! This practice makes the drives and HBA portable between computers. I can pick up the HBA and/or the drive set and drop them into a bare-metal server running Debian and read everything on the drives, or boot from a Debian USB flash stick on the same box and read everything for troubleshooting. And yes, I've done this on multiple occasions when I've upgraded my hardware, OS drives, etc. Maybe it's possible to configure this same sort of thing with virtual drives, but I don't know how to do it reliably.
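    When the drives land in the new box, the pool comes along with them; roughly (the pool name is an example):

    Code
    zpool import           # lists any pools found on the attached disks
    zpool import -f tank   # import by name; -f if it was last imported on another host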

    I discovered this same bug today. I created a new dataset in an existing pool, and then all the other datasets in the ZFS pool automagically changed from using the pool to using the newly added dataset as their device.

    Note: This is NOT an installation bug report; it's a bug in the ZFS plugin GUI. I will file a report on GitHub, as this does not appear to be an open issue.


    Code
    # dpkg -l | grep openmediavault
    ii  openmediavault                 6.0.42-2                       all          openmediavault - The open network attached storage solution
    ii  openmediavault-kernel          6.3.3                          all          kernel package
    ii  openmediavault-keyring         1.0                            all          GnuPG archive keys of the OpenMediaVault archive
    ii  openmediavault-omvextrasorg    6.1.1                          all          OMV-Extras.org Package Repositories for OpenMediaVault
    ii  openmediavault-resetperms      6.0.2                          all          Reset Permissions
    ii  openmediavault-zfs             6.0.11                         amd64        OpenMediaVault plugin for ZFS
    
    # uname -a
    Linux OMVserver 5.15.53-1-pve #1 SMP PVE 5.15.53-1 (Fri, 26 Aug 2022 16:53:52 +0200) x86_64 GNU/Linux

    Speed update: It's slow!


    I've got six SATA-III drives connected to it using ZFS, and the scrub is running at 35.2 MB/s.

    For contrast, my SAS drives (6 Gb/s) are also doing a scrub, and they're seeing throughput of 126 MB/s, which is roughly 3.5 times faster. So the card works, but I may send it back anyway and use my SAS controller to access the SATA drives. For similarly sized ZFS pools, the difference in scrub time is about 8 hours versus around 40 hours, roughly five times!
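    If you want to watch this on your own system, the scrub rate and per-device throughput are visible from the CLI (pool name is an example):

    Code
    zpool status tank        # scrub progress and scan rate
    zpool iostat -v tank 5   # live per-vdev throughput, refreshed every 5 seconds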

    I ran some I/O tests (fio, below) on the SAS ZFS pool and the SATA ZFS pool, and the SAS pool is about twice as fast. This kinda makes sense, as SAS is bidirectional (full duplex) while SATA is not.


    Code
    # fio --loops=5 --size=1000m --filename=/mnt/test_drive/fiotest.tmp --stonewall \
        --ioengine=libaio --direct=1 \
        --name=Seqread --bs=1m --rw=read \
        --name=Seqwrite --bs=1m --rw=write \
        --name=512Kread --bs=512k --rw=randread \
        --name=512Kwrite --bs=512k --rw=randwrite \
        --name=4kQD32read --bs=4k --iodepth=32 --rw=randread \
        --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite \
        > ~/test.Sata-controller.txt &

    Note that the above test measures read and write speed against the file fiotest.tmp at whatever location you select. So mount your filesystem under /mnt/, set the directory appropriately, and then run the command to see how your system is doing.

    My LSI 9201-16i decided to take a dirt nap, necessitating a drive controller purchase.

    I went on Amazon and found this 8-port drive controller for $35: PUSOKEI PCI-E to SATA 3.0 Card, 8-Port SATA3.0 Interface Expansion Card.

    It works right out of the box, and the boot ROM properly detected all my drives during POST. Debian instantly recognized the controller and all my ZFS drives. I'm running OMV6 in a Proxmox 7.2-3 VM; Proxmox runs on Debian, so that was the first test the controller passed. With that working, I set about passing the entire HBA controller from Proxmox into the OMV instance using passthrough, and it also works perfectly, with ZFS reading and importing all the drives.
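    If it helps anyone reproduce this, the sanity checks inside the OMV VM were along these lines:

    Code
    lspci | grep -i sata              # the passed-through controller should be listed
    lsblk -o NAME,SIZE,MODEL,SERIAL   # and the attached drives visible as block devices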

    I haven't run any speed tests on it just yet, but it seems okay so far, with about 6 hours of use. I'm moving a bunch of files around and will test its speed later. And if you don't care about speed, then it's simply a very reasonably priced controller. I also found a 10-port controller for almost double the price, but I think I'd rather purchase two of these 8-port controllers if I really needed more than 8 storage drives.

    I have no financial interest in the sale of these cards. It worked for me on an old dual-Xeon X58 BIOS mobo, and I figure this info might be useful to some of you.


    My ZFS installation got borked by an update while backports was not enabled on my system, and it stayed borked when I enabled backports. I tried flipping back and forth and a variety of other things last night; backports is now off again. Today, I uninstalled openmediavault-zfs, installed it again, and after rebooting everything is back to working.
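    In case it saves someone else a night of flailing, the fix amounted to this (the CLI equivalent of what I clicked through; package name as reported by dpkg on my box):

    Code
    apt-get remove openmediavault-zfs
    apt-get install openmediavault-zfs
    reboot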

    Whatever dependency changes were added, or dependency conflicts resolved, worked. Thank you.

    I have no idea whether this will help or not. I attempted to update zfs-dkms and encountered the error "Bad return status for module build on Kernel: 5.4.98-1-pve".


    That appears to be caused by dkms 2.03, but 2.02 works.


    Where did you get 2.02? My zfs-dkms is at 0.7.12!!!!

    After all the messing about, I only have two options under "Update Management -> Install from updates": "Pre-release" and "Community-maintained". Backports is now AWOL.

    Code
    ~# apt list zfs-dkms
    zfs-dkms/stable 0.7.12-2+deb10u2 all [residual-config]
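    To see where the module build actually fell over, dkms itself is the place to check (these are just the generic commands, not output from my box):

    Code
    dkms status                                             # which module versions built against which kernels
    apt list --installed 2>/dev/null | grep -E 'zfs|dkms'   # what's actually installed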

    As I suspected, your disk is 100 GiB but your partition is still 50 GiB:

    Code
    GPT PMBR size mismatch (104857599 != 209715199) will be corrected by w(rite).
    Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 39D43581-36CA-42A2-B921-3BCE200E8619
    Device     Start       End   Sectors Size Type
    /dev/sdb1   2048 104857566 104855519  50G Linux filesystem


    Refer back to my post #6 and use parted to resize /dev/sdb1. Once you've done that, return here and repost the "fdisk -l" output.


    You want the new end of your partition to be:

    Code
    resizepart 1 209715200s

    Because that's where fdisk reports the disk ends.
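    A fuller, hedged sketch of the resize, assuming partition 1 on /dev/sdb with an ext4 filesystem on it (adjust for your layout); parted's 100% shorthand lets it pick the last usable sector ahead of the backup GPT for you:

    Code
    # parted /dev/sdb
    (parted) print                # parted will offer to fix the GPT to use the whole disk
    (parted) resizepart 1 100%    # or give an explicit end sector
    (parted) quit
    # resize2fs /dev/sdb1         # then grow the filesystem to fill the partition (ext4 shown)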