Posts by wultyc

    Hi, I have a system with OMV 6 and the ZFS plugin.

    Today I logged into the system and it reported the pool status as degraded.


    I looked in the CLI for the reason and it shows that I have some errors in some files.


    But I can't tell whether this is an error at the file level or at the pool/drive level, and I don't feel confident leaving things this way.



    I tend to think it's a file-level issue since the S.M.A.R.T. data looks good, but I would like some help to confirm this.
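    A quick way to distinguish the two cases is the verbose status output: file-level corruption is listed as paths under "errors:", while a failing drive shows up in the vdev table as FAULTED/UNAVAIL or with non-zero READ/WRITE/CKSUM counters. A minimal sketch of reading such output (the status text below is illustrative, not from this system; on the real box you would run "sudo zpool status -v nas-data"):

```shell
# Illustrative excerpt of 'zpool status -v' output (made-up file names):
status='  pool: nas-data
 state: DEGRADED
errors: Permanent errors have been detected in the following files:
        /nas-data/photos/img001.jpg
        /nas-data/docs/report.pdf'

# Indented paths under "errors:" mean file-level (data) corruption;
# a FAULTED or UNAVAIL device line would instead point at the drive.
echo "$status" | grep -c '^        /'
```

After restoring the affected files from backup, clearing the errors with "sudo zpool clear" and re-running a scrub is the usual way to re-verify; that is general ZFS practice, not something specific to this system.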


    Thanks in advance

    So sudo zfs set mountpoint=/srv/nas-data nas-data works, but with sudo zfs set mountpoint=/srv/nas-data-zpool nas-data it displays as false under File Systems

    It shows false as well


    I might use the workaround of keeping the default mount point. It's not a big deal for me

    Thanks guys

    From what I understood, this is the same as using -m in the create pool command


    I also created everything from scratch to test this, and the result is the same

    geaves I used this one and the result was the same: zpool create -m /srv/nas-data nas-data raidz /dev/sda /dev/sdc /dev/sdd

    chente, setting the mountpoint had the same result: it started showing the pool and then shows false

    Code
    # sudo zpool create nas-data raidz /dev/sda /dev/sdc /dev/sdd
    
    # sudo zfs get mountpoint nas-data
    NAME      PROPERTY    VALUE       SOURCE
    nas-data  mountpoint  /nas-data   default
    Code
    # sudo zfs set mountpoint=/srv/nas-data-zpool nas-data
    
    # sudo zfs get mountpoint nas-data
    NAME      PROPERTY    VALUE                SOURCE
    nas-data  mountpoint  /srv/nas-data-zpool  local


    But if I set it back to the original value, it shows again

    Code
    # sudo zfs set mountpoint=/nas-data nas-data


    It isn't that big of an issue for me, tbh; I was simply trying to have all mount points in the same folder

    Ok strange

    Well, I destroyed the pool and created a new one without specifying the mount point, and it appears


    But when I set the mount point, it shows as false


    I think this is where it has gone wrong: installing the sharerootfs plugin allows the name of the pool to be placed in the root of OMV, so what I did (as a test on a VM) was

    Yes, sure. My intention was to mount my zpool on /srv/nas-data-zpool. My question was about the File Systems tab showing false for the device. I understand the File Systems tab may not be ready for ZFS yet.


    But on the ZFS tab everything looks fine

    Hi guys, quick question about ZFS. Does RaidZ (the equivalent of RAID5) need to do a parity sync the same way RAID5 does?


    Yesterday I installed OMV 6 on my home server and decided to use ZFS. I installed OMV Extras and the Kernel plugin, then installed the Proxmox kernel


    Then I created the Zpool



    In the UI, I've imported the pool and it seems ready to be used, but I thought I needed to do a parity sync.
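    On the parity question: unlike md-RAID5, RaidZ does not pre-initialize parity. ZFS is copy-on-write and computes the parity for each stripe as it is written, so a freshly created pool is usable immediately, with no sync pass. For single parity (RAIDZ1) the parity is the familiar XOR over the stripe's data blocks; a toy sketch with made-up byte values:

```shell
# RaidZ writes parity together with the data (copy-on-write), so a new
# pool needs no initial parity sync. RAIDZ1 parity is an XOR across the
# stripe, as in RAID5. Toy example with made-up byte values:
d1=170   # 10101010
d2=204   # 11001100
p=$(( d1 ^ d2 ))              # parity written alongside the data
echo "parity=$p"
# Reconstructing d1 after losing it, from parity + surviving block:
echo "recovered_d1=$(( p ^ d2 ))"
```

This is only a conceptual illustration; real RAIDZ stripes are variable-width and laid out by ZFS itself.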




    Another question, about the way it shows in the File Systems tab. Is it supposed to show false for the device?

    ryecoaaron thanks for your input and many many many thanks for your work on all the plugins you've been supporting/developing.

    I'll take your feedback into consideration. TBH, OMV was my first option, but the KVM and storage questions made me go test unRaid.


    I'll look into ZFS on OMV 6 in this forum to get more familiar with the topic. Quick question: once the Proxmox kernel is installed on OMV, do I need to take any special care with updates (e.g. to avoid installing the Debian kernel again)?
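    On the update question, a low-effort sanity check after each round of updates is to confirm which kernel is actually running. This is only a sketch of the idea, not an official procedure, and it assumes the Proxmox kernel was installed through the openmediavault-kernel plugin:

```shell
# The running kernel version string should contain "pve" when the
# Proxmox kernel is active:
uname -r

# List installed kernel packages to spot a Debian kernel creeping back
# in ('|| true' so an empty match is not treated as an error):
dpkg -l 2>/dev/null | grep -E 'pve-kernel|linux-image' || true
```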

    Hi everyone

    I'm looking for your advice regarding a server build I have. First of all, I'll explain my current setup


    Server 1

    • HP Microserver Gen8
    • Intel Xeon E3-1240 V2
      • 4C/8T @ 3.4 GHz
    • 16 GB ECC unbuffered DDR3
    • Storage
      • 4x 3.5" bays (3x 2 TB drives + 1x empty)
      • 1 ODD bay with a 2.5" drive caddy
      • 1 internal USB 2.0 port
    • I'm running unRaid trial on this machine
    • I use it for NAS, run some docker containers and a couple of VMs

    Server 2

    • Zotac Zbox ID86
    • Intel Atom D2550
      • 2C/4T @ 1.86GHz
    • 4 GB DDR3
    • Storage
      • 1 internal SATA port
      • 1x external HDD enclosure for 2x 2 TB 3.5" hard drives
    • I'm running Debian 11 with OMV 6 on this machine
    • I use it only to make a local backup of Server 1
    • It was my main NAS system before I needed to run the VMs I'm running now


    I was looking for the best OS for Server 1 (Server 2 is running OMV and I'm more than happy with it).


    My requirements are:

    1. Easy to use and setup
    2. Being able to create multiple users (everyone at my home)
    3. Support for SMB, NFS and rsync (this one is so Server 2 can pull the files for backup on a schedule)
    4. Support for Docker
    5. Support for VMs
      • I'd like this to be QEMU/KVM, both for more advanced features such as hardware passthrough and for performance
    6. Support for native apps (like CUPS e.g.)
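    For requirement 3, the pull-style backup can be a plain cron entry on Server 2 driving rsync; a hedged sketch of what that might look like (the hostname, rsync module name and paths below are placeholders, not taken from the actual setup):

```
# /etc/cron.d/pull-backup on Server 2 (illustrative)
# m h dom mon dow user command
30 2 * * * root rsync -a --delete server1.local::nas/ /srv/backup/nas/
```

OMV's own Rsync Tasks page can generate an equivalent scheduled pull job from the UI.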


    At the moment I have the following options

    1. keep unRaid
      • I have no issue with the unRaid file system, that works for me
      • I don't really like the way Docker containers are managed. I installed Portainer, but when I create a new container through Portainer, even passing the argument restart: unless-stopped, unRaid seems to ignore it in favor of its own UI option AUTOSTART
      • It doesn't support native apps. I have my CUPS server running on Server 2
    2. TrueNAS
      • I aimed for this option because of the native ZFS support, but
        • TrueNAS Core is based on BSD, so the VMs are not KVM-based
        • TrueNAS Scale refuses to boot from USB when installed on an SSD connected to the internal USB port
          • I want this OS installed on an SSD connected to the USB 2.0 port not for the speed, but because SSDs are more resilient to repeated writes than USB flash drives, and I don't want to lose one of the 5 drive slots I have
          • Besides the SSD-on-USB issue, I read online that TrueNAS Scale's VM support wasn't yet the best for more advanced features. I don't have any specific workload at the moment, but I want this build to be somewhat future-proof
    3. Proxmox
      • I thought of this option because I could create a ZFS pool and then pass a volume through to OMV to use as a drive, this way having my NAS as a VM
      • Proxmox refuses to boot from USB when installed on an SSD connected to the internal USB port
      • I want this OS installed on an SSD connected to the USB 2.0 port not for the speed, but because SSDs are more resilient to repeated writes than USB flash drives, and I don't want to lose one of the 5 drive slots I have
    4. Debian (or Ubuntu, RHEL, etc...) + Cockpit + plugins (ZFS, VMS, docker, smb, nfs, etc...)
      • TBH I don't have any hint on this. I've never used cockpit this way.
    5. OMV 6
      • My backup server is running OMV 6 and I haven't had any problems
      • On this I have some questions/concerns
        • Is it best to use ZFS + Proxmox kernel, or MergerFS + SnapRAID?
          • If ZFS, I saw here that the plugin for OMV 6 has a simple UI, and in the comments that it is only for "read" and config changes need to be done via the CLI
          • If MergerFS + SnapRAID, how frequently should I run the SnapRAID parity sync?
        • Is the KVM plugin stable for day-to-day use?
        • I don't have any other box where I can test
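    On the SnapRAID frequency sub-question: there is no single right answer, but a common pattern is a nightly "snapraid sync" plus a periodic partial "snapraid scrub". The schedule below is purely illustrative (the times and scrub percentage are arbitrary choices, and many people use a wrapper script such as snapraid-runner instead of raw cron entries):

```
# /etc/cron.d/snapraid (illustrative schedule)
# nightly parity update
0 3 * * * root snapraid sync
# weekly scrub of ~12% of the array, checking blocks older than 30 days
0 5 * * 0 root snapraid scrub -p 12 -o 30
```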

    I know it's a very long post, but I think it explains my dilemma well, and I hope you can help me with it.


    Quote

    DISCLAIMER

    With this post, I don't intend by any means to criticize the volunteers who help develop OMV. It's amazing software and I like it very much, but at this moment I need to understand whether it's the ideal tool for the job I have for it, as we don't want to drive a nail with a screwdriver

    Hi everyone

    I'm testing OMV 6 and I found that creating S.M.A.R.T. scheduled tasks on OMV 6 only works for hours from 10:00 onward, because the regex used requires a two-digit hour, but from midnight to 9 AM the hour has only one digit.
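    The behaviour described is consistent with a validation pattern that insists on a two-digit hour; the small demo below illustrates the difference (both patterns are hypothetical stand-ins, not the actual regex used by OMV):

```shell
strict='^[0-9]{2}:[0-9]{2}$'     # two-digit hour only: rejects 7:30
relaxed='^[0-9]{1,2}:[0-9]{2}$'  # one- or two-digit hour: accepts both

echo "7:30"  | grep -Eq "$strict"  || echo "7:30 rejected by strict pattern"
echo "7:30"  | grep -Eq "$relaxed" && echo "7:30 accepted by relaxed pattern"
echo "13:30" | grep -Eq "$strict"  && echo "13:30 accepted by strict pattern"
```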


    forum.openmediavault.org/wsc/index.php?attachment/20274/


    The issue only happens with this specific module. The Cron Jobs are totally fine.


    Does the dev team read this forum, or is there another place to share the issue?


    Regards,