Posts by Krisbee

    That error message is telling you that the "showmount" command only works for NFSv3 exports. But as you have both version 3 and 4 enabled in OMV you shouldn't be seeing it unless there's something else wrong with your config.


    I don't have a problem getting a basic NFSv3 share working on OMV7 using the WebUI and the default options. E.g:



    If you've been changing config files directly on OMV you could have introduced an error. If you've never got a list out of "showmount", do you have a blocking firewall rule?
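    A quick client-side sanity check looks something like this (a sketch only, with a hypothetical server IP and mount point):


    Code
    # List NFSv3 exports from a client - this is what "showmount" reports:
    showmount -e 192.168.1.100
    # For NFSv4 you can skip showmount and mount the pseudo-root directly to see what is exported:
    sudo mount -t nfs4 192.168.1.100:/ /mnt/nfs-test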

    Things to keep in mind:


    1. The docs you quoted are clearly out of date and refer to early releases of OMV6.

    2. In normal use OMV only mounts the top-level subvolid=5 of a BTRFS filesystem; it does not mount any child subvols separately.

    3. Any NEW shared folder created on a BTRFS filesystem is created as a child subvol, e.g:
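    A quick way to see this for yourself (a sketch, with a hypothetical OMV mount path):

    Code
    # List the subvolumes of the mounted BTRFS filesystem:
    btrfs subvolume list /srv/dev-disk-by-uuid-xxxx
    # A shared folder created via the WebUI typically shows up as a child subvol, along the lines of:
    # ID 256 gen 42 top level 5 path myshare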




    In theory yes, assign both ports of the NIC or none. Otherwise, if the 82576-based Intel NIC is supported in Linux by the "igb" driver, maybe a bit of wrangling could get SR-IOV to work on it. See here, for example.

    The ports on your NICs are in pairs in the same IOMMU group, which IIRC means you can't pass them through individually to a VM. But you can still bridge them individually to a VM. This thread should clarify things for you.
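    If you want to check the grouping yourself, this common loop over standard sysfs paths lists every PCI device by IOMMU group (nothing OMV-specific here):

    Code
    # Print each PCI device together with the IOMMU group it belongs to:
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#*/iommu_groups/}; g=${g%%/*}
        printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
    done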

    geaves We don't know the OP's MD array state immediately post power loss, nor do we know what "tinkering" led to the current active MD array state with 13 drives in the array. One or more drives could have been kicked out of the array, including either of the drives marked as spares.


    You are right to say the OP should execute omv-salt deploy run initramfs mdadm to restore OMV to an internally consistent state, but beyond that, going from a 13-disk array back to 11 disks with two spares is risky.

    This command here should resolve the issue

    Are you sure? What was an array with 11 disks and two spares is now an array of 13 disks, i.e. effectively grown from 11 to 13 disks, but seemingly the filesystem size has not been grown. Surely to revert to 11 disks with two spares you'd have to fault and remove two drives, shrink the array back to 11 disks, and then re-add the two previously removed drives as spares. A risky procedure.

    :/ don't you have to create/update the conf file first then run update-initramfs -u ?

    I'd agree if you were doing this all manually. But omv-salt is smart enough to get the order correct. For example:
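    For comparison, the manual sequence that omv-salt effectively automates would look something like this (a sketch only; the append is illustrative, so check mdadm.conf for duplicate ARRAY lines afterwards):

    Code
    # 1. Record the currently assembled array in mdadm's config first:
    mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
    # 2. Only then rebuild the initramfs so it picks up the updated mdadm.conf:
    update-initramfs -u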


    It might boot, but if the RAID was created in OMV it could very well be inactive, with the filesystem not mounted and the data not accessible. I say this because it sounds like the OP made a HDD swap without first removing the defective drive from the MD RAID from within OMV.

    @linuxgurugamer The answer's in my previous post. Look at the link: you need to add the first two lines under the "Home directories" extra options, using your own absolute path to your "home" shared folder.


    Public shares are about guest user access, so I'm not sure what you want here. A common share that all users have access to is an alternative.


    I looked again at wsdd. I'm not sure why you would want to change the name advertised to SAMBA clients from the OMV hostname to something else. It can be done, but services are also advertised by AVAHI, so there would be a mismatch.


    For wsdd, the details are as follows:


    1. systemctl cat wsdd:



    2. cat /etc/default/wsdd:


    Code
    root@omv-base-vm2:/etc/default# cat /etc/default/wsdd
    # This file is auto-generated by openmediavault (https://www.openmediavault.org)
    # WARNING: Do not edit this file, your changes will get lost.
    WSDD_PARAMS="--workgroup='WORKGROUP' "


    3. To override/change the advertised name, add the --hostname='xxxxx' option to the wsdd params. But you don't want to edit the /etc/default/wsdd file; you need to edit the systemd wsdd service to point to another "EnvironmentFile". So, for example, (a) create a local wsdd param file:


    Code
    root@omv-base-vm2:/etc/default# cat /root/wsdd.conf
    WSDD_PARAMS="--workgroup='WORKGROUP' --hostname='OMVSAMBA'"
    root@omv-base-vm2:/etc/default#

    (b) use systemctl edit wsdd to create a systemd service override:
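    The override itself only needs to swap the EnvironmentFile (a sketch, assuming the stock unit reads WSDD_PARAMS from /etc/default/wsdd as shown above; the empty assignment clears the original setting before the new file is added):

    Code
    [Service]
    EnvironmentFile=
    EnvironmentFile=/root/wsdd.conf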



    4. Restart the wsdd service with systemctl restart wsdd and check the process is now running with the new params (using ps, htop or whatever):


    Code
    root@omv-base-vm2:/etc/default# ps auxxx | grep -m 1 wsdd 
    wsdd       25446  0.0  3.1  37564 28924 ?        Ss   09:29   0:00 python3 /usr/sbin/wsdd --shortlog --chroot=/run/wsdd --workgroup=WORKGROUP --hostname=OMVSAMBA
    root@omv-base-vm2:/etc/default#


    New name as shown in Dolphin File Manager; note the mismatch with the AVAHI service name.


    linuxgurugamer Your first post neither lists nor includes screenshots of your SAMBA settings in OMV, so I couldn't say if you have a config error. Here's an old example of mine for using the SAMBA home directories:



    OMV is using wsdd to advertise the SAMBA server to Windows clients and defaults to using the OMV hostname. I don't know if you can override the hostname by using a separate conf file in something like /etc/default/wsdd/wsdd.conf.d/hostname.conf

    barje Glad to see you found the solution. Votdev regards ZFS as a 3rd-party plugin and prefers BTRFS, so the first response was not surprising. But to be told to scrap what you'd done because you don't have the "know-how" was very poor advice when you'd obviously put the work into getting to grips with using ZFS in OMV and probably only had to sort out perms (or possibly over-mounting folders under the ZFS pool root with later datasets of the same name). Let's hope you get a more positive response next time you come to the forum with questions.

    Thanks.


    Created btrfs file system at cli. This was 6 years ago. I have smart monitoring on all disks and keep an eye on them.


    Apart from regular btrfs "device stats" checks and "scrubs", you don't want your BTRFS fs to get too out of balance across however many drives are in your RAID0, nor for metadata space to run out. A filesystem that flips to read-only and/or hits an "out of space" (ENOSPC) condition can be awkward to fix. In OMV, whether or not to run any kind of filtered BTRFS balance is left to the individual user to figure out.
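    For example, a lightly filtered balance alongside the routine checks mentioned above (a sketch; the usage thresholds and mount path are placeholders to adjust for your own setup):

    Code
    # Routine health checks:
    btrfs device stats /srv/dev-disk-by-uuid-xxxx
    btrfs scrub start /srv/dev-disk-by-uuid-xxxx
    # Filtered balance: only rewrite data/metadata chunks that are less than 50% full:
    btrfs balance start -dusage=50 -musage=50 /srv/dev-disk-by-uuid-xxxx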


    Obviously in your case if you suffer a single disk failure on both your main server and backup server then you're in trouble. But these things are always a case of weighing risks against costs and determining if data loss is just an inconvenience or a disaster.


    Keeping your BTRFS healthy for six years is a very good record.

    Thanks Krisbee. You really know this stuff well and also explain it really well too. Thank you!


    I have a 2 disk btrfs config on omv with data raid0 and metadata raid1. I do a daily rsync backup to another omv server with the same disk setup. Anything really important I also store on iCloud.


    Is this btrfs config a reasonable setup for data that is not super important, media etc.?

    It's reasonable, but your backup is crucial, so make sure your "restore" routine works.


    You must have created your filesystem at the CLI (or converted it later), as OMV's RAID0 defaults to RAID0 for data, metadata and system block groups.
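    For reference, that kind of mixed-profile filesystem is typically created along these lines at the CLI (a sketch; device names and mount point are hypothetical):

    Code
    # Data striped across both disks (raid0), metadata mirrored (raid1):
    mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc
    # Verify the profiles in use after mounting:
    btrfs filesystem df /mnt/point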

    The advantage of metadata raid1 in this case is made clear, for example, in this thread:

    [Reddit embed: a post from the r/btrfs community, www.reddit.com]

    Yes that's correct. I "wiped" the two discs within the storage plugin and went back to the zfs plugin, and they were not listed in the drop-down field for usable discs. So I created the pool on the command line, and the auto refresh in the plugin then showed the new pool, so I created the filesystems with the zfs plugin.

    Sounds like user error. Are you sure the disks were wiped? I hope you're not using USB-attached drives.


    Creating a "pool" at the CLI risks incorrect pool setup. The default pool settings in OMV are shown in the "zpool history".


    Example of two pools created via GUI, but props shown via CLI:
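    The commands to pull those props are simple enough (a sketch, with a hypothetical pool name):

    Code
    # Show exactly how the pool was created, including the options OMV passed:
    zpool history tank | head
    # Compare key pool and dataset properties:
    zpool get ashift,autotrim tank
    zfs get compression,xattr,acltype,relatime tank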


    When selecting BTRFS you create a filesystem that can span one or more block devices (HDD, SSD, NVMe) with a storage allocation pattern determined by the profile used (single, dup, raid0, raid1, raid10). BTRFS does not create any kind of "pool" or virtual device name for the filesystem. The BTRFS filesystem can be referenced by using any of the individual device names.


    A BTRFS raid1 profile is NOT disk mirroring. Two or more devices can be used in a BTRFS RAID1 filesystem. BTRFS first allocates space in "chunks" and then fills up the "chunks". For a RAID1 profile, storage is allocated in pairs of "chunks" with the two copies of data & metadata on two different devices.


    Removing a HDD from your two-device BTRFS RAID1 filesystem means it no longer has two copies for "read integrity checks" and can only write new data & metadata with a "single" profile. If you reboot, the BTRFS filesystem will not auto-mount in degraded mode. It requires use of the CLI to replace a failed HDD and carry out a BTRFS balance.
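    Very roughly, the CLI recovery goes like this (a sketch only, with hypothetical device names, devid placeholder and mount point; read the BTRFS docs before attempting it on real data):

    Code
    # Mount the surviving device read-write in degraded mode:
    mount -o degraded /dev/sdb /mnt/recovery
    # Find the devid of the missing device:
    btrfs filesystem show /mnt/recovery
    # Replace the missing device (by its devid) with the new disk:
    btrfs replace start -B <devid-of-missing> /dev/sdd /mnt/recovery
    # Convert any chunks written as "single" while degraded back to raid1:
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/recovery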


    The btrfs status page combines info from several BTRFS commands: btrfs filesystem show, btrfs filesystem df, btrfs device stats and last scrub status.

    In what way is it not meaningful?


    There are multiple threads on the forum about BTRFS with lots of info and tips. Otherwise refer to: https://docs.openmediavault.org


    As already mentioned, replacing a drive in a BTRFS RAID1 filesystem requires the use of the CLI. Your expectation is incorrect.


    ZFS and BTRFS are very different. Neither BTRFS nor ZFS requires the use of a Proxmox kernel, although there are certain advantages to using it with ZFS.

    Why not?


    I thought because of the speed it's not V3?!

    Why should it be the same?


    If you're accessing an SMB share in Linux via a desktop file manager (e.g. Nautilus, Nemo, Dolphin, Thunar, etc.) you may get better performance by using a kernel CIFS mount instead. See man mount.cifs and google, for example, "how to mount cifs windows share on linux".
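    A minimal sketch of such a mount (server, share and user names are placeholders):

    Code
    sudo mkdir -p /mnt/omv-share
    sudo mount -t cifs //omv-server/myshare /mnt/omv-share \
        -o username=myuser,vers=3.1.1,uid=$(id -u),gid=$(id -g)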