Posts by sadpanda

    Yep, seen that and just about everything else on the ol' web pertaining to this.


    Up to Samba 4.15, SMB multichannel was considered experimental; after that it was enabled by default. OMV6 shipped a far older package, so I built from source thinking it would help the issue.


    OMV7 is packaged with Samba 4.17.12, so I'll come back to this if I can't get resolution in another thread.

    I installed OMV6, had a bunch of shares up and running, and wanted to attempt getting SMB multichannel running... I built/installed Samba from source, gave the 10GbE cards in my Mint box and OMV static IPs, added server multi channel support = yes to the extra settings, connected to the 10GbE switch, and everything was flying at 2x... I shut everything down so I could rearrange the desk, and on power-on nothing was working.
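
    For reference, the "extra settings" bit amounts to this in the [global] section of smb.conf (a minimal sketch; the option is the stock Samba one, nothing OMV-specific):

    Code
    # added via the OMV SMB/CIFS "Extra options" field, ends up in [global]
    server multi channel support = yes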


    What's worse is I can't even get close to replicating it: on fresh installs of both OMV6 and OMV7, basic shares work (no ACLs, just user/pass), but installing the updated Samba breaks the shares. I can see the share but it does not accept credentials.


    Everything looks good in /etc/samba/smb.conf after the upgrade, i.e. shares are there, users are listed, and changes made in the GUI are being written to smb.conf.


    I've also tried installing Samba right after the OMV install (no existing shares etc.) and no go.


    OMV7 ships with a sufficiently new version of Samba to do multichannel, but I could not get it to utilize both connections. smbstatus -v showed both IPs, but only one had multiple ports and speed was slow.
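
    One thing I still need to rule out on the Mint side: as far as I know the Linux cifs client only negotiates multichannel if you ask for it at mount time (this is an assumption about my client setup; Windows clients negotiate it on their own). Something like:

    Code
    # hypothetical mount from the Mint box; server, share and user are placeholders
    sudo mount -t cifs //omv/share /mnt/share -o username=me,multichannel,max_channels=2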


    FYI, for testing I've got an NVMe drive in the Mint box and a simple 8-wide stripe on SSDs. I'm using

    ./configure --disable-cups --without-ad-dc --sbindir=/sbin/ --sysconfdir=/etc/samba/ --mandir=/usr/share/man/

    during build
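
    (The rest of the build is nothing special, roughly the usual make / make install, sketched below in case someone spots a missing step. The -j value is just an example; prerequisites and service restarts aren't shown.)

    Code
    # after the ./configure line above (sketch only)
    make -j4
    sudo make install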


    Any bright ideas?

    I can confirm that re-enabling demand-based power management prevented boot.


    I'll report back if I have time to dig through logs and see if I can get it resolved. Probably a 'me' problem, given I'm likely the only one still operating a Dell Space Heater model 1950.

    So, you created the dataset from the command line? If yes, then you have to click the import button. Not much has changed with the zfs plugin other than the visual part.

    Thanks for the quick reply.


    No, I tried using the GUI. If I use the CLI, can I import/mount the root of the dataset, or will I always have subfolders?

    I created a pool, then created a dataset /test via the GUI, and it is successfully mounted.


    At this point I can't create a share on this root mount point... I have to create a file system, which results in pool/test/shared. My memory may be off, but I'm pretty sure under OMV5 I was able to create shares on the dataset root. Is this new or am I doing something dumb? This is how the pool was created:


    Code
    zpool create -o ashift=12 -O compression=lz4 -O normalization=formD -O acltype=posixacl -O xattr=sa -O dnodesize=auto -O atime=off -O canmount=off -O recordsize=1m pool mirror -yadda yadda yadda


    My understanding is that canmount=off set at pool creation should only apply to the top-level dataset of the pool, not to the child datasets.
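
    For what it's worth, this is how I'd double-check that (a sketch; 'pool' stands in for the actual pool name):

    Code
    # show canmount for the pool root and everything under it, plus where each value comes from
    zfs get -r -o name,property,value,source canmount pool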


    Thanks

    Doing a fresh install. On first login:


    1. change console password
    2. update
    3. install OMV extras via SSH
    4. install pve addin
    5. install pve kernel
    6. reboot

    On reboot I get:

    Code
        a couple of ACPI errors
        ERST: Failed to get Error Log Address Range
        rcu: INFO: rcu_sched detected stalls on CPUs/tasks... (false positive?)
        rcu: rcu_sched kthread timer wakeup didn't happen...
        rcu: Possible timer handling issue on cpu=6 timer-softirq=1 (I've tried reboot and reinstall and this has also been on cpu 4 and 5)
        rcu: rcu_sched kthread starved for xxxxx jiffies!
        rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior
        rcu:  RCU grace-period kthread stack dump:
        rcu: Stack dump where RCU GP kthread last ran:
        INFO: task swapper/0:1 blocked for more than 1208 seconds  not tainted 5.15.19-1-pve #1



    :/

    Currently on OMV 5.3.9, kernel 5.4.44-1-pve, plugins: zfs 5.0.5, flashmemory 5.0.7, diskstats, etc... no docker/plex etc.


    The box has been shut down for a long time, so I'm trying to figure out where I left it and what dum-dum stuff needs fixing.


    This is strictly storage (movies, MP3s, JPGs, system backups, ISOs, etc.), no VMs, but I will be frequently shuttling video back and forth to my workstation for processing (archiving VHS/family photos etc.), so I would like some upload speed (going to 10GbE in a week or two). It looks like I originally set up a big pool of mirrors... probably not ideal, so I'm thinking nuke the pool, upgrade OMV, and start again is the best option. Questions before I start:

    1. I also seem to remember purposefully not installing the ZFS update... I see OMV6 is not listed as stable and ryecoaaron is still working on the zfs plugin. Should I hold off on both the zfs and OMV updates? What's the preferred method?
    2. I seem to remember having to fight zfs to recognize disks by-path (i.e. /dev/disk/by-path/pci-0000:0a:00.0-sas-exp0x5005076028e311e0-phy10-lun-0-part2). Is this still the case with the current zfs build?
    3. Is poor sequential performance in mirrored pools still a thing?
    4. Instead of one big tank, what about pools based on data type and ease of upgrade: i.e. one pool of striped mirrors for music/JPGs, another striped-mirrors pool (possibly SSD) for faster I/O from the workstation, and some sort of raidz for sequential, slower, read-only stuff like movies?
    5. Anyone have thoughts on structuring across HBAs/expanders? I.e. stripe/z within an HBA, mirror across HBAs, so that if one HBA/shelf goes down the pool survives. Similarly, what about channels on the expanders; keep stripe/z/mirrors on the same channel or spread across all 4?
    6. I've not seen this anywhere, but does anyone use dataset quotas / manually constrain datasets to a certain size (sketch below)? I can see that being beneficial during recovery/upgrading... i.e. let's say you have 4 x 1 TB and 4 x 2 TB drives in some config; limit all datasets to < 1 TB so if some crazy hardware/zfs/upgrade/just-start-over issue comes up, each dataset can be exported to a single drive and plopped onto a new pool.
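
    For question 6, the kind of thing I mean (a sketch; pool/dataset names are made up):

    Code
    # cap each dataset at 1 TB so any one of them still fits on a single 1 TB drive
    zfs set quota=1T pool/music
    zfs set quota=1T pool/photos
    zfs get quota pool/music pool/photos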


    Thanks!

    Here is the issue... zfs pool (Tank) with datasets (movies, music etc)....


    You need to 'add a shared folder' before you can expose it as a share on NFS/SMB.


    In the 'add a shared folder' dialog box there is an important blurb

    Quote from the dialog box

    The path of the folder to share. The specified folder will be created if it does not already exist.


    The only options you have to create shares:

    1. Name='tank', device='Tank', path='/' - should expose all datasets
    2. Name='music', device='Tank', path='music' - should expose the root of the dataset (i.e. /Tank/music/)
    3. Name='music', device='Tank/music', path='/' - should be the same as above



    If memory serves, I originally went with option 1 and had no issues... However, I had to reinstall OMV after trashing my network settings. Once up and running, I hopped on here and found a post with instructions on re-connecting shares (I think they used option 2) and missed the GUI auto-populating the path field. The result was new nested folders (Tank/music/Music).


    So I deleted the shares (this is SCARY! Does "delete share and contents" give you a second confirmation? It seems awfully easy to delete contents by accident) and tried option 1 again.


    'Tank' is exposed, all datasets appear as folders, folders not in datasets appear as folders, etc. However, file permissions are not correctly implemented. I cannot write/delete in the root dir despite that being allowed in the OMV GUI and by the file system permissions:


    Code
    root@OMV:/Tank# ls -l /Tank/music
    total 9
    drwxrwsr-x 12 root users 12 Jun 30 17:48 Music
    root@OMV:/Tank#

    My Windows user/pass is replicated in the OMV users; groups are set to root/users/sudo.
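
    (One thing I haven't ruled out myself: the mode/owner on /Tank itself, as opposed to the datasets under it. If anyone thinks that's the culprit, I assume something like this would show/fix it; the group and mode are just what the Music dataset already uses:)

    Code
    ls -ld /Tank                  # check the pool root's own owner/mode
    chown root:users /Tank        # assumption: match the datasets' owner/group
    chmod 2775 /Tank              # group-writable with setgid, like the Music dir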



    Any thoughts on a fix?




    In playing with the other options above, I did have a few instances where empty folders were added, but it was not repeatable... GUI bugginess like what I experienced with network settings, I suppose.

    I trashed my network settings, so after a reinstall / re-importing my zpool, I am seeing some oddities:

    1. zpool shares are being presented 'nested' (i.e. zpool dataset 'Tank/movies' appears in SMB as \\OMV\movies\movies)
    2. shares are not triggering the 'referenced' flag in the 'file systems' tab

    Any thoughts on these?


    Thanks

    I hope this helps someone else...


    The OMV installer attempts to create a swap partition equal to the amount of system RAM by default. If RAM > available disk space, the installer fails.


    Workarounds:


    -Pull out all but the minimum system RAM

    OMV doesn't need much disk space and I have a bunch of smaller unused USB sticks lying around (8GB), BUUUT my minimum RAM configuration is 8GB.


    -Load Debian, then build/load OMV

    The original thread I found suggested the netinst image.


    -My method: install on a sufficiently larger USB stick, then transfer the partition to the smaller stick (I have 64GB of RAM so I used a 128GB stick)

    1. Install as per usual onto the larger stick
    2. Load OMV extras / the ramdisk plugin, restart, and verify operation (the swap partition should show as 'unmounted')
    3. Burn the GParted image to CD or USB stick and boot it (make sure you grab the amd64 image)
    4. Resize the data partition to fit the smaller USB stick (overprovision a bit, i.e. 7.25GB on an 8GB stick; many reasons for this)
    5. Delete any existing partitions on the smaller stick
    6. Copy the data partition from the larger stick onto the smaller stick
    7. After these operations have completed, add the 'boot' flag to the newly transferred partition
    8. Test functionality - this may not always be the case, but I had to reinstall the bootloader as well (see the sketch after this list)
      1. guide for bootloader here (Restoring GRUB 2 Boot Loader)
    9. Repeat the procedure on a third stick... Might as well make a backup (or 3) while you are at it.
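
    For step 8, a rough sketch of what reinstalling GRUB from a live/rescue shell generally looks like (/dev/sdb and /dev/sdb1 are placeholders for the new stick and its partition; the linked guide has the details):

    Code
    # mount the copied root partition and bind the pseudo-filesystems into it
    mount /dev/sdb1 /mnt
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    # reinstall GRUB onto the new stick and regenerate its config
    chroot /mnt grub-install /dev/sdb
    chroot /mnt update-grub
    # clean up
    for d in dev proc sys; do umount /mnt/$d; done
    umount /mnt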



    This method should also work for migrating from a disk install to USB, or spinner to SSD, etc.


    References to other threads:

    MAIN SOURCE: Issues installing : Failed to partition disk because lack of free space or too small(WTF)

    SWAP partition needed when running OMV in USB Stick?

    Installing-Migrating OMV to USB stick

    Interesting USB endurance test

    I am still on this quite old OMV 3 version. znapzend is running flawlessly.

    I have no experience with znapzend in a docker installation. In the post you have linked there is another link where a precompiled znapzend package from Gregy can be downloaded. OMV 5 is based on Debian 10, and a znapzend package is also available for that.

    Thanks!


    I had a drive drop out (listed as UNAVAIL). I remember seeing you or others post about using serial-number IDs, and I was reading more about setting up /etc/zfs/vdev_id.conf (also here), and was wondering if anyone with larger arrays has tried the by-channel/slot method?


    I would like to see the disks in the vdev as something like Ua0, Ub0 (upper JBOD, channel 1, disk 0 and disk 1) and Lc3, Ld3 (lower JBOD, channel 4, disk 2 and disk 3).


    I've been playing with it but I'm at a bit of a loss. I think the slot mapping is required, because when not specifying it / using the defaults, I'm getting duplicate IDs/phys...


    (note, all outputs shortened to avoid 10k char msg len restriction)

    Code
    # ls /dev/disk/by-id/
    ata-SAMSUNG_HD103SJ_S246J9GB102368                    ata-ST3750640NS_5QD4EAYM
    ata-SAMSUNG_HD103SJ_S246J9GB102368-part1              ata-ST3750640NS_5QD4EAYM-part1
    ata-SAMSUNG_HD103SJ_S246J9GB102368-part2              ata-ST3750640NS_5QD4EAYM-part2
    ...





    My topology: 2x M1015 / LSI 9220-8i; both ports on each M1015 are connected to an IBM 46M0997 expander. Each expander connects to a 16-slot backplane.

    So the PCI slots:


    Code
    0a:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
    0c:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)




    and the end devices:


    Code
    # ls -l /sys/class/sas_end_device
    total 0
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-2:0:0 -> ../../devices/pci0000:00/0000:00:04.0/0000:0a:00.0/host2/port-2:0/expander-2:0/port-2:0:0/end_device-2:0:0/sas_end_device/end_device-2:0:0
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-2:0:1 -> ../../devices/pci0000:00/0000:00:04.0/0000:0a:00.0/host2/port-2:0/expander-2:0/port-2:0:1/end_device-2:0:1/sas_end_device/end_device-2:0:1
    
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-3:0:0 -> ../../devices/pci0000:00/0000:00:06.0/0000:0c:00.0/host3/port-3:0/expander-3:0/port-3:0:0/end_device-3:0:0/sas_end_device/end_device-3:0:0
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-3:0:1 -> ../../devices/pci0000:00/0000:00:06.0/0000:0c:00.0/host3/port-3:0/expander-3:0/port-3:0:1/end_device-3:0:1/sas_end_device/end_device-3:0:1
    ..
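
    For reference, this is the direction I've been poking at in /etc/zfs/vdev_id.conf (very much a sketch; the PCI addresses come from the lspci output above, but the HBA port numbers and channel names are guesses that would need checking against the actual cabling):

    Code
    # /etc/zfs/vdev_id.conf (sketch)
    multipath     no
    topology      sas_direct
    phys_per_port 4
    slot          bay

    #        PCI_SLOT  PORT  CHANNEL
    channel  0a:00.0   0     Ua
    channel  0a:00.0   1     Ub
    channel  0c:00.0   0     Lc
    channel  0c:00.0   1     Ld

    # after editing, re-run the udev rules and look for the aliases:
    # udevadm trigger && udevadm settle
    # ls -l /dev/disk/by-vdev/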



    Thanks!

    New to NAS, fresh build. I played with FreeNAS for a while and got annoyed, so now I'm here.


    I forgot to destroy my pool before installing OMV, so after installing the plugin I saw no disks.


    I used the GUI to import the pool with success... I did not need the data and there was a bunch of FreeNAS stuff on there, so I used the CLI: # zfs destroy -R tank, expecting to kill the whole thing, but the result was an empty pool.


    My questions are:


    The GUI does not appear to support creation of my desired structure (mirrored pairs), so I'm assuming the process would be to create via CLI then import? Something like the sketch below.
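
    (A sketch only; the device names are placeholders, and on the real thing I'd use /dev/disk/by-id names:)

    Code
    # striped mirrors: each 'mirror' pair becomes a top-level vdev in the pool
    zpool create tank mirror sda sdb mirror sdc sdd
    zpool status tank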

    Does the plugin GUI's 'Add Object' = add dataset?

    With the empty tank, I went to 'shared folders' > add, created a share, and shared it via SMB... the folder is visible via the CLI and usable on Win10.

    What is this mechanism? I'm still learning, but coming from FreeNAS, folders only reside in a dataset.


    thanks!