Posts by maxxfi

    Hello,


    I understand that the recommended way to set up NFS on OMV is to have the client user with primary group=100 ('users').
    That is something I can apply to most of my NFS clients, but not to my primary Linux workstation, where the primary group is the user's own group (!= 100).
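    For example (usernames here are just placeholders), on that workstation 'id' reports something like:

        uid=1000(marco) gid=1000(marco) groups=1000(marco)

    whereas on my other clients the gid is 100(users).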


    Is there a simple way to grant r/w access to such a client (for a single username)?

    It seems that disabling the SMART test has solved the problem. With my kind of settings SMART probably doesn't make the disks spin up if they are already in standby, but it does reset the spindown counter every 7200 s.
    I'll probably reinstate the SMART tests as e.g. a weekly task, scheduled for a time when the NAS is likely up anyway (e.g. evenings).
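    As a rough sketch of what I have in mind (device name, file path and schedule are only illustrative), a cron entry like this would run a short self-test on Sunday evenings:

        # /etc/cron.d/smart-weekly (hypothetical file)
        0 21 * * 7 root /usr/sbin/smartctl -t short /dev/sda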


    Marking the question as resolved, but opinions, comments and suggestions are always welcome.

    My OMV server has 1 SSD for the system, 2 WD Red in RAID1 and 3 WD Red in SnapRAID+mergerfs.
    The HDD volumes provide some NFS shares.


    I'd like to configure them to spin down after 3-4 h of inactivity.
    Could you guide me through the settings you would recommend for:
    - spindown timer
    - APM (advanced power management)
    - SMART check interval / power mode


    Currently my settings are: timer=180 min, APM=127 and SMART interval 7200 s (power mode: standby), but with these the disks never seem to go to sleep (of course no share is mounted during this time).
    Or is there some other periodic OMV system task that keeps the disks busy?
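    For reference (device name is just an example), I believe my current values correspond roughly to:

        hdparm -S 246 -B 127 /dev/sdb   # -S 246 = 180 min spindown timer, -B 127 = highest APM level that still permits spindown
        hdparm -C /dev/sdb              # reports the current power state (active/idle vs standby)

    while the SMART checks run with the equivalent of 'smartctl -n standby', which should skip a disk that is already spun down.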


    Thanks

    Hello,


    While configuring my NFS mounts, I ended up using extra options like "fsid=NNN,sync,crossmnt,no_subtree_check,insecure", which are quite different from the "subtree_check,secure" currently offered as defaults in OMV 2.0.
    Since from time to time I'm still reshaping my mounts and adding more mountpoints, I edited the script /var/www/openmediavault/js/omv/module/admin/service/nfs/Shares.js directly so that it offers those values instead.
    Would it be possible to have a new environment variable for customized default values? If I open a request on the bug tracker, is there a chance it would be supported, or am I going against some basic design choices?
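    For illustration (path and network are just placeholders), the resulting lines in /etc/exports look roughly like:

        /export/media 192.168.1.0/24(rw,fsid=2,sync,crossmnt,no_subtree_check,insecure)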


    Thanks

    Thought I was going to read a proof-of-concept post demonstrating the bug. This was patched in Debian stable back in 2014:


    http://metadata.ftp-master.debian.org/changelogs/main/o/openssl/openssl_1.0.1e-2+deb7u20_changelog


    CVE-2014-0160 is the reference


    BTW, the changelog is present on every OMV system with openssl installed, at /usr/share/doc/openssl/changelog.Debian.gz, so personal verification is one zgrep away :)
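    For example, something along the lines of:

        zgrep CVE-2014-0160 /usr/share/doc/openssl/changelog.Debian.gz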

    Thanks for your reply.


    Anyway, on the first point: is it OK, or is it a no-no, to write directly to the mergerfs member disks?


    And regarding direct_io, I added it to the OMV_FSTAB_MNTOPS_MERGERFS options listed in /etc/default/openmediavault and rebooted the server. I can see that fstab has direct_io listed, but I cannot see that flag in the output of 'mount', in /proc/mounts, or in /proc/filesystems. How do I verify that it is actually in use?
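    In concrete terms, this is what I'm comparing (as described above, only the first shows direct_io):

        grep mergerfs /etc/fstab      # direct_io appears here
        grep mergerfs /proc/mounts    # ...but not here
        mount | grep mergerfs         # ...nor here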

    Hello,


    I'm configuring my new OMV box, where I have a volume made up of two SnapRAID data disks (plus parity) pooled together via mergerfs, configured with the epmfs policy. I have a simple question that must be a stupid one, as I haven't seen it asked yet :)
    To quickly transfer the content from my old NAS and backups, can I bypass the mergerfs layer and write directly to the member disks, or is it recommended to always write to the mergerfs mountpoint?
    Otherwise, if I must go via mergerfs, I guess I can mount it with the 'direct_io' option while doing the mass restore, then simply unmount it and remount it without that flag (being media storage, I expect it to be mostly read from)?
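    In other words, for the restore phase I would temporarily use an fstab entry along these lines (branch and pool paths are just placeholders), and drop direct_io again afterwards:

        /srv/disk1:/srv/disk2 /srv/pool fuse.mergerfs defaults,allow_other,direct_io,category.create=epmfs 0 0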


    Thanks in advance