Posts by FONMaster

    I went through a change from aufs to mergerfs in the OMV 3.5x range, and had some issues... and now I'm all good using /srv/<uuid> for my mergerfs file systems... with one exception: I can't reconfigure CIFS to use the /srv/* file systems. When I try to delete the old ones in the GUI, it fails with an error and I'm stuck. I thought I'd be able to go into config.xml and reconfigure it there, but I can't figure out what to change... is there any reference that shows how to trace the link from a filesystem to a shared folder to a CIFS share? If so, I think I could figure it out from there...
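For anyone else digging through this, here's a rough sketch of how I understand the links fit together in OMV 3.x's config.xml (element names are from memory and may differ by version; the UUIDs and paths below are made up, so check against your own file). A filesystem is an `<mntent>` entry, a shared folder points at it via `<mntentref>`, and a Samba share points at the shared folder via `<sharedfolderref>`:

```shell
# Illustrative, trimmed mock-up of the reference chain in config.xml.
# All names/UUIDs here are invented for demonstration only.
cat > /tmp/config-demo.xml <<'EOF'
<config>
  <system>
    <fstab>
      <mntent>
        <uuid>aaaa-1111</uuid>
        <fsname>/dev/sdb1</fsname>
        <dir>/srv/aaaa-1111</dir>
      </mntent>
    </fstab>
    <shares>
      <sharedfolder>
        <uuid>bbbb-2222</uuid>
        <name>media</name>
        <mntentref>aaaa-1111</mntentref>
      </sharedfolder>
    </shares>
  </system>
  <services>
    <smb>
      <shares>
        <share>
          <uuid>cccc-3333</uuid>
          <sharedfolderref>bbbb-2222</sharedfolderref>
        </share>
      </shares>
    </smb>
  </services>
</config>
EOF
# Follow the chain: mntent uuid -> sharedfolder mntentref -> share sharedfolderref
grep -E 'uuid|mntentref|sharedfolderref' /tmp/config-demo.xml
```

So if this picture is right, repointing a CIFS share at a /srv/* filesystem means the shared folder's `<mntentref>` has to carry the uuid of the new `<mntent>` entry. Back up config.xml before touching it.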


    I've searched and found nothing thus far...


    any help would be appreciated.


    FON

    Just one other note...


    The original reason I asked this question is that some drives were not appearing in SnapRAID's SMART reporting, as they were behind an older JMicron controller. Updating the smartctl config doesn't help that problem, as SnapRAID requires its own smartctl settings in its config file. Note that while -d is supported, -i is not.
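For reference, here's the kind of snapraid.conf fragment I mean (the disk names and the `-d sat` type are only examples; check the SnapRAID manual for the right type for your controller):

```
# Pass custom smartctl options for disks behind the older controller;
# %s is replaced by the device name when snapraid runs smartctl.
smartctl d1 -d sat %s
smartctl d2 -d sat %s
```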


    FON

    Thanks sub! Did it. It worked. My ego is slightly bruised. I thought after all this time and all this storage that I might be slightly advanced.


    Nope.


    Unless a man can edit /etc/default/openmediavault, he is not advanced. ;)


    Thanks again for the help! Time for me to study!


    FON

    Not sure how to configure this...


    At some point in their lives, three of my drives hit 53 degrees. This has caused SMART ID 190 to show that it failed in the past. I think I can safely ignore it, and I really want to do that... the problem is that I can't figure out how to add -I 190 to /etc/smartd.conf without having it overwritten...
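For context, the line I'd like to end up with in smartd.conf looks something like this (the device name is just an example; `-I 190` tells smartd to ignore attribute 190, airflow temperature, when tracking changes):

```
# Monitor everything (-a), but stop tracking attribute 190 on this drive
/dev/sda -a -I 190
```

The catch is that OMV regenerates /etc/smartd.conf from its own templates, so a direct edit doesn't survive.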


    Anyone have any ideas that can help?


    Your pal,
    FON

    Every time I do this, I kick myself.


    The UUID and the label for the missing disk need to be the same as they were in the original configuration. You can find both in /etc/openmediavault/config.xml. The easiest way is to search for the original label of the missing disk.


    Change the label of the missing volume via:


    sudo e2label /dev/sdX1 "mydiskname"


    Change the UUID of the volume via:


    sudo tune2fs -U <uuid> /dev/sdX1


    You should see the disk reappear in the file system inventory.


    FON

    A couple of notes about 10.0 for those who are interested.


    1) It is faster. I think the estimates from the beta are probably right... about 2x faster. Nothing but goodness.
    2) It does use more memory than 9.3. With 9.3, I was using about 4.5 GB; with 10.0 it's back to 8 GB or so... still below 8.x levels.
    3) CPU utilization is a little higher. Not a lot, but somewhat... so if you run a transcoder on your box (e.g. Plex), you may want to watch it at first.


    Happy snapping!


    FON

    you rock! much appreciated!


    Here's my first report - with 8.1, I used 10.4G of RAM. With the only change being the version upgrade, I'm now at 4.2G of RAM. Thrilled by that! We'll see how it goes with time to sync, but the initial signs are great. Can't wait for 10.x, but the 9.x version is a great improvement for me!


    FON

    Quote from evlcookie: “Is there any reason snapraid is still at 8.1 via the plugin?”
    I have updated to the latest for OMV 3.x but not 2.x. Is there something in a newer release that is critical?


    Well.... the thing that would be nice is the lower memory footprint and faster speeds. (Not 10.x faster speeds, but users do report faster speeds...) My setup takes 150+ hours to do a sync after big moves, so every little bit would help.


    Thx!

    Thanks ryeco! Can't wait to see it. Love the plugin thus far. Thanks so much for it. I sleep better at night, knowing that my wife is less likely to be upset by the loss of her favorite shows. ;)


    FON

    A quick question on parity drives. I've got 24 content drives in my OMV, and I'm busily converting from LVM to SnapRAID. As I read the FAQs, I see a recommendation of 4 parity drives for 24 content drives. I've got enough spare drives to implement 4 parity drives, but the GUI doesn't seem to support it. Can I edit snapraid.conf manually to add additional parity drives without messing things up, or should I be satisfied with two parity drives?
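In case it helps anyone else, the snapraid.conf syntax for extra parity levels looks like this (paths are made up; SnapRAID supports multiple parity levels, each on its own dedicated drive):

```
# Each parity level gets its own drive and its own parity file.
parity   /srv/parity1/snapraid.parity
2-parity /srv/parity2/snapraid.2-parity
3-parity /srv/parity3/snapraid.3-parity
4-parity /srv/parity4/snapraid.4-parity
```

SnapRAID itself should honor these lines even if the GUI doesn't expose them, though the plugin may rewrite the file when you save settings, so keep a backup.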


    Thx!


    FON