Does one need to enable "Run Scrub - Set to true if you want to scrub after a successful sync" in order for the options "Scrub Frequency" and "Scrub Percentage" to take effect? The option sounds like it will run regardless of the set scrub frequency, but the SnapRAID logs say "Array scrubbing is not enabled", so should it be enabled or not?
I use WinSCP to transfer files via SFTP. It supports resuming partial transfers.
Using LFS will result in filling disks one by one before moving on to the next disk. Using MFS will result in a roughly even amount of free space across all disks as they fill.
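The difference between the two policies boils down to mergerfs's `category.create` option. A hedged fstab sketch (the branch paths /mnt/disk1, /mnt/disk2 and the pool path /srv/pool are assumptions, not anything from this thread):

```shell
# LFS ("least free space"): new files go to the already-used branch with the
# least free space above minfreespace, so disks fill up one by one.
/mnt/disk1:/mnt/disk2  /srv/pool  fuse.mergerfs  defaults,category.create=lfs,minfreespace=4G  0 0

# MFS ("most free space"): new files go to the branch with the most free
# space, which keeps free space roughly even across all disks.
/mnt/disk1:/mnt/disk2  /srv/pool  fuse.mergerfs  defaults,category.create=mfs,minfreespace=4G  0 0
```

Only one of the two lines would actually be used at a time; they are shown side by side here for comparison.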
Doesn't answer the question, you're just explaining what it does. What is the benefit of LUS/MFS for media?
Not sure what the objection to scattering data around is. If you operate on the data only via a mergerfs mountpoint you are not even aware of the scattering.
T.Underhill already mentioned drawbacks of scattering. E.g. when you copy over a TV show with multiple seasons/folders. With LFS the whole series ends up on the same drive. With LUS/MFS it ends up scattered around on multiple drives.
Thus if you remove one drive you don't have the whole series on it. Or when you lose one drive and somehow can't restore it with SnapRAID, you end up with an incomplete series in your pool. Good luck finding out manually which episodes are now missing. With LFS the whole series would be gone, so it's far easier to know what's missing.
T.Underhill also mentioned a second drawback: possible drive spindown prevention. If you are e.g. listening to an album and every file is on a different drive, all drives have to stay awake while you're listening to that album. With LFS only one drive has to spin up.
I'm sure there are more drawbacks. So the question once again: what benefits does the MergerFS policy LUS/MFS have for media in comparison to LFS?
There is no requirement to write data into the pool via the mergerfs mountpoint. If you want certain data to be stored on certain disks, then put it there directly yourself.
The advantage of letting mergerfs do the balancing is that you don't have to manually place data onto individual drives.
I think you completely misunderstood the question from both T.Underhill and me.
The question is: what benefits does the MergerFS policy LUS/MFS have for media in comparison to LFS? Why not always use LFS? And the question came up because romibaer is trying to evenly spread the data across all drives, thus scattering everything around.
So my question: why would you balance the data among all the drives? What is the advantage?
I'd also like to know this. I haven't found a reason why LFS, or EPLFS for more placement control (which requires manual intervention when a drive runs full), shouldn't always be used for media.
I just told you. Look up the path the Docker container is using as described above and then just delete it via SFTP. You'll also need to reconfigure your Docker container to specify the HDD as the download folder instead of your SD card.
You've set the download path via the Docker config. If you can't remember your config go to Services -> Docker -> Docker Containers -> select your Transmission container, press Details and look for the HostConfig part. Or press Modify, accept the warning and look under Volumes and Bind mounts.
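If you prefer the shell over the GUI, the same bind-mount information can be read with `docker inspect`. The container name `transmission` is an assumption here; substitute whatever name your container actually has:

```shell
# Print just the host-path bind mounts of the container (name is assumed).
docker inspect transmission --format '{{ json .HostConfig.Binds }}'

# Or dump the full config and search for the bind mounts manually.
docker inspect transmission | grep -A 5 '"Binds"'
```

The host-side half of each `host:container` pair is where the downloads actually live on disk.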
How did you install Transmission? Via Docker?
One of the paths it could be under is /home/<your user>/...
No. The rootfs is not mounted on /srv.
My bad, I didn't see that the SD card is used as the main drive; I thought it was secondary. Anyway, via SFTP EchoZ can now see the files by going to the Transmission download folder path.
Use SFTP on port 22. If not yet enabled go to Services -> SSH and enable.
Tried FTP etc but can't get to the SD.
All mounted drives are accessible under the path /srv (also via (S)FTP). Since you don't have a label set for the sdcard the folder should be called dev-disk-by-path-... under /srv/.
Announcing services is fundamental for a NAS and makes it easy for users to access SMB shares in Windows. So I sadly have to say that this feature request won't make it into the UI.
As long as I can disable it via SSH I'm okay with it.
I agree that it should be enabled by default.
However imagine a household:
There are around 15 Windows devices (PCs and laptops) in our household. My shares, however, should only show up on my devices, not on the others (it only confuses them; imagine the least tech-savvy users). No big deal, let's just delete the WSD device on the other computers, right? Nope, Windows is stubborn, and at the latest with the next feature upgrade the device shows up again... Essentially I'd have to remove it over and over on that many devices...
you all know that I hate checkboxes to enable or disable essential features
I'm new here
It will be included in openmediavault 4.1.20, see https://github.com/openmediavault/openmediavault/pull/316.
Could you please make this optional (enabled by default but with a checkmark to turn off)? I don't want my shares to be advertised via WSD.
I can confirm that the solution from MergerFS folders not mounted in /sharedfolders works on latest OMV 4.
There is one caveat in that fix: the file starts with "# This configuration file is auto-generated.", which implies that if there is an update, the file will be overwritten and you will need to do it again.
Correct. I created a second shared folder and the change from the first shared folder was overwritten. Which means any time you use the Add or Edit button in the GUI (even for another shared folder) you'll have to redo the change(s). This does not happen when using the ACL button btw., so personally I can live with it.
I can recommend the web browser extension DarkReader.
It works perfectly fine with OMV (I personally use Filter setting, not Dynamic, for pure black) and looks amazing. And it doesn't have the risk of screwing something up in the installation process and it won't suddenly stop working because of an OMV update.
But I don't understand the rules for this free space. Do you have a link I can check?
I will use my NAS for big files, roughly 50G per file.
The default 4G of MergerFS will be enough for you if you format your parity drive with the command I mentioned above since you don't have a lot of small files.
I have 5 x 4TB drives on my old NAS, but I see the maximum recommended is four data disks per parity disk.
So I'd have to buy another 10TB drive for double parity if I go for more data disks.
You don't have to. I'm running 7 disks with one parity disk without an issue. If I understand correctly, the recommended drive count exists because the more disks you have, the higher the chance of a bad block on one of them, which would mean losing data during a restore.
Disclaimer: New to OMV so I'm happy to be corrected about any mistake
how much space did you leave in the mergerfs settings (4G by default)?
50G. It all depends on your block size (I use 512KiB instead of default 256KiB) and the number of files you have.
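That free-space setting corresponds to mergerfs's `minfreespace` option. A hedged fstab sketch (the branch glob and pool path are assumptions):

```shell
# Skip any branch with less than 50G free when creating new files,
# leaving headroom for SnapRAID parity-block overhead.
/mnt/disk*  /srv/pool  fuse.mergerfs  defaults,minfreespace=50G  0 0
```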
Since I'm using 10TB disks, also for the parity, I increased the storage for the parity by 70GB by using the following format command:
mkfs.ext4 -m 0 -T largefile4 -L LabelXYZ /dev/sdX1
According to the SnapRAID documentation one can assume roughly half of the block size is wasted with 256KiB. I assume the higher the block size the higher the waste so let's assume 75% for 512KiB. Thus with 70GB headroom in the parity + 50GB free space on the drives (120GB) I can have a maximum of 312500 files. If I understand the parity calculation correctly this is not the maximum file limit for the whole pool but the maximum one drive can have.
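The arithmetic above can be checked directly. The 75%-of-512KB waste per file and the decimal (1000-based) units are the post's own assumptions, not values from the SnapRAID documentation:

```shell
#!/bin/sh
# 70GB parity headroom + 50GB free per data drive = 120GB budget (decimal units).
headroom_bytes=$((120 * 1000 * 1000 * 1000))
# Assumed average waste per file: 75% of a 512KB block.
waste_per_file=$((512 * 1000 * 3 / 4))
# Maximum number of files one drive can hold within that budget.
echo $((headroom_bytes / waste_per_file))   # prints 312500
```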
I see that the disk cache is not enabled by default in OMV. Should I leave it like that, or should I activate it?
The cache is actually enabled by default: the hdparm settings shown in the GUI are not applied unless you press "Save" for each disk at least once.
Personally I enabled disk cache since my 10TB drives have plenty (256MB) and I noticed significant SnapRAID sync speed increases with cache enabled (although it was enough to only enable cache for the parity drive).
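The drive's write cache can also be checked and toggled from the shell with hdparm. The device name /dev/sdX is a placeholder (substitute your actual drive), and the commands need root:

```shell
# Query the current write-cache state of the drive.
hdparm -W /dev/sdX

# Enable the on-drive write cache (use -W0 to disable it again).
hdparm -W1 /dev/sdX
```

Note that hdparm settings applied this way don't survive a reboot unless persisted, e.g. via OMV's per-disk settings in the GUI.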
I also saw that directories disappear in the shared folders. What is the best way to avoid this?
This happened on every reboot for me. The problem is that OMV tries to initialize the shared folder before MergerFS mounts its pool thus resulting in an empty shared folder.
Easy fix was to do the following:
Comment out line 8 of the file: