Posts by gderf

    Not enough information provided to say much about why your disk went missing. Does the drive appear in the Disks list in OMV? If not, this is most likely a hardware problem, and the Filesystems page will not show the partition. Also, filesystems mounted from the CLI will not appear in OMV's Filesystems page, nor in any of OMV's drop-down selection lists elsewhere in the GUI. Filesystem UUIDs are created when a partition is first formatted, and they do not change merely because a drive is unplugged and replugged.
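    If you want to check from the shell whether the kernel still sees the drive, and that its UUID really is unchanged, a couple of read-only commands will show it (they make no changes):

```
$ lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT
$ ls -l /dev/disk/by-uuid/
```

    Unplug and replug the drive and run these again; the UUID shown for its partition should be identical both times.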

    The SnapRAID manual, section 4.4.1, gives an example of how to recover a disk. That example shows the destination the data will be restored to, which you set by editing the snapraid.conf file as shown. You can set that destination to whatever you want: the mountpoint of a newly added and formatted disk, or the mountpoint of some other already existing disk.
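    As a sketch of what that looks like in practice (the disk name d1, the log file name, and the paths are examples, not taken from your system):

```
# In /etc/snapraid.conf, repoint the lost disk's data line at the
# mountpoint where you want the data restored, e.g.:
#   data d1 /srv/dev-disk-by-uuid-NEWDISK/
# Then fix just that disk, logging the result as the manual suggests:
$ snapraid -d d1 -l fix.log fix
```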

    There is no linkage between mergerfs and SnapRaid.

    First off, although this is rarely mentioned, some prior experience with the Linux command line shell is going to be needed if you are going to work under the hood of a product like OMV. The OMV GUI cannot do everything for everybody. You probably don't want to hear this, but it is a fact of life around here.

    The file you need to modify is /etc/snapraid.conf. Elevated (root) privileges are needed to modify this file. Take notes so you can restore what you changed, as these changes are only temporarily needed. The easiest way is to use the comment character (#) to comment out a line you wish to change, then type in a new line, changed as needed, directly below it. There are already many comments in the file, so just look, see, and understand.
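    For illustration, the comment-out-and-replace pattern inside /etc/snapraid.conf looks like this (the disk name and paths are made-up examples):

```
# Original line, kept so it can be restored afterwards:
#data d1 /srv/dev-disk-by-uuid-1111-2222/
# Temporary replacement, changed as needed:
data d1 /srv/dev-disk-by-uuid-3333-4444/
```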

    If you are trying to recover an entire lost drive, then the space needed is another empty formatted and mounted drive of at least the same size as the lost drive. I do not believe it is possible to span a recovery across multiple drives.

    You do not make any changes to the configuration in the OMV GUI to perform the recovery of a failed drive. It's all done in the shell.

    Also, SnapRAID as implemented in OMV does not use /dev/sdX device names in its configuration. It uses by-label or by-uuid specifiers. Look in the snapraid.conf file and see.
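    For illustration, the OMV-generated entries look something like this (the UUID and label values here are made up):

```
# Disks are referenced through stable by-uuid/by-label mountpoints,
# never /dev/sdX:
data d1 /srv/dev-disk-by-uuid-0a1b2c3d-1111-2222-3333-444455556666/
data d2 /srv/dev-disk-by-label-data2/
parity /srv/dev-disk-by-label-parity1/snapraid.parity
```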

    Then you should follow the example in the manual, section 4.4.

    There is a SnapRAID user forum you can look through or ask questions in, but do keep in mind that some prior Linux shell experience is assumed there as well.


    2. Is it possible to reserve the boot drive for only OMV use?

    3. Is it possible to change the docker image storage location? I tried changing it under OMV Extras when I first deployed this system, but it didn't work.

    2. Probably not, and even if you could, some runaway errant process could still fill up the drive.

    3. Of course, and not leaving it in /var is always recommended. Find out why this didn't work.
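    If the OMV Extras setting didn't take, one thing worth checking by hand is the daemon's data-root setting; a sketch, assuming you edit the daemon configuration directly (the path is an example):

```
$ cat /etc/docker/daemon.json
{
  "data-root": "/srv/dev-disk-by-uuid-1234/docker"
}
$ systemctl restart docker
$ docker info | grep "Docker Root Dir"
```

    The last command confirms which storage location the daemon is actually using.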

    3 14TB WD Elements HDD’s formatted in ext4 totalling some 42TB (38TB useable after formatting). In hindsight I should possibly have put them in some sort of RAID configuration but they are all working fine for my needs (For now).

    I'm pretty sure that OMV's GUI will not let you configure a RAID with USB-attached drives. But you probably could set it up by hand in the shell.
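    If you did want to try it by hand, mdadm is the usual tool; a hedged sketch only (the device names are examples, and creating an array destroys existing data on those drives):

```
$ mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
$ mkfs.ext4 /dev/md0
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist across reboots
```

    Keep in mind that USB enclosures can drop out under load, which is exactly what you don't want under an mdadm array.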

    The aquota.* files are used by the quota system, even if quotas are not enabled. They cannot be routinely deleted, even by the root user, because they have their immutable bit set.
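    You can see the immutable bit with lsattr, and root can clear it with chattr, though there is normally no reason to (the path is an example):

```
$ lsattr /srv/dev-disk-by-uuid-1234/aquota.user
$ chattr -i /srv/dev-disk-by-uuid-1234/aquota.user   # root only; usually leave it set
```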

    As for why files are not being written to the newly added disk, the most likely causes are an incorrect (for the intended use case) mergerfs create policy or incorrect permission/ownership on the target disk directory.

    What are the filesystem permissions and ownership of that directory (not what you see or set for the shared folder in OMV)?
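    A quick way to answer that from the shell is stat; the directory below is a stand-in so the commands can be demonstrated end to end, and on your system you would point them at the real mergerfs branch directory instead:

```shell
# Stand-in directory for illustration only
d=/tmp/example-share
mkdir -p "$d"
chmod 775 "$d"
# Print owner:group and the octal permission bits
stat -c '%U:%G %a' "$d"
```

    Compare the owner, group, and mode you get against what the writing user actually needs.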

    Also, you may run into problems with SnapRAID throwing a flood of warnings at you if you run a sync without excluding some, most, or possibly all of that shared folder.
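    If you do need to exclude it, that also goes in /etc/snapraid.conf; exclude paths are relative to the root of each data disk (the directory name here is an example):

```
# Skip a churn-heavy shared folder during sync/scrub
exclude /appdata/
# Patterns work too
exclude *.tmp
```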

    There are Mini-ITX server boards that have as many as 12 SATA ports. I ran a 12-port ASRock C2550D4I in a tiny Silverstone DS380 case that holds 8 hot-swap drives plus four internal 2.5in drives. I extended it with a homemade DAS in another identical case that holds another 8 drives for a total of 15. I ran this for more than five years and still have it.

    I recently moved to a used Chenbro NR12000 1U server that has six SATA ports and eight SATA/SAS ports. It holds twelve 3.5in drives and one 2.5in drive. Not small, potentially noisy, but very cheap.

    I have a thread on the forum about it:

    Monster 1U Server

    So long as you have another good copy of the .content file I would try removing that one from the SnapRAID configuration and check again. If it passes check, delete the file from the disk and add it back into the configuration. Another new copy should appear eventually, and once it does you can run the check again.
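    Sketched as a sequence, assuming the suspect content file lives at the example path below:

```
# 1. In /etc/snapraid.conf, comment out the affected content line:
#      content /srv/dev-disk-by-uuid-2222/snapraid.content
$ snapraid check
# 2. If the check passes, delete the stale file and restore the line:
$ rm /srv/dev-disk-by-uuid-2222/snapraid.content
# 3. The next sync writes a fresh copy; then check again:
$ snapraid sync
$ snapraid check
```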

    Your screencap shows only the environment variables, which are a small fraction of the container configuration.

    The entire configuration is needed to debug this along with the log file.
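    One way to capture all of it, with the container name below swapped for yours:

```
$ docker inspect my-container            # full configuration as JSON
$ docker logs --tail 100 my-container    # recent log output
```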

    You pasted a link to the image, but have you read any of the material there, like the docker compose or docker CLI examples?

    I am beginning to get the feeling that you set this up by parroting someone else's stuff you found on the web into Portainer. Having high expectations that this method works is not reasonable. The tiniest difference between the source's and your use case can completely break things. But this is rarely mentioned in YouTube videos.

    Look at the docker CLI file provided at the image source and correct it to agree with what you actually did to arrive at your configuration. Then post it.