Posts by Adoby

    I would install docker and plex to the storage drive, rather than to /var/lib on the USB stick. Avoid putting docker and plex on the USB stick. Or at least use a high quality USB stick or a small SSD.


    The easiest might be to do a test install, and during the install simply change the default paths suggested so that docker and plex are not installed under /var/lib but on the storage drive. Verify that plex works as it should.


    A more advanced way, that I usually do, is to add an extra partition to a fast storage drive and change fstab so that partition is mounted as /var/lib. Or even add an extra SSD just for /var/lib. After that you can use all the default settings. Nice. Just make sure the partition for /var/lib does not fill up.
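

    If you go that route, the fstab entry is an ordinary mount line. A minimal sketch, assuming a dedicated partition labelled varlib and formatted as ext4 (the label, filesystem and options are only examples):

        /dev/disk/by-label/varlib  /var/lib  ext4  defaults,noatime  0  2


    Remember to copy the contents of the old /var/lib over to the new partition, with the services stopped, before rebooting with the new fstab in place.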


    Please note: OMV is remarkably easy to use and install. But it does require some knowledge and experience and a willingness for some experimenting, DIY and tinkering. Experience from using Linux and especially Debian/Ubuntu in other contexts is a great help. Especially when you add stuff to it, like docker and plex. There is a reason why expensive ready-to-run NAS are being sold. They require even less knowledge, tinkering and work.

    Yes, this is indeed advanced. If you want to fully automate Android backups to a NAS you will have to do a lot of work. Perhaps there is already an app that can do this?


    I have not fully automated this. Currently I just use scheduled syncs over WiFi initiated from FolderSync PRO. Works fine as long as I haven't turned off WiFi. If the phone is not connected the sync simply fails.

    Thank you a lot. It was exactly this that was happening, and you helped me figure out the problem in much less time than it would have taken! I now created a script that first tries to read a file that is ONLY on the mounted drive, so that if it isn't there, the rsync will not happen.

    That is clever. Another option is to store the script on the remote server, and launch it from a cron job (or a Scheduled Job) on the local server. Then the presence of the script itself flags whether the remote server is available.


    An advanced extra option (that I haven't tried) is to create a "mirror script" in a subfolder under the mount point (when nothing is mounted) with the same path and name as the remote script. Then this local mirror script will be executed when the remote server is not mounted, rather than the remote script. For instance, it could mount the remote server and/or send a notification that it is not mounted.


    When the share of the remote server is mounted then the "mirror" script is not visible and the normal "remote" script is executed.
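

    Whichever variant you pick, the core of the local script is the same test: only run rsync if something that can only exist on the mounted share is actually visible. A minimal sketch, with made-up paths and file names:

        # only sync if the marker file, which exists only on the remote share, is visible
        if [ -f /srv/remote/.mounted ]; then
            rsync -a /srv/data/ /srv/remote/backup/
        else
            echo "Remote share not mounted, skipping sync" >&2
        fi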

    It is a bit unclear what you are doing.


    It seems that you run OMV5 on an SD card connected to the RPi4 using the normal card slot? Is this correct?


    And you have connected some external card reader to a USB port and are trying to partition and format the 1TB SD card using this card reader? Is this correct?


    If the above is correct, my guess is that the card reader may be unable to handle a 1TB SD card correctly OR the SD card is defective. Or it may be that your RPi4 does not support this specific card reader. Or that OMV5 is not able to use the card reader or the 1TB card correctly.


    You need to figure out what the problem is. You do this by changing things and seeing if the problem stays or goes away, and using some deduction.


    I suggest that you test the card on a PC to verify that it really is a 1TB card and that it is not damaged. Then try to partition and format the SD card from the PC instead, using the card reader. You can temporarily boot linux from a USB thumb drive if you use Windows on the PC. And also run some tests on the card to verify that it works OK after partitioning and formatting. You can also try to run other Linux distros on the RPi4 and try to use them to partition and format the card. And you can try with other cards and other card readers.
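

    As an example of such tests, two common tools are badblocks and f3 (device name and mount point are made up, and note that the badblocks write test destroys everything on the card):

        # destructive read/write surface test of the whole card
        sudo badblocks -wsv /dev/sdX
        # after partitioning, formatting and mounting: check that the stated capacity is real
        f3write /mnt/sdcard && f3read /mnt/sdcard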

    You are doing something very wrong or haven't thought through what it is you are trying to do. There is no real reason why SMB, using normal drag-and-drop, shouldn't be suitable for copying over files to OMV. Besides, you are not supposed to write to any root directories in any filesystem on OMV.


    That said, here is one way to copy files to OMV without any limitations or safeguards at all. Possibly making OMV unusable in the process:


    You can use Midnight Commander and fish. Install MC on your OMV box. Log in and start MC, possibly as root. Then open an SSH (fish) connection in one of the MC panes to some remote server that is also running ssh. Possibly as root on the remote server. And copy away from the remote server to the OMV box. You will be able to overwrite anything and everything. And do so without any regard to how file sharing on OMV works.


    This may (is likely to) mess up your OMV box because access rights are not correct. And you are able to overwrite anything. But you may be able to fix some of the mess afterwards by running the resetperms plugin.


    There are many other ways, involving SSH, to do the same dangerous thing, but using MC and fish may be the easiest. At least to me.
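

    For instance, plain rsync over SSH as root does the same dangerous thing from the command line (host name and paths are made up, and this too bypasses every safeguard OMV has):

        # copy straight into a data filesystem on the OMV box, as root, with no safeguards
        rsync -a /some/local/files/ root@omv-box:/srv/dev-disk-by-label-data/share/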

    I am experimenting, mostly for fun, with writing (C++) a snapshot style backup utility that uses checksums to detect bitrot and fix it. To fix the bitrot it copies back the backup copy if the original file has bitrot, and copies over the original file again if the backup copy has bitrot. The utility works fine between locally mounted filesystems. I am testing on EXT4, mergerfs and NFS.


    My thinking is that during a backup the utility has access to previous snapshot backup copies of all files. So that provides the redundancy needed to fix bitrot without any need for parity, mirroring or RAID. All that is needed is backups. Of course, it is not real time, but it should still work OK for large media libraries that are mostly just growing slowly. Video/music/photo archives.
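

    This is not the utility itself, just a rough sketch of the underlying idea using standard tools, with made-up paths: record checksums when the backup is made, then later rescan and list files whose content has silently changed.

        # at backup time: record a checksum for every file in the archive
        cd /srv/archive && find . -type f -exec sha256sum {} + > /srv/backup/archive.sha256
        # later: list only the files that no longer match (candidates for bitrot)
        cd /srv/archive && sha256sum -c --quiet /srv/backup/archive.sha256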


    But it is still buggy and too slow. And it uses too much memory. A backup utility should not be buggy... I am currently rewriting it (almost) from scratch for the third time. Sequential file read performance sets a hard limit on the speed of bitrot testing, and I want to come close to that speed.


    No idea when it will be done. I have been working on this in my spare time, off and on, for a few years now...

    I have no experience with ZFS, but I know it has some nice features. However, no filesystem can provide protection against everything. For that you need backups.


    I use rsync to create versioned snapshot style backups from one server to another. In some cases I do this more than once. I use a script for this. You could also use rsnapshot.


    And in some cases I also prepare compressed archives. .zip or .7z. This is for the really important stuff. Checksums are stored inside the compressed archives, so I can easily use normal archiving tools to verify/test that the archive and its contents are correct. I check and update the compressed archives once per year. Typically during the winter holidays.
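

    For example, with 7-Zip the yearly check is a single command, since the checksums travel inside the archive (archive name and paths are just examples):

        7z a -mx=9 photos-2023.7z /srv/photos/2023    # create the archive
        7z t photos-2023.7z                           # verify the archive against its stored checksums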

    I don't think it is possible to have different storage policies for separate folders.


    However, if you create a pool with existing files in folders, the files remain where they are. Unless you move/copy/modify the files. Or balance the pool.


    One possible option then is to use a policy that does not preserve the path and regularly run a script that moves files you have changed back to the right HDD. I think rsync would be perfect for this, but some testing would be needed to verify that it really moves the files from the wrong HDD to the right one. Perhaps as part of creating your normal backups of your data.


    (Warning: When moving files between the mergerfs pool and the individual HDDs it is easy to delete files by mistake! That is because a move is often done as copy-then-delete. And the delete may delete both copies if the paths match...)
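

    One way to reduce that risk is to do the cleanup move directly between the branch mount points, never mixing the pool path and a branch path in the same operation. A hedged sketch with made-up disk labels and folder names:

        # move media files that landed on disk2 over to disk1, working on the branches only
        rsync -a --remove-source-files /srv/dev-disk-by-label-disk2/media/ /srv/dev-disk-by-label-disk1/media/
        # rsync leaves the now-empty source directories behind; clean them up separately if wanted
        find /srv/dev-disk-by-label-disk2/media/ -mindepth 1 -type d -empty -delete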


    Another option could be to create two separate pools. One that preserves existing paths and one that doesn't.

    ... I would have to check the specs to see what official maximum size they indicate. So far I'm running 4x 6TB internal drives, working fine.

    From the manual:

    Drive support information

    This server has four drive bays that support:

    - LFF non-hot-plug drives. The maximum LFF drive capacity is 16 TB (4 x 4 TB).
    - SFF non-hot-plug drives. This drive configuration requires the SFF-to-LFF drive converter option.
    - These drives are not designed to be installed or removed from the server while the system is still powered on. Power off the server before installing or removing a drive.
    - The embedded Marvell 88SE9230 PCIe to SATA 6Gb/s Controller supports SATA drives only. RAID 0, 1, and 10 levels are supported.
    - To configure drives connected to the onboard LFF/SFF drive SATA port, use the Marvell BIOS Utility or the Marvell Storage Utility.
    - For SAS drive and advanced RAID support, install an HPE Smart Array Gen10 type-p SR controller option.
    - To configure drives connected to the Smart Array controller option, use the HPE Smart Storage Administrator.

    This is one reason why I never got an HPE Microserver Gen10... Seems the spec is wrong, then?

    Is there a limit on how big internal data disks you can use with the Gen10? (Or Gen10+) Like max 4TB each? Or can you use any sizes? Say 16TB or bigger disks?

    I need RAID for 24/7 uptime and the data is sensitive, hence the RAID 10, so yes, I am looking to do RAID.

    Then I suggest you use proper server grade hardware that is known to work correctly. And that you don't trust your sensitive data to an old laptop and a cheap USB enclosure.


    Building a server for sensitive data, with a demand for 24/7 uptime, using an old laptop and an old USB enclosure does seem more than a little bit strange.

    You need to find the bottleneck.


    There are tools that can measure only the disk read and write speeds. There are tools that can measure only the speed of the network connection.
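

    For example (one possible set of tools; iperf3 and hdparm are common choices, and the host name is made up):

        iperf3 -s                    # on the NAS: measure raw network speed only
        iperf3 -c nas.local          # on the client
        sudo hdparm -t /dev/sda      # rough sequential read speed of a disk, no network involved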


    One good starting point is to try to optimize to get as close as possible to the theoretical maximum. But then you first need to figure out what that is.


    Then, once you achieve close to the theoretical maximum under optimum test circumstances, change to actual circumstances, one step at a time. And test to find out if something makes performance drop suddenly.


    For a cabled GbE network that is easy. With good cables and a decent switch, it is usually just a little less than 1Gb/s, or slightly more than 100MB/s. Very little interferes with that, as long as the computers have fast enough network cards, filesystems and disk access.


    (NTFS on a Linux computer is often slow. USB is often slow.)


    For a WiFi network it is very difficult. Surrounding WiFi networks can cause interference. Performance drops very fast depending on distance and what is between the WiFi stations. Especially 5GHz WiFi is very sensitive to distance and ANY object between the WiFi antennas. Test with all devices involved in the same room, turned so that the antennas are all in sight of each other, in the early morning when WiFi traffic in the neighborhood is low.


    The WiFi bandwidth specified for a router is often the combined bandwidth for several simultaneous but different connections. Possibly 2.4GHz and 5GHz bandwidth is also added together, giving a nice big number that doesn't say anything about how fast a single WiFi connection may be under optimal circumstances.


    I have a three node 5GHz WiFi mesh in my rural home. No neighbors with interfering WiFi. Over WiFi to a device I may, under very good circumstances with free line of sight, get around 30MB/s. More typically it is less than half of that if there are any walls or objects the signal has to pass through.


    Cabled connection from mesh nodes gives 60MB/s. (My mesh can combine the bandwidth for two wifi routes.)


    Cabled connection all the way typically gives around 100MB/s.

    When you learn how to use rsync, try to simplify things and set up test servers and paths and test files. That is how you learn. Finding the logic, testing different things and seeing the effects of minor spelling errors. Possibly following some of the many, many online tutorials as well.


    Imagine the chaos if every beginner trying to learn to use any and all basic Linux tools did it by asking questions about why things don't work as they expect and their elementary mistakes on a forum that is about something completely different.


    Not trying to be nasty, I think it is great that you are learning to use rsync, but ...

    1. It is possible to use rsync to make one filesystem an identical copy of another. It is also possible to use rsync for other things. You need to decide what you want rsync to do, then test that. And finally check to see if it was done correctly. It is easy to make simple mistakes. You need to look at A and B and possibly compare them. I like to use ssh and Midnight Commander for that. (See the example sketch after point 2 below.)


    2. OMV does not, typically, have access to filesystems on client (Win10) computers. But client computers may have access to filesystems on OMV. You set that up. If a client has access to a filesystem on OMV the client can use that filesystem to store a backup. Again, you set that up.
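

    As an example of point 1, here is a minimal mirror run with made-up paths. The --delete flag is what makes B an identical copy of A, so it will also remove anything in B that is not in A:

        rsync -a --delete -n -v /srv/A/ /srv/B/    # dry run first, to see what would change
        rsync -a --delete /srv/A/ /srv/B/          # then make B identical to A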

    Get it to work in a test folder first. Make sure it works OK using the simplest possible setup. Then go from there.


    Either you have problems with spelling or access rights. I'd guess...


    You need to make sure the user in question is a member of the ssh group, and you need to enable ssh.
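

    From the command line that could look like this (the user name is made up; the OMV web UI can do the same when you edit the user's groups):

        sudo usermod -aG ssh youruser    # add the user to the ssh group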


    Also please note that you may need matching numeric user IDs and group IDs. It may not be enough to have the same names.


    I ensure this by always creating the same users in the exact same order on each server. The "correct" way is via centralized login. You can also "hack" it by changing user IDs manually. Google for details.
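

    To see whether the numeric IDs actually match, compare the output of id for the same user on both servers (user name and output are just an example):

        id youruser    # e.g. uid=1001(youruser) gid=100(users) groups=100(users),...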

    It might be a variant of the Dunning-Kruger effect? If you think that you know how to do something, you more easily make mistakes, because you skip the instructions or do extra stuff that causes problems.


    Another amusing example can be found in gardening. There is a home vegetable growing technique known as the square-foot-gardening method. It is about very intensive gardening in weed free, custom mixed, high compost content soil. It can be fully explained to gardening novices, for free, in an hour or less, and they get very good growing results in their very first season, starting from scratch. But for experienced gardeners it may be impossible to ever actually get good growing results using the square-foot-gardening method.

    Yes. If you use OMV to run rsync on a schedule, it will run OK.


    However you can only use shared folders as source and destination.


    Also this method, naturally, only works on OMV.


    I use the Linux standard scheduling mechanism, crontab, to automatically run scripts that in turn run rsync. That method works on any Linux computer and for any file or folder. And it also allows you to mount and unmount drives and use advanced features of rsync. For instance I use rsync to create versioned snapshots with unchanged files hardlinked from the previous snapshot.
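

    A rough sketch of that setup, with made-up paths, names and times. The crontab line runs a backup script every night, and the core rsync call in that script hardlinks unchanged files from the previous snapshot:

        # crontab entry (crontab -e): run the backup script at 02:00 every night
        0 2 * * * /usr/local/bin/snapshot-backup.sh
        # core of such a script: new dated snapshot, unchanged files hardlinked from the last one
        rsync -a --delete --link-dest=/srv/backup/latest/ /srv/data/ /srv/backup/2024-01-15/


    The script also needs to update whatever points at the previous snapshot (here the latest path) after each successful run.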

    If you run rsync in a normal SSH shell, and close the shell, then rsync will also close. You pulled the rug out from under it.


    The solution is to run rsync inside a detachable terminal session, using screen.


    Then you can run a command, close the SSH window, and the rsync command will continue to run. Later you can reattach to the detached screen session to see if rsync is done.
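

    A typical session looks something like this (session name and paths are made up):

        screen -S backup                      # start a named screen session
        rsync -a /srv/data/ /srv/backup/      # run the long rsync inside it
        # detach with Ctrl-a d, then it is safe to close the SSH window
        screen -r backup                      # later: reattach and check progress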


    https://linuxize.com/post/how-to-use-linux-screen/