Posts by Adoby

    The image is a bit ad-like, I have to admit. But it was the best still image of the case I could find. I'll see if I can find one that is less flashy and sales-like.


    And don't worry about me being a moderator and "challenging me". I don't mind at all! I am not perfect and I am learning as well.

    You are not supposed to use your username and password.


    Instead use these credentials:


    admin:openmediavault


    That is, user "admin" and password "openmediavault".


    Make sure to change that password the first time you log in.

    If you have several drives you can look at a certain bit position on all of these drives. And check whether it is a 1 or a 0. Ones or zeros.


    If you count all the ones at the same position on all drives, you get either an odd or an even number of ones. This is the parity. Using this information, if one single drive is missing, you can figure out if it was a one or a zero.


    For instance, if the old parity was even and the new parity, with one drive missing, now is odd, the missing drive had to have held a 1 at that location.


    https://en.m.wikipedia.org/wiki/Parity_bit
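

    As a tiny illustration of the idea (the drive values and the bit position are just made up), here is the same calculation done with shell arithmetic, three data drives and one parity bit:

        # toy example: one bit position on three data drives
        d1=1; d2=0; d3=1
        parity=$(( d1 ^ d2 ^ d3 ))           # XOR: 1 if the number of ones is odd, 0 if even
        echo "parity bit: $parity"
        # pretend drive 2 died; its bit can be recovered from the others plus the parity bit
        recovered_d2=$(( d1 ^ d3 ^ parity ))
        echo "recovered d2: $recovered_d2"   # prints 0, which is what d2 held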


    RAID uses parity this way, in real time. This has some benefits and some drawbacks.


    Snapraid also uses parity this way, but not in real time. This has some other benefits and some other drawbacks.


    I'd use snapraid. One 2TB HDD for snapraid parity. The rest of the HDDs as a mergerfs pool. If you expand in the future, you need a parity drive that is at least as large as the largest drive in the pool. So you need to buy at least two bigger drives, and use one of them for parity.


    If one drive in the pool breaks, you will need a replacement, and then restore the contents.


    As you build it, it could be a great start to keep one 2TB drive as a spare. And pretend that the 2TB drive in the pool broke. And restore to the spare drive. Experiment before things get critical... Afterwards you can add both drives to the pool and snapraid. (Format one of the drives to avoid duplicates.)
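

    If it helps, the experiment could look roughly like this from the command line. This is only a sketch: the disk name "d1" and the log file name are assumptions, and your snapraid.conf must already list the data and parity drives.

        # make sure parity is up to date before "breaking" anything
        snapraid sync
        # power off, swap the 2TB data drive for the empty spare,
        # mount the spare at the same path the old drive used, then:
        snapraid -d d1 -l fix.log fix      # rebuild the missing drive's contents onto the spare
        snapraid check                     # verify that everything was restored correctly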

    I'm thinking of buying an HC4 because I like the dual SATA layout, perfect for a RAID setup. The only absurd thing is using the SD card for the system and constantly fearing a failure of the card.


    I wonder if I can use a USB-attached SSD instead of the SD card. Has anyone tried this solution?

    I'd use one drive for backups and the other for data, rather than set up RAID. Or put both in a mergerfs pool and back up to some other drive/NAS. But I don't have an HC4.


    I once configured an HC1 to have the operating system on the HDD (actually an SSD) rather than on the SD card. I still needed to have boot files on the SD card, but after booting the SD card is not used at all.


    I used the utility nand-sata-install that comes with Armbian. If I remember correctly, I first installed OMV as usual, but had the SSD already partitioned with the future root partition and the future storage partition. Once OMV was running, I simply ran nand-sata-install and copied the rootfs to the prepared partition on the SSD. And rebooted. And all was fine.
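

    Roughly like this, if I remember correctly. The device name /dev/sda and the partition sizes are assumptions, adjust them to your drive, and run this as root:

        lsblk                                              # find the SSD, here assumed to be /dev/sda
        parted /dev/sda -- mklabel gpt
        parted /dev/sda -- mkpart rootfs ext4 1MiB 16GiB   # future root partition
        parted /dev/sda -- mkpart storage ext4 16GiB 100%  # future storage partition
        mkfs.ext4 /dev/sda1
        mkfs.ext4 /dev/sda2
        # install OMV as usual on the SD card, then run the Armbian tool:
        nand-sata-install   # interactive: pick the boot-from-SD, system-on-SATA option and /dev/sda1
        reboot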


    With the HC4 this might be even more useful. You could have several HDDs with very different configurations. And just swap and boot. Might be nice for backups of other NAS over the network? Just insert the right backup drive and boot the HC4.


    I assume you can also configure the HC4 to boot from the network. Using stuff like PXE, NFS and TFTP. I haven't tried it. Yet...

    At one point I had problems with an RPi4, running OMV, that would hang. It happened only when I used both USB3 ports at the same time. So now I only use one of the USB3 ports. And I have no more problems.


    I assume it is something hardware or power related.

    In addition, once the drive is installed in OMV, you can use ssh and a tool like Midnight Commander to move files around. That means that you can create an empty OMV share, as normal, and later move existing files on the drive into that share.
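

    For example, something like this over ssh (hostname and paths are made up, yours will differ):

        ssh root@omv.local
        # move existing files on the data drive into the newly created (empty) share
        mv /srv/dev-disk-by-label-data/oldfolder/* /srv/dev-disk-by-label-data/myshare/
        # or start mc and move things around with F6 instead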


    Also, you may want to use the resetperms plugin to reset permissions on the new OMV share after moving files there.

    You unmount the drive. The OMV GUI has that functionality.


    However, if you didn't recently write to the drive, and there is no activity, most likely you can be a crazy person and just unplug it.


    This is what I do, and in a couple of years of unplugging like a crazy person at least weekly, there have been no issues. USB and EXT4.

    I would install docker and plex on the storage drive, rather than under /var/lib on the USB stick. Avoid putting docker and plex on the USB stick. Or at least use a high quality USB stick or a small SSD.


    The easiest might be to do a test install, and during the install simply change the default suggestions shown, so that docker and plex are not installed under /var/lib but on the storage drive. Verify that plex works as it should.


    A more advanced way, which is what I usually do, is to add an extra partition to a fast storage drive and change fstab so that the partition is mounted as /var/lib. Or even add an extra SSD just for /var/lib. After that you can use all the default settings. Nice. Just make sure the partition for /var/lib does not fill up.
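

    A minimal sketch of that fstab trick, assuming the extra partition is /dev/sdb3 and that you do this on a fresh install, before docker and plex are added (device name and mount options are assumptions):

        mkfs.ext4 /dev/sdb3
        mount /dev/sdb3 /mnt
        rsync -aHAX /var/lib/ /mnt/              # carry over the existing contents of /var/lib
        umount /mnt
        echo "/dev/sdb3  /var/lib  ext4  defaults,noatime  0  2" >> /etc/fstab
        mount /var/lib                           # from now on /var/lib lives on its own partition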


    Please note: OMV is remarkably easy to install and use. But it does require some knowledge and experience, and a willingness to do some experimenting, DIY and tinkering. Experience from using Linux, and especially Debian/Ubuntu, in other contexts is a great help. Especially when you add stuff to it, like docker and plex. There is a reason why expensive ready-to-run NAS boxes are being sold. They require even less knowledge, tinkering and work.

    Yes, this is indeed advanced. If you want to fully automate Android backups to a NAS you will indeed have to do a lot of work. Perhaps there already is an app that can do this?


    I have not fully automated this. Currently I just use scheduled syncs over WiFi, initiated from FolderSync PRO. It works fine as long as I haven't turned off WiFi. If the phone is not connected, the sync simply fails.

    Thank you a lot. That was exactly what was happening, and you helped me figure out the problem in much less time than it would have taken me! I have now created a script that first tries to read a file that exists ONLY on the mounted drive, so that if it isn't there, the rsync will not happen.

    That is clever. Another option is to store the script on the remote server, and launch it from a cron job (or a Scheduled Job) on the local server, via the mount point. Then the script itself is used to flag whether the remote server is available: if the share is not mounted, the script is not there and nothing runs.


    An advanced extra option (that I haven't tried) is to create a "mirror script" in a subfolder of the mount point (while nothing is mounted) with the same path and name as the remote script. Then this local mirror script will be executed when the remote server is not mounted, rather than the remote script. For instance, to mount the remote server and/or notify that it is not mounted.


    When the share on the remote server is mounted, the "mirror" script is not visible and the normal "remote" script is executed.
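

    A rough sketch of what I mean, assuming the remote share gets mounted at /mnt/remote and has an entry in fstab (the paths, the schedule and the script name are all made up):

        # Scheduled Job / cron entry on the local server:
        #   30 2 * * *  /bin/sh /mnt/remote/scripts/nightly-backup.sh
        #
        # On the remote share, scripts/nightly-backup.sh is the real backup script.
        # While NOTHING is mounted, create a local "mirror" file at the very same path.
        # It is only visible, and only runs, when the remote share is not mounted:
        logger -t nightly-backup "remote share not mounted, trying to mount it"
        mount /mnt/remote && exec /mnt/remote/scripts/nightly-backup.sh
        logger -t nightly-backup "could not mount remote share, backup skipped"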

    It is a bit unclear what you are doing.


    It seems that you run OMV5 on an SD card connected to the RPi4 using the normal card slot? Is this correct?


    And you have connected an external card reader to a USB port and are trying to partition and format the 1TB SD card using this card reader? Is this correct?


    If the above is correct, I assume that the card reader may be unable to handle a 1TB SD card correctly OR the SD card is defective. Or it may be because your RPi4 does not support this specific card reader. Or it may be because OMV5 is not able to use the card reader or the 1TB card correctly.


    You need to figure out what the problem is. You do this by changing things and seeing if the problem stays or goes away, and using some deduction.


    I suggest that you test the card on a PC to verify that it really is a 1TB card and that it is not damaged. Then try to partition and format the SD card from the PC instead, using the card reader. You can temporarily boot Linux from a USB thumb drive if you use Windows on the PC. And also run some tests on the card to verify that it works OK after partitioning and formatting. You can also try to run other Linux distros on the RPi4 and try to use them to partition and format the card. And you can try with other cards and other card readers.
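

    For instance, from a Linux PC you could do something like this as root. The device name /dev/sdX is an assumption, double check it with lsblk before running anything destructive:

        lsblk -o NAME,SIZE,MODEL            # does the card really report about 1TB?
        badblocks -sv /dev/sdX              # read-only surface scan, slow on a 1TB card
        # the f3 tools are good at spotting fake-capacity cards (this one wipes the card):
        f3probe --destructive /dev/sdX
        # then partition and format, for example:
        parted /dev/sdX -- mklabel gpt mkpart data ext4 1MiB 100%
        mkfs.ext4 /dev/sdX1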

    You are doing something very wrong, or you haven't thought through what it is you are trying to do. There is no real reason why SMB, using normal drag-and-drop, shouldn't be suitable for copying files over to OMV. Besides, you are not supposed to write to any root directories in any filesystem on OMV.


    That said, here is one way to copy files to OMV without any limitations or safeguards at all. Possibly making OMV unusable in the process:


    You can use Midnight Commander and fish. Install MC on your OMV box. Log in and start MC, possibly as root. Then open an ssh session in one of the MC panes, to some remote server also running ssh. Possibly as root on the remote server. And copy away from the remote server to the OMV box. You will be able to overwrite anything and everything. And do so without any regard to how file sharing on OMV works.
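

    As a sketch, assuming a reasonably recent mc and a remote box at 192.168.1.50 (the IP address and path are made up):

        apt install mc                      # on the OMV box
        mc                                  # start Midnight Commander, possibly as root
        # in one pane, open the remote server over fish/ssh by typing on the command line:
        #   cd sh://root@192.168.1.50/srv/data
        # then copy between the panes with F5 (and overwrite whatever you like...)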


    This may (and is likely to) mess up your OMV box, because access rights will not be correct. And you are able to overwrite anything. But you may be able to fix some of the mess afterwards by running the resetperms plugin.


    There are many other ways, involving SSH, to do the same dangerous thing, but using MC and fish may be the easiest. At least to me.

    I am experimenting, mostly for fun, with writing (in C++) a snapshot-style backup utility that uses checksums to detect bitrot and fix it. To fix the bitrot, it copies over the backup copy if the original file has bitrot, and copies over the original file again if the backup copy has bitrot. The utility works fine between locally mounted filesystems. I am testing on EXT4, mergerfs and NFS.
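

    The core repair logic is roughly this, shown here as a shell sketch for a single file rather than the actual utility (the paths and the stored checksum file are made up):

        ORIG=/srv/data/holiday.mkv
        BACKUP=/srv/backup/2024-01-07/holiday.mkv
        SUM=$(cut -d' ' -f1 "$BACKUP.sha256")       # checksum recorded when the backup was made
        if [ "$(sha256sum "$ORIG" | cut -d' ' -f1)" != "$SUM" ]; then
            # the original has bitrot: restore it from the (still good) backup copy
            [ "$(sha256sum "$BACKUP" | cut -d' ' -f1)" = "$SUM" ] && cp -p "$BACKUP" "$ORIG"
        elif [ "$(sha256sum "$BACKUP" | cut -d' ' -f1)" != "$SUM" ]; then
            # the backup copy has bitrot: copy the (good) original over it again
            cp -p "$ORIG" "$BACKUP"
        fi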


    My thinking is that during a backup the utility has access to previous snapshot backup copies of all files. That provides the redundancy needed to fix bitrot, without any need for parity, mirroring or RAID. All that is needed is backups. But of course, it is not real time. But it should still work OK for large media libraries that are mostly just growing slowly. Video/music/photo archives.


    But it is still buggy and too slow. And it uses too much memory. A backup utility should not be buggy... I am currently rewriting it (almost) from scratch for the third time. Sequential file read performance sets a hard limit on the speed of bitrot testing, and I want to come close to that speed.


    No idea when it will be done. I have been working on this in my spare time, off and on, for a few years now...

    I have no experience with ZFS. But I know it has some nice features. But no filesystem can provide protection against everything. For that you need backups.


    I use rsync to create versioned snapshot style backups from one server to another. In some cases I do this more than once. I use a script for this. You could also use rsnapshot.
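

    The heart of my script is roughly this (source, destination and the "latest" symlink name are assumptions; rsnapshot does the same thing in a more polished way):

        SRC=/srv/data/
        DEST=/srv/backup/server1
        TODAY=$(date +%Y-%m-%d)
        # files unchanged since the previous snapshot become hard links, not new copies
        rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"
        ln -sfn "$TODAY" "$DEST/latest"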


    And in some cases I also prepare compressed archives, .zip or .7z. This is for the really important stuff. Checksums are stored inside the compressed archives, so I can easily use normal archiving tools to verify/test that the archive and its contents are correct. I check and update the compressed archives once per year. Typically during the winter holidays.
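

    The yearly check is then just something like this (the archive names are made up):

        7z t photos-2015.7z          # test the archive against the checksums stored inside it
        unzip -t documents-2015.zip  # same idea for zip archives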

    I don't think it is possible to have different storage policies for separate folders.


    However, if you create a pool with existing files in folders, the files remain where they are. Unless you move/copy/modify the files. Or balance the pool.


    One possible option then is to use a policy that does not preserve the path, and regularly run a script that moves files you have changed back to the right HDD. I think rsync would be perfect for this, but some testing would be needed to verify that it really moves the files from the wrong HDD to the right one. Perhaps as part of creating your normal backups of your data.


    (Warning: When moving files in a mergerfs setup between the pool and the individual HDDs, it is easy to delete files by mistake! That is because a move is often done as copy-then-delete, and the delete may remove both copies if the paths match...)
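

    As a sketch of such a cleanup script, working directly on the branch mount points and not through the pool, exactly to avoid that problem (the drive labels and the folder name are assumptions):

        # move anything under Movies that ended up on disk2 back to disk1
        rsync -a --remove-source-files /srv/dev-disk-by-label-disk2/Movies/ /srv/dev-disk-by-label-disk1/Movies/
        # clean up the empty directories left behind on disk2
        find /srv/dev-disk-by-label-disk2/Movies -mindepth 1 -type d -empty -delete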


    Another option could be to create two separate pools. One that preserves existing paths and one that doesn't.

    ... I would have to check the specs for what official maximum size they indicate. So far I'm running 4x 6TB internal drives, working fine.

    From the manual:

    Drive support information

    This server has four drive bays that support:

    - LFF non-hot-plug drives. The maximum LFF drive capacity is 16 TB (4 x 4 TB).
    - SFF non-hot-plug drives. This drive configuration requires the SFF-to-LFF drive converter option.
    - These drives are not designed to be installed or removed from the server while the system is still powered on. Power off the server before installing or removing a drive.
    - The embedded Marvell 88SE9230 PCIe to SATA 6Gb/s Controller supports SATA drives only. RAID 0, 1, and 10 levels are supported.
    - To configure drives connected to the onboard LFF/SFF drive SATA port, use the Marvell BIOS Utility or the Marvell Storage Utility.
    - For SAS drive and advanced RAID support, install an HPE Smart Array Gen10 type-p SR controller option.
    - To configure drives connected to the Smart Array controller option, use the HPE Smart Storage Administrator.

    This is one reason why I never got an HPE Microserver Gen10... It seems that limit is wrong?