Posts by johnvick

    Yes, but the usual advice is don't use RAID on USB drives. It's not supported through the OMV web interface, so you'd have to do it via the command line. A better alternative is UnionFS/SnapRAID, which is OK with USB drives and caters for drives of different sizes, which is likely your case.

    That should make a nice system.


    1) Either, but I'd use an SSD - best choice NVMe M.2 (not SATA M.2, as it will steal one of your SATA ports).

    2) Same as with any Linux system - there are numerous tutorials on the web on how to recover. You'll have media files on the HDs, so they'll be intact if you need to reinstall OMV.

    3) I'd imagine so - I have the same CPU in a Windows desktop and have run one Linux VM without trouble, and you have plenty of RAM for the job. OMV will likely use 1% of your CPU most of the time.

    4) I'd imagine so, unless they are 4K files - then I don't know. Research whether hardware transcoding is supported with that CPU/APU and maybe take a look at Jellyfin instead of Plex; it is free and supports HW transcoding - I have an Intel Pentium Gold - much more modest than your CPU - and it transcodes 1080p x265 in hardware at 300+ fps (see the sketch just below).
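
    If you do go the Jellyfin-in-Docker route, hardware transcoding generally just needs the iGPU device passed into the container. A rough sketch, not anyone's exact setup (the paths and names are placeholders):


    docker run -d --name jellyfin \
      --device /dev/dri:/dev/dri \
      -p 8096:8096 \
      -v /srv/appdata/jellyfin:/config \
      -v /srv/media:/media \
      jellyfin/jellyfin


    Then pick VAAPI (or QSV) as the hardware acceleration method in Jellyfin's playback/transcoding settings.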

    Given your HDs are of various sizes, SnapRAID is a better choice than RAID.
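
    In OMV the SnapRAID plugin writes the config for you, but for reference a hand-written snapraid.conf is roughly this shape (the drive labels are just examples):


    parity /srv/dev-disk-by-label-parity/snapraid.parity
    content /srv/dev-disk-by-label-disk1/snapraid.content
    content /srv/dev-disk-by-label-disk2/snapraid.content
    data d1 /srv/dev-disk-by-label-disk1/
    data d2 /srv/dev-disk-by-label-disk2/


    The data disks can be any mix of sizes; the only real rule is that the parity disk must be at least as big as the largest data disk.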

    I can explain how I do it but I use Apache on an Ubuntu 20.04 system as my webserver - this is 192.168.1.2. It has the LetsEncrypt certs.

    OMV is on 192.168.1.3 and doesn't face the web directly.

    I run SyncThing on OMV and on the Ubuntu/Apache device I have a virtual server as follows:


    ProxyPass /syncthing-omv/ http://192.168.1.3:8384/
    <Location /syncthing-omv/>
        ProxyPassReverse http://192.168.1.3:8384/
        Require all granted
    </Location>
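
    For that to work the Apache proxy modules have to be enabled; on Ubuntu/Debian that is something like:


    sudo a2enmod proxy proxy_http
    sudo systemctl restart apache2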


    On that device I have a folder /var/www/html/syncthing-omv with a one-line index.html file:


    <meta http-equiv="Refresh" content="5; url=http://192.168.1.3:8384/">


    So when I'm away from home and want to look at SyncThing on OMV, I go to:


    https://my.duckdns.org/syncthing-omv


    Apache directs the request to the OMV device and the certificate is good.


    Translate this principle to Nginx and you'll have it going.
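
    As a rough starting point (just the same idea expressed in Nginx syntax - I haven't tested this side of it), the equivalent of the Apache block above would be something like this, inside the server { } block that holds the Let's Encrypt cert:


    location /syncthing-omv/ {
        proxy_pass http://192.168.1.3:8384/;
    }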


    But...this took a lot of working out, maybe isn't the best way, and other web apps such as Jellyfin required a different approach.

    Hardware transcoding works in the Jellyfin docker and probably others, but I haven't tried them. Let's Encrypt only works on a device that has ports 80 and 443 exposed, so you can't use it on two devices (unless you have two internet accounts and two routers). What you can do is have the cert on one device and use its web server to forward certain addresses to the second device.

    Setting up a full web server under Docker to run, for example, a WordPress site is a pretty advanced task. My suggestion is to get a second device such as a cheap OrangePi or similar and use that instead, with Ubuntu Server as the OS, and then follow a web tutorial on setting up a site. Once this has been mastered, then think about trying the same under Docker.


    An alternative is to use a free web hosting site such as x10 hosting to learn the ropes at no cost.

    I use a Beelink J45 mini PC with Ubuntu 20.04 and its modest Pentium CPU transcodes Jellyfin streams to devices that can't natively decode them. My more powerful OMV system has a Pentium Gold CPU and does better, at 100+ fps HEVC 10-bit -> H.264. It all depends on how many devices you are sending video to. My estimate is that the OMV system will easily transcode three streams simultaneously. So you don't have to spend hugely to do the job.
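
    If you want to see what your own hardware manages before spending anything, ffmpeg reports an fps figure directly; a rough VAAPI test on an Intel iGPU (the file name and device path are just examples) would be:


    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
      -i test-hevc.mkv -c:v h264_vaapi -b:v 6M -f null -


    The fps it reports is roughly what Jellyfin will manage on the same sort of file.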

    After several hours and much searching and testing I found the problem was UFW on my Ubuntu device. It looks therefore that on boot the NFS shares connect before the firewall is up, which is why I didn't think of this at first. Allowing NFS through the firewall fixed it and all is good now.
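
    For anyone hitting the same thing later, the simple version of the fix (which can be tightened to specific NFS ports afterwards) is to let the OMV box's address through UFW on the client, something like:


    sudo ufw allow from 192.168.1.3
    sudo ufw status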

    I've several NFS shares configured on OMV using the options "rw,subtree_check,insecure". The client machine runs Ubuntu 20.04 server with autofs to connect to the shares on demand. This works, but if the OMV server is powered off and back on I cannot reconnect to the shares from the client without rebooting the client. I can ping the OMV server OK but have no access to the shares. This is the same whether I configure the client to use NFS 3 or 4.
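
    For reference, the autofs side on the client is along these general lines (the mount points and share names here are examples rather than my exact files; showmount -e 192.168.1.3 lists the real export paths):


    # /etc/auto.master.d/omv.autofs
    /mnt/omv  /etc/auto.omv  --timeout=60

    # /etc/auto.omv
    media  -fstype=nfs,rw  192.168.1.3:/export/media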


    Edit: I have set up a second device with the same packages as a client: nfs-common and autofs. Used the same autofs config files. All works fine on the second device. So OMV is not the problem (never thought it was anyway).
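
    (Setting that second device up was essentially just the packages plus the same map files, i.e. something like:


    sudo apt install nfs-common autofs


    followed by copying across the auto.master and map entries.)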


    Any clues on how to fix this?

    My suggestion isn't relevant given your most recent post, but for future reference: if you boot OMV and have no networking, connect a monitor and keyboard and run omv-firstaid from the command line, then fix the networking using the menu item.


    If the BIOS settings don't change the situation then, to test that the card isn't faulty, make a live Ubuntu USB flash drive, connect a non-critical HD to the new card, disconnect the other drives, boot into Ubuntu and see if it sees the card and drive.
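
    Once booted into the live session, a couple of read-only commands will show whether the card and the drive are detected:


    lspci | grep -i sata
    lsblk
    sudo dmesg | grep -i -e ahci -e sata


    lspci should list the new controller and lsblk the drive attached to it.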

    I have 4x4TB WD Reds for data and an external 5TB Seagate USB for parity. I use a once-daily helper script to sync/scrub etc. All working fine. From my reading the recommended max is 4 data drives per parity drive. Is my understanding correct that the logic behind this is that if you go beyond 4 data disks you run a higher risk of disk failure during a rebuild? Or are there other reasons? I have read of someone using 9 disks for one parity.


    If I decide to go beyond 4 (which I tried briefly then reversed), is there a way to suppress the SnapRAID warning message about too many data disks?
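
    For context, the once-daily helper mentioned above boils down to roughly the following; this is a simplified sketch rather than the exact script:


    #!/bin/sh
    # sync new/changed files into parity, then scrub a slice of the older blocks
    snapraid sync
    snapraid scrub -p 8 -o 30
    snapraid status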


    John