Posts by jml79

    Bad sectors don't always mean immediate data loss. Many hard drives will relocate the data if at all possible, and some filesystems are resistant to bad sectors. The safest option is to buy a big backup drive (external, 16-20TB), make a backup, and then try to fix the problem. You can replace each drive one at a time and hope for the best, or you can buy some larger drives and build a new RAID, maybe RAID 6. Those drives are older, and if 2 have started failing then more are likely to follow. I wish it were better news, but if some data is already corrupted then it could mean a mess.
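    If it helps, the first-pass backup can be as simple as an rsync of the array onto the new drive - a rough sketch only, with the array and backup mount points made up for the example:

    # copy everything off the degraded array, preserving permissions and attributes
    rsync -aHAX --progress /srv/raid-data/ /srv/backup/
    # re-run it afterwards to pick up anything that errored out the first time
    rsync -aHAX --progress --ignore-errors /srv/raid-data/ /srv/backup/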

    I used to use SyncBackFree on my system when it was Windows, and I still use it on my wife's computer to sync to an SMB share on my OMV server. I have the share mapped to a drive letter on her machine and SyncBack basically runs like a nice GUI rsync push to the share. Another near-identical option is FreeFileSync, which also happens to be open source.
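    For anyone doing the same job from a Linux client, the equivalent of that GUI push is a plain rsync against the mounted share - a minimal sketch, assuming the SMB share is already mounted at /mnt/omv-share (both paths are only examples):

    # mirror the local documents folder onto the OMV share, deleting removed files
    rsync -av --delete ~/Documents/ /mnt/omv-share/Documents/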


    Another option, which will allow you to do an rsync pull from your server, is to install WSL and an rsync server in WSL with all of the proper drives mounted (ro). A little more complex, but if you are familiar with Linux it shouldn't be too hard to find the guides and get it up and running; there's a rough sketch below the wiki link.


    If you aren't familiar with WSL, it's the Windows Subsystem for Linux and runs as a kind of cross between a VM and a Docker container under Windows. It's technically a VM, but not a full-fledged managed VM like you would run under KVM. Inside you can install several distros; seeing that you already use OMV, I would install Debian. Here is the link with instructions.


    InstallingDebianOn/Microsoft/Windows/SubsystemForLinux - Debian Wiki
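    The short version looks something like this - a sketch only, where the module name, mount point and backup path are made up for the example, whether drvfs honours the ro option may depend on your WSL version, and with WSL2's NAT you may still need to expose the rsync port to the LAN:

    # from an elevated PowerShell prompt on the Windows machine
    wsl --install -d Debian

    # then inside the new Debian instance
    sudo apt update && sudo apt install rsync
    sudo mkdir -p /mnt/d
    sudo mount -t drvfs D: /mnt/d -o ro          # expose the D: drive read-only
    printf '[winbackup]\n    path = /mnt/d\n    read only = yes\n' | sudo tee /etc/rsyncd.conf
    sudo rsync --daemon

    # and pull from the OMV server
    rsync -av rsync://<windows-ip>/winbackup/ /srv/backup/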

    I tried running pihole without a macvlan or a bridge (in a VM) and it wouldn't bind to port 53, so it wouldn't work. I didn't look into the issue too far; I just run a macvlan with Docker and a bridge with a VM. The disadvantage with a macvlan is that the machine running the Docker container cannot connect to pihole. There is a way to configure it so the host can loop into the macvlan pihole for DNS; I configured it once but haven't since. If you run pihole in a VM with a bridge (br0) then the host can use it. I actually prefer running it in a small VM, Debian 12 with pihole installed directly (not in Docker); I find it more resilient to network disconnects. The VM fits in a 3GB image and uses less than 300MB of RAM while running, with almost no impact on CPU usage.
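    For anyone who wants the macvlan route anyway, the network itself is only a couple of commands - a rough sketch, assuming a 192.168.0.0/24 LAN, eth0 as the parent NIC, and all of the names and addresses picked purely for illustration:

    # dedicated macvlan network for the pihole container
    docker network create -d macvlan \
      --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
      -o parent=eth0 pihole_net

    # optional: a host-side macvlan shim so the docker host itself can reach pihole
    sudo ip link add pihole-shim link eth0 type macvlan mode bridge
    sudo ip addr add 192.168.0.250/32 dev pihole-shim
    sudo ip link set pihole-shim up
    sudo ip route add 192.168.0.200/32 dev pihole-shim    # pihole's macvlan address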

    I assume you are referring to the guide I linked in post #2. I thought that guide already included this configuration. If not I'm sure chris_kmn will offer to fill in whatever is needed.

    I was referring to this guide:



    And it has 2 issues, but I can't add a post to help improve the guide. The first issue is the topic of this thread; the second is that the most recent Tdarr builds require driver version 520 or higher and OMV6 ships with 470, so there is an extra step needed for Tdarr users to upgrade the base driver. OMV7 has 525, so it doesn't need the extra step. The same 2 notes apply to the guide you posted as far as I can see. I love and hate NVidia. Crazy fast and good transcoding, but a PITA to get working well. My Intel and AMD transcode setups were way easier but much slower, and the AMD was also much lower quality.
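    If you want to know where a box stands before touching anything, the driver version is easy to read off the host - a quick sketch (the CUDA image tag is just an example):

    # driver version as seen by the host
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
    # and as seen from inside a container, to confirm the toolkit wiring
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi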

    Well look at that. An ID-10-T error. I’ve cleared that field and I’ll keep an eye on it.


    Thanks.


    There is a very good, recent guide for OMV6 in the guides section. Can we make a post to add this detail to the thread or should I write a new guide and include this tiny change?

    When I run updates to OMV7 I often have /etc/docker/daemon.json overwritten, which kills my Jellyfin and Tdarr Docker containers. It is easy enough to run #nvidia-ctk runtime configure --runtime=docker to fix it, but it's a small bug. At this point I am not sure if it is OMV7 updates that overwrite the file or OMV-Extras. I will monitor and report which updates cause the issue as more updates happen. So far I can only verify that the last update was openmediavault and openmediavault-kvm. I tried installing a plugin to see if it was a general issue with the --allow-downgrades flag in apt-get, but that didn't overwrite the daemon.json.
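    For anyone who hits the same thing, the recovery is only a couple of commands - a sketch, assuming the NVidia container toolkit is already installed:

    # regenerate the nvidia runtime entry in /etc/docker/daemon.json
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
    # confirm docker can see the runtime again
    docker info | grep -i nvidia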


    Last entry in /var/log/apt/history.log


    Start-Date: 2023-12-31 02:26:19

    Commandline: apt-get --yes --allow-downgrades --allow-change-held-packages --fix-broken --fix-missing --auto-remove --allow-unauthenticated --show-upgraded --option DPkg::Options::=--force-confold dist-upgrade

    Upgrade: openmediavault:amd64 (7.0-20, 7.0-21), openmediavault-kvm:amd64 (7.0.1, 7.0.2)

    End-Date: 2023-12-31 02:26:51

    I see a few issues: you have pointed your pihole container to unbound, but you have to configure pihole to use unbound separately. You can do this either by logging into your pihole, going to Settings > DNS and adding a custom DNS entry that points to your unbound, or by adding "- PIHOLE_DNS_=192.168.0.14" under DNS2 in the env section of pihole. A much easier way to do this is to use the combined pihole-unbound image. I've modified my working compose file so you only have to change the volume locations and it should work for you in a single container.
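    If you would rather keep your separate unbound container, the core of the fix is just that one environment variable - a sketch using docker run and the official pihole/pihole image, with the timezone, ports and volume paths as placeholders and 192.168.0.14 standing in for your unbound address:

    # PIHOLE_DNS_ points pihole at unbound instead of a public resolver
    docker run -d --name pihole \
      -e TZ="America/Toronto" \
      -e PIHOLE_DNS_="192.168.0.14" \
      -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
      -v /srv/pihole/etc-pihole:/etc/pihole \
      -v /srv/pihole/etc-dnsmasq.d:/etc/dnsmasq.d \
      --restart unless-stopped \
      pihole/pihole:latest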


    I think it has to do with the kernel update. I can't remember if I had uninstalled the Debian kernels or not. I'll keep an eye out when the next Debian kernel update comes out. My main desktop is on Debian 12, so I'll see it when that updates. The only thing I can think of is that the downgrade option downgraded my kernel and uninstalled the NVidia DKMS modules, because they need to be built at install time, and I upgraded the kernel before installing the NVidia software, so I didn't have any compatible DKMS modules on my system.

    The latest update to OMV7 installed a new Debian kernel, removed the Proxmox kernel (6.5.11-7-pve), and killed my NVidia driver install. Re-installing the kernel did not fix the NVidia drivers. I had to uninstall nvidia-kernel-dkms and then re-install both nvidia-kernel-dkms and nvidia-driver to rebuild the kernel modules.


    #apt remove nvidia-kernel-dkms

    #apt install nvidia-kernel-dkms nvidia-driver
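    To confirm the modules actually rebuilt against the running kernel, something like this is enough of a check:

    # the nvidia module should show as installed for the current kernel
    dkms status
    # and the driver should load and see the card
    nvidia-smi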

    I was trying to find one of those too, but they are rare, and when I found one it was expensive. I broke down and bought a decent Corsair CX750M on sale and it seems alright, but the system it's in is hardly low power. I have just built the X99 system I was waiting for and it's surprisingly efficient, idling at 46W at the wall, but that is miles from the range you are looking at. My X79 build never dipped below about 61W.


    Just for comparison, my X99 system has an E5-2650 V4 (12 cores, 24 threads), 32GB ECC DDR4, a 1TB NVME, a 256GB SSD, a P600 GPU and a 2.5Gb NIC installed. There are mechanical drives too, but they are spun down at idle.


    My S12 has a 256GB NVME, a 2TB SSD, an external 4TB USB drive and USB 2.5Gb Ethernet. It idles at 7W and peaks at 30W.

    Thanks!
    I'm actually looking at non-ECC options now, as it just seems too expensive/too hard to find suitable options.


    I stumbled onto Wolfgang's Channel on YouTube and am quite taken with his power efficiency focus. I'm currently looking at the following for Motherboard and CPU:
    Motherboard: Fujitsu D3402-B - can find one of these very cheaply
    CPU: 6th Gen Intel i5-6500 Skylake - Also can find very cheap

    RAM: 32GB (I'm thinking at this stage)

    Skylake does not transcode very well compared to 7th gen and newer, but if you aren't concerned about transcoding then it works well. The N95/N100 are very comparable to a Skylake i5 for processing power but have much better transcode performance and even lower power consumption. The hard part with any low-power build is the power supply. Typically available power supplies are way too big and wasteful at the lower end of their output curve. All of the SFF and mini computers are too small for 4 drives but have very nice, small power supplies. A bit of a catch-22. My own N95 is a 4" form factor, uses a wall wart and is super power efficient, but it only has room for an NVME and a single 2.5" SSD.

    For ECC memory, the cheapest options are from AliExpress using an X79 or X99 platform. I have an X79 mATX board (desktop chipset) with an E5-2650 V2 and 32GB of RAM that I got for around $100 CAD (pretty close to on par with AUD) shipped. It's way more than enough for most OMV builds, but it burns around 54W at idle. I have an X99 full-ATX server board, chip and RAM on the way to upgrade it. I like it that much. This computer is my home lab, basically a toy for playing and learning. I don't always run it 24/7 and I often break it a bit.


    I also run OMV on a BeeLink S12 (N95) with a 2TB SSD added internally, and it is a fantastic Jellyfin server that can transcode 3 streams if needed. It uses all of 7W most of the time and only 25W while transcoding. No ECC though. This runs 24/7 serving the basic services to my home. My 3rd OMV install is on a recycled AMD A10-7600 that is nearly 10 years old. Again, only the basics on this one: Jellyfin, pihole, remote access and an SMB share. It idles at 35-40W and can just barely transcode 1080p using the APU. This one serves my in-laws' house.


    If you are set on ECC then your budget goes up and your selection goes down. You are basically stuck with Xeon/Epyc or some marginally supported config pairing certain specific processors with certain specific motherboards (ASRock) and a dash of prayer that it works. If you can skip the ECC then there are a ton of options, and the N5095/5150 or the newer N95/N100 make very compelling, low-power options.

    Glad to hear that USB/Ethernet adapter worked for you. I bought a few to experiment with and you affirmed that it should work.

    I was worried too. I bought it for my 24/7 mini server, which is an N95 mini computer. The little thing makes a great basics server with room inside for an NVME and a 2.5" SSD. It runs Jellyfin and pihole and even transcodes like a champ, for only a few dollars more than a Pi. 7W idle and less than 28W doing multiple transcodes. It keeps the family happy when I play with my homelab server.

    Well, no responses, but it's a homelab so I decided to test it out and see what the results were. I used a Plugable USB 3.0 2.5Gb Ethernet dongle on my OMV6 server (it worked perfectly with no issues) and the built-in 2.5Gb Ethernet on my MSI Z670 motherboard. My workstation is running Debian 12. I directly connected the two together with a straight-through CAT6a patch cable; no need for a crossover cable these days. I assigned each computer a static IP one network away from my 1Gb network (192.168.0.x = 1Gb, 192.168.1.x = 2.5Gb). I adjusted the NFS share settings in OMV to allow 192.168.0.0/22 instead of 192.168.0.0/24 so that NFS would work over the new link.
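    For anyone copying this, the whole setup boils down to static addresses on the point-to-point NICs plus the wider NFS export - a sketch with illustrative interface names, share paths and addresses (on the OMV side I did this through the Network and NFS pages rather than by hand):

    # server, on the 2.5Gb NIC
    sudo ip addr add 192.168.1.10/24 dev enp2s0
    # workstation, on the 2.5Gb NIC
    sudo ip addr add 192.168.1.20/24 dev enp3s0

    # the NFS export has to cover both subnets, roughly like this in /etc/exports:
    #   /export/media  192.168.0.0/22(rw,subtree_check,secure)

    # mount from the workstation over the fast link
    sudo mount -t nfs 192.168.1.10:/export/media /mnt/media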


    I was able to mount and use the NFS shares on my workstation. I transferred files to and from the server using a 2TB Gen4 NVME drive in the workstation. Transfer speed to a WD Blue 8TB HDD was 186MB/s, which is about what an internal copy on the server gets. Transfers to my RAID1 mirror of 2x Seagate IronWolf 8TB NAS drives ran at 270MB/s, which is near the theoretical limit of a 2.5Gb connection, so I am guessing that some caching is happening somewhere.
    NVME-to-NVME speeds were similar at 270MB/s. rsync worked properly with much improved speed.


    SMB did not work as I hoped. SMB did not see both connections and pick the faster one; it defaulted to the slower connection. I am sure there might be ways to fix this, but after considering my use case I just bought an 8-port switch instead of the 4-port card and will run my 2 workstations and 2 servers on the 2.5Gb network, and everything else (internet and wifi) will stay on the 1Gb network. If I need more bandwidth (unlikely unless I upgrade my storage) I can add a 10Gb card to the server and allow each host to run at a full 2.5Gb.
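    If someone does want to chase the SMB side, the two obvious angles are connecting to the fast address explicitly and experimenting with SMB multichannel - a sketch only, untested on my setup; the addresses follow the example above and the smb.conf option is a standard Samba setting (in OMV it would go in the SMB/CIFS extra options field):

    # client side: connect to the 2.5Gb address instead of the server's hostname
    sudo mount -t cifs //192.168.1.10/media /mnt/media -o credentials=/etc/samba/creds
    # server side: let SMB3 clients use both links at once (still considered experimental)
    #   server multi channel support = yes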


    A fun experiment, and it has its uses, but for simplicity and compatibility a more traditional approach will do.

    My wife does some video editing in Resolve and I am an astrophotographer (the data workload is surprisingly similar), and both of us use local Gen4 performance-oriented NVMEs for scratch, cache and processing. The NAS is for storage and archive. The cost of network gear that provides acceptable performance far exceeds any justification until you have a team of 3 or more, and even then, a 2TB performance NVME is under $200. The other justification is projects that exceed what 1 or 2 NVMEs can hold (4TB or more).

    Good Day,


    Currently all of my servers run on a standard 1Gb flat network with a switch and a mix of static IPs and DHCP. My workstation and server both have 2.5Gb Ethernet and a second 1Gb NIC, and I would like to directly connect the two and access the server over the 2.5Gb network while still maintaining the 1Gb connection for the internet and the other servers and toys on that network. Is there an easy way to set this up, and can I expand it to multiple networks using a 4-port 2.5Gb card in the server so each connection has full bandwidth? I have transcoding nodes fast enough to saturate 1Gb, and my wife wants to use the NAS for video editing, so having multiple independent 2.5Gb links could be useful. I would also get to avoid buying another switch this way. I would love for both SMB and NFS to work over the faster network. No issue if all management is done over the slow network. I don't care if any of the workstations can talk to each other over 2.5Gb; everything is on the main file server and I can use the slow network for the few times I might do something like that.


    Thanks for any help.

    It does not make sense to make this a second plugin.

    Strangely, those who aren't coders want the most changes...

    No, but the exercise of tearing apart your code plus the scripts from BTRFS-Snapraid and Neon's script above might... might just teach me enough to be dangerous, with the hope of being useful. But to find the time and resources to re-learn almost everything... Last time I did anything beyond modifying a few scripts was in the mid-nineties, and most of my skills were focused on assembly even then, with a touch of VB and C. Yes, I am a dinosaur.