Posts by BernH

    Hey good call BernH. Worked straight away 😊


    Thanks so much.

    You're welcome.


    There are other solutions, such as booting a Linux live distro and using it to copy files to an NTFS drive that Windows can use, or using the Windows Subsystem for Linux (WSL) to mount the drive and copy the files at the command prompt/terminal, but the Paragon solution is much easier and faster to set up.
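    For reference, the WSL route mentioned above looks roughly like this. This is a sketch, not the author's exact procedure: the disk number, partition number and paths below are placeholders, and `wsl --mount` needs Windows 11 (or a recent WSL 2) and an elevated PowerShell prompt.

```shell
# From an elevated PowerShell prompt.
# \\.\PHYSICALDRIVE1 is a placeholder -- find the right disk first with:
#   wmic diskdrive list brief
wsl --mount \\.\PHYSICALDRIVE1 --partition 1 --type ext4

# Inside the WSL distro the partition shows up under /mnt/wsl
# (by default named after the disk and partition, e.g. PHYSICALDRIVE1p1):
cp -rv /mnt/wsl/PHYSICALDRIVE1p1/some-folder /mnt/c/Users/you/Desktop/

# Detach the disk when done (back in PowerShell):
wsl --unmount \\.\PHYSICALDRIVE1
```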

    What format is the drive?


    I would recommend using Paragon's "Linux File Systems for Windows" software. It can read/write ext2/3/4 and read BTRFS and XFS.


    There is a trial version that will work for 10 days if this is a one-time requirement. If you want to buy it, a home user license is not very expensive (about $30 USD).

    You will need to check the file properties to see what codecs are being used and confirm they are compatible with your TV.


    Even if they are, there is still no guarantee that they will work, as it also depends on what software was used to create them. MKV files in particular can cause issues, as there is no real industry standard for MKV, and the container will even allow mixing of audio and video codecs that normally are not allowed to coexist in the same file.


    I have had MKVs that should be fine but have problems, and a simple re-wrap to an MP4 or even another MKV wrapper fixes the issue, while other times I have had to do a full transcode.
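    The post doesn't name a tool, but ffmpeg/ffprobe is one common way to do both the codec check and the re-wrap. The filenames below are placeholders:

```shell
# Inspect the codecs in a file (ffprobe ships with ffmpeg):
ffprobe -v error -show_entries stream=index,codec_type,codec_name \
    -of default=noprint_wrappers=1 movie.mkv

# Re-wrap (remux) to mp4 without re-encoding -- fast and lossless:
ffmpeg -i movie.mkv -c copy movie.mp4

# If swapping the container isn't enough, a full transcode to widely
# supported codecs (H.264 video, AAC audio) usually is:
ffmpeg -i movie.mkv -c:v libx264 -c:a aac movie-transcoded.mp4
```

    Note the remux can fail if the MKV contains streams the MP4 container doesn't allow, which is exactly the kind of mismatch described above; that's when the transcode becomes necessary.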


    I generally try to standardize all my media that is to be used in Jellyfin using an automated transcoder called FileFlows to avoid these issues, but even so, I still have to manually intervene occasionally. Alternatively, something like Tdarr or Unmanic can also do the job, but I personally find FileFlows to be the better option. All are available as Docker installs.


    FileFlows - fileflows.com

    Tdarr

    Unmanic - github.com/Unmanic/unmanic
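    As a starting point, a FileFlows server can be brought up with a single docker run. The image name, port mapping and paths below are illustrative, so check the install docs on fileflows.com for the current recommended setup:

```shell
# Host paths, timezone and the port mapping are assumptions -- adjust to
# your system and verify against the official FileFlows install docs.
docker run -d \
  --name fileflows \
  -p 19200:5000 \
  -v /path/to/media:/media \
  -v /path/to/fileflows/data:/app/Data \
  -e TZ=Etc/UTC \
  revenz/fileflows:latest
```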

    I used it for Plex (not in Docker) and Audiobookshelf.
    At first I had my old GTX 1080 because I wanted to use it to help with decoding/encoding the stream.

    The first big mistake is installing Plex natively. When running OMV it is recommended not to install anything natively. Many of us will install a few extra CLI-based tools for diagnostics, but not much more.


    There have been a few posts over the years of people doing what you have done and then experiencing problems.


    The best suggestion is to do a fresh install and use Docker to install Plex, and/or VMs (you will need the omv-extras plugin for this) for anything not covered in the plugin system.

    It is RAID 5, and this is not a software RAID but rather a Dell server with an H710 card. The card reads the array and it is listed as "optimal".

    So, I have been advised to do a fresh install with the drives disconnected, then connect the drives, and the H710 should easily pick up the RAID array when booted.

    Anybody want to chime in with opinions or advice, please comment...

    A hardware RAID like you have is just seen as a single drive by Linux, so as long as the filesystem and the card/drives are OK, the filesystem will be able to be mounted.


    The reason for the recommendation to disconnect the other drives is to avoid accidentally installing OMV or GRUB on the wrong drive and destroying the data. If only the OS drive is connected during the install, there is only one place for the files to go.

    If I were you, I would re-install. It may be possible to just re-install GRUB and it may boot, but I don't think I would trust it if a power outage caused OS drive problems. A Clonezilla restore or a clean install is safer.


    If you disconnect all other drives and install OMV, you should then be able to just plug the array drives back in and mount the filesystem, as long as the power outage didn't cause any problems with it too.


    What kind of array are you running? There are folks here on the forum with experience with the different types (md, btrfs, and ZFS) in case you have a problem. I've never had one completely fail on me, only the occasional single array member needing to be replaced.

    If the BIOS battery died, your BIOS has been reset to defaults, so the first thing to do is replace that battery. Then you need to redo any BIOS configuration required to get the hardware working correctly again. I can't tell you the specific settings required for your hardware, and to be honest I generally avoid Dell and HP like the plague because I hate their BIOS restrictions, so I can't really comment from memory.

    The compose plugin has a setting for specifying the IP address of the server (Compose > Settings > Overrides). Without this set, the links on the Files page point to 127.0.0.1, the "localhost" IP. If you set the override to the server's IP, those links will work correctly.

    Jellyfin can use a fair bit of RAM for sure. As raulfg3 mentioned, your 4GB is fairly lean for running Jellyfin, particularly if you have other things using RAM, and 8GB is a much better starting point.


    However, you may be able to take control of it a bit.


    Firstly, look in the Stats tab of the compose plugin to see how much RAM it is using. Mine used to easily get up to 5GB or 6GB and sometimes more; with 12 CPU cores and 64GB of RAM that wasn't a big problem, but I did want to control it a bit better.


    To that end I added limits like these to my Jellyfin compose file, taking it down to a modest 2GB limit, and it currently runs at about 1.75GB on average (adjust the numbers to fit your requirements and system specs):

    Code
        mem_limit: 2G
        cpus: 8
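    Those two keys go at the same indentation level as image: inside the service definition of the compose file. One way to confirm the cap actually took effect is a one-shot docker stats (the container name "jellyfin" here is an assumption; use whatever yours is called):

```shell
# Snapshot of current resource usage for the container; the
# MEM USAGE / LIMIT column should show the configured 2GiB cap.
docker stats --no-stream jellyfin
```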

    I do this VM shrinking occasionally, when I notice a VM qcow2 getting large, using virt-sparsify (installed as part of libguestfs-tools).


    In a single step, it fills the free space in the VM with zeros so that it can then essentially thin-provision the image by writing the result to a new file.


    I cobbled together a simple script that loops through all my running VMs, shuts them down, moves the original to a "not sparse" directory as a backup and as the source for the command, and then writes the new file back to the original location. (Adjust paths as required.)
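    The loop described above could be sketched like this. This is not the author's actual script: the directory layout, the one-qcow2-per-VM naming, and the use of virsh for shutdown/start are all assumptions you would adjust for your setup.

```shell
#!/bin/bash
# Sketch only -- assumes each VM has a single qcow2 named after the VM.
IMG_DIR=/var/lib/libvirt/images     # adjust to your image directory
BACKUP_DIR=$IMG_DIR/not-sparse      # originals kept here as backups

mkdir -p "$BACKUP_DIR"

# "virsh list --name" prints the names of the currently running VMs.
for vm in $(virsh list --name); do
    img="$IMG_DIR/$vm.qcow2"
    [ -f "$img" ] || continue

    virsh shutdown "$vm"
    # Wait until the guest has actually powered off.
    while virsh list --name | grep -qx "$vm"; do sleep 2; done

    # Keep the original as a backup and as the source for virt-sparsify,
    # then write the sparsified copy back to the original location.
    mv "$img" "$BACKUP_DIR/$vm.qcow2"
    virt-sparsify "$BACKUP_DIR/$vm.qcow2" "$img"

    virsh start "$vm"
done
```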


    Try a USB dummy HDMI adapter - it tricks the system into thinking there is a monitor attached. Very cheap, so not a lot to lose if it doesn't work.


    https://www.aliexpress.com/ite…%3Asearch%7Cquery_from%3A

    You need a GPU for these. I use one in my Arc A380 to ensure it is fully active for GPU-based video encoding.


    The OP doesn't want to have a GPU. That's why I suggested a USB VGA adapter, but as I said, I don't know if it will work the way he wants.

    The KVM plugin will let you do most VM/LXC creation tasks in OMV. If there are things that can't be done there, virt-manager run in Docker gives full access. I have several VMs and LXCs running like this with no problem. However, if you prefer to use Proxmox, that is OK also. Whatever works for you.


    As for performance issues with options 2 & 3: an LXC on Proxmox will probably perform a bit better than Docker in OMV, as it would be running directly on the Proxmox host instead of being Docker-virtualized inside OMV, which is itself virtualized. Basically, the closer to the host OS, the better the performance will likely be, and since OMV needs to be installed as a full VM, not an LXC, it is not running as "light" as a Proxmox LXC.

    Never directly on OMV - you can easily break it doing that. Options 2 & 3 are completely up to you in terms of what you are more comfortable with. Docker is faster and easier to set up, but a custom LXC gives you more control over the setup.


    Also, if you are not running other things that you need Proxmox for, I would recommend taking a layer of complexity out of it and using OMV as the root OS on the system (it can also do VMs and LXCs via the KVM plugin if the need ever arose). If you are only using Proxmox as a way to present storage to OMV, it seems like a waste of resources.

    I see. I misunderstood the question then. Apologies 👍

    No need to apologize. I know he has a domain because we have had some conversations, but your point is quite right. The use of DuckDNS or another DNS service is not just about a changing IP address; it's also about having a "friendly name". The human brain tends to remember names more easily than numbers.

    I went through the whole procedure again. This time, the system asked me to scan the various sensors.

    I did everything and saved the script.

    But, again, I get an empty line as output.

    While scanning, I did see "K10" recognized as the CPU.

    I run an AMD Ryzen 5600G. Not the same CPU as yours, but this basic info may help.


    I noticed that the default config for the cputemp widget was wrong, and when I implemented the config from the guide it was also wrong (off by about +18°C).


    So, using some other software as a temperature reference (bpytop and inxi), I was able to determine that instead of the k10temp value from sensors, I needed to use the CPUTIN value. I ended up modifying the /usr/sbin/cpu-temp file to use this line instead of the line in the guide:

    sensors nct6779-isa-0290 | awk '$1 == "CPUTIN:" { print $2 }' | grep -o '[0-9.]\+' | sed 's/\.//'


    I don't know if the Phenom uses the same CPUTIN value, but if you want to try to find the correct one using the method I used, you can install bpytop and/or inxi with apt-get install bpytop and/or apt-get install inxi. Bpytop is a nice top alternative that combines the most important info from most top-like programs in one application, and inxi is a general system info utility similar to dmidecode, but a fair bit better for finding info.


    Run bpytop and look for the CPU temperature in the top right of the screen, or run inxi -Cs and it will give you the CPU temp. Then run sensors and look through the list of values to see which temperature variable name looks to be the right one.

    If you are not sure, it may help to run two terminal windows, with bpytop or watch inxi -Cs in one (watch makes inxi live-monitor the temps instead of exiting after one run) and sensors in the other, so you can find the variable/sensor name that matches the CPU temp shown in inxi or bpytop. You may want to re-run sensors a few times to make sure the temps change in a similar fashion.

    Once you think you have the right one picked out, look at the top of that section of the sensors output and you will see the chip name that the sensor belongs to. You can run watch sensors <chip_name>, do something that will cause the CPU temp to change, and confirm that the same change happens in both the bpytop/inxi window and the sensors window. (For me the chip was nct6779-isa-0290 and the sensor name was CPUTIN.)


    You will note that since the correct sensor for me was CPUTIN, in the nct6779-isa-0290 chip group, my modified config invokes sensors with the chip name, then pipes through awk, grep and sed like the guide does, to extract the temperature and strip the non-numeric characters. The difference is that I found the correct chip name and sensor for my CPU and used those to rewrite the first part of the line from the guide.
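    To see what each stage of that pipeline does, here it is run against a single fabricated sensors output line (the +46.0°C value is made up for the demonstration):

```shell
# Simulated line from "sensors nct6779-isa-0290" output:
echo 'CPUTIN:                 +46.0°C  (high = +80.0°C, hyst = +75.0°C)' \
  | awk '$1 == "CPUTIN:" { print $2 }' \
  | grep -o '[0-9.]\+' \
  | sed 's/\.//'
# awk keeps the second field (+46.0°C), grep trims it to 46.0, and sed
# drops the decimal point, printing 460 for the widget to consume.
```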

    Have a look at FreeFileSync if you want to do it via Windows. As long as you can connect to the two shares and have read/write capability, it will work.


    FreeFileSync
    freefilesync.org