Posts by UKenGB

    I am running OMV on a board with dual network interfaces. I have one use for the machine that I want to limit to one of the NICs, but it looks like OMV is listening on port 80 on both. How can I configure the OMV admin interface to bind to only one of the NICs, leaving the other free to use as I want, on any port?


    I cannot find any such setting in the admin GUI, nor in any config files (though editing those directly would not be a good idea anyway). I would appreciate it if someone could point out how I can set this.
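

    For reference, the web GUI is served by nginx, so what I'm picturing is the equivalent of pinning its listen directive to one address. A minimal sketch of the idea, assuming a typical nginx server block (the file path and address here are illustrative, and I'd expect hand edits to be overwritten by OMV):

        # e.g. /etc/nginx/sites-available/openmediavault-webgui (path illustrative)
        server {
            # Bind only to the management NIC's address instead of all interfaces
            listen 192.168.1.10:80;
            server_name _;
            # rest of the OMV-generated config left untouched
        }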


    Thanks.

    Having now installed OMV on the M.2 SSD, I can see my fundamental misunderstanding. OMV doesn't take over the entire drive, and I was subsequently able to install other software (including Plex) on the same partition, as with any other *nix.


    That means Plex's database is on the M.2, but that shouldn't be a problem and all is running fine.

    All up and running again and updated to 5.6.2. When I unmounted a filesystem (previously mounted by-label), it was then remounted by-UUID.


    So I rewrote my script; it now creates links in /mnt to each filesystem, named just by label.
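

    In outline it does something like this (a simplified sketch of the approach, assuming OMV 5's by-uuid mountpoints under /srv):

        #!/bin/sh
        # Link each OMV-mounted filesystem under /mnt by its label.
        for mp in /srv/dev-disk-by-uuid-*; do
            [ -d "$mp" ] || continue
            # Find the backing device for this mountpoint...
            dev=$(findmnt -n -o SOURCE "$mp") || continue
            # ...read its filesystem label...
            label=$(lsblk -n -o LABEL "$dev" | head -n 1)
            # ...and (re)create the /mnt/<label> symlink.
            [ -n "$label" ] && ln -sfn "$mp" "/mnt/$label"
        done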


    One more thing has cropped up though. The web GUI mounts the filesystems as root:root, which makes using them problematic for less privileged users. How can one instruct the web GUI to mount them as a different user:group?

    Thanks. When it's running again (I need to make some longer SATA power cables, dammit) I'll have a look at that.


    Yes, the ethernet cable was connected before I started the install, so it should have been able to do what it needed to do. In fact I do recall some on-screen messages about downloading additional packages, so yes, it was able to establish a connection.

    How does one 'upgrade'?


    My OMV machine is down at the moment for some hardware changes. When I fire it up again, I'll look into upgrading to the latest OMV, but I'm not currently sure what that process is.

    Current version is 5.6.2
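

    In case it's useful to anyone searching later: my understanding (not yet verified on my own box) is that point releases within OMV 5 arrive through the normal apt channels, so something like the following should pull them in; I believe OMV also ships an omv-upgrade helper that wraps the same steps.

        # Hedged sketch: update within the same major OMV release via apt
        apt-get update
        apt-get dist-upgrade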

    https://www.openmediavault.org/?p=2890


    The change regarding mountpoints was in 5.5.20

    https://www.openmediavault.org/?p=2866

    I saw the 5.6.2 notice in the blog, but it seemed to be for SBCs only. Maybe I misread. However…


    Following your link and then the Download link to the SourceForge page, the "Download Latest Version" link there is labelled 5.5.11, and that is the folder displayed in the list below. That's what I was going by, as it seemed to be the official source.


    If there's a better way to obtain the latest, pray tell.

    If you unmount a data disk and remount it in the GUI, it will almost certainly become mounted by-UUID and identified that way, or at least it should if you are running an up-to-date OMV 5. Look in /etc/fstab to see how disks are mounted. You can still see the label in the Filesystems view if desired.
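

    For illustration, a by-UUID entry in /etc/fstab looks something like this (the UUID and options below are made up; OMV writes its own set of options):

        # Data disk mounted by-UUID (values illustrative)
        /dev/disk/by-uuid/0a1b2c3d-4e5f-6789-abcd-ef0123456789  /srv/dev-disk-by-uuid-0a1b2c3d-4e5f-6789-abcd-ef0123456789  ext4  defaults,nofail  0  2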

    Mine were mounted by label. I am running what was the latest OMV as of a couple of weeks ago, and the latest was still the same when I last checked a few days ago, so this is a very recent change. I simply used the web GUI to create the filesystems and mount them, and it used the labels.


    I already have a script that creates a link to each /srv filesystem so I can refer to them by name alone (no dev-disk-by-label… etc). Now I will have to rewrite it as a more complicated script to get the name of each, etc.


    I have to say that changing to mounting by UUID is a backwards step in my view. I understand it might be a great idea for developers, but I think it's horrible for the user.


    --edit--


    I just checked and the latest version is still 5.5.11, the same as mine. Apparently it hasn't changed since September last year. When was the change to UUID mounting introduced?

    It seems the system disk volumes are mounted by UUID, but the data disks are mounted by label.


    I would just like to see the relevant filesystems labelled in the GUI, as it's simply easier to see what's what at a glance, so I'd like to label them however it suits me. I'm just surprised there is no 'Edit' option in the GUI. I added the first data disk with the wrong label (my mistake), and with no way to simply edit the label, I had to remove the filesystem and create it again. Fortunately the disk was still empty.


    So if I edit the label using the CLI, will the OMV web GUI show the new/edited label?
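

    For the record, the CLI side would be something like this, depending on the filesystem (device and label names illustrative; note that XFS has to be unmounted before relabelling):

        # ext4: can be relabelled in place
        e2label /dev/sdb1 Media1

        # XFS: unmount first, then relabel
        umount /srv/dev-disk-by-label-OldName
        xfs_admin -L Media1 /dev/sdb1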

    After installation and some basic config to enable one of the ethernet interfaces, I shut down the server. The next day, after starting it, I could not connect to the web GUI, yet on the server's actual console I could log in and access the network with no problem. I eventually realised that in the browser I was trying to access the original IP address, having forgotten that it had changed after enabling the ethernet port. Nothing odd about this, except I was sure the server's console/screen after startup had displayed the old address that was no longer valid.


    That was yesterday. Later I changed the IP address of the other ethernet interface (there are two), and at the end of the day I shut down again.


    This morning, I was careful after startup to note what IP addresses were displayed on the console, and sure enough, although the main one that had been set a couple of days ago was now displayed correctly, the other one, which I changed yesterday, was still showing its old IP address, i.e. the address in use when it was last booted, before the change. Logging in and running 'ip a' displayed the correct current IP addresses, yet when OMV boots, it displays IP addresses from some earlier time.


    So when OMV displays the IP address(es) on that first screen after startup, from where is it getting the info? If from config.xml, then that rather suggests it is not correctly updating its own config file when modifying ethernet interfaces, e.g. changing IP addresses, in the web GUI. Is this a known problem?

    Having installed the HDDs and set up the filesystems, I find I'd like to change the label on one of the filesystems, but there seems to be no option to do so in the web GUI. Am I missing something?


    I could do this from the command line, but would OMV then recognise the new label?

    Thanks. As you say, it appears that RAID is not great for 8 TB drives, and since it is not possible to start with a single drive and add another as and when required, I have opted for mergerfs and SnapRAID. As XFS is apparently well suited to large, unchanging files, I've decided to use it for the data volumes on my media server.
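

    For anyone following along, doing one of these XFS data volumes by hand would be along these lines (device name and label illustrative; I actually created mine through the web GUI):

        # Create an XFS filesystem with a human-readable label
        mkfs.xfs -L media1 /dev/sdb1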


    Hopefully this will prove a good choice.

    I answered the questions from your first post; if that is sufficient for you, then I fail to understand the relevance of your post above. As far as I am concerned, I have no problem if a user wants to run a RAID system; it's all down to personal choice.

    We're not getting on here, are we? Likewise, I fail to understand the relevance of your post, since your answer to my first post was simply "RAID is no good". Now that I ask what your suggested alternative is, you don't answer; you merely state that you already answered my initial questions and sneeringly denigrate my follow-up request for some additional information.


    I'm sure, geaves, that you are very knowledgeable, but so am I; just not in the same field, obviously. I came here to ask reasonable questions and hoped to receive reasonable and helpful replies, and I have to say, neither has been forthcoming.


    I will look further into unionfs and SnapRAID, which I had considered before asking here about actual RAID. Ultimately I will make my own decision, as you say, but I will obviously need to conduct my research elsewhere.


    Good day to you all.

    …8 TB drives in an mdadm RAID take a llllloooooonnnnggg time to sync. RAID 1 allows for one drive failure, so let's say you have to replace a failing drive within that mirror; during that sync the rebuild stops because the good drive has failed. Bye-bye data. The same goes for RAID 5: it allows for one drive failure within the array, and whilst the RAID is rebuilding, one of the two good drives dies. Bye-bye data.

    Why such a big downer on RAID? As I said, I am well aware of what RAID means, and of course level 5 was specifically developed to cope with the hardware failure of a single drive; if a second fails before the first has been replaced and the array fully resynced, then obviously data will be lost. But what's the alternative? No RAID means a single failure and data IS lost; are you trying to suggest that is a better solution?


    As I said, this is for a media server, and if the data is lost, that's just some missing TV time. It's not gonna cause WW3 to break out, and it doesn't warrant the significant cost of duplicating the entire storage space for a full backup, which is also not required for individual file reversion reasons. The files will be stored and then not changed. So I only need to prevent total data loss from HDD failure, and the chances of two drives failing at the same time are remote. Not impossible, I grant you, but highly unlikely. Certainly a far smaller chance than that of a single drive failure.


    So, I do wish to take advantage of what RAID 5 offers, i.e. a good chance to avoid loss of data due to the failure of a single drive. I was using RAID 5 very effectively 30 years ago and I cannot believe it is less effective now than it was then.


    I've similarly been using the command line in *nix for 30 years now so do not presume to think I am ignorant and incapable of doing so now. However, for speed and simplicity, I would prefer to be able to administer this server using the web GUI as much as possible, but first priority is to be able to achieve the functionality I require and if that requires using the CLI, then so be it.


    Having said that, the server is not yet complete and I have no prior experience of OMV, nor mdadm on any flavour of *nix, hence my questions here.


    Why is RAID 5 based on 8 TB drives such a bad idea? Does array recovery take more than twice as long as for 4 TB drives? Or is there some capacity threshold above which recovery time increases exponentially? I'm just trying to understand the reasons for your criticism of a large RAID 5 array in mdadm.
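

    To put my own rough numbers on it (back-of-envelope, assuming a sustained rebuild rate of about 150 MB/s): 8 TB is roughly 8,000,000 MB, so a rebuild is at least 8,000,000 / 150 ≈ 53,000 seconds, i.e. about 15 hours during which a second failure kills the array. By that arithmetic, doubling the drive size simply doubles the window, so I'd still like to know where the supposed cliff is.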


    Since there's no need to duplicate the entire data store as a full backup (and I certainly want to avoid the significant expense of that), yet you opine that RAID 5 is such a bad idea, what is your suggestion?

    This is exactly my question too, so it seems pointless to start another thread.


    In my case I want to use a RAID 5 array of 8 TB HDDs to store large video files. The default choice would be ext4, but would that actually be a bad choice? What would be the best solution, preferably manageable from the web GUI? What about XFS, or JFS, or something else?

    I am in the process of setting up a media server based on OMV and intend to use RAID 5 for the data storage. Ideally I want to start with a single 8 TB drive (which will be big enough for a while) and then add more as and when finances allow. But can I do this without having to back up, format and copy it all back again?


    Ideally I'd like to start with a single HD on its own, then add another identical HD for security (not additional storage), then be able to add another to grow the size of the array (and then another, etc.). Would this be possible?


    I am fully aware of what RAID means and that it is not a backup; that is not the issue. It's just a question of how OMV's RAID implementation (md, I believe) can cope with growing from a single independent HD to an array of five drives. So I hope someone can advise.
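

    For what it's worth, from my reading of the mdadm man page the growth path I'm hoping for would look roughly like this (device names illustrative; I haven't tried any of it yet, hence the question):

        # 1) Start a two-device mirror with the second disk "missing"
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
        # 2) Add the second disk when it arrives; it syncs into the mirror
        mdadm --add /dev/md0 /dev/sdb1
        # 3) Later, convert the mirror to RAID 5 and grow it a disk at a time
        mdadm --grow /dev/md0 --level=5
        mdadm --add /dev/md0 /dev/sdc1
        mdadm --grow /dev/md0 --raid-devices=3
        # 4) Finally, grow the filesystem to fill the array (ext4 shown)
        resize2fs /dev/md0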

    What portion(s) of PMS do you find slow?

    Well, nothing that I can specifically attribute to reading and/or writing its database, but then I don't know everything it stores in there - apart from the metadata of course. Where does it store the EPG when it eventually downloads it? That is always slow. Select Guide in the Live TV/DVR section and it takes a while to populate the table, and it takes that long every time you go to the Guide, so overall usability is impacted by this. Compare that to using just a TV, where the Guide display is almost instantaneous every time it needs to be displayed.


    It's a general gripe I have with Plex that it provides a worse TV viewing experience than simply using the TV. However, it also records for me (well, sometimes) and handles all my music etc. as a far better central server than iTunes ever could be. So I want to persevere with Plex and hope it will improve in the areas where it needs improvement. I'm going to throw it on a fast media server (using OMV of course) and want to optimise it wherever I can. Hence my thoughts about possibly speeding up its operation by having its database on an SSD.


    I realise that may not be a bottleneck; I'm just exploring possibilities at the moment and still unsure whether continuous reading and writing of the database files will wear out an SSD/M.2 to a significant extent. Having said that, is M.2 storage the same in this regard as a regular SSD, or is it more or less robust?
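

    On the wear question, my plan for now is simply to watch the drive's own counters; for an NVMe M.2 device, something like this should report "Percentage Used" and "Data Units Written" (assuming smartmontools is installed; device name illustrative):

        # NVMe SMART health info, including wear indicators
        smartctl -a /dev/nvme0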

    Interesting replies, thanks. But that still leaves me with two questions:


    Would having the Plex database on an SSD not make for faster operation (of PMS)?


    Would the continuous writing and updating of that database reduce the life expectancy of e.g. an M.2 drive?

    I understand that OMV takes up the entire system volume, which means the Plex database needs to be located elsewhere. I also understand that the main system drive could be partitioned to keep some space available for Plex. But…


    Can anyone comment on using an SSD to house the Plex database, whether that be a system drive partition or any other drive/partition? Since this database constantly gets written and rewritten, does that not present longevity problems for an SSD, which has a limited lifetime with regard to writes? This would suggest the database would be better located on an HDD, but then you lose the speed advantage - or, with heavy caching, is this not a factor?
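

    To frame the question a bit: as I understand it (not yet tried), the Debian Plex package reads its database location from an environment variable in /etc/default/plexmediaserver, so relocating it to an SSD partition should just be a matter of something like this (target path illustrative) plus moving the existing data across:

        # /etc/default/plexmediaserver
        PLEX_MEDIA_SERVER_APPLICATION_SUPPORT_DIR=/srv/ssd/plexdata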


    I've not yet set up my Plex Media Server on OMV (currently it's on a Mac) and I'm just fishing here for ideas about how best to set it up, so I would appreciate what others have to say on this matter.