Posts by Shadow Wizard

    If you are confident that a clicking drive can function without causing errors, I will look into replacing them (after cleaning the contacts and such). I just assumed that if the drive were that bad, I would have had errors, or slow access, or something.
    Thank you.

    I have a ZFS RAID-Z2 array with 8 SAS drives. For over a year now, it seems like at least 1, if not 2, of the drives are bad. However, ZFS reports no errors. The array is not slow, and everything works as it should. The only indication any of the drives are bad is the clicking, and the indicator on the drive caddy itself (details below).

    Because these are SAS drives, no SMART data is reported, and I am not familiar with any way to check their health status. The system is set up on a ProLiant ML350p Gen8, and each of the caddies has a central orange light that apparently blinks if the system detects issues with the drive, and 2 of the 8 drives are flashing there.
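
    The only lead I have found so far is that smartctl can apparently query SAS drives directly; they expose health and error counters through SCSI log pages rather than ATA SMART attributes. Something like this (sdX is a placeholder for the actual device), though I have not tried it on these drives yet:

        # Health, error counters, and grown defect list for a SAS/SCSI drive
        sudo smartctl -a -d scsi /dev/sdX
        # Start the drive's own long self-test, then check the result later
        sudo smartctl -t long /dev/sdX
        sudo smartctl -l selftest /dev/sdX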

    So, should I be concerned? Is there a way to test these drives in place, or get some kind of report on them? Can I tell ZFS to do an "extensive test" to see if it can find anything? And most intriguingly, I asked 3 questions (4 if you count this one); will people actually answer all 3 of them, or just pick one to answer?
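
    For the ZFS part, the closest thing to an "extensive test" I am aware of is a scrub, which re-reads and verifies every block in the pool, e.g. (the pool name here is a placeholder):

        sudo zpool scrub tank        # start a full read/verify pass over the pool
        sudo zpool status -v tank    # progress, plus any read/write/checksum errors found

    ...but I do not know whether that is enough to shake out a mechanical problem that ZFS has not already flagged.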

    I am in the process of reconfiguring my homelab, the biggest parts of which are OMV on one bare-metal machine and Proxmox on another bare-metal machine. I am also bringing home another OMV server from another location, on its own bare-metal machine (one of the HP servers, as a matter of fact), and then I am trying to consolidate it all into one.

    It's looking like, considering hardware alone, one of the best options is to switch to a single machine running Proxmox and passing through either the controllers for the drives directly to a VM running OMV, or the drives themselves. Why use Proxmox to run VMs instead of getting OMV to do it? Because I know Proxmox, I have backups from Proxmox, I know how to restore those backups easily, and it's one less thing I need to spend a lot of time learning in this major switch-over.

    So the first question is, is this a bad idea, and why?

    Here are a few details (well, I guess it will be a lot of details) about my setup, and a few reasons why I am thinking about this method.

    The stars of the new show would be an HP ProLiant ML350p Gen8, which will have 128+ GB of RAM and dual Xeon processors (more than enough to run my VMs easily), and an EMC KTN-STL3 HDD array. There are about 15 drives total, a mix of SATA and SAS between the OMV NASes, containing 2 ZFS pools (one only SATA drives and one only SAS) and a SnapRAID/MergerFS set of a few drives for non-critical data. I am currently running 3 bare-metal computers: one running a Ryzen 5600 for the Proxmox, one running a Ryzen 3600 for one of the OMV servers (it runs quite a few docker containers), and one of the ProLiants for the other OMV. You don't want to see how fast my electric meter spins! I seem to be unable to get the HDD array (or at least the computer attached to it) to recognize both SATA and SAS at the same time. Perhaps I just need to learn how to do it better. All the controllers are set to IT mode; the OSes see each individual drive as a drive, not as a RAID array.

    My way of thinking is that switching to one of the ProLiants will give me lots and lots of RAM and a bit more CPU power than the 5600 on its own (yes, I have 2 of them, and 2 of the HDD arrays; spare parts). The bays (12) in the ProLiant will let me run the 6 SAS drives (or maybe there are 8?) and some of the SATA ones (the ProLiant will let me mix and match, it's just the array that won't, for some reason). I can install Proxmox on either an SD card or even a SATA drive, install a PCIe-NVMe adapter or 3 for the VMs, and pass through the controllers to the OMV VM (or the individual drives; that would be more ideal for me, if doable).
    Now the second question. Should I pass through the drives (or CAN I pass through just the drives), or does it have to be the whole controller? Passing through the drives would allow me to use drives directly with Proxmox for VMs (I know they would not be accessible from the OMV container as well; those would be other drives dedicated to running VMs on Proxmox).
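
    For what it's worth, from the Proxmox documentation it looks like individual disks can be attached to a VM by their /dev/disk/by-id path with qm set, something like this (the VM ID and the disk ID here are placeholders):

        # Attach one whole physical disk to VM 101 as an extra SCSI device
        qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD60EFRX_EXAMPLE-SERIAL

    As I understand it, with that method the VM sees a virtual disk, so OMV would not get the drive's SMART data, whereas passing the whole HBA through with PCIe passthrough lets the VM see the raw drives.
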
    Are there any other issues/quirks I should look out for doing this? And does anyone have any other suggestions (I am happy to turn this into an open-table discussion), other than to run OMV as the "main" OS on one bare-metal computer?

    I look forward to hearing what you all have to say.

    Well, I am working on taking the plunge. Getting ready to rebuild my systems, going from OMV 6 to the newest OMV.

    So, as usual, I do everything on a test system first, just a VM to be sure I have all the steps down, and everything works.

    And, as usual, something doesn't.

    Please advise.

    Hmm, had to cut this short, as it won't let me post the whole error... And since I don't know what you need... well... If this isn't what you need, please advise what part of the error you need (or increase my posting limit) and I shall post it.

    Wow, I REALLY need to cut this off.. Wow.

    Is there a way to select a disk, under "Storage --> Disks" or anywhere else, to see what it is used for? (Is it shared, what directories are shared, is it part of a zpool, or a mergerfs, or anything?)


    Basically, I had to put a disk in to recover from a mergerfs failure (used it for a restore, then moved all files off of it), and I have been having a hard time getting rid of it. I want to be 100% sure that OMV isn't using the disk for anything before I just remove it.
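
    From the command line, the quickest checks I can think of are along these lines (sdX and the UUID are placeholders for the disk in question):

        # What partitions/filesystems are on it, and is anything mounted from it?
        lsblk -o NAME,SIZE,FSTYPE,LABEL,UUID,MOUNTPOINT /dev/sdX
        # Is it a member of any ZFS pool?
        zpool status
        # Do any fstab entries (OMV mounts, mergerfs branches) reference its filesystem UUIDs?
        blkid /dev/sdX*
        grep <UUID-from-blkid> /etc/fstab

    ...but I would still like a way to see this in the OMV UI itself.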

    it isn't restoring properly. Hard to say why.


    Use omv-regen to make a backup now and reinstall/restore to new drive following the omv-regen docs.

    Seems as though omv-regen will not work either:

    Tried more than once.

    Ideas?

    That sounds like perhaps the best approach, as I don't know if OMV 6 is totally up to date, and updating it likely won't happen with these errors.

    Most of my containers do not have volumes of their own; pretty much 90% of my containers use bind-mounted directories instead of volumes. So unless I totally misunderstand how Docker works, all I should need to do is create the containers again, pointing them to the same directories, and they should just keep working. If there are no actual volumes for a Docker container, there is nothing to back up/restore, is that in fact correct?

    For example, my qbittorrent has the "/config" dir bound to "/SixTBpool/Config1/qbittorrent", where SixTBpool is a ZFS filesystem on separate mechanical drives that I will just re-mount under the rebuilt OMV. So when I recreate the container, I again bind the config directory to "/SixTBpool/Config1/qbittorrent" (assuming the mount point is the same) and it just picks up where it left off?
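
    In other words, on the new install I would just run the same container again with the same host paths, something like this (the image, the downloads path, and the port are my guesses at what the original stack used):

        docker run -d --name qbittorrent \
          -v /SixTBpool/Config1/qbittorrent:/config \
          -v /SixTBpool/Downloads:/downloads \
          -p 8080:8080 \
          --restart unless-stopped \
          lscr.io/linuxserver/qbittorrent

    ...and since all of the state lives in the bind-mounted host directories, it should come back exactly as it was. Is that right?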

    it isn't restoring properly. Hard to say why.


    Use omv-regen to make a backup now and reinstall/restore to new drive following the omv-regen docs.

    I was going to ask about that (finding a way to back up the config and re-install), as it would also permit me to move to OMV 7. The biggest issue I have is making sure my docker containers are properly backed up/restored. Although most can just be re-created from scratch, it would be quite bad if Seafile and the containers associated with it didn't back up/restore properly.


    I have thought about installing it on a USB stick, but I have the SSDs lying around, and I don't have any open bays for SSDs to run the docker containers on. They get backed up with the daily backup, and running the docker containers on a drive other than the system disk, although I am sure it would be easy to obtain, is a skill I don't possess.
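
    For the record, the generic Docker way to do that appears to be pointing the daemon's data-root at a mount on the other drive (I believe omv-extras exposes the same thing as a Docker storage path in its UI), but I have not set this up myself, and the path below is just a placeholder:

        # /etc/docker/daemon.json -- move Docker's images/containers off the system disk
        {
          "data-root": "/srv/ssd/docker"
        }
        # restart the daemon so it starts using the new location
        sudo systemctl restart docker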

    But thank you for the suggestion. It is often good to consider other solutions to your issues; in this case, however, I have considered it and, unless there is something I am missing, it isn't the best solution for me personally.

    Hence why I am trying to restore a dd image.

    So please may I ask for some help on the error I mentioned above, where after a restore (onto a drive of the same size) Debian rescue reports it is unable to mount the filesystem, and in a live Linux I get "can't read superblock on /dev/sda1"?
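
    The only recovery steps I have turned up so far revolve around the backup superblocks that ext4 keeps, something like this (assuming /dev/sda1 really is ext4), but I have not tried them yet because I don't know whether the restore itself is the problem:

        # List where the backup superblocks should be (dry run, changes nothing)
        sudo mke2fs -n /dev/sda1
        # Or read the locations from the filesystem itself, if it will open at all
        sudo dumpe2fs /dev/sda1 | grep -i superblock
        # Then check/repair against one of the backup locations, e.g. 32768
        sudo e2fsck -b 32768 /dev/sda1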

    Ideally I would like to get this resolved before a full drive failure. As of right now I am unable to apply any configuration changes. Applying the changes results in "Please wait, the configuration changes are being applied." I waited overnight, and when I came back, the "working" page had disappeared, the changes had not been applied, and I was told I still needed to apply changes.

    I will add something else that may be helpful, or not. I do get these constant errors, and quite often on reboot I am forced to do an fsck (or whatever it is that is close to that). You told me in a post many months ago not to worry about it, however.


    In addition, pretty much any changes to the system (Apply configuration) take forever. Yesterday I stopped looking after about 60 min.

    Well, I found the "Rescue Mode", but it doesn't seem like it's going to help me... Or I don't know how to use it. (I am SOOO glad I am doing this this way now rather than in an actual emergency. I would be in such a bad place... Anyway.)

    The rescue was saying it couldn't mount the filesystem on the device, so I decided to boot into a live Linux again to try and browse the drive I recovered the backup to, and it is telling me there is an error mounting it because it "can't read superblock on /dev/sda1".

    Now what?

    **EDIT**

    So I decided to try the restore again, as I figured there was no reason not to. And I am getting the same error whenever I try to read anything off the drive. So I assume either I am doing the restore wrong, or something else is wrong somewhere.
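
    What I plan to check next is whether the image actually landed where I think it did; the classic mistake seems to be writing an image of a whole disk onto a partition (or the other way around), which would produce exactly this kind of unreadable superblock. A few read-only checks from the live Linux (assuming the restored disk is /dev/sda):

        # Does the restored disk show the expected partition layout and filesystem types?
        lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/sda
        sudo blkid /dev/sda /dev/sda1
        # What is actually sitting at the start of the partition?
        sudo file -s /dev/sda1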

    Can you provide just a bit more information on the "Rescue option"? Is this an option I am given on install (like in a Windows install, "Repair your computer")? Or is it a command I use from the command line, or a GUI program I use when booting into a live desktop?

    And as far as Linux not doing many things the way Windows does, I agree. Under most circumstances Linux does things better, I agree. I even tried to switch to Linux for my daily driver, but unfortunately it doesn't play well with many of my devices.