Posts by Shadow Wizard

    Well, I am working on taking the plunge: getting ready to rebuild my systems from OMV 6 to the newest OMV.

    So, as usual, I do everything on a test system first, just a VM to be sure I have all the steps down, and everything works.

    And, as usual, something doesn't.

    Please advise.

    Hmm, I had to cut this short, as it won't let me post the whole error. And since I don't know what you need: if this isn't what you need, please advise what part of the error you need (or increase my posting limit) and I shall post it.

    Wow, I REALLY need to cut this off.. Wow.

    Is there a way to select a disk, under "Storage-->Disks" or anywhere else, to see what it is used for? (Is it shared, what directories are shared, is it part of a zpool, or a mergerfs, or anything?)


    Basically, I had to put a disk in to recover from a mergerfs failure (used it for a restore, then moved all the files off of it), and I have been having a hard time getting rid of it. I want to be 100% sure that OMV isn't using the disk for anything before I just remove it.
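    Since the GUI doesn't show all of this in one place, a quick command-line sketch can answer it. The device name /dev/sdb below is just an example (substitute the disk in question), and the zpool check only applies if ZFS is installed:

```shell
# Sketch: check whether a disk is still referenced before pulling it.
# /dev/sdb is an example device name, not taken from the post.

disk_in_use() {
    # true (exit 0) if any mounted filesystem is backed by the device
    grep -q "^$1" /proc/mounts
}

DISK=/dev/sdb

if disk_in_use "$DISK"; then
    echo "$DISK is still mounted somewhere; do not remove it yet"
else
    echo "no mounted filesystem references $DISK"
fi

# Also worth checking by hand:
#   lsblk -o NAME,FSTYPE,MOUNTPOINT,LABEL /dev/sdb   # partitions and mountpoints
#   zpool status                                     # zpool membership (if ZFS is installed)
# Note: a mergerfs pool lists its branch *paths*, not devices, in /proc/mounts,
# so also match the branch mountpoints against the lsblk output.
```

    If all three come back empty, nothing on the running system should be touching the disk.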

    it isn't restoring properly. Hard to say why.


    Use omv-regen to make a backup now and reinstall/restore to new drive following the omv-regen docs.

    Seems as though omv-regen will not work either:

    Tried more than once.

    Ideas?

    That sounds like perhaps the best approach, as I don't know if OMV 6 is totally up to date, and updating it likely won't happen with these errors.

    Most of my containers do not have volumes of their own; pretty much 90% of my containers use bound directories instead of volumes. So unless I totally misunderstand how Docker works, all I should need to do is create the containers again, pointing them to the same directories, and they should just keep working. If there are no actual volumes for the docker container, there is nothing to back up/restore; is that in fact correct?

    For example, my qbittorrent has the "/config" dir bound to "/SixTBpool/Config1/qbittorrent", where SixTBpool is a ZFS filesystem on separate mechanical drives that I will just re-mount under the rebuilt OMV. So when I recreate the container, I again bind the config directory to "/SixTBpool/Config1/qbittorrent" (assuming the mount point is the same) and it just picks up where it left off?
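    That is exactly how bind mounts behave: the state lives on the host path, not inside the container. A minimal compose sketch of the idea follows; only the host path /SixTBpool/Config1/qbittorrent comes from the post, and the image name, tag, and restart policy are illustrative assumptions:

```yaml
# hedged sketch of a bind-mounted container
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest   # example image, not from the post
    volumes:
      # host path : container path -- recreating the container with this same
      # line after a rebuild picks up the old config, because nothing is stored
      # in a named docker volume
      - /SixTBpool/Config1/qbittorrent:/config
    restart: unless-stopped
```

    As long as /SixTBpool mounts at the same path on the rebuilt system, the container should indeed pick up where it left off.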

    it isn't restoring properly. Hard to say why.


    Use omv-regen to make a backup now and reinstall/restore to new drive following the omv-regen docs.

    I was going to ask about that (finding a way to backup the config and re-installing), as it would also permit me to move to OMV 7. The biggest issue I have is making sure my docker containers are properly backed up/restored. Although most can just be re-created from scratch, Seafile and the containers associated with it would be quite bad if they didn't backup/restore properly.


    I have thought about installing it on a USB stick, but I have the SSDs lying around. I don't have any open bays for SSDs to run the docker containers on; they get backed up with the daily backup, and running the docker containers on a drive other than the system disk, although I am sure it would be easy to learn, is a skill I don't possess.

    But thank you for the suggestion. It is often good to consider other solutions to your issues; in this case, however, I have considered it, and unless there is something I am missing, it isn't the best solution for me personally.

    Hence why I am trying to restore a dd image.

    So please may I ask for some help on the error I mentioned above, where after a restore (on the same sized drive) Debian rescue reports it is unable to mount the filesystem, and in a live Linux I get "can't read superblock on /dev/sda1".

    Ideally I would like to get this resolved before a full drive failure. As of right now I am unable to apply any configuration changes. Applying the changes results in "Please wait, the configuration changes are being applied." I waited overnight, and when I came back, the "working" page had disappeared, the changes had not been applied, and I was told I needed to apply changes.

    I will add something else that may be helpful, or not. I do get these constant errors, and quite often on reboot I am forced to do an fsck (or whatever it is that is close to that). You told me in a post many months ago not to worry about it, however.


    In addition, pretty much any change to the system (Apply configuration) takes forever. Yesterday I stopped looking after about 60 minutes.

    Well, I found the "Rescue Mode", but it doesn't seem like it's going to help me, or I don't know how to use it. (I am SO glad I am doing this this way now rather than in an actual emergency. I would be in such a bad place.) Anyway.

    The rescue was saying it couldn't mount the filesystem on the device, so I decided to boot into a live Linux again to try and browse the drive I recovered the backup to, and it is telling me there is an error mounting it because it "can't read superblock on /dev/sda1".

    Now what?

    **EDIT**

    So I decided to try to restore again, as I figured there was no reason not to. And I am getting the same error whenever I try to read anything off the drive. So I assume either I am doing the restore wrong, or something else is wrong somewhere.
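    For the "can't read superblock" error specifically, ext filesystems keep backup superblocks that e2fsck can repair from. A hedged sketch, demonstrated on a scratch file image so nothing real gets touched; on the actual disk, /dev/sda1 would replace the image path:

```shell
# create a small scratch ext2 image as a stand-in for /dev/sda1
dd if=/dev/zero of=/tmp/fs.img bs=1M count=16 2>/dev/null
mke2fs -q -F /tmp/fs.img

# -n does a dry run and prints where the backup superblocks live
mke2fs -n -F /tmp/fs.img | grep -A1 -i superblock

# repair using a backup superblock; 8193 is the usual first backup for
# 1 KiB-block filesystems -- use a number from the -n output above
e2fsck -y -b 8193 /tmp/fs.img
echo "e2fsck exit status: $? (0 = clean, 1 = errors corrected)"
```

    If even the backup superblocks can't be read on the real drive, that points at the restore itself (or the drive) rather than the filesystem.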

    Can you provide just a bit more information on the "Rescue option"? Is this an option I am given on install (like in a Windows install, "Repair your computer")? Or is this a command I use from the command line, or a GUI program I use when booting into a live desktop?

    And as far as Linux not doing many things like Windows, I agree. Under most circumstances Linux does stuff better, I agree. I even tried to switch to Linux for my daily driver, but unfortunately it doesn't play well with many of my devices.

    A lot of that makes good sense. Thank you for taking the time to explain it. I personally have had issues with Clonezilla restoring to a smaller drive, but it may be just the world of GUIs and things just working; that was for something else entirely.

    And I know that gz is a compressed file format. It's just, again, coming from the world of Windows: when you have a 7 GB file that is a compressed disk image containing 27 GB of data and you restore it, it only ever writes 27 GB. dd wrote the whole disk (it wrote 111 GB on the 120 GB disk, and 222 GB on the 240 GB disk). And please don't think I am complaining about this. Things work differently under Linux, I understand that, and I am sure there are reasons for it. I only mention this so readers can understand why I am so confused about all this.
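    That behavior can be shown in miniature: dd images every sector, used or not, and gzip only squeezes the empty runs, so the restore always writes the full device size regardless of the .gz size. A small self-contained demo, with a file standing in for the disk:

```shell
# make a 16 MiB "disk" that is almost entirely empty
dd if=/dev/zero of=/tmp/disk.img bs=1M count=16 2>/dev/null
printf 'some real data' | dd of=/tmp/disk.img conv=notrunc 2>/dev/null

# back it up the same way as a real device: raw image, gzip-compressed
gzip -c /tmp/disk.img > /tmp/disk.img.gz

# the .gz is tiny because zeros compress extremely well...
ls -l /tmp/disk.img.gz

# ...but restoring still writes all 16 MiB, just as the 7 GB image
# wrote the full 111 GB back to the disk
gunzip -c /tmp/disk.img.gz | dd of=/tmp/restore.img 2>/dev/null

# the restored image is byte-identical to the original
cmp /tmp/disk.img /tmp/restore.img && echo "identical"
```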

    So we are still at the point where I have restored, both to a smaller drive and to the same size drive, and in both cases I get a black screen with white text that says:

    "GRUB"

    with a flashing cursor.

    From my understanding, when GRUB fails and drops to a prompt I should get "GRUB >", but I do not; I seem to be missing the ">", and all ability to type. Nothing works. No keypresses, Enter, Ctrl-C, Ctrl-Alt-Del, nothing. It was left on for 16+ hours to see if it would do something when I went home for the evening, but it is still there, so there must be something wrong with either the restore or something else.

    The only thing of note is that I am trying to boot it on a different machine with only the system disk installed. In my limited experience with Linux, that should not matter; I have always been able to swap system disks between machines and Linux just dealt with it. I am using the 240 GB disk at this point to eliminate the size difference as the problem: the restored system disk is the same size as the original.

    So any idea why I am not able to get it to boot?

    Not:

    "GRUB >"

    It actually stops at:

    "GRUB"

    I can type nothing. There is no response to keystrokes. Ctrl-Alt-Del does nothing.

    I just tried a dd restore on a new drive (tried it again, if you are looking at the other thread, on a drive of the correct size now; the old one did the same).

    Why can't this just work?

    Wow, this is a lot harder than I think it should be... Please understand, I come from a Windows world, where even the Linux-based backup and restore programs I use (such as Acronis True Image and Disk Director) just work. Restore to a smaller drive? No problem, I will take care of that for you. Restore to a larger drive? Sure, how did you want to do it?

    I have booted into a live version of Lubuntu, as I am trying to do this on a different system with a different sized (smaller) drive. The original drive was (I think) 240 GB, this one is 120 GB. (Don't worry, the entire backup will fit on the new drive almost 20 times over; there is enough room.)

    I started with:

    Code
    gunzip -c backupfile.dd.gz | dd of=/dev/sda status=progress

    It went through the whole process, writing 111 GB from a 7 GB file (that made no sense to me), and I was greeted with a blank drive.

    I then found some other directions suggesting I should restore the partition table first using the grubparts file (can't seem to find those directions again), and it seemed to kind of work, except I ended up with a 222 GB partition on a 120 GB drive. That obviously would have been bad.

    So now I am kind of at a total loss here. How do I do what should be a very simple task?

    So setting my e-mail information in the notification settings in OMV results in other things sending me e-mails as well? That is good/interesting to know. So then I guess that raises the question: what installed/designed this script? Is it something that gets installed with Linux Samba? With OMV Samba? Something else? Basically, whose "responsibility" is it to decide if this is an issue and to correct it, I suppose I am asking. I use "responsibility" in quotes because I know a lot of this is open source and I don't feel it's right for anyone to expect issues to be fixed in open source projects.

    And as far as the system being in bad shape: after the forced fsck on boot, all seems fine. That's what happened the last 3 times as well. For some reason I rebooted, it demanded an fsck, I did it, and it all worked. Last time I posted about it in this forum asking if I should be concerned, because it has happened a few times, and I was told not to worry about it. Are you saying I should be?

    38 emails sounds like a lot at first.


    But if there are 38 different processes for which a warning is issued, then that is exactly what the software is there for.


    You have to tell us more information about the error and the process which is monitored to do more analysis.

    Sure, since I don't need help resolving it, I can post the cause here. I just didn't want to turn this thread into a "let's fix 2 errors in 1 thread" thread.

    The message was as follows:

    Code
    The Samba 'panic action' script, /usr/share/samba/panic-action,
    was called for PID 1660763 (/usr/sbin/smbd).
    
    This means there was a problem with the program, such as a segfault.
    However, gdb was not found on your system, so the error could not be
    debugged.  Please install the gdb package so that debugging information
    is available the next time such a problem occurs.

    So I did the first thing you should do when you have an issue: I rebooted. It forced me to do an fsck on the system drive before it would boot, so I did that (it was the 4th time I have had to do it in the last couple of years on this system, btw; I was told in a previous thread not to worry about it) and it's fixed... I hope. I had the same thing yesterday (only 8 messages) but all worked; when it happened again today, I thought I should reboot. So far all is well.
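    If the forced fsck keeps recurring, tune2fs can show why: the filesystem state flag and the mount-count counters are what trigger a check at boot. A sketch against a scratch image (on the real system, tune2fs would point at /dev/sda1):

```shell
# scratch ext2 image as a stand-in for the real system partition
dd if=/dev/zero of=/tmp/chk.img bs=1M count=16 2>/dev/null
mke2fs -q -F /tmp/chk.img

# a "Filesystem state" other than clean forces a check every boot;
# "Mount count" vs. "Maximum mount count" drives periodic checks
tune2fs -l /tmp/chk.img | grep -Ei 'filesystem state|mount count|last checked'
```

    A state that repeatedly comes up not-clean usually points at unclean shutdowns or a failing drive rather than something to ignore, so checking the drive's SMART data would be worthwhile.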

    Are you really still on OMV 6.x? If you are, are you sure OMV 7.x still does the same thing?

    You do have a fair point. I guess I should have asked, "If this is not fixed in OMV 7, don't you think it should be?" I think we have had the discussion about upgrading, and I am going to try to do it next weekend, at least on one of my servers. I understand that asking for it to be fixed on 6.x is not reasonable.