Posts by KM0201

    Using dd to image the rootfs partition won't copy the boot track, so the restored image will not boot.

    There's a way to use dd to grab that piece separately too, but I don't know what it is; I've never needed to do that. There may also be a way to easily take that non-bootable rootfs partition restoration and clobber in a boot track.
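    One way to do it, sketched below with a scratch file standing in for the real disk so it's safe to try (on real hardware you'd substitute the actual device, e.g. /dev/sda, which is an assumption here):

```shell
# A scratch file stands in for the real disk so this sketch is safe to run.
disk=demo-disk.img
dd if=/dev/zero of="$disk" bs=1M count=4 2>/dev/null     # fake 4 MiB "disk"
printf 'BOOT' | dd of="$disk" conv=notrunc 2>/dev/null   # pretend boot code at sector 0

# The MBR is the first 512 bytes: 446 bytes of boot code, the 64-byte
# partition table, and a 2-byte signature. Save it separately:
dd if="$disk" of=mbr.img bs=512 count=1 2>/dev/null

# To clobber only the boot code back in, without touching the partition
# table, restore just the first 446 bytes:
dd if=mbr.img of="$disk" bs=446 count=1 conv=notrunc 2>/dev/null
```

    Restoring the rootfs partition image and then writing the saved boot code back should, in principle, give a bootable disk again.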

    Hmm, I don't know about any of that, but I've always liked the idea of clobbering something.. :)

    The original idea was to buy new hardware, but when I added everything I needed to the shopping list I was over 1500€, so I went the scavenger route 8)

    Yeah... I was hoping to rebuild at the end of last year.. then COVID kicked my ass for the end of December/start of January... I'm just getting back to work (thank goodness for sick time).. I've already acquired a few things I wanted to use that I wasn't concerned about sitting in a closet (chassis, some cables I'd need, a few USB-C flash drives, etc.)..

    I've always been a fan of Fractal Design cases... (the next one is going in an R5 I got a good deal on on clearance.. that will probably be my last chassis, as I'll die before I outgrow it)

    An image is not the same as import/export. If you have the possibility to export all settings, you can just install a new OMV release from scratch and import all of your settings (e.g. more than 50 users, their passwords, their privileges, groups and their privileges, names for shared folders, the setup for /etc/fstab, and much more).

    An OS image is only usable as long as it works or can be upgraded to a new release (the illusion of being safe).
    With exported settings you could bring your NAS back even from scratch if necessary (actually being safe).

    Of course, at any moment I could use e.g. Clonezilla to make an OMV image.

    But this doesn't address the question asked in this thread ;-)

    We know that, your question has been asked and answered. There's not much more to discuss about it.

    What did you use to image? I use this:

    now=$(date +"%Y.%m.%d.%H.%M.%S")
    file="omv-usb-$now.img"   # example output name built from the timestamp; adjust the path
    dd if=/dev/disk/by-id/usb-PNY_USB_3.0_FD_070B67D5A4B2D178-0:0 of="$file" bs=1M status=progress

    Prior to switching to a USB stick I used a 16GB 2.5-inch Samsung SSD in a USB case. That was considerably faster, but all these times are so small to begin with that they don't enter into the choice.

    I used dd, but it's been so long that if I told you what I did I'd probably be lying; it was at least 5-6 years ago. I wonder if by chance I imaged my whole 64GB SSD instead of just my root partition, and that's why it was taking so long..
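    For what it's worth, block size and compression make a big difference to dd imaging time. A sketch of a compressed image and restore, using a scratch file as a stand-in for the drive (substitute your real /dev/disk/by-id/... path):

```shell
# Scratch file standing in for the source drive, safe to run anywhere.
src=fake-ssd.img
dd if=/dev/zero of="$src" bs=1M count=8 2>/dev/null   # pretend 8 MiB drive

now=$(date +"%Y.%m.%d.%H.%M.%S")
# A large bs (1M) avoids the slowness of dd's default 512-byte blocks, and
# piping through gzip shrinks a mostly-empty drive image dramatically:
dd if="$src" bs=1M 2>/dev/null | gzip > "backup-$now.img.gz"

# Restore is the reverse pipe:
gzip -dc "backup-$now.img.gz" | dd of=restored.img bs=1M 2>/dev/null
cmp "$src" restored.img && echo "images match"
```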

    Here's how I always install portainer... Obviously it doesn't help you in your situation now, but it may as you move forward.

    As root/sudo or as a user in your docker group..

    mkdir portainer

    cd portainer

    touch docker-compose.yml

    nano docker-compose.yml

    Adjust the "/data" volume below to your needs.
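    (A minimal Portainer docker-compose.yml might look like this sketch; the image tag, port, and host path for "/data" are assumptions to adjust for your setup.)

```yaml
version: "3"
services:
  portainer:
    image: portainer/portainer-ce:latest   # assumed CE image
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /srv/appdata/portainer:/data   # the "/data" host path to adjust
```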

    Ctrl+X, then Y, then Enter to save

    docker-compose up -d

    As long as you always point at your /data correctly, Portainer will always be 100% the same when you redeploy it with that compose file: passwords and usernames, stacks that can be edited, configurations, etc.

    Zoki Thanks for your comment. Can you clarify if you mean the limited one or the old one (this one is not working and is just throwing the error)?

    I didn’t find any option for export, or maybe it just isn’t clear to me.

    Read the thread I linked you to; that's what he's referring to, I believe. Unfortunately I don't think there's a way you're going to get to where you can edit your current stacks.

    The only way I know to accomplish what you're doing: like I said, if it were me, I'd just delete Portainer, reinstall it with docker run or docker-compose (assuming you used the OMV button before), and make sure you set a /data volume for it. It will show your running containers; then just delete your stacks one by one (which will also delete the containers) and redeploy them (since you have them backed up). As long as your /config directories are right, you point at the same directories, etc., it *should* redeploy like nothing ever happened (at least that's been my experience when I tested scenarios like this).

    My dd OMV system-drive backup images (16GB USB 3.0 stick) are done on cron, so how slow or fast this is doesn't matter to me, but it's under seven minutes.

    A restore to bare metal using a proper USB 3.0 stick takes under fifteen minutes but I don't recall the exact figure.

    I wonder if I was doing something wrong, because when I tried to image my SSD, it was taking over an hour. I never tried to restore it.

    Obviously backing up/restoring doesn't matter if I'm clean installing a new version, but I can see where it is useful to some in lieu of reinstalling.. just not for me.

    I've been with OMV since v2, and this feature has been frequently asked for, most certainly since well before I ever got here.

    Back during the first beta releases of OMV, this "feature" was there. I could be wrong, but I don't think it even made it to the 1.0 release before it was removed.

    From what I can remember of my personal experience, the configuration backup seemed to work well. It was the restoration that was a problem. It frequently wrecked things, didn't work properly, etc. and often I ended up purging/reinstalling OMV and starting over anyway which basically destroyed the intent. I tested it multiple times and it never once worked properly. I'm assuming most had a similar experience as votdev pulled it. The feature works well in some other NAS operating systems, but unfortunately hasn't translated well to OMV. That was when imaging your OS drive became the recommended backup procedure.

    As I've said many times, I've never backed up my OS drive, and never will. Since I've migrated almost everything to docker, doing clean installs is easy and probably faster than the image/restore process. When I clean installed 6 on Thursday.. it was 50min from booting the installer USB, to all my services, shares, jobs, etc. being back to normal. Probably would have been quicker but for 2 things.. For some reason the Debian servers were really slow during the initial install... and I cried for at least 10min because I couldn't use drive labels anymore :) (j/k). Now admittedly, before docker, those clean installs were a total PITA, and generally took me at least 2-3hrs.

    Thank you for your suggestion. I will use this. But anyway, this is a workaround, not a real solution. Just look at what happens if you format your drive, or replace it. The UUID is a random number that changes after every drive format. Then what? You have a lot of wrong entries in your OMV system.

    Well what more of a "real solution" do you want? The drives mount by UUID because mounting by drive label was causing significant issues for some users.

    It still doesn't matter.. they may randomly change at some point. I did a clean install of omv 6 yesterday... When I started, my OS drive was sda... It's sdc now. In fstab, the drives are mounted by UUID, not /dev/sdX. As votdev said above.. it may rename the device every single boot.
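    For illustration (the UUID and mount point here are made up), a UUID-based fstab entry looks like:

```
# /etc/fstab: the filesystem UUID survives device renames; /dev/sdX does not
UUID=0a1b2c3d-1111-2222-3333-444455556666  /srv/data  ext4  defaults  0  2
```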

    Drive labels would be the easiest way to effect this, as said..

    This plugin will not happen since my dog already hacked the prototype ^^ She is a 90lb chocolate lab (in good shape, not fat) that is ALWAYS hungry.

    Labs are such awesome dogs. If she's 90lbs and not fat... she is a BIG one. Labs are one of those breeds really prone to obesity because they just tend to eat because food is there.. doesn't matter if they are hungry or not.

    If you can't get it resolved, the last resort is deleting Portainer and reinstalling it, making sure you have a /data directory set. Before doing that, you could use autocompose to create compose files off your running containers... Then just redeploy those stacks in the new Portainer and you should be able to edit them.

    Read Zoki posts here on autocompose.

    OMV on Raspberry and SD Card full!

    How did you install Portainer? Most of the Google hits are talking about an entry point error, and most, like you noticed, said it was fine after they reinstalled.

    Unfortunately for you, you have several stacks there, like you said. Do you have a data folder mapped for Portainer? I'm assuming not, or I don't think this would happen.

    (This probably makes a good case for backing up your docker-compose files when they work how you want them.. I have all mine backed up)

    Did you update your first post?

    Is it still "not started"?

    This plugin is a must-have for me

    Well, because of your post... I decided there was no choice but to kidnap Aaron. He is now locked in my basement and soon I will begin pumping him full of Starbucks and Methamphetamine. I can't guarantee his work will be accurate.. but it will get done.

    Seriously.. cut the guy some slack. He's got a real job too, you know. I think the progress he's made is pretty freaking amazing considering most of these plugins were a complete rewrite.

    If Macs are the same as Linux, it's in /home/username/.ssh/ . I'd agree that just deleting the known_hosts file would do the trick and be the easiest way to do this, unless he knows exactly which key to delete (I'm assuming he may have several).
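    If he does want to remove just the one key, ssh-keygen can do it by host name instead of deleting the whole file. A sketch against a throwaway known_hosts file (the host name is hypothetical; on a real system you'd drop the -f and let it use ~/.ssh/known_hosts):

```shell
host=nas.local                              # hypothetical host whose key changed
kh=./known_hosts                            # stand-in for ~/.ssh/known_hosts

# Build a demo entry from a throwaway key so the file has something to remove:
ssh-keygen -t ed25519 -N '' -f demo_key -q
printf '%s %s\n' "$host" "$(cut -d' ' -f1,2 demo_key.pub)" > "$kh"

# Remove only that host's entry (a backup is left in known_hosts.old):
ssh-keygen -R "$host" -f "$kh"
```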