Posts by 7ore

    I'm planning to upgrade and have a question about Portainer. It might have been answered in this thread already, but it is 57 pages long, so...

    First, I think it was the right choice to remove Yacht and Portainer from OMV-Extras.
    It is a change, but in the long run it is better to keep those kinds of tools on the "outside".


    But I need to check this before I start. I use two machines (soon three) with Docker containers, so I suspect that I need to continue using Portainer (or Yacht, Rancher or similar tools) to manage all the servers in one place?

    Or can the compose plugin handle external servers?
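
    (For context on the multi-server part: as I understand it, Portainer manages remote hosts by running its agent on each of them, which the main instance then connects to. A minimal sketch, assuming the standard portainer/agent image and its default port:)

    Code
    # run on each remote Docker host; portainer/agent on port 9001 is the documented default
    docker run -d --name portainer_agent --restart=always \
      -p 9001:9001 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /var/lib/docker/volumes:/var/lib/docker/volumes \
      portainer/agent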


    Or you could create a Portainer compose file in the compose plugin and keep using Portainer like you did before. You would then update Portainer from the compose plugin.

    Found this snippet. That sounds good.
    I would like to keep all Docker configs in the same folder; where is the Portainer config folder located?
    (I don't have access to my OMV 6 machine right now, so I might find the folder easily enough once I do.)
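
    (For reference, a minimal compose file for this approach could look like the sketch below; the /srv/docker/portainer path is a placeholder for wherever the shared config folder actually lives:)

    Code
    version: "3"
    services:
      portainer:
        image: portainer/portainer-ce:latest
        container_name: portainer
        restart: unless-stopped
        ports:
          - "9443:9443"
        volumes:
          # Docker socket so Portainer can manage the local host
          - /var/run/docker.sock:/var/run/docker.sock
          # placeholder path; point this at the shared docker-config folder
          - /srv/docker/portainer:/data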

    Solved the empty folder issue.
    I moved one of the drives in that RAID to another OMV instance (I started up another computer with an old version of OMV that I had).
    The drive had all the files on it, so I recovered the RAID there.

    Then I went back to this instance and continued with a cleanup to remove the extra disk and fix the other issues.

    Thanks for the help so far. I might have follow-up questions, but I think I know the path forward from here.

    OK, so I tried to remove the RAID from the GUI, but could only remove one of the disks in it.
    The other one is still there, and the option to remove the remaining disk can't be selected.



    But I have formatted both disks now; that worked.
    I have not restarted the machine yet.


    I've got a bigger issue now, though.

    One of the shared folders on md126 appears empty. The other folders are intact and the filesystem seems untouched. This seems like a separate issue outside of this thread, but as it is related I'll continue here.
    What steps can I take to recover or rescan the filesystem to restore the contents of that folder?
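
    (A read-only filesystem check is probably the safest first step; a sketch, assuming the filesystem on md126 is ext4 and can be unmounted:)

    Code
    # unmount, then run a non-destructive check (-n only reports, never repairs)
    umount /dev/md126
    fsck.ext4 -n /dev/md126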

    Thanks.
    Here are the results of those commands:




    I have removed the instructions and default settings from the file.

    Code
    e:~# cat /etc/mdadm/mdadm.conf
    ...
    
    # definitions of existing MD arrays
    ARRAY /dev/md/ior metadata=1.2 name=nasse:ior UUID=edb0fb6c:b865e482:91005459:1f536e48
    ARRAY /dev/md/puh metadata=1.2 name=nasse:puh UUID=c38f46f3:382c20e5:02a93b53:837294a3
    ARRAY /dev/md/kangu metadata=1.2 spares=1 name=nasse:kangu UUID=0c98e989:12de39be:be119640:e4f548e0



    I can see that md124 and md126 have the same UUID in blkid, and that the md124 array is missing from mdadm.conf.
    How do I fix that without making more of a mess of things?
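
    (From what I have read, the usual way to get a missing array definition back into mdadm.conf is to append the scanned definitions and rebuild the initramfs; a sketch to double-check before running:)

    Code
    # append current definitions, then edit the file to drop any duplicated lines
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u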

    MultiUser: "</>" is a button in the editor that creates the boxes above.

    I have OMV 6.0.632.

    I added two new disks as a new mirrored RAID yesterday.
    When I restarted the server, I noticed that an older RAID, as well as this new one, had started a sync.
    The old RAID's sync had stopped while the new one was running.
    But as I had issues accessing the existing RAID, I shut down the server again, removed the two new disks and restarted, so that the old RAID could finish its resync first.
    When it was done, I attached the two new disks again, and their sync finished this afternoon.

    But now I can't add a new filesystem, and I noticed mismatched names.
    The new array is called "/dev/md124" in the RAID tab, but that is the name the old one has if I open the dropdown under Shared folders.
    And the name the old one has on the RAID tab, "/dev/md126", is missing from the "Shared folders" tab.
    And there is nothing to select if I try to add a new filesystem.

    How do I fix this issue? The new RAID is completely empty, so I don't mind reformatting it if that helps.
    But I think there are references in Debian that need to be fixed to sort this out, and I am still too much of a Linux rookie to do it myself.
    The risk is that I make it worse.

    I shut down my OMV machine and replaced a disk in one of my mirrored RAIDs. The one left in it (sda) is a WD Red 3TB.


    When I restarted, that RAID (md125) was not shown in the "Software RAID" tab, so I can't add the new disk to the array.






    Code
    ~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md125 : inactive sda[2](S)
          2930135512 blocks super 1.2
    
    md126 : active raid1 sdc[0] sdb[1]
          2930135360 blocks super 1.2 [2/2] [UU]
    
    md127 : active raid1 sde[2] sdd[3]
          3906887512 blocks super 1.2 [2/2] [UU]








    Code
    ~# mdadm --detail --scan --verbose
    ARRAY /dev/md/ior level=raid1 num-devices=2 metadata=1.2 name=nasse:ior UUID=edb0fb6c:b865e482:91005459:1f536e48
       devices=/dev/sdd,/dev/sde
    ARRAY /dev/md/puh level=raid1 num-devices=2 metadata=1.2 name=nasse:puh UUID=c38f46f3:382c20e5:02a93b53:837294a3
       devices=/dev/sdb,/dev/sdc
    INACTIVE-ARRAY /dev/md125 num-devices=1 metadata=1.2 name=nasse:kangu UUID=0c98e989:12de39be:be119640:e4f548e0
       devices=/dev/sda
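
    (The mdstat output shows md125 inactive with sda flagged (S), so my understanding is that the array must be stopped and reassembled before a new disk can be added; a sketch of the commonly suggested commands, to be verified before use:)

    Code
    # stop the inactive array, then reassemble it from the remaining member
    mdadm --stop /dev/md125
    mdadm --assemble --run /dev/md125 /dev/sda
    # once the array is active (degraded), add the replacement disk back
    mdadm --add /dev/md125 /dev/sdX   # sdX is a placeholder for the new disk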

    This was due to a mix of permission issues and old container settings.
    After fixing the permissions and then creating app templates for my services (in Portainer), I could replace the old containers with new stacks.


    OMV 6 is really nice and a huge improvement over OMV 4! I knew it would be better, but this exceeded my expectations.

    I was about to upgrade my environment from OMV 4.x to 6, but after reading a bit about it, I decided to do a clean install of OMV 6 on a new drive.

    So I have copied the content from /var/lib/docker to the same folder on the new instance and created symlinks from /sharedfolders/ to the correct paths in OMV 6.
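
    (For the record, the symlinks look something like the line below; the disk label is a placeholder for the real OMV 6 mount point:)

    Code
    # map an old OMV 4 shared-folder path to the new OMV 6 mount point (example path)
    ln -s /srv/dev-disk-by-label-data/appdata /sharedfolders/appdata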


    The configurations are found, but not completely. Portainer can't find anything (probably not fully cleaned out from my first attempt).

    Yacht shows all containers and they are running. In most cases the correct values are shown when I click on a container, but if I select "Edit" the form is empty.

    I can log in to some, but not all.

    So I am about halfway there and have obviously missed some steps...



    I think I should do this one more time and make sure I do it properly...
    So are there any steps on how to move all my containers from OMV 4.x to 6.0?


    (I need to get Let's Encrypt up and running as well, but if I get this working, the SWAG container should be OK.)

    Thanks.

    Yes, it is probably about time to do that.

    I am a bit hesitant, as it usually goes sideways and gives me a lot of problems before everything is up and running again.

    But if I go to 6 now, that might give me peace for a couple more years to come...

    Thanks. I normally use 1.1.1.1 as my DNS server, and that is usually stable.
    I switched to Google DNS and got the same issue (but Google's DNS itself works).


    It might be as simple as registry-1.docker.io currently being down.
    I tried from two remote servers I have, with the same issue.


    So I suspect I just have to wait it out...
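
    (A quick way to tell whether it is the registry itself or local resolution is to query an external DNS server directly, bypassing whatever is listening locally:)

    Code
    # ask 1.1.1.1 directly instead of the local resolver
    nslookup registry-1.docker.io 1.1.1.1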

    My Docker containers are running, but the "Docker images repo" tab is empty, and when I try to pull a new image I get:



    And when I check the logs, I can see that dockerd has issues:


    Code
    dockerd[1342]: time="2022-06-17T12:30:49.847154455+02:00" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:48982->[::1]:53: read: connection refused"
    dockerd[1342]: time="2022-06-17T12:30:49.848723952+02:00" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:48982->[::1]:53: read: connection refused"
    dockerd[1342]: time="2022-06-17T12:30:49.849600480+02:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:48982->[::1]:53: read: connection refused"


    Has there been a change in Docker? I checked whether there was a new plugin update, but found none.

    Any ideas on how to proceed?
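
    (Looking at the log again: the lookups go to [::1]:53, i.e. a local resolver that refuses connections, so this looks like a local DNS problem rather than Docker or the registry. One workaround I have seen is to point dockerd at external DNS servers in /etc/docker/daemon.json and then restart the service with "systemctl restart docker"; "dns" is a documented dockerd option. A sketch:)

    Code
    {
      "dns": ["1.1.1.1", "8.8.8.8"]
    }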

    I have learned a number of things from my old environment, as I made a number of errors along the way.
    The old system does work, but there are issues with it, so now that I plan to upgrade to 4.0, I want to do a clean install on a new SSD.
    I plan to recreate most of the settings I used in the old system, but without the bad choices.


    So I would like to use the old config files as a template and reference when I configure the new environment.


    Which files should I back up to be able to do this?
    /etc/openmediavault/config.xml is the first that comes to mind for the main system, but is that enough for OMV?
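
    (As a starting point I was thinking of something like the command below; the file list beyond config.xml is a guess based on my own setup:)

    Code
    # read-only backup of the main OMV config plus fstab for reference
    tar czf omv-config-backup.tar.gz /etc/openmediavault/config.xml /etc/fstab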


    There are a couple of plugins I need to back up too, but my main concern is OMV itself.
    I plan to move the current system drive to another computer during the installation, but as that computer lacks the actual data drives, I am not sure how helpful that will be.


    Side question: would it be useful to run OMV as a Docker image?

    I would like to configure nginx so that I can reach all services without adding a port to the IP address.
    Instead, I would like to use something like "192.168.1.196/service1" and have nginx reroute that to the port in question.


    I tried to add this in the /etc/nginx/sites-available/default :


    location /sab {
        proxy_pass http://127.0.0.1:8080/;
    }


    But it only gives me a 404.


    So I suspect I need to change more things, or I configured the wrong file.
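
    (For comparison, a slightly fuller location block with the usual proxy headers is sketched below. Note that many apps (SABnzbd included) also need their own base-URL setting changed before they work under a sub-path, so this is not a verified config:)

    Code
    location /sab/ {
        # assumes SABnzbd itself has been configured with /sab as its base URL
        proxy_pass http://127.0.0.1:8080/sab/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }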

    The system does work, but I can't really change things, as this exception prevents pages like "File Systems" from loading properly.


    I can see that there are items in /etc/openmediavault/config.xml, under <fstab>, that are missing from /etc/fstab.


    But as I accidentally removed OMV and managed to reinstate it, I had presumed it would be the other way around: that things would be gone from the OMV settings but still in place in fstab...


    For instance, this section is from config.xml, but no UUID starting with "fd9" is in fstab.


    Code
    <mntent>
      <uuid>fd900769-c5d8-447d-b1c1-d803216edaa0</uuid>
      <fsname>c95301c0-9790-48c6-ab1f-17da8e3c3538</fsname>
      <dir>/media/c95301c0-9790-48c6-ab1f-17da8e3c3538</dir>
      <type>cifs</type>
      <opts></opts>
      <freq>0</freq>
      <passno>0</passno>
      <hidden>1</hidden>
    </mntent>



    What steps should I take to realign my system so that everything works as expected again?
    Are there any specific config files or logs I can post that would help the troubleshooting?


    On the "File Systems" page I get this exception:

    Code
    Error #0: exception 'OMV\Exception' with message 'No filesystem backend exists for 'cifs'.' in /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc:390
    Stack trace:
    #0 [internal function]: OMVRpcServiceFileSystemMgmt->getList(Array, Array)
    #1 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
    #3 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('FileSystemMgmt', 'getList', Array, Array, 1)
    #4 {main}
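
    (To see which mount entries config.xml actually holds, the 'cifs' one included, the file can be inspected read-only; a sketch, assuming xmlstarlet is installed:)

    Code
    # list uuid, type and dir of every <mntent> recorded in config.xml (read-only)
    xmlstarlet sel -t -m '//mntent' -v 'uuid' -o ' ' -v 'type' -o ' ' -v 'dir' -n \
      /etc/openmediavault/config.xml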