Posts by darkengel02

    Figured it out: an rm -fr did the trick. I'm leaving this up here for anyone who runs into a similar issue. Now I just have to figure out why myip:9000 for Portainer is not reachable.
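
    In case it helps anyone debugging the same thing, these are the first checks I'd run (a sketch only; the container name portainer is an assumption based on the default omv-extras deployment):

    ```shell
    # Is the container up, and is port 9000 published?
    docker ps --filter name=portainer

    # Is anything on the host actually listening on 9000?
    ss -tlnp | grep 9000

    # Any errors inside the container itself?
    docker logs --tail 50 portainer
    ```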

    Did you solve this? I have the same problem with Portainer not reachable: Docker is running fine, even Portainer is running, but I can't reach the web panel on port 9000.

    Here is another log, because the first one was too long (the complete log is attached).


    Hi everyone, I'm not sure if this is the right section for this issue. A few days ago I had trouble with Portainer, and I think with Docker in general. I use it to run Pi-hole, which had been working fine for a year, but one day the Pi-hole website became inaccessible. I then wanted to check Portainer to update the image or something, but it was inaccessible too, with the same issue I have now after a fresh install (I decided to reinstall Raspbian entirely, install OMV 6 again with the script, and then just install Docker and Portainer from the omv-extras buttons). I have been using OMV for a while, so I can install it and set up my shared folders without problems, but with Docker and Portainer I'm pretty new. I have read about the issues reported here, but there is no information. I usually keep OMV updated, and I think it broke after an update.


    So, after the long story, this is the issue: after a fresh install, when I open Portainer for the first time (and every time after that), I get the big Portainer logo and this message:
    "New Portainer installation

    Your Portainer instance timed out for security purposes. To re-enable your Portainer instance, you will need to restart Portainer"


    My setup is a Raspberry Pi 4 with OMV 6 on an SSD partition.


    Of course I tried restarting the Portainer and Docker services and rebooting, but the result is just the same. Somewhere I read it could be a cache problem, so I tried private browsing mode and devices I had never used for Portainer, with the same result. I had this issue on my old installation with all my drives attached, but I get the same on a fresh install with OMV 6 at defaults. Here are some logs I read could be useful:


    Code
     grep -r /etc/apt/ -e docker
    /etc/apt/sources.list.d/omvextras.list:deb [arch=arm64] https://download.docker.com/linux/debian bullseye stable
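
    For the record, this is how I've been restarting it from the CLI (a sketch; the container name portainer is an assumption based on the default omv-extras deployment, and as far as I've read, a new install times out if no admin user is created within a few minutes of startup):

    ```shell
    # Restart the Portainer container, as the timeout message asks
    docker restart portainer

    # Or restart the whole Docker daemon if that is not enough
    systemctl restart docker

    # Then check recent container logs for errors before the timeout hits again
    docker logs --since 5m portainer
    ```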

    Hope someone can help me.

    Yes, I just pushed the apt clean button. But I don't understand: so nobody has plugins from omv-extras.org? I don't remember whether, when installing the OMV keyring, I still had the Erasmus repo and got an installation error; I then changed the repo to Stoneburner and, on installing again, it told me the OMV keyring was already installed.

    I didn't install omv-extras3; I followed this guide. The name of the file I installed was openmediavault-omvextrasorg_latest_all.deb:

    Code
    root@radxa:~# dpkg -l | grep openm
    ii  openmediavault                  2.2.2                          all          Open network attached storage solution
    ii  openmediavault-keyring          0.4                            all          GnuPG archive keys of the OpenMediaVault archive
    ii  openmediavault-omvextrasorg     2.13.1                         all          OMV-Extras.org Package Repositories for OpenMediaVault

    Hi all, I installed OMV 2.2.2 on a Radxa Rock board (a Rockchip ARM board), and I also installed the omv-extras.org plugin following its guide. But now, after updating the repos and everything, I can't see any new plugins to install, and I get an error when I refresh the plugin list. Please see the attachment.


    The RAID is still working now with 2 drives. So /dev/sdb is clean now after the dd command? What do I do with this drive now? I'd like to add it back to the array to have redundancy again. It is a new drive: the 2 WDC drives are new and the ST4000 drive is 3 years old. Can I do that? How?
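
    For anyone finding this later, this is roughly how I understand a wiped drive gets added back to a degraded md array (a sketch only; the device names are from my box, so double-check yours with lsblk first, because running this against the wrong device is destructive):

    ```shell
    # Confirm which device is the candidate (here /dev/sdb, per my earlier posts)
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

    # Remove any leftover filesystem/partition signatures
    wipefs -a /dev/sdb

    # Add the drive to the degraded array; the rebuild starts automatically
    mdadm --add /dev/md0 /dev/sdb

    # Follow the resync progress
    watch cat /proc/mdstat
    ```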

    @ryecoaaron I could start the array with:


    Code
    mdadm --assemble /dev/md0 /dev/sd[cd] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 0.
    mdadm: Marking array /dev/md0 as 'clean'
    mdadm: added /dev/sdd to /dev/md0 as 0
    mdadm: no uptodate device for slot 4 of /dev/md0
    mdadm: added /dev/sdc to /dev/md0 as 1
    mdadm: /dev/md0 has been started with 2 drives (out of 3)
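
    To check the state of the array after the forced assemble (device names as in the output above):

    ```shell
    # Should report "clean, degraded" with 2 of 3 devices active
    mdadm --detail /dev/md0

    # Quick view of member devices and any ongoing resync
    cat /proc/mdstat
    ```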


    I think /dev/sdb has a filesystem problem; this drive is the first one in the array (I suppose). Is it risky to work like this? What would be the most advisable thing to do?


    In Array Management the Repair button is enabled; maybe that can fix the /dev/sdb drive?

    Same result:

    Code
    mdadm --assemble /dev/md0 /dev/sd[cdb] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: no recogniseable superblock on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted

    No results, same message:


    Code
    mdadm --zero-superblock /dev/sdb
    mdadm: Unrecognised md component device - /dev/sdb


    Code
    dd if=/dev/zero of=/dev/sdb bs=512 count=100000
    100000+0 records in
    100000+0 records out
    51200000 bytes (51 MB) copied, 1.81299 s, 28.2 MB/s


    Code
    mdadm --assemble /dev/md0 /dev/sd[bcd] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: no recogniseable superblock on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted


    Is the sdb disk still alive?

    Thanks as always for your support, @ryecoaaron. I did it, and it shows this:


    Code
    mdadm --assemble /dev/md0 /dev/sd[bcd] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted

    Hello guys, I have a serious problem with a RAID 5 array. It spans 3x4 TB drives full of data, so I need to fix it without losing anything. Please help me with this hard work. Some details:


    Code
    cat /proc/mdstat: 
    Personalities : 
    md0 : inactive sdd[0](S) sdc[1](S)
          7813775024 blocks super 1.2
    
    unused devices: <none>


    Code
    blkid:
    /dev/sda1: UUID="36b09d97-b3d9-468f-874d-cf3eced0e1da" TYPE="ext4" PARTUUID="0009e9b4-01" 
    /dev/sda5: UUID="ab4934a4-850d-45be-81aa-94bd123613b2" TYPE="swap" PARTUUID="0009e9b4-05" 
    /dev/sdd: UUID="c6178cc8-262f-56df-e588-2c97e7aa2e6c" UUID_SUB="4f3aba4b-d752-8496-6dab-e164f4b6d617" LABEL="PRODUCCION:Produccion" TYPE="linux_raid_member" 
    /dev/sdc: UUID="c6178cc8-262f-56df-e588-2c97e7aa2e6c" UUID_SUB="bdc0334a-063d-8f1e-b3f0-519c683d01b1" LABEL="PRODUCCION:Produccion" TYPE="linux_raid_member" 
    /dev/sdb1: PARTLABEL="LDM metadata partition" PARTUUID="bfad6539-ecbe-11e3-b8f5-0010c6b06aae" 
    /dev/sdb2: PARTLABEL="Microsoft reserved partition" PARTUUID="986e2150-2926-4fcb-87bb-a10f7bbb93d2" 
    /dev/sdb3: PARTLABEL="LDM data partition" PARTUUID="bfad653c-ecbe-11e3-b8f5-0010c6b06aae"
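
    Also, in case it is useful, this is what I would run to see what mdadm itself reports per disk (the blkid output above shows Windows LDM partitions on /dev/sdb, which already looks suspicious to me):

    ```shell
    # Per-disk md superblock details: array UUID, device role, event count
    mdadm --examine /dev/sdb /dev/sdc /dev/sdd

    # Partition table view of the suspicious disk
    fdisk -l /dev/sdb
    ```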



    Thank you in advance. :/