Posts by Mychomizer

    Yes, do not reboot, do not pass Go until the RAID has rebuilt :) Oh, and make sure you select the correct drive to wipe :) to fully recover the RAID.
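To see when the rebuild is done, you can watch the array state; a minimal sketch, assuming the array is /dev/md127 as elsewhere in this thread:

```shell
# Show rebuild progress; do not reboot until the recovery line disappears
cat /proc/mdstat

# Or query the array directly; while recovery is running, mdadm --detail
# prints a "Rebuild Status : NN% complete" line
mdadm --detail /dev/md127 | grep -E 'State|Rebuild Status'

# Refresh automatically every few seconds
watch -n 5 cat /proc/mdstat
```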

    Awesome! The RAID seems to be rebuilding correctly now; hopefully it works like it should.


    Can't thank you enough for your time and effort in helping me! Expect a donation to the OMV project from me very soon, it's the least I can do!


    EDIT: RAID recovered successfully!

    mdadm --assemble --force --verbose /dev/md127 /dev/sd[acd]:


    mdadm: looking for devices for /dev/md127

    mdadm: /dev/sda is identified as a member of /dev/md127, slot 1.

    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.

    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.

    mdadm: no uptodate device for slot 0 of /dev/md127

    mdadm: added /dev/sdc to /dev/md127 as 2

    mdadm: added /dev/sdd to /dev/md127 as 3

    mdadm: added /dev/sda to /dev/md127 as 1

    mdadm: /dev/md127 has been started with 3 drives (out of 4).


    The RAID shows in the web UI again!


    Is the next step for me what you wrote in the first post: clean the new disk and then proceed to recover the RAID?
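For reference, that step usually looks roughly like the following. This is a sketch only, assuming /dev/sdb is the new disk and /dev/md127 the array (both taken from this thread); note that wipefs destroys whatever is on the target drive, so the device name must be verified first:

```shell
# Triple-check which disk is the new one before wiping anything
lsblk -d -o NAME,MODEL,SERIAL,SIZE

# Clear any stale signatures on the new disk (destructive!)
wipefs -a /dev/sdb

# Add it to the degraded array; the rebuild starts automatically
mdadm --add /dev/md127 /dev/sdb

# Watch progress, and do not reboot until recovery reaches 100%
cat /proc/mdstat
```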

    Sorry!


    cat /proc/mdstat:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    unused devices: <none>

    Empty because I ran mdadm --stop /dev/md127 earlier?

    blkid:

    /dev/sdc: UUID="478fbf22-daf1-9758-1264-80b1d486c52c" UUID_SUB="39909b78-25bf-765f-8683-2df664608779" LABEL="Jocke-Microserver:raid5" TYPE="linux_raid_member"

    /dev/sdd: UUID="478fbf22-daf1-9758-1264-80b1d486c52c" UUID_SUB="cb425cbd-23b5-0344-7980-615016972c0d" LABEL="Jocke-Microserver:raid5" TYPE="linux_raid_member"

    /dev/sde1: UUID="8a18c2c0-fecd-4924-a9a4-0790640a774e" TYPE="ext4" PARTUUID="57716ba9-01"

    /dev/sde5: UUID="2bbaa483-2349-4d9f-ad57-cdae88500dce" TYPE="swap" PARTUUID="57716ba9-05"

    /dev/sda: UUID="478fbf22-daf1-9758-1264-80b1d486c52c" UUID_SUB="7c8f1a0c-d9fb-f4be-5108-0bd2d4fd7bef" LABEL="Jocke-Microserver:raid5" TYPE="linux_raid_member"


    mdadm --detail /dev/md127:

    mdadm: cannot open /dev/md127: No such file or directory
    Same reason as cat /proc/mdstat?

    Then I'm afraid it's dead

    Are you sure? Because /dev/sdb, the one it says has no superblock, is the new disk.


    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors

    Disk model: Samsung SSD 840

    Disk identifier: 0x57716ba9

    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFAX-68J

    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    WDC WD40EFAX-68J = /dev/sdb should be the new disk, since its model name differs from the other three.
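One way to double-check that mapping is to list model and serial number per device letter; a sketch:

```shell
# Model and serial per device letter
lsblk -d -o NAME,MODEL,SERIAL,SIZE

# Stable, model- and serial-based names that survive reboots
# (filter out the per-partition symlinks)
ls -l /dev/disk/by-id/ | grep -v -- -part
```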

    Thanks for the quick reply! However, something didn't work.

    mdadm --stop /dev/md127:
    mdadm: stopped /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abc]:
    mdadm: looking for devices for /dev/md127

    mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)

    mdadm: no RAID superblock on /dev/sdb

    mdadm: /dev/sdb has no superblock - assembly aborted


    EDIT:
    I think the drive letters might have changed because I rebooted the system after I wrote the post.
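That is expected: the /dev/sdX letters are assigned at boot and can shuffle between reboots. The paths under /dev/disk/by-id stay stable, so they are safer for identifying a specific drive; a sketch (the id name below is a placeholder, not taken from this system):

```shell
# One stable symlink per drive, keyed on model and serial
ls -l /dev/disk/by-id/ | grep -v -- -part

# mdadm accepts these paths too, e.g. (placeholder id):
# mdadm --examine /dev/disk/by-id/ata-WDC_WD40EFRX-68W_SERIALNUMBER
```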


    NEW fdisk -l | grep "Disk ":

    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors

    Disk model: Samsung SSD 840

    Disk identifier: 0x57716ba9

    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFAX-68J

    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Hello, first of all big love to the OMV devs and community! I've been using it since I got into home servers (since OMV v2).


    Short story: for storage I have been running a RAID 5 with 4x 4 TB WD Reds for years. Then I had a power outage and one disk died. When I replaced the broken disk, the RAID disappeared from the web UI. I am currently in full panic that years of backups have been lost.


    From before I swapped disks and the RAID disappeared:


    cat /proc/mdstat:


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md127 : inactive sda[1](S) sdc[3](S) sdb[2](S)

    11720662536 blocks super 1.2

    unused devices: <none>


    blkid:


    /dev/sdb: UUID="478fbf22-daf1-9758-1264-80b1d486c52c" UUID_SUB="39909b78-25bf-765f-8683-2df664608779" LABEL="Jocke-Microserver:raid5" TYPE="linux_raid_member"

    /dev/sda: UUID="478fbf22-daf1-9758-1264-80b1d486c52c" UUID_SUB="7c8f1a0c-d9fb-f4be-5108-0bd2d4fd7bef" LABEL="Jocke-Microserver:raid5" TYPE="linux_raid_member"

    /dev/sdc: UUID="478fbf22-daf1-9758-1264-80b1d486c52c" UUID_SUB="cb425cbd-23b5-0344-7980-615016972c0d" LABEL="Jocke-Microserver:raid5" TYPE="linux_raid_member"

    /dev/sdd1: UUID="8a18c2c0-fecd-4924-a9a4-0790640a774e" TYPE="ext4" PARTUUID="57716ba9-01"

    /dev/sdd5: UUID="2bbaa483-2349-4d9f-ad57-cdae88500dce" TYPE="swap" PARTUUID="57716ba9-05"


    fdisk -l | grep "Disk ":


    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68W

    Disk /dev/sdd: 111.8 GiB, 120034123776 bytes, 234441648 sectors

    Disk model: Samsung SSD 840

    Disk identifier: 0x57716ba9

    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFAX-68J


    cat /etc/mdadm/mdadm.conf:


    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.

    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #

    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions

    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>

    # definitions of existing MD arrays


    mdadm --detail --scan --verbose:


    INACTIVE-ARRAY /dev/md127 num-devices=3 metadata=1.2 name=Jocke-Microserver:raid5 UUID=478fbf22:daf19758:126480b1:d486c52c

    devices=/dev/sda,/dev/sdb,/dev/sdc
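For an inactive array like this one, the usual recovery attempt (which this thread eventually follows) is to stop the half-assembled array and force a reassembly from the surviving member disks; a sketch, assuming the device names reported in the listing above:

```shell
# Stop the half-assembled, inactive array
mdadm --stop /dev/md127

# Force-assemble it from the members that blkid reports as
# linux_raid_member devices
mdadm --assemble --force --verbose /dev/md127 /dev/sda /dev/sdb /dev/sdc
```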


    Please try to be precise if you need me to post something else or do something, since my knowledge of terminal commands and usage is basically copy and paste.


    Thanks in advance

    The last thing I see is "loading initial ramdisk", then an instant reboot.


    Anyway, I'm in rescue mode right now. No idea how this works; do I just install using rescue mode? (I'm just afraid of messing up my RAID.)


    Or should I just reinstall the OS? I run Docker containers, so all the appdata is still there; there would be just minor settings to redo in the web GUI. Maybe I should refrain from doing apt upgrade? I just like to be on the latest.

    Did you stop your VMs before running the script?

    Yes, I stopped the VM I had running and disabled the VirtualBox plugin in the OMV web GUI. I didn't uninstall the plugin, just disabled it.


    Edit: this is the error I get when trying to open phpVirtualBox:

    Code
    There was an error obtaining the list of registered virtual machines from VirtualBox. Make sure vboxwebsrv is running and that the settings in config.php are correct.
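That error usually means the vboxwebsrv backend phpVirtualBox talks to is not running. A sketch for checking it; the service name vboxweb-service is an assumption based on the standard VirtualBox packages of that era:

```shell
# Check and (re)start the VirtualBox web service backend
service vboxweb-service status
service vboxweb-service start

# vboxwebsrv listens on localhost port 18083 by default
netstat -tlnp | grep 18083
```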

    Hello! First post here.


    I'm trying to upgrade my VirtualBox to version 5. I ran this script, but it's not working.


    Output of virtualbox:

    Code
    WARNING: The vboxdrv kernel module is not loaded. Either there is no module
             available for the current kernel (3.2.0-4-amd64) or it failed to
             load. Please recompile the kernel module and install it by
    
    
               sudo /sbin/vboxconfig
    
    
             You will not be able to start VMs until this problem is fixed.
    Qt WARNING: VirtualBox: cannot connect to X server


    Output of sudo /sbin/vboxconfig:


    Code
    Starting VirtualBox kernel modules ...failed!
      (modprobe vboxdrv failed. Please use 'dmesg' to find out why)
    Starting VirtualBox web service ...fail!
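When modprobe vboxdrv fails like this, the module usually has not been built for the running kernel. A sketch of the common fix on Debian; the package names are assumptions, and the headers must match the kernel reported by uname -r:

```shell
# Install headers matching the running kernel plus build tooling
apt-get install linux-headers-$(uname -r) build-essential dkms

# Rebuild and load the VirtualBox kernel modules
/sbin/vboxconfig

# Confirm the module is now loaded
lsmod | grep vboxdrv
```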


    Output of dmesg:
    http://pastebin.com/1cnyu6LT


    Sorry for the pastebin, but: "Message is too long, must be below 10,000 characters".


    P.S. I'm super new to this kind of stuff.