RAID 1 clean, degraded - help me

  • Hi, I need help because my RAID is degraded.
    The array was created from the disks /dev/sdb and /dev/sdc.


    Please help me, because I can't restore it with the recovery function in OMV (3.0.99).


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mdadm --manage '/dev/md0' --add /dev/sdb 2>&1' with exit code '1': mdadm: add new device failed for /dev/sdb as 2: Invalid argument


    Error #0:
    exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mdadm --manage '/dev/md0' --add /dev/sdb 2>&1' with exit code '1': mdadm: add new device failed for /dev/sdb as 2: Invalid argument' in /usr/share/php/openmediavault/system/process.inc:175
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(362): OMV\System\Process->execute()
    #1 [internal function]: OMVRpcServiceRaidMgmt->add(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('add', Array, Array)
    #4 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('RaidMgmt', 'add', Array, Array, 1)
    #5 {main}
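One common cause of `mdadm: add new device failed ... Invalid argument` is stale RAID metadata left on the removed member, though the exact cause here is not confirmed. A hedged diagnostic sketch, printed as a dry run so nothing is executed; all three commands are read-only, and the device name `/dev/sdb` is taken from the error above:

```shell
# Dry run: the diagnostic commands are collected in a variable and printed
# for review, not executed. None of them modify the disk.
diag="mdadm --examine /dev/sdb    # show any leftover RAID superblock on sdb
wipefs -n /dev/sdb               # -n (no-act): list signatures, erase nothing
dmesg | tail -n 20               # kernel log often explains the EINVAL"
printf '%s\n' "$diag"
```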


    Here is the diagnostic information:





    Code
    root@Cubotto:~# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc[1]
          1953383488 blocks super 1.2 [2/1] [_U]
          bitmap: 10/15 pages [40KB], 65536KB chunk
    unused devices: <none>
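For reference, the `[2/1] [_U]` field above means the array wants 2 members but only 1 is active; the underscore marks the missing slot (slot 0, which was `/dev/sdb`). A small sketch of how a script could check this, using the status field copied from the output above:

```shell
# Member-map field copied from the mdstat output above.
status='[2/1] [_U]'
# An underscore in the member map means at least one slot is missing.
if printf '%s' "$status" | grep -q '_'; then
  state=degraded
else
  state=clean
fi
echo "$state"
```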



    Code
    root@Cubotto:~# blkid
    /dev/sda1: UUID="b19de7a4-45cf-4bff-ae2a-032c0696f900" TYPE="ext4" PARTUUID="228bd653-01"
    /dev/sda5: UUID="23e7a2ab-6e0d-476f-8382-6565bad9790f" TYPE="swap" PARTUUID="228bd653-05"
    /dev/md0: LABEL="Dati" UUID="47b7050e-2199-4d82-8de9-4bafd91047d4" TYPE="ext4"
    /dev/sdc: UUID="a388506b-8091-a6e4-93d0-4ad5f9e5cdae" UUID_SUB="ae433e0c-5c96-e4d4-e963-8d98c51c8f76" LABEL="Cubotto:Dati" TYPE="linux_raid_member"
    Code
    root@Cubotto:~# fdisk -l | grep "Disk "
    Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sda: 55,9 GiB, 60022480896 bytes, 117231408 sectors
    Disk identifier: 0x228bd653
    Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/md0: 1,8 TiB, 2000264691712 bytes, 3906766976 sectors



    Code
    root@Cubotto:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=Cubotto:Dati UUID=a388506b:8091a6e4:93d04ad5:f9e5cdae
       devices=/dev/sdc
    • Official post

    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bc]
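The two commands above can be sketched as a reviewable dry run; the steps are printed, not executed, and the final `/proc/mdstat` check is an assumption about what you would want to verify afterwards:

```shell
# Dry run of the reassembly sequence: print the steps instead of running them.
md=/dev/md0
plan="mdadm --stop $md
mdadm --assemble --force --verbose $md /dev/sdb /dev/sdc
cat /proc/mdstat   # afterwards: both members listed, resync in progress"
printf '%s\n' "$plan"
```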

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    But I receive an error when I try to stop the RAID.

    The filesystem is mounted. It can be a pain to unmount since services are most likely using it. I recommend booting a rescue distro like systemrescuecd to fix this.
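A hedged sketch of the unmount path, if you do attempt it on the running system instead of a rescue distro. The mount point is an assumption: OMV usually mounts filesystems under `/srv/dev-disk-by-label-<LABEL>`, and the label is `Dati` per the `blkid` output above. Printed as a dry run:

```shell
# Dry run: steps are printed for review, not executed.
# ASSUMPTION: the mount point follows OMV's /srv/dev-disk-by-label-<LABEL>
# scheme; verify with `mount | grep md0` on the real system first.
mnt=/srv/dev-disk-by-label-Dati
steps="fuser -vm $mnt   # list processes holding the mount; stop those services
umount $mnt
mdadm --stop /dev/md0"
printf '%s\n' "$steps"
```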

