raid 1 clean degraded - help me

    • OMV 3.x
    • raid 1 clean degraded - help me

      Hi, I need help because my RAID is degraded.
      The RAID 1 array was created from disks /dev/sdb and /dev/sdc.

      Please help me, because I can't restore it with the recover function in OMV (3.0.99).

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mdadm --manage '/dev/md0' --add /dev/sdb 2>&1' with exit code '1': mdadm: add new device failed for /dev/sdb as 2: Invalid argument

      Errore #0:
      exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mdadm --manage '/dev/md0' --add /dev/sdb 2>&1' with exit code '1': mdadm: add new device failed for /dev/sdb as 2: Invalid argument' in /usr/share/php/openmediavault/system/process.inc:175
      Stack trace:
      #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(362): OMV\System\Process->execute()
      #1 [internal function]: OMVRpcServiceRaidMgmt->add(Array, Array)
      #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('add', Array, Array)
      #4 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('RaidMgmt', 'add', Array, Array, 1)
      #5 {main}

      Here is the relevant information:




      Source Code

      root@Cubotto:~# cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdc[1]
            1953383488 blocks super 1.2 [2/1] [_U]
            bitmap: 10/15 pages [40KB], 65536KB chunk

      unused devices: <none>



      Source Code

      root@Cubotto:~# blkid
      /dev/sda1: UUID="b19de7a4-45cf-4bff-ae2a-032c0696f900" TYPE="ext4" PARTUUID="228bd653-01"
      /dev/sda5: UUID="23e7a2ab-6e0d-476f-8382-6565bad9790f" TYPE="swap" PARTUUID="228bd653-05"
      /dev/md0: LABEL="Dati" UUID="47b7050e-2199-4d82-8de9-4bafd91047d4" TYPE="ext4"
      /dev/sdc: UUID="a388506b-8091-a6e4-93d0-4ad5f9e5cdae" UUID_SUB="ae433e0c-5c96-e4d4-e963-8d98c51c8f76" LABEL="Cubotto:Dati" TYPE="linux_raid_member"

      Source Code

      root@Cubotto:~# fdisk -l | grep "Disk "
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sda: 55,9 GiB, 60022480896 bytes, 117231408 sectors
      Disk identifier: 0x228bd653
      Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/md0: 1,8 TiB, 2000264691712 bytes, 3906766976 sectors



      Source Code

      root@Cubotto:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=Cubotto:Dati UUID=a388506b:8091a6e4:93d04ad5:f9e5cdae
      # instruct the monitoring daemon where to send mail alerts

      Source Code

      root@Cubotto:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=Cubotto:Dati UUID=a388506b:8091a6e4:93d04ad5:f9e5cdae
         devices=/dev/sdc
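      For reference: /dev/sdb does not show up as a linux_raid_member in the blkid output above, which fits the "Invalid argument" error when the web interface tries to add it back. Before re-adding the disk, its md metadata can be inspected without changing anything; a minimal diagnostic sketch using the device names from this thread:

      Source Code

      # Diagnostic only, nothing is modified
      mdadm --examine /dev/sdb      # may report no superblock, or a stale one
      mdadm --examine /dev/sdc      # the surviving member, for comparison
      mdadm --detail /dev/md0       # current state of the degraded array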
    • mdadm --stop /dev/md0
      mdadm --assemble --force --verbose /dev/md0 /dev/sd[bc]
      omv 4.1.13 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
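      If the forced reassembly suggested above succeeds, mdadm should start resyncing the re-added disk from the surviving member on its own; the progress can be checked from the shell. A minimal sketch:

      Source Code

      # Rebuild progress; [UU] on the md0 line means both members are active again
      cat /proc/mdstat
      # Per-member state and rebuild status of the array
      mdadm --detail /dev/md0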
    • dopey_it wrote:

      but I receive an error when I try to stop the raid

      The filesystem is mounted. It can be a pain to unmount since services are most likely using it. I recommend booting a rescue distro like systemrescuecd to fix this.
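      As an alternative to booting a rescue disc, the processes that keep the filesystem busy can be identified and stopped before unmounting; a minimal sketch, where the mount point path is an assumption (check the output of mount for the real one):

      Source Code

      # The mount point below is an assumption; confirm it first with: mount | grep md0
      fuser -vm /srv/dev-disk-by-label-Dati   # list the processes using the mount
      umount /srv/dev-disk-by-label-Dati      # unmount once those services are stopped
      mdadm --stop /dev/md0                   # the array can then be stopped and reassembled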