RAID clean, degraded and the first HD is now the "spare" one...

  • Hi to all. I'm not very skilled with RAID, and a few days ago I found both arrays down.

    I set up the first one (md0) again after reading around on the net, and it is now working properly, but the second one is still in "clean, degraded" status and one HD is marked as "spare".

    Here is the status of md1 and some related data; a short sketch of further inspection commands follows the output:

    Code
    root@nas:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md1 : active raid1 sdc[2](S) sde[1]
          976631360 blocks super 1.2 [2/1] [_U]
    
    md0 : active raid1 sda1[2] sdb1[1]
          2930264896 blocks super 1.0 [2/2] [UU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    unused devices: <none>
    Code
    root@nas:~# blkid
    /dev/sda1: UUID="9fbc3293-aa57-ed39-0170-1e90ef95a135" UUID_SUB="62e53fb1-e34d-edab-1701-de1fcbd3b99d" LABEL="linux:0" TYPE="linux_raid_member" PARTUUID="52289811-8f0e-f049-a7dd-96d3c3f8405d"
    /dev/sdc: UUID="be21a281-1f0e-7f6d-4580-c9f196b04695" UUID_SUB="49f67fb1-f022-cf9b-ab77-169c08a9a3b9" LABEL="NAS:1" TYPE="linux_raid_member"
    /dev/sde: UUID="be21a281-1f0e-7f6d-4580-c9f196b04695" UUID_SUB="1dec0a3d-e37e-f1bf-7461-44f23c36aac0" LABEL="NAS:1" TYPE="linux_raid_member"
    /dev/sdb1: UUID="9fbc3293-aa57-ed39-0170-1e90ef95a135" UUID_SUB="c8e39ba2-9b09-d3bb-c9af-ce67b297e070" LABEL="linux:0" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="40236d5f-ade4-4d2d-98dd-833b8ce0e5a0"
    /dev/sdd1: UUID="fbe9d17c-d62c-474d-a05a-e06c4d088fc5" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="cdb93944-01"
    /dev/sdd5: UUID="f1a20c21-8f6f-4a44-afe3-12dc1b13ae98" TYPE="swap" PARTUUID="cdb93944-05"
    /dev/md0: UUID="d5af4f81-b8ff-4fa7-9830-cb8bb3439c20" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/md1: UUID="8090f9b9-b4b1-4166-a785-1c6cd6496a2f" BLOCK_SIZE="4096" TYPE="ext4"
    Code
    root@nas:~# cat /etc/mdadm/mdadm.conf
    DEVICE partitions
    CREATE owner=root group=disk mode=0660 auto=yes
    HOMEHOST <system>
    MAILADDR carlo@xxxxxxxxxxxx.it
    MAILFROM root
    ARRAY /dev/md/NAS:1 metadata=1.2 name=NAS:1 UUID=be21a281:1f0e7f6d:4580c9f1:96b04695
    ARRAY /dev/md/linux:0 metadata=1.0 name=linux:0 UUID=9fbc3293:aa57ed39:01701e90:ef95a135
    Code
    root@nas:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/linux:0 level=raid1 num-devices=2 metadata=1.0 name=linux:0 UUID=9fbc3293:aa57ed39:01701e90:ef95a135
       devices=/dev/sda1,/dev/sdb1
    ARRAY /dev/md/NAS:1 level=raid1 num-devices=2 metadata=1.2 spares=1 name=NAS:1 UUID=be21a281:1f0e7f6d:4580c9f1:96b04695
       devices=/dev/sdc,/dev/sde
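
    For reference, a minimal inspection sketch using the device names above; these commands show the per-device state behind the (S) flag in mdstat:

    Code
    mdadm --detail /dev/md1     # array state plus the role of each member (sde active, sdc spare)
    mdadm --examine /dev/sdc    # superblock of the disk that shows up as (S)
    mdadm --examine /dev/sde    # superblock of the remaining active member, for comparison
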
  • Maybe not md127 but md1? I tried both ways anyway:

    Code
    root@nas:~# mdadm --stop /dev/md127
    mdadm: error opening /dev/md127: No such file or directory
    root@nas:~# mdadm --stop /dev/md1
    mdadm: Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group?
  • I also tried lsof | grep md1 and, from what I got, is a sync action going on and on and on?


    • Official Post

    :cursing: my bad again, sorry for replying to two separate threads :) but this ->

    Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group

    suggests that something is accessing the array, possibly a share linked to a Docker container, or something accessing an SMB share.
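
    As a minimal sketch, assuming the device names and the mount-point naming seen elsewhere in this thread, these are the kinds of commands that show what is holding the array open:

    Code
    lsof /dev/md1                                  # anything with the block device itself open
    findmnt /dev/md1                               # where the md1 filesystem is currently mounted
    fuser -vm /srv/dev-disk-by-id-md-name-NAS-1    # processes using that mounted filesystem (path is an assumption)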

  • Uhm... I have (and I hate containers...) a container running a JDownloader image, with /opt/JDownloader/Downloads and the config on:

    /srv/dev-disk-by-id-md-name-NAS-1/JDownloader/downloads/opt/JDownloader/Downloads
    /srv/dev-disk-by-id-md-name-NAS-1/JDownloader/config/opt/JDownloader/app/cfg


    Maybe I have to stop the container before doing the --add?

    • Official Post

    Maybe I have to stop the container before doing the --add

    The fact that the --stop command cannot get exclusive access would suggest that. Just stop the container in Portainer, then try the --add command again; if that fails, try the --stop command and then --add.
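
    A rough sketch of that order of operations (the container name jdownloader is an assumption; use whatever name Portainer shows):

    Code
    docker stop jdownloader              # stop the container that bind-mounts the md1 filesystem
    mdadm --add /dev/md1 /dev/sdc        # retry the re-add with the container stopped
    # if the add still fails, the suggestion above is to stop the array and try again:
    mdadm --stop /dev/md1                # needs exclusive access, so nothing may be using it
    mdadm --assemble /dev/md1 /dev/sde   # bring the degraded mirror back up
    mdadm --add /dev/md1 /dev/sdc        # then add sdc again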

  • Stopped the container and tried the --add (nothing), and then the --stop (nothing). Maybe something else is going on... did you read the lsof output I posted?

    Code
    root@nas:~# mdadm --add /dev/md1 /dev/sdc
    mdadm: Cannot open /dev/sdc: Device or resource busy
    root@nas:~# mdadm --stop /dev/md1
    mdadm: Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group?
    • Official Post

    did you read the lsof output I posted

    Yes, but I've never seen anyone do that on here, so that's a new one on me. The output in your post 5 is also puzzling, but based upon your lsof it could suggest the array is attempting a resync.

    Whilst I don't usually suggest this, have you just tried a reboot to see if that resolves it? And do you have a backup of that array?
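
    A quick way to confirm whether a resync/recovery is actually running (a sketch, assuming the md1 device from the earlier output):

    Code
    cat /proc/mdstat                       # an active rebuild shows a progress bar under md1
    cat /sys/block/md1/md/sync_action      # idle, resync, recover or check
    mdadm --detail /dev/md1                # the State and Rebuild Status lines tell the same story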

    • Official Post

    Rebooted, nothing changed

    Then I'm currently at a loss. The only option I can think of now is to use SystemRescueCD, which is on the menu of the kernel plugin; it installs and boots to a system rescue CLI, so it doesn't directly interact with OMV, and when you exit the system restarts and you log in via the WebUI (a rough sketch of what could be tried from that shell follows this post).


    The problem is being able to get access to the array, which at present appears not to be possible. What makes it worse is that you have no backup, so if I suggest something and it goes down the toilet :)


    Do you have another user in OMV that has SSH access?
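
    If it comes to the SystemRescueCD route, the point is that nothing there mounts or exports the array, so the commands that are blocked now should go through; a rough sketch under that assumption (the array may appear under a different name, e.g. /dev/md127, in the rescue environment):

    Code
    mdadm --stop /dev/md1                 # nothing in the rescue shell should be holding it
    mdadm --assemble /dev/md1 /dev/sde    # start the degraded mirror from its good member
    mdadm --add /dev/md1 /dev/sdc         # and try the add without OMV services running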

  • Reading somewhere, I found out that a RAID1 cannot be stopped completely; one of the disks has to stay active: "You can not stop both devices at the same time while in use. If you can not stop the array itself by unmounting whatever partitions are mounted on it and then stopping it, you will need to migrate device by device."

    They also suggested doing it this way:

    Code
    mdadm --fail /dev/md0 /dev/sda1
    mdadm --remove /dev/md0 /dev/sda1

    What do you think?

  • No no, sorry, it was just an example I found on the web... I know that my problem is with the md1 array :)

    It was just to ask whether that solution could fit my needs (sorry for my English).

    I thought that I could remove the sdc HD by doing a fail and remove, then re-add it. But I don't know if that solution would be OK.

    • Official Post

    It was just to ask whether that solution could fit my needs (sorry for my English)

    That's OK. Would it fit your need? Technically yes, and I had thought about that; you could give it a try. If it works and sdc is removed, wipe it before re-adding it to the array.
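
    Translated to this array, a sketch of that sequence might look like the following (wipefs is just one way to do the "wipe" step; double-check the device letter before running anything destructive):

    Code
    mdadm --fail /dev/md1 /dev/sdc      # mark it failed first (if sdc is only a spare, --remove alone may do)
    mdadm --remove /dev/md1 /dev/sdc    # take it out of the array
    wipefs -a /dev/sdc                  # wipe the old RAID signature (destructive to sdc only)
    mdadm --add /dev/md1 /dev/sdc       # add it back; a rebuild should start
    cat /proc/mdstat                    # watch the recovery progress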


    The other question I was going to ask: what are the drive sizes on both arrays? Do you not have the space on md0 to transfer the data from md1?
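
    For the size question, a quick way to compare (a sketch; the grep pattern just picks out both arrays from the df output):

    Code
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # sizes of the member disks and arrays, and where they are mounted
    df -h | grep -E '/dev/md[01]'         # used and available space on both arrays' filesystems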
