Posts by jonni

    I have decided to give up and have done the following.

    The alternative is to start again from scratch: disconnect the drives, reinstall OMV, then connect each drive one at a time and wipe it. If you connect all the drives together, it will probably detect the RAID signatures on the drives.

    I have also set up notifications as recommended by you:

    With CRC errors on all drives, I'd wonder how long that has been going on. Was it a recent event, or did it build up over a longer period of time? If you had e-mail reports set up, as recommended in the User Guide, you'd have been aware of the problem as it started to develop.

    I hope that I will see errors early if they occur in the future.
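
    In case it helps anyone checking their own setup: once notifications are configured, mail delivery can be verified from the command line. A minimal sketch, assuming a mail command (e.g. from bsd-mailx or mailutils) is installed:

    Code
    echo "Notification test from OMV" | mail -s "OMV test mail" root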

    Thank you guys for your help and time.

    My results when executing smartctl --attributes /dev/sdX (SMART attributes 187 and 188 were missing from the table)
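
    For reference, the usual failure-related counters can be filtered straight out of the attribute table; some drive models simply do not report attributes 187/188. A sketch, with /dev/sdX as a placeholder:

    Code
    # 5=Reallocated, 187/188=Reported_Uncorrect/Command_Timeout, 197/198=Pending/Offline_Uncorrectable, 199=UDMA_CRC
    smartctl -A /dev/sdX | grep -E "^ *(5|187|188|197|198|199) "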


    Have you run mdadm --create?

    I have run mdadm --create /dev/md1 --assume-clean -l5 -n4 -c512 /dev/sd[bcd] missing with only three out of four drives plugged in.

    Also, to confirm - you say you've tested each drive in the array with a SMART long drive test, correct? smartctl -s on -t long /dev/sd?

    That is correct. All drives were tested and these tests completed without error.

    There is an option you could try before going down the complete testing route, which will take time.

    Recreate/rebuild OMV on a USB flash drive or small hard drive: disconnect the data drives, do a clean install, update, and shut down, then connect all the data drives. The new install should pick up the array from the drive signatures. If that fails, then you're back to testing and the SystemRescueCD.
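
    If the fresh install does pick the array up from the drive signatures, that should also be visible straight away on the command line. A sketch; md127 is the device name used elsewhere in this thread:

    Code
    cat /proc/mdstat              # the array should appear as an active raid5
    mdadm --detail /dev/md127     # state, member disks, degraded or not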

    I have reinstalled OMV the way you described. The OMV GUI looks the same way it did before. It recognizes my newly created RAID as shown in #25, but I cannot access any data. I could only set up a new file system on that RAID, which, as I understand it, would delete all my data as well.
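
    Before creating a new file system, it might be worth checking whether the old one is still detectable and mountable read-only. A sketch, assuming the array is /dev/md127 and the file system was ext4; none of these commands write to the array:

    Code
    blkid /dev/md127                      # is a file-system signature still found?
    fsck.ext4 -n /dev/md127               # -n: check only, change nothing
    mount -o ro,noload /dev/md127 /mnt    # read-only mount attempt, skip journal replay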


    I have set up a SystemRescueCD USB drive, but I am not sure what to do with it, nor whether it is worth trying after all the tests I have already conducted. If you see a chance of still getting it to work that way, I will try it, but I am pretty close to giving up: reinstalling OMV once again, wiping all disks, setting up a new RAID, and accepting the loss of the data added since my last backup.
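
    For what it's worth, the kind of read-only diagnostics that can be run from a SystemRescueCD shell would be something like this (a sketch; none of these commands change anything on the disks):

    Code
    cat /proc/mdstat                  # what the kernel currently sees
    mdadm --examine /dev/sd[abcd]     # per-disk superblocks: roles, event counts, update times
    blkid                             # file-system and raid-member signatures
    smartctl -H -A /dev/sda           # health and attributes, repeat for each drive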

    I'm sorry, but I am somewhat at a loss as to why you have posted back here after five days. I suggested you use a SystemRescueCD to run commands from the CLI in an attempt to ascertain what might be wrong. After suggesting that, I was waiting for feedback to proceed, but as I said, this could be hardware related.

    I did set up the SystemRescueCD, but I was not sure which commands I should run.

    Because of that, I ran the following command on all drives via the OMV CLI, which took me several days, as each execution took about 6 hours.

    Code
    smartctl -s on -t long /dev/sdX

    All of these tests were "Completed without error".
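
    The outcome of those long tests can also be re-read later from each drive's self-test log (a sketch, /dev/sdX as a placeholder):

    Code
    smartctl -l selftest /dev/sdX     # status, lifetime hours and first failing LBA (if any) of past tests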


    Quote

    If you want confirmation regarding the error you posted in #1, have a look here.

    I have checked in the BIOS and all drives look normal (a size of 3 TB is shown).

    smartctl -H /dev/sdX test results for the 3 currently connected drives are "PASSED".


    If it is of any help, I can post the results of smartctl --attributes -H /dev/sdX as well.


    The only reason I have not given up yet is the fact that I can actually see the RAID in the OMV GUI again, where it had not shown up for weeks.


    If you tell me that I have ruined my array anyway by using the mdadm --create command, I will just go ahead and do this:

    Quote

    The alternative is to start again from scratch: disconnect the drives, reinstall OMV, then connect each drive one at a time and wipe it. If you connect all the drives together, it will probably detect the RAID signatures on the drives.

    I have checked all drives with smartctl and did not find any errors.

    I then found the following link: https://unix.stackexchange.com…shed-linux-md-raid5-array

    I connected only 3 out of 4 RAID drives and used the following command:

    Code
    mdadm --create /dev/md127 --assume-clean -l5 -n4 -c512 /dev/sd[bcd] missing

    The RAID now shows up in OMV RAID management, but I still cannot access any files, nor can I mount the RAID in the OMV GUI.
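
    Whether data on an array re-created with --create is readable depends on the new layout matching the old one exactly (drive order, chunk size, metadata version, data offset). After a --create the superblocks only reflect the new layout, but the parameters that were actually used can at least be verified per disk (a sketch; read-only):

    Code
    mdadm --examine /dev/sd[bcd] | grep -E "Version|Chunk Size|Data Offset|Device Role"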

    RAID details in OMV:

    I would like to either try to repair the RAID or pull a backup and then wipe the drives and build a new RAID.

    When you say pending, what is it displaying? TBH, that --readwrite option should be instantaneous, but since I have no idea what else is happening, that is why I said to leave it.

    It has not shown anything since I executed the command, nor can I enter any new commands, as it is (apparently) still processing the --readwrite command.

    It looks like this at the moment:

    Code
    root@nas-Jonathan:~# mdadm --readwrite /dev/md127
    (empty line with a blinking cursor, still waiting)

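    If a second shell is available (e.g. over SSH), the array state can be watched from there without interrupting the pending command (a sketch; read-only):

    Code
    cat /proc/mdstat
    mdadm --detail /dev/md127 | grep -i state
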
    As the sdX device names had changed, I adjusted the assemble command accordingly.

    Code
    root@nas-Jonathan:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    Code
    root@nas-Jonathan:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcd]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
    mdadm: added /dev/sdd to /dev/md127 as 1
    mdadm: added /dev/sdc to /dev/md127 as 2
    mdadm: no uptodate device for slot 3 of /dev/md127
    mdadm: added /dev/sdb to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 3 drives (out of 4).
    Code
    root@nas-Jonathan:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active (auto-read-only) raid5 sdb[0] sdc[2] sdd[1]
          8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    unused devices: <none>
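
    The [4/3] [UUU_] in that output means the array is running degraded with slot 3 missing. Once the data is secured and the fourth drive is trusted again, it could in principle be re-added so that parity is rebuilt onto it (a sketch; /dev/sdX stands for the fourth drive):

    Code
    mdadm --add /dev/md127 /dev/sdX     # starts a resync onto the re-added disk
    cat /proc/mdstat                    # shows the rebuild progress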

    WHY!!!! did I say reboot?

    OK, I will leave it running then from now on.

    While it was rebooting, the reboot did not go through because of a task that was blocked. I am not sure whether that task finished or was terminated after 10-15 minutes.

    Code
    root@nas-Jonathan:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdb[0](S) sdc[2](S) sdd[1](S)
          8790402407 blocks super 1.2
           
    unused devices: <none>
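
    As for the task that was blocked during the reboot: the kernel normally logs which task was hung, and that can be read back after the next boot (a sketch; the journalctl line assumes persistent journald logging):

    Code
    dmesg | grep -i "blocked for more than"     # hung-task messages from the current boot
    journalctl -b -1 -p warning                 # warnings and errors from the previous boot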

    Result:

    Code
    root@nas-Jonathan:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    Code
    root@nas-Jonathan:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[abc]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 2.
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 1.
    mdadm: added /dev/sdc to /dev/md127 as 1
    mdadm: added /dev/sdb to /dev/md127 as 2
    mdadm: no uptodate device for slot 3 of /dev/md127
    mdadm: added /dev/sda to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 3 drives (out of 4).

    The reboot afterwards took about 10-15 minutes, but it did succeed.

    The RAID still does not show up in the OMV GUI under "RAID Management", though.
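
    One thing that sometimes helps after a manual assemble is persisting the array definition so it is assembled automatically at boot and picked up by the GUI. A sketch; back up /etc/mdadm/mdadm.conf before appending to it:

    Code
    mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf    # append the ARRAY line
    update-initramfs -u                                     # rebuild the initramfs with the new config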

    Thanks macom for the clarification. I was a little confused.

    I have booted my OMV with only 4 drives (3 RAID HDDs plus the SSD with the OS) instead of all 4 RAID HDDs, as it does not finish booting with all 4 HDDs connected.

    Used RAID HDDs:

    TOSHIBA DT01ABA3 (3 TB, 3 drives)

    Seagate IronWolf NAS HDD (3 TB, one drive)


    Code
    root@nas-Jonathan:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdb[2](S) sda[0](S) sdc[1](S)
          8790402407 blocks super 1.2
           
    unused devices: <none>
    Code
    root@nas-Jonathan:~# blkid
    /dev/sdb: UUID="1ae7b1fb-004c-10d2-5ee6-31b56d43d6a5" UUID_SUB="c1be727c-965a-bc95-de8f-4c699d4c72c4" LABEL="nas-jonathan:MeinRAID" TYPE="linux_raid_member"
    /dev/sdd1: UUID="68bfc5a4-6add-4b8e-8e8a-f6965765bbc8" TYPE="ext4" PARTUUID="6ec3d6f8-01"
    /dev/sdd5: UUID="26b08f75-df65-45ea-9e43-8d38352f5629" TYPE="swap" PARTUUID="6ec3d6f8-05"
    /dev/sda: UUID="1ae7b1fb-004c-10d2-5ee6-31b56d43d6a5" UUID_SUB="f8ffd68f-5af4-7bb5-2f32-a68089552676" LABEL="nas-jonathan:MeinRAID" TYPE="linux_raid_member"
    /dev/sdc: UUID="1ae7b1fb-004c-10d2-5ee6-31b56d43d6a5" UUID_SUB="4e51ff5c-b622-f65f-f29f-137b56169223" LABEL="nas-jonathan:MeinRAID" TYPE="linux_raid_member"
    Code
    root@nas-Jonathan:~# fdisk -l | grep "Disk "
    Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk model: TOSHIBA DT01ABA3
    Disk /dev/sdd: 111,8 GiB, 120033041920 bytes, 234439535 sectors
    Disk model: Samsung SSD 840 
    Disk identifier: 0x6ec3d6f8
    Disk /dev/sda: 2,7 TiB, 3000588754432 bytes, 5860524911 sectors
    Disk model: TOSHIBA DT01ABA3
    Disk /dev/sdc: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk model: ST3000VN007-2E41
    Code
    root@nas-Jonathan:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=3 metadata=1.2 name=nas-jonathan:MeinRAID UUID=1ae7b1fb:004c10d2:5ee631b5:6d43d6a5
       devices=/dev/sda,/dev/sdb,/dev/sdc

    I was using OMV 4 until a couple of days ago, but my RAID 5 with 4 hard drives was no longer showing up in my OMV configuration.

    I then upgraded (reinstalled) to OMV 5, but my RAID is still not showing up.

    I tried to fix it by assembling with mdadm, and it actually went through successfully (at least it looked that way to my newbie eyes), but when I tried to restart the machine, it would not even finish booting up. It gets stuck after loading the initial ramdisk with 4 errors: "ataX: softreset failed (device not ready)".

    As soon as I unplug one of the drives, it shows only 3 errors and then finishes booting.

    The three connected drives (plus my SSD) show up in the OMV GUI under "Disks".
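
    To figure out which physical drive sits behind the failing ataX port, the kernel log and the by-id links can be cross-checked from a working boot or from the rescue system (a sketch; read-only):

    Code
    dmesg | grep -iE "ata[0-9]+(\.[0-9]+)?:"    # which ataX reports the softreset failures
    ls -l /dev/disk/by-id/ | grep -v part       # maps drive model/serial numbers to sdX names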