SCSI disk (#9) not detected

  • Hi


    I'm not quite sure if this is the best place for it, but I'll try it as a start. The thread can be moved if it's completely wrong here...


    I had 8 disks for OMV which I had tied together as a RAID5. A few days ago I decided to move to RAID6 by adding another disk. Today the additional disk arrived, I mounted it in the server and hoped it would be recognized. My "outer" OS (Windows 10 with Hyper-V running OMV) detects the disk with no problems, but OMV cannot see the 9th disk. It only sees the original 8 disks.


    To head off comments like "do not use a virtual machine": if that were the cause, why are the 8 disks fine, and why can I pass the 9th disk in, even though OMV does not detect it?


    I know this is really more a Debian question than an OMV one, but the greatest geeks live around here ;)
    Thanks
    Obelix

    • Official Post

    I would never say don't use a virtual machine, but I would say don't use Windows :D


    Are you using a hardware RAID controller, Windows software RAID, or OMV software RAID?
    If a hardware RAID controller or Windows software RAID, you need to grow the filesystem. The easiest way is to boot GParted Live and expand the filesystem there.
    If OMV software RAID, what is the output of fdisk -l and cat /proc/mdstat?
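
    For a quick, read-only check, that would be (run as root):

    Code
    fdisk -l          # list every disk the kernel can see
    cat /proc/mdstat  # state of all md (software RAID) arrays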

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.6 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hi


    Windows is mandatory because I am running TV tuners on the bare metal with ARGUS-TV. If ARGUS-TV is ever ported to Linux, I will switch ASAP ;)
    OK, sorry, far too little information in my first post.
    I am running OMV software RAID, so under Physical Disks I can see the 8 disks (the new one is missing), and in the RAID section I see my /dev/md0 with 8 disks.


    fdisk -l gives


    and cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb[0] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
          41022736384 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
    
    
    unused devices: <none>
    • Official Post

    Linux (OMV) definitely does not see the drive. This seems like something that needs to be changed in Hyper-V. Do you have to pass the physical drive to the VM? And just to warn you: you cannot change from RAID5 to RAID6 in the OMV web interface.


  • The change via mdadm should not be too challenging.
    I pass the disk in the same way as all the others (if you know Hyper-V: I set the disk offline, add another disk in the machine settings and select physical disk no. 9).
    On the Hyper-V side there is nothing else I can do differently, as far as I can see.
    I also could not see the disk while OMV/Debian was booting, hence my question...
    Any good ideas? (TeamViewer is possible.)
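
    For reference, a quick way to confirm from the OMV shell whether the kernel registers the disk at all (read-only; the assumption is that the new disk would simply show up as a ninth /dev/sdX device):

    Code
    lsblk                 # every block device the kernel has registered
    cat /proc/partitions  # the same information in raw form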

    • Official Post

    Funny, I actually used to run an OMV fileserver on Hyper-V. But it used .vhd files, since the host had a hardware RAID controller.


    Have you powered off the VM? Is OMV running the 3.16 backports kernel? I would try booting SystemRescueCd in the VM to see if it recognizes all 9 drives. If it doesn't, I would guess it is a Hyper-V configuration issue somewhere.


  • Well, booting GParted made me a believer :whistling:


    It seems the SCSI controller emulated by Windows shows more than 8 drives in the GUI but does not actually map them into the virtual machine.
    Solution:
    - add another SCSI controller
    - add the 9th disk on the new controller
    - power on OMV
    This produces a lot of messages about /dev/sd(X) devices with letters you would never expect; I could not find those messages again afterwards to take a closer look at them (a quick way to confirm which name the new disk got is shown below).
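
    Before adding the disk to the array, it is worth confirming which /dev/sdX name the new disk actually received, since the letters are not guaranteed to be stable; one way to do that (the column selection is just a suggestion):

    Code
    lsblk -o NAME,SIZE,MODEL,SERIAL  # match the new disk by size/model/serial
    ls -l /dev/disk/by-id/           # persistent names pointing at the sdX devices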


    Now, for those interested, here is how to manually convert a RAID5 to RAID6:


    To make sure the RAID5 is clean:

    Code
    cat /proc/mdstat


    To see that all disks are OK:

    Code
    mdadm --detail /dev/md0


    Add the new disk to the array /dev/md0:

    Code
    mdadm --add /dev/md0 /dev/sdj


    Make sure the new disk has been added and is OK:

    Code
    mdadm --detail /dev/md0


    Now change the RAID level to 6, tell mdadm to use 9 disks, and give it a backup file for the critical section of the reshape:

    Code
    mdadm --grow /dev/md0 --level=6 --raid-devices=9 --backup-file=/root/raid5backup


    and now... wait.... wait....... wait................
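
    While waiting, the reshape progress can be followed read-only, for example with:

    Code
    watch cat /proc/mdstat   # refreshes the reshape progress line every 2 seconds
    mdadm --detail /dev/md0  # shows a "Reshape Status" percentage while reshaping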

    • Official Post

    Glad it is working.


    This produces a lot of messages about /dev/sd(X) devices with letters you would never expect; I could not find those messages again afterwards to take a closer look at them.


    You can view those messages with dmesg.
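
    For example (the grep filter is just a suggestion):

    Code
    dmesg | grep -i 'sd[a-z]'  # kernel messages about sdX devices being attached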


  • Argh... I had a look at it but did not find anything right away, so I thought it was gone...


    But taking more than 5 seconds to search does lead to results ;)


    Any idea what this means?


    • Official Post

    I don't see any issues other than maybe performance. Hyper-V must be passing all drives to the VM as 512-byte sectors instead of 4K sectors. I would definitely upgrade to the 3.16 backports kernel if you haven't already.
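
    If you want to check what sector sizes the guest actually sees, the kernel exposes them per disk, e.g. for the new disk (assuming it is /dev/sdj as above):

    Code
    cat /sys/block/sdj/queue/logical_block_size   # 512 = 512-byte logical sectors
    cat /sys/block/sdj/queue/physical_block_size  # what the device reports physically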


    • Official Post

    Should I restart the machine while the RAID is reshaping?


    Nope.

