Curious problems creating a new RAID5

  • At first I tried OMV in a VirtualBox environment and felt comfortable with the interface, so I decided to migrate my server to OMV.
    OMV is installed on the SSD /dev/sda, and I have tried several times to create a simple RAID5 from 3x 3 TB HDDs. After a reboot the RAID always disappeared, so I decided to format all the HDDs. I removed all SMB/NFS shares and selected the three HDDs to create a new RAID5. After confirming, the GUI showed me three RAIDs, as you can see in the screenshot. What am I doing wrong?
    Sorry, it's extremely frustrating. In the time I've wasted I could have set it all up manually on a Debian server...
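
    A quick way to see what the kernel and mdadm actually think is on those disks (the device names below are placeholders; adjust them to your system and run as root):

    cat /proc/mdstat
    blkid
    mdadm --examine /dev/sdb /dev/sdc /dev/sdd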

  • I did that. fdisk -l showed "Disk /dev/sd* doesn't contain a valid partition table" for every disk added to that RAID.
    The resync is now at about 20%, and tomorrow I will see whether it worked. If not, my next step is a clean reinstall of OMV.
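
    To keep an eye on the rebuild in the meantime, something like this should work (assuming the array is /dev/md0; adjust the name to yours):

    cat /proc/mdstat
    mdadm --detail /dev/md0
    watch -n 60 cat /proc/mdstat   # refresh the rebuild progress every minute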

    • Official post

    mdadm --stop /dev/md8p1
    mdadm --stop /dev/md8p3


    and if I were you, I would start over with the newly syncing RAID /dev/md8 as well:


    mdadm --stop /dev/md8
    mdadm --zero-superblock /dev/sds
    mdadm --zero-superblock /dev/sdt
    mdadm --zero-superblock /dev/sdu
    dd if=/dev/zero of=/dev/sds bs=512 count=10000
    dd if=/dev/zero of=/dev/sdt bs=512 count=10000
    dd if=/dev/zero of=/dev/sdu bs=512 count=10000


    Then recreate the array in the web interface.
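
    A quick check before recreating the array, to confirm the superblocks really are gone (same device names as above): each disk should report "No md superblock detected", and /proc/mdstat should list no arrays.

    mdadm --examine /dev/sds /dev/sdt /dev/sdu
    cat /proc/mdstat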

  • After stopping /dev/md0p3, all the other md devices disappeared: "mdadm: error opening /dev/md0p1: no such file or directory".
    Stopping /dev/md0 worked, even though it was not visible in the web GUI.


    After that I created a new RAID, clicked "OK" and... nothing. Nothing happened and nothing is visible in the GUI.
    Now the log is flooded with "udevd: timeout: killing '/sbin/mdadm --detail --export /dev/md0p3'".


    Is it time for a clean installation?

    • Official post

    I would probably reboot before creating the new RAID if you are having problems.


  • After a reboot, the RAIDs from my first post are visible once again. It seems like OMV rediscovered the RAID from somewhere other than the superblock, which I had zeroed as described.
    I did a clean install of OMV and deleted the superblock, and the problem still occurs. It seems to be an old superblock from a previous RAID1, but I can't delete it with mdadm on OMV. What can I do now? Writing zeros to the entire HDD with dd is the only solution I know of.
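
    One way to find out where the stale metadata actually lives (placeholder device name; adjust to yours) is to examine both the raw disk and any partition on it, and to list every signature wipefs can see. Without options wipefs only lists signatures; wipefs -a would erase them.

    mdadm --examine /dev/sdb
    mdadm --examine /dev/sdb1
    wipefs /dev/sdb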

    • Official post

    Is there data on the drives? The commands I posted a few posts up will get rid of it while only writing zeros to a small part of each drive.


  • After stopping md0p3 (the first one) I was unable to stop the other, "wrong" one: "mdadm: error opening md0p1: no such file or directory".
    Stopping md0 worked.
    Deleting the superblocks and writing zeros worked. Afterwards I restarted the server.
    Then I created a new RAID, and as you might imagine, the two "ghost" RAIDs are back again.
    Is it possible that the superblocks are from a different mdadm version, sit on a different part of the HDD, and therefore weren't reached by the zeros? Two of the HDDs were previously used in an mdadm RAID1 under Debian testing.
    There is no data on the drives.
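
    That is at least plausible: md metadata versions 0.90 and 1.0 store their superblock near the end of the device, while 1.1 and 1.2 store it at the start, so zeroing only the first sectors with dd would miss an old end-of-disk superblock. mdadm --zero-superblock should handle either location, but a belt-and-braces check could look like this (placeholder device name):

    mdadm --examine /dev/sdb          # shows the metadata "Version:" line if any superblock is found
    mdadm --zero-superblock /dev/sdb
    wipefs -a /dev/sdb                # erases every signature wipefs can find, wherever it sits on the disk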

    • Official post

    It is possible but I've never seen that. Zeroing the superblock should keep that from happening. I have no idea how they keep coming back. You could try:


    mdadm --remove /dev/md8p1


  • I tried it without success. So I wrote zeros to the entire disks, which took about 24 hours. Afterwards I removed the old RAID entry from /etc/mdadm/mdadm.conf and restarted the server. Right now I'm building a new array once again, and it looks good.
    Once it's done, I will copy the files back, reboot a few times, and keep a backup of all files.
    If it doesn't work, I will use SnapRAID and Greyhole.
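
    One detail worth adding, since stale array definitions can also be cached in the initramfs on Debian: after cleaning up /etc/mdadm/mdadm.conf it may help to regenerate the ARRAY lines and rebuild the initramfs so old arrays are not reassembled at boot. A rough sketch (stock Debian paths assumed):

    mdadm --detail --scan             # prints ARRAY lines for the arrays that currently exist
    # replace the ARRAY section of /etc/mdadm/mdadm.conf with that output, then:
    update-initramfs -u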

    • Official post

    I've never done anything more than the following for each drive:

    dd if=/dev/zero of=/dev/sdX bs=512 count=100000


  • The "Wipe" button in the OMV GUI basically does what @ryecoaaron describes. You don't have to zero the whole drive, just the first several thousand bytes. I usually do

    Code
    dd if=/dev/zero of=/dev/sdX bs=4096 count=1000

    as that can be faster (it writes 4096 bytes at a time). You just have to blow away enough data that the system can't figure out what used to be on the drives.

  • I'm facing the same stupid issue and just want to be 100% clear about what to do.
    @tl5k5 You said that shredding removed these strange "ghost" RAID sets. Did you do that AFTER or BEFORE creating the new RAID?


    I'm asking because I did the shredding via the web GUI button for all four of my drives. It took very long. After that I also cleaned mdadm.conf and rebooted, but faced the same issue afterwards:



    That's why I'm asking again. Is it also possible to shred md0 after the RAID has been created? Otherwise I have no idea what to do except a clean install of OMV.
    Thanks in advance for any ideas!
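
    For what it's worth, wiping /dev/md0 itself would only clear whatever is inside the assembled array, not the member superblocks, so the order suggested earlier in the thread is: stop and wipe the member disks first, clean mdadm.conf, reboot, and only then create the new array. A condensed sketch (the device names /dev/sdb through /dev/sde and /dev/md0 are placeholders):

    mdadm --stop /dev/md0                               # stop anything the kernel auto-assembled
    mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde
    wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde       # clear any remaining signatures
    # remove the old ARRAY lines from /etc/mdadm/mdadm.conf, then:
    update-initramfs -u
    reboot
    # after the reboot, create the RAID5 in the OMV web interface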
