At first I tried OMV in a VirtualBox environment and felt comfortable with the interface, so I decided to migrate my server to OMV.
OMV is installed on an SSD (/dev/sda), and I tried several times to create a simple RAID5 with 3x 3 TB HDDs. After a reboot it always disappeared, so I decided to format all the HDDs. I removed all SMB/NFS shares and selected the three HDDs to create a new RAID5. After confirming, the GUI showed me three RAIDs, as you can see in the screenshot. What am I doing wrong?
Sorry, it's extremely frustrating. In the time I've wasted I could have set everything up manually on a Debian server...
Curious problems creating a new RAID5
-
- OMV 1.0
- mscgn
-
-
Good practice before creating a new RAID is to wipe every single disk that will go into the array. You can do that in the Physical Disks section; the quick wipe is enough. You don't need to create a partition on the disks (like sdb1); just add the raw devices (sdb) to the RAID.
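For reference, a quick wipe like this only removes known filesystem/RAID signatures rather than overwriting the whole disk. A minimal sketch of the same idea from the shell using wipefs (this is not necessarily OMV's exact implementation; demonstrated on a file-backed image so it is safe to run, while on the real system the target would be e.g. /dev/sdb):

```shell
# Demo target: a file-backed image instead of a real disk (safe to run).
DEV=/tmp/member.img
truncate -s 64M "$DEV"

# Pretend there is an old signature on the disk (a swap signature here).
mkswap "$DEV" >/dev/null 2>&1

# Remove all known signatures, which is what a quick wipe amounts to.
wipefs -a "$DEV" >/dev/null

# Listing signatures again prints nothing once the disk is clean.
wipefs "$DEV"
```

On a real member disk you would run `wipefs -a /dev/sdb` (as root) for each disk before adding it to the array.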
-
I did that. fdisk -l showed "Disk /dev/sd* doesn't contain a valid partition table" for all disks added to that RAID.
The resync is currently at about 20%, and tomorrow I will see if it works. If not, my next step is a clean reinstall of OMV. -
I'm having the same problem. I tried wiping all three disks, but that didn't help.
-
mdadm --stop /dev/md8p1
mdadm --stop /dev/md8p3

and if I were you, I would start over with the newly synced RAID /dev/md8:

mdadm --stop /dev/md8
mdadm --zero-superblock /dev/sds
mdadm --zero-superblock /dev/sdt
mdadm --zero-superblock /dev/sdu
dd if=/dev/zero of=/dev/sds bs=512 count=10000
dd if=/dev/zero of=/dev/sdt bs=512 count=10000
dd if=/dev/zero of=/dev/sdu bs=512 count=10000

Then recreate the array in the web interface.
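Those dd lines zero only the first ~5 MB of each disk. A quick sanity check that the wiped region really contains nothing afterwards (a sketch, shown on a file-backed image so it is safe to run; replace the path with the real member disk on the actual system):

```shell
# Demo target: a file-backed image instead of a real disk (safe to run).
DEV=/tmp/md-demo1.img
truncate -s 100M "$DEV"

# Simulate leftover metadata at the start of the disk.
printf 'old-superblock' | dd of="$DEV" conv=notrunc 2>/dev/null

# Same wipe as above: zero the first 10000 sectors (512 bytes each).
dd if=/dev/zero of="$DEV" bs=512 count=10000 conv=notrunc 2>/dev/null

# Count non-zero bytes in that region; a clean wipe prints 0.
dd if="$DEV" bs=512 count=10000 2>/dev/null | tr -d '\0' | wc -c
```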
-
After stopping /dev/md0p3 all other md devices disappeared: "mdadm: error opening /dev/md0p1: no such file or directory".
Stopping /dev/md0 worked, even though it was not visible in the web GUI. After all that I created a new RAID, clicked "OK" and... nothing. Nothing happened and nothing is visible in the GUI.
Now the log is flooded with "udevd: timeout: killing '/sbin/mdadm --detail --export /dev/md0p3'". Is it time for a clean installation?
-
Seems to be working for me, I'm recreating the RAID and I only have one RAID array in the web gui. Thanks for all the help!
-
I would probably reboot before creating the new raid if you have problems.
-
After a reboot the RAIDs from my first post are visible once again. It seems like OMV rediscovered the RAID from somewhere other than the superblock. I zeroed it as described.
I did a clean install of OMV, deleted the superblock, and the problem still occurs. It seems to be an old superblock from a previous RAID1, but I can't delete it with mdadm on OMV. What can I do now? Writing zeros to the entire HDD with dd is the only solution I know. -
Is there data on the drives? My commands a few posts up will get rid of it while only writing zeros to part of the drive.
-
After stopping md0p3 (the first one) I was unable to stop the other "wrong" one: "mdadm: error opening md0p1: no such file or directory".
Stopping md0 worked.
Deleting the superblocks and writing zeros worked. Afterwards I restarted the server.
Then I created a new RAID, and as you might imagine, the two "ghost" RAIDs are back again.
Is it possible that the superblocks are from another mdadm version and sit on a different part of the HDD, so the zeros could not reach them? Two of the HDDs were used under Debian testing in an mdadm RAID1.
There is no data on the drives. -
It is possible but I've never seen that. Zeroing the superblock should keep that from happening. I have no idea how they keep coming back. You could try:
mdadm --remove /dev/md8p1
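For what it's worth, the theory above is plausible: older md metadata formats (0.90 and 1.0) store the superblock near the *end* of the device, while 1.2 stores it about 4 KiB from the start, so zeroing only the beginning of a disk can leave an old superblock behind. A hedged sketch of wiping the tail of the device as well (demonstrated on a file-backed image so it is safe to run; on the real system `$DEV` would be the member disk and you would get the sector count with `blockdev --getsz`):

```shell
# Demo target: a file-backed image instead of a real disk (safe to run).
DEV=/tmp/md-demo2.img
truncate -s 100M "$DEV"

# Total size in 512-byte sectors; on a real disk use: blockdev --getsz "$DEV"
SECTORS=$(( $(stat -c %s "$DEV") / 512 ))

# Zero the last 1 MiB (2048 sectors), where 0.90/1.0 superblocks live.
dd if=/dev/zero of="$DEV" bs=512 seek=$(( SECTORS - 2048 )) count=2048 conv=notrunc 2>/dev/null

# Count non-zero bytes in the tail; a clean wipe prints 0.
dd if="$DEV" bs=512 skip=$(( SECTORS - 2048 )) count=2048 2>/dev/null | tr -d '\0' | wc -c
```

Zeroing both the first few MB and the last MB of each disk should cover every metadata version without writing zeros over the whole drive.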
-
I tried it without success. So I wrote zeros to all the disks, which took about 24 hours. Afterwards I removed the old RAID entry from /etc/mdadm/mdadm.conf and restarted the server. Right now I'm building a new array, once again, and it looks good.
Once it's done, I will copy the files, reboot a few times, and keep a backup of all files.
If it doesn't work, I will use SnapRAID and Greyhole. -
I've got the same problem with my RAID set. I've done everything above except zeroing the entire drives. I think it might be faster if I take the drives into Windows and run diskpart > clean on each one. Any thoughts?
-
I started using the shred command to clear my drives and I like it. Maybe try that?
Sample:
shred -v -n 1 /dev/sdo -
I've never done anything more than the following for each drive: dd if=/dev/zero of=/dev/sdX bs=512 count=100000
-
The "Wipe" button in the OMV GUI basically does what @ryecoaaron describes. You don't have to zero the whole drive, just the first part of it. I usually do the same with a larger block size, as that can be faster (writing 4096 bytes at a time). You just have to blow away enough data that the system can't figure out what's on the drives.
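The faster variant presumably looks something like the following (an assumption on my part; the count of 12500 is my own choice to cover the same ~51 MB as `bs=512 count=100000`, and it is demonstrated on a file-backed image so it is safe to run):

```shell
# Demo target: a file-backed image instead of a real disk (safe to run).
DEV=/tmp/md-demo3.img
truncate -s 100M "$DEV"

# Simulate leftover metadata at the start of the disk.
printf 'old-superblock' | dd of="$DEV" conv=notrunc 2>/dev/null

# Same ~51 MB region as "bs=512 count=100000", but in 4096-byte writes.
dd if=/dev/zero of="$DEV" bs=4096 count=12500 conv=notrunc 2>/dev/null
```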
-
Shredding is underway. I'll see if this does the trick.
-
Yep, it took the shred command to finally get rid of the "ghost" RAID set.
Thank you blindguy! -
I'm facing the same annoying issue and just want to be 100% clear about what to do.
@tl5k5 You said that shredding removed these strange "ghost" RAID sets. Did you do that AFTER or BEFORE creating the new RAID? I'm asking because I did the shredding via the web GUI button for all four of my drives. It took very long. After that I also cleaned mdadm.conf and rebooted, but faced the same issue afterwards.
That's why I'm asking again. Is it also possible to shred md0 after the RAID was created? Otherwise I have no idea what to do except a clean install of OMV.
Thanks in advance for any ideas!