How to delete an old RAID 5 file system after a fresh OMV install with RAID 6

  • I built a RAID 5 with 3 disks to try out OMV. Then I added 3 more disks to make a RAID 6. I deleted the RAID 5 from OMV and reinstalled OMV from scratch. I was able to build the RAID 6 with all 6 HDDs and the sync finished. Now I see an unused file system labeled from the old RAID 5. I can't create a new file system because I can't select the RAID in that window, and I also can't delete this freaking file system; it gives errors. Installing a fresh OMV doesn't help.

  • Did you wipe your disks before creating the new RAID 6? I would suggest deleting the current RAID 6, then going to "Physical disks", doing a (quick) wipe of all 6 disks, one after another, and then repeating the creation of the RAID 6.
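
    For reference, the same wipe can be done from the shell. This is just a sketch, assuming the old array is /dev/md0 and the six disks are /dev/sdb through /dev/sdg - adjust to your system:
    mdadm --stop /dev/md0               # stop the old array first
    for d in /dev/sd[b-g]; do
        mdadm --zero-superblock "$d"    # clear leftover md-RAID metadata
        wipefs -a "$d"                  # clear filesystem/partition signatures
    done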


    Have you ever thought about using ZFS as a filesystem / logical volume manager? :)
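
    If you go that route: the ZFS counterpart of a 6-disk RAID 6 would be a single RAIDZ2 vdev. A minimal sketch - the pool name and device names are just placeholders:
    zpool create tank raidz2 /dev/sd[b-g]    # raidz2 = double parity, like RAID 6
    Using /dev/disk/by-id paths instead of /dev/sdX is usually more robust for ZFS.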

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Did you wipe your disks before creating the new RAID 6? I would suggest deleting the current RAID 6, then going to "Physical disks", doing a (quick) wipe of all 6 disks, one after another, and then repeating the creation of the RAID 6.

    Yes, I wiped all of them one by one before building the new RAID 6. I want to try Btrfs, although I can go with ZFS if it solves my problem :).
    I don't want to waste another 16 hours on the sync, as it's 6x 4TB in RAID 6 :(

  • Oh, I hadn't read your post thoroughly enough. You wrote that you can't delete the leftover file system. Please ignore my first post.


  • Re,

    Now I see an unused file system labeled from the old RAID 5.

    Where did you see that? That's not in your screenshots ...


    Please enable SSH on your NAS and copy the outputs as text into the posts; this will make analysis a lot easier ...


    Here are the important commands for the md-RAID:
    cat /proc/mdstat
    mdadm -D /dev/md0


    as well as the complete (not truncated) output of:
    blkid
    lsblk
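
    For example, you could collect everything in one go over SSH and paste the file contents here - a sketch, assuming your NAS is reachable as root@nas:
    ssh root@nas 'cat /proc/mdstat; mdadm -D /dev/md0; blkid; lsblk' > nas-diag.txt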


    Btw.: the error shown in your screenshots is not RAID-related, it is FS-related ... maybe you had a power loss?
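
    If it is FS-related, a read-only check can confirm that without touching the disks - a sketch, assuming an ext4 file system on /dev/md0:
    fsck.ext4 -n /dev/md0    # -n: check only, answer "no" to all repair prompts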


    Sc0rp

  • Wiping the RAID and remaking it solved my problem. It just took me 15 hours. :)
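
    For anyone finding this later, the shell equivalent of that wipe-and-recreate is roughly the following - just a sketch, the device names are assumptions:
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    cat /proc/mdstat    # watch the initial sync progress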

    • Official post

    For future use, get a copy of Darik's Boot and Nuke. I've had problems repartitioning GPT-partitioned disks, disks with LVM, RAID, etc. If you want to clear out the boot sector, DBAN does the job. It's not necessary to do a complete wipe. The first place DBAN starts is the boot sector - run it for a couple of minutes, done.


    https://dban.org/
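
    A quick shell alternative for just the boot sector / partition table is zeroing the start of the disk - a sketch, replace /dev/sdX with the right device, and be careful, this is destructive:
    dd if=/dev/zero of=/dev/sdX bs=1M count=10    # zero the first 10 MiB
    Note that GPT keeps a backup table at the end of the disk, so wipefs -a /dev/sdX is often the more thorough one-liner.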
