Posts by Grobalt

    I don't know exactly what happened.


    When I plugged in the new hard drive, the existing RAID showed 2 active and 2 spare drives: one of the "old" drives had changed from active to spare, in addition to the new one showing as spare. I tried different things to change this third drive from spare back to active, but I failed.


    The last step was to zero the superblocks and create a new RAID from the existing 3 disks, but without success.
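
    For reference, this is roughly what I tried (the device names below are just examples, not necessarily the real ones):

    Code
    cat /proc/mdstat                    # showed 2 active + 2 spare
    mdadm --detail /dev/md1             # member states and roles
    mdadm /dev/md1 --remove /dev/sdd1   # drop the spare from the array...
    mdadm /dev/md1 --add /dev/sdd1      # ...and re-add it, hoping it syncs as active

    # last attempt: wipe the metadata and create a fresh 3-disk array
    mdadm --stop /dev/md1
    mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1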

    Thanks for the replies, I will try this later this week. I received a new disk from WD: the replacement for my defective WD Green was a WD Black. Yes, it's much faster, but I have already sold it, because in a RAID it only gives me the disadvantage of about 2.5x the power consumption of a WD Green.
    Today I ordered a new WD Green as a replacement and will try your suggestions.


    The reason behind my first question about reducing the number of drives: I planned not to buy another "old" Green drive, but to put a new 3TB WD Red into the other RAID 5 setup instead. But if removing a drive causes problems, I will simply replace the defective drive rather than remove it.

    Hello,


    I have a defective drive in one of my 2 MD RAIDs. I have to return that drive to the manufacturer and want to remove it.


    Current setup:
    md0: RAID 5 with 4x 3TB drives (remains untouched)
    md1: RAID 5 with 4x 2TB drives (one drive is defective and needs to be removed, not replaced!)


    Everything sits together in one LVM volume group.
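
    For reference, this is how I look at the current layout (I'm not pasting the output here):

    Code
    cat /proc/mdstat          # state of both arrays
    mdadm --detail /dev/md0
    mdadm --detail /dev/md1
    pvs; vgs; lvs             # the LVM layers on top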


    Can you please help me, step by step, with removing one drive from md1? I have 7TB free out of ~13TB, so there is enough space to "work".
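
    From my googling so far, the route that looks safest to me is to empty the md1 physical volume and rebuild the array with one drive less, rather than reshaping it in place. A rough, untested sketch (the device names /dev/sdc1 to /dev/sdf1 and the volume group name vg0 are made up):

    Code
    # 1. fail and pull the defective drive; md1 keeps running degraded
    mdadm /dev/md1 --fail /dev/sdf1
    mdadm /dev/md1 --remove /dev/sdf1

    # 2. move all allocated extents off md1 (the ~7TB free should make this fit on md0)
    pvmove /dev/md1

    # 3. take the emptied PV out of the volume group
    vgreduce vg0 /dev/md1
    pvremove /dev/md1

    # 4. rebuild md1 with only the 3 healthy drives
    mdadm --stop /dev/md1
    mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1

    # 5. hand the smaller md1 back to LVM
    pvcreate /dev/md1
    vgextend vg0 /dev/md1

    Does that look right, or is an in-place reshape with mdadm --grow the better way?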


    Thank you (help in German is also welcome :) )
    Patrick

    Because I'm a Linux noob, I googled and tried different things to mount my unraid disks.
    As a first step I installed the reiserfs tools, which are needed to access the unraid file system:


    Code
    apt-get install reiserfsprogs
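
    To find out which devices actually hold the reiserfs partitions, something like this can list them by file system type:

    Code
    blkid -t TYPE=reiserfs    # list only partitions with a reiserfs signature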




    What worked: 1. create a folder for the mount point

    Code
    mkdir /tmp/mount_point


    2. mount the unraid hard disks read-only with -r (all except the parity disk will work, but the parity disk can be trashed at the end anyway)

    Code
    mount -r -t reiserfs /dev/sdb1 /tmp/mount_point


    After that I could access my old files.


    Now I am trying to get software RAID working with different disks behind LVM.
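
    My rough plan, as far as I understand the LVM docs (the array, volume group, and mount point names below are made-up examples, nothing is tested yet): one md array per pair of equally sized disks, all pooled into a single volume group, so different disk sizes can sit behind one share, a bit like in unraid. Growing the pool would then look like this:

    Code
    # a second mirror from two other equally sized disks (names are examples)
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd /dev/sde

    # add it to the volume group and grow the share volume plus the xfs on it
    pvcreate /dev/md3
    vgextend vg_data /dev/md3
    lvextend -l +100%FREE /dev/vg_data/lv_share
    xfs_growfs /srv/share     # xfs grows online via the mount point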

    Hello,
    I want to switch to OMV because I ran into file copy issues and a few other problems with unraid.


    The setup is a Supermicro board with a Xeon CPU, an LSI controller (a flashed IBM M1015), and several disks attached (different types and sizes!).
    OMV runs in VMware ESXi, but I don't think that matters; controller passthrough works as well as it did under unraid.


    My old disks show up with a reiserfs file system, but I cannot use them yet, and I need the files on them.
    I have 2 empty disks to work with and want to create one big share with software RAID (and xfs as the file system), so that I can use different disk sizes as before in unraid.


    Can I do this? How?
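
    From what I have read so far, I imagine the first steps look roughly like this; the device names /dev/sdb and /dev/sdc, the volume group vg_data, and the share path /srv/share are made-up examples, so please correct me:

    Code
    # mirror the two empty disks (RAID 1)
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # put LVM on top so the pool can later grow across further arrays
    pvcreate /dev/md2
    vgcreate vg_data /dev/md2
    lvcreate -l 100%FREE -n lv_share vg_data

    # xfs for the share
    mkfs.xfs /dev/vg_data/lv_share
    mkdir -p /srv/share
    mount /dev/vg_data/lv_share /srv/share

    Later pairs of disks (even of a different size) would then go into their own arrays and get pooled into the same volume group.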


    Thank you
    Patrick - answers in German are welcome, too :)