How to create a software RAID with LVM and convert unraid disks

  • Hello,
    I want to switch to OMV because I ran into file copy issues (and a few other problems) with unraid.


    The setup is a Supermicro board with a Xeon CPU and an LSI controller (a flashed IBM M1015) with several disks attached (different types and sizes!).
    OMV runs in VMware ESXi, but I don't think that matters; controller passthrough works just as well as it did under unraid.


    My old disks show up with a reiserfs filesystem, but I cannot use them, and I need the files that are on them.
    I have 2 empty disks to use and want to create one big share with software RAID (and XFS as the filesystem), so I can use different disk sizes as I did in unraid.


    Can I do this? How?


    Thank you
    Patrick - answers in German are welcome, too :)

  • Because I'm a Linux noob I googled and tried different things to mount my unraid disks.
    As a first step I installed the reiserfs tools needed to access the unraid filesystem:


    Code
    apt-get install reiserfsprogs




    This worked: 1. Create a folder for the mount point

    Code
    mkdir /tmp/mount_point


    2. Mount the unraid hard disks (all but the parity disk will work, but the parity disk can be trashed at the end anyway)

    Code
    mount -r -t reiserfs /dev/sdb1 /tmp/mount_point


    After that I could access my old files.
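
    Once the new array and filesystem exist (as described further down in this thread), copying the old files off the mounted unraid disk can be sketched like this. The script only prints the rsync command so nothing runs by accident, and the destination /srv/bigshare is an assumption; adjust it to wherever the new share is mounted:

```shell
#!/bin/sh
# Sketch: copy files off a mounted unraid (reiserfs) disk.
# Printed only; remove the "echo" to actually run it.
SRC=/tmp/mount_point   # the read-only reiserfs mount from above
DST=/srv/bigshare      # assumed mount point of the new filesystem
echo rsync -aHAX --progress "$SRC"/ "$DST"/
```

rsync preserves permissions, hard links and extended attributes (-aHAX), and can be re-run safely if interrupted.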


    Now I am trying to get a software RAID with different-sized disks working behind LVM.

  • You should look at the mdadm man page http://linux.die.net/man/8/mdadm and/or this wiki https://raid.wiki.kernel.org/index.php/RAID_setup to see what you can do with your disks.


    You must install the openmediavault-lvm2 plugin via the webGUI.


    Once the array is created, you can create a logical volume on it using Storage -> Logical Volume Management, going through the "Physical Volumes", "Volume groups" and "Logical Volumes" tabs in turn to create a physical volume, a volume group and a logical volume, each step using the result of the previous one.
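
    For reference, the three webGUI steps map to three commands. This is a printed sketch only; /dev/md127 matches the mdadm example in this thread, while the names vg0 and lv0 are assumptions:

```shell
#!/bin/sh
# Sketch of the PV -> VG -> LV chain the webGUI performs.
# Printed only; names are placeholders, nothing is created.
MD=/dev/md127                                # the mdadm array (assumption)
echo pvcreate "$MD"                          # physical volume on the array
echo vgcreate vg0 "$MD"                      # volume group on the PV
echo lvcreate -n lv0 -l 100%FREE vg0         # logical volume using all space
```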


    After that, you will be able to create an XFS filesystem on the logical volume (Storage -> Filesystems).
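
    If you prefer the command line for this step, the XFS stripe geometry should match the array underneath. A printed sketch, assuming a 512 KiB mdadm chunk (the mdadm default) and the two-disk RAID0 from the example below; recent mkfs.xfs versions usually detect this automatically, so passing it explicitly is only a safety net:

```shell
#!/bin/sh
# Sketch: derive XFS stripe options from the array geometry.
# Assumptions: 512 KiB chunk size, 2 data disks. Printed only.
CHUNK_KB=512
DATA_DISKS=2
SU="${CHUNK_KB}k"       # stripe unit = chunk size
SW=$DATA_DISKS          # stripe width = number of data disks
echo mkfs.xfs -d su=$SU,sw=$SW /dev/vg0/lv0
```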


    Concerning the RAID array: with disks of different sizes, if you want to use the whole capacity of your disks, the only possible mode is "linear". The problem is that a "linear" array cannot later be changed to a RAID 5 array when you add a disk.


    So you should consider assembling a RAID0 array of twice the size of your smaller disk. Create a partition of the same size on each disk (you should install parted and look at its man page).
    Example command:

    Code
    mdadm --create /dev/md127 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
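
    The partitioning step mentioned above can be sketched like this. It only prints the parted commands so nothing is touched; /dev/sdb, /dev/sdc and the 2000GiB size are assumptions, so adjust them to your device names and the capacity of your smaller disk:

```shell
#!/bin/sh
# Sketch: one equally sized partition on each disk (printed only).
# Check device names with lsblk/blkid before removing the "echo".
SIZE="2000GiB"                 # size of the smaller disk (assumption)
for DISK in /dev/sdb /dev/sdc; do
    echo parted -s "$DISK" mklabel gpt
    echo parted -s "$DISK" mkpart primary 1MiB "$SIZE"
    echo parted -s "$DISK" set 1 raid on
done
```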


    Once your reiserfs volumes are no longer used, you can grow this array with those disks and change its level to RAID5.
    Example commands:

    Code
    mdadm /dev/md127 --add /dev/sde1
    mdadm --grow /dev/md127 --level=5 --backup-file=/root/backup-md127
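
    One step worth adding after the grow: the new capacity still has to be propagated up through LVM and the filesystem. A printed sketch, where vg0/lv0 and the mount point /srv/bigshare are assumptions matching nothing fixed in this thread:

```shell
#!/bin/sh
# Sketch: pass the grown array's space up the stack (printed only).
MD=/dev/md127
echo pvresize "$MD"                        # let LVM see the larger PV
echo lvextend -l +100%FREE /dev/vg0/lv0    # grow the LV into the new space
echo xfs_growfs /srv/bigshare              # XFS grows online via its mount point
```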

    omv 2.x omv 3.x (testing) - AMD CPU - RAM ECC 8x4TB RAID6 + 4x3TB RAID5 - SSD 4GB for system

  • And just for the record:


    RAID0 is no protection at all. It only increases the performance of sequential I/O (large file reads/writes).


    However, if you want RAID protection, your disks need to be the same size. You can integrate larger disks into a RAID5, but the additional space of the larger disks is unusable. A RAIDx (where x>0) will always use the capacity of the smallest disk for every disk in the array.


    Second:
    I would recommend ext4, as you can tune it better to the underlying RAID array. You should also tune it to journal only the metadata, not the data.
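
    For ext4, matching the filesystem to the array means setting stride and stripe-width from the chunk size. A printed sketch, assuming a 512 KiB chunk, 4 KiB blocks and 2 data disks; adjust to your array:

```shell
#!/bin/sh
# Sketch: ext4 options matched to the array geometry (printed only).
# Assumptions: 512 KiB chunk, 4 KiB filesystem blocks, 2 data disks.
CHUNK_KB=512
BLOCK_KB=4
DATA_DISKS=2
STRIDE=$((CHUNK_KB / BLOCK_KB))    # 512/4 = 128 blocks per chunk
WIDTH=$((STRIDE * DATA_DISKS))     # 128*2 = 256 blocks per full stripe
echo mkfs.ext4 -E stride=$STRIDE,stripe-width=$WIDTH /dev/vg0/lv0
# ext4's default data=ordered mode already journals only metadata;
# data=writeback relaxes this further at some crash-safety cost.
```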

    Everything is possible, sometimes it requires Google to find out how.
