create RAID 1 on a blank disk, copy from an existing drive and grow the RAID

  • Dear all, I have two WD60EFRX drives in my NAS. sda2 is the NTFS partition that contains the data; sdb1 is on the blank drive.

    I want to make a RAID 1 (mirror) with the sdb1 drive, format it as EXT4, copy the files from sda2 onto it, verify the copy with an md5/sha utility, and finally add sda2 to the RAID. What are the steps to do this? I have read https://askubuntu.com/question…sting-drive/526785#526785 and https://www.linuxbabe.com/linu…nux-software-raid-1-setup. I have run all the steps in the second link, except that I used gdisk instead of fdisk (because the disk is 6 TB) and the partition type code fd00 instead of fd (the system doesn't accept fd), so I got "Changed type of partition to 'Linux RAID'" instead of "Linux raid autodetect".
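    For reference, the whole procedure can be sketched roughly as below. This is an assumption-laden outline, not OMV-specific: the device names (/dev/sda2 for the NTFS data, /dev/sdb for the blank disk) and mount points are examples, every command needs root, and the mdadm steps are destructive. It uses the common trick of creating a two-slot RAID1 with one member "missing", then adding the second disk later:

```shell
# 1) Create a degraded RAID1 on the blank disk (second slot left "missing")
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb missing

# 2) Format the array as ext4 and mount it
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid1
mount /dev/md0 /mnt/raid1

# 3) Mount the NTFS data partition read-only and copy the files
mkdir -p /mnt/ntfs
mount -t ntfs-3g -o ro /dev/sda2 /mnt/ntfs
rsync -a /mnt/ntfs/ /mnt/raid1/

# 4) Verify the copy with a checksum tool, then wipe the old disk
#    and add it to the array; mdadm resyncs it onto the new member
umount /mnt/ntfs
wipefs -a /dev/sda
mdadm /dev/md0 --add /dev/sda

# 5) Persist the array config so the name survives reboots
#    (on OMV, prefer: omv-salt deploy run mdadm)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```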


    The other doubt: when I ran "sudo parted /dev/sda mklabel gpt" I got "Information: You may need to update /etc/fstab."... Should I do that? How?


    When I tried to copy with "cp -aR /dev/sda2/* /dev/sdb1" I got "zsh: no matches found: /dev/sda2/*". Also, when I rebooted the system, the RAID name was md127 instead of md0 as in the tutorial (I don't know whether it was mounted).


    So the question is: given that the openmediavault web interface doesn't allow creating a RAID with a single drive, what are the ssh commands to create the RAID as described above and make it persist across reboots? When I log into the openmediavault web interface as "admin", sda shows as mounted, while with "mount -l" over ssh (logged in as root) sda does not always show as mounted.


    Can somebody help me? How can I check that the copy is fine with an md5 or sha utility?
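    One way to verify the copy with a standard checksum utility (sha256sum here; md5sum works the same way) is to hash every file relative to each tree's root and diff the two listings. A sketch; the temporary directories below stand in for the real source and destination mount points:

```shell
# Hash every regular file in a tree, with paths relative to the tree
# root, so two listings can be compared directly with diff.
hash_tree() {
  (cd "$1" && find . -type f -print0 | sort -z | xargs -0 sha256sum)
}

# Demo data: two temporary trees standing in for the mounted
# source (NTFS) and destination (ext4) filesystems.
src=$(mktemp -d); dst=$(mktemp -d)
printf 'hello\n' > "$src/a.txt"
mkdir "$src/sub"; printf 'world\n' > "$src/sub/b.txt"
cp -a "$src/." "$dst/"          # the copy being verified

hash_tree "$src" > /tmp/src.sums
hash_tree "$dst" > /tmp/dst.sums
diff /tmp/src.sums /tmp/dst.sums && echo "copy verified"
```

    On a 6 TB dataset this re-reads every byte on both sides, so expect it to take many hours; running it locally on the NAS avoids pushing all that data over SMB twice.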


    For now I have deleted the RAID and unmounted it, after having run

    sudo mkdir /mnt/raid1

    sudo mount /dev/md0 /mnt/raid1


    then deleted the /mnt/raid1 folder, and now I'm erasing the sdb drive from the OMV web interface.


    I hope somebody can help me.

    omv 6.1.5-2 (Shaitan) | arm v64 | Helios64 | 5.15.89-rockchip64 kernel

    • Official Post

    Why do you want RAID1?


    Usually we suggest using one drive as the main drive and the second drive as a backup. You can use the rsync or rsnapshot plugin to create the backup. With rsnapshot you even get a versioned backup, so you can restore previous versions of files (and deleted files, of course).

    RAID is not backup.

  • because the backup should be scheduled, and also because the read speed of a RAID is greater than that of a single drive...


  • because I can use RAID1's parallelism for reads (not writes).


    Can you help me?


  • because I can use RAID1's parallelism for reads (not writes).

    You are misinformed.


    RAID1 consists of data mirroring, without parity or striping. There is no speed improvement compared to using only a single disk.


    RAID0 consists of striping without parity or data mirroring. Because the data is striped across multiple drives, there is a speed improvement over what a single drive offers.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • yes, but I want to make the RAID 1 as I said in my first post... is there a way or not?


  • yes, but I want to make the RAID 1 as I said in my first post... is there a way or not?

    If that's what you want, you have 2 choices (there are more, but to simplify...):


    1 - Follow what's instructed in the 2nd link you posted.


    OR


    2 - Get a 3rd HDD of the same size as the others (6 TB).

    Make an rsync copy of the DATA from the NTFS drive to the 3rd drive.

    Check that the DATA is all there.

    Keep that copy in a safe place.

    Wipe the other 2 disks in the OMV GUI in secure mode.

    Create a RAID1 in the OMV GUI.

    Mount the RAID.

    Copy the DATA back from the 3rd HDD to the RAID1.


    Keep doing regular backups from the RAID1 to the 3rd HDD, to make sure that you have a SAFE BACKUP OF ALL DATA.
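    The rsync steps above could look roughly like this. The /srv/... paths are placeholders for wherever OMV mounts the shared folders; -a preserves attributes, and the checksum dry run covers the "check the DATA is all there" step:

```shell
# Initial copy from the NTFS drive to the 3rd (backup) drive
rsync -aHv /srv/ntfs-data/ /srv/backup/

# Verification pass: -c compares file contents by checksum,
# -n (dry run) changes nothing and only lists differences
rsync -aHcn --itemize-changes /srv/ntfs-data/ /srv/backup/

# Later, the recurring backup from the finished RAID1 to the 3rd HDD
rsync -aH --delete /srv/raid1-data/ /srv/backup/
```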


    RAID is not backup.

  • thank you, but why, when I run "cp -aR /dev/sda2/* /dev/sdb1", do I get "zsh: no matches found: /dev/sda2/*"?
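    A note on this error: /dev/sda2 is a block-device node, not a directory, so the shell glob /dev/sda2/* expands to nothing and zsh reports "no matches found". Files only become visible once the filesystem is mounted. A minimal sketch, assuming the mount points /mnt/src and /mnt/dst:

```shell
mkdir -p /mnt/src /mnt/dst
mount -t ntfs-3g -o ro /dev/sda2 /mnt/src   # NTFS source, mounted read-only
mount /dev/sdb1 /mnt/dst                    # ext4 destination
cp -a /mnt/src/. /mnt/dst/                  # trailing /. also catches dotfiles
```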


    • Official Post

    yes, but I want to make the RAID 1 as I said in my first post... is there a way or not

    Yes, but OMV does not:


    1) Create mount points under /mnt; they are created under /srv

    2) Use labels for the drives; it uses the UUID

    3) Use partitions as a reference (i.e. /dev/sda1); it uses the full block device /dev/sda


    Can you create a RAID1 with one drive? Yes, but it has to be done from the CLI; the RAID will then show in OMV as clean/degraded.


    1) Connect the drive that has no data on it -> Storage -> Disks, select the drive and click the wipe icon; as this drive has been used in your previous testing I would suggest secure mode. When complete, ssh into OMV and run ->

    2) sudo mdadm --create /dev/md0 --level=1 --force --raid-devices=1 /dev/sd? (replace ? with the drive letter: a, b, c, etc.; mdadm needs --force to accept a single-device array). mdadm will create the array in a clean/degraded state. When complete ->

    3) File Systems -> create, select the RAID and format it with ext4. When complete, if it doesn't mount automatically, click the create icon and select mount.


    To use the cp command, copy between the mounted filesystem paths (on OMV, under /srv), not between raw device nodes; a device node such as /dev/sda is not a directory.


    I would also suggest looking at the new user guide
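    Step 2 above, sketched over ssh with the blank drive assumed to be /dev/sdb, plus a quick check that the array came up:

```shell
sudo mdadm --create /dev/md0 --level=1 --force --raid-devices=1 /dev/sdb
cat /proc/mdstat                # the new array should be listed as active raid1
sudo mdadm --detail /dev/md0    # shows state, UUID and member devices
```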

  • thank you for the explanation.

    2. Is the UUID generated automatically by OMV?

    3. Should I use the partition or the full block device?


    1. When you write "ssh into OMV", is that different from ssh via putty (user "root")? Where should I enter ssh in the OMV web interface?


    What package should I install to calculate checksums locally on the NAS (instead of adding SMB shares for both the NTFS device and the ext4 device and verifying from another PC on the network)? Locally the checksum should take much less time.


    • Official Post

    UUID is generated automatically from OMV

    Yes

    use partition or full block device

    As I stated, the block device, not the partition.

    when you write "ssh into OMV" it's different from ssh from putty (user= "root")

    No, the same thing.

    what package should I install to calculate checksum locally the NAS

    AFAIK there isn't one; personally I would use rsync, and this can also be run via ssh. Information on this can be found here.
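    As an aside, on a Debian-based system such as OMV the coreutils checksum tools (md5sum, sha256sum) are already installed, so a local check needs no extra package; and rsync itself can do a one-pass content comparison. A sketch with placeholder paths:

```shell
# -r recurse, -c compare file contents by checksum, -n dry run
# (nothing is changed); any output line names a file that differs
rsync -rcn --itemize-changes /srv/ntfs-data/ /srv/raid1-data/
```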

  • is the correct cp command "cp -aR /dev/sda/* /dev/md0"?


    • Official Post

    why, if I create "/dev/md0", does it become "/dev/md127" when I reboot

    It has something to do with the way mdadm determines whether the array is local or "foreign". One of the steps I missed, which may have caused the change, was running update-initramfs -u, which should have been done after the array was created.

    Another option, since this was done via the CLI and not from the GUI, is to run omv-salt deploy run mdadm.


    What's the output of cat /proc/mdstat and cat /etc/mdadm/mdadm.conf?

  • The outputs are the following:


    cat /proc/mdstat


    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]

    md127 : active raid1 sda[0]

    5860390464 blocks super 1.2 [1/1] [U]

    bitmap: 0/44 pages [0KB], 65536KB chunk


    unused devices: <none>


    cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts

    MAILADDR francesco.altamura@alice.it

    MAILFROM root


    # definitions of existing MD arrays

    # Trigger Fault Led script when an event is detected

    PROGRAM /usr/sbin/mdadm-fault-led.sh


  • OK, you could try:


    omv-salt deploy run mdadm, then


    update-initramfs -u, then reboot, and it should all stay as it is.
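    Sketched as a sequence, with an assumed sanity check in between (the grep simply verifies that the regenerated config now names the array):

```shell
omv-salt deploy run mdadm           # regenerates /etc/mdadm/mdadm.conf
grep ^ARRAY /etc/mdadm/mdadm.conf   # should now show an ARRAY line with the UUID
update-initramfs -u                 # embeds the updated config in the initramfs
reboot
```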

    should I use "sudo mdadm --assemble --update=name --name=0 /dev/md0 /dev/sda", or will the RAID automatically end up at /dev/md0 after your commands?
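    For reference, mdadm's --assemble --update=name only works on a stopped array, so if the rename were needed the sequence would be roughly as follows (array and device names as in this thread); after the omv-salt/update-initramfs steps it is usually unnecessary, since the array reassembles under its configured name:

```shell
umount /dev/md127            # only if it is mounted
mdadm --stop /dev/md127
mdadm --assemble /dev/md0 --name=0 --update=name /dev/sda
```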


  • Running "mdadm --detail --scan --verbose" I get:


    ARRAY /dev/md/helios64:0 level=raid1 num-devices=1 metadata=1.2 name=helios64:0 UUID=cb884964:44860d40:ac28e0fc:89d0abb4

    devices=/dev/sda

