Posts by SerErris

    Yep, with SystemRescue that would allow you to dd from one device to another. Or directly use Clonezilla :)
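    For reference, a minimal sketch of the dd approach (sdX/sdY are placeholders - double-check the device names with lsblk first, dd will happily overwrite the wrong disk):

    Code
    dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync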


    Or this ;)

    AODUKE 4Bay Fan Cooling M.2 NVME/SATA to USB3.2 10Gbps Docking Station Adapter Reading and Writing, External Hard Drive Enclosure for PCIE NGFF SSD (Cloning Are not Supported) AJM2S4
    www.amazon.de

    You can use Clonezilla to make a copy of the drive, and on recovery you can have it automatically adjust to the new drive size.


    So yes, the process actually works directly.


    If you have OMV6 you can even add bootable Clonezilla entries to your boot menu, so there is no reason to boot from a removable device.

    Yes, and you would need to replace your old drives one after another to actually utilize the full capacity of your new drive.


    For now the new drive will never use more than 3.8 TB.

    Can you please run the


    Code
    mount


    command and see what the output is?


    Also you could just unmount them and they may then appear ...


    Can you also please list your /etc/fstab file?
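    Something like this will do, just paste the output here:

    Code
    cat /etc/fstab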

    How did you create the filesystem on it? Via the OMV GUI, or manually?


    If you created it manually, I had the same issue. So I just created a new filesystem and then mounted it.


    Is there anything on it?

    Question: (nOOb alert!)

    It appears that once a drive has been part of an OMV RAID, future installs of OMV run afoul of latent files or the partitioning structure? I noticed that installs on the used drive went to partition 2, whereas the install on the clean USB drive went to partition 1 and booted successfully into the operating system. What is the best way to totally wipe the old RAID drive?


    It looks like you want to reuse a drive that was part of a raid before, correct?


    A very simple way is to use


    Code
    dd if=/dev/zero of=/dev/sdX bs=1M count=100


    That erases all information in the first 100 MB of the drive and therefore eliminates all RAID information (most likely the first 4 MB would be good enough anyway ...).
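    If you prefer a more targeted wipe, something like this should also do it (sdX/sdX1 are placeholders for your old RAID member - adjust to your drive):

    Code
    # remove the md superblock from the old RAID member partition
    mdadm --zero-superblock /dev/sdX1
    # wipe all remaining filesystem/RAID signatures from the whole drive
    wipefs -a /dev/sdX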

    Yes I agree, but if you look at the original post and the current configuration of that massive machine ... it does not really make any difference.


    It would make a difference if you had an all-SSD, 18 W system. But with that monster and that many spinning disks ... it should be less than the losses from the power supply.

    You cannot mount a raid .. you need to create a filesystem on the raid, and then you can mount that filesystem somewhere.


    So first go to Storage->Filesystems and press the + icon.

    In the dialog you select the filesystem type (ext4) and then you select the raid device, where you want to create the filesystem on (/dev/md126).


    Repeat it for the second raid.


    Then you can go to

    Storage->Filesystems again and mount the filesystem


    And after that go to Storage->Shared Folders and share that filesystem.
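    For reference, the rough CLI equivalent of the filesystem part looks like this (just a sketch - in OMV you should do this through the GUI so the mount ends up in its database; /dev/md126 and the mount point are only examples):

    Code
    mkfs.ext4 /dev/md126
    mkdir -p /mnt/raid1
    mount /dev/md126 /mnt/raid1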

    Then I would just go for a small SSD ..


    That was the cheapest I could find in a few seconds...


    Intenso Interne 2,5" SSD SATA III Top, 128 GB, 550 MB/Sekunden, Schwarz
    www.amazon.de


    And that is the expensive Amazon price :) 11,xx Euros


    You can find it even cheaper at 8,37 Euros ...


    A pen drive is pretty much in the same price range now, and it is much slower and less safe (e.g. wear levelling is only found on very few thumb drives).

    The screenshot is from the network boot attempt. It fails because you have not connected any network cable to the Intel network card.


    Anyhow, that is not your issue.


    Check the BIOS for boot order.

    Disable network boot - you most probably will not need it.

    Check if your new drive is the 1st drive in boot order or at least the first HDD/SSD.


    Then try a reboot, you should see the Debian Boot message.


    If not, try to remove any drive other than the OS drive, and reinstall.


    This gives you better chances of success.


    Also try to put your OS drive to SATA port 0 of the onboard controller (whatever that is on your mainboard).


    All in all you should end up with this drive being /dev/sda .. and then it should run pretty smoothly.
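    Once booted, you can quickly check which device the OS drive ended up as, e.g. with:

    Code
    lsblk -o NAME,SIZE,MODEL,MOUNTPOINT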

    I can only agree with ryecoaaron.


    RAID is not a backup. It is a redundancy system to prevent data loss if a disk fails. It does not provide any protection against file deletion, mistakes of any kind, or malware (including encryption or wiping).


    So you should always have a backup, and you may additionally want a RAID array to prevent downtime for your application. If you go that route, you need both: backup AND RAID.


    If you do not need any backup, then no problem - just go with RAID.

    No, that is not how I would do that.


    Sure, have a good backup - but you can actually do it this way.


    You do this procedure one drive at a time.


    1. You fail a drive

    Code
    mdadm /dev/md2 --fail /dev/sdX


    2. Then you remove the drive from the raid

    Code
    mdadm /dev/md2 --remove /dev/sdX


    Now you power down your system and replace the physical drive with a new one.


    3. Now you add the new drive and wait for the RAID to resync:

    Code
    mdadm /dev/md2 --add /dev/sdX
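    You can watch the resync progress and should only continue with the next drive once it has finished, e.g. with:

    Code
    cat /proc/mdstat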


    Please be aware that this operation can potentially fail and you should have (as always) a good backup.


    After you have done the procedure 4 times, you have replaced all four disks in your RAID 10 with shiny fresh new drives.
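    If the new drives are bigger and you want to use the extra capacity, the usual follow-up is to grow the array and then the filesystem. A sketch, assuming /dev/md2 with an ext4 filesystem directly on it:

    Code
    mdadm --grow /dev/md2 --size=max
    resize2fs /dev/md2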

    I know it does not help any longer, because you now have a RAID5.


    RAID 1 is maximum protection as you have a full mirror, but at the cost of space efficiency. Only 50% of the capacity is useable.

    RAID5 has better efficiency (75% usable with 4 disks), but at the expense of more CPU overhead.


    What you asked for would be a combination of RAID 0 (no protection, just a stripe set for more performance over a set of disks) and RAID 1 (mirroring).


    And that is exactly what RAID 10 is.


    Just for anyone else to get in here, this would have been the procedure:


    1. Convert your existing RAID 1 into a RAID10. This is a non-disruptive task, as actually no data has to be moved.

    2. Then add two disks to your RAID 10, where effectively each of your mirrors gets one more disk and becomes a stripe set (now two disks long).


    So that would be what you asked for.


    However, RAID5 is much more efficient and you get more space out of the 4 disks (12 TB instead of 8 TB).


    There is also a procedure to convert a RAID1 online into a (degraded) RAID5, then add the third drive, and then grow it further to 4 drives.

    A guide to mdadm - Linux Raid Wiki


    Upgrading a mirror raid to a parity raid

    The following commands will convert a two-disk mirror into a degraded two-disk raid5, and then add the third disk for a fully functional raid5 array. Note that the first command will fail if run on the array we've just grown in the previous section if you changed raid-devices to three. If you already have a 3-disk mirror configured as two active, and one spare, you will need to run the first command then just change raid-devices to three. The code will reject any attempt to grow an array with more than two active devices.

    Code
    mdadm --grow /dev/md/mirror --level=5
    mdadm --grow /dev/md/mirror --add /dev/sdc1 --raid-devices=3
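    Growing it further to 4 drives then follows the same pattern (assuming the fourth disk is /dev/sdd1):

    Code
    mdadm --grow /dev/md/mirror --add /dev/sdd1 --raid-devices=4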