Posts by Sc0rp


    You have to use the console, I think:
    - there you can work with "mc" (Midnight Commander, a Norton Commander clone)
    - or with "mv", the Linux move command

    Or you (install and) use tools like Cloud Commander ... e.g. in a Docker environment.

    Moving files between filesystems always has to be "physical", i.e. the data is actually copied between the partitions ...



    Normally you can try (on the console):
    touch /forcefsck

    This creates the empty file "forcefsck" in the root directory - at the next boot it will force an fsck on all filesystems listed in the fstab ...

    But I'm not sure whether this works on ZFS/btrfs too; XFS is covered, as well as every ext version ...
    (on OMV 4.x with systemd I don't have a clue ... currently ... but I have read that there are kernel boot options for this)
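    On systemd-based installs the kernel boot option alluded to above is fsck.mode=force; a minimal sketch, assuming GRUB as the bootloader (the config lines are illustrative, not from this thread):

```shell
# Assumption: GRUB bootloader; only the relevant config lines are shown.
# In /etc/default/grub, append fsck.mode=force to the kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet fsck.mode=force"
# Then regenerate the grub config and reboot:
#   update-grub && reboot
```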



    I have the "Test" pool selected, but am trying to grow the "Home" pool, which is linear. I'm aware a stripe can't be grown.

    Then you have to select the other array ... but:
    - you can grow any RAID array made with md, even striped ones
    - your striped RAID1 array seems to be made of two fake-RAID drives (dm = device-mapper) - so you can only alter this array in the fake-RAID environment (the BIOS of the controller); md only "reads" it ...

    Which physical disk do you want to add to "Home"?
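    A sketch of what growing an md array looks like on the console - the array and device names below are placeholders, so the commands are only echoed here; on the real box you would verify the names first (e.g. with cat /proc/mdstat), back up, and drop the echo:

```shell
# Placeholders: /dev/md1 = the linear "Home" array, /dev/sdX = the new disk.
# Echoed instead of executed, because these names must be verified first.
echo "mdadm --grow /dev/md1 --add /dev/sdX"       # linear array: append a disk
echo "mdadm --grow /dev/md0 --raid-devices=3"     # RAID array: reshape after --add
```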



    Why would you make a RAID10 boot disk?

    Setting up RAID devices as boot devices is more difficult, because you have to add the RAID modules to the initramfs ... but if you already have an SSD as a boot device, it's not worth the time ...

    And you have to do this first ... before installing OMV, so you have to use the Debian netinstall ISO and, after everything is finished, install OMV on top of it.

    For making a RAID1 boot device, just google around ... (e.g.



    Will RAID-Z1 also cope with such an issue?

    More or less; "cutting" the power destroys any information held in caches ... HDD cache & RAM, which affects any filesystem.
    ZFS brings the best algorithms to deal with that, but unexpected power loss should be avoided at all costs!



    In your second "blkid" output sde is available but sdb is missing ??? What the hell is going on on your box? fdisk shows sdb ... very curious.

    How best do I recover this raid and restore my system?

    What you can do:
    - assemble the remaining 3 drives into a "degraded array" -> and back up your data (read-only mode preferred):
    mdadm --assemble --readonly /dev/md0 /dev/sdc /dev/sde /dev/sdf
    (change md0 to md127 if you want; --readonly may not work)
    You can then escalate the command:
    mdadm --assemble --run /dev/md0 /dev/sdc /dev/sde /dev/sdf
    mdadm --assemble --run --force /dev/md0 /dev/sdc /dev/sde /dev/sdf

    - sdb has to be zeroed, since it has wrong partition info:
    dd if=/dev/zero of=/dev/sdb bs=4096 count=16
    - then reassemble the array with the 3 remaining disks
    (see above)
    - add sdb again (as a spare drive, but it will be used immediately and the rebuild will start):
    mdadm --add /dev/md0 /dev/sdb
    (change md0 to md127, as shown by cat /proc/mdstat!)

    If you get errors, stop and post them!
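    To see what the zeroing step does without risking a real disk, here is a safe demo against a scratch file (the file path is invented; the real command targets /dev/sdb):

```shell
# Stand-in for /dev/sdb: a 1 MiB scratch file with some fake data at the front.
truncate -s 1M /tmp/demo-disk.img
printf 'OLD-PARTITION-TABLE' | dd of=/tmp/demo-disk.img conv=notrunc 2>/dev/null
# Same zeroing as in the post: wipe the first 64 KiB (16 x 4096 bytes).
dd if=/dev/zero of=/tmp/demo-disk.img bs=4096 count=16 conv=notrunc 2>/dev/null
# The leading bytes are now all zero:
head -c 19 /tmp/demo-disk.img | od -An -tx1
```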

    Should I upgrade to version 3 openmediavault first?

    Nope, finish the RAID rebuild first, then you may upgrade ...



    Current OMV uses the label as the first option for device mounting and mount path generation.

    Right, therefore:

    Just take a look via console (SSH) in your /srv directory:
    ls -la /srv

    ... to check what is used ...

    I assume that the drive under /dev/sda is the one that the kernel sees as "port 1," and so on?

    Right ... but only for older systems. Currently the fstab uses the UUID for mounting, which is more fail-safe.

    As @subzero79 explained, under the /srv directory you'll find mountpoints with (hopefully) "dev-disk-by-label-<label>" entries if you use labels; otherwise you'll find "dev-disk-by-id-<id>" entries. If you have the label ones, you are good to go with your plan - as long as you keep the UUID thing in mind (you have to track the UUID down on the system to get the connection between the old sd[a-z] naming, the partition UUID and the drive's serial number).
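    A sketch of pulling that device-to-UUID mapping out of blkid output; the sample output is invented, so the pipeline can be shown without the real disks:

```shell
# Invented sample of blkid output (stand-in for the real NAS).
blkid_out='/dev/sda1: LABEL="bay1" UUID="1111-aaaa" TYPE="ext4"
/dev/sdb1: LABEL="bay2" UUID="2222-bbbb" TYPE="ext4"'
# Extract "device UUID" pairs from it:
printf '%s\n' "$blkid_out" | sed -n 's/^\([^:]*\):.*UUID="\([^"]*\)".*/\1 \2/p'
```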



    I label the drives (okay, partitions) according to their position in the hot-swap bay.

    Understood, but be aware that this naming structure is not bound to a "bay" - it is bound to the (SATA) ports on your mainboard, to which of them the BIOS/UEFI recognizes (or has coded in) first, and to what the kernel then finds (kernel-module related) ... so if your BIOS/UEFI changes the order, or the kernel module does, it will not work anymore.

    Therefore most hot-swap setups use "port binding" (i.e. SATA port 1 is connected to bay 1 and so on), while retrieving the disk data "dynamically" from command outputs, e.g. the serial number of the (dead/faulty) disk via the command line.

    You can "connect" a SATA port to a specific mountpoint, but only logically/virtually. You can do this by editing the /etc/fstab file directly (using "temporary labels") - OMV will read and use this.
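    A hypothetical fstab entry of that kind (the UUID and mountpoint are invented placeholders):

```shell
# Example line for /etc/fstab - the UUID stays the mount key,
# while the mountpoint name reflects the bay:
# UUID=1111-aaaa  /srv/bay1  ext4  defaults,nofail  0  2
```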

    Changing drives in a SnapRAID/mergerfs setup is easy, since there are two ways you can use:
    - official way: replace the disk you want and recover it with SnapRAID (takes time and stresses the disks, but is easy to manage)
    - unofficial way: just clone the disk you want to replace directly to the new one (using another PC, eSATA or another SATA port, with the "dd" command), and after that is done, edit & correct all the needed files to match the new "hardware" ...

    You can change the label of a partition as often as you want, but you have to make sure which identifier is used for mounting (it should be the UUID in the fstab).

    Just take a look via console (SSH) in your /srv directory:
    ls -la /srv



    The drive labels of the old drives are the location of the drives in the bays, so I want the new drives to use the same labels.

    I'm struggling to understand which "labels" you mean, and why you need to bind them to "bays".
    Drives use serial numbers for identification (which are unique), and partitions use UUIDs (which are also unique); the only "static" part can be the root filesystem ... with the mountpoints under /srv (there you can have a directory named "bay1" or "bay2") ...

    6) Use e2label to re-name drive to "proper" label

    e2label sets partition labels, not drive labels!

    So you have to alter the mounts in the /etc/fstab file at least; maybe there is a way via the OMV WebGUI, but I don't know it ...
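    A quick way to convince yourself that relabeling is harmless: check which identifier the fstab actually mounts by. The sample line below is invented, so the check can be shown without the real box:

```shell
# Invented fstab line - OMV-generated mounts are keyed to the UUID:
fstab_line='UUID=2222-bbbb /srv/dev-disk-by-label-bay2 ext4 defaults,nofail 0 2'
# If this prints a UUID, e2label can change the label without breaking the mount:
printf '%s\n' "$fstab_line" | grep -o 'UUID=[^ ]*'
```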



    now I see an unused filesystem labeled from RAID 5.

    Where did you see that? That's not in your screenshots ...

    Please enable SSH on your NAS and copy the outputs as text into your posts; this will make the analysis a lot easier ...

    Here are the important commands for the md RAID:
    cat /proc/mdstat
    mdadm -D /dev/md0

    as well as a complete (not truncated) output of:

    Btw.: the error shown in your screenshots is not RAID-related, it is FS-related ... maybe you had a power loss?
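    For reading those outputs, the member-status field in /proc/mdstat is the quickest health check; a small demo on an invented sample (the values are not from this thread):

```shell
# Invented /proc/mdstat excerpt for a healthy 4-disk RAID5.
mdstat='md0 : active raid5 sdb[0] sdc[1] sdd[2] sde[3]
      8790405888 blocks level 5, 512k chunk, algorithm 2 [4/4] [UUUU]'
# [UUUU] means all members are up; an underscore marks a failed/missing member.
printf '%s\n' "$mdstat" | grep -o '\[[U_]*\]'
```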



    Beginning from the start post, I have a question:

    How did you use your box? 24/7? Do you power the box down at all?

    The errors are not RAID-related - only FS-related. And this suggests that you (sometimes?) hard-cut the power of your system, or something similar.

    fsck often needs more than one run ...

    And what is the recommendation for the 3x3TB drives I have? I need at least 6 TB for storage.

    - Backup, Backup and Backup ... then the redundancy is optional :P
    - RAID5 (even if it is "only for business continuity" ... it does that job)
    - ZFS RAID-Z1 (more complex than RAID, but also more powerful - very RAM-hungry)
    - SnapRAID
    - single disks (or LVM) with an rsync construct
    - a UPS ("USV" in German)
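    The arithmetic behind these options for 3 x 3 TB, showing why any single-parity layout covers the 6 TB requirement:

```shell
# Usable capacity with one disk's worth of parity (RAID5, RAID-Z1, SnapRAID):
disks=3; size_tb=3
echo "usable: $(( (disks - 1) * size_tb )) TB"   # prints: usable: 6 TB
```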



    It appears that drive SDE is part of SDB.

    Where did you see that? That cannot be or happen ... under Linux ;)

    lsblk shows your setup correctly (as the kernel detects it) - and there sde is completely missing as well.

    Just check the cabling (incl. power) of this drive - since sdb shows wrong partitioning, sde is your only hope to get your array back ... (on RAID5 you need 3 of the 4 drives to recover).



    - OMV Linux server is "seen" by the router in the MAC table (MAC address & IP given correctly)
    - Router Typ (this is German :-), with a capital letter): Vodafone Easibox (802, if I recall correctly)
    - DHCP is used

    This suggests that DHCP on the Easybox is working and that your OMV box can reach the internet -> no problems here.

    - Ping to worked fine from the OMV server; pinging the OMV server from the PC (WLAN-router-LAN) did not work (neither v4 nor v6 IP addresses)

    That's the problem; it seems that your Easybox doesn't hand the traffic over from WLAN to LAN ... make sure you don't use a "guest mode", and double-check the WLAN part of the Easybox.

    - Direct connection PC to server via LAN-router-LAN will be checked Friday, also with server HW details

    This test is highly important - but the HW details are now unnecessary, because the internet is working over the LAN connection ...

    - ssh was enabled, but a list of "how to diagnostics" within the OMV console is appreciated - result will then be copied here and might save some questioning cycles ?

    After narrowing the problem down to your Easybox, the list of "diagnostics" is very short :P - just try to reach your OMV box via PuTTY/KiTTY or a similar SSH client (as well as the other services like SMB/CIFS or HTTP(S)).

    But if you can (finally) ping your OMV box without issues from your PC (or any other device in the (W)LAN) - you're done ... :D
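    For the requested diagnostics, a short read-only checklist for the OMV console; the gateway address is an example, not taken from this thread:

```shell
# Read-only network checks from the OMV console:
#   ip addr show              -> which IPs do the NICs actually have?
#   ip route show             -> is the Easybox set as the default gateway?
#   ping -c 3 192.168.2.1     -> is the gateway reachable? (example address)
#   cat /etc/resolv.conf      -> is DNS pointing at the router?
```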



    - how is your network set up? (where are LAN and WLAN mixed?)
    - how does your router "see" your OMV box? (please explain)
    - is DHCP used?
    - is SSH enabled on OMV, and can you use it from a "fixed" network (e.g. a LAN-cabled host)?

    omv-firstaid was tried, without result.

    Was that done on the local console? Did you ping your gateway (router) or other hosts?

    .... and now the IP config shows error code 7003, which is new (tried: activate v4, activate (or) deactivate v6 (both options), decline WOL).

    On the omv-box?

    Please also add details about the hardware used: router type (seems to be something German :D) and the NIC type in your OMV box ...