Posts by 4k8uiMg3pYTJVtFQ5QsF

    Any idea what I can do to get rid of the error messages after "Setting up Salt environment"?


    Thanks in advance!

    Your problem is not the usage of the drives, but the expectation that there is an additional spare drive, which you do not have. Removing spares=1 makes the expectation correct in that you do not have spare disks.


    So the mail should no longer be sent.
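
    For illustration, the difference in /etc/mdadm/mdadm.conf would look roughly like this (UUID and name below are placeholders, not taken from your system):

    Code
    # before: the monitor expects one spare that does not exist
    ARRAY /dev/md1 metadata=1.2 spares=1 name=nas:data UUID=00000000:00000000:00000000:00000000
    # after: without spares=1 the expectation matches the four active disks
    ARRAY /dev/md1 metadata=1.2 name=nas:data UUID=00000000:00000000:00000000:00000000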


    Try what geaves said first.

    Ok, I have done what geaves said. No mail regarding a missing spare after reboot.


    Probably I have a misunderstanding, but I'm still a bit confused: I have 4x 2.73 TiB, and the array has a size of 8.19 TiB (3x 2.73 TiB). Shouldn't one of the drives run as a spare device? (RAID 5 -> n-1)


    WebGUI shows:

    That looks to me like I could hit the "File Systems - Resize" button to increase the array size to 10.92 TiB, like JBOD?

    This is something I'm not aware of, but the way to create the conf file in OMV5 is omv-salt deploy run, though I'm not sure whether the argument after deploy run is mdadm or mdadm.conf; after that, run update-initramfs -u

    Code
    omv-salt deploy run mdadm

    worked and gave me:


    Code
    update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-5.10.0-10-amd64

    Finished. Should I reboot, or is there anything else to do?

    So the config file is different from reality (it defines 1 spare).


    If it were my system, I would manually replace the ARRAY line in /etc/mdadm/mdadm.conf with the one from the mkconf command, but maybe Mr. RAID geaves knows the proper way to do it from the UI.
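
    A minimal sketch of that manual route (review the mkconf output before pasting anything; the grep only picks out the relevant line):

    Code
    # generate a fresh ARRAY line from the current state of the arrays
    /usr/share/mdadm/mkconf | grep '^ARRAY'
    # replace the old ARRAY line in /etc/mdadm/mdadm.conf with that output,
    # then rebuild the initramfs so the boot environment also picks up the change
    update-initramfs -u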

    Would that turn the spare drive into a "normal" drive afterwards, or would the OMV config then recognize the drive as a spare, as defined in mdadm.conf?!?

    The result:

    Try to execute /usr/share/mdadm/mkconf and paste the result.
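
    If you want to compare it directly with the current config, something like this also works (just a quick diff sketch, run in bash):

    Code
    # live config versus what mdadm would generate right now
    diff <(grep '^ARRAY' /etc/mdadm/mdadm.conf) <(/usr/share/mdadm/mkconf | grep '^ARRAY')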

    Zoki


    No, I have performed all changes from the webGUI.

    I had an array with two drives and one with three drives. One drive from the first array died, so I added the remaining drive to array two and expanded the filesystem. But now I get these messages.
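
    For context, I think that GUI operation maps roughly to this on the command line (md1 and sda are the names my system shows; I only ever used the webGUI, so treat this as a sketch):

    Code
    # add the leftover disk and grow the 3-disk RAID5 to 4 active members
    mdadm --add /dev/md1 /dev/sda
    mdadm --grow /dev/md1 --raid-devices=4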


    cat /etc/mdadm/mdadm.conf




    /usr/share/mdadm/mkconf

    Today I used the NAS again after a longer time; after booting up I got the following email:


    Quote
    Code
    active (auto-read-only)

    ....


    I logged in via SSH and checked as geaves told me last time

    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
    md1 : active raid5 sde[2] sda[4] sdd[1] sdb[0]
          8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    active again

    Quote

    mdadm --detail /dev/md0



    Ten minutes later I got another email; the array is active again:



    At the moment I have doubts about trusting my NAS system.... :-D

    The only time I reboot is when there is a kernel update; otherwise mine is on 24/7.

    How much energy does your HP N54L Microserver consume per day? I have been looking for a while for a replacement for my current server (an old Intel Atom D510), but currently the delivery situation is not good, energy prices are very high in my country (about $0.80/kWh at the moment), and I would like the next NAS to be a 100% silent system.

    I nearly bought an Odroid HC4, but when I searched the web I often read about problems, and Armbian is looking for a maintainer for it at the moment. So I think x86 or Raspberry Pi (big fan base) is the best way to be safe regarding Debian support. RPi Compute Modules are sold out everywhere, so I have to wait, which also means I have more time to find the right system ....

    Thanks for your answer.

    TBH I'm not sure; this -> Recover "Add hot spares" must be something new or a change.

    Okay, probably I have a misunderstanding, I don't know.

    I used the "+" and in the next dialog, where I selected /dev/sda, the text says "add hot spares / recover raid device".

    Just for info: each disk has 2.73 TiB capacity.


    This info email "spare is missing" seems to be sent every time I boot the NAS.


    Does it make sense to report this as a "bug" anywhere?

    Happy new year to everyone!

    2) Reinstall

    I have upgraded to OMV6 but need your help with this issue. Today I got two mails from the NAS:

    But when I check the GUI it tells me that the array is clean, yet in the details I see 4 working devices and no spare device.


    But I added /dev/sda with the GUI via Storage --> Raid Management --> Recover "Add hot spares"; nevertheless, this seems not to work.

    I already used the GUI remove button to remove /dev/sda, made a quick erase of /dev/sda and added it again as a hot spare. During the rebuild process the details showed 1 device as a spare device, but after finishing all devices are active devices.
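
    For reference, the same details can be checked on the command line like this (md0 is only an example here; use the device the GUI shows):

    Code
    # list member roles; a hot spare would be counted under "Spare Devices"
    mdadm --detail /dev/md0 | grep -E 'Raid Devices|Spare Devices|/dev/sd'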


    What am I doing wrong?


    Thanks in advance for your support.

    Hi geaves,

    2) Reinstall

    I reinstalled OMV6. So far so good. Some questions where I hope that you can help me:


    1. Is there any way to display temperature and fan speed in the dashboard?
    2. You mentioned the omv-extras plugins. Is the installation still done with?!?:
    Code
    wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash

    Thank you!

    Hi geaves,


    thanks for your answer!

    In Raid Management, select the RAID, then on the menu click Remove; this should display a dialog listing the drives within the array.
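
    For reference, the command-line equivalent of that remove would be roughly the following (sdX is a placeholder for the member to take out; the GUI route above is the one to prefer):

    Code
    # mark the member as faulty, then pull it out of the array
    mdadm --manage /dev/md0 --fail /dev/sdX
    mdadm --manage /dev/md0 --remove /dev/sdX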

    There I selected "sda" and clicked on remove. After that I got the error from #9. But the webGUI showed me that sda was removed, so I shut down the NAS, removed sda and rebooted.


    What's the output of fdisk -l | grep "Disk "


    mdadm --readwrite /dev/md0 should correct that

    Code
    mdadm: failed to set writable for /dev/md0: Device or resource busy


    Either way the array is in a clean/degraded state and requires a new drive to be added.

    If I don't want to replace the drive, what is the best way? Rebuild the RAID?

    Here is the output:

    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
    md0 : active (auto-read-only) raid6 sdb[1] sdc[2] sdg[5] sdd[3] sdf[4]
          11720536064 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [_UUUUU]
          bitmap: 1/22 pages [4KB], 65536KB chunk
    
    unused devices: <none>


    blkid

    Code
    /dev/sdb: UUID="51ec32cf-f4a4-abdf-75b7-87de3239679f" UUID_SUB="e4c70246-20e7-f375-1127-a24618676fea" LABEL="nas-omv-netgear:6x3TBWDREDRAID6" TYPE="linux_raid_member"
    /dev/sdc: UUID="51ec32cf-f4a4-abdf-75b7-87de3239679f" UUID_SUB="2166f5e2-05b8-62b0-c4b6-d6567b98197e" LABEL="nas-omv-netgear:6x3TBWDREDRAID6" TYPE="linux_raid_member"
    /dev/sdd: UUID="51ec32cf-f4a4-abdf-75b7-87de3239679f" UUID_SUB="ad00b629-3f9f-a24d-404d-4ed9dcbe10fe" LABEL="nas-omv-netgear:6x3TBWDREDRAID6" TYPE="linux_raid_member"
    /dev/sde1: SEC_TYPE="msdos" UUID="4F9B-6731" TYPE="vfat"
    /dev/sdf: UUID="51ec32cf-f4a4-abdf-75b7-87de3239679f" UUID_SUB="cedd2161-b2ab-6a87-3600-8d19b7fcb708" LABEL="nas-omv-netgear:6x3TBWDREDRAID6" TYPE="linux_raid_member"
    /dev/sdg: UUID="51ec32cf-f4a4-abdf-75b7-87de3239679f" UUID_SUB="ba49dc21-6550-7d59-83b9-43b1f6d183fb" LABEL="nas-omv-netgear:6x3TBWDREDRAID6" TYPE="linux_raid_member"
    /dev/sda2: UUID="d9a339ff-32d3-4af5-ac78-d051f898aacd" UUID_SUB="ab689f8b-4ad6-49d6-990a-25d9e1e5bd47" TYPE="btrfs" PARTUUID="2c6ef4dc-990d-4370-bc29-933a43994e80"
    /dev/sda3: UUID="2cdf4bd5-ce54-4023-8e84-2190040f56a6" TYPE="swap" PARTUUID="b8ab6da8-b910-4ae9-a610-70167339c057"
    /dev/md0: LABEL="NASRAID6" UUID="f2a2f310-a710-4716-80b2-490e1a20a232" UUID_SUB="2bda88d3-0a3d-467e-aafd-4a0fc50ab08a" TYPE="btrfs"
    /dev/sda1: PARTUUID="08dfaaf8-3078-498a-86f5-06520089ca74"


    mdadm --detail /dev/md0

    Ok, today I tried to remove the drive via the webGUI. After clicking on remove, I got the following error message:


    Code
    devices: The value {"1":"\/dev\/sdb","2":"\/dev\/sdc","3":"\/dev\/sdd","4":"\/dev\/sdf","5":"\/dev\/sdg"} is not an array.

    The details show:


    Via email, OMV sent me:


    I have turned off the NAS and removed the defective drive.

    Is there any way that OMV can rebuild the array as a RAID 6 with 5 drives? With the 6 drives I had enough free space, so I assumed that this is somehow possible.
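
    From what I have read so far, the manual route for dropping to 5 drives would look roughly like the sketch below; I have not tried it, the mount point and the 8T figure are placeholders, and I would only attempt it with my backups at hand:

    Code
    # 1) shrink the btrfs filesystem on /dev/md0 below the future array size
    #    (the mount point is a placeholder for wherever NASRAID6 is mounted)
    btrfs filesystem resize 8T /srv/dev-disk-by-label-NASRAID6
    # 2) reduce the array size to the capacity of a 5-disk RAID6, then reshape
    mdadm --grow /dev/md0 --array-size=8790402048
    mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape.backup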

    Thanks for your answer and the hint; I found the option in the GUI to remove drives from the RAID array. Does it make sense to remove the suspicious drive already, or should I wait until the GUI shows something other than "clean" in the RAID section? In the SMART section the suspicious drive already has a red light.

    Hello forum,


    I'm running OMV 5.6.21 with 6x 3TB drives in a RAID 6.

    I have three backups (1 full backup, 2 backups from important data).


    One of the RAID6 drives is reporting:

    • 197 Current_Pending_Sector -O--CK 200 200 000 - 27
    • 198 Offline_Uncorrectable ----CK 200 200 000 - 1

    The drive was stable at 13 pending sectors for a long time, but today I got the message that the number of sectors has doubled.
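
    For reference, those two lines are the kind of attribute table rows that smartctl prints; the full table can be seen with (sdX stands for the suspect drive):

    Code
    # print the full SMART attribute table, including 197 and 198
    smartctl -A /dev/sdX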


    Can I shut down the NAS and remove the drive, and will the array then run as RAID 5?

    Or what would you do?


    Thanks in advance!