6x 3TB RAID6 - one disk dying, what to do?

  • The only time I reboot is when there is a kernel update, otherwise mine is on 24/7

    How much energy does your HP N54L MicroServer consume per day? I have been looking for a replacement for my current server (an old Intel Atom D510) for a while, but availability is currently poor, energy prices in my country are very high (~$0.80/kWh at the moment), and I would like the next NAS to be a 100% silent system.

    I nearly bought an Odroid HC4, but searching the web I often read about problems, and Armbian is looking for a maintainer for it at the moment. So I think x86 or Raspberry Pi (big fan base) is the safest way to go regarding Debian support. RPi compute modules are sold out everywhere, so I have to wait, which at least gives me more time to find the right system ....

    • Official post

    How much energy does your HP N54L MicroServer consume per day?

    Never measured it; it just sits there, idle most of the time. I boot from a USB flash drive and have 6 drives installed, but I would guess it's way lower than my previous second-hand Intel server with a Xeon processor.

    I nearly bought an Odroid HC4

    I had one of those down as a possible, but purely as an rsync server


    EDIT: I have this bookmarked should my N54L bite the dust, but that would require a motherboard and RAM, and I bought my N54L off eBay

    • Official post

    How much energy does your HP N54L MicroServer consume per day? I have been looking for a replacement for my current server (an old Intel Atom D510) for a while, but availability is currently poor, energy prices in my country are very high (~$0.80/kWh at the moment), and I would like the next NAS to be a 100% silent system.

    I nearly bought an Odroid HC4, but searching the web I often read about problems, and Armbian has no maintainer for it at the moment. So I think x86 or Raspberry Pi (big fan base) is the safest way to go regarding Debian support. RPi compute modules are sold out everywhere, so I have to wait, which at least gives me more time to find the right system ....

    I don't usually express this opinion, but we are starting the year and someday these things have to be said. Many will not like what I am about to say and will argue with it, but I doubt they have many solid arguments to refute it.


    If you compare the power consumption of a Raspberry Pi to a modern low-power processor, the Raspberry Pi will win ... but by a very small margin. Someone will want to argue this, but in the end the values are what they are: the differences are small. If you add hard drives to that consumption, the totals increase considerably. Depending on the number of disks, the CPU consumption may be irrelevant to the total. If you also take into account that it is recommended to keep hard drives spinning to achieve greater longevity, the differences in consumption between the two platforms are minimal.

    If you are thinking of spinning your disks down, you must weigh the cost of the reduced lifespan of the disks against the power they will draw spinning 24 hours a day, 7 days a week. Run the numbers.
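    To put rough figures on "run the numbers", here is a small sketch. The drive count, the ~5 W idle draw per 3.5" disk, and the ~$0.80/kWh price are assumptions taken loosely from this thread; substitute your own values.

```shell
# Rough yearly energy cost of keeping drives spinning (assumed figures, adjust to yours)
DRIVES=6
WATTS_PER_DRIVE=5        # typical 3.5" idle draw; check your drive's datasheet
PRICE_PER_KWH=0.80       # $/kWh, as quoted earlier in the thread

KWH_PER_YEAR=$(awk -v d="$DRIVES" -v w="$WATTS_PER_DRIVE" \
    'BEGIN { printf "%.1f", d * w * 24 * 365 / 1000 }')
COST=$(awk -v k="$KWH_PER_YEAR" -v p="$PRICE_PER_KWH" \
    'BEGIN { printf "%.0f", k * p }')
echo "${KWH_PER_YEAR} kWh/year, ~\$${COST}/year"
```

    At those assumed rates the disks alone cost far more per year than the difference between a Raspberry Pi and an amd64 board, which is the point being made above.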

    From there, considering the advantages an amd64 board has over a Raspberry Pi, such as SATA ports, updates, stability, compatibility ... I do not understand the reasons for using a Raspberry Pi.

    The size? Perhaps it could be an advantage, although a small mini-ITX case will be only a little larger than a Raspberry Pi. And considering that you can end up with several external hard drives, several power supplies, and cables everywhere ... this supposed advantage will probably also disappear and become a disadvantage compared to a closed case.

    The price? Then I would ask you: how much are you going to spend on hard drives? How much more will an amd64 board cost compared to a Raspberry Pi? Run the numbers. That is without even considering that a disk may end up dying prematurely because of the USB connection. If you are going to use an external enclosure for multiple hard drives, the price will be higher still.

    The only real reason I can think of is to carry a travel server in a suitcase; I can't think of another.


    Happy New Year !!

    • Official post

    I understood they are looking for maintainers for a number of boards, but it is not as if they have no maintainers at all. And it seems they do have a maintainer for the HC4 (at least it is not on the list of boards lacking a maintainer).

  • Today I used the NAS again after a long time; on boot I got the following email:


    Quote
    Code
    active (auto-read-only)

    ....


    Logged in via SSH and checked as geaves told me last time:

    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
    md1 : active raid5 sde[2] sda[4] sdd[1] sdb[0]
          8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    active again
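    As a side note, the bracketed status string in that mdstat output already tells you the array state: one character per member, `U` for up, `_` for a missing device. A small sketch parsing the line above:

```shell
# The "[UUUU]" field has one character per member: "U" = up, "_" = down/missing.
line='8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]'
status=$(echo "$line" | grep -o '\[[U_]*\]')
case "$status" in
  *_*) state="degraded" ;;
  *)   state="all devices up" ;;
esac
echo "$status -> $state"
```

    `[4/4] [UUUU]` therefore means all four members are present and in sync.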

    Quote

    mdadm --detail /dev/md0



    10 minutes later I got another email, the array is active again:



    At the moment I have doubts about trusting my NAS system .... :D

    • Official post

    RPi compute modules can have real SATA ports (depending on the board --> https://www.jeffgeerling.com/b…ow-has-sata-support-built).

    Ok, now tell me what advantage you see in that compared to a good mini-ITX board. I don't see any; on the contrary, I only see disadvantages. The size is bigger once you connect the PCIe card and mount everything. Where are you going to mount it? In a special case, I guess. You cannot boot from a SATA port. You will not be able to expand RAM, CPU, ... in the future. You need a special operating system.

    I think this system still needs time to become competitive. I'm really glad it's evolving, and I hope one day it will reach the necessary level, but not yet. I prefer simplicity. Good luck.

  • Did you define an additional spare in /etc/mdadm/mdadm.conf which you do not have in your system?


    cat /etc/mdadm/mdadm.conf


    vs /usr/share/mdadm/mkconf

    If you got help in the forum and want to give something back to the project click here (omv) or here (scroll down) (plugins) and write up your solution for others.

  • Zoki


    No, I performed all changes from the web GUI.

    I had an array with two drives and one with three drives. One drive from the first array died, so I added the remaining drive to array two and expanded the filesystem. But now I get these messages.


    cat /etc/mdadm/mdadm.conf




    /usr/share/mdadm/mkconf

  • Read again: ARRAY /dev/md1 metadata=1.2 spares=1 name=nas:1 UUID=587b4360:79e7be24:3e7f1cc8:6841b435


    Try to execute /usr/share/mdadm/mkconf and paste the result.
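    To see the mismatch directly, you can diff just the ARRAY lines from the two sources. On the live system that would be `diff <(grep '^ARRAY' /etc/mdadm/mdadm.conf) <(/usr/share/mdadm/mkconf | grep '^ARRAY')`. Sketched below with the line quoted in this thread and a hypothetical mkconf result (the real output may differ):

```shell
# ARRAY line currently in /etc/mdadm/mdadm.conf (quoted earlier in this thread):
conf_line='ARRAY /dev/md1 metadata=1.2 spares=1 name=nas:1 UUID=587b4360:79e7be24:3e7f1cc8:6841b435'
# What mkconf would plausibly generate now that no spare exists (hypothetical):
mkconf_line='ARRAY /dev/md1 metadata=1.2 name=nas:1 UUID=587b4360:79e7be24:3e7f1cc8:6841b435'

if [ "$conf_line" != "$mkconf_line" ]; then
    echo "mdadm.conf is stale - the ARRAY line needs updating"
fi
```

    The only difference here is the `spares=1` token, which is exactly what triggers the warning mail.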


  • The result:

    Try to execute /usr/share/mdadm/mkconf and paste the result.

  • So the config file differs from reality (it defines 1 spare).


    If it were my system, I would manually replace the ARRAY line in /etc/mdadm/mdadm.conf with the one from the mkconf command, but maybe Mr. RAID geaves knows the proper way to do it from the UI.


  • So the config file differs from reality (it defines 1 spare).


    If it were my system, I would manually replace the ARRAY line in /etc/mdadm/mdadm.conf with the one from the mkconf command, but maybe Mr. RAID geaves knows the proper way to do it from the UI.

    Would that make the spare drive a "normal drive" afterwards, or would the OMV config then recognize the drive as a spare, as defined in mdadm.conf?!?

  • This is something I'm not aware of, but the way to regenerate the conf file in OMV5 is to run omv-salt deploy run mdadm (not sure whether it writes mdadm or mdadm.conf after the deploy), then run update-initramfs -u

    Code
    omv-salt deploy run mdadm

    worked and gave me:


    Code
    update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-5.10.0-10-amd64

    Finished; reboot, or anything else to do?

  • Would that make the spare drive a "normal drive" afterwards, or would the OMV config then recognize the drive as a spare, as defined in mdadm.conf?!?

    Your problem is not the usage of the drives but the expectation that there is an additional spare drive, which you do not have. Removing spares=1 makes the expectation match reality: you do not have spare disks.
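    A quick way to check what the kernel actually believes: on the NAS, mdadm --detail /dev/md1 prints a "Spare Devices :" line. Below is a sketch parsing that field from a sample excerpt (the output line is hypothetical, since mdadm can only be run on the system itself):

```shell
# One line of "mdadm --detail /dev/md1" output (sample; run the real command on the NAS):
detail='     Spare Devices : 0'
spares=$(echo "$detail" | awk -F: '{ gsub(/ /, "", $2); print $2 }')
echo "spares=$spares"   # compare this against any spares= value in mdadm.conf
```

    If this reports 0 while mdadm.conf says spares=1, the config file is the thing that is wrong.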


    So the mail should no longer be sent.


    Try what geaves said first.


  • Your problem is not the usage of the drives but the expectation that there is an additional spare drive, which you do not have. Removing spares=1 makes the expectation match reality: you do not have spare disks.


    So the mail should no longer be sent.


    Try what geaves said first.

    Ok, I have done what geaves said. No mail about a missing spare after reboot.


    Probably I have a misunderstanding, but I'm still a bit confused: I have 4x 2.73 TiB, and the array has a size of 8.19 TiB (3x 2.73 TiB). Shouldn't one of the drives run as a spare device? (RAID 5 -> n-1)
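    For what it's worth, the n-1 in RAID 5 is parity, not a spare: one drive's worth of capacity is consumed by parity blocks distributed across all members, and every drive stays active in the array. The arithmetic matches the sizes above:

```shell
# RAID 5 usable capacity = (n - 1) * drive size; the "missing" drive's worth is parity.
N=4
SIZE_TIB=2.73
USABLE=$(awk -v n="$N" -v s="$SIZE_TIB" 'BEGIN { printf "%.2f", (n - 1) * s }')
RAW=$(awk -v n="$N" -v s="$SIZE_TIB" 'BEGIN { printf "%.2f", n * s }')
echo "usable: ${USABLE} TiB of ${RAW} TiB raw"
```

    So 8.19 TiB usable from 10.92 TiB raw is exactly what a 4-drive RAID 5 with no spare should show.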


    WebGUI shows:

    That looks to me like I could hit the "File Systems - Resize" button to increase the array size to 10.92 TiB, like JBOD?
