Posts by lenainjaune

    It's not exactly you who is replacing corrupted data but ZFS itself when running a scrub

    Oops! A translation slip on my part! In fact I had already understood that :)


    If it is possible ... two last questions:
    - We have read that deduplication is not a good idea, both for performance and because of its excessive memory requirements. At present the NAS hosts about 2 million files of varying sizes. What is your opinion?
    - In my todo list I noted that the ashift parameter can improve performance, but I do not know how to determine the ideal value. What should we do? (See the sketch after this list.)
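
    For the ashift part, this is the kind of check we plan to run on the test PC; just a sketch, where "tank" and the disk names are placeholders, not our real pool:

    Bash
    # report the logical and physical sector size of a candidate disk
    lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda

    # 4096-byte physical sectors -> ashift=12 (2^12 = 4096); 512-byte -> ashift=9
    # ashift is fixed at pool creation and cannot be changed afterwards
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-DISK_ONE /dev/disk/by-id/ata-DISK_TWO

    # verify the value the pool actually uses
    zdb -C tank | grep ashift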

    Hi all :)


    Following your proposals, we tested ZFS on the OMV test PC and configured a mirrored RAID with a daily scheduled task that checks whether the RAID is working normally and, if it is not, notifies us by email which slot holds the failing disk (the ZFS plugin lets us configure disks by path). Furthermore, we successfully tested access to the ZFS mirrored data from a Debian distro!
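
    The scheduled check is essentially this kind of script run from cron; a simplified sketch of what we set up, with a placeholder pool name and mail address, and assuming a mail command (bsd-mailx or mailutils) is installed:

    Bash
    #!/bin/sh
    # daily ZFS health check: mail the full status if the pool is not healthy
    POOL="tank"                   # placeholder pool name
    MAILTO="admin@example.org"    # placeholder address

    # "zpool status -x <pool>" prints "... is healthy" when nothing is wrong
    if ! zpool status -x "$POOL" | grep -q "is healthy"; then
        zpool status -v "$POOL" | mail -s "NAS: pool $POOL needs attention" "$MAILTO"
    fi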



    Sample of the zpool status notification:


    What was not mentioned is that we are a small French "loi 1901" non-profit association with limited resources, that the IT department (if we can call it that :) ) consists of 1.5 people with some computer knowledge, and that the original request was to replace, at a lower cost, a capricious and unstable Synology DS209J NAS over which we have little control.


    We do not have a real DRP because our needs are minimal. In addition, in our installation the NAS data is backed up daily to another server.


    In fact, in our case what is important is:
    - being able to eject a disk in the event of a disaster, knowing that it contains all the important data and allows us to rebuild the NAS from that disk alone;
    - being alerted if there is a disk error (RAID and/or SMART).


    For us, in the notification (mail or the OMV web interface), it is not the serial number of the HDD that matters. What matters is knowing that the disk inserted in, say, slot 2 (identified on the chassis, or with a DYMO label :) ) is in error and must therefore be replaced ASAP to preserve the redundancy of the mirror.


    The idea of driving the rack LEDs is interesting, but the problem is that we cannot really plan and choose the hardware precisely. It is a safe bet that the storage controller of the final PC will not be manageable by ledmon (on the OMV test PC the command ledctl -L returns "ledctl: invalid option - 'L'" with ledmon v0.79, and this -L option does not seem to be documented anywhere on the web).


    If we have understood correctly ...


    Using ZFS mirror RAID ensures data integrity because a checksum (fingerprint) is kept for each set of data; if the current fingerprint of that set no longer matches the stored one, then either the data or the fingerprint is wrong, and the whole is considered corrupted. In that case the mirror makes it possible to read the data/fingerprint pair from the other disk and, if it proves consistent, use it to replace the corrupted data with the healthy copy. This also means that part of the disks is reserved for fingerprints, so we lose a little available space compared to an mdadm RAID 1.


    Is that right? Could you confirm whether we understood correctly?
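
    In the meantime, this is how we exercise that check on the test pool; plain standard commands, with "tank" standing in for the real pool name:

    Bash
    # start a scrub: ZFS re-reads every block and compares it to its checksum
    zpool scrub tank

    # watch progress and see whether checksum errors were found and repaired
    zpool status -v tank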


    PS: we could not read the content of the link '(German) overview with SAS backplanes' since we do not speak German (we are French).

    Hi tkaiser, and thank you for your quick response :)


    Nope :( ! Even if I create the RAID with explicit /dev/disk/by-id names, these are NOT preserved in the OMV web interface and notifications. I am starting to think there is no solution to my need, and to consider changing the display code of the storage page so that it shows something other than the arbitrary device name /dev/sd?, which may change between reboots. Any other suggestion, or did I miss your solution?
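
    For now, when a notification arrives I resolve the mapping by hand with plain inspection commands (nothing OMV-specific; the second example reuses one of my real disk ids):

    Bash
    # list the persistent by-id names and the /dev/sd? device each one points to
    ls -l /dev/disk/by-id/ | grep -v part

    # resolve a single persistent name to its current kernel device
    readlink -f /dev/disk/by-id/ata-TOSHIBA_MQ04ABF100_Y7R8PG66T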

    Mmmm... so I tested it and discovered that my hot-unplug use case (in case of calamity) works too: one disk contains all the data by itself.
    I tried:

    Bash
    mdadm --create --verbose /dev/md0 --level=10 --raid-devices=2 \
        /dev/disk/by-id/ata-TOSHIBA_MQ04ABF100_Y7R8PG66T \
        /dev/disk/by-id/ata-ST1000DM003-9YN162_S1D43B7E \
        --layout=f2 --size=2G

    => this works like a charm (even if I hot-unplug a disk and read it from another machine), so thank you!
    BUT I do not understand the subtleties of why RAID 1 is a bad idea compared to RAID 10 f2. If I understand correctly, the same data are written: no parity data, no extra data, just the same data duplicated but organized differently. Can you explain a bit more?
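
    To see how the far-2 layout actually places the copies on my test array, I only look at what the kernel reports; inspection commands only, nothing is modified:

    Bash
    # show the layout the array really uses (should report something like "far=2")
    mdadm --detail /dev/md0 | grep -i layout

    # kernel view of the array: level, layout and member devices
    cat /proc/mdstat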

    Hello everyone :)


    I am building an OMV NAS with a RAID 1 on 2 SATA disks mounted in internal racks so that they can be inserted / ejected hot (hotplug / hotswap), like this: https://i2.cdscdn.com/pdt2/6/5…e-interne-pour-disque.jpg


    I would like to find a solution so that, in case of failure, we know which disk to replace. It is not so much for me as for the non-computer people.


    The trouble is that in the web interface the failure notification indicates that /dev/sdA is operational (so by deduction /dev/sdB is down), but it does not say whether that concerns rack 1 or rack 2.


    I have browsed a lot of posts on both OMV and mdadm without success (udev symlinks, /etc/mdadm/mdadm.conf configuration, etc.). Every time, the only device name that comes back is /dev/sd?.
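
    For what it is worth, this is how I currently correlate a /dev/sd? name with a physical SATA port (and therefore with a rack) by hand; plain inspection commands that do not change what OMV displays:

    Bash
    # the by-path names encode the physical controller port, which stays the same
    # when disks are swapped, so they identify the rack rather than the disk
    ls -l /dev/disk/by-path/

    # show the persistent properties udev knows for the disk currently seen as sda
    udevadm info --query=property --name=/dev/sda | grep -E 'ID_PATH|ID_SERIAL'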


    Is there a solution that works?


    Regards,
    lnj