Posts by someuser08

    I agree with BernH; as I said in post 2, BTRFS is not recommended for RAID 5, as it is not stable yet. And it seems there is still some time before it will be; progress is very slow. In my opinion EXT4 is the most stable option. But I think you keep losing sight of the fact that RAID is not a backup; you should read up on this, as you probably don't need it. What you really need is a good backup, not a RAID. I have not yet seen you say anything on this topic.

    I consider RAID a high-availability tool. The things I can't afford to lose are already synced to cloud storage, and the rest I can recover from other places, though that would take time and be inconvenient. That's why I'm paying extra money for RAID storage, but I would like to spend minimal time maintaining it.

    I read up on mdadm: it can grow a RAID 5 one disk at a time. Moreover, it can keep a spare disk that is used automatically when a disk fails, without manual intervention, which is great. So I might go this route. What filesystem do you recommend on top? Can I use BTRFS on top of an mdadm RAID 5? Via the UI as well?
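For reference, a rough sketch of how that could look with mdadm; the device names (/dev/sd[b-f]) and array name are placeholders, so adjust them to your own system:

```shell
# Create a 3-disk RAID 5 with one hot spare (placeholder device names):
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Later, grow the array by one disk:
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=4

# Watch the reshape/rebuild progress:
cat /proc/mdstat
```

A filesystem on top (EXT4, or BTRFS used as a single-device filesystem) then sits on /dev/md0 and is resized separately after the grow.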

    I had a look at mergerfs and it looks interesting, but I need some fault tolerance. As I'm planning to go with the cheapest SSDs I can find, I almost expect some of them to fail (not on the same day, though), so parity is important to have.


    I'm going to read up on SnapRAID now...

    With SSD prices dropping to a level never seen before, I thought it would be time to move my current HDD setup (6 TB RAID 1) to low-cost SSDs. I'm planning to get the cheapest 1 TB SATA SSDs I can find on Amazon (with next-day delivery, for the future in case of failures) to cover my current usage of about 4 TB and expand later (I'm going to get a 10-port SATA PCIe card). I want to use RAID 5 so as not to waste much space (and rebuild time with SSDs is not an issue). My workload will be 5% writes, 95% reads...


    What's the best filesystem to use for this, and will I be able to build it all from scratch via the OMV6 GUI? Thank you.

    Yes, I did create the RAID from the CLI on OMV5.
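For anyone landing here later, a manually created BTRFS RAID 1 of the kind described might be made roughly like this; the device names are placeholders, and the label matches the nasraid label that shows up in the fstab further down:

```shell
# Mirror both data (-d) and metadata (-m) across two disks:
mkfs.btrfs -L nasraid -d raid1 -m raid1 /dev/sdb /dev/sdc
```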


    My fstab:




    Out of curiosity, I removed the sample commented-out mntent section from the XML and rebooted, and now I can see it picks up the filesystem OK:


    1. Tried that - no difference

    2. Yes, but on the create screen there are no devices selectable.

    3. Here it is (the first mntent entry, if I look into the XML file, is actually commented out):


    omv-showkey mntent


    <mntent>
      <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
      <fsname>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|xxxx-xxxx|/dev/xxx</fsname>
      <dir>/xxx/yyy/zzz</dir>
      <type>none|ext2|ext3|ext4|xfs|jfs|iso9660|udf|...</type>
      <opts></opts>
      <freq>0</freq>
      <passno>0|1|2</passno>
      <hidden>0|1</hidden>
    </mntent>
    <mntent>
      <uuid>f1690940-3010-47d4-a661-4ebb1e1acb49</uuid>
      <fsname>/dev/disk/by-id/ata-WDC_WD60EJRX-89MP9Y1_WD-WX31D49NH3H2</fsname>
      <dir>/srv/dev-disk-by-id-ata-WDC_WD60EJRX-89MP9Y1_WD-WX31D49NH3H2</dir>
      <type>btrfs</type>
      <opts>defaults,nofail</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
      <comment></comment>
      <usagewarnthreshold>85</usagewarnthreshold>
    </mntent>
    <mntent>
      <uuid>46216387-9aaf-4080-9351-da90a13bf66e</uuid>
      <fsname>/srv/dev-disk-by-id-ata-WDC_WD60EJRX-89MP9Y1_WD-WX31D49NH3H2/media/</fsname>
      <dir>/export/media</dir>
      <type>none</type>
      <opts>bind,nofail</opts>
      <freq>0</freq>
      <passno>0</passno>
      <hidden>0</hidden>
      <usagewarnthreshold>0</usagewarnthreshold>
      <comment></comment>
    </mntent>

    Hi. Ever since I upgraded from OMV5 to OMV6, the UI page for file systems has not been working correctly for me. It always shows that it can't find my BTRFS RAID 1 filesystem (which I created manually a long time ago). Everything underneath works OK (file shares, SMB, NFS, etc.) based on that filesystem; it's just that the UI doesn't pick it up properly. Is there a way to fix it? If I try to edit the entry, it shows a black screen with "Software Failure...". Thanks.

    Weird: the settings for 5, 10 and 20 minutes work OK (i.e. the HDD spins down and stays down for hours), but 30, 60 and 120 minutes do not (no spin-down). I have iosnoop running and can confirm there is no disk activity during the test period. Is this some sort of a bug?
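One way to cross-check what the drive itself is doing (assuming the disk is /dev/sda; both are read-only queries):

```shell
# Report whether the drive is currently spun down ("active/idle" vs "standby"):
hdparm -C /dev/sda

# Report the drive's APM level, which can interfere with idle spin-down timers:
hdparm -B /dev/sda
```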

    After unmounting the BTRFS array, I can't seem to get it back properly into fstab (which I didn't make a copy of beforehand); for some reason the openmediavault section looks really bizarre:


    # >>> [openmediavault]
    /dev/disk/by-id/ata-WDC_WD60EJRX-89MP9Y1_WD-WX31D49NH3H2 /srv/dev-disk-by-id-ata-WDC_WD60EJRX-89MP9Y1_WD-WX31D49NH3H2 btrfs defaults,nofail 0 2
    /dev/disk/by-label/nasraid /srv/dev-disk-by-label-nasraid btrfs defaults,nofail 0 2
    /srv/dev-disk-by-id-ata-WDC_WD60EJRX-89MP9Y1_WD-WX31D49NH3H2/media/ /export/media none bind,nofail 0 0
    # <<< [openmediavault]
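A possible way to reconcile this on OMV6 is to check which mount entries the OMV config database actually holds, remove any stale ones via the UI or database, and then let OMV regenerate the section; this is only a sketch, so back up /etc/openmediavault/config.xml first:

```shell
# List the mount entries stored in the OMV config database:
omv-confdbadm read conf.system.filesystem.mountpoint

# Regenerate the [openmediavault] section of /etc/fstab from the database:
omv-salt deploy run fstab
```

The duplicated by-id and by-label lines suggest the same filesystem is registered twice in the database under different device paths.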


    And the stats in the dashboard section are all over the place, showing the OS SSD's stats. What's going on?

    And FYI, why I chose BTRFS in the first place: I wanted a RAID 1 with 2 disks that could be grown to a maximum of 3 (with usable capacity still 50%). Are there any other options out there that can do the same?
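For the record, growing such a BTRFS RAID 1 from two to three disks is a two-step operation, roughly as follows; the new device name and the mount point are placeholders:

```shell
# Add the new disk to the mounted filesystem:
btrfs device add /dev/sdd /srv/dev-disk-by-label-nasraid

# Rebalance so existing RAID 1 chunks are redistributed across all three disks:
btrfs balance start /srv/dev-disk-by-label-nasraid
```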

    great, thanks


    iotop shows the same fluctuations while I'm running dd: from 250 MB/s down to kilobytes. So it is the array.
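For anyone reproducing this, a minimal sequential-write probe along these lines works; the target below is a temp directory as a placeholder, so point it at the array's mount point for a real measurement. conv=fdatasync forces the data to disk so the page cache doesn't mask the device's true rate:

```shell
# Write a 64 MiB test file and report the sustained rate:
target=$(mktemp -d)/ddtest
dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync
rm -f "$target"
```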


    Unfortunately I have spent 2 days copying the data from the old NAS (which was much slower, 30-50 MB/s, which is why I didn't notice this problem initially), so I'm not keen to destroy the volume just yet without knowing what I'm going to replace it with (or knowing all the options that need testing). I'm assuming it's BTRFS and its RAID that is the culprit, so now the question is what I should replace it with...