Posts by doscott

    I once had trouble setting up a gmail smtp configuration on a computer in the US from an account originally set up in Canada. It was because I was trying to use smtp outside of the region. I don't remember what I did to resolve it, but it took a fair bit of googling.


    While this doesn't help you, the 64.233.167.109 address resolves to Ouddorp, South Holland, Netherlands, and the other resolves to Lake Monticello, VA.


    Rather than use the smtp url, you could try entering the ip address instead, avoiding dns. I doubt that google changes it.

    If you lose a disk then you will have to replace the files on your own. Depending on how you do your backup, this may be done relatively quickly (e.g. rsync), or you may have to restore from a complete backup.


    I don't know anything about Union File System. To do this from a pure btrfs setup you would ssh in and type the following command:


    Note: This will remove all existing data from drives /dev/sdb through /dev/sdf

    Note: "BTRFS1" after -L is the label. You can make it whatever you want.

    Code
    mkfs.btrfs -L BTRFS1 -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    After a reboot, you will have a file system:

    Code
    /srv/dev-disk-by-label-BTRFS1

    which you can then use like any other OMV file system.


    There is more useful information starting at post #12 of

    Any roadblocks I should be aware of prior to trying to partition the drive(s) & set up btrfs on separate partition(s) of individual drives?

    Just to muddy the water a bit: if you feel comfortable working from the command line via ssh then you could consider using btrfs. After it is set up, you can use the OMV interface to create shares, etc.


    With btrfs you would get the ability to:

    - create a group of drives, of same or various sizes, similar to JBOD

    - add drives to the group

    - remove drives from the group

    - convert the group to raid1, raid5 or raid6 on the fly

    - convert back to "JBOD", ie. remove raid, on the fly


    If you don't configure raid, then losing a disk without warning means losing the data on that disk. If you get a warning, there are a couple of methods to remove or replace that disk from the group and automatically redistribute the data.
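
    Roughly, each of those operations is a single command. This is only a sketch against my mount point (/srv/dev-disk-by-label-BTRFS1) with an example device name, so check the man pages before running anything:

    Code
    # add a drive to the group (/dev/sdg is an example; its old contents are lost, add -f if it has an old filesystem signature)
    btrfs device add /dev/sdg /srv/dev-disk-by-label-BTRFS1
    # remove a drive, redistributing its data to the remaining drives
    btrfs device remove /dev/sdg /srv/dev-disk-by-label-BTRFS1
    # convert the group to raid1 on the fly
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/dev-disk-by-label-BTRFS1
    # convert back to "JBOD" (single data, mirrored metadata)
    btrfs balance start -dconvert=single -mconvert=raid1 /srv/dev-disk-by-label-BTRFS1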


    https://btrfs.wiki.kernel.org/…rfs_with_Multiple_Devices

    I think I see what is going on now. You should schedule the tasks so that:

    monthly starts before weekly

    weekly starts before daily

    daily starts before hourly


    You shouldn't need a lot of time between the monthly, weekly, daily and hourly tasks as they are mostly just moving folders, with the longest operation being the deleting of a folder in the monthly task.


    Maybe do hourly at minute 45

    daily at minute 35 hour 0

    weekly at minute 25 hour 0

    monthly at minute 5 hour 0
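
    As plain cron entries those times would look something like this (I am assuming rsnapshot is called directly, and the day-of-week and day-of-month values for weekly and monthly are just examples):

    Code
    # /etc/cron.d/rsnapshot -- monthly runs first, then weekly, daily, hourly
    45 * * * *   root   /usr/bin/rsnapshot hourly
    35 0 * * *   root   /usr/bin/rsnapshot daily
    25 0 * * 1   root   /usr/bin/rsnapshot weekly
    5  0 1 * *   root   /usr/bin/rsnapshot monthly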

    I don't use an hourly step, but assuming you have configured for something similar to (note: these must be in this order):

    Code
    retain hourly 24
    retain daily 7
    retain weekly 4
    retain monthly 3

    and that sufficient time has passed that all snapshots have been taken, then:

    - on the hour: hourly.23 is deleted, and all other hourly snapshots are moved up (22->23, etc), then a new snapshot hourly.0 is created

    - on the day: daily.6 is deleted, and all other daily snapshots are moved up (5->6, etc), then snapshot hourly.23 is moved to daily.0. Note: hourly.23 is now missing and will be filled in again on the next hourly rotation.


    The weekly and monthly are similar. The "missing" daily snapshot should be the result of the weekly snapshot being taken, not the hourly one.


    This is a log output of my weekly snapshot:


    That is because mdadm is not 'aware' of the btrfs file system. Do a google search on how to expand a btrfs filesystem.

    I know that one of the file system options when using mdadm is btrfs. This is what

    Code
    /dev/md0 8790402048 4701954984 4087948824 54% /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5

    indicates to me. From the command line, btrfs is simple to expand/change when mdadm is not used. I am not sure whether that still applies when OMV puts btrfs on top of mdadm.
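
    If it does apply, then once mdadm has finished growing /dev/md0 you would tell btrfs to use the new space with something like this (the mount point is taken from the df output above; I have not tried this myself on an mdadm-backed array):

    Code
    # grow the btrfs filesystem to fill the (now larger) md device
    btrfs filesystem resize max /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5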


    If this is the case, I would personally start over and not use mdadm at all, but I have 3 backups of all of my data so it is not as much of a risk.

    btrfs has been marked as stable in the linux kernel since 2013, so it can be considered "relatively" new. I have used it for a few years now in my Tumbleweed desktop without issue. I have used it with OMV on my Thecus N5550 NAS for several months.


    With OMV the disadvantages are:

    - you have to assemble the array from the command line

    - you have to manage changes like converting from JBOD to RAID1, RAID5 or RAID6 (any direction) from the command line

    - you have to install the disk maintenance utilities (scrub and balance) from the command line


    None of the above are difficult to do.
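
    For reference, running a scrub or a balance by hand is a one-liner (using my mount point; adjust the label to yours -- the btrfsmaintenance scripts mentioned further down automate this):

    Code
    # verify all data and metadata against checksums, repairing from a good copy where one exists
    btrfs scrub start /srv/dev-disk-by-label-BTRFS1
    # repack chunks that are less than half full to reclaim unallocated space
    btrfs balance start -dusage=50 /srv/dev-disk-by-label-BTRFS1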


    The advantages are:

    - other than setting up, no other software needs to be added

    - even with JBOD automatic recovery from a single bit flip is possible

    - extreme flexibility in on-line migrating between all raid versions

    - extremely fast raid setup

    - raid does not require identically sized HDDs

    - the OMV folder/share management is transparent


    The only caveat, which applies to other file systems as well, is that a simultaneous HD failure and power failure can result in a failed system when raid5 or raid6 is used. Use of a UPS prevents that.


    I would suggest reading through the btrfs kernel wiki and watching some of the videos (the Facebook one is interesting).

    These are two good articles:


    This one shows why SMR has a bad reputation:

    https://create.pro/why-the-dat…our-hdd-could-be-at-risk/


    This one indicates that SMR is probably OK for your application:

    https://www.ionos.com/digitalg…d-magnetic-recording-smr/


    I haven't owned an SMR drive and have avoided buying one so far. But for the right price, in a non-raid setup, I would be open to using one.


    In terms of "infrequent use", my perception is that is not measured in minutes or hours but more by non-sequential writes and the impact on performance, not reliability.


    Following is a paper on the technology, which I have skimmed through but have not read in detail:

    http://www-users.cselabs.umn.e…MR&IMR/SMR-Evaluation.pdf

    I use btrfs. btrfs snapshots will do what you want, but it will take a bit of effort to set up the required scripting (a bit of googling will turn up examples). I would suggest using rsnapshot on the client machine, or, if you have several client machines, something like restic to gain deduplication. It may be possible to use rsnapshot on OMV to pull the backup, but I have no experience with that.
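
    As a starting point, the scripting can be as small as a daily cron job that takes a dated read-only snapshot and prunes the old ones, something like this (the paths and the retention of 14 are examples, and it assumes the share was created as a btrfs subvolume):

    Code
    #!/bin/sh
    # snapshot the share and keep only the newest 14 snapshots
    SRC=/srv/dev-disk-by-label-BTRFS1/data
    DST=/srv/dev-disk-by-label-BTRFS1/snapshots
    btrfs subvolume snapshot -r "$SRC" "$DST/data-$(date +%Y%m%d)"
    ls -d "$DST"/data-* | head -n -14 | while read -r snap; do
        btrfs subvolume delete "$snap"
    done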


    Personally I use restic backing up to a Qnap, and rsnapshot backing up to OMV.

    Interesting, why did you go with Raid 1 vs 5? Is there a performance benefit or is it simply better at hitting fewer drives during write?

    Actually I started with 4 x 2TB in raid5, and then lost a drive. I opted to add a 6TB drive, and since raid1 and raid5 in this configuration both provide 6TB of storage, I went with raid1. I wasn't worried about performance, but I like the simplicity of raid1. After the next drive failure I will get another 6TB drive and then choose either raid1 (8TB capacity) or raid5 (10TB capacity).


    My NAS is a relatively old Thecus N5550 that failed a firmware flash a few months ago. I couldn't get it to POST so I replaced it with a QNAP. It turned out that it would POST, but the HDMI port would not display anything; the VGA port did. I put OMV on it, and overall I like it much better than the QNAP.

    A couple of other things with btrfs that I find useful: I run two daily cron jobs as root:


    Code
    /usr/bin/btrfs fi show


    which gives a mailout of:


    Label: 'BTRFS1' uuid: 3c116019-c3d4-46f4-856c-cd624761c77e
        Total devices 4 FS bytes used 3.39TiB
        devid 2 size 1.82TiB used 1.14TiB path /dev/sdc1
        devid 3 size 1.82TiB used 1.14TiB path /dev/sdd1
        devid 4 size 1.82TiB used 1.14TiB path /dev/sdf1
        devid 5 size 5.46TiB used 3.42TiB path /dev/sde


    and


    Code
    /usr/bin/btrfs device stats /srv/dev-disk-by-label-BTRFS1


    which gives a mailout of:


    [/dev/sdc1].write_io_errs 0
    [/dev/sdc1].read_io_errs 0
    [/dev/sdc1].flush_io_errs 0
    [/dev/sdc1].corruption_errs 0
    [/dev/sdc1].generation_errs 0
    [/dev/sdd1].write_io_errs 0
    [/dev/sdd1].read_io_errs 0
    [/dev/sdd1].flush_io_errs 0
    [/dev/sdd1].corruption_errs 0
    [/dev/sdd1].generation_errs 0
    [/dev/sdf1].write_io_errs 0
    [/dev/sdf1].read_io_errs 0
    [/dev/sdf1].flush_io_errs 0
    [/dev/sdf1].corruption_errs 0
    [/dev/sdf1].generation_errs 0
    [/dev/sde].write_io_errs 0
    [/dev/sde].read_io_errs 0
    [/dev/sde].flush_io_errs 0
    [/dev/sde].corruption_errs 0
    [/dev/sde].generation_errs 0


    My setup consists of a raid1 array of 4 drives, 3 x 2TB and 1 x 6TB.
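
    For anyone who prefers setting these up as plain cron entries rather than OMV scheduled jobs, they would look something like this (the times are arbitrary, and assuming mail delivery is configured, cron mails the output to root):

    Code
    # /etc/cron.d/btrfs-report -- daily health mailouts
    0 6 * * *   root   /usr/bin/btrfs fi show
    5 6 * * *   root   /usr/bin/btrfs device stats /srv/dev-disk-by-label-BTRFS1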

    One thing to keep in mind about btrfs RAID5: you can have an unrecoverable failure if two things happen at exactly the same time, a power failure and a disk failure. Probably nothing to worry about, especially if you use a UPS.


    Useful tools for maintenance can be found at:

    https://github.com/kdave/btrfsmaintenance


    From a root login I did a

    git clone https://github.com/kdave/btrfsmaintenance.git


    The following, run from inside the cloned directory, will install it:

    cd btrfsmaintenance
    ./dist-install.sh

    Then edit

    /etc/default/btrfsmaintenance

    and change whatever setting you want. I stuck with the defaults but set the balance and scrub mountpoints to "auto".
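
    Those two settings are just variables in that file, along these lines (double-check the exact variable names against the comments in your own copy):

    Code
    # run periodic balance and scrub on every mounted btrfs filesystem
    BTRFS_BALANCE_MOUNTPOINTS="auto"
    BTRFS_SCRUB_MOUNTPOINTS="auto"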


    Then run:

    btrfsmaintenance-refresh-cron.sh

    to use cron to run the tasks (you can use systemd timers instead, but cron is simpler).


    This is a good btrfs cheat sheet:

    https://blog.programster.org/btrfs-cheatsheet


    One of the neatest things with btrfs is that if you ever run short of space on that raid5 setup, you can convert it on the fly to any other raid level, including back to no raid at all.


    A heads-up on something that took me a while to figure out after a drive failure: in order to mount a btrfs raid in a degraded state from the command line, you need to remove it from fstab or it will remount itself in a read-only state.
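
    In other words, comment the filesystem out of /etc/fstab and mount it by hand, roughly like this (device and mount point are examples):

    Code
    # mount a btrfs raid with a missing member so the failed drive can be removed or replaced
    mount -o degraded /dev/sdc1 /srv/dev-disk-by-label-BTRFS1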

    What I was primarily curious about is whether anyone has attempted something like this and whether the OMV GUI (web page) works well with a similar config.

    As long as you include a label the btrfs drive will show up in the filesystems window (may require a reboot before showing up) and in any pull downs for creating shares. It will not show up in the raid management window.


    btrfs can be used on a drive without partitions, or in partitions on a drive, or both at the same time. I haven't used this configuration with multiple partitions on a single drive, but I have had OMV configured with btrfs spanning 3 drives without partitions and 1 drive with a single partition. This was a result of a typo when adding the 4th drive to a 3 drive array but there were no issues.