Posts by diego

    Hello,


    I already know how ZFS works, as I come from FreeNAS, and I think it doesn't meet my needs. Can you please explain to me why ZFS would be a good solution for the infrastructure I am looking for?


    Kind regards

    No parity = no bitrot protection. Period.
    If you just want bitrot detection, and don't want to spend a disk on parity, then ZFS is IMHO the only way to go. Plus, you have used it before, so you should already have some knowledge of how to use it.
    ZFS will detect bitrot, but it won't be able to correct it as it won't have parity data.
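
    Just to show where that detection surfaces: a scrub re-reads every block and verifies its checksum, and zpool status reports whatever it finds. A minimal sketch, assuming a pool named tank:

    zpool scrub tank
    zpool status -v tank    # -v lists files affected by unrecoverable errors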



    If you plan to do offsite backup (you don't specify details on this), I'd go for another zfs pool, and do zfs send | zfs recv (or better still, use sanoid/syncoid)
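
    In case it helps, a minimal send/receive sketch (pool, dataset, snapshot and host names are just placeholders):

    zfs snapshot tank/data@offsite-1
    zfs send tank/data@offsite-1 | ssh backuphost zfs recv -F backuppool/data
    zfs send -i tank/data@offsite-1 tank/data@offsite-2 | ssh backuphost zfs recv backuppool/data

    The last line sends only the changes between the two snapshots; sanoid/syncoid basically automate that incremental dance for you.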

    Hi Diego. Thanks for following up. I've almost given up on OMV... :( I think the OMV plugin had a problem with my pools being configured with SLOG and L2ARC on SSD partitions rather than dedicated disks.
    I have since removed the SLOG and L2ARC from the pools' definitions, but I also removed the OMV install - I'm not too sure of the maturity of the ZFS part, and I don't have the time to fiddle with it.


    I'll probably need to go back to an environment where ZFS is native and the use of the storage is not constrained, or change my storage config (which I did). I see you use OMV with ZFS, but on an earlier version - is it a "production" environment (as much as a home media server can be), and are you satisfied with it (absence of issues, stability)?
    Best

    First of all, your disk layout is odd, to say the least. There's no need for L2ARC or SLOG for personal/home use, and it makes no sense to have a mirror made out of two partitions on the same disk (for L2ARC or SLOG). Why a separate swap partition when you can use a zvol for that?


    https://github.com/zfsonlinux/…e-a-zvol-as-a-swap-device


    Why did you leave free space at the end of the disk?


    Just create a zpool using the whole disk, create a zvol for swap and some filesystems as you see fit (I recommend two/three as a minimum: one for root filesystem, maybe one for /boot, and another for /home). Install sanoid or any other policy-based ZFS snapshotting tool, and replicate the snapshots to a secondary computer for backup purposes.
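
    For the swap part, a rough sketch along the lines of the FAQ linked above (the pool name rpool and the 4G size are just examples):

    zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle -o primarycache=metadata -o sync=always rpool/swap
    mkswap -f /dev/zvol/rpool/swap
    swapon /dev/zvol/rpool/swap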


    I've had no issues at all with ZFS in my environment. It's so rock solid that I've migrated all my family PCs to ZFS. Snapshots of every filesystem are taken every hour and periodically sent to the main OMV server. The main server then replicates those snapshots daily to a backup server. I keep the last 48 hourly snapshots, plus the last 15 dailies, the last 12 monthlies and the last 3 yearlies. At any time I can get back any file I may have accidentally lost or somehow damaged (think crypto malware, for instance) from any local or remote snapshot; and even if a PC hard disk dies, it's easy to replace the disk, boot a ZFS-enabled live USB and transfer back the latest snapshot stored on the main server.


    OMV4 ZFS plugin, as @ryecoaaron stated, is totally stable.

    A VPS provider has better things to spend time and money on than sniffing traffic (or checking hard disk contents) from a cheap VPS, as long as you are not sending spam or otherwise breaking the law. They earn money renting you a VPS. Google sniffs your email to earn money because they offer the email service for free.


    SMTP over TLS and an encrypted hard disk would take care of your fears.


    Software packages: postfix, dovecot, spamassassin, clamav, spfquery, opendkim, amavisd-new, pyzor, razor, just to name the main ones.


    Assuming you have two fixed public IP addresses, I'd probably go for an active-active cluster. It's just a matter of setting two MX records (one for each computer) and having two A records with the same name (e.g. imap.mydomain.com) pointing to your IPs. You'd need to keep mailboxes synced between the SBCs, as emails could get delivered to either of them. There are many options to choose from: glusterfs, lizardfs, xtreemfs, DFS over Samba, or maybe even an rsync cron job would do. Surely many of them are not available for ARM.
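
    For illustration only (domain and addresses are made up), the DNS side would look something like:

    mydomain.com.       IN  MX  10  mx1.mydomain.com.
    mydomain.com.       IN  MX  10  mx2.mydomain.com.
    imap.mydomain.com.  IN  A       203.0.113.10
    imap.mydomain.com.  IN  A       203.0.113.20

    With equal MX preferences, senders pick either server; with two A records on the same name, IMAP clients get both addresses round-robin.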


    If both machines are going to be behind a single IP using NAT, and/or your IP address(es) is/are dynamic, then rent a VPS.

    I have already written some data into Storage. If I use the main folder names as datasets, I have to set another folder for the SMB share, so I don't feel like doing tank0/Media/media etc. I hope that's not too bad of a problem. Anyway, here is the ls -la; it seems fine to me:



    OK, plan your datasets the way you see fit. I usually create many small datasets instead of just a few big ones: they are easier to back up, manage and apply different quotas to. You can rename (and relocate) datasets using zfs rename.
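
    For example (the dataset names here are purely hypothetical):

    zfs rename tank0/olddata tank0/Media            # simple rename
    zfs rename tank0/Media tank0/shares/Media       # move under another (existing) dataset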



    Now for permissions: as per the ls output, root is the owner of Media and Storage, with read, write and execute access. The group 'users' has read, write and execute access to both directories, and everybody else can read and execute Media but not Storage.


    This problem is not related to ZFS. You would have faced it using ext4 as well.


    So if you want user1 (and no one else) to have full access to Media, and the same for user2 and Storage, I would make user1 the owner of Media


    chown user1 -R /tank0/dataset0/Media


    Same for user2 and Storage


    chown user2 -R /tank0/dataset0/Storage


    Owners already have full rights over each folder, but you don't want anybody else to read those directories, so let's remove permissions from group and others


    chmod go-rwxs -R /tank0/dataset0/Media
    chmod go-rwxs -R /tank0/dataset0/Storage


    so, if you run ls -al again you'll see


    drwx------ 2 user1 users 2 Jul 16 17:14 Media
    drwx------ 4 user2 users 4 Jul 17 22:30 Storage


    That means user1 is the owner of Media and has read, write and execute access. Media also belongs to the 'users' group, but the group has no access, and neither does anybody else.
    Same thing for user2 and Storage.


    Just as a reminder, the root user has full access to any file or directory, no matter what permissions are set.


    Have a look at chmod and chown man pages.


    That should fix it (from a file-access point of view). Test it, and if it keeps failing, it's time to check the SMB share access permissions. Unfortunately I have no experience with that.

    First, I'd make a dataset tank0/Media and another one called tank0/Storage. Destroy tank0/dataset0 if it's empty, or rename it to something meaningful.
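
    Something along these lines (only destroy dataset0 if it really is empty):

    zfs create tank0/Media
    zfs create tank0/Storage
    zfs destroy tank0/dataset0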



    If I recall correctly (it's been almost 20 years since I last had to deal with Samba shares), you have to give the user permission to access a share (and you did), but that user also has to be able to read (and/or write) the directory the share points to.


    So run
    ls -al /tank0
    to show owners and permissions of the directories you are trying to share.

    You were doing sequential reads; ZFS spreads them over your four disks, and the data was written sequentially too, so you got a best-case scenario. You couldn't hear the disks because WDs are quite silent and the heads were not waving back and forth, just jumping from one track to the next.


    ZFS uses (by default) up to half of your RAM for caching (the so-called ARC). It's clever enough to realize that there's no gain in caching a big sequential read; it combines recently-used and frequently-used lists (LRU+LFU-style) to achieve that.
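
    If you ever want to check or cap the ARC on ZFS on Linux, a quick sketch (run as root; the 4 GiB figure is just an example):

    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max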


    Anyway, you could use a partition on your SSD for a level 2 ARC (L2ARC for short), but I don't think you need it for now. There's always time to make things more complicated.
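
    If you ever do add one, it's a one-liner (the device name is a placeholder):

    zpool add tank cache /dev/disk/by-id/ata-YOUR_SSD-part2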


    More RAM is mandatory if you ever enable ZFS deduplication, or if you end up using your NAS to provide more services (think Plex, email server, VPN server, VirtualBox...).


    To wake up the NAS, you'll need to use Wake-on-LAN: the client sends a special packet to your NAS (addressed to its MAC address) and the NAS turns on. The NAS can be configured (using ethtool) to wake on different kinds of packets (magic packets, password-protected ones, or even plain broadcast activity). Check the ethtool man page for more info.
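
    A quick sketch (interface name and MAC address are placeholders; the client tool may be called wakeonlan or etherwake depending on your distro):

    ethtool -s enp3s0 wol g                # on the NAS: wake on magic packet
    wakeonlan 00:11:22:33:44:55            # on the client: send the magic packet to the NAS's MAC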

    Even if they emulate 512-byte sectors, keep ashift=12. Most of the time ZFS detects the native sector size and will use ashift=12 by default, but if it doesn't, performance will suffer.
    And yes, it's ready instantly. ZFS knows which sectors are in use and which are not, so there's no need to sync everything in advance.
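
    If you want to double-check what you got, zdb shows the ashift actually in use (assuming the pool is named tank):

    zdb -C tank | grep ashift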

    You convinced me to install ZFS, @diego
    I've just spent hours securely wiping all my 16 TB, installed omv-extras and ZFS, and now I can't see how to do a RAID 10 in ZFS; the only options are basic/mirrored/zraid1/2/3


    I thought I'd make two mirrored pools and then stripe those two pools to get what I want, but that's not possible. How do I do it, please?


    Edit: I think I figured it out: mirror under normal RAID management first, and then create a basic RAID of the two mirrored pools in the ZFS plugin!?


    Forget RAID. Also forget LVM, NFS, SMB and iSCSI. Forget ext4 and any other traditional filesystem. All of that is done (and managed directly) by ZFS.


    Think of ZFS as RAID + LVM + filesystems combined.
    There are two commands to do everything: 'zpool' for the 'RAID+LVM' management, and 'zfs' for the filesystem management (including snapshots, cloning...). Please check the manual pages for both.



    ZFS uses one or more vdevs to create a pool and store data. A vdev can be a single disk, a mirror (made from two or more disks), a raidz (kind of RAID 5, single redundancy), a raidz2 (kind of RAID 6, dual redundancy) or even a raidz3 (triple redundancy). Every vdev except the single disk provides redundancy to survive at least one broken disk (just like any RAID). You can add vdevs to an existing pool (and, in the newest ZFS version 0.8.0, you can remove them too under some circumstances).


    If your pool is made of more than one vdev, the data is split between them (kind of RAID 0).
    To create a kind-of RAID 10, you need to create a pool made of two vdevs, each of them a mirror.


    so


    zpool create -o ashift=12 <name of the pool> mirror <device 1> <device2> mirror <device3> <device4>


    i.e.


    zpool create -o ashift=12 tank mirror disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128500 disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128600 mirror disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128700 disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128800


    (-o ashift=12 is just for proper alignment)


    Best practices recommend assigning full disks instead of partitions for ZFS use, and also avoiding /dev/sdX names for disks, as those names can change if you swap SATA cables around. It's much better to use /dev/disk/by-id/*. You should omit the leading /dev/ from the full path (a convention inherited from Solaris days).



    You can check the status with 'zpool list' and 'zpool status'.
    At this point you should have your pool mounted at /tank. Wait a moment before using it: we need to adjust some default properties. These will be inherited by any dataset (= filesystem) you create later on. Just type


    zfs set compression=on xattr=sa atime=off tank
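
    You can then verify the values took effect with:

    zfs get compression,xattr,atime tank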


    Now you are ready to create datasets


    zfs create tank/dataset1


    It will be mounted at /tank/dataset1. Of course you can create nested datasets: think of a dataset containing personal files, then a child dataset for Alice's personal files, another for Bob's, and so on. Every dataset has properties (like compression, recordsize... check the manual pages) and can either inherit them from its parent dataset or define its own values.
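
    A hypothetical layout along those lines (the names and the recordsize value are just examples):

    zfs create tank/personal
    zfs create tank/personal/alice
    zfs create -o recordsize=1M tank/personal/bob

    Anything not set explicitly on tank/personal/bob (compression, atime...) is inherited from tank/personal, which in turn inherits from tank.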


    It's not recommended to store anything but datasets in the root of your pool.


    Now you can copy files to dataset1.


    For automatic snapshots, there's a package on GitHub called sanoid. Just check it out; it's very simple to install and configure.
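
    To give you an idea, a sanoid.conf implementing a retention policy like the one I described would look roughly like this (the dataset name is a placeholder; check the project's README for the exact syntax):

    [tank/dataset1]
            use_template = backup

    [template_backup]
            hourly = 48
            daily = 15
            monthly = 12
            yearly = 3
            autosnap = yes
            autoprune = yes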