Need advice planning a new NAS

  • Hi guys,


    I'm currently using a QNAP TS-212 with 2x1 TB in RAID1, mainly for file storage.


    I got 4x4 TB Seagate NAS HDDs cheap, so I started building a new DIY NAS:


    - ASRock J3455 Mini-ITX board
    - 16 GB DDR3L RAM
    - 128 GB SSD
    - 4x4 TB HDDs


    I always wanted my old QNAP to run JDownloader or a VPN, but it was too slow for that. It's also nearly full now, which makes it even slower. Time to upgrade.


    I was planning to build a RAID10 with OMV. I've heard about RAID5 problems with newer, larger drives, so I passed on RAID5. I've also heard RAID6 is better when you plan to use more than 4 drives, but I don't; 8 TB across 4 drives is already overkill for me. RAID10 speed is plenty for a gigabit network and should be enough even for 2.5 or 5 Gbps, although it will take a long time before I upgrade my 1 Gbps network. I also considered SnapRAID, but somehow it doesn't feel like the real deal compared to RAID10.


    So what do I want from you? Well, I don't really have a backup strategy for this build. I know RAID is not a backup, but I think RAID redundancy is still better than single disks. Since I now have 4 large disks, maybe there is a better strategy than RAID10, e.g. 2 disks in RAID1, an automatic backup to the 3rd drive once a day, and the 4th drive as a spare? But somehow that feels like what RAID10 already does, minus the performance boost, which isn't relevant on a 1 Gbps network anyway, although I might upgrade to multi-gigabit in the future...


    I also wonder about LVM: should I use it even though I don't plan to resize my NAS? I mean, it would just be another layer that could fail in the future, right? Are there any caching benefits from LVM in combination with some other technology I'm not considering right now?


    best regards

  • Meanwhile I've read some more about filesystems and decided I'd go for btrfs instead of ext4 or LVM, mainly because of data checksumming/integrity. Convince me otherwise, or tyvm.

  • The question is: why? And isn't this somewhat of a problem without ECC RAM?


    The answer is:


    Because ZFS is more stable, more mature, and more flexible than btrfs, hands down. No data-destroying problems like btrfs RAID5/RAID6. No RAID5 write hole. With ECC RAM, its end-to-end checksumming rules out silent data corruption. Without ECC it's not perfect, but it's still better than any other filesystem.


    ZFS is copy-on-write and transactional, and it supports data compression, deduplication and encryption natively. You don't need LVM, iSCSI or NFS: it's all built in. ZFS lets you take snapshots and/or clones of your files and send them over the network to a backup site much more efficiently than rsync. It even lets you create volumes (zvols) which can hold anything (even NTFS filesystems) and can likewise be snapshotted, cloned...
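
    To give an idea of what that looks like in practice (just a sketch; "tank/dataset1", "backuppool" and "backuphost" are placeholder names):

    # take a snapshot, then send it to another machine running ZFS
    zfs snapshot tank/dataset1@monday
    zfs send tank/dataset1@monday | ssh backuphost zfs receive backuppool/dataset1
    # later, send only what changed since the last snapshot (incremental)
    zfs snapshot tank/dataset1@tuesday
    zfs send -i tank/dataset1@monday tank/dataset1@tuesday | ssh backuphost zfs receive backuppool/dataset1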


    There are even tools (sanoid/syncoid) to schedule snapshots (e.g. take one hourly), keep a set of them (e.g. the last 10 hourlies, plus one daily for the last 30 days, plus one monthly for the last 3 years) and sync them to another computer for off-site backup.
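
    Replicating a dataset to a second box with syncoid then boils down to a one-liner like this (a sketch; the host and dataset names are placeholders):

    # keep backuppool/dataset1 on "backupbox" in sync with the local dataset,
    # transferring only the snapshots the other side doesn't have yet
    syncoid tank/dataset1 root@backupbox:backuppool/dataset1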


    ZFS checksums everything (not just metadata, like other filesystems do). If your pool has redundancy and some data is damaged on one disk, it repairs it on the fly. Google "RAID write hole": that simply can't happen with ZFS.
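
    You can also ask ZFS to verify every checksum in the pool and repair anything that doesn't match (a sketch; "tank" stands for your pool name, introduced further down):

    zpool scrub tank      # walk the whole pool and verify/repair checksummed data
    zpool status tank     # shows scrub progress and any errors found or repaired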


    And many more...


    Just go to zfsonlinux.org, read the docs, etc. to get a better explanation.




    Cons:


    Due to license incompatibilities it's not merged into the Linux kernel, but you just need to install the openmediavault-zfs plugin and let the magic happen.
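
    (If you prefer the command line over the OMV web UI, and assuming the OMV-Extras repositories are already enabled, the plugin is just a Debian package:)

    apt-get install openmediavault-zfs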

    OMV 4.1 on Debian 10 @ HP Microserver gen8 [2x 256GB SSD ZFS mirror on root + 3x 8TB ZFS raidz1 pool]


  • You convinced me to install ZFS @diego
    I just spent hours securely wiping all 16 TB of disks, installed omv-extras and the ZFS plugin, and now I can't see how to do a RAID10 in ZFS; the only options are basic/mirror/raidz1/2/3.


    I thought I could make two mirrored pools and then stripe those two pools to get what I want, but that's not possible. How, please?


    e: I think I figured it out: create mirrors under the normal RAID management first, then create a basic RAID of the two mirrors in the ZFS plugin!?

  • Forget RAID. Forget LVM, NFS, SMB and iSCSI too. Forget ext4 and any other traditional filesystem. All of that is done (and managed directly) by ZFS.


    Think of ZFS as RAID + LVM + filesystem in one.
    Two commands do everything: 'zpool' for the 'RAID+LVM' management, and 'zfs' for the filesystem management (including snapshots, cloning...). Please check the manual pages for both.



    ZFS uses one or more vdevs to create a pool and store data. A vdev can be: a single disk, a mirror (made from two or more disks), a raidz (kind of RAID5, single redundancy), a raidz2 (kind of RAID6, dual redundancy) or even a raidz3 (triple redundancy). Every vdev except the single disk provides redundancy to recover from at least one broken disk (just like any RAID). You can add vdevs to an existing pool (and with the newest ZFS version, 0.8.0, you can remove them too under some circumstances).
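
    Adding a vdev later looks like this (a sketch; the pool name and the by-id device names are placeholders):

    # add a third mirror vdev to an existing pool; ZFS stripes new writes across all vdevs
    zpool add tank mirror disk/by-id/ata-DISK5 disk/by-id/ata-DISK6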


    If your pool is made of more than one vdev, the data is split between them (kind of RAID0).
    To create a kind-of RAID10, you create a pool made of two vdevs, each of which is a mirror.


    so


    zpool create -o ashift=12 <name of the pool> mirror <device 1> <device2> mirror <device3> <device4>


    i.e.


    zpool create -o ashift=12 tank mirror disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128500 disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128600 mirror disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128700 disk/by-id/ata-WDC_WD3200BPVT-60JJ5T0_WD-WX11CC128800


    (-o ashift=12 just forces proper alignment for 4K-sector drives)


    Best practice is to give ZFS whole disks instead of partitions, and to avoid the /dev/sdX names, as those can change if you swap SATA cables around. It's much better to use /dev/disk/by-id/*. Note that you should omit the leading /dev/ from the full path (inherited from the Solaris days).
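
    A quick way to see which by-id name belongs to which disk before you create the pool:

    ls -l /dev/disk/by-id/ | grep -v part    # the symlinks point back to the sdX devices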



    You can check the status with 'zpool list' and 'zpool status'.
    At this point you should have your pool mounted at /tank. Wait a moment before using it: we need to adjust some default properties. These will be inherited by any dataset (= filesystem) you create later on. Just type


    zfs set compression=on tank
    zfs set xattr=sa tank
    zfs set atime=off tank


    Now you are ready to create datasets


    zfs create tank/dataset1


    It will be mounted at /tank/dataset1. Of course you can create nested datasets: think of a dataset containing personal files, with a child dataset for Alice's personal files, another for Bob's, and so on. Every dataset has properties (like compression, recordsize... check the manual pages) and can either inherit them from its parent dataset or define its own values.
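
    As a sketch of that idea (the dataset names and the quota value are just examples):

    zfs create tank/personal
    zfs create tank/personal/alice
    zfs create tank/personal/bob
    zfs set quota=500G tank/personal/alice          # per-dataset property
    zfs get -r compression,quota tank/personal      # children inherit unless they override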


    It's not recommended to store anything but datasets in the root of your pool.


    Now you can copy files to dataset1.


    For automatic snapshots, there's a package on GitHub called sanoid. Just check it out; it's very simple to install and configure.
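
    A minimal /etc/sanoid/sanoid.conf could look something like this (the dataset name and the retention numbers are just examples):

    [tank/dataset1]
            use_template = production
            recursive = yes

    [template_production]
            hourly = 24
            daily = 30
            monthly = 12
            autosnap = yes
            autoprune = yes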


  • tyvm @diego for all the info. Since my last post I've been syncing 2 normal RAID mirrors with the RAID management GUI, and it is taking ages. I wonder if I have to delete them now or if I can reuse them with what you have written so far. Can I somehow use the synced mirrors, or was it a waste of time?


    e: nvm, I will abort the sync and make a ZFS mirror.


    Question about the ashift parameter and my HDDs: I use 4x Seagate ST4000VN000-1H4168 drives. They seem to emulate 512 B sectors. I assume I should still use the ashift=12 parameter?!


    Code
    sudo hdparm -I /dev/sdd | grep "Model\|Sector"
            Model Number:       ST4000VN000-1H4168
            Logical  Sector size:                   512 bytes
            Physical Sector size:                  4096 bytes
            Logical Sector-0 offset:                  0 bytes

    e2: did it:


    So it is ready instantly, no need to wait for a sync or anything?!

  • Even if they emulate 512-byte sectors, keep ashift=12. Most of the time ZFS detects the native sector size and uses ashift=12 by default, but if it doesn't, performance will suffer.
    And yes, it's ready instantly. ZFS knows which blocks are in use and which are not, so there's no need to sync everything in advance.
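
    If you want to double-check what ashift your vdevs actually got, one common way is to grep the cached pool configuration with zdb (a quick sanity check; "tank" is the pool name from the earlier example):

    zdb -C tank | grep ashift    # should report ashift: 12 for each vdev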


  • Thanks again @diego, so far everything works like a charm. I transferred a couple of movies as a test and throughput is very good; the 1 GbE link is fully saturated the whole time, so the new NAS has absolutely no problem here. I was a little surprised to see that of my 4 GB of RAM only 2 GB were in use while transferring data. After the copy I streamed a movie, but still only half of the RAM was in use. It made me wonder if something gets cached on the SSD, but df said no. Somehow I couldn't hear the HDDs, though I guess the data was coming from the disks. Nevertheless, I now wonder if upgrading to 8 or 16 GB RAM is actually worth it.


    I'm also trying to figure out how to wake a sleeping NAS automatically when a client wants to access it. I have a FritzBox that can send WOL by pressing a button, but unfortunately that doesn't work automatically.

  • You were doing sequential reads, ZFS spreads them over your four disks, and the data was written sequentially too, so you got a best-case scenario. You couldn't hear the disks because those drives are quite silent and the heads were not seeking back and forth, just jumping from one track to the next.


    ZFS uses (by default) up to half of your RAM for a cache called the ARC. It's clever enough to realize that there's no gain in caching a big sequential read. It uses a mix of recency- and frequency-based (MRU+MFU) caching to achieve this.
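
    If you're curious, the live ARC numbers are exposed under /proc on ZFS on Linux (size = current ARC size, c_max = configured ceiling, both in bytes):

    grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats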


    Anyway, you could use a partition on your SSD as a level-2 ARC (L2ARC for short), but I don't think you need it for now. There's always time to make things more complicated later.
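
    For reference, adding an L2ARC later is a one-liner (the partition name below is a placeholder; a cache device can also be removed again with zpool remove):

    zpool add tank cache disk/by-id/ata-YOUR-SSD-part2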


    More RAM becomes mandatory if you ever enable ZFS deduplication, or if you end up using your NAS to provide more services (think Plex, an email server, a VPN server, VirtualBox...).


    To wake up the NAS you'll need Wake-on-LAN: a client sends a special packet to your NAS (addressed to its MAC address) and it turns on. The NAS can be configured (using ethtool) to wake on different kinds of packets (magic packets, password-protected packets, or any broadcast activity). Check the ethtool man page for more info.
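
    In practice that boils down to something like this (eth0 and the MAC address are placeholders for your NAS's interface and MAC):

    # on the NAS: check WOL support and enable wake on magic packet
    ethtool eth0 | grep Wake-on
    ethtool -s eth0 wol g
    # on a client: send the magic packet (wakeonlan is one of several small tools for this)
    wakeonlan 00:11:22:33:44:55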


  • I have a problem with an SMB share. I have one dataset, tank0/dataset0, with two folders in it called Storage and Media.
    I want to access Media with user1 and Storage with user2.


    Now, if I set read/write for each user on their own directory and no access to the other folder, I can only use one SMB network share in Win10. The other one rejects me when I try to access it. I tried rebooting, but I still can only access one SMB share.


    Not sure what to do here...

  • First, I'd make a dataset tank0/Media and another one called tank0/Storage. Destroy tank0/dataset0 if it's empty, or rename it to something meaningful.



    If I recall correctly (it's been almost 20 years since I last had to deal with Samba shares), you have to give a user permission to access a share (and you did), but that user also has to be able to read (and/or write) the directory the share points to.


    So run
    ls -al /tank0
    to show the owners and permissions of the directories you are trying to share.


  • I have already written some data into Storage. If I use the main folder names as datasets, I have to set another folder for the SMB share, and I don't feel like doing tank0/Media/media etc. I hope that's not too bad of a problem. Anyway, here is the ls -la; it looks fine to me:


  • OK, plan your datasets however works best for you. I usually create many small datasets instead of just a few big ones: they are easier to back up, manage and apply different quotas to. You can rename (and relocate) datasets using zfs rename.
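
    For example, if you later decide to give dataset0 a more meaningful name (a sketch; "data" is just a placeholder):

    zfs rename tank0/dataset0 tank0/data    # the default mountpoint moves along with it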



    Now for permissions: as per the ls output, root is the owner of Media and Storage and has read, write and execute access. The group users has read, write and execute access to both directories, and anybody else can read and execute Media but not Storage.


    This problem is not related to ZFS. You would have faced it using ext4 as well.


    So if you want user1 to have full access to Media and no one else, and the same for user2 and Storage, I would make user1 the owner of Media


    chown -R user1 /tank0/dataset0/Media


    Same for user2 and Storage


    chown -R user2 /tank0/dataset0/Storage


    Owners already have full rights over their folders, but you don't want anybody else to read those directories, so let's remove the permissions for group and others


    chmod -R go-rwxs /tank0/dataset0/Media
    chmod -R go-rwxs /tank0/dataset0/Storage


    so, if you run ls -al again you'll see


    drwx------ 2 user1 users 2 Jul 16 17:14 Media
    drwx------ 4 user2 users 4 Jul 17 22:30 Storage


    That means user1 is the owner of Media and has read, write and execute access. Media also belongs to the 'users' group, but with no access for the group, and no access for any other user either.
    The same goes for user2 and Storage.


    Just as a reminder: the root user has full access to any file or directory, no matter what permissions are set.


    Have a look at the chmod and chown man pages.


    That should fix it (from the file-access point of view). Test it, and if it keeps failing, it's time to check the SMB share access permissions. Unfortunately I have no experience with that.

