Is ZFS supported in Kernel 4.13-4.15?

    • OMV 4.x
    • Resolved
    • Upgrade 3.x -> 4.x


    • donh wrote:

      So does that imply the upgrade script has a bug?
      Maybe. I think it should change jessie to stretch in the backports file, but I haven't tested enough to know whether having the stretch-backports repo enabled would cause issues when upgrading. I don't think it would, but I need to test. If the backports repo is disabled before upgrading, there is definitely no bug.
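      For reference, that jessie-to-stretch change could be a one-line sed. This is a sketch on a scratch file; the real file name under /etc/apt/sources.list.d/ is an assumption, so check which file on your system actually carries the backports entry before running sed -i on it as root.

```shell
# Sketch: rewrite a jessie-backports entry to stretch-backports.
# Demonstrated on a scratch copy; on a real system, LIST would be
# something like /etc/apt/sources.list.d/openmediavault-kernel-backports.list
# (that file name is an assumption -- verify it on your install).
LIST=$(mktemp)
echo "deb http://httpredir.debian.org/debian jessie-backports main" > "$LIST"
sed -i 's/jessie-backports/stretch-backports/g' "$LIST"
cat "$LIST"
```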
      omv 4.1.9 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.9
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • @ryecoaaron I can't speak for others, but I've been reading what you've written very carefully.

      ryecoaaron wrote:

      So, if you want an OMV 4.x install with working zfs, set OMV_APT_USE_KERNEL_BACKPORTS="no" in /etc/default/openmediavault before upgrading to 4.x. Then you will keep the 4.9 kernel and never install the 4.13 kernel. When you install OMV extras and the zfs plugin, the 4.9 linux headers will be installed, so compiling the zfs module works.

      I can only get the method you described to work with these additional steps, using a fresh & updated OMV3 install:


      1. Prior to upgrading from OMV 3.0.91, edit /etc/default/openmediavault and execute both "omv-mkconf apt" and "apt-get update" before using "omv-release-upgrade" to OMV4. This ensures OMV4 boots using a stable stretch kernel with no kernel pinning.

      2. Immediately after OMV4 first boots, edit /etc/apt/sources.list to include contrib and non-free for the Debian sources. This ensures the zfs plugin can find its dependent packages in stable stretch.

      3. Execute apt-get update && apt-get upgrade. Any remaining backports packages from OMV3 should now be replaced, except for previously installed jessie-backports kernels. (Optionally, remove these now or later.)

      4. After installing OMV4 extras, remove the stretch-backports pinning from /etc/apt/preferences.d/omv-extras-org.

      5. Install the zfs plugin via the web UI.

      6. Check status after plugin install:

      Source Code

      root@omv-vm:/# uname -a
      Linux omv-vm 4.9.0-4-amd64 #1 SMP Debian 4.9.51-1 (2017-09-28) x86_64 GNU/Linux
      root@omv-vm:/# dpkg -l | grep linux-
      ii firmware-linux-free 3.4 all Binary firmware for various drivers in the Linux kernel
      ii firmware-linux-nonfree 20161130-3 all Binary firmware for various drivers in the Linux kernel (meta-package)
      ii linux-base 4.5 all Linux image base package
      ii linux-compiler-gcc-6-x86 4.9.51-1 amd64 Compiler for Linux on x86 (meta-package)
      ii linux-headers-4.9.0-4-amd64 4.9.51-1 amd64 Header files for Linux 4.9.0-4-amd64
      ii linux-headers-4.9.0-4-common 4.9.51-1 all Common header files for Linux 4.9.0-4
      ii linux-headers-amd64 4.9+80+deb9u2 amd64 Header files for Linux amd64 configuration (meta-package)
      rc linux-image-4.9.0-0.bpo.3-amd64 4.9.30-2+deb9u5~bpo8+1 amd64 Linux 4.9 for 64-bit PCs
      ii linux-image-4.9.0-0.bpo.4-amd64 4.9.51-1~bpo8+1 amd64 Linux 4.9 for 64-bit PCs
      ii linux-image-4.9.0-4-amd64 4.9.51-1 amd64 Linux 4.9 for 64-bit PCs
      ii linux-image-amd64 4.9+80+deb9u2 amd64 Linux for 64-bit PCs (meta-package)
      ii linux-kbuild-4.9 4.9.51-1 amd64 Kbuild infrastructure for Linux 4.9
      ii linux-libc-dev:amd64 4.9.51-1 amd64 Linux support headers for userspace development
      root@omv-vm:/# apt-cache policy linux-image-amd64
      linux-image-amd64:
        Installed: 4.9+80+deb9u2
        Candidate: 4.9+80+deb9u2
        Version table:
       *** 4.9+80+deb9u2 500
              500 http://ftp.uk.debian.org/debian stretch/main amd64 Packages
              100 /var/lib/dpkg/status
      root@omv-vm:/# dpkg -l | grep openmed
      ii openmediavault 4.0.9-1 all openmediavault - The open network attached storage solution
      ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
      ii openmediavault-omvextrasorg 4.1.0 all OMV-Extras.org Package Repositories for OpenMediaVault
      ii openmediavault-zfs 4.0 amd64 OpenMediaVault plugin for ZFS
      root@omv-vm:/# dkms status
      spl, 0.6.5.9, 4.9.0-4-amd64, x86_64: installed
      zfs, 0.6.5.9, 4.9.0-4-amd64, x86_64: installed
      root@omv-vm:/# dpkg -l | grep -Ew "spl|zfs"
      ii openmediavault-zfs 4.0 amd64 OpenMediaVault plugin for ZFS
      ii spl-dkms 0.6.5.9-1 all Solaris Porting Layer kernel modules for Linux
      ii zfs-dkms 0.6.5.9-5 all OpenZFS filesystem kernel modules for Linux
      ii zfs-zed 0.6.5.9-5 amd64 OpenZFS Event Daemon

      A quick test of zfs plugin:

      Source Code

      root@omv-vm:/# zpool status
        pool: TestPool
       state: ONLINE
        scan: none requested
      config:

              NAME                                       STATE     READ WRITE CKSUM
              TestPool                                   ONLINE       0     0     0
                mirror-0                                 ONLINE       0     0     0
                  ata-VBOX_HARDDISK_VBf1d93530-3d58384b  ONLINE       0     0     0
                  ata-VBOX_HARDDISK_VB2e0b3cb6-14e9dc49  ONLINE       0     0     0

      errors: No known data errors
      root@omv-vm:/#
      root@omv-vm:/# zpool history
      History for 'TestPool':
      2017-11-09.15:51:13 zpool create -o ashift=12 TestPool mirror ata-VBOX_HARDDISK_VBf1d93530-3d58384b ata-VBOX_HARDDISK_VB2e0b3cb6-14e9dc49
      2017-11-09.15:51:22 zfs set compression=on TestPool
      2017-11-09.15:51:27 zfs set atime=off TestPool
      2017-11-09.15:52:03 zfs create -p TestPool/archive
      2017-11-09.15:52:09 zfs set mountpoint=/mnt TestPool/archive
      2017-11-09.15:54:05 zfs snapshot TestPool/archive@today
      2017-11-09.15:56:19 zfs rollback TestPool/archive@today

      I had already tested upgrading a 3.0.91 base where the OMV3 extras and zfs plugins were installed. The same steps result in an OMV4 system with working zfs and no stretch-backports. I don't know whether there are any unwanted side-effects of doing this.
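      Step 2 above (adding contrib and non-free) lends itself to scripting. A sketch on a scratch file, since the exact contents of a given /etc/apt/sources.list vary; on a real system you would run the sed against the actual file as root:

```shell
# Sketch: append "contrib non-free" to stretch deb lines that only list "main".
SRC=$(mktemp)
echo "deb http://ftp.uk.debian.org/debian stretch main" > "$SRC"
echo "deb http://security.debian.org/ stretch/updates main" >> "$SRC"
sed -i 's/^\(deb .* stretch.* main\)$/\1 contrib non-free/' "$SRC"
cat "$SRC"
```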
    • Krisbee wrote:

      Prior to upgrading from OMV 3.0.91, edit /etc/default/openmediavault and execute both "omv-mkconf apt" and "apt-get update" before using "omv-release-upgrade" to OMV4. This ensures OMV4 boots using a stable stretch kernel with no kernel pinning.
      Yes, I forgot to add omv-mkconf apt (or just deleting the backports list) to my process overview. apt-get update isn't necessary since it is run by omv-release-upgrade.

      Krisbee wrote:

      Immediately after OMV4 first boots, edit /etc/apt/sources.list to include contrib and non-free for the Debian sources. This ensures the zfs plugin can find its dependent packages in stable stretch.
      You could do this before the upgrade as well. I didn't have to do this since I already had it in my sources.list.

      Krisbee wrote:

      After installing OMV4 extras, remove stretch-backports pinning from the file /etc/apt/preferences.d/omv-extras-org.
      If the backports repo isn't enabled, the pins don't hurt anything, and they will be added back any time you do something in omv-extras. I guess I could check the same backports variable, but this won't matter once the zfs packages are updated to compile on 4.13.
    • I have zfs working quite well with kernel 4.13 and spl-dkms / zfs-dkms 0.7.3, installed with sid temporarily enabled.
      Stupidly, I also upgraded the pool and could not go back to my backed-up 3.0.91 due to incompatibility.

      I am on Arrakis 4.0.9-1 and unfortunately still have the issue that was reported in Mantis as solved since 4.0.7 (see Mantis #0001827).

      The pool itself and random datasets can't mount because the mount locations are not empty. This can be fixed manually after each restart.
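      For anyone hitting the same thing: the manual fix amounts to emptying the mountpoint directory so the dataset can mount. A sketch using scratch directories (MNT stands in for the real dataset mountpoint; on a real system you would follow this with zfs mount -a):

```shell
# Sketch: ZFS refuses to mount a dataset onto a non-empty directory.
# Move any stray contents aside, leaving the mountpoint empty.
MNT=$(mktemp -d)      # stand-in for the dataset mountpoint, e.g. /TestPool
ASIDE=$(mktemp -d)    # somewhere to park the stray files
mkdir -p "$MNT/stray" # simulate leftovers created before ZFS mounted
if [ -n "$(ls -A "$MNT")" ]; then
    mv "$MNT"/* "$ASIDE"/
fi
# $MNT is now empty; `zfs mount -a` (not run here) would then succeed.
```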

      Otherwise everything works perfectly fine with zfs and kernel 4.13 on an HP ProLiant Gen8. FTP transfer speed, for example, is better than ever.

      @Skaronator: Do you also still face this issue?
      HP Microserver Gen8 - 16GB RAM - 1x Kingston 30 GB SSD - 4x 3TB WD Red / MDADM Raid 5 - OMV 4.1.x bare metal - Docker running Plex - Synology DS214 with 1x 4TB WD Red for rsync backup
    • belierzz wrote:

      @Skaronator :Do you also still face this issue?
      Kinda, maybe? Well, I created filesystems inside my pool, as you can see here.

      So I have /StorageZFS/Multimedia as the path, and at the ZFS level it is the StorageZFS pool with a Multimedia filesystem.

      My shared folder now points to the `/StorageZFS/Multimedia` FS and the path is `/`. When I reboot, it creates random folders on the root of my OS drive, e.g. `/Multimedia`, but once ZFS mounts it uses the ZFS folders.


      Badly explained, but I'm currently not at home.


      Edit: The main problem now is that StorageZFS won't mount because the folders are not empty. I haven't dug deep into the problem, just noticed it because I got a mail saying something about StorageZFS_fs, but the folders and sub-filesystems work just fine.
      OMV 4 - Ryzen 7 1700 (8 Cores / 16 Threads 65W TDP) - 32 GB DDR4 ECC
      128 GB OS SSD - 256 GB Plex SSD - 32 TB RAIDZ2 (6x8TB HGST NAS)


    • What if I put the zfs 0.7.3 modules in the omv-extras repo? Then contrib wouldn't be needed in the main sources.list and it would compile on 4.13.
    • ryecoaaron wrote:

      What if I put the zfs 0.7.3 modules in the omv-extras repo? Then contrib wouldn't be needed in the main sources.list and it would compile on 4.13.
      As a temp fix, or? I would think die-hard zfs users would welcome the chance to use zfs 0.7.3, being eager to try out all the new features and improvements. Using Debian sid + OMV4 might be OK as a test, but to me it sounds like playing with fire with real data.

      From OMV's point of view, do you want to be saddled with another maintenance task of providing zfs modules that compile and are in sync with backport kernels?
    • Is ZFS supported in Kernel 4.13?

      What about using the Proxmox kernel?

      [IMG:https://uploads.tapatalk-cdn.com/20171109/321c638cbf09b1f562dcd2f02cbadf69.jpg]

      Proxmox 5.1 (Debian stretch) also uses kernel 4.13 and ZFS 0.7.2. This can be read here in German:

      proxmox.com/de/news/pressemitteilungen/proxmox-ve-5-1

      Maybe it’s an alternative... At the moment I don’t use Proxmox or omv4.

      The zfs modules are integrated into kernel 4.4 of Proxmox 4 (Debian jessie). That should also be the case for Proxmox 5.1.

      Greetings Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------


    • Proxmox is a good example of maintaining your own repo. The zfs stuff and kernels come from their repo, not stock Debian. I think it is probably a lot of work, and the goal of omv is to not have to maintain the base repo, just the added features.

      I am just going to wait for it to work itself out.

      Thanks
      If you make it idiot proof, somebody will build a better idiot.
    • Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
      I haven't done it yet, but I probably will this Saturday or Sunday.

      Do you suggest updating to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?
      Intel G4400 - Asrock H170M Pro4S - 8GB ram - 2x4TB WD RED in RAID1 - ZFS Mirror 2x6TB Seagate Ironwolf
      OMV 4.1.4 - Kernel 4.14 backport 3 - omvextrasorg 4.1.2
    • Blabla wrote:

      Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
      I haven't done it yet, but I probably will this Saturday or Sunday.

      Do you suggest updating to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?
      Assuming there is no compelling reason to use the openmediavault-zfs plugin from OMV4 extras, it's not essential to upgrade to OMV4 to use zfs. You could just fully update your current OMV3 install to version 3.0.91 and then install the zfs plugin from OMV3 extras. You would essentially be using the same 4.9 kernel and zfs package versions as if you had upgraded to OMV4 without backports.
    • Krisbee wrote:

      As a temp fix
      I would just leave them in the repo. If debian released a new version, it would automatically use that version.

      Krisbee wrote:

      Using debian sid + OMV4 might be OK as a test, but to me it sounds like playing with fire on real data.
      It wouldn't be Debian sid. Yes, the source code would be the same as sid's, but it would be compiled on stretch. Considering OMV 4.x isn't even beta, you shouldn't be using it in production anyway.

      Krisbee wrote:

      From OMV's point of view, do you want to be saddled with another maintenance task of providing zfs modules that complie and are in sync with backport kernels?
      I'm not providing modules, just the packages, which don't change very often. Your system would compile its own modules.

      hoppel118 wrote:

      What about using the Proxmox kernel?
      I will look at adding that back into the code.

      donh wrote:

      Proxmox is a good example of maintaining your own repo. The zfs stuff and kernels comes from their repo and not stock debian. I think it is probably a lot of work and the goal of omv is to not have to maintain the base repo, just the added features.
      If you add the Proxmox repos to your system, it will automatically update when Proxmox releases updates. No work for us. It probably is a lot of work for them, but they make money from their work.
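      For anyone who wants to try that now, the repo addition would be a small apt fragment along these lines (this is the pve-no-subscription line Proxmox publishes for stretch-based PVE 5.x; verify it, and the required GPG key, against Proxmox's current documentation before use):

```
# /etc/apt/sources.list.d/pve.list  (sketch -- verify against Proxmox docs)
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
```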
    • @ryecoaaron

      Understood, it would just be deb packages for zfs 0.7.3 and not modules; that was just my careless use of language. My comment about sid was aimed at those using it now, not you.

      @Blabla

      Blabla wrote:

      Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
      I haven't done it yet, but I probably will this Saturday or Sunday.

      Do you suggest updating to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?

      I hesitate to ask, but what are (or have been) your mdadm RAID troubles? Fixing those may mean you don't need zfs, which, if you believe what you read, should not be used on systems like yours that lack ECC memory.
    • Krisbee wrote:

      @ryecoaaron

      Understood, it would just be deb packages for zfs 0.7.3 and not modules; that was just my careless use of language. My comment about sid was aimed at those using it now, not you.

      @Blabla

      Blabla wrote:

      Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
      I haven't done it yet, but I probably will this Saturday or Sunday.

      Do you suggest updating to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?

      I hesitate to ask, but what are (or have been) your mdadm RAID troubles? Fixing those may mean you don't need zfs, which, if you believe what you read, should not be used on systems like yours that lack ECC memory.
      I created a topic in the RAID section :) My first RAID1 (WD Red 4TB) is working perfectly, while the new one (Seagate IronWolf 6TB) disappears every time I reboot :/

      Sent from my Sony XZ1 using Tapatalk
    • After reading your RAID thread, @Blabla, I can see why you're considering a ZFS mirror for your new 6TB drives. Putting the debate about Btrfs to one side, @ryecoaaron has a point when he asks whether you really need RAID at all.

      I don't think it can be said too often that RAID is not backup, whether it's mdadm-based or zfs, and I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for zfs. Even with the latest zfs 0.7.3, scrub and resilver times for a 6TB mirror will run into multiple hours. How are you going to back up your data?

      You have to ask yourself how you use your data: is it, for example, written once and read many times, or constant reads and writes that require real-time duplication across some form of RAID to ensure uptime in case of disk failure? Would timed or on-demand rsync between separate drives suffice, as @ryecoaaron suggested? Do you even want the two 6TB drives in the same box?