How to Rebuild my old ZFS Pool

    • How to Rebuild my old ZFS Pool

      I don't know if it is possible to help you.

      One last try. Let’s try to change the mountpoint of your pool:

      1. Export the pool:

      Source Code

      1. zpool export -f Pool1


      2. Set a new mountpoint:

      Source Code

      1. zfs set mountpoint=/Pool1 Pool1


      -> If this doesn't work, you may need to create the directory first:

      Source Code

      1. mkdir /Pool1


      3. Import the pool:

      Source Code

      1. zpool import -d /dev/disk/by-id Pool1


      4. Check the pool status:

      Source Code

      1. zpool status Pool1
      2. zpool list


      Show us the output of all commands.
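
      If you want to double-check the mountpoint after the import, something like this should also work (using the pool name Pool1 from above):

      Source Code

      1. zfs get mountpoint Pool1
      2. zfs list -r Pool1 -o name,mountpoint,used,avail
      3. mount | grep Pool1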

      Maybe you don't need to export/import the pool at all to change the mountpoint. In that case you only need to execute step 2 (and 4). If step 2 doesn't work because there is no pool imported, go through all of the steps.

      Regards Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - tvos | android tv | libreelec | win10 | kodi krypton
      frontend hardware - appletv 4k | nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2
      -------------------------------------------
      backend software - debian | openmediavault | latest backport kernel | zfs raid-z2 | docker | emby | unifi | vdr | tvheadend | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------


    • Your situation does not look good:

      Your pool:
      root@HashServer:~# zpool list
      NAME    SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
      Pool1   10,9T  251M   10,9T  -         1%    0%   1.00x  ONLINE  -

      My pool:
      root@omv-server:~# zpool list
      NAME    SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
      ZFS1    3.62T  1.51T  2.12T  -         16%   41%  1.00x  ONLINE  -
      ________________________________________________________________

      In my pool (ZFS1), I have approximately 1.51TB of data in a 3.62TB pool. That's about 41% of the total capacity of the pool. In your pool (Pool1), the 251MB allocated is probably just empty directories. It appears that you have a 10.9TB pool with nothing in it (CAP = 0% and FREE = 10,9T).
      ________________________________________________________________

      Since @hoppel118 brought me in, I read this thread from the beginning:

      There's a big difference between EXPORT and DELETE. I think the difference, in the meaning of these two words, was lost in translation. If you used delete, it's permanent.

      I hope you have a backup. With ZFS, even when using snapshots, having a reliable backup is critical.
      Good backup takes the "drama" out of computing
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
      2nd Data Backup: OMV 3.0.99, R-PI 2B, 16GB boot, 4TB WD USB MyPassport - direct connect (no hub)


    • How to Rebuild my old ZFS Pool

      Hey @flmaxey

      thanks for coming in.

      My assumption and my hope was/is that CAP is 0% because there is no mounted ZFS filesystem. But I am not sure how "zpool list" calculates its values (with or without mounted filesystems).

      For me it still seems that there is no filesystem mounted, because the root mount point "/mnt" is not empty.

      What do you think about this?
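
      If someone wants to double-check, the dataset-level view could be compared with the pool-level view like this (assuming the pool is currently imported):

      Source Code

      1. zpool list Pool1
      2. zfs list -r Pool1 -o name,used,avail,refer,mountpoint
      3. zfs get -r mounted Pool1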

      Thanks and regards Hoppel


    • How to Rebuild my old ZFS Pool

      @.H. Maybe we can see what happened with the output of the following command:

      Source Code

      1. zpool history


      If the output is too long, use a service like pastebin.com.
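
      If you prefer, you could also write the history to a file first and upload that:

      Source Code

      1. zpool history Pool1 > /tmp/zpool_history.txt   # path is just an example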

      Regards Hoppel
    • hoppel118 wrote:

      Hey @flmaxey

      thanks for coming in.

      My assumption and my hope was/is that CAP is 0% because there is no mounted ZFS filesystem. But I am not sure how "zpool list" calculates its values (with or without mounted filesystems).

      For me it still seems that there is no filesystem mounted, because the root mount point "/mnt" is not empty.

      What do you think about this?

      Thanks and regards Hoppel
      I think your history command, zpool history, will reveal what happened. We'll see what @.H. finds. If filesystems were deleted out of the pool, they're gone. Unfortunately, there is no "undo" command.

      In my pool history, since I have automated snapshots, I see some snapshot configuration and a lot of created and destroyed snapshots. Delete is not in the list, anywhere.
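
      If you want to scan a long history quickly for anything destructive, a plain grep over the output should do it:

      Source Code

      1. zpool history Pool1 | grep -Ei 'destroy|delete|remove'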
    • my zpool history

      Source Code

      1. root@HashServer:~# zpool history
      2. History for 'Pool1':
      3. 2015-07-01.11:16:34 zpool create -m /mnt Pool1 raidz2 ata-ST2000VN000-1HJ164_W520JHC9 ata-ST2000VN000-1HJ164_W520JHGD ata-ST2000VN000-1HJ164_W520JGXY ata-WDC_WD20EFRX-68AX9N0_WD-WMC301013769 ata-WDC_WD20EFRX-68AX9N0_WD-WMC301090495 ata-WDC_WD20EFRX-68AX9N0_WD-WMC301111420
      4. 2015-07-01.11:16:34 zpool export Pool1
      5. 2015-07-01.11:16:46 zpool import Pool1
      6. 2015-07-06.21:22:37 zpool import -d /dev/disk/by-id -N Pool1
      7. 2015-07-12.11:57:32 zpool import -d /dev/disk/by-id -N Pool1
      8. 2015-07-31.18:28:11 zpool import -d /dev/disk/by-id -N Pool1
      9. 2015-08-17.17:17:51 zpool import -d /dev/disk/by-id -N Pool1
      10. 2015-08-23.17:40:52 zpool export Pool1
      11. 2015-08-23.17:42:01 zpool import -d /dev/disk/by-id -N Pool1
      12. 2015-08-23.18:31:26 zpool export Pool1
      13. 2015-08-23.18:33:30 zpool import -d /dev/disk/by-id -N Pool1
      14. 2016-04-20.14:15:54 zpool import -d /dev/disk/by-id -N Pool1
      15. 2016-06-26.14:30:41 zpool export Pool1
      16. 2016-06-26.14:31:52 zpool import -d /dev/disk/by-id -N Pool1
      17. 2016-06-29.15:00:09 zpool import -d /dev/disk/by-id -N Pool1
      18. 2016-08-14.14:43:42 zpool import -d /dev/disk/by-id -N Pool1
      19. 2016-08-14.20:34:49 zpool import -d /dev/disk/by-id -N Pool1
      20. 2017-01-04.10:59:29 zpool import -d /dev/disk/by-id -N Pool1
      21. 2017-01-23.01:53:47 zpool import -d /dev/disk/by-id -N Pool1
      22. 2017-04-03.10:00:18 zpool export Pool1
      23. 2017-04-03.10:01:36 zpool import -d /dev/disk/by-id -N Pool1
      24. 2017-04-17.14:54:42 zpool export Pool1
      25. 2017-04-17.14:55:53 zpool import -d /dev/disk/by-id -N Pool1
      26. 2017-04-17.19:16:37 zpool export Pool1
      27. 2017-04-17.19:17:46 zpool import -d /dev/disk/by-id -N Pool1
      28. 2017-04-28.07:59:01 zpool import -d /dev/disk/by-id -N Pool1
      29. 2017-04-28.09:29:13 zpool import -d /dev/disk/by-id -N Pool1
      30. 2017-06-16.11:10:25 zpool import -d /dev/disk/by-id -N Pool1
      31. 2017-08-28.12:36:56 zpool import -d /dev/disk/by-id -N Pool1
      32. 2017-09-06.09:45:07 zpool import -d /dev/disk/by-id -N Pool1
      33. 2017-09-06.10:30:39 zpool import -d /dev/disk/by-id -N Pool1
      34. 2018-05-12.03:37:28 zpool import -d /dev/disk/by-id -N Pool1
      35. 2018-05-12.13:48:19 zpool import -d /dev/disk/by-id -N Pool1
      36. 2018-05-20.14:44:18 zpool import -d /dev/disk/by-id -N Pool1
      37. 2018-09-23.00:41:13 zpool import -f -a
      38. 2018-09-23.00:41:18 zfs set omvzfsplugin:uuid=29d86ae3-67f5-485d-af33-21037f879be9 Pool1
      39. 2018-09-23.00:50:02 zpool import -c /etc/zfs/zpool.cache -aN
      40. 2018-09-23.11:14:14 zpool import -c /etc/zfs/zpool.cache -aN
      41. 2018-09-23.12:41:27 zpool import -c /etc/zfs/zpool.cache -aN
      42. 2018-09-23.14:45:26 zpool import -c /etc/zfs/zpool.cache -aN
      43. 2018-09-23.15:04:07 zpool import -c /etc/zfs/zpool.cache -aN
      44. 2018-09-23.16:56:23 zfs set omvzfsplugin:uuid=29d86ae3-67f5-485d-af33-21037f879be9 Pool1
      45. 2018-09-23.16:57:14 zfs set omvzfsplugin:uuid=29d86ae3-67f5-485d-af33-21037f879be9 Pool1
      46. 2018-09-23.17:11:21 zfs set omvzfsplugin:uuid=29d86ae3-67f5-485d-af33-21037f879be9 Pool1
      47. 2018-09-23.17:13:11 zfs set acltype=posixacl Pool1
      48. 2018-09-23.17:13:16 zfs set omvzfsplugin:uuid=29d86ae3-67f5-485d-af33-21037f879be9 Pool1
      49. 2018-09-23.23:01:40 zpool import -c /etc/zfs/zpool.cache -aN
      50. 2018-09-23.23:35:20 zfs set omvzfsplugin:uuid=29d86ae3-67f5-485d-af33-21037f879be9 Pool1
      51. 2018-09-29.11:37:20 zpool import -c /etc/zfs/zpool.cache -aN
      52. 2018-09-29.18:11:33 zfs set omvzfsplugin:uuid=1d6f42fe-ed87-4739-b0c5-ceffe35a68ed Pool1
      53. 2018-09-30.09:59:00 zfs set omvzfsplugin:uuid=1d6f42fe-ed87-4739-b0c5-ceffe35a68ed Pool1
      54. 2018-10-01.08:38:20 zpool import -c /etc/zfs/zpool.cache -aN
      55. 2018-10-03.14:49:22 zpool import -f -a
      56. 2018-10-03.14:49:27 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      57. 2018-10-03.15:06:24 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      58. 2018-10-03.15:07:20 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      59. 2018-10-03.15:07:31 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      60. 2018-10-03.15:12:31 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      61. 2018-10-03.15:12:48 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      62. 2018-10-03.15:43:34 zpool import -c /etc/zfs/zpool.cache -aN
      63. 2018-10-03.15:46:07 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      64. 2018-10-03.16:36:41 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      65. 2018-10-04.08:49:01 zpool import -c /etc/zfs/zpool.cache -aN
      66. 2018-10-04.10:35:08 zpool import -c /etc/zfs/zpool.cache -aN
      67. 2018-10-04.10:37:53 zpool export Pool1
      68. 2018-10-04.10:38:30 zpool import Pool1
      69. 2018-10-04.10:40:50 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      70. 2018-10-04.10:41:51 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      71. 2018-10-04.10:43:05 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      72. 2018-10-04.10:44:45 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      73. 2018-10-04.10:45:00 zfs set omvzfsplugin:uuid=62a2512a-41f4-4ba4-85e9-c4a5fb153f77 Pool1
      74. 2018-10-05.12:21:14 zpool export -f Pool1
      75. 2018-10-05.12:22:01 zpool import -d /dev/disk/by-id Pool1
      76. 2018-10-05.12:25:55 zpool export -f Pool1
      77. 2018-10-05.12:26:35 zpool import -d /dev/disk/by-id Pool1
      78. root@HashServer:~#
      If this is the complete "zpool history", you never created ZFS filesystems on your zpool. I didn't know that it is possible to work on the root of the pool directly.

      I can't see any critical commands in your zpool history. Sorry, I don't know what happened to your data.
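
      For reference (nothing to run on this pool right now), creating child filesystems would normally look roughly like this, and those "zfs create" lines would then show up in "zpool history":

      Source Code

      1. zfs create Pool1/media    # example dataset name
      2. zfs create Pool1/backup   # example dataset name
      3. zfs list -r Pool1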

      @.H. Please do one last thing. Stop the SMB/NFS services and show us the output of the following commands. I want to see if the directory "Hashare" is located on your zpool and mounted at "/mnt", or if it is located directly under "/mnt".


      1. Export the pool:

      Source Code

      1. zpool export -f Pool1

      2. Show the content of your zpool root mount point:

      Source Code

      1. ls -l /mnt

      3. Import the pool again:

      Source Code

      1. zpool import -d /dev/disk/by-id Pool1

      4. Show the content of your zpool root mount point again:

      Source Code

      1. ls -l /mnt

      By the way: I would never use "/mnt" as the zpool root mountpoint. But that is my personal preference.
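
      For example, a dedicated directory at the filesystem root could be used instead, as suggested earlier in this thread (while the pool is imported):

      Source Code

      1. zfs set mountpoint=/Pool1 Pool1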


      Regards Hoppel


    • Source Code

      1. root@HashServer:~# zpool export -f Pool1
      2. root@HashServer:~#



      Source Code

      1. root@HashServer:~# ls -l /mnt
      2. total 0
      3. root@HashServer:~#

      Source Code

      1. root@HashServer:~# zpool import -d /dev/disk/by-id Pool1
      2. root@HashServer:~#

      Source Code

      1. root@HashServer:~# ls -l /mnt
      2. total 1
      3. drwxr-sr-x 2 root users 2 oct. 4 10:35 Hashare
      4. root@HashServer:~#
    • How to Rebuild my old ZFS Pool

      OK, so the directory "Hashare" comes from your pool, and seemingly it's all the data your pool contains.
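
      If you want to see how much data is actually behind that directory, something like this would show it (paths taken from your output above):

      Source Code

      1. du -sh /mnt/Hashare
      2. zfs get used,available Pool1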

      Sadly, I can’t do more for you.

      @flmaxey What do you think about this result?

      Regards Hoppel


    • hoppel118 wrote:


      @flmaxey What do you think about this result?

      The zpool history doesn't show anything as being deleted. (That's obvious.)

      While it's speculation, something must have happened when the OMV2.X installation crashed. Data disks usually survive an operating system crash, but that's not always the case.

      While I'd like to be able to suggest something else, I'd have to agree with you - I think the pool is empty.
      _________________________________________________

      @.H.

      My sincere regrets...

      While this is painful to consider right now, next time I'd suggest a full backup.
      That would include backing up all data AND, separately, backing up the boot drive.
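
      As one example (host and target pool names are placeholders), a data backup with ZFS can be as simple as a snapshot plus a send/receive to a second pool or host:

      Source Code

      1. zfs snapshot Pool1@backup-2018-10-05
      2. zfs send Pool1@backup-2018-10-05 | ssh backupuser@backuphost "zfs receive -F BackupPool/Pool1"   # placeholder host and pool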
    • How to Rebuild my old ZFS Pool

      @flmaxey Thanks for your opinion.

      @.H. I read the thread again, because in the beginning you wrote that you had problems applying ACLs to your data.

      .H. wrote:

      When I try to validate the rules for my pool (7T, full of folders and files), it takes a lot of time and ends up opening a window telling me that there is an internal error.

      I tried to delete the folders, but it was impossible to do that...

      What can I do?

      Thanks


      What exactly did you do to try to delete the folders, and why did you want to delete them?

      .H. wrote:

      Hi,

      I reinstalled everything.
      But when I try to apply the ACL rights, it takes too long (and does not validate),
      because I apply them to my 7 TB pool with many folders.

      Can I do that another way?

      Thanks

      PS: I'm the only user/admin on the server.


      So you saw all your data under OMV 4, but had problems applying the ACLs. What happened to your data?

      Regards Hoppel
      Yes, at the beginning I had problems applying the ACL rights to my 7 TB Pool1 (in the Hashare folders).

      The second time it was the same thing (the ACL rights were impossible to apply to the whole folder).

      I reinstalled a third time, and when I applied the ACL I was happy because it was applied in one second!
      But when I look into the folder, I don't find my files...

      I never deleted the pool or the files. (I just tried to delete the shared folder in OMV in order to try to re-import.)

      But I am following your instructions in order to see my files!
    • How to Rebuild my old ZFS Pool

      .H. wrote:

      I never deleted the pool or the files. (I just tried to delete the shared folder in OMV in order to try to re-import.)


      Is it possible to delete all files when deleting a share? I don’t know.

      So you decided to delete everything and start from the beginning... Did you solve your issue with the referenced shared filesystems?

      Regards Hoppel
    • How to Rebuild my old ZFS Pool


      It is really hard to understand what you mean.

      Do you mean you want to reset OMV to factory settings instead of rebuilding OMV?

      Regards Hoppel
    • How to Rebuild my old ZFS Pool


      I don't know of an option for this. I found this thread, which confirms it:

      Reset OMV like a fresh install?

      Regards Hoppel