How I Rebuilt my old ZFS Pool

  • Yes, you can do this by command line.


Show us the output of the following command:


    Code
    ls -l /path/to/your/pool


The output should look like this, for example:


    Code
    root@omv4:~# ls -l /tank/
    total 5
    drwxr-xr-x 3 1002 1001 3 Oct  6  2016 backups
    drwxrwxr-x 3 1002 1001 3 Apr  2  2016 pictures
    drwxrwx--- 3 1002 1001 3 Aug 26  2016 private


    There you should see the old UID (1002) and the old GID (1001) of your old omv configuration.
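    If you want to check whether those old numeric IDs still map to account names on the fresh install, a quick lookup helps. This is just a sketch: UID 1002, GID 1001 and the user name "fred" are the example values from this post, not anything on your system.

```shell
# Look up the old numeric IDs from the `ls -l` output (1002/1001 are the
# example values above; substitute your own). No output means the ID is
# orphaned, which is why `ls -l` shows raw numbers instead of names.
getent passwd 1002
getent group 1001

# Check which UID/GID your new user actually has ("fred" is hypothetical):
id -u fred
id -g fred
```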


After that you can change the owner (user and group) with the following command:

    Code
    chown -R user:group /path/to/your/pool
    • change "user" to the name of your user (for example: fred)
    • change "group" to the name of your group (for example: family)
    • change "/path/to/your/pool" to the path of your pool (for example: /tank)


So the command should look like this:

    Code
    chown -R fred:family /tank

Now you can check the result with "ls -l" again. The output should look like this, for example:


    Code
    root@omv4:~# ls -l /tank/
    total 5
    drwxr-xr-x 3 fred family 3 Oct  6  2016 backups
    drwxrwxr-x 3 fred family 3 Apr  2  2016 pictures
    drwxrwx--- 3 fred family 3 Aug 26  2016 private


You can do this in a few seconds. I never used the web ui to change file permissions or owners. ;)
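    To double-check that the recursive chown really caught everything, you can search for leftovers. A sketch: the path /tank and the names fred/family are the example values from this post.

```shell
# Print anything under the pool that is still NOT owned by the expected
# user or group; no output means the chown covered every file.
find /tank \( ! -user fred -o ! -group family \) -print
```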


    Regards Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------


  • hi,


    I reinstalled my system another time.
    Everything is OK.


    But this time, when I import my pool, it seems to be empty ?(


    ( when i


    I see 7T available ...???


    I did the same thing as the other time..
    How is it possible? I didn't format anything...


    OMV reimported my pool with the correct folder name that I used before

  • Sorry, it's not possible to help you like this. Why didn't you try what I described?


    I need screenshots of your errors or command line output where I can see your commands and their results.


    Which kernel, which omv and which zfs version are you using at the moment?


  • hi,


    I didn't try because it was impossible to connect :/


    When I entered the password, the login page reloaded.


    So I reinstalled everything cleanly.



    Now I don't have any errors.


    I installed the Linux 4.15.18 kernel
    zfs 4.0.4


    and normally my 7T should be full..


  • Please, show us the command line output of:


    Code
    uname -a


    and:


    Code
    dpkg -l | grep zfs


    and:



    Code
    zfs list


    Thanks and regards Hoppel


  • alright :


  • Is your folder „/mnt“ really empty?


    Code
    ls -l /mnt


  • Ok, the folder „/mnt“ must be empty, if it is your zfs root mountpoint. Do the following:


    1. export your zfs pool:


    Code
    zpool export Pool1


    2. move the folder „Hashare“ to „/tmp“:


    Code
    mv /mnt/Hashare /tmp


    3. import your zfs pool:


    Code
    zpool import Pool1


    Show us all the command line results.
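    Put together, the three steps above are just this short sequence (the pool name "Pool1" and the folder "Hashare" are the values from this thread; run it as root on the affected box):

```shell
# 1. unmount and export the pool
zpool export Pool1
# 2. move the stray folder out of the pool's mountpoint
mv /mnt/Hashare /tmp
# 3. re-import the pool; ZFS remounts its datasets back to /mnt
zpool import Pool1
# verify that the datasets and their used space are visible again
zfs list
```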


    Regards Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------


  • impossible to unmount


    Code
    root@HashServer:~# zpool export Pool1
    umount: /mnt : cible occupée   [French: "target is busy"]
           (Dans certains cas, des renseignements sur les processus utilisant
            le périphérique sont accessibles avec lsof(8) ou fuser(1).)
           [French: "In some cases, information about the processes using
            the device is available with lsof(8) or fuser(1)."]
    cannot unmount '/mnt': umount failed
    root@HashServer:~#


    I just started the PC this morning...
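    For what it's worth, the French message above is umount's standard "target is busy" complaint. As the message itself suggests, fuser(1) or lsof(8) can show which process is holding the mountpoint (a sketch, assuming one of the two tools is installed):

```shell
# Show the processes that keep /mnt busy and block the zpool export:
fuser -vm /mnt
# or, equivalently, with lsof (+f -- restricts it to that filesystem):
lsof +f -- /mnt
# Stop the reported service (often smbd or nfsd on an OMV box), then retry:
# zpool export Pool1
```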

  • Sorry, I don’t understand that language. Do the following:


    1. shutdown your server
    2. unplug all the sata-cables of your zfs disks
    3. start your server
    4. move the folder “Hashare“ (What is this folder for?) to „/tmp“
    5. shutdown your server
    6. plug all the sata-cables of your zfs disks
    7. start your server
    8. do a „zfs list“ at the command line


    Maybe your zfs works as expected now. Your Pool1 should be automounted to the directory „/mnt“.


    As an alternative you could try to configure a new mountpoint for your Pool1. But I never did this.
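    That alternative would look roughly like this. A sketch only: I have not verified it on OMV, and "/srv/pool1" is an arbitrary example path.

```shell
# Give the pool's root dataset a dedicated mountpoint instead of /mnt:
zfs set mountpoint=/srv/pool1 Pool1
# Verify the new setting (prints just the mountpoint value):
zfs get -H -o value mountpoint Pool1
```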


    So, please try the described way.


    Regards Hoppel


  • No, why do you want to delete Pool1?


  • Hm..., I think our linguistic barrier is too high.


    When did you create a new Pool1? Did you try what I described above?


    Maybe someone else can help here.


    Regards Hoppel


  • Before deleting this filesystem, you must delete shares referencing this filesystem. (shared folders?)


    Is this a result of a command at the command line?


So, if you use smb or nfs, deactivate the service(s) via the omv web ui before trying to export the pool.


  • thanks hoppel118



    1/ I tried everything:



    1. shutdown your server


    2. unplug all the sata-cables of your zfs disks


    3. start your server


    4. move the folder “Hashare“ (What is this folder for?) to „/tmp“ or delete it


    5. shutdown your server


    6. plug all the sata-cables of your zfs disks


    7. start your server


    8. do a „zfs list“ at the command line



    but Pool1 still shows the same size free, even with your steps that I followed.


    I created Pool1 in 2015 and imported it yesterday, after installing the zfs plugin.



    2/ I would try to delete it, in order to try importing it again ;(


    I don't understand how I could have lost more than 7T without noticing..


    that's why I followed your instructions and asked a few weird questions :S


    but thanks for all

  • No, don’t delete your pool! Export your pool. What exactly do you do, when you delete your pool?


    Show me the output of:


    Code
    ls -l /mnt


    What did you mount to the folder „/mnt/Hashare“?


    Which filesystems were on your Pool1?


    Stop smb and/or nfs services by the omv webui. Then do the following:


    Show me the output of:


    Code
    zpool export -f Pool1


    Show me the output of:


    Code
    zpool import -d /dev/disk/by-id Pool1


    Show me the output of:


    Code
    zpool list


Maybe I am wrong with my suggestion, but your root mountpoint „/mnt“ must be empty when you import the pool. And of course your pool should be exported before you import it again.
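    That pre-condition can be checked mechanically. A sketch, assuming the pool's root mountpoint is "/mnt" (the value from this thread):

```shell
# Pre-import check: the mountpoint directory must exist and must be empty.
dir=/mnt
[ -d "$dir" ] && echo "exists"
if [ -z "$(ls -A "$dir")" ]; then
    echo "$dir is empty - safe to import"
else
    echo "$dir is NOT empty:"
    ls -A "$dir"
fi
```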


    Regards Hoppel


  • @flmaxey Do you have an idea what happened here?


    Regards Hoppel


  • Code
    root@HashServer:~# ls -l /mnt
    total 1
    drwxr-sr-x 2 root users 2 Oct   4 10:35 Hashare


    Code
    root@HashServer:~# zpool export -f Pool1
    root@HashServer:~#


    Code
    root@HashServer:~# zpool import -d /dev/disk/by-id Pool1
    root@HashServer:~#


    Code
    root@HashServer:~#     zpool list
    NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    Pool1  10,9T   251M  10,9T         -     1%     0%  1.00x  ONLINE  -
    root@HashServer:~#



    On my Pool1 I have every kind of file (images, videos, work: After Effects projects, Illustrator and co. I'm a creative).
    15 years of graphic work..


    :/
