NOT seeing ZFS, I suppose, after apt-get update [SOLVED]

  • This is the problem: the system has gone "corrupt", I suppose, because of the kernel and the ZFS modules after the update.


    If that is correct, the second step will be getting the filesystem mounted again.



    Now I have recovered the ZFS pool, but something from the previous configuration doesn't work... and I can't delete the shared folders or the objects of the zpool.

  • Via SSH I get:


    root@tokyo:~# zfs get all
    NAME                     PROPERTY           VALUE                                 SOURCE
    Tokyopool                type               filesystem                            -
    Tokyopool                creation           Sun Aug 25  0:55 2019                 -
    Tokyopool                used               3.34T                                 -
    Tokyopool                available          3.68T                                 -
    Tokyopool                referenced         139K                                  -
    Tokyopool                compressratio      1.00x                                 -
    Tokyopool                mounted            yes                                   -
    ...
    Tokyopool                guid               3737134360514090584                   -
    ...
    Tokyopool                omvzfsplugin:uuid  6ac263a5-6605-4ba9-b050-a83da04f1f1c  local
    Tokyopool/Tokyo2         type               filesystem                            -
    Tokyopool/Tokyo2         creation           Sun Aug 25  0:56 2019                 -
    Tokyopool/Tokyo2         used               3.34T                                 -
    Tokyopool/Tokyo2         available          3.68T                                 -
    Tokyopool/Tokyo2         referenced         3.34T                                 -
    Tokyopool/Tokyo2         compressratio      1.00x                                 -
    Tokyopool/Tokyo2         mounted            yes                                   -
    Tokyopool/Tokyo2         quota              none                                  default
    Tokyopool/Tokyo2         reservation        none                                  default
    Tokyopool/Tokyo2         recordsize         128K                                  default
    Tokyopool/Tokyo2         mountpoint         /Tokyopool/Tokyo2                     default
    Tokyopool/Tokyo2         sharenfs           off                                   default
    ...
    Tokyopool/Tokyo2         guid               8543986563402028725                   -
    ...
    Tokyopool/Tokyo2         omvzfsplugin:uuid  8baec714-a543-42d9-b231-fb7b2c45679b  local
    Tokyopool/Tokyo2/Tokyo2  type               filesystem                            -
    Tokyopool/Tokyo2/Tokyo2  creation           Wed Apr  1 14:18 2020                 -
    Tokyopool/Tokyo2/Tokyo2  used               128K                                  -
    Tokyopool/Tokyo2/Tokyo2  available          3.68T                                 -
    Tokyopool/Tokyo2/Tokyo2  referenced         128K                                  -
    Tokyopool/Tokyo2/Tokyo2  compressratio      1.00x                                 -
    Tokyopool/Tokyo2/Tokyo2  mounted            yes                                   -
    Tokyopool/Tokyo2/Tokyo2  quota              none                                  default
    Tokyopool/Tokyo2/Tokyo2  reservation        none                                  default
    Tokyopool/Tokyo2/Tokyo2  recordsize         128K                                  default
    Tokyopool/Tokyo2/Tokyo2  mountpoint         /Tokyopool/Tokyo2                     local
    ...
    Tokyopool/Tokyo2/Tokyo2  omvzfsplugin:uuid  8baec714-a543-42d9-b231-fb7b2c45679b  inherited from Tokyopool/Tokyo2
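
    Side note: the omvzfsplugin:uuid entries above are custom user properties that the OMV ZFS plugin stores on each dataset. A single property like that can be read directly with standard zfs syntax (just an illustration, not something specific to this problem):

    # Print only the value of the plugin's uuid property for one dataset:
    zfs get -H -o value omvzfsplugin:uuid Tokyopool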



    And now I get:

    root@tokyo:~# omv-confdbadm read --prettify conf.system.filesystem.mountpoint
    [
        {
            "dir": "/Tokyopool",
            "freq": 0,
            "fsname": "Tokyopool",
            "hidden": true,
            "opts": "rw,relatime,xattr,noacl",
            "passno": 0,
            "type": "zfs",
            "uuid": "6ac263a5-6605-4ba9-b050-a83da04f1f1c"
        },
        {
            "dir": "/export/Tokyo",
            "freq": 0,
            "fsname": "/Tokyopool/Tokyo/Tokyo",
            "hidden": false,
            "opts": "bind,nofail,_netdev",
            "passno": 0,
            "type": "none",
            "uuid": "70b67db7-3f61-427f-a999-6a31a7694ded"
        },
        {
            "dir": "/Tokyopool/Tokyo2",
            "freq": 0,
            "fsname": "Tokyopool/Tokyo2",
            "hidden": true,
            "opts": "rw,relatime,xattr,noacl",
            "passno": 0,
            "type": "zfs",
            "uuid": "9d166b96-f35e-44ad-bbdc-324d016bfb7d"
        }
    ]



    This happened because I was trying to rename the object of the pool... and now the GUI tells me that ZFS has two objects trying to use the same directory:


    The configuration object 'conf.system.filesystem.mountpoint' is not unique. An object with the property 'dir' and value '/Tokyopool/Tokyo2' already exists.


    I want to delete the filesystem Tokyopool/Tokyo2/Tokyo2 from the zpool (it was created by mistake while trying to rename and copy the data),

    rename Tokyopool/Tokyo2 to Tokyopool/Tokyo,

    and delete the shared folder and get the ZFS dataset shared via NFS again (a rough sketch of the ZFS commands is below).
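
    On the pure ZFS side, those steps would look roughly like the commands below. This is only a sketch: it assumes Tokyopool/Tokyo2/Tokyo2 has no snapshots or child datasets, and the OMV database entries (shared folder, NFS export) still have to be cleaned up through the web UI afterwards.

    # Double-check the dataset names first:
    zfs list -r Tokyopool

    # Destroy the dataset that was created by mistake:
    zfs destroy Tokyopool/Tokyo2/Tokyo2

    # Rename the remaining dataset back to the old name:
    zfs rename Tokyopool/Tokyo2 Tokyopool/Tokyo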

  • OK, I'm seeing the data:


    root@tokyo:~# zfs mount Tokyopool/Tokyo
    cannot open 'Tokyopool/Tokyo': dataset does not exist
    root@tokyo:~# zfs mount Tokyopool/Tokyo2
    cannot mount 'Tokyopool/Tokyo2': filesystem already mounted
    root@tokyo:~# df -h
    Filesystem        Size  Used  Avail  Use%  Mounted on
    udev              3.9G     0   3.9G    0%  /dev
    tmpfs             790M  8.9M   781M    2%  /run
    /dev/sda2         102G  4.1G    93G    5%  /
    tmpfs             3.9G     0   3.9G    0%  /dev/shm
    tmpfs             5.0M     0   5.0M    0%  /run/lock
    tmpfs             3.9G     0   3.9G    0%  /sys/fs/cgroup
    tmpfs             3.9G     0   3.9G    0%  /tmp
    /dev/sda1         511M  132K   511M    1%  /boot/efi
    Tokyopool         3.7T  128K   3.7T    1%  /Tokyopool
    Tokyopool/Tokyo2  7.1T  3.4T   3.7T   48%  /Tokyopool/Tokyo2
    root@tokyo:~# cd /Tokyopool/Tokyo2/
    root@tokyo:/Tokyopool/Tokyo2# ls
    Tokyo
    root@tokyo:/Tokyopool/Tokyo2# cd Tokyo/
    root@tokyo:/Tokyopool/Tokyo2/Tokyo# ls
    Series

    Before that, neither Tokyopool nor Tokyopool/Tokyo2 was mounted... why?


    At least I have confirmed that the data is not lost, but now I have a problem with the configuration between ZFS and openmediavault.
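
    If a pool imports but its datasets don't come up mounted, a quick way to check and mount everything is the generic ZFS approach below (a sketch, nothing OMV-specific):

    # Show the mount state of every dataset in the pool:
    zfs list -r -o name,mounted,mountpoint Tokyopool

    # Mount all ZFS filesystems that have a mountpoint set:
    zfs mount -a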

  • And now in openmediavault I see that it is mounted under Storage -> File Systems,


    but I can't unmount it via the web UI, and I can't delete the shared folder... let alone the objects in Storage -> ZFS, because it tells me that two things are sharing the same directory.
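
    From the zfs get output earlier, both Tokyopool/Tokyo2 and Tokyopool/Tokyo2/Tokyo2 report the mountpoint /Tokyopool/Tokyo2, which is likely why OMV complains about a duplicate. To see which database entries collide on that directory, the mountpoint objects can be filtered, for example with jq (a sketch; jq has to be installed, and the directory is just the one from the error above):

    # List the mountpoint objects whose "dir" is /Tokyopool/Tokyo2:
    omv-confdbadm read --prettify conf.system.filesystem.mountpoint \
      | jq '.[] | select(.dir == "/Tokyopool/Tokyo2")'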

  • I have to say this is way out of my comfort zone, as it's something I don't use.


    This makes sense:

    root@tokyo:~# zfs mount Tokyopool/Tokyo
    cannot open 'Tokyopool/Tokyo': dataset does not exist

    as it states that the dataset no longer exists, but if you have recreated it as Tokyopool/Tokyo2 it should show under File Systems, unless there is another command to run.

  • Little by little: after I deleted the extra filesystem that I had created by mistake, and renamed the dataset back to the old name,


    it seems that Storage -> ZFS and Storage -> File Systems in openmediavault are OK.


    I had to delete the NFS share; after that I could delete the shared folder...


    and after I reconfigured the shared folders, the ZFS side seems to be working, but now I'm working on getting the ZFS dataset shared via NFS again...


    AND IT WORKS ;-) (I changed the title to SOLVED)
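
    For anyone following along, a quick way to verify the NFS export from a client machine (a sketch; the hostname and the /export/Tokyo path are simply the ones that appear earlier in this thread):

    # On a client, list what the NFS server exports:
    showmount -e tokyo

    # Test-mount the export somewhere temporary:
    mount -t nfs tokyo:/export/Tokyo /mnt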

  • kapoira

    Changed the title of the thread from “NOT seeing ZFS, I suppose, after apt-get update” to “NOT seeing ZFS, I suppose, after apt-get update [SOLVED]”.
