Migration OMV 2.x to OMV 4.x


    • migration OMV 2.x to omv 4.x

      New

      Hello

      I just upgraded from OMV 2 to OMV 4 (fresh install) and I have a small problem.
      When the installation finished, I plugged in my disks (ext4); they are visible on the Disks tab but not on the Filesystems tab.

      Result of lsblk -f:

      Shell-Script

      root@NAS:~# lsblk -f
      NAME   FSTYPE     LABEL UUID                                 MOUNTPOINT
      sda
      ├─sda1 ext4             7d5820c1-9014-4c3c-a4b3-e83d9c6ec952 /
      ├─sda2
      └─sda5 swap             43de1bce-9328-46a5-a54d-966416ec508e [SWAP]
      sdb    zfs_member
      └─sdb1 zfs_member
      I don't understand why the partition shows as zfs_member instead of ext4.

      PS: To test, I formatted a disk and created a new partition, and at the end of the process I got the same "zfs_member".
      Thanks for your help :P
    • New

      The zfs signature remaining on disks seems to be giving a few people problems lately. The disk needs to be wiped with wipefs -a /dev/sdb before formatting it ext4. Do you have the zfs plugin installed?
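If you want to see what wipefs would do before pointing it at a real disk, a rough rehearsal on a scratch image file works too; wipefs treats regular files like block devices, and the paths below are just examples:

```shell
# Rehearsal on a throwaway image file -- nothing here touches a real disk.
truncate -s 64M /tmp/scratch.img
mkfs.ext4 -q -F /tmp/scratch.img     # put an ext4 signature on the image

# Dry run: -n (--no-act) lists the signatures that would be erased.
wipefs -n /tmp/scratch.img

# Erase every signature wipefs can find (-a), then confirm nothing is left
# (wipefs with no options prints nothing once the device is clean).
wipefs -a /tmp/scratch.img
wipefs /tmp/scratch.img
```

On the real disk the same sequence would be wipefs -n /dev/sdb followed by wipefs -a /dev/sdb. Note that -a also removes the partition table, so you have to repartition before formatting ext4.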
      omv 4.1.9 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.9
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • New

      johnblade wrote:

      Can I use wipefs and not format afterwards?
      What is the output of: wipefs /dev/sdb
    • New

      All disks are OK after "wipefs -a /dev/sdx" and a quick format.
      On the last one, I will try to keep the data and not format.

      Shell-Script

      root@NAS:~# lsblk -f
      NAME   FSTYPE     LABEL  UUID                                 MOUNTPOINT
      sda
      ├─sda1 ext4              7d5820c1-9014-4c3c-a4b3-e83d9c6ec952 /
      ├─sda2
      └─sda5 swap              43de1bce-9328-46a5-a54d-966416ec508e [SWAP]
      sdb
      └─sdb1 ext4       movies 93f14fe8-19a1-4ba9-b142-19903c05d034 /srv/dev-disk-by-label-movies
      sdc    zfs_member
      └─sdc1 zfs_member
      sdd
      └─sdd1 ext4       data   a709e365-9dca-4b3f-8be6-68d8576fe0ce /srv/dev-disk-by-label-data
      sde
      └─sde1 ext4       photos b833e5e5-69cb-47e9-b81d-94c339f48966 /srv/dev-disk-by-label-photos

      Result of wipefs /dev/sdc:

      Shell-Script

      root@NAS:~# wipefs /dev/sdc
      offset        type
      ----------------------------------------------------------------
      0x200         gpt          [partition table]
      0xe8e0c3fc00  zfs_member   [filesystem]
      I think if I try wipefs -a, I will have to format my disk; I don't know if there is another solution?
    • New

      johnblade wrote:

      I think if I try wipefs -a, I will have to format my disk; I don't know if there is another solution?
      wipefs -a will wipe everything. I think you can do wipefs -o 0xe8e0c3fc00 /dev/sdc, but I would make sure to have a backup first.
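A rough sketch of that targeted wipe, using the device name and offset from this thread (verify your own with a plain wipefs /dev/sdc first, and do the -n dry run before writing anything):

```shell
DEV=/dev/sdc          # device from this thread -- check lsblk on your system
OFF=0xe8e0c3fc00      # zfs_member offset reported by wipefs /dev/sdc

# Dry run: show what would be erased at that offset, without writing.
wipefs -n -o "$OFF" "$DEV"

# Back up the erased bytes to ~/wipefs-<dev>-<offset>.bak (-b / --backup),
# then erase only the signature at that offset; the ext4 data elsewhere
# on the disk is left alone.
wipefs -b -o "$OFF" "$DEV"
```

ZFS keeps more than one label copy per device, so if wipefs /dev/sdc still reports zfs_member afterwards, re-run it and wipe the remaining offset the same way.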