Disks: they exist in Storage, but not in File Systems

  • Hi all,


    I have 6 hard drives:

    /dev/sda

    /dev/sdb

    /dev/sdc

    /dev/sdd

    /dev/sde

    /dev/sdf


    all ext4.


    When I go to Storage/File Systems, only 4 drives are offered for mounting; the other 2 (sdb, sdd) are not proposed, but they do appear under "Create".


    If in "Create" I select for example drives sdd, OMV start to create file system, but when finished, nothing's changed.


    I don't know what to do, or where I'm going wrong.


    P.S.: Very important: the disks are full of data that I don't want to lose.

    • Official Post

    After you have created the file system, have you tried to select the drive and mount it?

    System Backup Typo alert: Under the Linux section the command should be sudo umount /dev/sda1 NOT sudo unmount /dev/sda1

    Backup Data Disk to Backup Disk on Same Machine: In a Scheduled Job: rsync -av --delete /srv/dev-disk-by-uuid-f8814ed9-9a5c-4e1c-8830-426968c20ea3/ /srv/dev-disk-by-uuid-e67439d5-00a3-4942-bd5f-b84ab86aa850/ Don't forget trailing slashes, and BE CAREFUL. (HT: Getting Started with OMV5)

    Equipment - Thinkserver TS140, NanoPi M4 (v.1), Odroid XU4 (Using DietPi): PiHole

  • You cannot use disks that are not formatted.


    In "Storage" -> "Disks" you have to select the disk then click on "Reset to factory" (on OMV6); WARNING: All data will be permanently erased.


    After this, you can select the disk and create a new file system (a rough CLI sketch of the same idea follows below).
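
    For reference, a sketch of that wipe-and-reformat from the CLI. /dev/sdX is a hypothetical placeholder for the disk you chose, and every byte on it is lost:

    Code
    # WARNING: destroys everything on the disk; /dev/sdX is a placeholder
    sudo wipefs --all /dev/sdX                                           # clear old filesystem/RAID signatures
    sudo parted --script /dev/sdX mklabel gpt mkpart data ext4 0% 100%   # new GPT table with one partition
    sudo mkfs.ext4 -L data /dev/sdX1                                     # fresh ext4 filesystem on that partition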

  • P.S.: Very important: the disks are full of data that I don't want to lose.

    I don't get it: you have DATA on the drives?


    Then you click + and Mount, NOT Create (see the CLI check sketched below).
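
    If you want to double-check from the CLI before mounting, a harmless read-only sketch, assuming sdb1 is one of the partitions in question:

    Code
    sudo blkid /dev/sdb1               # shows UUID and TYPE if a filesystem is present
    sudo mount -o ro /dev/sdb1 /mnt    # temporary read-only mount, nothing is written
    ls /mnt                            # confirm the data is still readable
    sudo umount /mnt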

  • luciano

    I think you need to start from the top:

    You say you have 6 drives and they have DATA.


    Where were they connected and how?

    What filesystem do they have? Besides being ext4, were they part of some RAID setup? Any particular special setup on them (SnapRAID, for example)?


    How and what are you running now? How are the drives attached to the system? And what kind of system?


    Some output from the CLI can also give a better picture of what is going on (see the sketch after this list for collecting it all at once):

    Install tree with apt install tree


    tree /dev/disk/by-*

    cat /etc/fstab

    lsblk

    blkid

    fdisk -l | grep sd
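
    A convenience sketch for gathering all of that output in one go (the /tmp/disk-info.txt path is just an example):

    Code
    # Run as root; collects every command's output into one file you can paste in the forum
    {
      tree /dev/disk/by-*
      cat /etc/fstab
      lsblk
      blkid
      fdisk -l | grep sd
    } > /tmp/disk-info.txt 2>&1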

  • Soma


    First of all, thanks for the help.


    Yes, I have 6 drives (see disks.png), all full of my files (/dev/sdc1 is the system drive).


    They all come from a very old installation of OMV1.


    Now I'm still on the same PC, but I formatted the system disk and installed OMV6.


    Here are the outputs:


    tree /dev/disk/by-*


    cat /etc/fstab

    lsblk

    blkid

    Code
    /dev/sdd1: UUID="a8b62c6e-2131-451a-9edb-bc6db14f294b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="97447e97-101c-419e-b477-b04fbd5be33e"
    /dev/sdc1: UUID="2f762ba8-1bf4-4df0-bca0-889818769978" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c7464c2b-01"
    /dev/sdc5: UUID="059c9f97-3f9d-4181-a0aa-c590f485e8cc" TYPE="swap" PARTUUID="c7464c2b-05"
    /dev/sda1: UUID="4dae5952-7e7d-49de-af1b-a6d35dd57295" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9880e211-a638-4a07-8051-40df1c407b10"
    /dev/sde1: LABEL="dati2015" UUID="8728a0d9-aa82-4e96-8f3b-3c625d8480bb" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ce0ad713-8360-4198-9438-346afa103c4b"
        

    fdisk -l | grep sd

  • /dev/sdd1 2048 1953523711 1953521664 931,5G Microsoft basic data


    /dev/sda1 2048 1953523711 1953521664 931,5G Microsoft basic data

    Your partitions don't add up in fdisk: it says Microsoft basic data, but in the GUI it's mounted as ext4.


    And, again, DO NOT TRY TO CREATE a filesystem on drives that already have data. I really hope it didn't screw up the partition header.

    They weren't recognized properly in the first place, and now it's scrambled.
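
    For what it's worth, the GPT partition type label and the filesystem inside the partition are two separate things, so this can be inspected, and even corrected, without touching the data. A hedged sketch, assuming the drive in question is /dev/sdd:

    Code
    sudo fdisk -l /dev/sdd         # the "Type" column is the GPT partition type (here: Microsoft basic data)
    sudo blkid /dev/sdd1           # the TYPE field is the filesystem actually stored there (here: ext4)
    # Optional cosmetic fix: set the type code to "Linux filesystem" (8300); this does not modify the data
    # (sgdisk comes from the gdisk package)
    sudo sgdisk --typecode=1:8300 /dev/sdd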


    If you can, boot up a live CD (GParted, Clonezilla, SystemRescue, etc.) and see if it shows ALL drives.

    If it does, shut down and unplug ALL drives except the OS drive.


    Reinstall Debian/OMV.

    Plug only 1 drive into the system and boot up. Check if you see it and MOUNT it in the GUI.

    Shut down and plug in the 2nd one. Do the same as above.


    Once a drive doesn't show up or is unable to MOUNT, put it aside and continue with the rest.


    Then we'll try to figure out what the issue is with those failing drives.
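
    If one of them refuses to mount, a read-only filesystem check is a safe first diagnostic. A sketch, assuming the stubborn partition is /dev/sdb1 and it is not mounted:

    Code
    sudo fsck.ext4 -n /dev/sdb1    # -n: only report problems, never write anything to the disk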

  • And, again DO NOT TRY TO CREATE fs on drives that already have data.

    ...... already done .... I hope nothing has changed ....

    If you can, boot up a live CD (GParted, Clonezilla, SystemRescue, etc.) and see if it shows ALL drives.

    If it does, shut down and unplug ALL drives except the OS drive.

    I don't know why, but GParted Live doesn't start after the configuration screens.

    I'll try to use GParted on Ubuntu on another PC to check the file systems of all the drives.

    Plug only 1 drive into the system and boot up. Check if you see it and MOUNT it in the GUI.

    Shut down and plug in the 2nd one. Do the same as above.


    Once a drive doesn't show up or is unable to MOUNT, put it aside and continue with the rest.

    OK, all done, step by step.


    2 drives have problems mounting.


    I don't know whether to reset them or not ...... :(

  • Run and post the output of the same commands from #8, with just the working drives attached.


    And, I hope you do have a backup of the DATA on the drives that fail.

  • I discovered that on these 2 drives the files are unrecoverable.

    I decided to reset them with OMV6 and format them as BTRFS instead of ext4. I don't know if it will be better; I hope so.

    Now I see all the disks correctly.


    Who knows what files I've lost ....

    • Official Post

    ext4 is one of the most mature and robust filesystems on Linux. Whatever you experienced, it is not related to the filesystem type.


    Be aware that BTRFS needs some maintenance, i.e. scrubbing and balancing.

    Most users would select BTRFS over ext4 for its additional features, which you will probably not use.
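
    The maintenance mentioned above boils down to a couple of periodic commands. A sketch, where /srv/dev-disk-by-uuid-XXXX stands in for wherever OMV mounted the new BTRFS filesystem:

    Code
    sudo btrfs scrub start /srv/dev-disk-by-uuid-XXXX               # verify all data against its checksums
    sudo btrfs scrub status /srv/dev-disk-by-uuid-XXXX              # check the result of the scrub
    sudo btrfs balance start -dusage=50 /srv/dev-disk-by-uuid-XXXX  # compact data chunks that are less than 50% full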
