3 SSD disks that I use in a ZFS pool - all 3 with problems

  • Hi guys.


    I hope you are very well.


    I have three 1TB SSD disks that I have been using in a ZFS pool for less than 8 months.


    I have analyzed the disks with Scrutiny.


    The results are very strange.


    First disk:

    https://ibb.co/XJhDjS0


    Second disk:

    https://ibb.co/smTwR8x


    Third disk:

    https://ibb.co/y5bbLyx


    Are all three SSDs about to die in the near future?




    Additional info:


    fdisk -l

    Code
    Disk /dev/sdb: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
    Disk model: Patriot P200 1TB
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes




  • sismondi

    Changed the title of the thread from "3 ssd disks that I use in a zfs pool" to "3 SSD disks that I use in a ZFS pool - all 3 with problems".
    • Official post

    Can you show the full result of a short SMART test on one of them?
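
    For reference, a short SMART self-test can be started and read back with smartctl from the smartmontools package; a minimal sketch, assuming the disk is /dev/sdb as in the fdisk output above:

    Code
    # start a short self-test (a few minutes on an SSD)
    sudo smartctl -t short /dev/sdb
    # when it has finished, print the full report including the self-test log
    sudo smartctl -a /dev/sdb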

  • I also have the problem that it is impossible to delete these partitions



    Device Start End Sectors Size Type

    /dev/sde1 2048 2000392191 2000390144 953.9G Solaris /usr & Apple ZFS

    /dev/sde9 2000392192 2000408575 16384 8M Solaris reserved 1



    Is this what you need?

    • Official post

    SMART overall-health self-assessment test result: FAILED!

    Drive failure expected in less than 24 hours. SAVE ALL DATA.

    5 Reallocated_Sector_Ct 0x0032 100 100 050 Old_age Always - 80

    197 Current_Pending_Sector 0x0032 100 100 050 Old_age Always - 80

    198 Offline_Uncorrectable 0x0032 100 100 050 Old_age Always - 5


    These values indicate that the drive is going to fail and should be replaced.


    It is surprising, given that it has relatively few hours of use.


    9 Power_On_Hours 0x0032 100 100 050 Old_age Always - 5803


    On the other hand, smartctl does not seem to fully support this model, and some values that could be relevant, such as attributes 187 and 188, are not reported.


    Device is: Not in smartctl database [for details use: -P showall]


    If you don't have a backup, I would make one immediately, and I would replace these disks, preferably with server-class drives; there are SSDs designed for 24/7 operation.


    I can't find any reason for all three units to fail so soon. Maybe it is related to the applications you use and the amount of data written. Maybe someone will see something else.
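
    If you want to check how much data has actually been written to each drive, many SSDs expose a write counter as a vendor-specific SMART attribute; a minimal sketch, with the caveat that the attribute name (Total_LBAs_Written, Host_Writes, etc.) varies by manufacturer and this model may not report it at all:

    Code
    # print the raw SMART attribute table
    sudo smartctl -A /dev/sdb
    # look for a write counter such as Total_LBAs_Written or Host_Writes;
    # if it is reported in 512-byte LBAs, bytes written = raw value * 512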

  • Do you know how to delete these partitions? Traditional methods don't work.



    Device Start End Sectors Size Type

    /dev/sde1 2048 2000392191 2000390144 953.9G Solaris /usr & Apple ZFS

    /dev/sde9 2000392192 2000408575 16384 8M Solaris reserved 1

    • Official post

    sudo wipefs -a /dev/sde (you might have to run it several times).
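
    Running wipefs without -a only lists the signatures it finds, without erasing anything, which is a quick way to see what is still on the disk before and after wiping; a minimal sketch, assuming the disk is /dev/sde:

    Code
    # read-only: list the partition-table and filesystem signatures on the whole disk
    sudo wipefs /dev/sde
    # ZFS writes its labels inside the data partition, so check that too
    sudo wipefs /dev/sde1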

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4 | scripts 7.0.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I did it many times, but it doesn't work.


    Disk /dev/sde: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors

    Disk model: Patriot P200 1TB

    Units: sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disklabel type: gpt

    Disk identifier: 056A3B2F-3D26-FB4B-8D54-AA810C85B67B


    Device Start End Sectors Size Type

    /dev/sde1 2048 2000392191 2000390144 953.9G Solaris /usr & Apple ZFS

    /dev/sde9 2000392192 2000408575 16384 8M Solaris reserved 1



    sudo wipefs -a /dev/sde (you might have to run it several times).

    • Official post

    Just curious if you allow me the question. What is the reason to delete these partitions now?

  • Just curious if you allow me the question. What is the reason to delete these partitions now?

    I want to use the disk again to investigate more.


    There is no chance that all 3 are broken.


    The disks, since they were installed, had less than 5 TB of writing.


    They were used for a z1 zfs pool and nextcloud was installed.


    Nothing else.

    • Official post

    Yes. It's certainly strange that all three of them failed so quickly for no apparent reason.

    Once a backup is done, I would try to find out if they are really broken. There must be some disk recovery software that can help.

    That's why I asked, I really don't understand the reason for deleting those partitions. It's not going to help you much now.
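
    One way to check whether the drives are really failing, without special recovery software, is a long SMART self-test plus a read-only surface scan; a minimal sketch, assuming the disk under test is /dev/sde:

    Code
    # extended self-test; this can take a while even on an SSD
    sudo smartctl -t long /dev/sde
    # afterwards, read the self-test log and attribute table again
    sudo smartctl -a /dev/sde
    # read-only surface scan; reports unreadable blocks without writing anything
    sudo badblocks -sv /dev/sde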

  • Yes. It's certainly strange that all three of them failed so quickly for no apparent reason.

    Once a backup is done, I would try to find out if they are really broken. There must be some disk recovery software that can help.

    That's why I asked, I really don't understand the reason for deleting those partitions. It's not going to help you much now.

    If I don't delete those partitions, it won't let me create a new partition. I get an error:


    Other people have had the same problem:

    https://www.reddit.com/r/zfs/c…8s/wiping_zfs_disks_help/

    • Official post

    I did it many times, but it doesn't work.

    Is zfs mounted and/or still active?
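
    A quick way to check is to see whether the zfs kernel module is loaded and whether any pool, imported or importable, still references the disk; a minimal sketch:

    Code
    # is the zfs kernel module loaded?
    lsmod | grep zfs
    # any pools currently imported?
    sudo zpool status
    # any exported/importable pools detected on attached disks?
    sudo zpool import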


  • Is zfs mounted and/or still active?

    root@NAS:~# zpool status ncdisk

    cannot open 'ncdisk': no such pool

    root@NAS:~# zpool import ncdisk

    cannot import 'ncdisk': no such pool available

    root@NAS:~# zpool destroy ncdisk

    cannot open 'ncdisk': no such pool

    root@NAS:~# zpool list

    no pools available

    root@NAS:~# zpool clear ncdisk

    cannot open 'ncdisk': no such pool



    The pool was degraded and lost.


    As I had a backup of everything, I did not worry and simply erased the three disks.


    Two disks were erased without a problem.


    On the third, there is no way to erase the partitions, and I cannot create a new filesystem.

    • Official post

    sudo dd if=/dev/zero of=/dev/sde bs=1M count=5000

    reboot


  • sudo dd if=/dev/zero of=/dev/sde bs=1M count=5000

    reboot

    It doesn't work.


    root@NAS:~# sudo dd if=/dev/zero of=/dev/sde bs=1M count=5000

    5000+0 records in

    5000+0 records out

    5242880000 bytes (5.2 GB, 4.9 GiB) copied, 12.0442 s, 435 MB/s

    root@NAS:~# sudo reboot


    fdisk -l


    Disk /dev/sde: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors

    Disk model: Patriot P200 1TB

    Units: sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disklabel type: gpt

    Disk identifier: 056A3B2F-3D26-FB4B-8D54-AA810C85B67B


    Device Start End Sectors Size Type

    /dev/sde1 2048 2000392191 2000390144 953.9G Solaris /usr & Apple ZFS

    /dev/sde9 2000392192 2000408575 16384 8M Solaris reserved 1

    • Official post

    Something is very strange here. Zero the entire disk then. Just remove the count parameter. If you write zeroes to the entire disk, it cannot still be a zfs disk unless something on your system is re-importing/re-creating it.


    Personally, I would boot a rescue disk that I knew wasn't going to support zfs and use wipefs and dd.
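
    One detail worth keeping in mind: GPT stores a backup partition table at the end of the disk, and ZFS writes two of its four vdev labels at the end of each device, so zeroing only the first few GB can leave both behind. A minimal sketch of a more thorough wipe, assuming the disk is /dev/sde, no pool is using it, and sgdisk (from the gdisk package) is installed:

    Code
    # clear any leftover zfs label on the old data partition (ignore errors if none is found)
    sudo zpool labelclear -f /dev/sde1
    # destroy both the primary and the backup GPT structures
    sudo sgdisk --zap-all /dev/sde
    # zero the first and last 10 MiB of the raw device for good measure
    sudo dd if=/dev/zero of=/dev/sde bs=1M count=10
    sudo dd if=/dev/zero of=/dev/sde bs=1M count=10 seek=$(( $(sudo blockdev --getsz /dev/sde) / 2048 - 10 ))
    # ask the kernel to re-read the partition table
    sudo partprobe /dev/sde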


  • Something is very strange here. Zero the entire disk then. Just remove the count parameter. If you write zeroes to the entire disk, it cannot still be a zfs disk unless something on your system is re-importing/re-creating it.


    Personally, I would boot a rescue disk that I knew wasn't going to support zfs and use wipefs and dd.

    I installed Debian 10 and OMV again.


    I tried to fill the entire disk with zeros, but it didn't work:



    After that, I removed the disk and connected it to a Windows PC.

    With the HD Tune Pro program, I filled the disk with zeros again.


    Still not working.


    But it gave me the following error.


    https://ibb.co/sq4jdTg


    Would it be correct to conclude that the disk does not allow some sectors to be overwritten?

    • Official post

    Would it be correct to conclude that the disk does not allow some sectors to be overwritten?

    No. Between wipefs and dd, nothing can survive that. I would boot systemrescuecd and run the commands again. If it somehow comes back, something in your setup is changing it. I have never seen anything like this. So, I don't know what your system is doing.
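
    To verify on the spot that a wipe actually reached the disk, you can read the first sectors back and confirm they are all zeros; a minimal sketch, assuming /dev/sde:

    Code
    # dump the first 1 MiB; a wiped disk should contain nothing but zero bytes
    sudo dd if=/dev/sde bs=1M count=1 status=none | hexdump -C | head
    # a fully zeroed region prints one line of zeros followed by '*'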


    • Official post

    If the other two disks are still in a pool, maybe zfs is adding the disk back in every time you reboot. I don't use zfs.
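
    If that is what is happening, it should be visible right after a reboot; a minimal sketch of things to check, with the caveat that the service names below are the standard ZFS on Linux units and may differ depending on how the plugin set things up:

    Code
    # did any pool re-import itself and grab /dev/sde again?
    sudo zpool status
    # ZFS on Linux normally auto-imports pools via these units and a cache file
    systemctl status zfs-import-cache.service zfs-import-scan.service
    ls -l /etc/zfs/zpool.cache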

