In an active RAID6 array, some HDDs are formatted as 4K drives and some are not?!

  • I have set up a RAID6 array with 6 x 1TB WD HDDs - 4 of them are WD Greens and 2 of them are WD Reds.


    Before setting up the array I had completely wiped them with DBAN. Without taking any further action (I was under the impression I did not need to), I added them all to the RAID6 array through the web UI. The array was created OK, I created shares, etc. - everything seemed to work.


    I ran write and read tests using dd and got strange results - 40 MB/s to 75 MB/s for 10 GB files. So I subsequently ran fdisk, and the output shows that the 4 "Greens", despite being 4K drives, show up as the "older" type (512-byte physical sectors), while the 2 "Reds" are correctly reported as 4K drives (4096-byte physical sectors)!!! I am attaching examples from the fdisk output:


    "Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000"

    vs
    "Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdf doesn't contain a valid partition table"
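
    By the way, the same sector-size information can be read straight from sysfs as a cross-check - something along these lines (the drive letters below are just examples, not necessarily the right ones for my box):

    # print the logical/physical sector size the kernel sees for each member disk
    for d in sdb sdc sdd sde sdf sdg; do
        echo "$d: logical $(cat /sys/block/$d/queue/logical_block_size) / physical $(cat /sys/block/$d/queue/physical_block_size)"
    done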


    I am completely baffled - why is this happening and how can I correct it? Is it possible that the drives report themselves (incorrectly) as 512-byte physical??


    I really want to fix this, as I believe it is degrading my array's performance (and it could potentially be dangerous for my data?). How can I tell OMV to treat them as Advanced Format drives?
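
    P.S. For reference, the dd tests were roughly of this form (the share path and the file size here are placeholders, not my exact ones):

    # sequential write test, ~10 GB, flushed to disk so the reported rate is honest
    dd if=/dev/zero of=/path/to/share/testfile bs=1M count=10240 conv=fdatasync
    # sequential read test of the same file
    dd if=/path/to/share/testfile of=/dev/null bs=1M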

    • Official post

    You are looking at fdisk output and the drives' sector size. mdadm on OMV doesn't use partitions, so you need to look at the filesystem's block size instead.


    dumpe2fs /dev/md127 | grep Block
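
    Assuming an ext4 filesystem created with the defaults, the line that matters should look something like this (the other "Block" lines it prints can be ignored here):

    Block size:               4096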

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4



  • Hi!


    I ran the command you gave, replacing md127 with my drive, and I got: "dumpe2fs: Bad magic number in super-block while trying to open /dev/sde"


    What am I doing wrong? Is it the syntax?
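
    Edit: I suspect that is my mistake - the filesystem sits on the md array device, not on the individual member disk, so presumably the command needs the array's device name, something like:

    cat /proc/mdstat                        # shows the array name, e.g. md127
    dumpe2fs -h /dev/md127 | grep Block     # -h prints just the superblock summary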

  • In addition, you can run egrep 'ata[0-9]\.|SATA link up' /var/log/dmesg and check that all disks have negotiated a full-speed SATA link and have NCQ active.
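
    On a healthy connection the matching lines should look roughly like this (the drive model and numbers are only examples):

    ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    ata3.00: ATA-8: WDC WD10EFRX-68JCSN0, 01.01A01, max UDMA/133
    ata3.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA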

    Homebox: Bitfenix Prodigy case, ASUS E45M1-I DELUXE ITX, 8GB RAM, 5x 4TB HGST RAID-5 data, 1x 320GB 2.5" WD boot drive via eSATA from the backside
    Companybox 1: standard midi tower, Intel S3420 MoBo, Xeon 3450 CPU, 16GB RAM, 5x 2TB Seagate data, 1x 80GB Samsung boot drive - testing for iSCSI to ESXi hosts
    Companybox 2: 19" rack server case (4U), Intel S975XBX2 MoBo, Core 2 Duo @ 2200MHz, 8GB RAM, HP P212 RAID controller, 4x 1TB WD RAID-0 data, 80GB Samsung boot drive, Intel 1000Pro DualPort (bonded in a VLAN) - temp NFS storage for ESXi hosts
