Posts by Nordmann

    Heyho, I've been growing my >20TB RAID for days now, hoping there is no power outage or other odd interruption :rolleyes:


    While waiting I was wondering where OMV stores the backup file for the grow command (mdadm --grow ... --backup-file=).
    I started the grow via the WebUI, so I hope OMV defines a backup file at all?! :saint:


    Found nothing under /tmp, and /var/backups only contains user/group and dpkg backups...
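
    In case anyone wants to check for themselves: one way to see whether a backup file was passed at all is to inspect the running reshape (a sketch, assuming the grow is still in progress):

    ps aux | grep '[m]dadm'    # full command line, including any --backup-file= argument
    cat /proc/mdstat           # current reshape progress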


    Thx in advance :!:

    Solved the issue for myself by revoking/cancelling the ASRock Z87E-ITX order while it was still possible.
    The new board is a Gigabyte GA-H97N-WIFI and it doesn't touch the drives during boot, as it should...


    Re-created the array with the same specs one last time and all data is still there :thumbup:

    I've got the same problem...
    The superblock at the start of the disk (the MBR area) gets wiped after every reboot.


    Good news first:
    I can still access my data by re-creating the array, but:

    • the drive order must be exactly the same as when you originally created the RAID

      • maybe you still have an old mdadm --detail or /proc/mdstat output where you can see the RaidDevice number for the drive letters (given they're still on the same SATA ports and no BIOS settings like the SATA mode were changed); see the sketch after this list
    • all other details like the number of drives, chunk size etc. must match as well, of course
    • --assume-clean should be used so it doesn't actually touch the data
    • no guarantee this works for you; if it doesn't, most probably all data is lost.
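
    For reference, this is roughly how to record those details while an array is still intact (a sketch; /dev/sda and the array name are examples):

    mdadm --detail /dev/md/n0rd    # RaidDevice order, level, chunk size, name
    cat /proc/mdstat               # member order as a quick cross-check
    mdadm --examine /dev/sda | grep -E 'Level|Chunk|Name|Device Role'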

    Then define as many details as possible in the create command, mine for example:
    mdadm --create /dev/md/n0rd --chunk=512 --raid-devices=4 --spare-devices=0 --assume-clean --level=5 --name=datacube:n0rd /dev/sda /dev/sdb /dev/sdc /dev/sdd --verbose
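
    After the create it's worth checking read-only before writing anything (a sketch, assuming an ext4 filesystem sits directly on the array):

    mdadm --detail /dev/md/n0rd       # verify level, chunk size and device order
    fsck.ext4 -n /dev/md/n0rd         # -n: check only, never write
    mount -o ro /dev/md/n0rd /mnt     # mount read-only first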


    **edit**
    Found a very nice thread on Server Fault from 2012 (see the first answer with the highest score).
    This Shane Madden guy literally tries to break the data with re-creation attempts using the wrong order, chunk size and so on, but could still access the data after all the invalid attempts:

    Quote

    Wow. So, it looks like none of these actions corrupted data in any way. I was quite surprised by this result, frankly;
    I expected moderate odds of data loss on the chunk size change, and some definite loss on the layout change. I learned something today.


    The issue:
    Did you switch your mainboard/controller or even do a BIOS update recently?
    I switched from an old Ivy Bridge based board to an ASRock Z87E-ITX.


    In my case I can clearly reproduce that it's caused by the mainboard:
    Re-creating the array as described above > can mount and access my data > reboot > all superblocks missing > mdadm --examine doesn't find anything, neither in the MBR area nor on partition 1 of each drive


    Interestingly, it somehow clears the superblock but leaves the GPT partition table intact, still showing my single FD00 Linux RAID partition.
    Sometimes a single drive out of the array is unaffected; mdadm --examine then still shows its superblock as intact and gdisk throws "corrupted GPT table" > but after another reboot this last-man-standing superblock is also wiped.
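
    To check the state of each disk after a reboot, something like this covers both the superblock and the partition table (a sketch; /dev/sda as an example):

    mdadm --examine /dev/sda /dev/sda1    # look for a superblock on the disk and on the partition
    sgdisk --verify /dev/sda              # reports GPT corruption, same check gdisk does
    gdisk -l /dev/sda                     # list the partition table (FD00 = Linux RAID)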


    To make sure it's not OMV, I put in my Ubuntu OS SSD from the previous machine (which is still running fine with another RAID set), created the array/superblocks > rebooted > gone.
    Then I connected one drive via a PCIe SATA controller > superblock deleted again... (the same controller worked like a charm with these hard disks in the Ivy Bridge system)


    Then I tried an older drive, no 4K sectors, no GPT table, and guess what? > The mainboard didn't wipe it... so I guess it's some weird behaviour during UEFI boot initialisation with 4K/GPT disks, no matter how the device is connected.
    After that I went crazy with the BIOS settings: AHCI mode, RAID mode, even IDE mode, disabling S.M.A.R.T., disabling SATA power management, resetting to default BIOS settings, all without success.

    The drives are 4x brand-new ST8000AS0002, or let's say "almost" new; the array ran completely fine for a quarter of a year in the Ivy Bridge machine on Ubuntu.
    Yes, I know Seagate doesn't recommend their Archive series for NAS purposes (so they can sell their expensive Enterprise versions), but until the mainboard change I had zero issues with them.


    Solution?
    My last idea (apart from using another board) would be giving mdadm the partitions (/dev/sdX1) instead of the whole drives (/dev/sdX) during array creation.
    Does this write the superblock to the partition itself instead of to the start of the disk (which seems to get wiped)? And even more important: will this destroy my existing data on the drives?
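
    For the record, the partition-based create would look roughly like this (a sketch only; with 1.2 metadata the superblock sits a few KiB into each member device, so on partitions it ends up inside the partition rather than at the disk start, but the data offset also shifts, so whether the existing data survives is exactly the open question):

    # WARNING: sketch, not a tested recipe; the data start moves by the partition offset
    mdadm --create /dev/md/n0rd --chunk=512 --raid-devices=4 --assume-clean \
          --level=5 --name=datacube:n0rd /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 --verbose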

    Greetz,
    N0rd