mdadm

  • Hello!


    After a disk crash I inserted a new disk into the RAID1 array and got a "SparesMissing" error mail. I then changed spares to 0 in the mdadm config file, as described here in the forum, and that error was fixed. But now I get a new error mail with this text:


    /etc/cron.daily/mdadm:
    mdadm: Unknown keyword


    mdadm: unreconised word on DEVICE line: partitions


    mdadm: Unknown keyword


    mdadm: auto= arg of "yes
    " unrecognised: use no,yes,md,mdp,part
    optionally followed by a number.
    run-parts: /etc/cron.daily/mdadm exited with return code 2
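
    The same messages can be reproduced (and a fix verified afterwards) by running the cron job by hand; a quick sketch:

    Code
    sh /etc/cron.daily/mdadm   # run the daily mdadm cron job manually
    echo $?                    # non-zero (here: 2) means it still fails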


    Any solutions??


    Regards
    Kåre!

    • Official post

    What is the output of:


    fdisk -l
    cat /proc/mdstat


  • When running fdisk -l
    root@storefjell-nas:/dev# fdisk -l


    Disk /dev/sda: 80.0 GB, 80032038912 bytes
    255 heads, 63 sectors/track, 9730 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00076fcd


    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 9607 77166592 83 Linux
    /dev/sda2 9608 9730 987137 5 Extended
    /dev/sda5 9608 9730 987136 82 Linux swap / Solaris


    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdb doesn't contain a valid partition table


    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdc doesn't contain a valid partition table


    Disk /dev/md127: 3000.6 GB, 3000591794176 bytes
    2 heads, 4 sectors/track, 732566356 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/md127 doesn't contain a valid partition table


    Then running cat /proc/mdstat:


    root@storefjell-nas:/dev# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[2] sdc[1]
    2930265424 blocks super 1.2 [2/2] [UU]


    unused devices: <none>


    And now the RAID is missing in RAID Management; in Filesystems it is OK, the system is working and I can reach the files in every share.


    Any solutions??

    • Official post

    Not sure why it isn't showing up in the RAID tab. Everything looks ok. I assume a reboot didn't help?
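
    One thing worth comparing while it is missing from the tab is what mdadm itself reports against what the config file lists (a minimal, non-destructive check):

    Code
    mdadm --detail --scan        # ARRAY lines for the arrays the kernel currently runs
    cat /etc/mdadm/mdadm.conf    # the ARRAY lines the config expects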


    • Official post

    It won't let you create a new raid because the disks are in use. That would erase your data anyway. Your raid is fine. It's just a display issue. What is the output of cat /etc/mdadm/mdadm.conf?
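
    If you want to confirm that the disks really are busy as array members, a quick sketch:

    Code
    mdadm --query /dev/sdb    # reports which array this disk belongs to
    mdadm --query /dev/sdc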


  • The output from cat /etc/mdadm/mdadm.conf


    root@storefjell-nas:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md/storefjell-nas:0 metadata=1.2 spares=0 name=storefjell-nas:0 UUID=b7628fce:8effb043:933c3930:282d2cc8


    # instruct the monitoring daemon where to send mail alerts
    MAILADDR lerberg@c2i.net
    MAILFROM root

  • Is it possible that you have the /etc/mdadm/mdadm.conf file in DOS mode?


    These lines make me suspicious:
    mdadm: auto= arg of "yes
    " unrecognised: use no,yes,md,mdp,part
    optionally followed by a number.
    run-parts: /etc/cron.daily/mdadm exited with return code 2


    It looks like the last argument is CRLF-terminated (see the quote character at the start of the next line), which it should not be.


    Open the file with vi and see if the lines end with ^M ... that would indicate DOS mode.


    You can also run dos2unix on the file to convert it to Unix line endings.


    I am not sure if it helps, but it may be the answer.
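
    A minimal sketch of that check and fix (the sed variant is just an alternative in case dos2unix is not installed):

    Code
    cat -A /etc/mdadm/mdadm.conf             # CRLF-terminated lines show up ending in ^M$
    dos2unix /etc/mdadm/mdadm.conf           # convert the file in place
    sed -i 's/\r$//' /etc/mdadm/mdadm.conf   # alternative: strip the carriage returns with sed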


  • Hi, I read about how to solve this problem a while ago. You have to stop the RAID and then assemble it again. The procedure is below (a combined sketch of the whole session follows the steps):

    1. Check UUID of md127

    Code
    mdadm --detail /dev/md127

    You will see the UUID of the RAID; record it.

    2. Stop the raid.

    Code
    mdadm --stop /dev/md127

    Now you will see the raid is stopped.

    3. Reassemble it again.

    Code
    mdadm --assemble /dev/md127 --uuid=XXXXXXXXXX

    You will see on screen that the RAID has been started again.

    Now you should see the RAID in OMV.


    If you still can't see it, try the third step again with the command below (using /dev/md0 instead):

    Code
    mdadm --assemble /dev/md0 --uuid=XXXXXXXXXXXXXXXXXX

    Good luck.
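
    Putting the three steps together, a sketch of the whole session might look like this (the UUID here is the one from the mdadm.conf posted above; substitute whatever --detail reports, and make sure the filesystem on the array is not mounted before stopping it):

    Code
    mdadm --detail /dev/md127     # note the "UUID :" line in the output
    mdadm --stop /dev/md127       # fails with "device busy" if the array is still mounted
    mdadm --assemble /dev/md127 --uuid=b7628fce:8effb043:933c3930:282d2cc8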

  • chente

    has closed the thread.
