Posts by cutting42

    Hi geaves

    I am an idiot: I have found the disk and worked out how to mount it. The RAID is still degraded, but I can see the file shares now and can read them from Windows.



    I have just upgraded my system from 0.3 to the current version. I have a 4 x 2 TB disk array in RAID 10. I can see the array in the RAID management section, but it is "clean, degraded" and missing the second disk, although I can see that disk in the Disk view.

    I cannot see any of the file system and assume it has not been mounted, but I cannot find a way to mount it. The hardware is an HP ProLiant MicroServer and the disks are WD Reds.

    root@hpnas:~# cat /proc/mdstat
    Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid10 sde[3] sdd[2] sdb[0]
          3907025920 blocks super 1.2 512K chunks 2 near-copies [4/3] [U_UU]
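If I am reading /proc/mdstat right, [4/3] means three of the four members are active, and [U_UU] means slot 1 (counting from zero) is the one that dropped out, which matches sdc being absent from the device list. A small sketch that pulls the missing slot out of that status text (the status string is just copied from the output above):

```shell
# Extract which member slot is missing from an mdstat status line.
# "[4/3]" = 3 of 4 devices active; "[U_UU]" = slot 1 (zero-based) is down.
status='[4/3] [U_UU]'
slots=$(echo "$status" | sed -n 's/.*\[\([U_]*\)\].*/\1/p')
missing=$(( $(echo "$slots" | awk '{ print index($0, "_") }') - 1 ))
echo "active map: $slots, missing slot: $missing"
```

The (auto-read-only) state should clear on the first write, or explicitly with `mdadm --readwrite /dev/md127`.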

    root@hpnas:~# blkid
    /dev/sda1: UUID="bb6c9002-65f9-400e-b4d7-52ce8156911a" TYPE="ext4" PARTUUID="a0e34cf5-01"
    /dev/sda5: UUID="6a17fcf8-781d-49fd-98bd-1d220cd97f13" TYPE="swap" PARTUUID="a0e34cf5-05"
    /dev/md127: LABEL="Datadisk" UUID="c4c4da08-76d1-4575-9548-592145361169" TYPE="xfs"
    /dev/sdb: UUID="2d67ea9f-ec38-0a82-cfc1-a9afe60df991" UUID_SUB="db20c8fd-173f-b52f-6473-635d4d4c4261" LABEL="HPNAS:Data" TYPE="linux_raid_member"
    /dev/sdd: UUID="2d67ea9f-ec38-0a82-cfc1-a9afe60df991" UUID_SUB="155cdab1-def8-e9d6-b892-84c14bf24abf" LABEL="HPNAS:Data" TYPE="linux_raid_member"
    /dev/sde: UUID="2d67ea9f-ec38-0a82-cfc1-a9afe60df991" UUID_SUB="303b66e7-6795-d668-9e25-eb0abf44aa1c" LABEL="HPNAS:Data" TYPE="linux_raid_member"

    root@hpnas:~# fdisk -l | grep "disk "
    (no output)
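For the record, blkid already shows the assembled array (/dev/md127) carrying an XFS filesystem labelled "Datadisk", so once the array is up it should mount like any other filesystem. A sketch of an fstab entry using that UUID (the mount point /srv/datadisk is my own placeholder, not anything OMV created):

```
# /etc/fstab fragment (sketch) -- /srv/datadisk is an assumed mount point
UUID=c4c4da08-76d1-4575-9548-592145361169  /srv/datadisk  xfs  defaults  0  2
```

After adding the line: `mkdir -p /srv/datadisk && mount -a` (as root).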

    root@hpnas:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    # Please refer to mdadm.conf(5) for information about this file.

    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    root@hpnas:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/HPNAS:Data level=raid10 num-devices=4 metadata=1.2 name=HPNAS:Data UUID=2d67ea9f:ec380a82:cfc1a9af:e60df991

    # definitions of existing MD arrays
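The scan output above is in exactly the form mdadm.conf expects under its "definitions of existing MD arrays" section, so this is a sketch of what I believe should be appended to /etc/mdadm/mdadm.conf (the ARRAY line is copied verbatim from the scan):

```
# definitions of existing MD arrays
ARRAY /dev/md/HPNAS:Data level=raid10 num-devices=4 metadata=1.2 name=HPNAS:Data UUID=2d67ea9f:ec380a82:cfc1a9af:e60df991
```

Followed by `update-initramfs -u` so the definition is baked into the initramfs.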

    Thanks in advance for any assistance

    I am a long-time user of OMV but at a very low level of knowledge. I have a very old version (Omnius) and am having some login problems due to the system drive being full.

    Rather than fixing the drive, I assume it would be best to upgrade to the current release. I have watched the video and it all seems simple enough, but I have a question about the existing data disks.

    I have a 4 x 2 TB RAID 10 with about 2 TB of data on it. Although I do have backups of the key data, can I install the new OMV version and use the data drives as they are? I appreciate I will need to recreate the users, but will the directories etc. still be there?
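From what I have read, mdadm keeps everything it needs in the superblocks on the member disks, so a fresh install should be able to re-assemble the array with the data and directory tree untouched. A rough sketch of the sequence I would expect (run as root; the mount point is my own placeholder, and none of this is OMV-specific advice):

```shell
# Sketch: pick up an existing mdadm RAID 10 on a freshly installed system.
# Assumes all four members are attached; nothing here writes to the data.
mdadm --assemble --scan                          # assemble arrays found via superblocks
cat /proc/mdstat                                 # confirm the array came up
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the definition
update-initramfs -u                              # rebuild initramfs with the new config
mkdir -p /mnt/data                               # placeholder mount point
mount /dev/md127 /mnt/data                       # shares/directories should still be there
```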

    Thanks for any help