Posts by CaNsA


    Pulled drive 1 from the array, carried out a low level format on it and it passed with no errors.
    Copied over the data from drive 2 to drive 1.
    Destroyed the array and pulled drive 2 from the server.
    Carried out a low level format on it and it passed with no errors.

    I have put both drive 1 and drive 2 back in the server.
    Created a new array using drive 1 and drive 2, but specified that drive 1 is missing.
    Drive 1 is currently rsyncing to drive 2
    Once this is complete I will add drive 1 to the RAID array.
    This will rebuild the array using the data on drive 2.
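    The sequence above can be sketched with mdadm, assuming a two-disk mirror (RAID 1); the device names (/dev/sdX for drive 1, /dev/sdY for drive 2) and mount points are placeholders, not copy-paste instructions:

```shell
# Create a degraded RAID 1 with drive 1 deliberately listed as "missing"
# (/dev/sdY stands in for drive 2 -- device names are assumptions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdY

# Put a filesystem on the array and copy the data across from drive 1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
rsync -a /mnt/drive1/ /mnt/array/

# Once the copy is complete, add drive 1 and the array rebuilds onto it
mdadm --add /dev/md0 /dev/sdX

# Watch the rebuild progress
cat /proc/mdstat
```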

    Hopefully there will not be any errors thrown up in my face.

    The replacement 3TB drive that I received turns out to have a terribad sector slap bang in the middle, rendering it unusable.

    Re-sync'd, still reporting as "Degraded"

    After a bit of hunting about, I'm giving this a try.

    One of my drives was showing read errors, so I pulled it and replaced it with an identical 3TB HGST.

    The re-sync (5hrs) went fine as far as I can tell.

    The array reports as "Clean, degraded" in OMV.

    How can I remove this flag?
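    A sketch of how I'd go about clearing the flag, assuming the replacement disk simply isn't counted as an array member yet (device names are placeholders for the actual disks):

```shell
# Check which array is degraded and which member slot is empty
cat /proc/mdstat
mdadm --detail /dev/md126

# If the replacement drive isn't listed as a member, add it so the
# array can rebuild back to a clean state (/dev/sdX is a placeholder)
mdadm /dev/md126 --add /dev/sdX

# If instead the array simply has more member slots than disks
# (e.g. a 2-disk mirror created with 3 slots), shrinking the device
# count also clears the degraded state:
# mdadm --grow /dev/md126 --raid-devices=2
```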

    root@NobKitten:~# blkid
    /dev/sdb: UUID="962c0810-0000-1583-d1b8-e4e0481e7a1d" UUID_SUB="fba9b6bc-0824-c8d4-61cf-b7738bb9767b" LABEL="NobKitten:M2" TYPE="linux_raid_member"
    /dev/sdc: UUID="3fac3cb1-281b-bae1-5c88-23b820c6e73b" UUID_SUB="9dba2023-d531-4f26-bb82-59855f9cb15e" LABEL="NobKitten:M1" TYPE="linux_raid_member"
    /dev/sdd: UUID="3fac3cb1-281b-bae1-5c88-23b820c6e73b" UUID_SUB="2f910984-6d7d-1434-4ad2-5721f854c6cc" LABEL="NobKitten:M1" TYPE="linux_raid_member"
    /dev/sde1: UUID="59448263-0293-45d9-bb96-9390a189728c" TYPE="ext4"
    /dev/sde5: UUID="3bf1317d-3952-47fe-8854-7b0e00cbda40" TYPE="swap"
    /dev/md127: LABEL="M2" UUID="f539bc29-030c-4bdf-ac3e-2fd81569c5d5" TYPE="ext4"
    /dev/md126: LABEL="M1" UUID="ad965792-7ecb-484b-8718-c54b9ec74790" TYPE="ext4"
    /dev/sda: UUID="962c0810-0000-1583-d1b8-e4e0481e7a1d" UUID_SUB="104d783d-be32-a0e2-06b6-3586fbbf17ef" LABEL="NobKitten:M2" TYPE="linux_raid_member"

    Not had a power outage, even for a second, in over 15 years.
    I live in a major city in the UK; our grid is pretty damn good.

    The last time I had a power outage was about 15 years ago.
    I'm more concerned with drive failure, hence the use of a RAID.

    I've tested the process of awesomeness in a VM and it all seems to work fine.

    Just got to move data about now.

    Drive 4 should be fine... I hope.

    I've not built a RAID array from the command line before; how difficult is the process?

    How reliable is a RAID 5 array in OMV?


    Currently I have the following setup using 4 x 3TB drives.

    I would like to change it to a RAID 5 config, using the 4 drives available and a 2TB drive in my PC.

    My plan:
    Back up what I can from M1 to a separate 2TB drive.
    Break down M1 (drive 1 & drive 2).
    Pull a single drive from M2 (drive 3) - will the data still be safe and accessible?
    Create a RAID 5 using the two drives from M1 and the single drive from M2 (drives 1, 2 and 3).
    Copy data from the other M2 drive (drive 4) over to the RAID 5.
    Grow the RAID 5 to incorporate drive 4.
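    The RAID 5 part of the plan could be sketched with mdadm like this, under the assumption of /dev/sdb-/dev/sde as the four 3TB drives (device names and filesystem are placeholders):

```shell
# Build a 3-drive RAID 5 from drives 1, 2 and 3
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0

# ... mount it and copy drive 4's data onto the new array, then:

# Add drive 4 and grow the array from 3 to 4 devices (a long reshape)
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4

# The filesystem then needs growing separately to use the new space
resize2fs /dev/md0
```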

    Would that work?
    Is it likely to rip a hole in time and space?
    Am I being an idiot by not understanding how things work?


    I'm new here. I've read some posts saying that running the OS from a USB stick will kill it in a short time; is that true?
    Is anyone here using a USB stick for the operating system?

    Due to the high number of reads/writes carried out by OMV, it is a bad idea to install it on a USB stick.

    Only the UPnP version would make sense, I guess.

    My RPi is pulling all media from my OMV box over NFS.
    In fact, NFS is recommended for use with XBMC on the RPi due to the limited bandwidth available to its Ethernet port.
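    For reference, mounting an OMV NFS export on the Pi looks roughly like this; the server IP and export path are made-up placeholders:

```shell
# One-off mount of the OMV export (IP and path are placeholders)
sudo mount -t nfs 192.168.1.10:/export/media /mnt/media

# Or persistently via /etc/fstab:
# 192.168.1.10:/export/media  /mnt/media  nfs  defaults,noatime  0  0
```

    XBMC can also talk NFS directly via an nfs:// source, which avoids a system-level mount entirely.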

    And what should the plugin do?
    Show a tab with the XBMC web interface?

    Provide a WebGUI to select folders to be monitored for additions to the XBMC SQL media library.

    Kinda what this whole thread is about.