Posts by CJRamze

    When I say it is done, I mean the array is done growing/expanding. The filesystem has not been resized to use the array's additional space. Did you resize the filesystem in the filesystem tab?

    Code
    Failed to grow the filesystem '/dev/md0': resize2fs 1.42.5 (29-Jul-2012)
    resize2fs: Device or resource busy While checking for on-line resizing support
    Filesystem at /dev/md0

    That means it is done. Otherwise, you would see information about rebuilding and speed.

    I wish that it was :( I'd be putting more data on it if it was.
    This is what I see:
    RAID shows the full amount and that it's clean.
    File System only shows the original 8TB. I grew the array after I'd added the new drive in RAID Management, and it's been stuck like this since.


    http://imgur.com/a/QGUUA
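    For reference, here's a rough sketch of the manual CLI route I understand people use once the array itself has finished growing (assuming the filesystem on /dev/md0 is ext4, which resize2fs can grow online while mounted):


    Code
    # confirm the reshape has actually finished before touching the filesystem
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # grow the ext4 filesystem to fill the enlarged array
    resize2fs /dev/md0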

    Hi Ryecoaaron
    Thanks for the help!


    cat /proc/mdstat shows:


    Code
    root@dagobah:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sde[5] sda[0] sdf[4] sdd[3] sdc[2] sdb[1]
          9766914560 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
    
    
    unused devices: <none>
    root@dagobah:~#


    Apart from the 512k chunk size, I wasn't sure if I was meant to see something else?
    I ran the second command just in case.
    Ran the original command again and it looks the same to me, so I'm not sure :S


    Sorry, I appreciate any help; I'm just confused as to what's going on.

    I'm not using the array at all.
    All access has been revoked while this is ongoing.
    Is there any way to see what's going on, why it's taking so long... or if it's even doing anything at all?
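    In case it helps, these are the standard commands I understand can show whether a reshape is actually still running (plain mdadm, nothing OMV-specific):


    Code
    # live view of reshape/resync progress and the estimated finish time
    watch -n 5 cat /proc/mdstat

    # detailed array state; shows "Reshape Status" while a reshape is in progress
    mdadm --detail /dev/md0

    # current rebuild speed limits, in KB/s per device
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max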

    Afternoon All.
    I've had an 8TB RAID 5 array running for some time now.
    I filled all 8TB, so I've added another disk, taking usable space up to 10TB.
    In the interface I added the drive to the array.


    I've then grown the array, which I assume spreads the existing 8TB over the new 2TB drive that's been added.
    It's been 7 days now since it started 'Growing the partition'.


    Is this normal? Should it take this long? They are all SATA drives, 2TB 7200RPM, all matching brand etc.
    I just didn't expect it to take this long!

    Sorry to bring this back to the top.
    I have the exact same issue but caused by a different problem.
    PSU went bang and took the motherboard with it.


    I swapped them out and now I see at certain points during boot:
    cannot read interfaces file /etc/network/interfaces
    Then finally it will settle on
    Cannot connect to monit daemon. Did you start it with http support?


    I searched the forums for a while and ran a few fixes I'd found,
    one about removing a persistent routes file. I tried omv-firstaid, but all it would say when trying to configure the network card (which kept showing as eth2 for some reason) was
    OMV failed to execute Rpc (service=network
    There was a little more than that, but I can't remember it now.


    I did a --reinstall install of OMV and searched the forums a little more.


    If I leave the server overnight, all the services that are failing to load (which I assume are all network based) will eventually time out and I will be prompted with the regular login screen.
    It will say the login for the server is 192.168.1.101, however upon logging in, if I run ifconfig it shows only the localhost interface.
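    A minimal sketch of what /etc/network/interfaces could look like for plain DHCP, assuming the NIC really is coming up as eth2 like omv-firstaid reported (substitute the actual interface name):


    Code
    # /etc/network/interfaces - minimal DHCP setup (eth2 is an assumption)
    auto lo
    iface lo inet loopback

    allow-hotplug eth2
    iface eth2 inet dhcp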



    At this point I'm considering installing Debian and then installing OMV on top of it so my partner can use it as a desktop.
    However I have that RAID 5 that I set up within the web GUI... and I really don't fancy losing 8TB worth of data.


    Does anyone have any ideas?
    If I load into maintenance mode, or boot normally (which takes a LONG time to get to the login prompt), and I run omv-firstaid, then as soon as I select the option to configure the network card the grey box goes blank and it kicks me back to the CLI with no errors displayed.

    Lock the topic. Fixed.
    Will post my fix now.



    I logged back into Open Media Vault... and there is the raid... and there is the data.
    Overjoyed.

    Anyone out there? I'm feeling pretty desperate for help. Even tried #debian but they basically told me where to stick it.
    Feel like I may just have to try a bunch of random commands I've found on the net.
    Don't know what they do, but I'm stuck!

    Just an update: I ran dmesg yesterday, which pulled up errors mentioning the RAID from the logs.


    It mentioned the RAID had failed.
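    Roughly the sort of thing that pulls those lines out, in case anyone wants to check their own box (device names here are just my guesses):


    Code
    # kernel messages mentioning the md array or its member disks
    dmesg | grep -i -E 'md127|raid|sd[a-f]'

    # what the RAID superblock on each member disk says about itself
    mdadm --examine /dev/sd[a-f]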




    Edit:
    I get the feeling that my dogs may have run into the side of the server and knocked something loose?
    I powered down and checked all the connectors, then powered back up.

    Hi Guys
    I've had my RAID 5 setup running for around a week now; everything's been running as sweet as a nut and I've had no issues.
    Until tonight.


    First I noticed my RasPi wouldn't connect to any of the films.
    Had a look in OMV; the shares are there. That's weird.


    So I decided to reboot the machine.
    Tried to mount the shares on my Linux laptop: permission errors... weird.


    Deleted the shares in NFS and SMB.
    Clicked the share in Shared Folders; that UUID looks wrong. Checked RAID Management... gone.
    Checked the drives: all drives are showing, however RAID Management is totally blank.


    I saw this command to show your md devices, and sure enough, there it is as md127.

    Code
    root@OMV:/dev# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdd[0](S) sdc[4](S) sdb[3](S) sdf[2](S) sde[1](S)
          9767567800 blocks super 1.2
    
    unused devices: <none>
    root@OMV:/dev#


    Inactive? What? Why, and how can I get it back? Otherwise I've just lost 4TB of data. Nothing hugely important, but not stuff I want to download again!


    Can anyone help?
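    For anyone landing here with the same symptom, this is the general shape of the reassembly steps I've seen suggested; I can't promise it's right for every case, and the --examine output should be checked before forcing anything (device names are taken from my mdstat output above):


    Code
    # check what the superblock on each member disk says
    mdadm --examine /dev/sd[b-f]

    # stop the half-assembled, inactive array
    mdadm --stop /dev/md127

    # try a normal scan-based reassembly first
    mdadm --assemble --scan

    # if that refuses because of an event-count mismatch, forcing the known
    # members back together is the usual (riskier) next step
    mdadm --assemble --force /dev/md127 /dev/sd[b-f]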

    Thanks for the update.
    I've bought an additional 3 drives to make a total of 5. I'll bear that in mind; I wasn't aware of the stress RAID puts on drives.


    I will create a 3-drive RAID 5.
    Copy all my data from my existing two drives onto the new RAID, then unmount those drives and grow the RAID over them.


    I'm thinking of writing on each drive


    Disk 1,
    Disk 2,
    Disk 3,
    Disk 4,
    Disk 5,


    Same for the SATA cables. Can I then just label them this way in OMV to match, so that if a drive fails OMV will say "Drive 3 has failed" and I can match it with what I've written on the disks/cables?
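    As far as I can tell, OMV identifies drives by device name and serial number rather than by a label you write on the caddy, so the usual trick seems to be to note each drive's serial and write that on the disk and cable. A quick sketch of how to see which serial is on which device (standard tools; exact paths and output may vary):


    Code
    # persistent names including model and serial for each whole disk
    ls -l /dev/disk/by-id/ | grep -v part

    # or query one drive directly for its serial number
    smartctl -i /dev/sda | grep -i serial
    hdparm -I /dev/sda | grep -i serial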


    Sorry for the newb questions. First attempt at a RAID 5!

    Afternoon All
    Bit of a newbie to RAID and I'm looking to move my data from my current setup (2x2TB Drives) to a RAID 5 setup consisting of 5x2TB Drives.
    Both my current 2TB drives are full of data.


    So I figured my best option was to dismount the current drives (two standalone drives) and put them to one side.
    Create a new RAID 5 array from 3x2TB drives and copy the data to this new array.


    Then wipe the existing 2TB drives, put them into the server, and grow the array over them.


    Is this advised? I don't really have the funds at the moment to buy 5 new drives on top of the additional 2 drives.


    How difficult is it to grow the RAID? Do I run a high risk of wiping the array by doing so?
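    From what I've read, the grow itself boils down to a few mdadm steps followed by a filesystem resize, and it shouldn't wipe anything as long as the reshape isn't interrupted. A rough sketch, assuming the array is /dev/md0 with ext4 on it and the two old drives come back as /dev/sdd and /dev/sde (all of those names are assumptions):


    Code
    # add the two wiped drives to the existing 3-disk array
    mdadm --add /dev/md0 /dev/sdd /dev/sde

    # reshape the array to use all 5 disks (this is the long-running part)
    mdadm --grow /dev/md0 --raid-devices=5

    # watch progress until the reshape finishes
    cat /proc/mdstat

    # then grow the ext4 filesystem into the new space
    resize2fs /dev/md0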


    Apologies for all the questions; this will be my first jaunt into RAID 5 and my first time growing a RAID (if that will even work!)