Posts by Xandareth

    Hey guys,


    I've been running a RAID6 + 1 spare for quite some time now and everything was running fine.
    The other day I went to install a new disk and expand the array. Everything worked as per normal until it hit about 10%, then my system hung completely.


    The system wasn't booting into the OS, and once I plugged a screen in I saw:


    Main: recovering journal
    task md0_raid6:404 blocked for more than 120 seconds.


    To me it seemed like it must have been a dodgy disk, so I removed it and the OS booted. However, now my system isn't showing the RAID array.
    I did some investigating and found this:


    Syslog says:
    md0: Reshape will continue
    md0: cannot start dirty degraded array


    mdadm --examine says:


    Raid Level : raid6
    Raid Devices : 7
    Device Role : Active device 4
    Array State : AAAAAAA ('A' == active, '.' == missing)


    as well as:


    Reshape pos'n : 1748454400 (1667.46 GiB 1790.42 GB)
    Delta Devices : 1 (6->7)



    So it looks like my RAID array isn't showing up because it was cut off mid-reshape and is waiting on the new disk so it can continue, but when I plug the disk back in my system hangs again at 'recovering journal'.


    Any ideas on what I can do to stop this reconstruction and make it recover the array?
    Something in the startup logs said "Consider --force", but I can't find it in the log viewer.

    Right click -> Properties only shows about a third of the data size that the NAS originally had. Is there any way to resume the rebuild?
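    Edit: for anyone searching later, this is roughly what I'm planning to try from the console. It's only a sketch: the device names are placeholders, and as I understand it a forced assembly of a dirty, degraded array can cost data, so it's worth running mdadm --examine on every member first.

```shell
# Placeholder device names - substitute the real array members.
# Stop whatever half-assembled state the kernel is holding:
mdadm --stop /dev/md0

# Force assembly; mdadm reads the reshape position from the
# superblocks and tries to continue the grow from there:
mdadm --assemble --force /dev/md0 /dev/sd[b-h]

# If the grow was started with a backup file, pass it along:
#   mdadm --assemble --force --backup-file=/root/md0.backup /dev/md0 /dev/sd[b-h]
```

    Watching cat /proc/mdstat afterwards should show whether the reshape picks up again.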

    Hey guys!
    So I've come across a little problem and I could do with some help.


    I'm currently running a RAID6 array and one of my drives failed. I took it back, since it was still covered by warranty, and everything was fine. It turns out, though, that while checking which disk had the problem, I didn't reinsert it properly. So I stuck it back in, wiped it and let the RAID rebuild itself. But then I must have unintentionally run a SMART test, because I then got this error via email:


    The following warning/error was logged by the smartd daemon:


    Device: /dev/disk/by-id/wwn-0x50014ee6aedf4696 [SAT], 48 Currently unreadable (pending) sectors


    and now the raid menu is showing this:


    Raid Devices : 6
    Total Devices : 5


    State : clean, FAILED
    Active Devices : 3
    Working Devices : 4
    Failed Devices : 1
    Spare Devices : 1


    Number Major Minor RaidDevice State
    0 0 0 0 removed
    5 8 48 1 active sync /dev/sdd
    2 8 32 2 active sync /dev/sdc
    3 0 0 3 removed
    6 8 80 4 active sync /dev/sdf
    5 0 0 5 removed


    7 8 64 - faulty spare /dev/sde
    8 8 16 - spare /dev/sdb


    Any help with how to proceed would be so incredibly appreciated right now.
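    Edit: from what I've read, RAID6 only survives two missing members, and that output shows three slots as 'removed', so the array can't start as it is. Before trying anything destructive I'm going to compare the event counters on every disk; the commands below are just a sketch, with device names guessed from the table above.

```shell
# Compare event counters across all members (device names are a guess):
mdadm --examine /dev/sd[b-f] | grep -E '^/dev|Events'

# If the counters are close together, a forced assembly may bring
# the array up in degraded mode:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[b-f]
```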

    Since it's not working under two browsers, and is working under 1... this to me suggests a cache issue. Open Chrome or Firefox in "Privacy" (or Incognito) mode, and try logging in to your webUI. If it works, you know it's a cache issue, and clearing…


    You were right, this fixed the issue on Firefox, but I don't actually think it was the cache.
    I cleared the cache in both Firefox and Chrome, but it was still giving me a hard time, even though it was working in incognito on Firefox. I deleted the cookies for my OMV IP in Firefox and it's working fine now.


    Thanks for the help! Much appreciated.

    Hey guys!
    I'm having a mild issue and was wondering if someone could give me a hand. I'm having trouble with the WebUI login screen.
    Under Chrome, although the page looks to be completely loaded, it constantly reloads itself (giving the appearance of flashing) and has the language box highlighted in red. Same deal with Firefox, except the login window itself doesn't appear to flash.


    It only seems to work fine under IE11, though when I went to install an update, nothing showed in the status window.

    Hey guys!
    When running the upgrade script, I noticed that this error had come up:


    Code
    [....] Starting web server: apache2Syntax error on line 15 of /etc/apache2/openmediavault-webgui.d/default.conf:
    Wrapper /var/www/openmediavault/php-fcgi cannot be accessed: (2)No such file or directory
    Action 'start' failed.
    The Apache error log may have more information.
     failed!
    invoke-rc.d: initscript apache2, action "start" failed.


    and now every time I try to access OMV via browser I get:


    400 Bad Request
    Request Header Or Cookie Too Large


    Any help on fixing this would be greatly appreciated :)
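    Edit: in case it helps anyone else, my understanding is that the 400 error is the browser side of the problem (an oversized cookie for the OMV host, which deleting that site's cookies should fix), while the missing php-fcgi wrapper suggests the openmediavault package never finished configuring. This is a sketch of what I'd try from the console, assuming a standard Debian/OMV install; whether the reinstall regenerates the wrapper is my assumption.

```shell
# Finish any half-done package configuration, then reinstall the
# openmediavault package so its Apache pieces are regenerated:
dpkg --configure -a
apt-get -f install
apt-get install --reinstall openmediavault

# Then restart the web server:
service apache2 restart
```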

    Hey guys!
    I'm currently running an APC UPS over USB and I've been using these driver config settings:

    Code
    driver = usbhid-ups
    port = auto


    However, in the last week or so I keep getting this error when trying to check my logs under the 'Services' tab:



    Code
    Failed to execute command 'upsc ups 2>&1': Error: Driver not connected
    Error #4000: exception 'OMVException' with message 'Failed to execute command 'upsc ups 2>&1': Error: Driver not connected' in /usr/share/openmediavault/engined/rpc/nut.inc:151 Stack trace: 
    #0 [internal function]: OMVRpcServiceNetworkUPSTools->getStats(NULL, Array) 
    #1 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array) 
    #2 /usr/share/php/openmediavault/rpc.inc(62): OMVRpcServiceAbstract->callMethod('getStats', NULL, Array) 
    #3 /usr/sbin/omv-engined(495): OMVRpc::exec('Nut', 'getStats', NULL, Array, 1) 
    #4 {main}


    Any clue as to what is happening?
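    Edit: from what I can tell, 'Driver not connected' means upsd can answer but the usbhid-ups driver itself isn't running. A sketch of what I'd check from the console, assuming the UPS is named 'ups' as in the config above:

```shell
# Is the UPS visible on the USB bus at all?
lsusb

# Restart the NUT driver by hand and watch for errors:
upsdrvctl stop
upsdrvctl start

# If the driver starts cleanly, this should answer again:
upsc ups
```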

    Code
    Package: minidlna
    Status: install ok installed
    Priority: optional
    Section: net


    is what came back


    Edit:
    just in case it's needed, minidlna.conf says this:


    Hey guys!
    I've been having issues with minidlna in that it doesn't seem to start.
    I'll set everything up, save the settings, then apply them through the yellow bar that comes up, but minidlna never appears in the process list. When I go back to the settings page, none of the settings have been saved and the Enabled box is unticked.
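    Edit: for anyone with the same symptom, I'm going to poke the service from the console to see why it dies. A sketch, assuming the default Debian minidlna package and its default log location:

```shell
# Is the package actually installed?
dpkg -l minidlna

# Try starting the daemon by hand and check it stays up:
service minidlna start
ps aux | grep [m]inidlna

# Startup errors usually land in the daemon's own log:
tail -n 50 /var/log/minidlna.log
```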

    Hey!
    So, as the subject suggests, I'm in a bit of a pickle. Here's a quick rundown of what's happened:


    - 4-bay NAS PSU died
    - Took it back under warranty
    - Given a credit note; it was an older model
    - The new model doesn't support the filesystem in the old model.


    On the 4-bay I had about 5 TB of data that I still need, so putting the old drives into the new NAS just won't work. BUT there is a solution.
    This page says that I can actually put the old disks into my OMV server (it being Debian-based) and mount them there. This option would suit me nicely, as I could then work around the data issue.


    I am, however, quite a noob when it comes to Linux.


    So my major question is: is there anything in the instructions on that page that could make OMV kick up a fuss?
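    Edit: for reference, this is the rough shape of what that page describes. Everything here is a sketch: the package names are standard Debian, but the md device number and filesystem are guesses that depend on the old NAS.

```shell
# Tools most consumer NAS arrays need (Linux md RAID, often LVM on top):
apt-get install mdadm lvm2

# With the old disks attached, let mdadm find and assemble the array:
mdadm --assemble --scan
cat /proc/mdstat        # confirm it came up

# Mount it read-only, out of OMV's way (the device name is a guess):
mkdir -p /mnt/oldnas
mount -o ro /dev/md127 /mnt/oldnas
```

    My hope is that mounting read-only, outside the folders OMV manages, shouldn't upset it.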

    It updated fine, but then, it being late and me not really thinking, I installed minidlna and now it won't let me get into the UI.


    I've tried the upgrade removal script but it aborts.
    I'm a bit of a Linux console noob; can anyone throw some advice my way?


    Edit: I got the script to run; I had to edit it and change a few lines. Everything works fine now though :)

    Hey guys, some help would be appreciated :)
    So today I thought I'd check whether my server case and motherboard support hot-swappable HDDs. What I wanted to confirm was that, if I had to put another hard drive in, I wouldn't have to take the system offline.
    So I went in and pulled out a hard drive (it's a front-loading case with bays); nothing shut down and I did a small victory dance. But then I got an email from the NAS:


    "A DegradedArray event had been detected on md device /dev/md0."


    Now I can't seem to fix the issue.
    Under the 'RAID Management' tab it's showing the array as "clean, degraded", and here are the array details:


    Raid Devices : 5
    Total Devices : 4


    State : clean, degraded
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 0


    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 0 0 1 removed
    2 8 32 2 active sync /dev/sdc
    3 8 64 3 active sync /dev/sde
    4 8 80 4 active sync /dev/sdf


    It's showing the drive as 'removed', yet it's coming up under 'Physical Disks'.


    Any ideas?
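    Edit: if anyone lands here with the same output, my understanding is that pulling a live member marks it failed/removed and it doesn't rejoin on its own; it has to be added back by hand. A sketch, with the device name guessed from the table above (slot 1 is missing and /dev/sdd is absent from the member list, so that's my candidate; I'd verify under 'Physical Disks' first):

```shell
# Add the pulled disk back into the array (device name is a guess):
mdadm /dev/md0 --add /dev/sdd

# Watch the resync progress:
cat /proc/mdstat
```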