Posts by IamTed

    Hi everyone,
    I made the jump from 0.5 to 4.1.0. One thing I have found is that 0.5 named the NIC eth0, but 4.1.0 now calls it "enp26s0". Is there somewhere I can change the name?


    Under Network, Interfaces, when I choose to edit the NIC, the General Settings tab shows the name "enp26s0" in a light font, and I cannot remove it or put anything else in the field to replace it.
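
    From what I have read, the new name comes from systemd's predictable interface naming, so the GUI field is just showing what the kernel reports. One way back to the old eth0 style, as far as I can tell (untested on my box, and it assumes a standard Debian/GRUB install), is to turn the predictable names off on the kernel command line:

    Code
    # /etc/default/grub -- add net.ifnames=0 (and biosdevname=0) to the existing line
    GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

    # regenerate the GRUB config and reboot
    update-grub

    After the reboot the NIC should come back as eth0, and the interface entry would need to be recreated under that name in the GUI (or /etc/network/interfaces).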


    Thanks,
    Ted

    Yup, it looks like the problem was that the text editor was not saving the file as unicode. Problem solved.
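
    For anyone who hits the same thing, a quick sanity check (just a sketch) is to let the shell show what is actually in the file:

    Code
    # report the file's encoding (should be plain ASCII or UTF-8 without a BOM)
    file /etc/default/openmediavault

    # show hidden characters -- smart quotes or CR line endings will stand out
    grep -n 'SPACEUSAGE' /etc/default/openmediavault | cat -A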

    I will try that. In the past, when I was running 0.5, I just pasted OMV_MONIT_SERVICE_FILESYSTEM_SPACEUSAGE="95" into the file and had no issues. (I actually kept a bookmark to the bug tracker post so I could find it when I reloaded the OMV software.)
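
    In case it helps anyone following along, here is roughly what I plan to try this time; the omv-mkconf and restart steps are my assumption for 4.x, so treat this as a sketch:

    Code
    # append the override (the line is not in the 4.x file by default)
    echo 'OMV_MONIT_SERVICE_FILESYSTEM_SPACEUSAGE="95"' >> /etc/default/openmediavault

    # regenerate the monit config and restart it
    omv-mkconf monit
    systemctl restart monit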

    It was a bad day. I totally missed that the line wasn't in the file originally and that I had to add it. The problem is, when I added it, it started giving me an Error #0. Here are the details:


    Here is a link to the syslog. https://www.dropbox.com/s/2h00hl9d0satk9m/syslog.txt?dl=0


    Here is the post that I put up about the Error #0: https://forum.openmediavault.o…?postID=169217#post169217
    Once I removed the line OMV_MONIT_SERVICE_FILESYSTEM_SPACEUSAGE="99" from the openmediavault file, the Error #0 went away.


    So, has there been a change in the way that line works in version 4? I copied and pasted the line from the bug tracker: https://mantisbt.openmediavault.org/view.php?id=946 Is there a new way to implement the change to the monitoring?
    Thanks!

    Yup, I edited the /etc/default/openmediavault file to stop the error about the storage drives being too full. Here is the post about that: https://forum.openmediavault.o…?postID=168809#post168809


    I didn't put two and two together and notice that the Error #0 started at that time. I tried to modify the line and that didn't work, but when I removed it, the Error #0 went away.
    Thanks for the heads up.

    So, last weekend, I bit the bullet and upgraded from OMV 0.5 to 4.0.19. I did the tried, tested, and true method of pulling all of my data drives and replacing my system drive with an old spare 40 GB HDD. Everything seemed to go smoothly. After the install, I put back in my data drives and started to set OMV up.


    One of those data drives was an unused 2 TB drive that wasn’t part of my RAID array. I formatted it and left it. Yesterday, I copied a bunch of stuff to it, without a problem. Today, I went to create a Shared Folder on the drive and got the following error:


    Now, no matter what I try to do, I get an Error #0. I’m not sure if the rest of the info under the Error #0 is the same, as there is a lot of it. I have been careful to only make changes to the system through the web GUI, with the exception of adding the entry into the /etc/default/openmediavault file to stop the system from complaining about the drives being over 85% full.


    I’m not sure what to do now. Here is my syslog file.
    Thanks!
    Ted



    https://www.dropbox.com/s/2h00hl9d0satk9m/syslog.txt?dl=0




    Sent from my iPad using Tapatalk


    Hi,
    In OMV 0.5, you could change the /etc/default/openmediavault file to change the percentage at which the system log would complain about how full your drives were.


    Yesterday, I installed OMV 4.0.19 and I am getting the error again, so I tried to make the changes. The problem is, the OMV_MONIT_SERVICE_FILESYSTEM_SPACEUSAGE="95" entry isn't in the /etc/default/openmediavault file anymore.


    Where do I make the changes now?
    Thanks,
    Ted

    Ah crap. I was afraid of that. OK, now I have to schedule time to take TedFlix offline with minimal complaints from the peanut gallery here.



    Sent from my iPad using Tapatalk

    I just tried to update Plex and I got this:


    It is running 0.5.60. Any ideas?
    Thanks,
    Ted



    Sent from my iPad using Tapatalk

    Ok, what a wild ride so far.... I ran SpinRite on all the drives and sdb is toast. So, I focused on the five original disks. I did a

    Code
    mdadm --assemble --run --force /dev/md127 /dev/sd[b-f]

    which seemed to work. (Since I had booted up without the old sdb, all the drive letters got reassigned.) I tried mdadm --detail /dev/md127 and got this:


    I shut the box down to add another drive in to replace the toasted sdb, and when it booted, it hung for a while at "Checking Quotas". When it finally finished that and booted up, I got this:


    So far, so good. Through the web GUI, I chose the Recover option and added the new sdb to the array. The array started to do the rebuild and I went to bed.
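
    For anyone curious, a couple of commands that are handy for watching a rebuild like this (md127 is just the array name on my box):

    Code
    # overall rebuild progress and estimated finish time
    cat /proc/mdstat

    # refresh the view every five seconds
    watch -n 5 cat /proc/mdstat

    # per-member state for the array
    mdadm --detail /dev/md127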


    This morning, I woke to find that the array was listed as clean, FAILED, and I could not access any files on it. It looks like it ran up against more issues with drive sdd. Here is the mdadm --detail /dev/md127 output:


    I then added another drive to the array, thinking that it would rebuild on to that, but no dice. At that point, I realized that I was back at square one, so I rebooted the box, and issued this:

    Code
    mdadm --assemble --run --force /dev/md127 /dev/sd[b-f]
    mdadm: forcing event count in /dev/sdd(3) from 1902723 upto 1903306
    mdadm: clearing FAULTY flag for device 2 in /dev/md127 for /dev/sdd
    mdadm: /dev/md127 has been started with 4 drives (out of 5) and 1 spare.


    I was able to mount the array and access the files. Obviously, there is something wrong with the sdd drive, as it seems to crap out during the rebuilding process. I am looking for the best option to go with from here. I realize that I am on the edge of the cliff with my toes dangling over. If one more drive fails, I'm pooched. I am sitting here with an array that is listed as clean, degraded, recovering, but craps out during the rebuild due to a drive that is cranky. I realize that there could be a very small area on sdd that is causing problems. Here is what I've come up with for ideas:

    • Run SpinRite again on the sdd drive to see if it can access the trouble area. If I do this at Level 3 or 4, it was saying that it would take about two weeks to complete, if it worked. Then add the sdd drive back to the array and rebuild to a spare.
    • Take the sdd drive and use Clonezilla to copy it to a spare 2 TB drive that I have, using the -rescue switch. Then replace the old sdd drive with the cloned one and rebuild with a second spare drive (see the ddrescue sketch just below this list for roughly the same idea). What I don't know is if the rebuild will handle the missing sectors differently than it did when it was getting an I/O error back from the old sdd drive during the prior rebuild attempts.
    • Say screw it, run the array on four out of five drives so it is clean and degraded, copy everything off the array, and go from there with either Greyhole or SnapRAID.
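
    For the second option, GNU ddrescue should do the same job as Clonezilla's rescue mode. A rough sketch, assuming the spare shows up as /dev/sdX and both drives are out of the array while it runs:

    Code
    # first pass: copy everything readable, skip the slow scraping of bad areas
    ddrescue -f -n /dev/sdd /dev/sdX sdd.map

    # second pass: go back and retry the bad areas a few times, using the same map file
    ddrescue -f -r3 /dev/sdd /dev/sdX sdd.map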

    Opinions?

    Well, I decided to run SpinRite on all the drives and sure enough, sdb and sdd had read issues. SpinRite seems to have corrected the issues on sdd, but sdb seems to be a little more of a challenge. The array was originally sdc, sdd, sde, sdf, and sdg; then sdd dropped out, and the whole thing crapped out when it was trying to rebuild onto sdb to replace sdd. Can I try to force assemble it with sdc-sdg? There were no changes to the content of the array while this was going on, so I hope that helps the odds. Or would it be better to wait and see if SpinRite can bring sdb back from the dead?
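
    If I do go the force-assemble route, my rough plan (sketch only, and I would double-check the device letters first) would be:

    Code
    # stop whatever half-assembled array is left
    mdadm --stop /dev/md127

    # compare event counts on the five original members before forcing anything
    mdadm --examine /dev/sd[c-g] | grep -E '/dev/sd|Events'

    # force the assemble with the original five drives
    mdadm --assemble --run --force /dev/md127 /dev/sd[c-g]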

    Looking through the syslog, I've come across this from when it was rebuilding:


    What that tells me is that sdd put its fingers in its ears and stopped talking, possibly due to a bad sector. I know that when I went to bed at midnight the array was at 80% rebuilt, and this happened at 3:20, so it was pretty close to finishing the rebuild. If I run SpinRite on sdd and am able to get the drive reading again in that area, would I be able to try the rebuild again?
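
    Before kicking off another rebuild, I figure it would also be worth checking what the drive itself says; a quick sketch, assuming smartmontools is installed and the drive is still sdd:

    Code
    # overall health plus reallocated/pending sector counts
    smartctl -a /dev/sdd

    # run a long self-test and check the result afterwards
    smartctl -t long /dev/sdd
    smartctl -l selftest /dev/sdd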


    I realize that RAID isn't a form of backup, which is why I have been eyeing SnapRAID. The problem is that this collection has been growing slowly over the years and has sort of grown out of control size-wise. If I could have found someone who was willing to loan me about 8 TB of drives to move this stuff to while I switched to SnapRAID, that would have been great, but it never happened.... :(