Posts by tdeg20

    This is my current fstab:

    The only difference was that I added "noatime,nodiratime," to the line starting with UUID to make it "UUID=c900083f-700e-0023-b000-fbb000000e32 / ext4 noatime,nodiratime, errors=remount-ro".



    The error I got was about parsing line 9, which is exactly the section I edited, so reverting it has fixed everything. I wonder if it was some misplaced punctuation on that line?
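
    For reference, that line would normally look something like the below, with the options run together as a single comma-separated field and no trailing comma (I've assumed the dump and pass fields as 0 1, so check them against the original line):

        UUID=c900083f-700e-0023-b000-fbb000000e32 / ext4 noatime,nodiratime,errors=remount-ro 0 1

    The stray space after the comma splits the options field in two, which I suspect is what tripped the parser up.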

    I've just managed to fix it, thank you for replying. :)


    I added the extra parts to fstab as indicated on the flash memory plugin page, which I suspect is what caused the read-only error. Going into recovery mode, editing the boot entry and changing "ro" to "rw" on the line beginning with "linux" unlocked it for editing; I undid the changes and voilà.
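
    For anyone who hits the same thing, the edit in the GRUB screen was roughly the below (the kernel version and UUID are placeholders, so treat this as a sketch rather than the exact entry); changing ro to rw makes the root filesystem come up writable for that one boot:

        # before: root is mounted read-only at boot
        linux /boot/vmlinuz-... root=UUID=... ro quiet
        # after: root is mounted read-write, so fstab can be edited
        linux /boot/vmlinuz-... root=UUID=... rw quiet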


    I didn't realise initially that it wasn't strictly necessary to edit fstab for the flashmemory plugin to work. Lesson learned.

    Apologies for hijacking/resurrecting an old thread and thank you in advance for any help!


    I have just installed the flash memory plugin (system drive on SSD) and edited /etc/fstab exactly as per the instructions, rebooted and have a read-only filesystem.


    I have tried mount -o rw, remount / to no avail. I can log in via SSH quite happily, and also log in directly on the box under my desk. Does it make a difference that I edited fstab with sudo rather than as root (trying to be good and not always SSH in as root!)?
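
    For completeness, I believe the standard form of that command is the below (the options run together with no space after the comma), in case the way I typed it above is part of the problem:

        sudo mount -o remount,rw /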


    Bright ideas always welcome

    David: Indeed! Though my blood pressure couldn't quite take the strain of rebuilding it and trying to salvage data on a regular basis. I will see what happens switching the AUFS mode; I'm not overly bothered by one disk being full and one empty, it just looks strange in Filesystems. At least the parity drive is reassuringly full!

    Sorry to revisit this now it's been resolved. Just wanted to ask a follow-up question. The SnapRaid/AUFS combination is working great, but I'm a bit confused as to why one of the two "data" drives in SnapRaid is almost full and the other almost empty. These are the two branches in the AUFS share. The way I understood it was that AUFS wrote to either branch rather than filling one and then the other. Is this correct, or is something not quite right? Thanks.
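
    From what I've read (so treat this as my assumption rather than gospel), which branch a new file lands on is decided by the aufs create policy: the default tends to keep writing to the first writable branch, while create=mfs picks whichever branch has the most free space. In fstab terms the pool's mount line might look something like this (paths are placeholders):

        none /media/pool aufs br=/media/disk1=rw:/media/disk3=rw,create=mfs 0 0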

    Yeah, it seems to have worked quite well. Allows me to have different access rights for each folder as well. Thank you for all your help on this by the way, been an absolute lifesaver. Hopefully I won't see any more dropped hard drives!

    It's probably a bit messy and not the best way, but I've created shared folders under Storage_Pooled/xxxxxxx, which I can then share separately via samba. Seems to have worked, but I'm sure there is a neater way round it!
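
    For anyone curious, each of those folders just becomes its own Samba share, so the generated entries in smb.conf end up looking roughly like the below (the share name, path and user here are made up for illustration, since OMV writes smb.conf itself):

        [Media]
            path = /media/pool/Storage_Pooled/Media
            read only = no
            valid users = user1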

    Having created a single share in samba, I have mounted it in Windows and it is showing up as 1.79TB, which is correct, so I think I have got both snapraid and aufs working correctly; I just can't work out how to fine-tune it.

    Apologies for coming back to this one...


    I have formatted the 3 drives individually (named 1, 2 and 3 for ease of identification) and mounted the filesystems (formatted ext4) in the Filesystems tab. I've created the snapraid data and content drives (disks 1 and 3) and named disk 2 the parity drive, and snapraid seems fairly happy with that (I think). Sync or check is showing nothing as the drives are all currently empty.
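
    My understanding of the resulting snapraid.conf is roughly the below (the mount paths are placeholders for whatever OMV actually uses, so this is a sketch rather than a copy of the real file):

        # disk 2 holds the parity file
        parity /media/disk2/snapraid.parity
        # content lists are kept on the data disks
        content /media/disk1/snapraid.content
        content /media/disk3/snapraid.content
        # the two protected data disks
        data d1 /media/disk1
        data d2 /media/disk3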


    I'm having a bit of trouble with aufs pooling. I have created shared folders in the ordinary OMV shared folders tab so they show up in the aufs share creation box. I want to create a share of 2 of the drives pooled, with the 3rd as parity (so broadly analogous to the RAID 5 I had before), so branches 1 and 2 of the aufs share are disks 1 and 3 from snapraid above. I don't understand binding the share, though. I created a shared folder on the parity drive called Storage_Pooled and have then used that as the "bind share" in the aufs plugin. Is this right, or should it be on one of my snapraid data drives? Ultimately, I want to create 3 or 4 samba shares from this single pool for different users/purposes, but can't work out how this is going to work. Is anyone able to point me in the right direction please?
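
    To make my mental model concrete (this is just how I picture it, so corrections welcome; the paths are made up), the pool is a separate mount point layered over the two data disks, and the parity disk stays outside it entirely:

        # branch 1 and branch 2 are the two snapraid data disks;
        # the pool itself is a new mount point backed by those branches
        mount -t aufs -o br=/media/disk1=rw:/media/disk3=rw none /media/pool
        # the parity disk (/media/disk2) is not a branch and is never shared out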

    Thanks for the reply. That's good to know. How would the backports kernel help? I don't really understand the differences.


    I have a vague understanding of Snapraid, but don't understand what you mean by a combination of it and aufs.

    Hi,


    I've been using OMV for a while on an old PC I was given, and decided recently to get a "proper" server, so bought an HP ML115 cheap on eBay. Since then, I've upgraded my old bunch of standalone hard drives to a nice shiny trio of WD Red 1TB drives, and decided on RAID5 for this. Since I set this up about 3 weeks ago (if that), I've had 2 complete failures of the array, preceded by a single hard drive dropping out. One of the failures I briefly managed to fix by removing the offending drive and reformatting it under Windows so OMV saw it as a "new drive", as I refused to believe these brand new drives would fail so quickly under such light use. Since then, the array has entirely failed once (cue hours wasted trying to resurrect it), and after a complete reinstall of OMV (including a write-zero on each of the drives to ensure all data was clean), I had an email the other day telling me that the array had degraded more or less immediately following a reboot;



    So I turned it off and left it until this morning, when I received this email;



    Understandably, I'm quite annoyed about this, not because of the loss of data (it's all a backup anyway) but because of the frustration.


    I know this is very broad, but here are a few potential causes I've come up with and what I've already tried:
    - bad SATA cables - I've tried swapping all the cables out for known good ones (that were working perfectly in the old server)
    - bad SATA connectors on the motherboard - I've tried swapping them all around to see what happens. The BIOS sees them all, as does OMV in "Physical Disks" most of the time. Also, no matter where I put the connectors, it always boots and the OS itself is fairly stable.
    - I don't really want to think about it, but I guess it could be bad hard drives. I have read about Reds going bad or being duff right from the off, so I suppose it's possible. I have also tested them using the WD Windows SMART utility (I understand there's more to drive health than SMART, but it's at least an indicator).
    - I don't think it has any effect with a software RAID, but could the built-in TLER be causing the drives to drop out at all? I did some research and concluded probably not, but I haven't been able to test as I can't work out how to change it (there's a rough sketch of checking it just below).
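
    In case it's useful to anyone, I believe TLER on these drives is exposed through SCT Error Recovery Control, which smartctl can read and set (the device name and the 7-second value below are just examples, and I gather the setting doesn't survive a power cycle, so it would need reapplying at boot):

        # read the current SCT ERC (TLER) read/write timeouts
        sudo smartctl -l scterc /dev/sda
        # set both timeouts to 7.0 seconds (values are in tenths of a second)
        sudo smartctl -l scterc,70,70 /dev/sda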


    Has anyone got any suggestions please? I've posted the info from the Support Info plugin below with some personal stuff redacted. Thanks a lot!