Posts by Nicoloks

    Just wanted to loop back on this in case it helps someone else. Ended up getting a brand new 9211-8i controller off eBay for $35 USD, then made a standard FreeDOS USB stick using Rufus and added the following files, based off various guides I read;

    • sas2flash.efi (the LSI flasher utility)
    • 2118it.bin and 2118ir.bin (the P20 IT and IR Mode firmware images)
    • mptsas2.rom (the controller BIOS)
    With all this on a USB stick in the back of my NAS, the new 9211-8i controller in place and all disks disconnected, I booted into my motherboard's BIOS and selected the option to boot into the UEFI Shell. Once loaded into this command line shell I typed the following (minus what is in brackets);

    • mount fs0: (to mount the USB as a file system, the file system number might be different for you so you may need to cycle through)
    • fs0: (change to USB)
    • dir (verify that USB contents are readable)

    The following commands interface with your disk controller and can potentially brick it. By continuing you acknowledge that all risk in doing so sits with you.

    • sas2flash.efi -listall (verify your 9211-8i controller can be seen)
    • sas2flash.efi -o -e 6 (erases the 9211-8i flash, firmware and BIOS included; do NOT reboot once this has completed)
    • sas2flash.efi -o -f 2118it.bin -b mptsas2.rom (writes the latest P20 IT Mode firmware to the 9211-8i controller, swap out the 2118it.bin for 2118ir.bin in the command line argument for IR Mode. I don't want any RAID functionality at the controller level here, so I went with IT Mode so there is nothing getting in between OMV and the SATA disks)
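    Before rebooting into the OS I also double checked the flash had taken (just a sanity check of my own; the adapter should now report the P20 firmware version);

    • sas2flash.efi -listall (confirm the controller still enumerates)
    • sas2flash.efi -list (displays adapter details, including firmware and BIOS versions)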


    After this I just put my OMV NAS back together and everything just worked. I have been thrashing it with Samba file transfers and am very happy to say that I am once again pretty much saturating my 1Gbps network, so at this stage it is looking like a failing disk controller was causing the disk dropouts and massively slowed file transfer speeds.


    Hope this helps someone.


    Absolutely. I have SMART monitoring my disks, though I'm not sure it can be used to probe the disk controller itself. Found an eBay seller selling M1015 units new for around $50, so I've just ordered myself one. Once I get it, at least I will have the parts to start swapping things around to see what makes a difference. The throughput is really starting to irritate me.
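    For the disks themselves, the sort of check I'm running looks like this (device name is just an example);

    Code
    # overall health, SMART attributes and error log for a single disk
    sudo smartctl -a /dev/sda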

    Hi All,


    I have a 10 x 4TB SnapRAID config, with the 8 data drives in a single UnionFS coming off an IBM ServeRAID M1015 controller in "IT Mode" and 2 drives connected via USB 3.1 as parity. This has been working well for me for some time and transfer rates have always managed to saturate my 1Gbps network.


    Last week I started having issues with file systems being intermittently unavailable, followed by all the devices attached via my M1015 dropping out. Seemed a little suspect to me, so I shut down my NAS fully for around an hour and then booted it back up. Everything seems to be functioning, however I have noticed that transfer speeds are now massively decreased (locked at 3.5MiB/s) and it doesn't seem to matter what protocol I use (I have tried SMB, FTP and SFTP).


    My M1015 is several years old now and I am wondering if it is on the way out. Are there any tools I can use to probe/test the hardware, or specific log entries I should be looking at, that might give me some insight? Good news is these controllers are still plentiful and cheap; bad news is I have completely forgotten how I managed to flash the firmware. Anyway, first things first I guess.
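    In the meantime, the only checks I know to run myself are the kernel log and a raw per-disk read test, something like this (device name is just an example);

    Code
    # look for controller/link resets or I/O errors around the time of the dropouts
    dmesg | grep -iE 'mpt2sas|sas|ata|error'
    # raw sequential read speed of one disk, taking network and protocols out of the picture
    sudo hdparm -t /dev/sda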

    Thanks for the feedback. Turns out I need to measure twice and cut once. I looked at the I/O panel of my OMV box prior to going down the USB route and completely missed an e-SATA port. Double whammy: after actually getting the motherboard model number and reading the manual, I found it also has 2 x USB 3.1 ports. For some reason I thought USB3+ only came out around 2015; turns out I was way off the mark.


    https://www.asrock.com/mb/Intel/H61M-ITX/index.asp

    Hi All,


    I have a Silverstone DS380 8 bay case running 8 x 4TB drives using an old Intel i5 2500k based system and a firmware modded IBM M1015 disk controller. 7 of these drives are merged using UnionFS configured as data and content drives for SnapRaid with the 8th drive allocated as content and parity. Ideally I want more than one parity drive, however I also really need more storage space and I don't have the cash to replace the drives with something bigger. Everything is backed up to the cloud and currently I am deleting large files I don't have immediate need of to reclaim space (knowing I can restore from the cloud if need be).


    I do have a couple of spare 4TB drives though, and I was thinking I might get a basic 2 bay external hard drive enclosure to put them in. I have no free expansion slots on the motherboard of my OMV box, so I'd have to attach this external enclosure via USB2. To maximise data access performance I was thinking I'd add these two disks in the external enclosure as parity disks. Once fully synced, I'd then reconfigure the internal parity disk to be part of the UnionFS and add it into SnapRaid as a data/content disk.
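    In snapraid.conf terms, the end state I have in mind is roughly this (paths are placeholders, not my actual mount points);

    Code
    # two parity disks living in the USB enclosure
    parity /srv/usb-disk1/snapraid.parity
    2-parity /srv/usb-disk2/snapraid.2-parity
    # the former internal parity disk re-purposed as a data/content disk
    content /srv/data-disk8/snapraid.content
    data d8 /srv/data-disk8/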


    Does this sound like an ok plan given my situation? Are there any pitfalls of using external enclosures for parity disks?


    **Edit**

    I see USB is specifically called out as not recommended;


    https://www.snapraid.it/faq#hardware


    Not sure what option I have at present though. I'm the only user of this NAS, so provided the only impact of using USB is performance I think I'll just have to wear it. Ended up buying one of these as the seller had a refurb unit going cheap;


    https://www.amazon.com/Archgon…h-Enclosure/dp/B00LO3LUF6

    Ok, so "restoring" SnapRaid from the command line seems to have been a breeze. I just dropped the snapraid.conf file saved on anyone of my drives in over the running config in /etc/snapraid.conf . I ran various checks from the command line and the configuration looks be fine.


    The issue I have now is that even after doing this and rebooting my NAS, no drives are showing up in the Drives tab of the SnapRaid plugin. I can't see this covered anywhere in the OMV SnapRaid guide, so I'm just unsure whether I need to do something in particular for the OMV plugin to ingest these drives so they are shown, or whether I need to just add them manually again using the same data/content/parity configuration and settings as are already in use.


    Anyone have any guidance here?


    *Edit*


    Not sure if it has any bearing, however I can use any menu item off the tools menu drop down on the Drives tab and it works. This seems to be purely an issue with the OMV SnapRaid plugin, almost as if it does not pull in the snapraid disk configuration from snapraid.conf. Would this data be stored anywhere else?
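    My guess is the plugin keeps its own copy of the disk configuration in the OMV config database rather than reading snapraid.conf directly, which should be easy enough to check (just a hunch on my part);

    Code
    # look for any snapraid entries in the OMV configuration database
    grep -i snapraid /etc/openmediavault/config.xml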

    So I seem to have made progress here. 100% the problem exists between the keyboard and the chair.


    I revisited wipefs after reading the man page for blkid, which states that "ID_FS_AMBIVALENT" is returned when more than one filesystem is detected, which is exactly what is shown in the output above. I was super confused, as I have never used the ZFS plugin for OMV, so why did some disks have this ZFS filesystem while others didn't? The penny dropped: waaaaaay back I used to use FreeNAS, which is ZFS based. When I created my filesystems in OMV 3 it mustn't have cleared all the old disk signatures. No idea how this wasn't an issue under Jessie, as blkid/udev behavior according to the man page should have been the same.


    Anyway, I went back to wipefs and deleted all signatures (ZFS had left 13 signatures per partition) on each disk, making sure to take a backup;


    Code
    # wipe every filesystem signature on the partition; --backup saves each erased signature to ~/wipefs-<device>-<offset>.bak
    sudo wipefs --all --force --backup /dev/sdX1


    Then I used dd to restore just the ext4 partition signature;


    Code
    # restore the saved ext4 signature (the superblock magic at offset 0x438) byte-for-byte from the wipefs backup
    dd if=~/wipefs-sdX1-0x00000438.bak of=/dev/sdX1 seek=$((0x00000438)) bs=1


    Within seconds of doing this OMV started mounting my partitions and following a reboot the unionFS was showing all branches as present. Most importantly I can access all my data again.


    My SnapRAID config remains trashed. I've checked each partition I had under SnapRAID and the old snapraid.conf/content/parity files are all still there, however when I go into the SnapRAID GUI none of this seems to be picked up. I'll need to read up on whether I should delete these snapraid files or if I can safely import them somehow. That is tomorrow's job.

    All the buttons on the Filesystems page are greyed out for each entry, with the exception of /dev/sdj1.


    I think I am getting closer to what is causing this; how to fix it will be another thing. As above, the output from wipefs shows that all the drives not being mounted are reported as ZFS members.


    My fstab file shows that it is referencing all sources by label. These labels are automatically generated by udev using blkid as far as I can tell. Specifically /lib/udev/rules.d/60-persistent-storage.rules here;


    Code
    # by-label/by-uuid links (filesystem metadata)
    ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
    ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"


    So looking at one of the drives that is mounting, you can see the ID_FS_LABEL_ENC value required for labels is being populated;
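    On a healthy partition the output looks roughly like this (from memory, so the device name, UUID and label here are illustrative only);

    Code
    root@TryanNAS:~# blkid -o udev -p /dev/sdc1
    ID_FS_LABEL=4TB2
    ID_FS_LABEL_ENC=4TB2
    ID_FS_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    ID_FS_UUID_ENC=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    ID_FS_VERSION=1.0
    ID_FS_TYPE=ext4
    ID_FS_USAGE=filesystem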



    Using the same query on any of the partitions not mounting I get this;


    Code
    root@TryanNAS:~# blkid -o udev -p /dev/sdd1
    ID_FS_AMBIVALENT=filesystem:ext4:1.0 filesystem:zfs_member:5000
    root@TryanNAS:~#


    What's weird (to me at least) is that if I query the partition label via other methods it returns what I expect;

    Code
    root@TryanNAS:~# e2label /dev/sdd1
    4TB1
    root@TryanNAS:~#


    Anyone have any idea why this might be or how I might fix it?

    Not convinced this is an OMV issue, seems to be some sort of disk labeling issue perhaps?


    Wipefs seems to be reporting my drives that aren't mounting as ZFS Members. I've never installed ZFS, would this be right?
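    From memory, wipefs on one of the affected partitions showed something like this (offsets and UUIDs indicative only);

    Code
    root@TryanNAS:~# wipefs /dev/sdd1
    offset               type
    ----------------------------------------------------------------
    0x438                ext4   [filesystem]
                         LABEL: 4TB1
                         UUID:  xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

    0x3a38109c000        zfs_member   [filesystem]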



    Was absolutely expecting a reboot at some stage, I just didn't want to go ahead if what I was seeing was an indication of a bigger issue, as I've only read of a few people having filesystem issues through the upgrade, with no solid solution found in any of their cases. I have now rebooted and no longer have access to my data.


    I do notice though that syslog is full of messages about mount point errors. I'll start digging into that.
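    Starting with something like this to pull the relevant lines out;

    Code
    # most recent mount related messages
    grep -i mount /var/log/syslog | tail -n 50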

    I have now, and even tried different browsers; no difference. The issue above was not the only one I experienced after the upgrade, though I think it is likely the root of them all. I should have two filesystems: one called NASDATA, comprised of 8 x 4TB drives in SnapRaid using UnionFS, and one called NASSYSTEM, comprised of 2 x 500GB drives in software RAID1.


    The other issue I had in the upgrade was that ClamAV will not start, however I wonder if this is because of the underlying filesystem issues. I also had an issue with VirtualBox showing as uninstalled (just like SnapRaid) after the upgrade; I had VirtualBox configured to use the NASSYSTEM filesystem exclusively. Fail2ban also would not start post upgrade, however an uninstall and re-install seems to have fixed that.


    I'm only really used to dealing with OMV via the GUI, so I am at a complete loss as to where to go from here.


    I do have everything on my NASDATA filesystem saved in my Business GDrive account, and there is nothing on the NASSYSTEM filesystem that can't be rebuilt in time. My absolute preference though would be to get this all up and running again as restoring 25TB+ from Gdrive is going to take a month or more on my Internet connection.


    Not sure it is required (have seen it requested in other threads), but I have attached a status report and my config.xml file if this helps zero in on anything.


    *EDIT*


    Should also point out I have not rebooted since the upgrade as I was a bit worried I might lose everything.


    Hi All,


    I've been putting off my OMV 3.x -> 4.x upgrade for a while, but decided to give it a shot a few hours ago. I uninstalled all the 3.x plugins that had no 4.x counterpart and then rolled through with omv-update.


    All seemed to go pretty well from a noob's point of view. However, I found that the SnapRaid plugin was no longer installed, so I went ahead and installed it. My previous config for SnapRaid was 8 x 4TB drives (7 data, 1 parity) presented as a single filesystem using UnionFS. I also have 2 x 500GB drives I use for VirtualBox and a single 30GB SSD for the OMV install.


    Looking at disks everything seems to be there;


    Disks.jpg


    When I get to Filesystems, however, all but two of my SnapRaid devices are showing offline, with the remaining two disks seemingly having lost their SnapRaid labels. My UnionFS is showing online and it appears I can still access it via Samba, though I haven't used it for fear of corruption;


    FileSystems.jpg


    If I look at the UnionFS config, it is only showing 1 disk;

    Union.jpg


    When I go into the SnapRaid gui there are no drives shown, and when I click add drive I get this;

    snapraid.jpg


    I'm quite a novice at this, so would really appreciate it if anyone could give me some pointers on what to do next to get my config back online properly.

    Hi All,


    I've been running my NAS on an ASRock Q1900-ITX based system for years, however the mainboard has today called it quits. Ideally I'd replace it with a new low voltage SoC mainboard, however I do have an Intel i5-2500k on an ASRock H61M-ITX sitting around doing nothing, so I will probably go that way.




    My system config is as follows;


    OMV 3.0.99
    1 x 32GB SSD boot device
    2 x 512GB SSHD in SW RAID 1 (for Virtual Machines)
    8 x 4TB HDD using snapraid and unionfs (using cross flashed 9240)


    Given the dissimilar hardware, are there any particular steps I need to follow to swap out the mainboard and have OMV recognize all the existing file systems?

    Hi All,


    Currently the performance statistics graphs cover disk capacity/usage. Just wondering how hard it would be to expand these to include things like latency, queue depth, read/write I/O, HDD temperature, etc.? I imagine you could use something like iostat to generate most of the data, perhaps a SMART poll for temperature?
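    A rough sketch of where the raw numbers could come from (tool choice is just my assumption, device name is an example);

    Code
    # per-device latency, queue size and read/write throughput, sampled every 5 seconds
    iostat -dxk 5
    # drive temperature via a SMART attribute poll
    sudo smartctl -A /dev/sda | grep -i temperature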

    Hi All,


    Have been using FreeNAS for the past few years and have decided to give OMV a try. I have a few questions I'm hoping more seasoned OMV users could answer;


    1. I am wanting to install OMV to a USB flash drive as I want to use all spinning media for data only (this worked well with FreeNAS). For this I understand I'll need a plugin from the OMV 3 line for memory-based filesystems. My question is: how stable/reliable is the OMV 3 branch at the moment?


    2. For my main array I have 8 x 4TB disks. Would I be better off having 2 x RAID 5 arrays of 4 disks which are then striped (RAID 50), or a single 8 disk RAID 6? I'm thinking both from a performance perspective as well as ease of recovery should one or more disks fail. Coming from FreeNAS, my NAS has 16GB of memory if allocating extra here helps with RAID management.


    Thanks all!

