Posts by Ralms

    Update:


    It seems to be related to the length of the shared folder's path, or to the fact that it has spaces.
    In this case the path I want to share is: plexmediaserver/Library/Application Support/Plex Media Server/Logs/


    I was able to share the root folder of that drive, and I also managed to add "plexmediaserver/Library/" in NFS, but as soon as I start adding the paths with spaces it throws a JSON error.
    I can't change this path, since that would break Plex.


    Help plz :(


    Thanks.
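    In case it helps anyone hitting the same wall: if the export can be added by hand (assuming it ends up in /etc/exports, as it normally does on a stock setup), exports(5) lets you escape a space in the path as the octal sequence \040. A sketch; the /export prefix here is an assumption, adjust it to wherever OMV actually roots the share:

    ```
    # /etc/exports - a space in the path can be written as \040 (octal escape)
    /export/plexmediaserver/Library/Application\040Support/Plex\040Media\040Server/Logs 192.168.1.0/24(ro,subtree_check,secure)
    ```

    After editing, `exportfs -ra` reloads the export table.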

    I believe the Client IP is set incorrectly.
    If you want to allow just a single host, you would use a /32 mask, so 192.168.1.110/32 allows only the PC at 192.168.1.110 to connect.
    If you want to allow anyone in that IP range, from 192.168.1.1 to 192.168.1.254, you would set 192.168.1.0/24, a good solution if you are not using static IPs.


    I don't know if it will help with the issue, but it's something I noticed.
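    For anyone unsure about the notation, the difference between the two masks can be checked with Python's standard ipaddress module:

    ```python
    import ipaddress

    # /32 covers exactly one host
    single = ipaddress.ip_network("192.168.1.110/32")
    # /24 covers the whole 192.168.1.x range
    subnet = ipaddress.ip_network("192.168.1.0/24")

    print(single.num_addresses)  # 1: only 192.168.1.110 may connect
    print(subnet.num_addresses)  # 256: the entire subnet (incl. network/broadcast)
    print(ipaddress.ip_address("192.168.1.110") in subnet)  # True
    ```
    
    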

    Hello,


    It's not the first share I've added in NFS (it's the second, actually), but for some reason it's returning an error now:


    I just selected a shared folder, set an IP range (192.168.1.0/24), with Read Only privileges and the default extra options (subtree_check,secure).


    Does anyone know how to fix it? :/


    Thanks

    Stop... Don't go down this path. Proxmox and OMV have conflicting dependencies. While I was able to make it work by removing things from OMV's code, this is not maintainable. Just run OMV as a VM in Proxmox.

    Yeah, I was expecting that, which is why I wanted to confirm.
    Thank you for that.


    I can't run OMV as a VM because I can't do passthrough of PCI devices, and without that OMV wouldn't have direct access to the drives.

    Hi there,
    I need some sort of virtualization hypervisor to run VMs on my OMV3.
    VirtualBox isn't working, and it's not my favorite choice, to be honest.


    I was looking at installing Proxmox. Has anyone ever installed Proxmox on OMV3? Is it safe/OK?


    Thanks.

    What do you mean by "N+X style data protection"?

    I wouldn't do RAID 10 instead of RAID 1 with only 4 drives on BTRFS... it just increases your chance of failure.
    With RAID 10 you have 2 pairs that can each lose only 1 drive, versus 4 drives that can take 2 failures in total.


    Just making sure a btrfs balance is run regularly would work.


    Why did you choose RAID 10? I'm happy to be corrected!
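    On the regular-balance point, a sketch of how that could be scheduled; the pool path and usage thresholds here are made-up examples, adjust them to your setup:

    ```
    # /etc/cron.d/btrfs-balance - monthly filtered balance (example path/thresholds)
    0 3 1 * * root btrfs balance start -dusage=50 -musage=50 /srv/btrfs-pool
    ```

    The -dusage/-musage filters only rewrite chunks that are under 50% full, which keeps the monthly run much cheaper than a full balance.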

    I'm not 100% sure, but I went searching around to confirm what RAID 1 on 4 drives does, and it seems you would end up with the capacity of a single drive.
    It makes sense if you think about it: RAID 1 is a mirror, so if you have more than 2 drives it just keeps mirroring them, giving you basically 4 copies of your data instead of 2.
    Not very efficient, unless you want to be really safe.


    With RAID 10 you add performance on top of reliability. Just to clarify, RAID 10 is a RAID 0 of (RAID 1 + RAID 1).
    You can lose a drive in each mirror pair and still be fine.
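    To put rough numbers on the comparison above (a sketch assuming classic mdadm-style RAID, where RAID 1 mirrors every member and RAID 10 stripes over mirror pairs):

    ```python
    def raid1_usable(n_drives, drive_tb):
        # classic RAID 1: every drive holds a full copy, usable = one drive
        return drive_tb

    def raid10_usable(n_drives, drive_tb):
        # RAID 10: stripes across n/2 mirror pairs, usable = half the total
        return n_drives * drive_tb // 2

    print(raid1_usable(4, 4))   # 4 TB usable, 4 copies of the data
    print(raid10_usable(4, 4))  # 8 TB usable, 2 copies, 1 drive lost per pair is OK
    ```
    
    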

    No. It has four 4TB WD Red Pros in it and uses the unionfilesystem plugin to pool the drives. So one drive is saturating gigabit.
    Look at the Pentium G4560 ($65). Dual core, but hyper-threaded. Supports ECC. So it should be able to do what you want. I don't think the motherboard you picked supports it, though.

    I don't see why it wouldn't support it, and it seems like a really good CPU.
    It has as much performance as the i3 at a much lower price.


    On top of that, the T processors are pretty much impossible to get, since Intel, in their special way, only sells them in trays.


    That might be the one.

    Unless you are doing virtualization or transcoding, you don't even need an i3. My backup server is a Celeron J1800 (QNAP TS-451 running OMV 3.x) and it can saturate gigabit.

    When you saturate a Gigabit line, is it from a RAID 5 array?


    I want it to be able to saturate 2 to 3 Gbit, handle RAID 5 with 5 disks, and do some checksumming on a RAID 1 :S


    From what I've seen, compared with what I have now, this i3 should be more than enough, although if I could find something cheaper, that would be great.

    If the image below looks familiar to you, then you can use Midnight Commander to copy files locally, over SSH. And the speed will be the speed of your disks.


    Hi there,


    It's the first time I'm seeing that image. Does it support folder merging?


    I tried WinSCP last night and it works, although I get zero feedback on the transfer, and it doesn't seem to support folder merging, because it tries to create a folder that already exists (kind of stupid).

    I haven't used an actual Windows file server in a long time, but I don't think Windows can even copy locally across disks.

    Are your clients Windows? If yes, what are you using, then?

    Did you try WinSCP? It is just like Windows Explorer.

    That is a good point, I didn't think of that. Since I never configured FTP on my previous setups, it didn't occur to me, but OMV has it by default.
    I'll give it a try for now.


    Thanks.

    Hi there guys,


    Due to a lot of headaches in the past with my home server handling a lot of stuff on a single machine, I've been trying to separate everything.
    One of the main reasons is that if something went wrong, even with just an app, it could stop the system from booting, making my storage unavailable.
    On top of that there are power savings: if I separate everything, I can have a server dedicated to applications that turns off during the night.


    So I'm looking to have a machine dedicated to just storage, and my requirements are the following:


    As low power as possible, I mean around 20 W idle + disks.
    At least dual Gbit LAN; I could always add a network card, but I wanted to avoid it to save power.
    At least 8 SATA ports; I could always add a RAID card, but I wanted to avoid that too to save power lol.
    ECC RAM (then I'd use either a normal i3 CPU or a Xeon).
    If possible, small form factor.
    Cheap: I just need motherboard, CPU and RAM, and I'm looking to spend an absolute max of 400€.


    Here is what I was considering buying:
    AsRock C236 WSI (DDR4 ECC support, 1× PCIe 3.0 x16, 2× Intel Gbit LAN, Mini ITX)
    Intel i3-6300T (2 cores, 4 threads, 3.3 GHz, 35 W TDP, ECC support). Some people in this Intel forum are claiming between 10 and 15 watts at idle for CPUs like the 6700T or 6100T, which is amazing.
    1×4GB DDR4 ECC, most likely 2133 MHz


    I'm just worried that the CPU isn't strong enough.


    Do you guys have any better suggestions?

    Hi there guys,


    I'm looking for some suggestions to improve my share speeds.
    I have many SMB shares and each of them point to a different drive.


    What happens is, when I copy files from Share A (pointing to drive A) to Share B (pointing to drive B), Windows is stupid and doesn't realize they are on the same machine, so the data goes over the network.
    Because of that, although my NAS has a 4 Gbit connection (4 ports in 802.3ad), I'm limited by the client computer's maxed-out Gbit connection, which in turn limits my copy speed between shares to around 50 MB/s.


    Is there a way to configure SMB so the file operations happen on the NAS when I'm copying stuff between disks, instead of going over the network?


    Thanks,
    Ralms
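    One workaround until something better turns up: run the copy on the NAS itself over SSH, so the bytes move disk-to-disk instead of over the LAN. A sketch in Python; the hostname and mount paths are made up, and whether your shared-folder paths look like this depends on your OMV setup:

    ```python
    import shlex
    import subprocess

    def nas_copy_cmd(host, src, dst):
        # ssh joins its remote arguments into one shell string,
        # so quote the paths (share paths often contain spaces)
        remote = "cp -a -- {} {}".format(shlex.quote(src), shlex.quote(dst))
        return ["ssh", host, remote]

    cmd = nas_copy_cmd("root@omv.local",
                       "/srv/disk-a/Share A/movie.mkv",
                       "/srv/disk-b/Share B/")
    print(cmd)
    # subprocess.run(cmd, check=True)  # uncomment to actually run the copy
    ```

    The copy then runs at local disk speed on the NAS, regardless of the client's link.
    
    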

    First tip: Based on http://manpages.ubuntu.com/manpages/wily/man8/fsck.8.html your filesystem has some errors (see exit code 4).

    OK, so I went messing around with fsck without success lol; it kept telling me the drive was in use, etc., and wouldn't do anything.


    Because of that, I decided to try removing all the drives, and the system booted.
    Since that error seemed to come from one of the SSDs, I reconnected all the HDDs except the SSDs, and the system booted fine.


    So thank you for getting me on a path ^^

    I'm no expert, but from what I've been reading everywhere, that setup and those drives are dangerous as hell.


    From your specs it looks like you want to do RAID 5 on twelve 8TB drives? That is asking for massive data loss.
    To start, you are planning a RAID 5 of 12 drives; all good that you get around 92% usable space, that is awesome, but you also get a lot of failure points. I would use AT LEAST RAID 6 on that setup.


    On top of that, RAID 5 has been considered outdated for the last 3 to 4 years: if a drive does fail, the risk of hitting an unrecoverable read error on one of the remaining drives is fairly high with so many, and such gigantic, drives.


    Just a small article about URE
    http://www.raidtips.com/raid5-ure.aspx


    And a good blog post about it:
    https://standalone-sysadmin.co…e-b06d9b01ddb3#.2h4gm3j9q


    So looking at the drives you were considering, the Seagate Enterprise NAS, Seagate Enterprise Helium and WD Gold all have a max URE rate of 1 sector per 10^15. Basically, on a SINGLE drive, worst-case scenario, it will happen once every 1000TB read.
    But you have 12 drives, and in case of a drive failure your data is reliant on 11 of them, so that one error per 1000TB quickly turns into one per ~90TB (1000TB/11); every 90TB read from those drives can produce an error. Considering how big your array is, I would say that is fairly risky, and the possibility of your entire array becoming useless is high.
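    To put the 90TB figure in probability terms, here is a rough sketch using the post's own rate of one URE per 1000TB read and a Poisson approximation; real-world drive failures are messier than this:

    ```python
    import math

    ure_per_tb = 1 / 1000     # the post's figure: one URE per 1000 TB read
    rebuild_read_tb = 11 * 8  # a rebuild must read the 11 surviving 8TB drives

    expected_errors = rebuild_read_tb * ure_per_tb
    # Poisson: probability of at least one URE during the rebuild
    p_at_least_one = 1 - math.exp(-expected_errors)

    print(round(expected_errors, 3))  # 0.088
    print(round(p_at_least_one, 3))   # 0.084, roughly an 8% chance per rebuild
    ```

    So under these assumptions, each full rebuild carries roughly an 8% chance of hitting at least one URE, which is why RAID 6 (which can ride out a URE during a single-disk rebuild) is the safer choice here.
    
    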


    Again, I'm just a home enthusiast; I'm no expert on this.
    On top of this, something ryecoaaron said to me made a lot of sense, knowing all the risks of RAID 5 (bitrot, UREs, multiple drive failures, etc.): RAID 5 is not backup. So consider having a cold-storage backup, maybe something with those 8TB drives in RAID 1 or 10.


    I would talk with that media house about the need for such a big hot storage. I would build a fast hot storage, something like 30TB or less, and then have a big, safe cold storage as backup behind it, maybe those 100TB.


    Consider looking into LinusTechTips' setup: they use 10GbE networking and multiple servers, where their hot storage has been around 20TB of SSDs on one server, with 100TB of cold storage using Seagate drives on a different server.
    Currently they are using, I think, 24 Intel 750 1.2TB NVMe drives (LOOL, so fast and expensive) in their hot storage, and he is setting up a 1 petabyte cold-storage array using FreeNAS and GlusterFS.
    And they do off-site backup of some of that data.


    So yeah, reconsider the needs and risks very carefully, with that amount of storage and that many people using it.


    The big question to ask is really: how important is your data, and how much of it are you willing to lose?
    That way you can decide what to build better.


    Hope it helps.