Posts by crashtest

    Nope. That was wrong already years ago and it doesn't get better by copy&paste over and over again. You need huge amounts of RAM for dedup but otherwise the less RAM you have the smaller your ARC cache gets.

    Did you see the beginning of this thread? I get the distinct feeling that elastic is not following the latest developments in ZFS. I'll include myself in that assessment as well. And I still stand by my "NOOB" assessment regarding the ZFS learning curve and RAM considerations. Why? Because the various scenarios that ZFS can be applied to are vast. It's an enterprise solution. ZFS is "complex". Also, in the exploration process (remember, NOOB's here!) one might trigger a de-duplication process when mucking around on the CLI, so I'd still argue that skimping on RAM, as a basic requirement, is not a good idea.


    Wrt tests I was talking about real-world tests (nothing simulated in a VM with 'virtual disks', that's just a waste of time). It's disks that start to slowly misbehave that matter, it's not about 'black or white' scenarios like booting a system with two virtual disks disabled.

    Again - NOOB alert!


    But when it comes to the basic behaviors of RAID, nothing I said in this thread is wrong. Where you're concerned, I get the sense that maybe you've run test scenarios in a lab environment. I can tell you that I have as well - where I attempted to structure scenarios and control as many variables as possible. Outside of a lab? For conclusive tests of computer hardware, I tend to trust Tom's Hardware. However, in a home environment, most do not have the luxury of extensive hardware lying on the bench to test the real thing, or even the time required for obtaining true empirical data. Alas, this leaves the NOOB (and many of OMV's developers by the way) with few testing options, outside of using VM's. So, if VM tests are "wasting time", I believe it's time well wasted.


    I completely understand that the real world has a way of producing odd, even bizarre, behaviors that basic tests cannot simulate and/or reproduce. However, in most cases, those oddball events are exceedingly rare. Again, I stand by the test scenario posted and my comments regarding the basic behaviors of RAID6 as being "typical" or "nominal". I believe that if elastic pulls drives he'll see the same basic behaviors.
    Lastly, in other threads I've stated in no uncertain terms that I'm not a fan of RAID because, as many mistakenly believe, it's simply NOT backup. Herewith -> Thoughts on RAID



    However, you're right to send up a flag regarding pulling drives. So...
    __________________________________________________________________



    I'll definitely test it by pulling out those two drives, I am very curious :) .


    tkaiser raises valid points regarding pulling hard drives out of an array. Even if you have "hot swap" rated hardware, you're taking an unnecessary risk by pulling out a drive "hot". Drives simply do not fail in that way, where the interface and power disconnect all at once.
    So, if you want to see RAID6's drive recovery capabilities, do it like I did it in the VM. Shut down, disconnect a drive (or two), and power up. To add the drives back, shut down, plug them back in, and add them back to the array after booting again. You might want to think about doing your testing before copying huge amounts of data onto the array. (Add a GB or two, maybe a few music files, for test purposes.) Otherwise, the process laid out above applies.
    tkaiser is also correct on the issue of not maintaining backup. RAID is not backup and may be giving you a false sense of security. If you truly want to keep your data, you need backup. In the link -> Thoughts on RAID , I state that I prefer full, platform independent, backup. It doesn't have to be expensive either. At the first level, I'm using an R-PI and a 4TB WD "My Passport" which is USB powered. They're a bit larger than the size of two packs of cards, use around 12 to 15 watts, and they'll sit, unnoticed, behind your server.


    Give having good, tested backup some thought. (And note it's better to have backup before the disaster.)


    Let us know how it goes.

    It's about 10 degrees warmer inside the safe compared to the outside. Doesn't seem to be a problem. The server is a low power Mini ITX Intel Avoton C2550 Quad-Core Processor. It currently has five 3.5in hard drives in it with room for three more. Current temps:


    HDDs: average 104F


    CPU: 110F

    Those temps are well within any reasonable spec. It seems you got just the right combo for heat load and good performance.
    Good job.
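    For anyone who thinks in Celsius, the numbers above convert easily. (As I recall, most consumer 3.5in drives are rated to roughly 55 to 60C, so ~40C is comfortable; check your drive's datasheet to be sure.)

    ```shell
    # Fahrenheit-to-Celsius check of the temps above.
    f_to_c() { awk -v f="$1" 'BEGIN { printf "%.1f\n", (f - 32) * 5 / 9 }'; }

    f_to_c 104   # HDD average -> 40.0
    f_to_c 110   # CPU         -> 43.3
    ```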



    Sometimes the best solutions are "home brewed". :thumbup:

    You "drilled" it. Wow.


    I would have guessed it could be done with an oxy/acetylene torch but that would make a real mess of the inside, no matter which side was burnt.


    Gun safes can be quite large but, I take it you don't have problems with heat buildup?
    (If you're using an R-PI and a USB powered drive, they would run cool enough.)

    Regarding encryption:
    I can't help you in this area but I'm tempted to do some VM testing to see how it works in the open source world.
    ______________


    While my experience in this is, by my own admission, way out of date:


    I've seen commercial whole drive encryption used at work, and I've actually recovered a lost security token (a software key). It was a freaking nightmare. It was those experiences that taught me not to encrypt my personal drives. Further, depending on how it's implemented, whole drive encryption might complicate fixing what would otherwise be simple file system issues.
    (I'm thinking about using a self booting rescue disk with a virus scanner, and other scenarios.)


    Encryption is only good for "physical" drive protection. (I.e., to prevent someone from physically stealing your drives and the data on them.) As far as I know, whole drive encryption doesn't provide much protection from network attacks where the OS has been compromised, which is much more likely.


    My advice? If you're worried about physically losing your data, lock your server in a closet.

    It would be scary if this would be the last step you consider. After you set up everything it's time for testing:

    • does it work when you pull out one drive?
    • does it work when you pull out two drives?
    • What happens when you insert a spare now?
    • What happens when you put back one of the 'failed' drives now?
    • How does your RAID cope with data corruption? Does it get detected?
    • Do the regular checks work?

    (And so on. Nobody at home does this but simply trusts in 'it must work since I already spent so much money and effort on it', which transforms the whole approach into an untested waste of energy and resources :) )

    I'll do this by the numbers, from the above. (The assumption in the following is, you used RAID6.)


    1. Yes. RAID6 will continue to run with up to two drive failures.
    2. Yes. After two failures you're still good and all would operate as if nothing happened. Your array may run, literally, for months in this condition. (I've seen bonehead field admins do exactly that, with 1 failed drive, in a RAID5 array.) However, there's no margin left. If there's a 3rd failure, all is lost and recovery is not possible or, at least, not practical.
    At this point, with 2 failures, in <RAID Management> the state is "Clean, degraded". Data is available.
    3. First note it's better to have a hot spare on-line but sometimes there are not enough slots in the case to house that many drives.
    - In any case, if you insert "a spare" at this point, either by command line or GUI (in the GUI you'd use the "recover button" to add a drive), it is built into the array. The state is "Clean, degraded, recovering" with progress provided in percent. (It will take awhile.) I'd advise adding two drives at once rather than doing them one at a time. Note, these operations are a serious stress test for older drives.
    4. If you're running a test and there's nothing wrong with the drive, shutdown, insert the drive, wipe it and use grow (if it's a spare) or recover if you're adding it back to a degraded array.
    5. RAID does not work with data corruption. It uses parity to protect from drive failures. That's it. RAID presents what looks like a "single disk" to the operating system. Lastly, RAID is not good at detecting its own errors. Back in the day it had no mechanism for detecting and / or correcting the small number of errors that it will inevitably write to the array. (I don't know if software RAID is better.)
    - File hashing (calculating and storing check sums, etc.) is more of a file system function which is layered on top of RAID. In some cases, add-ons like Snapraid add similar protection. A journaling file system (ext3, ext4, XFS, etc.) prevents common instances of file corruption (but not all). A CoW, or Copy on Write, file system seems to be what you're looking for to prevent "silent" file corruption. This is a complex subject but a decent primer for it can be found here. CoW file systems
    6. Personally, I'm using BTRFS for a number of reasons. ZFS is great, but the learning curve is substantial and it requires gobs of RAM. (1GB of RAM per 1TB of storage.) If you're going to run "tests" on BTRFS, here's a place where you can see the BTRFS equivalent of Windows' chkdsk (with options). BTRFSck Frankly, I wouldn't do repairs without doing extensive research.
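    The file hashing idea in item 5 can be sketched with plain shell tools. This is only an illustration of the checksum principle that BTRFS/ZFS implement per block and automatically (and, with a redundant copy, can repair from); the temp-file paths are made up for the demo:

    ```shell
    #!/bin/sh
    # Illustration of checksum-based corruption detection -- the idea behind
    # BTRFS/ZFS "bitrot" protection, done by hand at the file level.
    dir=$(mktemp -d)
    echo "important data" > "$dir/file.txt"

    # Record a checksum at write time.
    sha256sum "$dir/file.txt" > "$dir/manifest.sha256"

    # Re-verify later: an intact file passes the check.
    sha256sum --check --quiet "$dir/manifest.sha256" && echo "file intact"

    # Simulate silent corruption (content changes, size doesn't) and re-check.
    echo "importent data" > "$dir/file.txt"
    sha256sum --check --quiet "$dir/manifest.sha256" >/dev/null 2>&1 \
      || echo "corruption detected"

    rm -rf "$dir"
    ```

    RAID, by itself, would happily serve the corrupted version of that file; the checksum is what catches it.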



    I'm at home, I trust next to nothing, and I tend to test everything before I use it. (This includes OMV.) When M$ stopped supporting what I wanted to do at a reasonable cost (I refuse to pay $500 or more), I looked elsewhere and testing was what brought me to OMV.


    What I told you above, I tested in an OMV 3.0.77 VirtualBox VM, with 7 virtual drives. I simulated drive failures by removing 2 drives before booting the VM. In any case, you can do the same. If you have a client with a decent CPU and a bit of extra memory, you can test OMV to your heart's content. If you want to know for sure, there's no better way.

    I forgot to mention:


    It's very easy to add a drive and expand a RAID array. Even if OMV didn't support the process in the GUI, it's a two command line proposition.


    The following is a working example of those lines - mdadm commands:


    mdadm --add /dev/md0 /dev/sdf
    mdadm --grow /dev/md0 --raid-devices=5


    The first line adds a disk. The second line sets off a restriping operation which integrates the new disk and grows the array.
    - assumes sda is the boot drive.
    - sdb through sde makes up a 4 drive array, md0.
    - sdf is the new drive added.


    BTW: Doing this operation from the command line will not "break" the OMV GUI.
    ________________________________________________________


    If you use the first line only "mdadm --add", the new drive becomes a hot spare for the array.
    If a drive fails, the hot spare is automatically added to the array.

    The basic steps are:


    1. Insert your drives. For RAID 6, there must be a minimum of 4.
    2. Under <Storage>, <Physical Disks>, <Wipe> each drive that's going into the array. (DON'T wipe your boot drive.) Using the "quick" wipe option is fine. (There's no need to use "secure" wipe. If you do, you'll be waiting awhile for completion.)
    3. Under <Storage>, <RAID Management>, click on <Create>. Select the level, "RAID6", and at least 4 drives.
    It's going to take awhile for the first "sync" to complete. (If you stay on this page, you'll see progress in "percent".)
    4. Under <Storage>, <File Systems>, click on <Create>. Pick the file system you want to use on the RAID array. (The array will have its own device name, such as /dev/md0. That's what you'll be using in the device name field, for the file system.)
    (Depending on what you do here, the array size, etc., formatting may take a while.)
    5. In the same location, <Storage>, <File Systems>, after the format is finished, click on <Mount>.


    In basic terms, that's about it. From there it's the creation of shares, configuring services, etc.

    Well, I prefer small SSDs over HDDs or USB thumbdrives as boot media. SSDs are more energy efficient than HDDs and show fewer boot problems than USB drives ... I spent 28 bucks on a 32 GB SSD on sale ... not that much more than your USB drive.


    Disadvantage: I have to sacrifice a SATA port.

    You know, I've been booting with flash media (SD cards) for a few years now and I had one (1) problem. That was early on and I suspect it was because I wasn't using the wear leveling plugin. (I learned a lesson in that event.) After that, again with the flash media plugin installed, there hasn't been one problem. Along other lines, it's super easy to clone a flash drive of any type. And given the minimal cost, I have a spare ready to go.
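    Cloning boot media really is a straight block copy. On real hardware the if=/of= arguments would be device nodes (e.g. /dev/sdb and /dev/sdc - the names here are examples; double-check yours with lsblk first, because dd overwrites without asking). In this sketch two ordinary files stand in for the drives so it's safe to run:

    ```shell
    #!/bin/sh
    # Sketch: cloning boot media with dd, using files in place of devices.
    src=$(mktemp)   # stands in for the source flash drive
    dst=$(mktemp)   # stands in for the blank spare

    printf 'bootloader + OMV system image\n' > "$src"

    # The clone itself: a raw block-for-block copy.
    dd if="$src" of="$dst" bs=4M status=none

    # Verify the copy is byte-identical before shelving the spare.
    cmp -s "$src" "$dst" && echo "clone verified"

    rm -f "$src" "$dst"
    ```

    The spare only needs to be the same size or larger than the source; after the copy it boots exactly like the original.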


    But, you're right - there's nothing wrong with a $28 SSD. Where do you get 32GB SSD's? (For my info, are you using the flash media plugin?)

    It was a hard drive from another computer that I upgraded to an SSD, so I reused the hard drive for OMV instead of buying a separate drive for OMV after I decided to switch from a Raspberry Pi. I would have been fine with continuing to use the Pi if it weren't for the fact that I wanted gigabit speeds.

    (I got a detail in my post above wrong. It's not a 16GB USB drive. It's 32GB. While it's not really needed, that's extra room for temp and log files.)


    "Reusing" is good. My first experiments with OMV started with a Raspberry PI that I wasn't using, an 8GB SD card, and a used 1TB drive that was laying around. Now, with a 4TB drive for data, that experiment evolved into a full server and data backup.


    On the other hand, like you, I needed a 1G network interface, mostly for imaging clients. The PI will do it, but at 100mbs it's too slow to do more than one client at a time. (And even at that, it's still like watching grass grow.) The PI will serve files just fine, but it's limited beyond that particular function.


    I'm building an i3 server right now. Man, so far, this rig is flying! When one is used to an R-PI, an i3 with 12GB ram feels like WARP speed.

    That's the case for me. I didn't like the idea of a 500gb drive getting wasted on OMV. I still don't have much on the drive, but I can use it if I want to.

    Folks are spending big bucks on SSDs for the OMV boot drive!! After the server boots, nearly all of OMV (the NAS stuff) is in memory. At that point, after boot up, there's little to no difference in performance between fast or slow boot media. And let's be realistic - even with a programmed (once a week) maintenance reboot, who cares if the boot time is 2 minutes or 30 seconds?


    I'm building a new server and I'm dedicating a whole 16GB USB3.0 thumbdrive to OMV, as a boot drive. ($14 USD!) If everything (plugins and the like) is properly configured, there are very few good reasons to use anything larger.

    I looked up the price of that Lenovo Thinkserver, that's £400 + in the UK...... 8o

    400+ quid!!! You have got to be kidding! That's over $500 USD. Wow...
    The last time I heard of something that outrageous, the locals were dumping tea in the harbor! (Oh, um, sorry... :rolleyes: )


    Currently working through some issues regarding the school I do IT support for, but I'll explain that in another post....just to get your thoughts on what happened and its final outcome....which I get the impression is not over yet....politically.

    Man, the "stuff" that goes on in public education can be appalling. In the US the cost of K-12 education, per student, is 3x more than it was when I got my high school diploma back in the day. Back then, we were #1 in the world. Now (at 3 times the cost), math and science scores (i.e. hard education) are down. Now, we're somewhere in mid to high 20's in achievement. In the US, as it seems, when government digs its' fingers into anything, it becomes increasingly expensive while performance continually drops. (It's inversely proportional. The more it costs, the less one gets.)


    And, as a sole point of focus in solving the problem, it's not teachers. Like any occupation, there are some problems there but I believe those issues are fixable. I believe the majority of problems in public education are (#1) political and (#2) management. Unfortunately, the problems with management in education, at all levels, stem from the problems created by politics.


    I'll be interested in what you have to say on this topic.

    I'm not sure if I'm buying a server case, but I think I'm going to buy a regular motherboard.

    If you have a place to put it, there are a number of large inexpensive cases available that will warehouse a lot of hard drives. I have a "server" motherboard but, in the essentials, all it has over most regular MOBO's is ECC ram and a RAID utility.


    If you're really interested in RAID, you'll be hooking up more drives than average and it makes sense to keep your boot drive out of the array. So, when you shop, I believe you'd want to note the number of SATA connections your new MOBO has to offer. Running RAID 1 is easy at only two SATA ports. To run RAID5, even if you boot from a USB drive (which is what I'm doing), you'll need a minimum of 3 SATA connections. I'd argue 4 SATA connections is the minimum for RAID5, because it allows for a standby spare. (That would be 5, if booting from a standard hard drive or SSD.)


    If you're looking for file parity protection, you might want to think about "Snapraid". OMV has a plugin for it. Snapraid requires a bit more hands-on than standard RAID but it does give more flexibility. Info on Snapraid, how it works and what it's for, can be found here. -> Snapraid

    Thanks! I asked because the other day a hard drive inside a laptop got some bad sectors and that made me think about what RAID would do.

    (When bad sectors start appearing, it's time to get your data off of the drive. Complete failure in the future is likely and it may be imminent.)


    Just some quick notes on RAID:
    RAID is NOT BACKUP. While it may protect you from a hard drive failure, there are many scenarios where you could lose the array. Good backup is a must and it doesn't have to be expensive.


    RAID 0 - The insane array. This one is designed solely for faster throughput, and makes no sense in a NAS role. If either of the two hard drives fails, it's over. Everything is lost. (If you're after performance for a Desktop PC, you'd be far better off with a single SSD.)
    RAID 1 - A mirrored disk set. The advantage is, if one disk fails, the other still has your data. The cost is one complete HD. (2 drives, at 4TB each = a 4TB array)
    RAID 5 - A striped volume with distributed parity. The advantage is, if one disk fails, the other disks still have your data. (A minimum of 3 disks is required and there is no maximum limit, in theory.) There's a read and write performance increase. The cost is 1 drive's worth of capacity to store parity. (3 drives, at 4TB each = an 8TB array)
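    The capacity arithmetic in the list generalizes, and it's worth being able to run the numbers before buying drives. A quick sketch (RAID6, discussed elsewhere in this thread, included for comparison; this assumes identical drive sizes, which is how you'd build an array anyway):

    ```shell
    # Usable capacity, in TB, for N identical drives of SIZE TB each.
    # Mirrors the arithmetic in the list above.
    raid_capacity() {  # usage: raid_capacity LEVEL N_DRIVES SIZE_TB
      case $1 in
        0) echo $(( $2 * $3 )) ;;         # striping: all space, no redundancy
        1) echo "$3" ;;                   # mirror: one drive's worth, total
        5) echo $(( ($2 - 1) * $3 )) ;;   # one drive's worth goes to parity
        6) echo $(( ($2 - 2) * $3 )) ;;   # two drives' worth go to parity
      esac
    }

    raid_capacity 1 2 4   # 2 x 4TB mirrored -> 4
    raid_capacity 5 3 4   # 3 x 4TB RAID5    -> 8
    raid_capacity 6 4 4   # 4 x 4TB RAID6    -> 8
    ```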



    Unless you're buying a server case and motherboard, the above covers most scenarios you may be considering.

    If a bad sector suddenly develops and a file is written to it, no. The file may be corrupted. Also, bad sectors are usually detected on read operations. However, if bad sector(s) are detected, RAID 1 will (likely) fail the disk. A disk doesn't have to completely die to be failed by RAID.


    If you're looking for bad sector protection, you might want to look at the BTRFS or ZFS file systems. These file systems store checksums for your data, so corruption can be detected on read and, where a redundant copy exists, repaired. It's referred to, by some, as "bitrot" protection.

    In that case I would wonder if your Rsync Jobs actually ran.


    To manually run an Rsync job:
    Go to <Services>, <Rsync>, and click on one of your jobs and "run". A dialog box will open, then click "start".


    If the job is working, the dialog box will show you something like the following:
    (In my case, everything is up to date. If new files are copied, from the <source> to <destination>, the list will scroll by.)
    _____________________________________________________________________________
    Please wait, syncing </srv/e29e426a-9fde-458f-b398-b70108fff26e/> to </media/5eee31cc-c41b-46b4-b829-71951efd6bb5/ServerFolders/Backups> ...


    sending incremental file list


    sent 226,902 bytes received 1,001 bytes 9,302.16 bytes/sec
    total size is 6,568,917,731 speedup is 28,823.31
    _____________________________________________________________________________


    If you're not getting something like the above, the job is not working.


    BTW - what you see in this dialog window is also stored in the Rsync log.
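    For reference, an OMV Rsync job boils down to an rsync command like the one below. The temp directories here stand in for the <source> share and <destination> disk (on a real server those would be the mounted file system paths, as in the dialog output above):

    ```shell
    #!/bin/sh
    # What an Rsync job does under the hood, in miniature.
    src=$(mktemp -d); dst=$(mktemp -d)
    echo "some data" > "$src/file.txt"

    # First run: the incremental file list scrolls by as files are copied.
    rsync -av "$src/" "$dst/"

    # Second run: nothing changed, so only the summary lines print --
    # the "everything is up to date" case described above.
    rsync -av "$src/" "$dst/"

    # Sanity-check the copy.
    cmp -s "$src/file.txt" "$dst/file.txt" && echo "backup verified"

    rm -rf "$src" "$dst"
    ```

    The trailing slash on the source matters: "$src/" copies the directory's contents, while "$src" would copy the directory itself into the destination.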

    After a few tests I noticed that I don't lose the RAID when I shut down OMV, but only when I shut down or reboot the physical machine.

    It sounds like you're connecting physical disks to a VM of OMV. (Is that correct?) If so, this problem may have something to do with the way the VM hosting software is presenting the disks to the OMV VM.


    After the physical machine boots, I'm guessing that you're auto-starting the VM hosting software and automating the start of the OMV VM. If that's the case, maybe inserting some delay (a few minutes) before starting the VM hosting software and OMV might clear it up.


    What are you using? VMware, VirtualBox?

    Ok, there are a lot of parameters missing here.


    First, realize that you haven't disclosed what version of OMV you're using. Beyond that, realize that the behavior of a VM, between reboots, is a condition of the VM host software. (VirtualBox, VMware, etc.?) Is your VMware set to "reset" between reboots?
    So, your question about RAID is conditional on a daily 23:55 reset in a VM that might be resetting the VM to a previous state..?? Do you see the problem here? You're talking about "virtual hardware" in an environment that resets every night. Without doing a deep dive into your specific scenario, what are you trying to simulate?


    Here's a few questions I would ask of you:
    1. Did you "save your RAID 5 array" when prompted, after a configuration change?
    2. If you're running a simulation of OMV, as a 24x7 NAS, why would you power down every night?
    3. Recreating a RAID array, which as you state "resync's" to where everything is working again, does nothing to shed light on the underlying technical problem.


    If I were you, first, I'd try to get a better understanding of what your VM hosting software is doing. Please note that your VM software is NOT perfectly simulating "reality". After you have that understanding, I suspect you'll have a better understanding of where your RAID arrays are disappearing to.

    I'm not sure "exactly" what you're looking for, but to restrict Samba access to specific users, you need to edit the smb.conf file. An example can be found here .. Samba examples (Regrets - you'll have to use Google Translate.)
    As mentioned in this thread, a Samba "share" is layered on "top" of an OMV "share". Samba permissions will not override the permissions of the base "share".


    Note that you could allow "others" "write" access to the base OMV "share", and restrict access to the Samba share with "force user".