Posts by Shadow Wizard

    I have found a workaround!

    First, create your pool (DUH!)

    Then create a filesystem within that pool. We will call it "a"

    Then create a filesystem within the "a" filesystem. We will call it "b"

    Share filesystem "a". DO NOT share filesystem "b"

    If you need additional filesystems, create them within "b". Do NOT create another filesystem directly within "a", as this will trigger the bug.
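
    For reference, a rough sketch of this layout from the command line (the pool and dataset names here are just examples; the share on "a" is still set up through the OMV GUI as usual):

        # create the pool (example: raidz2 across four disks)
        zpool create tpool raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

        # filesystem "a" directly under the pool, then "b" inside "a"
        zfs create tpool/a
        zfs create tpool/a/b

        # any further filesystems go inside "b", never directly under "a"
        zfs create tpool/a/b/media
        zfs create tpool/a/b/backups

        # share only "a" from the OMV GUI; do NOT share "b"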


    I have done no testing as to what may occur if you share filesystems within "b", as it is not needed for my purpose, and I have already done hours of testing to get this far.

    This is a bug in the import (which happens any time you look at the plugin) with names where one might be a subset of another (I think). I haven't figured out how to fix it either.

    Hmm, okay. I have found a workaround. I am going to post it in its own reply so it is easier for people looking for it in the future to find. I don't know whether the workaround may help you find a way to squash the bug.


    I have also noticed that creating the "filesystem" from the command line invokes this bug as well.

    My thought on the cause: OMV assigns each filesystem an incremental number and expects new filesystems to be created at the END of the list of filesystems. When creating a new "filesystem" within an existing one, it is added somewhere in the middle of whatever list OMV looks at to assign numbers, instead of at the end, resulting in the number previously assigned to the root filesystem being assigned to the new one.

    I may be WAY out there, but I know that sometimes when I debug things (nowhere near the complexity of OMV, or I would offer to help) a fresh viewpoint or idea helps me.

    So, I assume that when I create a "filesystem" it is creating a dataset, and that OMV just calls it a filesystem for some reason. I assume this because, following the directions I found to create a dataset using zfs create <poolname>/<datasetname>, what it creates is identical in every way I can see to what I get when I create a "filesystem" in the OMV GUI.
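
    As a quick sanity check (a sketch, assuming a pool named tpool; the dataset name is made up), the CLI result can be compared with a GUI-created "filesystem" like this:

        # create a dataset from the command line
        zfs create tpool/mydata

        # OMV's "filesystems" show up here with type filesystem
        zfs list -t filesystem -o name,used,avail,mountpoint

        # properties of the new dataset, for comparing against a GUI-created one
        zfs get all tpool/mydata | head -n 20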

    I am gonna try posting again, even though the last few posts have resulted in no replies. This time I think I may have found a bug, but I don't know if this is expected behavior.

    I have included a few screenshots to show what is happening.

    I have a test system I use to test things out before going live on my main systems at home and at work. I use ZFS as my main filesystem for the data protection/etc.

    I have created a zpool using 4 disks in raidz2 (this is running in a VM, but on a physical machine the results are exactly the same). The pool is called tpool.

    I create a shared folder in that zpool. In this example it is called tpoolshare, and its absolute path is /tpool/tpoolshare.

    The zpool and the share can be seen in the screenshot, before the separation bar.

    I then create a filesystem called onegbtest within the tpool pool.

    The share changes itself to sit within the newly created filesystem (see the last part of the screenshots), so the absolute path of the share becomes /tpool/onegbtest/tpoolshare.

    I did not ask it to do that. I don't want it to do that.

    As a workaround I have gone back and edited the share back to where I want it; however, on my live system this means correcting several shares each time I create a new filesystem.

    Please advise what I am doing wrong, as I am sure it is just me.
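
    For anyone trying to reproduce this, the CLI equivalent of the steps above is roughly the following (the disk names are placeholders; the shared folder itself is created through the OMV GUI):

        # pool of four disks in raidz2, as in the screenshots
        zpool create tpool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # (in the OMV GUI) create a shared folder "tpoolshare" on tpool
        #   -> absolute path is /tpool/tpoolshare

        # now create a new filesystem inside the pool
        zfs create tpool/onegbtest

        # after this, the shared folder's path in the GUI changes to
        #   /tpool/onegbtest/tpoolshare  (the behavior described above)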

    So, I used to be able to do this in OMV5: I would create a filesystem within the existing ZFS filesystem and enable a quota on that filesystem. That would kind of put a directory within a directory, with a quota.

    However, with OMV6, I see no way to do this. How can I create a shared folder somewhere, on an existing filesystem, with a quota?
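
    For reference, what I was doing in OMV5, expressed on the command line, amounts to roughly this (a sketch; the pool and dataset names are just examples):

        # nested filesystem inside the existing pool
        zfs create tpool/customers

        # cap it at, say, 100 GB; the quota applies to everything under it
        zfs set quota=100G tpool/customers

        # verify
        zfs get quota tpool/customers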

    So, I figured I would ask here, in case there is something I have not considered... *Checks for fire extinguishers to put out flamers, and troll traps to catch trolls*

    Okay, now that I have all I need...

    I run a small retail computer store. One of my plans with my current OMV server is to sell some cloud backup options to my customers.

    I have considered syncthing, and even owncloud. What I am wondering is if there are other options one would suggest.

    Well, in my testing (mind you, that testing involved 3 50 GB drives in raidz1, total ~100 GB; you were right, that was a typo), I added a bit of data.

    I then added one 50 GB drive (I am pretty sure it was just added as a basic vdev), resulting in about 150 GB of space. I then added another 50 GB drive, resulting in 200 GB of space (same as the previous drive). No more was added to the pool. I then killed one of the last-added 50 GB drives, and the whole pool collapsed, saying it was unavailable as there were not enough redundant drives available.

    So I am pretty sure, and I think I remember hearing a while ago, that if any of the vdevs goes bad, the entire pool collapses.
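
    The quickest way I have found to confirm what an add actually did is to look at the pool layout afterwards (a sketch; the pool name is just an example):

        # show the vdev tree; a disk added on its own appears at the top level,
        # outside the raidz1 group, with no redundancy of its own
        zpool status Testpool

        # capacity and layout broken down per vdev
        zpool list -v Testpool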

    Thank you very much for all the information.... Now if someone would just help me with my CIFS issue in that post, I would be good to go... Once all my drives arrive.

    If you want the added space to have the same capabilities, yes. If you have a three drive raid-z1 vdev and you add one disk, the new disk will just be basic. You need to add another three drive raid-z1 vdev to truly expand the pool the right way. If you want to learn more about zfs, read this - https://pthree.org/2012/04/17/…l-zfs-on-debian-gnulinux/

    Thank you for the link. I have checked it out; however, due to some disabilities I usually cannot learn from reading documentation. I have scanned it over, and that appears to once again be the case here.


    Moving on from that: yes, that is how I understood it, kind of. But I thought you could add bigger vdevs, so long as they were (if you wanted the same fault tolerance) the same type of raidz.

    So, just to make sure I totally understand, I would like to propose a scenario to check that I have everything correct.

    I start off with a pool, as mentioned, with 3 50 GB drives in one vdev, raidz1; this gives me ~150 GB of space.

    I create a second vdev, this time consisting of 4 drives of 75 GB each, in raidz1, and add it to the pool.

    Total space is ~150 + ~225 GB of space, total ~325.

    This system would be considered "acceptable" by traditional standards.

    It would have a fault tolerance of one drive, or TWO drives (so long as the 2 drives were in different vdevs), but it would fail if 2 drives failed in the same vdev.
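
    Sketched as commands (the drive names are made up), the scenario above would be:

        # first vdev: three 50 GB drives in raidz1
        zpool create Testpool raidz1 /dev/sdb /dev/sdc /dev/sdd

        # second vdev: four 75 GB drives in raidz1, added to the same pool
        zpool add Testpool raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

        # each raidz1 vdev can lose one of its own drives; two failures
        # inside the same vdev take the whole pool down
        zpool status Testpool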

    Please let me know if I got anything wrong, or if that basically sums it up. Or again, if I am totally way off..

    Thanks again.

    Nope. You can only expand zfs by adding another vdev the same size as the first one.

    Are you sure it has to be the same size?

    Here is what I did. Please mind the terms, as I may get them wrong.

    I created a raidz1 pool with 4 50 GB HDDs, total space about 150 GB.

    I then did a zpool add Testpool scsi-whateverthislongstringwas

    The pool was now about 200GB in size.

    I then did a zpool add Testpool scsi-whateverthenextlongstringwas

    Then the pool was 250 GB in size.

    So it did seem to grow with each new drive. However, there was no fault tolerance on the drives I added. I removed one of the drives from the machine, and the whole pool was destroyed.

    Unless you mean each drive I added needed to be the same size, rather than the vdev needing to be the same size.
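
    For comparison, here is what I believe the two forms look like side by side (a sketch; the disk IDs are placeholders like the ones above):

        # what I did: this adds the disk as its own single-disk vdev, striped into
        # the pool with no redundancy (zpool normally warns about the mismatched
        # replication level and wants -f before it will do this)
        zpool add Testpool scsi-whateverthislongstringwas

        # what I now understand is meant: add a whole matching raidz1 vdev
        zpool add Testpool raidz1 scsi-newdisk1 scsi-newdisk2 scsi-newdisk3 scsi-newdisk4

        # -n is a dry run: it prints the layout that would result, without changing anything
        zpool add -n Testpool raidz1 scsi-newdisk1 scsi-newdisk2 scsi-newdisk3 scsi-newdisk4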

    Not trying to point fingers and say "ha ha"; I am just trying to understand, and thus learn.

    OMV is displaying what sysfs or udev provides. OMV intentionally does not call external commands like smartctl or other tools, because calling those commands is very expensive.

    So that leads me to a few other questions, then. I have found that these appear to be device IDs. Is that correct? It's at least something Linux uses to identify the drives individually, I am guessing.

    And either way, be it correct or not, how is this hexadecimal number assigned? Is it random? Or is it based on something else? And if I put that drive in another system, would it have the same ID, or would it be reassigned? When is it assigned? Like, is it assigned when a drive is formatted? And if whatever happened to assign it was done again (i.e., if formatting assigned it), would doing it again assign a new one, or the same one?
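
    For reference, this is how I have been poking at it over SSH (a sketch, nothing OMV-specific); the 500... strings look to me like the drives' WWN identifiers rather than the printed serial numbers, but I may be wrong:

        # udev-provided identifiers; the wwn-0x5... entries appear to match what OMV shows
        ls -l /dev/disk/by-id/

        # WWN and printed serial number side by side
        lsblk -o NAME,MODEL,SERIAL,WWN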

    Exactly this, but it's also dependent on the data's fragmentation. If you're looking for something specific related to time, there isn't one; it can take hours, days, weeks.

    No, I know there is nothing specific, or even really general, on the amount of time, which seems kind of crazy. You would think they would have a formula or something that could at least give a range: 'rebuilding should take between 2 and 5 days'. It would be very frustrating to do something that was not 100% necessary expecting it to take 2-3 hours, and have it take 2-3 weeks...

    I was basically just looking to know if it was pool size or data size that determined the rebuild time.
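
    For what it is worth, the closest thing to an estimate I have found is the progress line ZFS itself prints while it is resilvering (a sketch; the pool name is just an example):

        # the "scan:" section shows percent done, throughput, and a rough time-to-go
        zpool status -v tpool

        # watch it update every 30 seconds
        watch -n 30 zpool status tpool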

    Thanks.

    It's something; OMV is only as good as the hardware it's run on. You could try SSHing into OMV as root and running hdparm -I /dev/sd? (replace the ? with the drive reference in Storage -> Disks) and see what that gives you. Compare that to the physical drive.


    Then I would suggest you create a drive layout (which is what I do) and transfer that to a spreadsheet or Word doc; this contains information on where the drive is connected, make, model, serial number, etc. (you can get the make, model, and serial number from the drive itself).


    If each of the cards is in HBA mode, have you checked the information in the BIOS for each drive? The fact that you are running two of the same cards, albeit one is a 420i, could be a factor.

    That's a good idea: a spreadsheet with info on all the drives. Location, serial number, notes, etc. Thank you.
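
    Something like this should pull most of those columns straight off the drives over SSH (a sketch; it assumes the data disks are sda through sdd):

        # model and serial for each drive, as reported by the drive itself
        for d in /dev/sd[a-d]; do
            echo "== $d =="
            hdparm -I "$d" | grep -E 'Model Number|Serial Number'
        done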

    According to the 2015 specs of the machine, it has a 420i, which is stated to have 2 x external and 4 x internal ports. You have 4 drives; have you not considered connecting all 4 drives to the 420i in the machine and removing the other 420 from the system? Process of elimination.

    I don't know why it says there are 2 external ports and 4 internal. All my ports are internal. But I have found a lot of conflicting information about that system online. I found one place that says the 420i can support 60 drives... WTF? I think it's because it's enterprise hardware: there isn't as much of it out there, and even less of it in the hands of people like myself who both give it a second life as something different than intended and don't have as much knowledge of the workings of the hardware.

    And the reason I have kept everything in there is because I have 8 more drives OTW. That will fill up all of the available SAS ports provided by both controllers.

    So first off, because I have recently asked for help with one running in a VM: this server is not. It is a physical machine.

    It's actually an HP ProLiant ML350p Gen8 with an HP Smart Array P420 controller, as well as an HP Smart Array P420p (I think it's p), both in HBA mode. I have 4 mechanical drives connected to the controllers: 3 3 TB drives and 1 4 TB drive. Connected to the SATA port on the mainboard I have a Kingston SSD to boot from.

    When I go to Storage -> Disks, the SN for every drive shows as about 17 characters of what appears to be hexadecimal (numbers and letters, no letter above f), and all of them start with 500 (even the drives from at least 2 different manufacturers).

    When I go to Storage -> SMART -> Devices, they come up as something totally different, and as what I assume is the correct SN based on what I am seeing (no physical access to the machine at this time to check): the Kingston SSD has a SN that matches the one in the Storage -> Disks section, and the mechanical drives have what appear to be the correct serial numbers (the 2 drives of the same model have a short random string of chars; the one from another manufacturer has a somewhat longer string). Bottom line, they look more like serial numbers than what is in the Storage -> Disks section.


    Mainly posting this in case it's a bug or something. It doesn't really affect me, except for the time in the future when I look in the wrong place and then get mad because I can't find the drive.
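
    If it is useful for comparing, the printed serial can also be pulled by hand (a sketch; sdX is a placeholder for the drive):

        # printed serial number, model, and firmware as reported by the drive
        smartctl -i /dev/sdX

        # on HP Smart Array controllers, smartctl also has -d cciss,N
        # in case the plain form cannot see the disk
        smartctl -i -d cciss,0 /dev/sdX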

    It's not really that hard; I managed to do one in my test rebuild. I was kind of surprised at the time it took: 3 minutes to resilver a raidz1 of 150 GB or so, with less than a GB of data, on an NVMe drive. All virtual drives in a VM, mind you, but with less than 1 GB of data I suspected it would take a few seconds.

    So that brings up the next question: does the time it takes to resilver depend on the size of the pool, or on the amount of data stored on it?

    For example, will a pool that is 10 TB in size and only has 100 GB of data on it take more or less time to resilver than a pool that is 1 TB in size and has, say, 700 GB of data on it?

    Is there a way to repair a degraded ZFS raidz pool from the GUI, or does it need to be done via the command line? (Dead drive, for example.)

    **NOTE** Not an emergency. I am just testing. I put this here as I know some people have lost important data, and time may be critical. That is not the case here.

    Well, that was massively frustrating, but it looks like it's solved.

    Don't beat yourself up. We all forget things. I do all the time. It works now, and that's what matters.

    Now to see if that will resolve all my problems with compose and manually installing Portainer. If not, I will be back for more help (new thread, of course).

    Yea, I want to try and avoid that.

    It seems as though there may not be a "have your cake and eat it too" solution for this. Nothing seems to

    1) be simple to set up,

    2) provide redundancy, and

    3) be expandable by even a single drive at a time.


    So unless there are any other suggestions, I guess I am going to need to deal with ZFS and the need to add whole vdevs to get more space.