Posts by milfaddict

    There is no need to be a douche. I would think that you being an admin would be a bit nicer.

    I know you think that your answer perfectly conveys what you meant it to, but it did not. The confusion came from "Ceph Dashboard", which sounded like a Ceph interface/plugin for OMV; I did not realize that it was the WebUI for the official Ceph project, as I am still learning about this stuff and have only lightly researched Proxmox's implementation of Ceph.

    Shame on you for assuming people should know what you know and being a dick about it.

    It has been a while since I tried OMV, and I was wondering if OMV supports Ceph or another cluster file system that is as easy to set up as RAID-Z, or has plans to.

    Yeah, editing system files in a CLI is way above what I am comfortable with and capable of. I could easily do this in Windows. I ended up creating my pool in FreeNAS and importing it into OMV. Everything seems to be working... for now.

    @milfaddict In the zpool status output it looks strange that you have a "part1" added to two of your disks (example above). This looks like you specified a partition (/dev/sdx1) instead of a whole disk (/dev/sdx) for two of the entries in your pool.

    None of my disks have partitions. I don't know what the problem could be. This may sound stupid, but maybe there is a bug in the way OMV handles more than 26 drives (sda-sdz)?
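    That 26-drive hunch is easy to sanity-check. The snippet below is a hypothetical illustration (not OMV's actual code) of how a naive device-name pattern that expects "sd" plus a single letter would mangle the 27th and later drives:

    ```shell
    # Hypothetical illustration, NOT OMV's real parsing code: a pattern
    # that expects "sd" plus exactly one letter truncates names like
    # sdaa to sda, which would make two different drives look identical.
    for dev in sda sdz sdaa sdab; do
      naive=$(printf '%s\n' "$dev" | grep -o '^sd[a-z]')
      printf '%s -> %s\n' "$dev" "$naive"
    done
    # -> sda -> sda
    # -> sdz -> sdz
    # -> sdaa -> sda
    # -> sdab -> sda
    ```

    If the plugin did anything like this, sdaa and sdab would both collapse to "sda", which would explain drives silently going missing from a vdev.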

    Any idea when OMV's ZFS plugin will be updated?

    Shouldn't you have already switched to another platform that works for you? Also, have you tried to just create the array on the command line instead of through the plugin?

    I was going to just use FreeNAS, but my old Marvell-based HBAs were preventing me from doing so. I just got new LSI HBAs and flashed them. I thought my old HBAs might have been the cause of my problems with OMV, so I went back to OMV (I have separate drives for OMV and FreeNAS) to see whether switching out my HBAs made a difference.

    No, I did not try the command line, because even if I were successful, I still would not trust OMV if something is wrong with ZFS on OMV. Until I know ZFS on OMV is rock solid, I can't use it.

    So after all this time and after spending $600 on new LSI HBAs, I am still getting all the same errors I was getting before. I create my ZFS pool successfully, expand it and get errors, reboot, and see that the second vdev only has 11 drives instead of 12 and the pool is degraded. Have you made any progress toward fixing the horribly broken ZFS plugin yet?

    FreeNAS doesn't make you install a plugin to install another plugin just to use ZFS, just sayin'... maybe OMV core should include ZFS support like you do for Btrfs, again, just sayin'...

    So I have been trying other options like FreeNAS, but FreeNAS and BSD do not support my Supermicro AOC-SASLP-MV8 HBAs because there is no driver for Marvell controllers. Could this be the reason I am having problems with ZFS and getting SMART to work?

    From what I have read, there is a Linux driver for Marvell controllers, and OMV is based on Debian Linux, unlike FreeNAS, which is based on FreeBSD, for which there is apparently no Marvell driver.
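    For what it's worth, this is how I would check on the Linux side whether a driver actually bound to the cards. These commands would need to be run on the OMV box itself, and the output is obviously hardware-dependent; "mvsas" is the Linux driver commonly reported to cover these Marvell SAS controllers, though I am only assuming it applies to the AOC-SASLP-MV8:

    ```shell
    # Run on the OMV host. Show the HBA and which kernel driver,
    # if any, claimed it:
    lspci -nnk | grep -iA3 marvell

    # Check whether the mvsas module is loaded:
    lsmod | grep mvsas
    ```

    If lspci shows the card but no "Kernel driver in use" line, the controller is visible on the bus but nothing is driving it, which would fit the SMART and ZFS symptoms.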

    If my HBAs are the problem, would replacing them with Supermicro AOC-USAS-L8i cards solve my issues?

    @milfaddict: Did you make another attempt with fewer disks per vdev, as I suggested in post #641? I tried to explain why this could help narrow down the cause.

    If I had that many disks, I would test it myself. Unfortunately, I do not.

    So I deleted the pool, made a new Z1 pool, and then tried expanding it with another Z1 vdev. I got the same errors as before. So I rebooted, deleted the pool, and tried again; this time I used the first three drives in the list (sdaa, sdab, sdac) and then expanded using the last three drives (sdx, sdy, sdz), just to make sure that OMV, NOT ME, was trying to use a drive that was already in use. I still got all the same errors.
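    For reference, the CLI equivalent of that sequence would look something like this. This is only a sketch: "tank" is a placeholder pool name, the device names are the ones from my test above, and these commands are destructive and need root, so treat them as illustration rather than something to paste in:

    ```shell
    # Sketch of the same create-then-expand sequence from the shell.
    # "tank" is a placeholder pool name (my assumption, not from OMV).
    zpool create tank raidz1 /dev/sdaa /dev/sdab /dev/sdac
    zpool add tank raidz1 /dev/sdx /dev/sdy /dev/sdz
    zpool status tank   # should list two raidz1 vdevs, three disks each
    ```

    Two notes: `zpool add -n` does a dry run that prints what would be added without touching the pool, which is handy for ruling out the plugin; and using `/dev/disk/by-id/...` names instead of `/dev/sdX` is generally recommended so the pool survives device renumbering across reboots.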

    No matter what RAID level or drives I select, I get the same errors.

    Pool details part 2.


    So when I started OMV, my ZFS pool had disappeared and I had to import it, not a huge deal but that does make me nervous.

    Before I installed the updated plugin, I decided to delete the old zpool. When I tried, I got this error.

    So I went ahead and installed the updated plugin. Everything went fine. When I went back into ZFS, there was no pool showing. So I tried to import it again and got this error. I am guessing the pool actually got deleted when I updated.

    Now I created the pool again with 12 drives. Everything went fine until I tried expanding the pool with 12 more drives. I got this error. I then tried again, only to get this error. So I rebooted, noticed the array grew, tried to expand again, got those same errors, rebooted, and noticed the array had grown again. I am still missing 11TB and the array still says "DEGRADED".

    Two suggestions.

    1. As you have already mentioned, the "Expand pool" dialog still shows all drives even if they are already in use. It would be nice if drives that are already part of a zpool or vdev were hidden.

    2. The drive list in the "Expand pool" dialog window only shows 3 drives at a time even if I expand the window. It would be really nice to be able to expand this area so I can see more drives at a time.

    Here are the details of my pool if it helps.

    OMV is good for a lot of people. I still don't understand why you need a 36-drive ZFS array, especially considering you say you have no Linux experience. And yes, I have had 20+ drives running on OMV without ZFS.

    Because that is how many drives my servers hold and I don't want to have to add drives later and then have data unevenly spread across the pool. Also, I don't want to go and expand my array later with many TB on it only to have problems like this come up. There are many home users with 24, 36, 45, and 60 drive servers not to mention enterprise and government entities that have much larger arrays.

    Quote from ryecoaaron

    All you have done is complain about the quality of plugins, ask for super fast support from volunteers, and offered no support but threatened that you will leave. If you try this on the FreeNAS forum, you will have a very bad experience. Consider it lucky that I even try to maintain this plugin without any knowledge or experience. Give me one reason why I or anyone else should?

    I have not complained, but I have been frustrated with some of these technical issues, and I am sure that frustration has shown in my writing. I have never meant for any of that to be directed toward anyone. I cannot offer support because I am new to OMV, and I have not threatened to do anything; I merely stated that I need to have this server up and running, and if for whatever reason OMV cannot meet my needs, I will obviously need to try a different solution.

    I really do appreciate all the help the community has given me. As for your one reason: if something is broken, shouldn't those in charge of it do everything they can to fix it? I am not the only person who uses this plugin. Anyways... I am sorry if I insulted you. I hope this gets fixed, and I thank you for all the hard work you and the mods/devs put in.

    I guess this is just another reason why I shouldn't be working on this plugin. I know the first two points are right, but this lowers my interest in fixing it. I need to ask a long-time Solaris admin why ZFS is weird this way.

    Well, I hope someone fixes this or I am going to have to say screw it and go try FreeNAS, OpenFiler, ZFSguru, or openATTIC. I had such high hopes for OpenMediaVault, but these issues are making the experience a pain and lowering my trust in OMV. Why is it that I keep running into problems with basic tasks like creating a ZFS pool?

    I need to get this storage server up and running ASAP and can't wait around for someone to fix this.

    Yes, that is how you would create that, and yes, it is confusing to say the least when you're looking for multiple-vdev setups. If I could code worth a damn I'd have already started fixing it, but I badly flubbed a simple one line edit, so yeah.

    Well I wish someone would. I have been having nothing but problems since I started using OMV.

    So when I created the pool with the first 12 drives, everything seemed to work just fine.

    Then when I went to expand the pool with 12 more drives, I first got this error.

    So I clicked "Ok" and tried again but then I got this error.

    So I rebooted and noticed the pool had expanded but said "DEGRADED".

    I then repeated those same steps, got the same errors, rebooted, noticed the pool had expanded again but still said "DEGRADED".

    So when I check the details, I can see that the second and third vdev only show 11 drives instead of 12.

    My raw storage space is 198TB; minus 49.5TB for parity, I should have about 148.5TB, but I am only showing 137TB, which is consistent with 11TB missing due to the two missing drives (5.5TB each).
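    The arithmetic does check out, assuming 36 drives of 5.5TB each and nine drives' worth of parity (the 49.5TB stated above):

    ```shell
    # Sanity-check of the capacity numbers above. Assumes 36 x 5.5TB
    # drives and 9 drives' worth of parity (49.5TB), as stated.
    awk 'BEGIN {
      per_drive = 5.5
      usable  = 36 * per_drive - 9 * per_drive   # 198 - 49.5 = 148.5 TB
      missing = 2 * per_drive                    # two absent drives = 11 TB
      printf "usable=%.1f shown=%.1f\n", usable, usable - missing
    }'
    # -> usable=148.5 shown=137.5
    ```

    So the degraded pool showing roughly 137TB is exactly what you would expect with two drives dropped out of the vdevs; the 137 vs. 137.5 gap is presumably just rounding in the UI.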

    Someone please tell me what is going on and how I may fix it.

    You are the first I have seen that has described the plugin that way. Most OMV users are looking for drive pooling with redundancy and bitrot protection without the specifics of how that is done. And that type of user just happens to be what OMV is targeted at.

    I hope I didn't insult anyone, that was not my intention.

    Quote from ryecoaaron

    Trying things out in a VM would help a lot of the problems/questions you have.

    I thought about that but setting up VM's seems complicated and I am too stupid to understand any of that.