Posts by khuffmanjr

    No, no encryption. And remember, this works on a new OMV VM configured exactly the same way. If we can't track down what's happening here then I plan to use the new VM going forward and just wait to see if this happens again.


    TL;DR for anyone joining in at this point: my thinking is that there may be something going on with btrfs and its subvolume handling; perhaps it's not an NFS issue at all. The filesystem mounts fine and all data is visible there, but the subvolume mount hides some of the directories inside the subvolume. Very strange indeed. Anyway, since this may not be an NFS issue after all, I may find a better place to post this question where some BTRFS gurus will see it.

    Ok, here is the problem system's fstab. I'm not seeing any issues here. Maybe I'm missing it...



    And here is a directory listing of the mounted filesystem and the mounted subvolume. Obvious discrepancy here. Not sure what causes this:


    But they should show the same contents. The block device is mounted in /srv by fstab, and /srv/xxxxx/subvolumename is referenced as /export/subvolumename in fstab as well. These are all just front-end references to the same blocks, so I cannot see how /srv/etc... would show the data while the subvolume mount would not.


    And I'm afraid that instance of the system is not up at the moment. I am still using the stand-in OMV deployment since it works. For reference, I'll post the working system's relevant fstab lines below. This shows how the block device and subvolume are mounted on the working system, and it's also how things looked on the broken system:


    Code
    # >>> [openmediavault]
    /dev/disk/by-uuid/d735ac41-b1e8-4326-aa0c-66487f5bc16c        /srv/dev-disk-by-uuid-d735ac41-b1e8-4326-aa0c-66487f5bc16c    btrfs    defaults,nofail    0 2
    /srv/dev-disk-by-uuid-d735ac41-b1e8-4326-aa0c-66487f5bc16c/media/        /export/media    none    bind,nofail    0 0
    # <<< [openmediavault]
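
    For anyone wanting to compare on their own system, here is a rough sketch of the checks I'd run against that layout (the UUID path and the media subvolume are just the ones from my fstab above):


    Code
    # list the subvolumes on the mounted btrfs filesystem
    sudo btrfs subvolume list /srv/dev-disk-by-uuid-d735ac41-b1e8-4326-aa0c-66487f5bc16c

    # confirm what is actually mounted at each path
    findmnt /srv/dev-disk-by-uuid-d735ac41-b1e8-4326-aa0c-66487f5bc16c
    findmnt /export/media

    # compare the contents seen through the filesystem mount and the bind mount
    ls -la /srv/dev-disk-by-uuid-d735ac41-b1e8-4326-aa0c-66487f5bc16c/media/
    ls -la /export/media/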

    Greetings!


    I am running OMV 6.9.13-1 (Shaitan) and have been having a problem with NFS. I've been using OMV for a few months now with mostly positive results: I can export data with NFS and access it from multiple clients. After some time (maybe 3-4 weeks) my client mounts go stale and lose access to the export. I can usually regain access by rebooting the clients and OMV. While all of that is pretty frustrating, this is the part that really has me confused:


    After weeks of successful use, and then working around the stale-mounts issue, my clients cannot see all directories in the export. The data is there, verified from the OMV shell, but the clients see only a single directory. There are several directories in the path, yet clients see only one. I have worked around this in the past with some reboots of OMV or by restarting the nfs-server service, but today it's just not coming around. No matter how much I reboot or restart services, I can't get the clients to see all the data. To be clear, it will work for weeks without issue and then, all of a sudden, they can't see everything.


    OMV shell stuff:


    Client shell stuff:


    As you can see, the data is in that folder from OMV's perspective but the client can only see one directory. Again, I can get this working for weeks without issue and then boom, stale mounts...and when I get everything mounting again the clients see just one folder. I have tried disabling NFS3 and NFS4, one at a time, to see if that made a difference...it did not. It happens with either protocol.
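
    For reference, this is roughly the kind of server/client checking and restarting involved each time this happens (hostnames and mount points below are placeholders):


    Code
    # on OMV: confirm what is actually being exported, then restart NFS
    sudo exportfs -v
    sudo systemctl restart nfs-server

    # on a client: check the export list, then force a clean remount
    showmount -e omv.example.lan     # placeholder hostname
    sudo umount -f /mnt/media        # placeholder mount point
    sudo mount -a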


    This is not a production environment, but it is pretty darn important to my home users. I'm at a loss as to what the issue could be. Permissions, at least at the level of those directories, are consistent, so I doubt it's a permission issue. Disregard the Cohesity stuff...I only care about the media folders.


    Any help would be appreciated. Thanks!

    Just an update for anyone that finds this in the future:


    I moved forward with the OMV VM plan. I installed Proxmox and created a VM for OMV, passing a 9361-8i raid controller through to the VM. One port on the 9361 is connected to a SAS expander card, an AEC-82885T, which is in turn connected to nine (at the moment) Exos 20TB SATA drives. I've got one set as a hot spare and seven others in a 5+2 RAID 6. I plan to add the last drive after background initialization of the array is finished. BGI is taking a looooong time, as expected, but I have already started using the array in OMV and everything is going well.


    As for the OMV config, I have installed the LVM plugin in OMV and used the RAID 6 array as a PV in LVM. I created a VG from that and then an LV. The LV carries a BTRFS single filesystem, and I've created several shares on it. I've been adding data over the last couple of days and things are pretty fast despite BGI still taking place. All in all, I'm happy with the results so far.
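
    For anyone curious, I believe the CLI equivalent of what the plugin did is roughly the following (device, VG, and LV names are placeholders; I did all of this from the OMV UI):


    Code
    # the RAID 6 virtual disk exposed by the 9361-8i (placeholder device name)
    sudo pvcreate /dev/sda
    sudo vgcreate vg_media /dev/sda                   # placeholder VG name
    sudo lvcreate -l 100%FREE -n lv_media vg_media    # placeholder LV name
    sudo mkfs.btrfs -L media /dev/vg_media/lv_media   # single data profile is the default on one device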


    I'll update here if anything noteworthy comes up. Thanks!

    If you don't use partitions on your raid array (omv doesn't), you don't need LVM to expand the filesystem.

    I will be adding more arrays to the LV over time. I only want to grow any single array to a certain number of drives; then I'll start another array to continue expanding capacity. Each array will represent a new PV in LVM.

    As for vgimport/export, why do you need that? As someone who moves VGs from one VM to another with automation at work hundreds of times per day, you don't need either.

    Just for the sake of being graceful. It's not a big deal...it just feels safer to deactivate and export the VG prior to an intentional move of the LV from one server to another.
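
    To spell out what I mean by graceful, the sequence I have in mind looks roughly like this (VG name is a placeholder):


    Code
    # on the old server, before pulling the disks
    sudo vgchange -an vg_media    # deactivate all LVs in the VG
    sudo vgexport vg_media        # mark the VG as exported

    # on the new server, once the disks are attached
    sudo vgimport vg_media
    sudo vgchange -ay vg_media    # activate the LVs again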

    So I've been playing around with the LVM plugin in OMV in a VM. I've been expanding and adding virtual disks, then expanding and adding LVM PVs and LVs. Everything works great from the UI, and I can immediately expand my BTRFS filesystem once the LV has more space. It's amazingly smooth and quick to do these things from the UI, and all of it can be done without going offline. This basically simulates expanding hardware raids (the PVs in my tests) online so I can non-disruptively grow my filesystem. I can then add more raids (PVs) and repeat this several times over the life of my filesystem.
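
    In CLI terms, the growth path I was simulating looks roughly like this (device, VG, LV, and mount-point names are placeholders; I did it all from the UI):


    Code
    # if an existing hardware RAID virtual disk was grown on the controller
    sudo pvresize /dev/sdb                               # placeholder device
    # or, if a whole new virtual disk was added
    sudo pvcreate /dev/sdc                               # placeholder device
    sudo vgextend vg_media /dev/sdc
    # grow the LV into the new space, then grow btrfs online
    sudo lvextend -l +100%FREE /dev/vg_media/lv_media
    sudo btrfs filesystem resize max /srv/dev-disk-by-uuid-xxxx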


    With these new tests I'm really leaning toward the hardware raid6 option now. I'll start with a 15-bay chassis and a 6+2 with one hot spare, and expand up to 12+2 while keeping the hot spare. This array will use Seagate Exos X20 SATA drives. Long term, I'm thinking I will go with 24-bay Supermicro JBOD enclosures (maybe retail, but likely the used market...as I can get them) and just run SAS drives. I think SAS drives will serve me better in the long run with these larger raid6 arrays I'm planning, and they can be dual-ported too! Eventually, once I have the data on SAS, I can repurpose the SATA array for backup, maybe as a raid5 or so. I will have backups on some Synology arrays until I get the data off SATA and onto SAS, so I WILL be backing up in the meantime. :)


    I'm curious whether the devs here would consider adding vgexport/vgimport workflows to the LVM UI. My thinking is that if we ever run into a scenario where in-place upgrades can't be done for some future version, we could at least export the VG and import it again once the OS/software is replaced. I can do this from the command line easily enough, but I'm not sure how much we can get away with outside the OMV UI and still have seamless management from within it. I know I can do things such as mdraid operations from the command line, as an example, and OMV sees it all just fine, but will that always be the case?


    Anyway, I'm just thinking "out loud" as I go. I think this plan is near final/complete but I always appreciate more information so keep sharing if you have anything. Thanks!

    Nope, no ZFS here. Just individual ext4 disks. I do pool them with mergerfs but I don't need to. Every client in my house (mostly Kodi or Linux) can handle multiple shares.

    Nice. Yeah, any single disk can read at more than acceptable speeds for many workloads. I can see the appeal of the simpler design.


    Proxmox and OMV with kvm plugin are both KVM. There is no need to convert anything from proxmox other than the VM config. The kernel plugin even allows you to install the proxmox kernel.


    While it can be done, I recommend against installing proxmox on OMV because both OMV and proxmox will try to maintain conflicting configs. I am well aware that Proxmox has more features (been running it since version 3), but I use the kvm plugin myself and wonder why you need Proxmox?

    I use Proxmox Backup Server to do incremental-forever backups of my Proxmox VMs. It stores the incrementals as blobs and fidx files, and I don't know how to mangle them back into a raw VM disk. See attached. I would love to have another Proxmox server to recover to instead of plain KVM, where I would have to figure out how to get a VM disk out of those blobs.



    That approach could fail if you have more than one client.

    Yeah, I just meant performance-wise. I meant to imply that performance won't be much of a concern for my large-block sequential workloads on a big hardware raid 6. I may have up to 4 clients watching at a time, for what it's worth. We're talking 20-ish MBps here (four streams at up to 40 Mbps is roughly 160 Mbps, or about 20 MB/s)...not too bad.

    If you plan to expand to 15 drives and want raid, I would definitely use hardware raid. I got rid of raid years ago at home.

    I've been an enterprise storage professional since 2003, so I'm right at home with hardware raid. To be fair, I usually worked on FC arrays on the vendor side of things, but LSI-variant raid cards are quite comfortable territory for me. And I just hate the idea of ZFS and others preventing me from adding one drive at a time as I need storage; the vdev expansion limitations are really pushing me away so far. I'm curious what you use, having moved away from raid. I'll admit to assuming it's ZFS, but I guess it's safest to just ask. :)


    My workloads are predominantly large-block and sequential in nature, and I learned long ago that any ol' potato storage device will stream a 4K movie at 20-40 Mbps easily enough, so raid 6 won't be a hindrance. Any recoveries needed down the line will be sequentially read/written as well. Ending up with a 13+2 raid6 will suit me just fine. Then I'll start another array as needed.


    Run OMV on the system (not as a VM) and install the kvm plugin (I do). No passthrough needed for storage.


    And as for OMV as a hypervisor, well, that could work. I run everything on Proxmox right now, so I'll have to see whether I can recover/convert/copy/whatever my Proxmox backups to KVM if the need arises, and vice versa. Since OMV is just Debian under the covers, couldn't I just install Proxmox alongside OMV? Proxmox supports installing on Debian, but I wonder what the devs here think of adding Proxmox to OMV... Could be interesting!

    Last summer I hit $400 one month. This year was better, as we made a few changes that improved efficiency. I think we hovered around $350 a couple of months. In particular, I'm quite impressed with the efficiency of two mini-split ACs I put in to take over for older, different units. I've been familiar with mini-splits from my travels over the last couple of decades, but they haven't been popular here in the US until recently. I'm glad they are catching on...huge money-saver, imo.

    Well, I'm working on talking myself out of software raid. I have a raid controller and expanders on hand, so I'm thinking of going with a hardware raid 6 that can be expanded on the controller, and then using btrfs single (btrfs for snapshots) in OMV for my filesystem. Hardware raid 6 will give me consistency checking/correction, whereas mdraid will not.
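
    Since snapshots are the main reason I want btrfs on top of the hardware raid, here is a quick sketch of the kind of thing I have in mind, assuming each share lives in its own subvolume (paths and names are placeholders):


    Code
    # read-only snapshot of a share's subvolume before making big changes
    sudo mkdir -p /srv/media/.snapshots
    sudo btrfs subvolume snapshot -r /srv/media/movies /srv/media/.snapshots/movies-$(date +%F)

    # list snapshots and delete old ones later
    sudo btrfs subvolume list /srv/media
    sudo btrfs subvolume delete /srv/media/.snapshots/movies-OLD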


    Thoughts?


    ...now what am I going to do with all that CPU and RAM? I guess I could run OMV in a VM and pass through the raid card. Then I would have more hypervisor space to work with. That's another layer to update though. I'm going to be working with close to 90TB of data right away - and growing - so whatever I go with needs to stick.


    Thanks!

    Great comments, all around. Thanks! I hadn't thought of swap, to be honest. I'll certainly gather my resources to support that activity.


    And I get the power-vs-location issue. I'm in the US, so we don't have to worry as much about the cost of power. I know that in the EU/Europe it's much more of a concern. My biggest concern over power costs is air conditioning. I'm in Texas, so summer A/C use drives up the bill; my computers contribute very little, relatively speaking.

    Greetings!


    Looking to build out a new OMV 6 system, and I was wondering to what extent high-capacity RAM will help my system, if at all.


    NAS usecase:

    -- Primarily used for Plex media. Large block, sequential reads; writing 10GB-150GB per day

    -- Secondarily, this system will provide some backup storage for Proxmox VMs and a couple of Linux computers

    -- I will do some limited photo storage/sharing with this as well


    Build (don't laugh):

    -- Supermicro X10 motherboard

    -- Xeon E5-2660 v3 10 core

    -- 256GB RAM DDR-2400

    -- OS will go on one or two NVMe drives, haven't decided just yet; I generally install Linux to mdraid raid1 arrays when I have the drives, and I do in this instance

    -- 9300-16i HBA

    -- 20TB Exos X20 (0007D) HDDs; will start with 5 devices in a raid 6 (3D+2P) and grow one drive at a time up to 15 max (13D+2P); I am, generally, comfortable with growing mdraid arrays (see the sketch after this list)
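
    For clarity on what I mean by growing one drive at a time, a rough sketch of the mdadm steps (device and array names are placeholders):


    Code
    # add the new disk, then reshape the raid6 from 5 to 6 devices
    sudo mdadm --add /dev/md0 /dev/sdf                  # placeholder devices
    sudo mdadm --grow /dev/md0 --raid-devices=6 --backup-file=/root/md0-grow.bak
    # watch the reshape, then grow the filesystem on top once it finishes
    cat /proc/mdstat
    # e.g. resize2fs /dev/md0 for ext4, or 'btrfs filesystem resize max <mountpoint>' for btrfs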


    I know it's overkill, but it's just what I have available. This system, minus the 20TB HDDs, is what I'm using for my current Proxmox server, but I'm replacing that with an EPYC system, so the way-too-much RAM is just kind of coming along with it. I figure it's better than sitting on a shelf somewhere. Will it help at all for anything relating to mdraid or other OMV processing?


    Also, knowing that this is the hardware I have, I'm open to any other thoughts you may have. Please feel free to go off-topic (topic = RAM) if you have something I may not have thought of. I know rebuild/grow times will be long, but I don't care as long as my large-block sequential workloads still work fine during these processes. Thanks!


    Ken