Posts by devnet

    I've found that I can't delete shares while they are being used by other services. For example, I had Docker utilizing a shared folder, and I had to make sure Docker was no longer using the file share before I could delete it.


    Look closely at what is being used by other plugins/services...you can't delete something that is being used.
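
    For anyone hitting the same thing, here is a hedged sketch of how you might check from the shell what still holds a shared folder open (the path below is just an example, not a real share):

    Code
    # Show processes using the filesystem/directory backing the share
    fuser -vm /srv/dev-disk-by-label-data/myshare
    # List running containers that bind-mount that path
    docker ps --filter volume=/srv/dev-disk-by-label-data/myshare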

    Do you monitor your Docker containers for issues that they may hit while up and running? Do you collect performance metrics on them to make sure they are running optimally while you're not watching them?


    If so, what do you use to monitor them?
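
    If you don't run a full monitoring stack yet, Docker itself ships a couple of built-in starting points (just a baseline sketch, not a recommendation from anyone in the thread; the container name is a placeholder):

    Code
    # One-shot CPU/memory/network/block-IO snapshot for every running container
    docker stats --no-stream
    # Health status of a container that defines a HEALTHCHECK
    docker inspect --format '{{.State.Health.Status}}' mycontainer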

    Thanks for posting this. I'm about to undergo the same migration. I'm currently using mdadm RAID1 and I have 3-4 extra hard drives sitting around unused. A new motherboard is on the way with 6 SATA connections, so I should have the ability to add a bunch of space to a pool. I figured I'd have to blank things out, so I had already planned for it...sounds like you found the same. Anyway, appreciate the info!

    If you're talking about a btrfs raid1 and are fine with 'wasting' 50% of disk capacity for redundancy, you can simply keep this setup, add the other disks, and rebalance your btrfs raid1 each time. In my opinion, the fewer disks spinning the better, so I would try to end up with only the two 8 TB disks spinning. With btrfs it's easy: add one 8 TB disk to your pool, rebalance, remove one 2 TB disk, rebalance, add the other 8 TB disk, rebalance, remove the other 2 TB disk, rebalance, done. The commands below sketch that sequence.
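
    Rough sketch of that sequence (mount point and device names are examples, adjust to your system):

    Code
    # Add the first 8 TB disk and spread existing data over it
    btrfs device add /dev/sdc /srv/pool
    btrfs balance start /srv/pool
    # Remove the first 2 TB disk; btrfs migrates its data off automatically
    btrfs device remove /dev/sda /srv/pool
    # Repeat for the second pair
    btrfs device add /dev/sdd /srv/pool
    btrfs balance start /srv/pool
    btrfs device remove /dev/sdb /srv/pool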


    Well, I wanted more of a best practice for multiple drives. I use OMV mostly as a file share for media files, and I run Plex from it to serve up the media. I have 2 shares that receive a lot of downloads from my torrent machine and 4 shares that see very few changes.


    So what would be the best practice for multiple disks accessed daily with small and large file creation/transfer, or is this question too vague?


    I can make the determination for RAID setup at work all day long...but I don't run a media server there :)

    Hi all,


    I built an OMV install out of an old HP 550-150qe. This came with the following items I'm carrying forward:



    • Intel i5 - 4460
    • Hynix 8 GB (2 x 4GB) PC3L-12800
    • Western Digital - Red 2 TB 3.5" 5400RPM Internal Hard Drive
    • Western Digital - Red 2 TB 3.5" 5400RPM Internal Hard Drive


    I currently run Plex and VirtualBox headless on there (I use a VM for Pi-hole). I outgrew the motherboard when I purchased 2 x 2 TB WD Red drives a few months ago. I wanted to upgrade further with some 8 TB drives since I'm coming close to capacity (1.5 TB used), and I obviously ran out of SATA ports on the mobo since it only has 3. I also transplanted the motherboard into a desktop-style slim case, so I'm actually out of space altogether. Fast forward to this week and I've made a new build:


    https://pcpartpicker.com/list/4zKs9J


    So the build above will increase my storage quite a bit. The current setup is software RAID (btrfs underneath) on the 2 x 2 TB drives, with all my shares located there. So the question is...what should I do to migrate the data or expand my pools? Should I switch to ZFS? Should I add the union filesystem plugin and switch to mergerfs? What would you do?


    With the new system, I'll have an M.2 drive to install the OS to and 6 SATA ports. I have a total of 4 WD Reds, and I have 2 x 1 TB white-label NAS drives not currently in use.
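
    For anyone planning something similar, a hypothetical first step before migrating anything is to inventory the current pool (the mount point below is an example):

    Code
    # List devices in each btrfs filesystem on the machine
    btrfs filesystem show
    # Allocation and free space, broken down by raid profile
    btrfs filesystem usage /srv/pool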

    I have been using ZoL (ZFS on Linux) and OMV without issues for quite a while. I have no idea what is supposed to be buggy about it. The only thing I've realized is that if you have damage on HDDs, you might want to fix it from the command line.
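
    A hedged sketch of what that command-line fix might look like, assuming a pool named 'tank' and example device names (all placeholders):

    Code
    # Show which device is DEGRADED/FAULTED
    zpool status tank
    # Replace the failed disk with a new one
    zpool replace tank /dev/sdb /dev/sdc
    # Once the resilver finishes, verify everything with a scrub
    zpool scrub tank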


    These are brand-new HDDs. They show up fine in the "Disks" menu. I can format them to and from any other filesystem. But they no longer show up in the "ZFS" menu as devices I can select. They're gone. SMART shows nothing wrong with the disks.


    I think the problem was that I created a lot of mix-and-match pools at the beginning, just experimenting with it...then destroyed them all to put things back to the original state. I'm sure some leftover cruft from those old pools is causing the plugin issues.
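
    If stale metadata is the culprit, a hypothetical cleanup from the shell could look like this (the device name is an example; both wipe commands destroy on-disk signatures, so double-check the target first):

    Code
    # Scan devices for leftover, importable pool labels
    zpool import
    # Clear the old ZFS label from a disk that's no longer in any pool
    zpool labelclear -f /dev/sdb
    # Or wipe all filesystem/raid signatures from the disk
    wipefs -a /dev/sdb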


    I'm pretty sure I could drop to the command line and get this to work with zero issues...but the idea here was to do everything through the GUI (for the article) so that it could be a turnkey solution with very little lifting. So like I said, I'll have to try software RAID.

    Yeah, the ZFS plugin seems to be a bit buggy to be honest. I have been messing around with things, creating pools and mixing and matching the drives I have (just to understand the RAID levels, which are somewhat different), and suddenly all my devices disappeared from the pool selection screen. No more pools for me now. It's persistent through reboots...they just don't exist to choose in the drop-down any longer.


    I've searched through the bug lists here and found a couple of unresolved bugs that point to someone finding a solution he can't remember. So I'm out of luck.


    Just doesn't seem like I'm going to be able to stick with this. :(


    I was doing an article for my blog about it too so I'm sad that it didn't stick. Will most likely have to just go with software RAID.

    Hiya all,


    New to OMV, but not to Linux. I followed the big thread's instructions on installing ZFS on OMV4. I did a fresh install today, updated everything, and then enabled OMV-Extras. Everything went as planned.


    I went to install ZFS and received a bunch of errors during the install. As I've never tried ZFS before and this is my first foray into building my own NAS with it (I previously just used ext4/ext3 with vanilla Linux), I thought I'd ask if anyone else has hit them during installation. I saw this thread a little too late, and now I get this every time I click on ZFS in the tree:


    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; zfs list -H -t snapshot -o name,used,refer 2>&1' with exit code '1': The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root to load them.


    Then if I click details:



    So I've enabled the OMV testing repo...done a complete update and then an upgrade...but no joy. Anyone have any tips/tricks to resolve this issue? I'm thinking ZFS didn't completely install the first time, but I'm not 100% sure what to do from this point onward because I'm not familiar with the packages that need to be installed/reinstalled, having done everything through the GUI.
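
    In case it helps anyone searching later, here is a hedged recovery sketch (package names assume the stock Debian ones the OMV plugin pulls in; run as root, and note zfs-dkms needs matching linux-headers to build):

    Code
    # Rebuild and reinstall the ZFS kernel module and userland tools
    apt-get install --reinstall zfs-dkms zfsutils-linux
    # Load the module, as the error message suggests
    modprobe zfs
    # Confirm the module now answers
    zfs list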