Posts by savellm

    Hi there.


When I try to install a VM via Cockpit I get this error: "ERROR Requested operation is not valid: network 'default' is not active. Domain installation does not appear to have been successful."


I have restarted OMV, no dice.

On the System tab I can see the network graph moving up and down, but it is the one tab I am unable to click on, unlike the rest.


    In OMV itself I have 1 Interface:


I do have 2x 10GbE interfaces on my server, but I am only using one.


    Can anyone help?
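For what it's worth, if libvirt's 'default' NAT network is just inactive, starting it from the shell is a likely fix (assuming a standard libvirt setup; I haven't confirmed this is the cause here):

```shell
# Assumption: libvirt is installed and the 'default' network exists but is stopped.
virsh net-start default        # bring the default NAT network up now
virsh net-autostart default    # start it automatically on boot
virsh net-list --all           # verify it now shows as active
```

net-autostart keeps it from coming back inactive after the next reboot.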

I don't have anything in good detail; it's just from reading and learning over time.

People like crashtest and others here are stars.


I used 2 vdevs of RAIDZ2 because I wanted safety, and I needed half my drives free to move data off my older OS.

If I could have, I'd probably have just done 1 large vdev of 18 drives in RAIDZ2, just because I'm not out for stupidly high performance.

I ended up with 2x vdevs of 9 drives each, because that way I can survive up to 4 drive failures (2 per vdev). My worry with Z1 was that if a drive failed, another could go down during the rebuild. So Z2 it is, and that's where I am. TBF I got about 96TB usable even with the 4 drives as parity, so I'm not short of space :)
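As a sanity check on that layout, the raw maths works out like this (raw drive sizes only; ZFS overhead and TB-vs-TiB reporting will lower the real usable figure, which is likely why the pool reports less):

```shell
# 2 vdevs, each 9x 8TB drives in RAIDZ2 (2 parity drives per vdev)
vdevs=2; drives=9; parity=2; size_tb=8
raw=$(( vdevs * drives * size_tb ))
usable=$(( vdevs * (drives - parity) * size_tb ))
echo "raw: ${raw}TB, usable: ${usable}TB"   # raw: 144TB, usable: 112TB
```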

I'd personally get more RAM for ZFS; while you can survive, it would be a bit better to double your RAM.

I have a bunch of Docker containers running: Nextcloud, Emby, and like 12 others.

I have a cache mirror of 2x 1TB SSDs running BTRFS, and then my main vault under ZFS with 2x vdevs of 9x 8TB RAIDZ2.


It's all working so well.

Remember, if you are going ZFS, you cannot just add an extra drive to an existing vdev. You can only grow a pool by adding a whole new vdev.

    I'd recommend you get all the drives you think you are going to want/need at the start.


    OR just use snapraid and whatever else.

You can add disks via zpool create and so on from the command line. That way you can make sure it creates the pool by ID (/dev/disk/by-id)!

It's how I did mine:

    zpool add -o ashift=12 vault raidz2 /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAGTMG3L


I also recommend installing the Proxmox kernel, via OMV-Extras, before installing the ZFS plugin.

    Took the plunge and ran that command... It worked.

Out of curiosity, will the data that was on the first vdev be rebalanced across to the new vdev?

Or, as new data comes in, will it fill the new vdev until there is an equal amount of data on both vdevs?

    Pool is called 'vault'


root@openmediavault:/dev/disk/by-id# zpool status
  pool: vault
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:39:39 with 0 errors on Sun Mar 15 07:33:14 2020
config:

        NAME                                   STATE     READ WRITE CKSUM
        vault                                  ONLINE       0     0     0
          raidz2-0                             ONLINE       0     0     0
            ata-ST8000NE0021-2EN112_ZA15WTS6   ONLINE       0     0     0
            ata-ST8000NE0021-2EN112_ZA1654W8   ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA10D0AW   ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA15J0LG   ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA18JD9F   ONLINE       0     0     0
            ata-ST8000VN0022-2EL112_ZA1A8RP0   ONLINE       0     0     0
            ata-WDC_WD80EFAX-68KNBN0_VAH1ZYPL  ONLINE       0     0     0
            ata-WDC_WD80EFAX-68KNBN0_VAHZDX7L  ONLINE       0     0     0
            ata-WDC_WD80EFAX-68KNBN0_VAJ7DYBL  ONLINE       0     0     0

errors: No known data errors

root@openmediavault:/dev/disk/by-id# systemctl enable fstrim.timer
Created symlink /etc/systemd/system/timers.target.wants/fstrim.timer → /lib/systemd/system/fstrim.timer.
root@openmediavault:/dev/disk/by-id# systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
   Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:fstrim(8)
root@openmediavault:/dev/disk/by-id# systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
   Active: inactive (dead)
  Trigger: n/a
     Docs: man:fstrim

macom, is that right?


And is there any way to add the SMART tests via the command line? Or, in the UI, a way to click create, set a time for a short test, and then tick a select box for all drives?

Like when you're creating a RAID array and you can select all the drives at once... That would be so much better.

root@openmediavault:/dev/disk/by-id# systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/lib/systemd/system/fstrim.timer; disabled; vendor preset: enabled)
   Active: inactive (dead)
  Trigger: n/a
     Docs: man:fstrim
root@openmediavault:/dev/disk/by-id# systemctl status fstrim.service
● fstrim.service - Discard unused blocks on filesystems from /etc/fstab
   Loaded: loaded (/lib/systemd/system/fstrim.service; static; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:fstrim(8)

    Is there a way I can use the command line to add SMART tests?

I have 24 drives; adding them through the UI is cumbersome, and I feel like I'm going to end up adding the same drive multiple times.


I wanted a command line where I could paste in a list of all my drives and set up something like a weekly short test, and then a monthly/bi-monthly long test.
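A rough sketch of what that could look like with smartctl (the drive names are placeholders for your own list, and the echo makes it a dry run; remove it to actually queue the tests):

```shell
# Hypothetical drive list -- paste your own /dev/disk/by-id or /dev/sdX names here.
drives="/dev/sda /dev/sdb /dev/sdc"

for d in $drives; do
  echo smartctl -t short "$d"   # dry run: prints each command; drop 'echo' to run it
done
```

The same loop with -t long could go in a monthly cron job for the long tests. One caveat: OMV keeps its scheduled SMART tests in its own config database, so tests queued this way won't show up in the UI.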


    Also... Is there a way to do TRIM for the SSD's?

    So I created my zpool with this command:

zpool create -o ashift=12 vault raidz2 \
  /dev/disk/by-id/ata-ST8000NE0021-2EN112_ZA15WTS6 \
  /dev/disk/by-id/ata-ST8000NE0021-2EN112_ZA1654W8 \
  /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA10D0AW \
  /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA15J0LG \
  /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA18JD9F \
  /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1A8RP0 \
  /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAH1ZYPL \
  /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAHZDX7L \
  /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAJ7DYBL


Now I have moved all my data across to this pool, and I want to extend the pool with more drives from the old server. It's a matching set of 9 drives, and I was going to use:

zpool add -o ashift=12 vault raidz2 \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAGTMG3L \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAGZ6M7L \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAHAMJ5L \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAHAUVSL \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAHEU2VL \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAHZ1NDL \
  /dev/disk/by-id/WDC_WD80EFAX-68KNBN0_VAJ7EV5L \
  /dev/disk/by-id/WDC_WD80EFAX-68LHPN0_7HKUL9LN \
  /dev/disk/by-id/WDC_WD80EFAX-68LHPN0_7SJ75XKW


Is this the right way, especially to keep ashift=12?
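One way to double-check the ashift on each vdev afterwards (zdb reads the pool configuration; 'vault' is the pool name above, and this assumes the pool is in the default cachefile):

```shell
zdb -C vault | grep ashift    # expect an 'ashift: 12' line per vdev
```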

OK, so I redid the container and used HTTP validation.

Forwarded HTTP to port 180.

This works, and I get a cert, so the firewall is forwarding correctly.


Now it's working, and I have no idea why...

I have port forwarding on, and that should be working. But it's not getting that far.

This seems to be an nginx or container issue where it's not starting the web server.


So hitting the container does nothing. It's just not responding. So it's either a broken network path to the container or something else.
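If it helps anyone debugging the same "container not responding" situation, these are the usual first checks (the container name and host IP are placeholders; netstat may not exist inside every image):

```shell
docker ps --filter name=letsencrypt   # is the container actually running?
docker logs --tail 50 letsencrypt     # did nginx start, or die on a config error?
curl -v http://192.168.1.10:180/      # does the mapped port answer from the LAN?
```

If the logs show nginx exiting, it's a container config problem; if nginx is up but curl times out, it's the network path to the container.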