Posts by Omvben


    I designed my network using docker macvlan networks so that I could segment the apps to be forced through my VPN using IP address rules in pfSense.


But Nextcloud seems to run very slowly, and I suspect it is because it has to reach MariaDB via a routed IP address instead of directly via hostname.


Do you think Nextcloud would talk to MariaDB faster if it didn't use VLANs (sub-interfaces)? Should I consider putting all my docker containers on the same bridge network and accessing everything via port numbers?
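For what it's worth, a common pattern is to keep database traffic on a user-defined bridge, where Docker's embedded DNS lets containers reach each other by name and the traffic never leaves the host. A sketch (container names, the `backend` bridge, and the `my-macvlan` network are placeholders):

```shell
# A user-defined bridge gives containers name-based resolution,
# and traffic between them stays on the host.
docker network create backend

docker run -d --name mariadb --network backend \
  -e MYSQL_ROOT_PASSWORD=changeme mariadb:latest

# Keep Nextcloud on the macvlan so the pfSense/VPN rules still apply...
docker run -d --name nextcloud --network my-macvlan nextcloud:latest

# ...and attach the bridge as a second network just for database
# traffic; Nextcloud can then reach the DB as host "mariadb".
docker network connect backend nextcloud
```

A container can sit on both networks at once, so the firewall/VPN routing on the macvlan side doesn't have to be given up.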

I have been running OMV 6 on an HP Microserver with 16GB RAM.

    It has been working flawlessly.

But as soon as I started playing with an Elasticsearch docker instance, that changed. Now my server is all kinds of broken.


It just seems to freeze up. Looking at the terminal, I see errors like:

    DMAR: ERROR: DMA PTE for vPFN 0x...




I have to keep restarting it, and there is only a small window of time in which I can run commands.


    I can't seem to kill the elasticsearch docker container.


    Please help.
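For the stuck container, a sequence like this sometimes works, though if the host's I/O is wedged even a force-remove can hang (the container name `elasticsearch` is assumed). Note that `DMAR: ERROR: DMA PTE` messages usually point at an IOMMU problem rather than at Elasticsearch itself; adding `intel_iommu=off` or `iommu=pt` to the kernel command line is a commonly suggested workaround worth researching for this hardware.

```shell
# Stop the container from auto-restarting before trying to remove it
docker update --restart=no elasticsearch

# Force-stop and remove; "rm -f" sends SIGKILL and is the bluntest option
docker kill elasticsearch
docker rm -f elasticsearch
```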

    I'm using 3 VLANs purely so that I can route traffic that comes from specific IP addresses down my VPN using pfSense rules.

However, when I run Nextcloud on a macvlan network it feels super slow compared to using a bridge network. I assume this is because Nextcloud communicates with the MySQL database via IP address, and the traffic goes via the router instead of directly?


    Is there a better way that I could route traffic from specific docker containers down my VPN tunnel using pfSense rules?

I am running openmediavault on kernel 5.10.0-0.bpo.9-amd64 #1 SMP Debian 5.10.70-1~bpo10+1.


I have several macvlan docker containers running over 2 sub-interfaces (and the main interface).


I like this setup because it enables me to create firewall rules for certain docker containers to go via a VPN, and it gives me reporting.


I need to manually add the IP addresses of the sub-interfaces after a restart, otherwise my docker containers are unable to reach the internet.


    How can I make the ip addr command stick after a reboot?


    --- sample codes below


After a restart, this is what `ip addr` shows (modified so only the relevant devices appear).


I then have to add the IP addresses manually:


    Code
    ip addr add 10.10.50.2/24 brd 10.10.50.255 dev eno1.50
    ip addr add 10.10.90.2/24 brd 10.10.90.255 dev eno1.90

Afterwards the IP addresses for my sub-interfaces are there and my docker containers can reach the internet:

    Code
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:60:4b:92:bc:3c brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 brd 10.10.10.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::1260:4bff:fe92:bc3c/64 scope link
       valid_lft forever preferred_lft forever

6: eno1.50@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 10:60:4b:92:bc:3c brd ff:ff:ff:ff:ff:ff
    inet 10.10.50.2/24 brd 10.10.50.255 scope global eno1.50
       valid_lft forever preferred_lft forever
    inet6 fe80::1260:4bff:fe92:bc3c/64 scope link
       valid_lft forever preferred_lft forever

31: eno1.90@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 10:60:4b:92:bc:3c brd ff:ff:ff:ff:ff:ff
    inet 10.10.90.2/24 brd 10.10.90.255 scope global eno1.90
       valid_lft forever preferred_lft forever
    inet6 fe80::1260:4bff:fe92:bc3c/64 scope link
       valid_lft forever preferred_lft forever

I have installed a 4th 10TB hard drive in my 4-bay NAS.

I want to create a daily backup of my 1TB drive and a monthly backup of my 8TB MergerFS pool.


    I have created a local Repo using the OMV interface.

    And also created an Archive.

But when I run the Archive job I get an error:


    Failed to create/acquire the lock /srv/dev-disk-by-uuid-61b5a137-8749-4dc7-8746-be5476260206/backup/lock (timeout).


    What other settings should I be looking at?
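Assuming the Repo/Archive here is the Borg-based backup plugin (the lock-timeout message is Borg's), the usual cause is a stale lock left behind by an interrupted run. A sketch of the manual fix:

```shell
# First make sure no borg process is actually still running
pgrep -a borg

# Then clear the stale lock (repository path taken from the error message)
borg break-lock /srv/dev-disk-by-uuid-61b5a137-8749-4dc7-8746-be5476260206/backup
```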

I have a 4-bay HP MicroServer running OMV5 with 3 bays in use:


• 1TB - personal files
• 4TB - MergerFS media files
• 4TB - MergerFS media files (8TB pool total)


What would be the better (on-site) redundancy option if I got a disk for the 4th bay?


Get a 10TB for the 4th bay and set it up as a SnapRAID parity drive? (Or potentially just a 6TB or another 4TB to save money?)

    OR

Get a 10TB for the 4th bay and schedule an rsync job so that the 10TB holds a copy of the 1TB + 8TB file systems?


I eventually plan to create an offsite 10TB backup using a Raspberry Pi, a VPN, rsync and a 10TB external drive.


(I should mention I'm on a budget.)
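If the rsync route wins out, the scheduled job could be as simple as this sketch (all paths are placeholders for the real OMV mount points):

```shell
#!/bin/sh
# Placeholder paths -- substitute the real /srv/dev-disk-by-uuid-* mounts
SRC_PERSONAL=/srv/dev-disk-by-uuid-PERSONAL/files     # the 1TB drive
SRC_MEDIA=/srv/mergerfs-pool/media                    # the 8TB pool
DEST=/srv/dev-disk-by-uuid-BACKUP/backup              # the new 10TB

# -a preserves ownership/times; --delete keeps the copy a true mirror,
# so accidental deletions propagate on the next run (mind the schedule)
rsync -a --delete "$SRC_PERSONAL/" "$DEST/personal/"
rsync -a --delete "$SRC_MEDIA/"   "$DEST/media/"
```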

    I have a solid OMV5 instance with

    1x1TB - data

    1x4TB - media

    1x4TB - SnapRaid parity


Set up with LUKS encryption and SnapRAID, with one of the 4TBs as a parity drive.


    My 4TB media drive is full. Very full. I found a 6TB drive which I have added to my instance.


I want to create an 8TB mergerfs pool using the two 4TBs and use the 6TB as the new SnapRAID parity drive.


Can I create a mergerfs pool with the two 4TB drives using the `/srv/dev-by-****` folders even though one drive is full? Will the data distribute itself over the new 8TB pool?


Is it safe to enable the mergerfs pool without a backup first?
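For what it's worth, mergerfs never moves existing files: the pool is just a transparent union of the branch filesystems, so the full 4TB keeps its data in place and, with the `mfs` (most free space) create policy shown below, new writes land on the emptier drive. A sketch of the fstab entry, with placeholder UUID paths:

```
# /etc/fstab sketch -- placeholder branch paths and mount point
/srv/dev-disk-by-uuid-AAAA:/srv/dev-disk-by-uuid-BBBB  /srv/mergerfs/media  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0 0
```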

    I ended up deleting the 2 yaml files that were in /etc/netplan

    Created my own customnet.yaml file which contained this
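The file itself isn't shown above; a netplan config along these lines would match the eno1.50/eno1.90 sub-interfaces described in the other posts (a sketch, not necessarily the original file):

```yaml
# customnet.yaml sketch -- addresses taken from the other posts
network:
  version: 2
  ethernets:
    eno1:
      addresses: [10.10.10.2/24]
      gateway4: 10.10.10.1
  vlans:
    eno1.50:
      id: 50
      link: eno1
      addresses: [10.10.50.2/24]
    eno1.90:
      id: 90
      link: eno1
      addresses: [10.10.90.2/24]
```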







    Typed the command

    Code
    netplan apply

    Then restarted the network with

    Code
    systemctl restart networking


    FINALLY!

I have been trying to get my existing OMV instance running on a VLAN.

I've established that OMV is using systemd-networkd, but the /etc/systemd/network folder is empty and the /etc/systemd/network.conf file has almost nothing configured in it.


Where and how can I configure my Ethernet (eno1) interface to carry a VLAN?


I have no access to the OMV GUI at this stage.
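With systemd-networkd, a VLAN is defined with a `.netdev` file plus matching `.network` files dropped into /etc/systemd/network. A sketch using the addressing from the other posts (filenames and numbers are illustrative):

```ini
# /etc/systemd/network/eno1.network -- the parent interface
[Match]
Name=eno1

[Network]
Address=10.10.10.2/24
Gateway=10.10.10.1
VLAN=eno1.50

# /etc/systemd/network/eno1.50.netdev -- create the VLAN device
[NetDev]
Name=eno1.50
Kind=vlan

[VLAN]
Id=50

# /etc/systemd/network/eno1.50.network -- address it
[Match]
Name=eno1.50

[Network]
Address=10.10.50.2/24
```

Then `systemctl restart systemd-networkd` picks the files up.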

LUKS only protects you from the physical theft of your drives. This means that if a thief breaks in and steals your server, he won't be able to get anything off your hard drives. LUKS does nothing to prevent "over the wire" attacks, which represent the vast majority of compromises.

How much would performance be affected by encryption? My CPU has AES-NI.
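With AES-NI the LUKS overhead is usually negligible next to disk and network speeds, and it is easy to measure on the box itself; `cryptsetup benchmark` runs the ciphers in memory:

```shell
# Confirm the CPU exposes AES-NI
grep -m1 -o aes /proc/cpuinfo

# In-memory cipher throughput; with AES-NI, aes-xts commonly reports
# speeds of 1 GiB/s or more -- far above what a spinning disk delivers
cryptsetup benchmark
```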

UPDATE: Emby definitely doesn't like playing movies while my server is doing an rsync (with the server writing to the same disk that the media is hosted on).

    Thanks crashtest

I have moved my docker storage to the 120GB SSD (which is also encrypted). Initial tests playing multiple movies at once on different devices seem good. I'll do more testing.


LUKS encryption is not critical. I wanted my server to be encrypted as I plan to store personal data on it. If performance were affected by the encryption, I would be okay without it.


    Thank you for your support.

I have a new OMV instance. I set up LUKS encryption, an ext4 file system, unionfs and SnapRAID.

    I have 1x1TB for data, 1x4TB for data and 1x4TB for parity.

    The unionfs is on the 1TB and 4TB data drives.

I have copied over 3TB of data to the unionfs.

Now I think I'd prefer not to have the unionfs and just use the 4TB for media and the 1TB for personal data.


Can I remove the unionfs and still access the data that has been copied over? I'm pretty sure unionfs has put all the data on the 4TB drive anyway.

    You can rule out unionfs as a source of the problem if you temporarily create a new library in Plex that doesn't use the union as a media source. J.

Hi crashtest.


Yes, the Dockers are stored in the default location. The boot drive is a 30GB flash drive.

I forgot to mention I also have a 120GB SSD which I map the config files to.
E.g. my docker run command looks like this:


Code
# /config is mapped to the 120GB SSD; /media is the unionfs data store
docker run -d \
  --name emby \
  -v /srv/dev-disk-by-uuid-832de4f2-73ac-400a-b84b-f1abfa3e27ac/config/emby:/config \
  -v /srv/8dfcfa9c-f641-412b-843f-606afffd9344/media:/media \
  --device /dev/dri:/dev/dri \
  --network mynet \
  --ip 10.10.10.130 \
  -p 8096:8096 \
  -p 8920:8920 \
  -e UID=1000 \
  -e GID=100 \
  --restart unless-stopped \
  emby/embyserver:latest


Would changing my docker storage location to the 120GB SSD make a difference? I was considering doing this anyway.
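Relocating Docker's storage is done with the `data-root` key in /etc/docker/daemon.json (a sketch, using the SSD path from the run command above); stop Docker, move the existing /var/lib/docker contents across, then restart the daemon:

```json
{
  "data-root": "/srv/dev-disk-by-uuid-832de4f2-73ac-400a-b84b-f1abfa3e27ac/docker"
}
```

If Docker was installed through OMV-Extras, its Docker settings expose this same storage path in the GUI.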

It seems to happen when multiple applications access the file system.


I am going to add an extra 8TB drive (I was planning to anyway), add it to my OMV instance without encryption, SnapRAID or unionfs, and see how Emby performs on that. Then encrypt it and see how it performs, etc.


Isolating the issue. I'll keep this post updated.

So I have migrated from years of FreeNAS to OMV5.

My setup is an HP Microserver with a 2.5GHz CPU and 16GB RAM.

I used to run two 4TB HDs in a ZFS mirror, which worked flawlessly.


    My setup now is

• OMV5
• LUKS encryption enabled
• ext4 file system
• SnapRAID enabled (1TB data, 4TB data, 4TB parity)
• unionfs
• docker containers
• boot drive is a 30GB flash drive
• a 120GB SSD used for config (also encrypted)

    I am running emby server in a docker container with its own macvlan network.


    My network is Cat6 Ethernet. My chromecast that plays movies is also on Ethernet.


When I was on FreeNAS my high-definition movies played over my network flawlessly!

Now they pause/buffer randomly, which is most annoying given how long it took me to rebuild this server!


My problem is that with this build the buffering could have so many causes. My gut feeling is the unionfs and SnapRAID.


Could somebody advise where/how I should investigate?
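One low-effort starting point is to watch disk utilisation while a movie stutters; if the data disk is saturated during rsync/SnapRAID activity, it's disk contention rather than the network. A sketch (`iostat` comes from the sysstat package; the media path is a placeholder):

```shell
# Per-disk stats every 2 seconds; look for %util pinned near 100
# on the data disk while playback stutters
iostat -dxm 2

# Rough sequential-read check through the union
# (compare against reading the same file from a raw member disk)
dd if=/srv/mergerfs-pool/media/some-movie.mkv of=/dev/null bs=1M count=1024
```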

Thanks Agricola.

    I found a 1TB lying around.


So now my setup is unionfs + SnapRAID with

    1TB + 4TB data

    4TB parity


But yes, I plan to get an 8TB and use that for parity. Then in a few months, when I have saved money, I'll get another 8TB. So it will be

    4TB + 4TB

    and 8TB + 8TB.

I would like to run those mirrored with 'proper' RAID, as RAID feels a little more stable/reliable than SnapRAID.

I would like to set up a Grafana dashboard (in a docker container on my OMV instance) to monitor my OMV server.

I'd like to monitor CPU, RAM, hard disk I/O and disk space.


Which monitoring/reporting tools already installed on OMV could send data to InfluxDB?


After a quick search there seems to be a tool called Prometheus which looks interesting, but I'd prefer to use existing tools.
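For what it's worth, OMV's own performance graphs are generated by collectd, which is already gathering CPU/RAM/disk data, and InfluxDB 1.x can ingest collectd's native protocol directly. A sketch of the two config fragments (ports and paths are the usual defaults):

```ini
# /etc/influxdb/influxdb.conf (InfluxDB 1.x) -- accept collectd metrics
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  typesdb = "/usr/share/collectd/types.db"

# collectd drop-in on the OMV host, e.g.
# /etc/collectd/collectd.conf.d/influxdb.conf -- forward all metrics
LoadPlugin network
<Plugin network>
  Server "influxdb-host" "25826"
</Plugin>
```

Grafana can then read the `collectd` database as an InfluxDB data source.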